
280 Journal of Reviews on Global Economics, 2018, 7, 280-296

Pros and Cons of the Impact Factor in a Rapidly Changing Digital World

Michael McAleer1-5, Judit Oláh6,* and József Popp7

1Department of Finance, Asia University, Taiwan
2Discipline of Business Analytics, University of Sydney Business School, Australia
3Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam, The Netherlands
4Department of Economic Analysis and ICAE, Complutense University of Madrid, Spain
5Institute of Advanced Studies, Yokohama National University, Japan
6Faculty of Economics and Business, Institute of Applied Informatics and Logistics, University of Debrecen, Hungary
7Faculty of Economics and Business, Institute of Sectoral Economics and Methodology, University of Debrecen, Hungary
Abstract: The purpose and novelty of the paper is to present arguments for and against the use of the Impact Factor (IF) in a rapidly changing digital world. The paper discusses the calculation of IF, as well as the pros and cons of IF. Editorial policies that affect IF are examined, and the merits of open access online publishing are presented. Scientific quality and the IF dilemma are analysed, and alternative measures of impact and quality are evaluated. The San Francisco Declaration on Research Assessment is also discussed.

Keywords: Impact Factor, Quality of research, Pros and Cons, Implications, Digital world, Editorial policies, Open access online publishing, SCIE, SSCI.

1. INTRODUCTION

Librarians and information scientists have been evaluating journals for almost 90 years. Gross and Gross (1927) conducted a classic study of citation patterns in the 1920s, followed by Brodman (1944), with studies of physiology journals and subsequent reviews following this lead. Garfield (1955) first mentioned the idea of an impact factor in Science. The introduction of the experimental Genetics Citation Index in 1961 led to the publication of the Science Citation Index (SCI). In the early 1960s, Sher and Garfield created the journal impact factor to assist in selecting journals for the new SCI (Garfield and Sher, 1963).

In order to do this, they simply re-sorted the author citation index into the journal citation index and, from this exercise, they learned that initially a core group of large and highly cited journals needed to be covered in the new SCI. They sampled the 1969 SCI to create the first published ranking by impact factor. Garfield's (1972) paper in Science on "Citation analysis as a tool in journal evaluation" has received most attention from journal editors, and was published before Journal Citation Reports (JCR) existed. A quarterly issue of the 1969 SCI was used to identify the most significant journals in science, where the analysis was based on a large sample of the literature. After using journal statistical data to compile the SCI for many years, the Institute for Scientific Information (ISI) in Philadelphia started to publish Journal Citation Reports (JCR) in 1975 as part of the SCI and the Social Sciences Citation Index (SSCI).

However, ISI recognized that smaller but important review and specialty journals might not be selected if they depended solely on total publication or citation counts (Garfield, 2006). A simple method for comparing journals, regardless of size or citation frequency, was needed, and the Thomson Reuters Impact Factor (IF) was created. The term "impact factor" has gradually evolved, especially in Europe, to describe both journal and author impact. This ambiguity often causes problems.

It is one thing to use impact factors to compare journals and quite another to use them to compare authors. Journal impact factors generally involve relatively large populations of articles and citations. Indeed, most metrics relating to impact and quality are based on citations data (Chang and McAleer, 2015).

*Address correspondence to this author at the Faculty of Economics and Business, Institute of Applied Informatics and Logistics, University of Debrecen, Hungary; Tel: 0036202869085; E-mail: [email protected]
JEL: O34, O31, D02.

E-ISSN: 1929-7092/18 © 2018 Lifescience Global


Individual authors, on average, produce much smaller numbers of articles, although some can be phenomenal. The impact factor is used to compare different journals within a certain field. The ISI Web of Science (WoS) indexes more than 12,000 science and social science journals.

JCR offers "a systematic, objective means to critically evaluate the world's leading journals, with quantifiable, statistical information based on citation data" (Thomson Reuters, 2015). However, there are increasing concerns that the impact factor is being used inappropriately and not in ways as originally envisaged (Garfield, 2006; Adler et al., 2009). IF reveals several weaknesses, including the mismatch between citing and cited documents. The scientific community seeks and needs better certification of journal procedures and metrics to improve the quality of published science and social science.

The plan and novelty of the remainder of the paper are as follows. Section 2 discusses calculation of the Impact Factor (IF), and the pros and cons of IF are given in Section 3. Editorial policies that affect IF are examined in Section 4. The merits of open access online publishing are presented in Section 5. Scientific quality and the IF dilemma are analysed in Section 6, and alternative measures of impact and quality are evaluated in Section 7. The San Francisco Declaration on Research Assessment is discussed in Section 8. Concluding comments are given in Section 9.

2. CALCULATION OF IMPACT FACTOR (IF)

IF is calculated yearly, starting from 1975 for those journals that are indexed in the JCR. In any given year, the impact factor of a journal is the average number of citations received per paper published in that journal during the two preceding years. Thus, the impact factor of a journal is calculated by dividing the number of current year citations to the source items published in that journal during the previous two years by the number of source items published in those two years (Garfield, 1972). For example, if a journal has an impact factor of 3 in 2013, then its papers published in 2011 and 2012 received 3 citations each, on average, in 2013.

New journals, which are indexed from their first published issue, will receive an IF after two years of indexing. In this case, the citations to the year prior to Volume 1, and the number of articles published in the year prior to Volume 1, are known zero values. Journals that are indexed starting with a volume other than the first volume will not be given an IF until they have been indexed for three years. IF relates to a specific time period. It is possible to calculate it for any desired period, and the JCR also includes a five-year IF. The JCR shows rankings of journals by IF, if desired by discipline, such as organic chemistry or psychiatry.

Citation data are obtained from a database produced by ISI, which continuously records scientific citations as represented by the reference lists of articles from a large number of the world's scientific journals. The references are rearranged in the database to show how many times each publication has been cited within a certain period, and by whom, and the results are published as the SCI. On the basis of the SCI and author publication lists, the annual citation rate of papers by a scientific author or research group can be calculated. Similarly, the citation rate of a scientific journal can be calculated as the mean citation rate of all the articles contained in the journal (Garfield, 1972). This means that IF is a measure of the frequency with which the "average article" in a journal has been cited in a particular year or period.

IF could just as easily be based on the previous year's articles alone, which would give even greater weight to rapidly changing fields. A less current IF could take into account longer periods of citations and/or sources, but the measure would then be less current. The JCR 'help page' provides instructions for computing five-year impact factors. Nevertheless, when journals are analysed within discipline categories, the rankings based on 1-, 7- or 15-year IF do not differ significantly. Garfield reported on this in The Scientist (Garfield, 1998a, b).

When journals were studied across fields, the ranking for physiology journals improved significantly as the number of years increased, but the rankings within the physiology category did not change significantly. Similarly, Hansen and Henrikson (1997) reported "good agreement between the journal impact factor and the overall (cumulative) citation frequency of papers on clinical physiology and nuclear medicine."

IF is useful in clarifying the significance of absolute (or total) citation frequencies. It eliminates some of the bias in such counts, which favor large over small journals, frequently issued over less frequently issued journals, and older over newer journals. In the latter case, in particular, such journals have a larger citable body of literature than do smaller or younger journals. All things being equal, the larger the number of previously published articles, the more often a journal will be cited (Garfield, 1972).
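The two-year calculation described above can be sketched in a few lines of code. The journal counts below are hypothetical, chosen only to reproduce the IF = 3 example in the text.

```python
def impact_factor(citations, articles, year):
    """Two-year IF: citations received in `year` to items published in the
    two preceding years, divided by the number of citable items published
    in those two years."""
    window = (year - 1, year - 2)
    cites = sum(citations[year].get(y, 0) for y in window)
    items = sum(articles[y] for y in window)
    return cites / items

# Hypothetical journal: 100 citable items in 2011-2012 attracting
# 300 citations during 2013 give the impact factor of 3 used in the text.
articles = {2011: 40, 2012: 60}
citations = {2013: {2011: 120, 2012: 180}}
print(impact_factor(citations, articles, 2013))  # 3.0
```

Replacing the two-year `window` with the five preceding years gives the five-year variant of the measure that the JCR also reports.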
The integrity of data, and transparency about their acquisition, are vital to science. IF data that are gathered and sold by Thomson Scientific (formerly the Institute of Scientific Information, or ISI) have a strong influence on the scientific community, affecting decisions on where to publish, whom to promote or hire, the success of grant applications, and even salary bonuses, among others.

3. PROS AND CONS OF IF

In an ideal world, IF would rely only on complete and correct citations, reinforcing quality control throughout the entire journal publication chain. There is a long history of statistical misuse in science (Cohen, 1938), but citation metrics should not perpetuate this failing. Numerous criticisms have been made of the use of IF. The research community seems to have little understanding of how impact factors are determined, with no audited data to validate their reliability (Rossner et al., 2007).

Other criticism focuses on the effect of the impact factor on the behavior of scholars, editors and other stakeholders (van Wesel, 2016; Moustafa, 2015). The use of IF instead of actual article citation counts to evaluate individuals is a highly controversial issue. Grants and other policy agencies often wish to bypass the work involved in obtaining citation counts for individual articles and authors.

It is well known that there is a skewed distribution of citations in most fields, with a few articles cited frequently, and many articles cited rarely, if at all (see Chang et al., 2011). There are other statistical measures to describe the nature of the citation frequency distribution skewness. However, so far no measures other than the mean have been provided to the research community (Rossner et al., 2007). For example, the initial human genome paper in Nature (Lander et al., 2001) has been cited a total of 5,904 times (as of November 20, 2007). In a self-analysis of their 2004 impact factor, Nature noted that 89% of their citations came from only 25% of the papers published, and so the importance of any one publication will be different from, and in most cases less than, the overall number (Editorial, 2005).

IF is based on the number of citations per paper, yet citation counts follow a Bradford distribution (that is, a power law distribution), so that the arithmetic mean is a statistically inappropriate measure (Adler et al., 2008). With a normal distribution (such as would be expected with, for example, adult body mass), the mode, mean and median all have similar values. However, with citations data, these common statistics may differ dramatically because the median would typically be much lower than the mean.

Most articles are not well-cited, but some articles may have unusual cross-disciplinary impacts. The so-called 80/20 phenomenon applies, in that 20% of articles may account for 80% of the citations. The key determinants of impact factor are not the number of authors or articles in the field, but rather the citation density and the age of the literature that is cited. The size of a field, however, will increase the number of "super-cited" papers. Although a few classic methodological papers may exceed a high threshold of citation, many other methodological and review papers do not. Publishing mediocre review papers will not necessarily boost a journal's impact (Garfield, 2006).

Some examples of super-citation classics include the Lowry method (Lowry et al., 1951), which has been cited 300,000 times, and the Southern Blot technique, which has been cited 30,000 times (Southern, 1975). As the roughly 60 papers cited more than 10,000 times are decades old, they do not affect the calculation of the current impact factor. Indeed, of 38 million items cited from 1900-2005, only 0.5% were cited more than 200 times, one-half were not cited at all (which relates to the PI-BETA (Papers Ignored - By Even The Authors) metric presented in Chang et al. (2011)), and about one-quarter were not substantive articles but rather the editorial ephemera mentioned earlier (Garfield, 2006). The appearance of articles on the same subject in the same issue may have an upward effect, as shown in Opthof (1999).

Another aspect is self-citation, in which citations to articles may originate from within a journal, or from other journals. In general, most citations originate from other journals, but the proportion of self-citation varies with discipline and journal. Generally, self-citation rates for most journals remain below 20% (ISI, 2002). It seems to be harmless in many cases, with few editorial citations (Archambault and Lariviere, 2009). However, it is potentially problematic when editors choose to manipulate the IF with self-citations within their own journal (Rieseberg and Smith, 2008; Rieseberg et al., 2011).

In addition, the definition of what is considered an "article" is often a source of controversy for journal editors. For example, some editorial material may cite
articles (items by the Editor, and Letters to the Editor commenting on previously published articles), thereby creating an opportunity to manipulate IF. In some cases, the Letters section can be divided into correspondence and research letters, the latter being peer-reviewed, and hence citable for the denominator, which can lead to an increase in the denominator and to a fall in IF, as Letters tend not to be highly cited.

It has been stated that IF and citation analysis are, in general, affected by field-dependent factors (Bornmann and Daniel, 2008). This may invalidate comparisons, not only across disciplines, but even within different fields of research in a specific discipline (Anauati et al., 2014). The percentage of total citations occurring in the first two years after publication also varies highly among disciplines, from 1-3% in the mathematical and physical sciences, to 5-8% in the biological sciences (van Nierop, 2009). In short, impact factors should not be used to compare journals across disciplines.

The fact that WoS represents a sample of the scientific literature is often overlooked, and IF is often treated as if it was based on a census. In reality, WoS draws on a sample of the scientific literature, selected following their own criteria (Vanclay, 2012), as amended from time to time (for example, through suspensions for self-citation, although this is not as common as might be expected). Other providers, such as Scopus and Google Scholar, and evaluation agencies (for example, the Excellence for Research in Australia) use different samples of the scientific literature, so their interpretation of corresponding impact and quality would differ from IF.

WoS policies and decisions to include or suspend a journal also affect IF. For example, World Journal of Gastroenterology was suspended in 2005, so that WoS has no data, but Scopus indicates that the journal had over 6000 citations to articles during 2004-05. Therefore, the suspension of one journal could have deflated IF for other gastroenterology journals by as much as 1%. These sources of variation lead one to question the practice of publishing IF with three decimal points, and to ask why there is no statement regarding variability (Vanclay, 2012). However, the annual JCR is not based on a sample, and includes every citation that appears in the 12,000 plus journals that it covers, so that discussions of sampling errors in relation to JCR are not particularly meaningful. Furthermore, ISI uses three decimal places to reduce the number of journals with an identical impact rank (Garfield, 2006).

WoS and JCR suffer from several systemic errors. A report feature of WoS often arrives at different results from the figures published in JCR because WoS and JCR use different citation matching protocols. WoS relies on matching citing articles to cited articles, and requires either a digital object identifier (DOI) or enough information to make a credible match. An error in the author, volume or page numbers may result in a missed citation. WoS attempts to correct for errors if there is a close match. In contrast, all that is required to register a citation in JCR is the name of the journal and the publication year.

With a lower bar of accuracy required to make a match, it is more likely that JCR will pick up citations that are not registered in WoS. Furthermore, WoS and JCR use different citation windows. The WoS Citation Report will register citations when they are indexed, and not when they are published. If a December 2014 issue is indexed in January 2015, then the citations will be counted as being made in 2015, not 2014. In comparison, JCR counts citations by publication year. For large journals, this discrepancy is not normally an issue, as a citation gain at the beginning of the cycle is balanced by the omission of citations at the end of the cycle. For smaller journals that may publish less frequently, the addition or omission of a single issue may make a significant difference in the IF.

In contrast, WoS is dynamic, while JCR is static. In order to calculate journal IF, Thomson Reuters takes an extract of their dataset in March, whether or not it has received and indexed all journal content from the previous year. In comparison, WoS continues to index as issues are received. There are also differences in indexing. Not all journal content is indexed in WoS. For example, a journal issue containing conference abstracts may not show up in the WoS dataset, but citations to these abstracts may count toward calculating a journal IF.

While there may be a delay of several years for some topics, papers that achieve high impact are usually cited within months of publication, and almost certainly within a year or so. This pattern of immediacy has enabled Thomson Scientific to identify "hot papers" in its bimonthly publication, Science Watch. However, full confirmation of high impact is generally obtained two years later. The Scientist waits up to two years to select hot papers for commentary by authors. Most of these papers will eventually become "citation classics". However, the chronological limitation on the impact calculation eliminates the bias that "super classics" might introduce.
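The December/January indexing effect described above can be made concrete with a toy example; the dates and counts here are hypothetical.

```python
from datetime import date

# Hypothetical citations to a small journal: each record carries the date
# the citing issue was published and the date it was indexed.
citations = [
    {"published": date(2014, 6, 1),   "indexed": date(2014, 6, 20)},
    {"published": date(2014, 12, 15), "indexed": date(2015, 1, 10)},  # late-indexed issue
]

# JCR-style counting attributes citations to the publication year;
# WoS-style counting attributes them to the year they were indexed.
jcr_2014 = sum(1 for c in citations if c["published"].year == 2014)
wos_2014 = sum(1 for c in citations if c["indexed"].year == 2014)
print(jcr_2014, wos_2014)  # 2 1
```

For a journal this small, the single late-indexed issue halves the 2014 count under index-date counting, which is the discrepancy the text describes for smaller journals.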
Absolute citation frequencies are biased in this way but, on occasion, a hot paper might affect the current IF of a journal.

JCR provides quantitative tools for ranking, evaluating, categorizing, and comparing journals, as IF is widely regarded as a quality ranking for journals, and is used extensively by leading journals in advertising. The heuristic methods used by Thomson Scientific (formerly Thomson ISI) for categorizing journals are by no means perfect, even though citation analysis informs their decisions. Pudovkin and Garfield (2004) attempted to group journals objectively by relying on the 2-way citational relationships between journals to reduce the subjective influence of journal titles, such as Journal of Experimental Medicine, which is one of the top 5 immunology journals (Garfield, 1972).

JCR recently added a new feature that provides the ability to establish journal categories more precisely based on citation relatedness. A general formula based on the citation relatedness between two journals is used to express how close they are in subject matter. However, in addition to helping libraries decide which journals to purchase, IF is also used by authors to decide where to submit their research papers. As a general rule, journals with high IF typically include the most prestigious journals.

The IF reported by JCR implies that all editorial items in Science, Nature, JAMA, NEJM, and so on, can be neatly categorized. Such journals publish large numbers of articles that are not substantive research or review articles. Correspondence, letters, commentaries, perspectives, news stories, obituaries, editorials, interviews, and tributes are not included in the JCR denominator. However, they may be cited, especially in the current year, but that is also why they do not significantly affect impact calculations. Nevertheless, as the numerator includes later citations to these ephemera, some distortion will arise.

Only a small group of journals are affected, if at all. Those that are affected change by 5 or 10% (Pudovkin and Garfield, 2004). According to Thomson Reuters, 98% of the citations in the numerator of the impact factor are to items that are considered as citable, and hence are counted in the denominator. The degree of misrepresentation is small. Many of the discrepancies inherent in IF are eliminated altogether in another Thomson Scientific database called Journal Performance Indicators (Fassoulaki et al., 2002). Unlike JCR, the Journal Performance Indicators database links each source item to its own unique citations. Therefore, the impact calculations are more precise, as only citations to the substantive items that are in the denominator are included.

Recently, Webometrics has been brought increasingly into play, though there is as yet little evidence that this approach is any better than traditional citation analysis. Web "citations" may occur slightly earlier, but they are not the same as "citations". Thus, one must distinguish between readership, or downloading, and actual citations in newly published papers. Some limited studies indicate that Web citations are a harbinger of future citations (Lawrence, 2001; Vaughan and Shaw, 2003; Antelman, 2004; Kurtz et al., 2005).

4. EDITORIAL POLICIES THAT AFFECT IF

A journal can adopt different editorial policies to increase IF (Arnold and Fowler, 2011). For example, journals may publish a larger percentage of review articles, which are generally cited more frequently than research reports, as the former tend to include many more papers in the extended reference list. Therefore, review articles can raise the IF of a journal, and review journals tend to have the highest IF in their respective fields. No calculation of primary research papers only is made by Thomson Scientific.¹ The numerator restricts the count of citations to scientific articles excluding, for example, editorial comment. However, most citations are made by articles (including reviews) to earlier articles (Hernan, 2009).

Journal editors could also cite ghost articles that could usefully increase IF, thereby distorting the performance indicators for real contributors. Given the relatively lax error checking by WoS, it is tempting to include a series of ghost articles in a review of this kind to demonstrate weaknesses of IF (Rieseberg et al., 2011). Some journal editors set their submissions policy as "by invitation only" to invite exclusively senior scientists to publish "citable" papers to increase IF (Moustafa, 2015).

¹Thomson Scientific was one of the operating divisions of the Thomson Corporation from 2006 to 2008. Following the merger of Thomson with Reuters to form Thomson Reuters in 2008, it became the scientific business unit of the new company. The IF is now produced by Clarivate Analytics, which was formerly the Intellectual Property and Science business of Thomson Reuters until 2016, when it was acquired by corporate investors and spun off into an independent company.
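The numerator/denominator asymmetry discussed above (citations to non-citable front matter enter the numerator, while only substantive items enter the denominator) can be sketched as follows; the item mix is hypothetical.

```python
# Hypothetical two-year publication window for one journal: editorials are
# excluded from the denominator, but citations to them still count.
items = [
    {"type": "article",   "citations": 10},
    {"type": "article",   "citations": 2},
    {"type": "editorial", "citations": 6},  # cited ephemera, not "citable"
]

numerator = sum(i["citations"] for i in items)                 # 18 citations
denominator = sum(1 for i in items if i["type"] == "article")  # 2 citable items
print(numerator / denominator)  # 9.0

# Counting the editorial in both numerator and denominator would instead
# give 18 / 3 = 6.0 -- the gap between the two figures is the distortion,
# and the incentive for editors to have material classed as non-citable.
```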
Pros and Cons of the Impact Factor in a Rapidly Changing Digital World Journal of Reviews on Global Economics, 2018, Vol. 7 285

declining to publish articles (such as case reports in 5. MERITS OF OPEN ACCESS ONLINE
PUBLISHING
medical journals) that are unlikely to be cited, or by
altering articles (by not allowing an abstract or
The term “open access” basically refers to free
biblography) in the hope that Thomson Scientific will
public access to research papers. Academics have
not deem it a “citable item”. As a result of negotiations
argued that since academic research and publishing
over whether items are “citable”, IF variations of more
were publicly funded, the public should have free online
than 300% have been observed (PLoS Medicine
access to the papers being published as a result.
Editors, 2006). Journals prefer to publish a large
Publishing is a highly competitive market, no less so for
proportion of papers, or at least the papers that are
the open access segment. The big publishers have
expected to be highly cited, early in the calendar year
long recognised the popularity of open access, and
as this will give those papers more time to gather
now offer a range of publications accordingly. However,
citations. Several methods exist for a journal to cite
somebody always has to pay for publication. This
articles in the same journal that will increase IF
means that the new scientific findings become freely
(Fassoulaki et al., 2002; Agrawal, 2005).
accessible, but researchers generally have to include
Beyond editorial policies that may skew IF, journals publication costs in their research budget. Gates
can take overt steps to game the system. For example, Foundation is already going one step further and
in 2007, the specialist journal Folia Phoniatrica et linking future funding to a requirement of publication
Logopaedica, with an impact factor of 0.66, published under the “creative commons” license, allowing
an editorial that cited all its articles from 2005 to 2006 material to be used free of charge for the rapid and
in a protest against the “absurd scientific situation in widespread dissemination of scientific knowledge.
some countries” related to use of IF (Schuttea and
The strength of the relationship between journal IF
Svec, 2007). The large number of citations meant that
and the citation rates of papers has been steadily
IF for that journal increased to 1.44. As a result of the
decreasing since articles began to be available digitally
unedifying increase, the journal was not included in the
(Lozano et al., 2012). The aggressive expansion of
2008 and 2009 JCR.
large commercial publishers has increasingly
Coersive citation is a practice in which an editor consolidated the control of scientific communication in
forces an author to add spurious self-citations to an the hands of ’for-profit’ corporations. Such publishers
article before the journal will agree to publish it in order presented a challenge to the open access movement
and online publishing, the development of a model of a
to inflate IF. A survey published in 2012 indicates that
not for-profit journals run by and for scientists.
coercive citation has been experienced by one in five
However, the last decade have revolutionized the
researchers working in economics, sociology,
landscape of scientific publishing and communication.
psychology, and multiple business disciplines, and it is
more common in business and in journals with a lower For the Open Access movement, the last 15 years
IF (Wilhite and Fong, 2012). However, cases of have been a pivotal time for addressing the financial
coercive citation have occasionally been reported for and commercial considerations of academic publishing,
other scientific disciplines (Smith, 1997; Chang et al., moving from grass roots initiatives to the introduction of
2013). government policy changes. Over the last decade,
there has been an immense effort to change how
Even citations to retracted articles may be counted accessible all of this new (and old) information is to the
in calculating IF (Liu, 2007). In an example, Woo Suk world at large.
Hwang’s stem cell papers in Science from 2004 and
2005, both subsequently retracted, have been cited a The Hindawi Publishing Corporation seems to have
total of 419 times (as of November 20, 2007). The been the first open access publisher. However, PLOS
denominator of IF, however, contains only those (BioMed Central launched open access in 2000) played
articles designated by Thomson Scientific as primary a pivotal role in promoting and supporting the Open
research articles or review articles, but Nature “News Access movement. The launch of PLOS had the
and Views”, among others, is not counted (Editorial, additional effect of creating pressure on traditional
2005). Therefore, IF calculation contains citation values publishers to consider their business models,
in the numerator for which there is no corresponding demonstrating that open access publishing was not
value in the denominator. equivalent to vanity publishing, even though it is the
286 Journal of Reviews on Global Economics, 2018, Vol. 7 McAleer et al.

author who pays the costs associated with publishing in this model. PLOS also showed that open access publishing could be done in a way that might tempt scientists to submit their best work to somewhere other than the established traditional journals. The involvement of PLOS in the Open Access movement has seen the acceptance of open access publishing (Ganley, 2013).

The Fair Access to Science and Technology Research Act in the US has mandated earlier public release of taxpayer-funded research. In the UK, the Research Councils provide grants to UK Higher Education Institutes to support payment of the article processing charges associated with open access publishing. The European Commission has a strategy in place that aims to make the results of projects funded by the EU Research Framework open access via either "green" or "gold" publishing. The Australian Research Council (ARC) implemented a policy requiring deposition of ARC-funded research publications in an open access institutional repository within 12 months of publication.

The future for improved access to research is bright. The Howard Hughes Medical Institute, the Max Planck Society and the Wellcome Trust launched the online, open access, peer-reviewed journal eLife in 2012, which publishes articles in biomedicine and the life sciences. The journal does not promote IF, but provides qualitative and quantitative indicators regarding the scope of published articles. Moreover, articles are published together with a simplified language summary in eLife Digests to make them accessible to a wider audience, including students, researchers from other areas, and the general public, which also attracts scientific dissemination vehicles and major newspapers (Malhotra and Marder, 2015).

However, not all forms of open access publishing are equal. A key purpose of providing access is to enable and facilitate reuse of the content, but the licenses publishers use can vary radically from one journal to another. If a paper is open via deposition in a repository, or as part of a publisher's hybrid access model, it may still, unfortunately, remain closed from a reuse perspective.

6. SCIENTIFIC QUALITY AND THE IF DILEMMA

It is not surprising that alternative methods for evaluating research are being sought, such as citation rates and journal IF, which seem to be quantitative and objective indicators directly related to published science.

Experience has shown that, in each specialty or discipline, the best journals are those in which it is most difficult to have an article accepted, and these are the journals that have a high IF. Many of these leading journals existed long before the IF was devised. It is important to note that IF is a journal metric, and should not be used to assess individual researchers or institutions (Seglen, 1997). As the IF is readily available, it has been tempting to use IF for evaluating individual scientists or research groups because it is widely held to be a valid evaluation criterion (Martin, 1996), and it is probably the most widely used indicator apart from a simple count of publications. On the assumption that the journal is representative of its articles, the journal IF of an author's articles can simply be aggregated to obtain an apparently objective and quantitative measure of the author's scientific achievements.

However, IF is not statistically representative of individual journal articles, and correlates poorly with the actual citations of individual articles (the citation rate of articles determines journal impact, but not vice-versa). Furthermore, citation impact is primarily a measure of scientific utility rather than of scientific quality, and the selection of references in a paper is subject to strong biases that are unrelated to quality (MacRoberts and MacRoberts, 1989; Seglen, 1992, 1995). For the evaluation of scientific quality, there seems to be no alternative to qualified experts reading the publications. In the prescient words of Brenner (1995): "What matters absolutely is the scientific content of a paper, and nothing will substitute for either knowing or reading it".

According to Sally et al. (2014), journal rankings that are constructed solely on the basis of IF are only moderately correlated with those compiled from the results of experts. The use of journal IF in evaluating individuals has inherent dangers. In an ideal world, evaluators would read each and every article, and make personal judgments. The recent International Congress on Peer Review and Biomedical Publication, held from 8-10 September 2013 in Chicago, demonstrated the difficulties in reconciling such peer judgments. Most individuals do not have the time to read all the relevant articles. Even if they do, their judgment would likely be tempered by observing the comments of those who have cited the work. Despite the wide use of peer review, little is known about its
impact on the quality of reporting of published research. Moreover, it seems that peer reviewers frequently fail to detect important deficiencies and fatal flaws in papers.

7. ALTERNATIVE MEASURES OF IMPACT AND QUALITY

In the 1990s, the Norwegian researcher Seglen developed a systematic critique of IF, its validity, and the way in which it is calculated (Moed et al., 1996; Seglen, 1997). This line of research has identified several reasons for not using IF in research assessments of individuals and research groups (Wouters, 2013a). As the values of journal IF depend on the aggregated citation rates of the individual articles, IF cannot be used as a substitute for individual articles in research assessments, especially as a small number of articles may be cited heavily, while a large number of articles are only cited infrequently, and some are not cited at all (see Chang et al., 2011). This skewed distribution is a general phenomenon in citation patterns for all journals. Therefore, if an author has published an article in a high impact journal, this does not mean that the research will also have a high impact.

Furthermore, fields differ strongly in their IF. A field with a rapid turnover of research publications and long reference lists (such as in biomedical research) will tend to have much higher IF for its journals than a field with short reference lists, in which older publications remain relevant for much longer (such as fields in mathematics). An average paper is cited ∼6 times in the life sciences, 3 times in physics, and <1 time in mathematics. Many groundbreaking older articles are modestly cited due to the smaller scientific community when they were published.

Moreover, publications on significant discoveries often stop accruing citations once their results are incorporated into textbooks. Thus, citations consistently underestimate the importance of influential vintage papers (Maslov and Redner, 2008). Moreover, smaller fields will usually have a smaller number of journals, thereby resulting in fewer possibilities to publish in high impact journals. Whenever journal indicators and metrics take the differences between fields and disciplines into account, the number of citations to articles produced by research groups as a whole tends to show a somewhat stronger correlation with the journal indicators. Nevertheless, the statistical correlation remains modest. Research groups tend to publish across a whole range of journals, with both high and low IF. It will, therefore, usually be much more accurate to analyze the influence of these bodies of work, rather than fall back on journal indicators, such as IF (Wouters, 2013b).

As a result, it does not make sense to compare IF across research fields. Although this is well known, comparisons are still made frequently, for example, when publications are compared based on IF in multidisciplinary settings (such as in grant proposal reviews). In addition, the way in which IF is calculated in WoS has a number of technical characteristics such that IF can be gamed relatively easily by unscrupulous journal editors. A more generic problem with using IF in research assessment is that not all fields have IF, as these are only based on journals in WoS that have IF.

Scholarly fields that focus on books, monographs or technical designs are disadvantaged in evaluations in which IF is important (Wouters, 2013b). IF creates a strong disincentive to pursue risky and potentially groundbreaking research, as it takes years to create a new approach in a new experimental context, during which no publications might be expected. Such metrics can block innovation because they encourage scientists to work in areas of science that are already highly populated, as it is only in these fields that large numbers of scientists can be expected to cite references to one's work, no matter how outstanding it might be (Bruce, 2013). In response to these problems, five main journal impact indicators have been developed as an improvement upon, or alternative to, IF (see Chang and McAleer (2015), among others).

In 1976, a recursive IF was proposed that gives citations from journals with high impact greater weight than citations from low impact journals (Pinski and Narin, 1976). Such a recursive IF resembles Google's PageRank algorithm, although Pinski and Narin (1976) use a "trade balance" approach, in which journals score highest when they are often cited but rarely cite other journals (Liebowitz and Palmer, 1984; Palacios-Huerta and Volij, 2004; Kodrzycki and Yu, 2006). PageRank gives greater weight to publications that are cited by important papers, and also weights citations more highly from papers with fewer references. As a result of these attributes, PageRank readily identifies a large number of modestly cited articles that contain groundbreaking results. Bollen et al. (2006) proposed replacing impact factors with the PageRank algorithm.
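The recursive weighting shared by Pinski and Narin's (1976) proposal and PageRank can be illustrated with a small fixed-point iteration over a journal citation matrix. The journal names, citation counts, and damping value below are invented for illustration; this is a minimal sketch of the general idea, not the exact Pinski-Narin, Eigenfactor, or SJR algorithm:

```python
# Illustrative recursive journal influence, in the spirit of
# Pinski-Narin (1976) and PageRank: a citation is worth more when it
# comes from a journal that is itself highly weighted, and each citing
# journal distributes a single "vote" over all of its references, so a
# citation from a journal with few references counts for more.
# All counts below are hypothetical.

journals = ["A", "B", "C"]
# cites[i][j] = citations from journal i to journal j
cites = [
    [0, 8, 2],   # A cites B heavily
    [1, 0, 1],   # B cites sparingly ("trade balance" favours B)
    [5, 5, 0],   # C cites a lot
]

n = len(journals)
damping = 0.85

# Normalize each citing journal's row so its outgoing votes sum to one.
out_totals = [sum(row) for row in cites]
share = [[cites[i][j] / out_totals[i] for j in range(n)] for i in range(n)]

# Power iteration: a journal's weight is the damped sum of its citers'
# weights, each scaled by the share of votes sent to it.
w = [1.0 / n] * n
for _ in range(100):
    w = [(1 - damping) / n
         + damping * sum(share[i][j] * w[i] for i in range(n))
         for j in range(n)]

ranking = sorted(zip(journals, w), key=lambda t: -t[1])
print(ranking)
```

With these hypothetical counts, journal B ends up ranked highest: it receives many citations while citing sparingly, which is exactly the "trade balance" pattern described above.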
The SCImago Journal Rank (SJR) indicator follows the same logic as Google's PageRank algorithm, namely that citations from highly cited journals have a greater influence than citations from lowly cited journals. The SJR indicator is a measure of the scientific influence of scholarly journals that accounts for both the number of citations received by a journal and the importance or prestige of the journals where such citations occur, and it has been developed for use in extremely large and heterogeneous journal citation networks. It is a size-independent indicator whose values order journals by their average prestige per article, and it can be used for journal comparisons in science evaluation processes. SCImago (based in Madrid) calculates the SJR on the basis of the Scopus citation database that is published by Elsevier (Butler, 2008).

Eigenfactor is another PageRank-type measure of journal influence (Bergstrom, 2007), with rankings freely available online, as well as in JCR. A similar logic is applied in two other journal impact factors from the Eigenfactor.org research project, based at the University of Washington, namely Eigenfactor and Article Influence Score (AIS). A journal's Eigenfactor score is measured as its importance to the scientific community. The Eigenfactor was created to help capture the value of publication output versus journal quality (that is, the value of a single publication in a major journal versus many publications in minor journals). The scores are scaled so that the sum of all journal scores is 100.

For example, in 2006, Nature had the highest score of 1.992. The Article Influence Score purportedly measures the average influence, per article, of the papers published in a journal, and is calculated by dividing the Eigenfactor by the number of articles published in the journal. The mean AIS is 1.00, such that an AIS greater than 1.00 indicates that the articles in a journal have an above-average influence. This does not mean that all relevant differences between disciplines, such as the amount of work that is needed to publish an article, are cancelled out. Moreover, Eigenfactor assigns journals to a single category, making it more difficult to compare across disciplines. Eigenfactor is calculated on the basis of WoS and uses citations to an article in the previous five years, whereas the window is two years for IF and three years for SJR.

Chang et al. (2016) argue that Eigenfactor should, in fact, be interpreted as a "Journal Influence Score", and that the Article Influence Score is incorrectly interpreted as having anything to do with the score of an article, as each and every article in a journal has the same AIS. As a matter of fact, AIS is the "per capita Journal Influence Score", which has no reflection whatsoever on any article's influence.

The source normalized impact per paper (SNIP) indicator improves upon IF as it does not make any difference in the numerator and denominator regarding "citeable items", and because it takes field differences in citation density into account. The indicators have been calculated by Leiden University's Centre for Science and Technology Studies (CWTS), based on the Scopus bibliographic database that is produced by Elsevier. Indicators are available for over 20,000 journals indexed in the Scopus database. SNIP measures the average citation impact of the publications of a journal.

Unlike the journal IF, SNIP corrects for differences in citation practices between scientific fields and disciplines, thereby allowing for more accurate between-field comparisons of citation impact (CWTS, 2015). SNIP is computed on the basis of Scopus by CWTS (Waltman et al., 2013a, b). This indicator also weights citations, not on the basis of the number of citations to the citing journal, but on the basis of the number of references in the citing article. Basically, the citing paper is seen as giving one vote, which is distributed over all cited papers. As a result, a citation from a paper with 10 references adds 1/10th to the citation frequency, whereas a citation from a paper with 100 references adds only 1/100th. The effect is that SNIP balances out differences across fields and disciplines in citation density.

It is worth mentioning article-level metrics, which measure impact at the article level rather than the journal level, and may include article views, downloads, or mentions in social media. As early as 2004, the British Medical Journal (BMJ) published the number of views for its articles, which was found to be somewhat correlated with citations (Perneger, 2004). In 2008, the Journal of Medical Internet Research began publishing views and tweets. These "tweetations" proved to be a good indicator of highly cited articles, leading the author to propose a "Twimpact factor", which is the number of Tweets an article receives in the, admittedly arbitrary, first seven days after publication, as well as a Twindex, which is the rank percentile of an article's Twimpact factor (Eysenbach, 2011). Starting in March 2009, the Public Library of Science (PLoS) also
introduced article-level metrics for all articles (Thelwall et al., 2013).

8. SAN FRANCISCO DECLARATION ON RESEARCH ASSESSMENT

It is important that IF be improved, because it is influential in shaping science and publication patterns (Knothe, 2006; Larivière and Gingras, 2010). Several alternative metrics based on citation data available from Thomson Reuters (for example, Eigenfactor, Article Influence Score, PI-BETA (Papers Ignored - By Even The Authors), IFI (Impact Factor Inflation), C3PO (Citation Performance Per Paper Online), H-STAR (Historical Self-citation Threshold Approval Rating), 2Y-STAR (2-Year Self-citation Threshold Approval Rating), CAI (Cited Article Influence), 5YD2 (5YIF Divided By 2YIF), ESC (Escalating Self Citations), and ICQ (Index of Citations Quality) in Chang and McAleer (2015)), and providers (for example, Scopus and SCImago), are forcing change, and threatening the dominance of IF provided by Thomson Reuters. However, there remains a need for many of the "gate-keeping" services that Thomson Reuters provides in assessing timeliness of publication and the rigour of the review process. This creates the opportunity for Thomson Reuters (or new providers) to reposition such services in a way that is more constructive and supportive of science in evaluating the impact and quality of published papers.

IF had its origins in the desire to inform library subscription decisions (Garfield, 2006), but it has gradually evolved into a status symbol for journals which, at its best, can be used to attract good manuscripts and, at its worst, can be unscrupulously and widely manipulated. IF often serves as a proxy for journal quality, but it is increasingly used more dubiously as a proxy for article quality (Postma, 2007). Despite these failings, in the absence of a clearly superior metric that is based on citations, there remains a general perception that IF is useful and a reasonably good indicator of journal quality.

The value-added that is offered by editors of Thomson Reuters derives from efficient matching of papers with reviewers (Laband, 1990). However, this neglects the editorial role of checking for duplication, "salami" (Abraham, 2000), plagiarism, and outright fraud. It is rarely made clear whether this checking is expected of reviewers, and/or completed by the editorial office. Science would be well served by an independent system to certify that editorial processes were prompt, efficient and thorough.

The weakest link in science communication is the certification that establishes that a research paper is a valid scientific contribution. There are several aspects involved, but few of these are an integral part of the review process (Weller, 2001; Hames, 2007). Many of the responsibilities are passed on to voluntary referees, who often lack the time and inclination to check rigorously for fraud and duplicate or "salami" publications (Dost, 2008). Indeed, Bornmann et al. (2008) observe that guidelines for referees rarely mention such aspects. Wager et al. (2009) noted that many science editors seem to be unconcerned about publication ethics, fraud, and unprofessional misconduct.

Some editors seek to push ethical responsibilities back on to the author (for example, Abraham, 2000; Tobin, 2002; Roberts, 2009), despite the prevalence of duplicate and fraudulent publications, indicating that self-regulation by authors is insufficient (Gwilym et al., 2004; Johnson, 2006; Berquist, 2008). There is a potential role for Google Scholar in helping to reduce fraud and plagiarism in science. Google Scholar already routinely displays "n versions of this article" in search results, and it could usefully display "other articles with similar text" and "other articles with similar images". Such an addition would be very useful for researchers when compiling reviews and meta-analyses. Clearly, quality science requires a more proactive role from editorial offices, and the pursuit of this role is most certainly not reflected in any aspect of IF.

IF could be retained in a similar form, but amended to deal with its limitations. Specifically, IF should: (1) rely on citations from articles and reviews, to articles and reviews; (2) re-examine the timeframe; and (3) abandon the 2-year window in favour of an alternative that reflects the varying patterns of citation accrual in different disciplines. Furthermore, the scientific community could rely on a community-based rating of journals, in much the same way as PLoS One does for individual articles, and as other on-line service providers offer to clients (Jeacle and Carter, 2011).

Saunders and Savulescu (2008) suggested independent monitoring and validation of research. There have been several calls (Errami and Garner, 2008; Butakov and Scherbinin, 2009; Habibzadeh and Winker, 2009, among others) for greater investment in, and more systematic efforts directed at, detecting plagiarism, duplication, and other unprofessional lapses in the editorial review process. Callaham and
McCulloch (2011) concluded that the monitoring of reviewer quality is even more crucial to maintain the mission of scientific journals.

Despite these many calls for reform, IF remains essentially unchanged, but supplemented with a 5-year variant, and Eigenfactor and Article Influence Score (recall the caveats about these two measures discussed previously). Thomson Reuters could show strong leadership with a system that is better aligned with quality considerations in scientific publications, including editorial efficiency and the constructiveness of the review process. Moreover, procedures to detect and deal with plagiarism, and intentional or unintentional lapses in professional and ethical standards, would be most welcome.

Comparing citation counts to individual journal articles is more informative than weighting the IF values of the journals. For bibliometricians, citation analysis is the impact measurement of individual scholarly items based on citation counts. Citation impact is just one aspect of an article's quality, which complements its accuracy and originality. As a clear definition of scientific quality does not exist, no all-in-one metric has yet been proposed (Marx and Bornmann, 2013). It is well known that citation-based data correlate well with research performance (quality) asserted by peers.

Comparing citation counts in various disciplines and at different points in time can be highly misleading, unless there is appropriate standardisation or normalisation. Normalisation is possible by using reference sets, which assess the citation impact of comparable publications (Vinkler, 2010). The reference sets contain publications that were published in the same year and subject category. The arithmetic mean of the citations for all publications in a reference set is calculated to specify the expected citation impact (Schubert and Braun, 1986). This enables calculation of the Relative Citation Rate (RCR), that is, the observed citation rate of an article divided by the mean expected citation rate. As with IF, the calculation of RCR has an inherent disadvantage related to the lack of normalisation of citations for subject category and publication year.

Percentiles, or the percentile rank classes method, is particularly useful for normalisation (Bornmann and Marx, 2013). The percentile of a published article gives an impression of the impact it has achieved in comparison to similar items in the same publication year and subject category. Unlike RCR, percentiles are not affected by skewed distributions, so that highly cited items do not receive excessively high weights. Publications are sorted by citation numbers and are allocated to percentile ranks ranging between 0 and 100. The percentile of a publication is its relative position within the reference set, so that the higher is the rank, the greater is the number of citations for the publication. For example, a value of 90 indicates that the publication belongs to the 10% of most highly cited articles. A value of 50 is the median level, which means an average impact. The publication set for the percentiles method ranges from single articles to the publication records of an individual scientist or an institution.

Together with percentiles, it is possible to focus on specific percentile rank classes, and particularly on the assessment of individual scientists, with the Ptop 10% or PPtop 10% indicators (Bornmann, 2013). Both indicators count the number of successful publications, normalised for publication year and subject category. Ptop 10% is the number, and PPtop 10% is the proportion, of publications that belong to the top 10% most highly cited articles. Given the advantages of percentiles and the related PPtop 10%, the Leiden Ranking and SCImago Institutions Rankings have already incorporated these metrics in their global rankings of academic and research institutions.

The JCR have tremendous importance globally, despite a widespread and growing demand for more intelligent use of such metrics. The European Association of Science Editors (EASE) published its own statement on the inappropriate use of IF in 2007, and is one of the signatories of the San Francisco Declaration on Research Assessment (DORA, 2013). EASE issued an official statement recommending "that journal impact factors are used only - and cautiously - for measuring and comparing the influence of entire journals, but not for the assessment of single papers, and certainly not for the assessment of researchers or research programmes" (EASE, 2007).

In July 2008, the International Council for Science (abbreviated as ICSU, after its former name, International Council of Scientific Unions) Committee on Freedom and Responsibility in the Conduct of Science (CFRS) issued a "statement on publication practices and indices and the role of peer review in research assessment", suggesting many possible solutions - for example, considering a limit on the number of publications per year to be taken into
consideration for each scientist, or even penalising scientists for an excessive number of publications per year - for example, more than 20 (ICSU, 2008). This will, of course, vary according to discipline and team research, especially in the medical and bio-medical sciences.

In February 2010, the Deutsche Forschungsgemeinschaft (German Research Foundation) published new guidelines under which only articles, and no bibliometric information on candidates, would be evaluated in all decisions concerning "performance-based funding allocations, postdoctoral qualifications, appointments, or reviewing funding proposals, [where] increasing importance has been given to numerical indicators such as the H-index and the impact factor" (DFG, 2010). This decision follows similar decisions of the Research Excellence Framework (REF) in the UK. The following is what the REF2014 guidelines have to say about journal IF: "No sub-panel will make any use of journal impact factors, rankings, lists or the perceived standing of publishers in assessing the quality of research outputs" (REF, 2014).

Cawkell, sometime Director of Research at ISI, remarked that the SCI, on which the impact factor is based, "would work perfectly if every author meticulously cited only the earlier work related to his theme; if it covered every scientific journal published anywhere in the world; and if it were free from economic constraints" (Editorial, 2009).

Scientists at research institutes, funding agencies and universities have a need to assess the quality and impact of scientific outputs. The question arises as to whether scientific output is measured accurately and evaluated wisely. In order to address this issue, a group of editors and publishers of scholarly journals met during the Annual Meeting of The American Society for Cell Biology (ASCB) in San Francisco, USA, on 16 December 2012. The group developed a set of recommendations, referred to as the San Francisco Declaration on Research Assessment (DORA). DORA focuses on IF, and it is a strong plea to base research assessments of individual researchers, research groups and submitted grant proposals on article-based metrics, combined with peer review, instead of on journal metrics.

DORA has garnered support from thousands of individuals and hundreds of institutions, all of whom have endorsed the document on the DORA website. On 13 May 2013, more than 150 scientists and 75 scientific organizations had signed the declaration. DORA has attracted a multitude of comments and responses, including a statement from Thomson Reuters that reiterates the inappropriateness of IF as a measure of the quality of individual articles, and encourages authors to choose publication venues based on factors not limited to IF (Thomson Reuters, 2013). Nonetheless, it is unlikely that alternative and more appropriate citation metrics will soon gain recognition as research assessment tools outside the community of bibliometricians.

The bibliometric evidence confirms the main thrust of DORA, namely that it is not sensible to use IF or any other journal impact indicator based on citations as a predictor of the potential citations of a particular paper or set of papers. However, this does not mean that journal IF does not make any sense at all. At the level of the journal, the improved IF indicators do provide interesting information about the role, position and perceived quality of a journal, especially if this is combined with qualitative information, such as an analysis of who is citing the journal and in what context, as well as its editorial policies.

Editors generally take the opportunity to analyse their roles in the scientific communication process, and journal indicators can play an informative role. Furthermore, it also makes sense in the context of research evaluation to take into account whether a researcher has been able to publish in a high quality scholarly journal.

Outputs other than research articles will grow in importance in assessing research effectiveness in the future, but the peer-reviewed research paper will remain a central research output that informs research assessment. Focus should be placed primarily on practices relating to research articles published in peer-reviewed journals, but can be extended by recognizing additional products, such as datasets, as important research outputs by funding agencies, academic institutions, journals, organizations that supply metrics, and individual researchers. This step is needed to eliminate the use of journal-based metrics, such as IF, in funding, appointment, and promotion considerations. Research should be assessed on its own merits rather than on the basis of the journal in which the research is published.

There is a need to capitalize on the opportunities provided by online publications, relaxing unnecessary limits on the number of words, figures, and references
in articles, and exploring new indicators of significance, scientists are being ranked by weighting each of their
quality and impact. Many funding agencies, institutions, publications according to the IF of the journal in which it
publishers, and researchers are already encouraging appeared. The misuse of the journal IF is highly
improved practices in research assessment. Such destructive, inviting a gaming of the metric that can
steps are beginning to increase the momentum toward bias journals against publishing important papers in
more sophisticated and meaningful approaches to fields such as social sciences and ecology that are
research evaluation that can now be established and much less cited than others (for example, biomedicine).
adopted by all of the key constituencies involved (Dora, Moreover, it can waste the time of scientists by
2013). overloading highly-cited journals with inappropriate
submissions from researchers who are desperate to
For research assessment, the value and impact of all research outputs (as well as datasets and software) have to be considered in addition to research publications. This includes a broad range of impact measures and qualitative indicators of research impact, such as influence on policy and practice. A variety of journal-based metrics (for example, the 5-year impact factor, EigenFactor, SCImago, the h-index, and editorial and publication times, among others) can provide a richer assessment of journal quality and performance. Such assessments should be based on the scientific content of an article rather than on the publication metrics of the journal in which it may have been published. It is argued that decisions about funding, hiring, tenure, or promotion should be based on scientific content rather than on publication metrics (DORA, 2013).

9. CONCLUSION

The purpose and novelty of the paper is to present arguments for and against the use of the Impact Factor (IF) in a rapidly changing digital world. The IF is generally used as the primary measure with which to compare the scientific output of individuals and institutions. As calculated by Thomson Reuters, IF was originally created as a tool to help librarians identify which journals to purchase, not as a measure of the purported intrinsic scientific quality of research. However, IF has a number of well-documented deficiencies as a tool for assessing research quality. Citation distributions within journals are highly skewed, and the properties of IF are field-specific because it is a composite of multiple, highly diverse article types, including primary research papers and reviews. Moreover, IF can be manipulated by editorial policy.

As a number that is calculated annually for each scientific journal, based on the average number of times its articles are cited over a specified period, IF is intended to be a measure of journal quality rather than an evaluation of individual scientists. However, journals still strive to gain an IF for their publications.

Improved journal impact indicators and metrics solve a number of the problems that have emerged in the use of IF, but all journal impact indicators are ultimately based on a function of the number of citations to the individual articles in a journal. The correlation is, however, too weak to legitimize the application of such journal indicators in place of an assessment of the inherent quality of the articles.

Several modifications to IF have been proposed to address the weaknesses from which it suffers. Possible improvements include the adoption of a 'like-with-like' basis (that is, citations to articles divided by the count of articles only), the adoption of a more appropriate reference interval (the present two-year interval is too short for many disciplines), and the introduction of confidence intervals. Procedures that add value and restrict plagiarism and fraud are needed to maintain quality. The future of quality science communication lies in the hands of editors, in particular, and the professions at large, in general.

The IF has a large, albeit controversial, influence on the way published scientific research is perceived and evaluated. IF is a very useful tool for the evaluation of journals, but it must be used carefully. Considerations include the number of reviews or other types of material published in a journal, variations between disciplines, and item-by-item impacts. A better evaluation system would involve reading each article for quality, but a simple metric is attractive given the difficulties inherent in reconciling peer-review judgments.

When it comes time to evaluate faculty, most reviewers and assessors do not have the time, or do not care to take the time, to read the articles. Even if they did, their judgment would be tempered by observing the comments of those who have cited the work. Fortunately, new full-text capabilities on the web make this more practical to perform.
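To make the arithmetic concrete, the following is a minimal Python sketch of the calculation described above and of two of the proposed remedies: the standard ratio of citations to citable items, a 'like-with-like' variant, and a bootstrap confidence interval. All citation counts are invented for illustration and do not describe any real journal; the deliberately skewed counts also show how the mean that IF reports can sit far above the median article.

```python
import random
import statistics

# Invented citation counts: citations received in year Y by items the journal
# published in years Y-1 and Y-2. Highly skewed, as real distributions are.
article_cites = [0, 0, 1, 1, 2, 2, 3, 4, 6, 40]  # "citable" research articles
front_matter_cites = [5]                          # e.g. an editorial: cited,
                                                  # but often not "citable"

# Classic two-year IF: citations to ALL items divided by the number of
# CITABLE items only -- the numerator/denominator mismatch noted above.
classic_if = (sum(article_cites) + sum(front_matter_cites)) / len(article_cites)

# "Like-with-like" variant: citations to articles over the count of articles.
like_with_like = sum(article_cites) / len(article_cites)

# Skewness: the mean (which IF reports) sits far above the median article.
mean_c = statistics.mean(article_cites)
median_c = statistics.median(article_cites)

# A 95% bootstrap confidence interval for the mean citation rate, one of the
# improvements suggested in the text (resample articles with replacement).
random.seed(0)
boot = sorted(
    statistics.mean(random.choices(article_cites, k=len(article_cites)))
    for _ in range(2000)
)
ci_low, ci_high = boot[49], boot[1949]  # approx. 2.5th and 97.5th percentiles

print(f"classic IF: {classic_if:.2f}, like-with-like: {like_with_like:.2f}")
print(f"mean {mean_c:.2f} vs median {median_c:.1f}")
print(f"95% bootstrap CI for the mean: [{ci_low:.2f}, {ci_high:.2f}]")
```

The same skeleton extends to a longer reference interval (for example, a 5-year window) simply by widening the set of publication years counted in the numerator and denominator.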
Pros and Cons of the Impact Factor in a Rapidly Changing Digital World Journal of Reviews on Global Economics, 2018, Vol. 7 293
ACKNOWLEDGEMENT

The authors wish to thank a referee for very helpful comments and suggestions. For financial support, the first author is grateful to the Australian Research Council and the Ministry of Science and Technology (MOST), Taiwan.

REFERENCES

Abraham, P. (2000), Duplicate and salami publications, Journal of Postgraduate Medicine, 46, 67-69.
Adler, R., J. Ewing and P. Taylor (2008), Joint committee on quantitative assessment of research: Citation statistics. [A report from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS).] Australian Mathematical Society Gazette, 35(3), 166-188. http://www.austms.org.au/Gazette/2008/Jul08/Gazette35(3)Web.pdf#page=24
Adler, R., J. Ewing and P. Taylor (2009), Citation statistics, Statistical Science, 24(1), 1-14. https://doi.org/10.1214/09-STS285
Agrawal, A. (2005), Corruption of journal impact factors, Trends in Ecology and Evolution, 20(4), 157. https://doi.org/10.1016/j.tree.2005.02.002
Anauati, M.V., S. Galiani, and R.H. Gálvez (2014), Quantifying the life cycle of scholarly articles across fields of economic research, p. 23 (12 November 2014), Social Science Research Network (SSRN), SSRN-id2542612. https://doi.org/10.2139/ssrn.2523078
Antelman, K. (2004), Do open-access articles have a greater research impact?, College & Research Libraries News, 65(5), 372-382. https://doi.org/10.5860/crl.65.5.372
Archambault, E., and V. Lariviere (2009), History of the journal impact factor: Contingencies and consequences, Scientometrics, 79, 635-649. https://doi.org/10.1007/s11192-007-2036-x
Arnold, D.N., and K.K. Fowler (2011), Nefarious numbers, Notices of the American Mathematical Society, 58(3), 434-437.
Bergstrom, C.T. (2007), Eigenfactor: Measuring the value and prestige of scholarly journals, College & Research Libraries, 68(5), 314-316. https://doi.org/10.5860/crln.68.5.7804
Berquist, T.H. (2008), Duplicate publishing or journal publication ethics 101, American Journal of Roentgenology, 191, 311-312. https://doi.org/10.2214/AJR.07.1275
Bollen, J., M.A. Rodriguez, and H. Van de Sompel (2006), Journal status, Scientometrics, 69, 669-687. https://doi.org/10.1007/s11192-006-0176-z
Bornmann, L. (2013), A better alternative to the h index, Journal of Informetrics, 7(1), 100. https://doi.org/10.1016/j.joi.2012.09.004
Bornmann, L., and H.D. Daniel (2008), What do citation counts measure? A review of studies on citing behavior, Journal of Documentation, 64(1), 45-80. https://doi.org/10.1108/00220410810844150
Bornmann, L., and W. Marx (2013), How good is research really? Measuring the citation impact of publications with percentiles increases correct assessments and fair comparisons, EMBO Reports, 14(3), 226-230. https://doi.org/10.1038/embor.2013.9
Bornmann, L., I. Nast, and H.D. Daniel (2008), Do editors and referees look for signs of scientific misconduct when reviewing manuscripts? A quantitative content analysis of studies that examined review criteria and reasons for accepting and rejecting manuscripts for publication, Scientometrics, 77, 415-432. https://doi.org/10.1007/s11192-007-1950-2
Brenner, S. (1995), Loose end, Current Biology, 5(5), 568. https://doi.org/10.1016/S0960-9822(95)00109-X
Brodman, E. (1944), Choosing physiology journals, Bulletin of the Medical Library Association, 32(4), 479-483.
Bruce, A. (2013), Impact factor distortions, Science, 340(6134), 787. https://doi.org/10.1126/science.1240319
Butakov, S., and V. Scherbinin (2009), The toolbox for local and global plagiarism detection, Computers & Education, 52, 781-788. https://doi.org/10.1016/j.compedu.2008.12.001
Butler, D. (2008), Free journal-ranking tool enters citation market, Nature, 451, 6. https://doi.org/10.1038/451006a
Callaham, M., and C. McCulloch (2011), Longitudinal trends in the performance of scientific peer reviewers, Annals of Emergency Medicine, 57, 141-148. https://doi.org/10.1016/j.annemergmed.2010.07.027
Chang, C.-L., E. Maasoumi and M. McAleer (2016), Robust ranking of journal quality: An application to economics, Econometric Reviews, 35(1), 50-97. https://doi.org/10.1080/07474938.2014.956639
Chang, C.-L., and M. McAleer (2015), Bibliometric rankings of journals based on the Thomson Reuters citations database, Journal of Reviews on Global Economics, 4, 120-125. https://doi.org/10.6000/1929-7092.2015.04.11
Chang, C.-L., M. McAleer and L. Oxley (2011), Great expectatrics: Great papers, great journals, great econometrics, Econometric Reviews, 30(6), 583-619. https://doi.org/10.1080/07474938.2011.586614
Chang, C.-L., M. McAleer and L. Oxley (2013), Coercive journal self citations, impact factor, journal influence and article influence, Mathematics and Computers in Simulation, 93, 190-197. https://doi.org/10.1016/j.matcom.2013.04.006
Cohen, J.B. (1938), The misuse of statistics, Journal of the American Statistical Association, 33(204), 657-674. https://doi.org/10.1080/01621459.1938.10502344
CWTS (2015), CWTS Journal Indicators, Centre for Science and Technology Studies, Leiden University, The Netherlands. http://www.journalindicators.com/methodology
DFG (2010), Deutsche Forschungsgemeinschaft (DFG), "Quality not quantity" - DFG adopts rules to counter the flood of publications in research, Press Release No. 7, 23 February 2010. http://dfg.de/en/service/press/press_releases/2010/pressemitteilung_nr_07/index.html
DORA (2013), San Francisco Declaration on Research Assessment (DORA). Available at http://www.embo.org/news/research-news/research-news-2013/san-francisco-declaration-on-research-assessment [Accessed 17 June 2015].
Dost, F.N. (2008), Peer review at a crossroads - A case study, Environmental Science and Pollution Research, 15(6), 443-447. https://doi.org/10.1007/s11356-008-0032-1
EASE (2007), EASE statement on inappropriate use of impact factors, European Science Editing, 33(4), 99-100.
Editorial (2005), Not-so-deep impact. Research assessment rests too heavily on the inflated status of the impact factor, Nature, 435(7045), 1003-1004.
Editorial (2009), Editorial, Journal of Biological Physics and Chemistry, 9(4), 139-140.
Errami, M., and H. Garner (2008), A tale of two citations, Nature, 451, 397-399. https://doi.org/10.1038/451397a
Eysenbach, G. (2011), Can tweets predict citations? Metrics of social impact based on Twitter and correlation with traditional metrics of scientific impact, Journal of Medical Internet Research, 13(4), e123. https://doi.org/10.2196/jmir.2012
Fassoulaki, A., K. Papilas, A. Paraskeva, and K. Patris (2002), Impact factor bias and proposed adjustments for its determination, Acta Anaesthesiologica Scandinavica, 46(7), 902-905. https://doi.org/10.1034/j.1399-6576.2002.460723.x
Ganley, E. (2013), A lot can happen in a decade, PLoS Biol, 11(10), e1001689. https://doi.org/10.1371/journal.pbio.1001689
Garfield, E. (1955), Citation indexes to science: A new dimension in documentation through association of ideas, Science, 122(3159), 108-111. https://doi.org/10.1126/science.122.3159.108
Garfield, E. (1972), Citation analysis as a tool in journal evaluation, Science, 178(4060), 471-479. https://doi.org/10.1126/science.178.4060.471
Garfield, E. (1998a), Long-term vs. short-term journal impact: Does it matter?, The Scientist, 12(3), 10-12.
Garfield, E. (1998b), Long-term vs. short-term journal impact (part II), The Scientist, 12(14), 12-13.
Garfield, E. (2006), The history and meaning of the journal impact factor, Journal of the American Medical Association, 295(1), 90-93. https://doi.org/10.1001/jama.295.1.90
Garfield, E., and I.H. Sher (1963), New factors in the evaluation of scientific literature through citation indexing, American Documentation, 14(3), 195-201. https://doi.org/10.1002/asi.5090140304
Gross, P.L.K., and E.M. Gross (1927), College libraries and chemical education, Science, New Series, 66(1713), 385-389.
Gwilym, S.E., M.C. Swan, and H. Giele (2004), One in 13 'original' articles in the Journal of Bone and Joint Surgery are duplicate or fragmented publications, Journal of Bone and Joint Surgery, 86-B(5), 743-745. https://doi.org/10.1302/0301-620X.86B5.14725
Habibzadeh, F., and M.A. Winker (2009), Duplicate publication and plagiarism: Causes and cures, Notfall Rettungsmed, 12, 415-418. https://doi.org/10.1007/s10049-009-1229-7
Hames, I. (2007), Peer Review and Manuscript Management in Scientific Journals: Guidelines for Good Practice, Wiley. https://doi.org/10.1002/9780470750803
Hansen, H.B., and J.H. Henriksen (1997), How well does journal "impact" work in the assessment of papers on clinical physiology and nuclear medicine?, Clinical Physiology, 17(4), 409-418. https://doi.org/10.1046/j.1365-2281.1997.04545.x
Hernan, M.A. (2009), Impact factor: A call to reason, Epidemiology, 20, 317-318. https://doi.org/10.1097/EDE.0b013e31819ed4a6
ICSU (2008), Publication practices and indices and the role of peer review in research assessment (July 2008), International Council for Science. http://www.icsu.org/publications/cfrs-statements/publication-practices-peer-review/statement-publication-practices-and-indices-and-the-role-of-peer-review-in-research-assessment-july-2008
ISI (2002), Journal self-citation in the journal citation reports - Science Edition (2002), The Institute for Scientific Information (ISI), Thomson Reuters. http://wokinfo.com/essays/journal-self-citation-jcr/
Jeacle, I., and C. Carter (2011), In TripAdvisor we trust: Rankings, calculative regimes and abstract systems, Accounting, Organizations and Society, 36(4-5), 293-309. https://doi.org/10.1016/j.aos.2011.04.002
Johnson, C. (2006), Repetitive, duplicate, and redundant publications: A review for authors and readers, Journal of Manipulative and Physiological Therapeutics, 29(7), 505-509. https://doi.org/10.1016/j.jmpt.2006.07.001
Knothe, G. (2006), Comparative citation analysis of duplicate or highly related publications, Journal of the American Society for Information Science and Technology, 57(13), 1830-1839. https://doi.org/10.1002/asi.20409
Kodrzycki, Y.K., and P.K. Yu (2006), New approaches to ranking economics journals, B.E. Journal of Economic Analysis & Policy, 5(1). https://doi.org/10.2202/1538-0645.1520
Kurtz, M.J., G. Eichhorn, A. Accomazzi, C. Grant, M. Demleither, E. Henneken, S.S. Murray, N. Martimbeau, and B. Elwell (2005), The effect of use and access on citations, Information Processing & Management, 41(6), 1395-1402. https://doi.org/10.1016/j.ipm.2005.03.010
Laband, D.N. (1990), Is there value-added from the review process in economics? Preliminary evidence from authors, Quarterly Journal of Economics, 105, 341-352. https://doi.org/10.2307/2937790
Lander, E.S., L.M. Linton, B. Birren, C. Nusbaum, M.C. Zody, J. Baldwin, K. Devon, K. Dewar, M. Doyle, W. FitzHugh, R. Funke, D. Gage, K. Harris, A. Heaford, J. Howland, et al. (2001), Initial sequencing and analysis of the human genome, Nature, 409(6822), 860-921. https://doi.org/10.1038/35057062
Larivière, V., and Y. Gingras (2010), The impact factor's Matthew effect: A natural experiment in bibliometrics, Journal of the American Society for Information Science and Technology, 61, 424-427.
Lawrence, S. (2001), Free online availability substantially increases a paper's impact, Nature, 411, 521. https://doi.org/10.1038/35079151
Liebowitz, S.J., and J.P. Palmer (1984), Assessing the relative impacts of economics journals, Journal of Economic Literature, 22(1), 77-88.
Liu, S.V. (2007), Hwang's retracted publication still contributes to Science's impact factor, Sci. Ethics, 2, 44-45.
Lowry, O.H., N.J. Rosebrough, A.L. Farr, and R.J. Randall (1951), Protein measurement with the folin phenol reagent, Journal of Biological Chemistry, 193(1), 265-275.
Lozano, G.A., V. Larivière, and Y. Gingras (2012), The weakening relationship between the impact factor and papers' citations in the digital age, Journal of the American Society for Information Science and Technology, 63(11), 2140-2145. https://doi.org/10.1002/asi.22731
MacRoberts, M.H., and B.R. MacRoberts (1989), Problems of citation analysis: A critical review, Journal of the Association for Information Science and Technology, 40(5), 342-349. https://doi.org/10.1002/(SICI)1097-4571(198909)40:5<342::AID-ASI7>3.0.CO;2-U
Malhotra, V., and E. Marder (2015), Peer review: The pleasure of publishing, eLife, 4, e05770. https://doi.org/10.7554/eLife.05770
Martin, B.R. (1996), The use of multiple indicators in the assessment of basic research, Scientometrics, 36, 343-362. https://doi.org/10.1007/BF02129599
Marx, W., and L. Bornmann (2013), Journal impact factor: "The poor man's citation analysis" and alternative approaches, European Science Editing, 39(3), 62-63.
Maslow, S., and S. Redner (2008), Promise and pitfalls of extending Google's PageRank algorithm to citation networks, Journal of Neuroscience, 28(44), 11103-11105. https://doi.org/10.1523/JNEUROSCI.0002-08.2008
Moed, H.F., Th.N. Van Leeuwen, and J. Reedijk (1996), A critical analysis of the journal impact factors of Angewandte Chemie
and the Journal of the American Chemical Society: Inaccuracies in published impact factors based on overall citations only, Scientometrics, 37, 105-116. https://doi.org/10.1007/BF02093487
Moustafa, K. (2015), The disaster of the impact factor, Science and Engineering Ethics, 21(1), 139-142. https://doi.org/10.1007/s11948-014-9517-0
van Nierop, E. (2009), Why do statistics journals have low impact factors?, Statistica Neerlandica, 63(1), 52-62. https://doi.org/10.1111/j.1467-9574.2008.00408.x
Opthof, T. (1999), Submission, acceptance rate, rapid review system and impact factor, Cardiovasc Research, 41, 1-4. https://doi.org/10.1016/S0008-6363(99)00158-3
Palacios-Huerta, I., and O. Volij (2004), The measurement of intellectual influence, Econometrica, 72(3), 963-977. https://doi.org/10.1111/j.1468-0262.2004.00519.x
Perneger, T.V. (2004), Relation between online "hit counts" and subsequent citations: Prospective study of research papers in the BMJ, BMJ, 329(7465), 546-547. https://doi.org/10.1136/bmj.329.7465.546
Pinski, G., and F. Narin (1976), Citation influence for journal aggregates of scientific publications: Theory with application to literature of physics, Information Processing & Management, 12(5), 297-312. https://doi.org/10.1016/0306-4573(76)90048-0
PLoS Medicine Editors (2006), The impact factor game (6 June 2006), PLoS Medicine, 3(6), e291. https://doi.org/10.1371/journal.pmed.0030291
Postma, E. (2007), Inflated impact factors? The true impact of evolutionary papers in non-evolutionary journals, PLoS ONE, 2(10), e999. https://doi.org/10.1371/journal.pone.0000999
Pudovkin, A.I., and E. Garfield (2004), Rank-normalized impact factor: A way to compare journal performance across subject categories, presented at the American Society for Information Science and Technology Annual Meeting, Providence, RI, 17 November 2004. Published in Proceedings of the 67th Annual Meeting of the American Society for Information Science & Technology, 41, pp. 507-515. https://doi.org/10.1002/meet.1450410159
REF (2014), Research Excellence Framework (REF), REF2014 guidelines. http://www.ref.ac.uk/about/guidance/
Rieseberg, L., and H. Smith (2008), Editorial and retrospective, Molecular Ecology, 17, 501-513. https://doi.org/10.1111/j.1365-294X.2007.03660.x
Rieseberg, L., T. Vines, and N. Kane (2011), Editorial - 20 years of molecular ecology, Molecular Ecology, 20, 1-21. https://doi.org/10.1111/j.1365-294X.2010.04955.x
Roberts, J. (2009), An author's guide to publication ethics: A review of emerging standards in biomedical journals, Headache, 49, 578-589. https://doi.org/10.1111/j.1526-4610.2009.01379.x
Rossner, M., H. Van Epps, and E. Hill (2007), Show me the data, Journal of Cell Biology, 179(6), 1091-1092. https://doi.org/10.1083/jcb.200711140
Saunders, R., and J. Savulescu (2008), Research ethics and lessons from Hwanggate: What can we learn from the Korean cloning fraud?, J Med Ethics, 34, 214-221. https://doi.org/10.1136/jme.2007.023721
Schubert, A., and T. Braun (1986), Relative indicators and relational charts for comparative assessment of publication output and citation impact, Scientometrics, 9(5-6), 281-291. https://doi.org/10.1007/BF02017249
Schutte, H.K., and J.G. Svec (2007), Reaction of Folia Phoniatrica et Logopaedica on the current trend of impact factor measures, Folia Phoniatrica et Logopaedica, 59(6), 281-285. https://doi.org/10.1159/000108334
Seglen, P.O. (1992), Journal impact: How representative is the journal impact factor?, Research Evaluation, 2, 143-149. https://doi.org/10.1093/rev/2.3.143
Seglen, P.O. (1995), Siteringer og tidsskrift-impakt som kvalitetsmål for forskning [Citations and journal impact as quality measures for research], Klinisk Kemi i Norden, 7, 59-63.
Seglen, P.O. (1997), Why the impact factor of journals should not be used for evaluating research, BMJ, 314(7079), 498-502. https://doi.org/10.1136/bmj.314.7079.497
Smith, R. (1997), Journal accused of manipulating impact factor, BMJ, 314(7079), 461. https://doi.org/10.1136/bmj.314.7079.461d
Southern, E.M. (1975), Detection of specific sequences among DNA fragments separated by gel-electrophoresis, J Mol Biol, 98, 503-517. https://doi.org/10.1016/S0022-2836(75)80083-0
Thelwall, M., S. Haustein, V. Larivière, and C.R. Sugimoto (2013), Do Altmetrics work? Twitter and ten other social web services, PLoS ONE, 8(5), e64841. https://doi.org/10.1371/journal.pone.0064841
Thomson Reuters (2013), Thomson Reuters statement regarding the San Francisco Declaration on Research Assessment. Available at http://researchanalytics.thomsonreuters.com/statement_re_sfdra/
Thomson Reuters (2015), Journal Citation Reports. http://wokinfo.com/products_tools/analytical/jcr/
Tobin, M.J. (2002), AJRCCM's policy on duplicate publication, American Journal of Respiratory and Critical Care Medicine, 166, 433-437. https://doi.org/10.1164/rccm.2205022
Vanclay, J.K. (2012), Impact factor: Outdated artefact or stepping-stone to journal certification?, Scientometrics, 92(2), 211-238. https://doi.org/10.1007/s11192-011-0561-0
Vaughan, L., and D. Shaw (2003), Bibliographic and web citations: What is the difference?, Journal of the Association for Information Science and Technology, 54, 1313-1322. https://doi.org/10.1002/asi.10338
Vinkler, P. (2010), The Evaluation of Research by Scientometric Indicators, Oxford, UK: Chandos. https://doi.org/10.1533/9781780630250
van Wesel, M. (2016), Evaluation by citation: Trends in publication behavior, evaluation criteria, and the strive for high impact publications, Science and Engineering Ethics, 22(1), 199-225. https://doi.org/10.1007/s11948-015-9638-0
Wager, E., S. Fiack, C. Graf, A. Robinson, and I. Rowlands (2009), Science journal editors' views on publication ethics: Results of an international survey, Journal of Medical Ethics, 35, 348-353. https://doi.org/10.1136/jme.2008.028324
Waltman, L., and N.J. Van Eck (2013a), Source normalized indicators of citation impact: An overview of different approaches and an empirical comparison, Scientometrics, 96(3), 699-716. https://doi.org/10.1007/s11192-012-0913-4
Waltman, L., N.J. Van Eck, T.N. Van Leeuwen, and M.S. Visser (2013b), Some modifications to the SNIP journal impact indicator, Journal of Informetrics, 7(2), 272-285. https://doi.org/10.1016/j.joi.2012.11.011
Weller, A.C. (2001), Editorial Peer Review: Its Strengths and Weaknesses, American Society for Information Science and Technology Monograph Series, Information Today, ISBN 9781573871006, pp. 342.
Wilhite, A.W., and E.A. Fong (2012), Coercive citation in academic publishing, Science, 335(6068), 542-543. https://doi.org/10.1126/science.1212540
Wouters, P. (2013a), Bibliometrics of individual researchers - The debate in Berlin, Citation Culture, page 3, 3 October 2013. https://citationculture.wordpress.com/2013/10/03/bibliometrics-of-individual-researchers-the-debate-in-berlin/
Wouters, P. (2013b), The evidence on the journal impact factor, Citation Culture, page 2, 3 June 2013. https://citationculture.wordpress.com/2013/06/03/the-evidence-on-the-journal-impact-factor/
Received on 07-05-2018; Accepted on 25-05-2018; Published on 22-06-2018

DOI: https://doi.org/10.6000/1929-7092.2018.07.25

© 2018 McAleer et al.; Licensee Lifescience Global.

This is an open access article licensed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted, non-commercial use, distribution and reproduction in any medium, provided the work is properly cited.