MeSH Now: automatic MeSH indexing at PubMed scale via learning to rank
DOI 10.1186/s13326-017-0123-3
Abstract
Background: MeSH indexing is the task of assigning relevant MeSH terms to scholarly publications based on a manual reading by human indexers. The task is highly important for improving literature retrieval and many other scientific investigations in biomedical research. Unfortunately, given its manual nature, the process of MeSH indexing is both time-consuming (new articles are typically not indexed until 2 or 3 months after they appear) and costly (approximately ten dollars per article). In response, automatic indexing by computers has been previously proposed and attempted, but it remains challenging. In order to advance the state of the art in automatic MeSH indexing, a community-wide shared task called BioASQ was recently organized.

Methods: We propose MeSH Now, an integrated approach that first uses multiple strategies to generate a combined list of candidate MeSH terms for a target article. Through a novel learning-to-rank framework, MeSH Now then ranks the list of candidate terms based on their relevance to the target article. Finally, MeSH Now selects the highest-ranked MeSH terms via a post-processing module.

Results: We assessed MeSH Now on two separate benchmarking datasets using the traditional precision, recall and F1-score metrics. In both evaluations, MeSH Now consistently achieved F1-scores above 0.60 (0.610 and 0.612, respectively). Furthermore, additional experiments show that MeSH Now can be optimized through parallel computing in order to process MEDLINE documents on a large scale.

Conclusions: We conclude that MeSH Now is a robust approach with state-of-the-art performance for automatic MeSH indexing, and that MeSH Now is capable of processing PubMed-scale document collections within a reasonable time frame.

Availability: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/MeSHNow/
… visualization of search results [5, 6], and to help distinguish between publication authors with identical names [7, 8]. Another major use of MeSH indexing is in biomedical text mining, where it has been applied to problems such as document summarization [9], document clustering [10], and word sense disambiguation [11]. MeSH indexing also serves several key roles in citation analysis, from identifying emerging research trends [12, 13] to measuring journal similarity [14] and characterizing the research profile of an individual researcher, institute or journal [15]. In the era of evidence-based practice, MeSH is becoming increasingly important in assessing and training the literature search skills of healthcare professionals [16, 17], as well as in assisting undergraduate education in the biological sciences [18]. Finally, much bioinformatics research, such as gene expression data analysis [19, 20], greatly benefits from MeSH indexing [21–25].

Like many manual annotation projects [26–30], MeSH indexing is a labour-intensive process. As shown in [3, 31], it takes an average of 2 to 3 months for an article to be manually indexed with relevant MeSH terms after it first enters PubMed. In response, many automated systems for assisting MeSH indexing have been proposed. In general, most existing methods are based on one of the following techniques: i) pattern matching, ii) text classification, iii) k-Nearest Neighbours, iv) learning-to-rank, or v) a combination of multiple techniques. Pattern-matching methods [32] search for exact or approximate matches of MeSH terms in free text. Automatic MeSH indexing can also be regarded as a multi-class text classification problem in which each MeSH term represents a distinct class label; accordingly, many multi-label text classification methods have been proposed, such as neural networks [33], Support Vector Machines (SVM) [34, 35], Inductive Logic Programming [36], naïve Bayes with an optimal training set [37], stochastic gradient descent [38], and meta-learning [39]. While the pattern-matching and text classification methods use only the information in the MeSH thesaurus and the document itself, the k-Nearest Neighbours (k-NN) approach takes advantage of the manual annotations of documents similar to the target document, e.g. [40, 41]. Additional information, such as citations, can also be utilized for automatic MeSH indexing. For example, Delbecque and Zweigenbaum [42] investigated computing neighbour documents based on cited articles and cited authors. More recently, Huang et al. [3] reported a novel approach based on learning-to-rank algorithms [43]. This approach has been shown to be highly successful in the recent BioASQ challenge evaluations [44–46] and has also been adopted by many others [47, 48]. Finally, many methods attempt to combine the results of different approaches [49, 50]. For instance, the current production system for MeSH indexing at the NLM, the Medical Text Indexer (MTI), is a hybrid system that combines pattern-matching and k-NN results [51] via manually developed rules, and it has continued to be improved over the years [52, 53]. The method proposed in this work is also a hybrid system, but unlike MTI, which uses machine learning only to predict a small set of MeSH terms, it combines the individual results and ranks the entire set of recommendations through machine learning instead of heuristic rules.

Despite these efforts, automatic MeSH indexing remains a challenging task: the current state-of-the-art performance remains at about 0.6 in F-measure [54]. Several factors contribute to this performance bottleneck. First, since each PubMed article can be assigned multiple MeSH terms, i.e. class labels, the task of automatic MeSH indexing can be seen as a multi-class …
… terms in those neighbour documents, which can be harmful to the accuracy of our approach. Moreover, word frequencies differ between older and more recent articles, and they are closely related to the similarity score of two articles. Therefore, we built our index using only articles that were assigned MeSH terms after 2009, and we retrieved the neighbour documents from this new index instead of retrieving similar documents from the whole of PubMed. When building our document index for the PubMed Related Articles algorithm [56], we also make sure that all annotated MeSH terms are removed, so that they are not used in the computation of the neighbour documents. In other words, the similarity between two documents is based solely on the words they have in common.

The parameter k was fixed (k = 20) in [3], meaning that the same number of neighbours is included for every target article. However, we observed that some articles may have only a few very similar documents. We therefore adjust the parameter k dynamically between 10 and 40 in this work, according to the similarity scores of the neighbours: the smaller the average similarity score of the neighbours, the fewer neighbours are used. Once the k nearest neighbour documents are retrieved, we collect all of the unique MeSH terms associated with those neighbour documents. Note that we considered only the main headings and removed any subheadings attached to them.

Input source #2: multi-label text classification

Motivated by [57], we implemented a multi-label text classification approach in which we treat each MeSH concept as a label and build a binary classifier accordingly. More specifically, we first train an individual classification model for each of the 20,000 most frequently indexed MeSH terms, as the remaining ones are rarely used in indexing. We then apply these models to the new article and add the positively classified MeSH concepts as candidates to the initial list. We also keep the associated numerical prediction scores and use them as features in the next step.

Our implementation is based on cost-sensitive SVM classifiers [58] with the Huber loss function [59]. Cost-sensitive SVMs have been shown to be a good solution for dealing with imbalanced and noisy data in biomedical documents [60]. Let $C_+$ denote the higher misclassification cost of the positive class and $C_-$ the lower misclassification cost of the negative class; the cost function is then formulated as:

$$\frac{\lambda}{2}\lVert w\rVert^2 + C_+\sum_{i:\,y_i=1} h\big(y_i(\theta + w\cdot x_i)\big) + C_-\sum_{i:\,y_i=-1} h\big(y_i(\theta + w\cdot x_i)\big)$$

where MeSH terms are treated as class labels C in the classification, $x_i$ is a document of a given class (i.e. one assigned a specific MeSH term), $\lambda$ is a regularization parameter, $w$ is a vector of feature weights, and $\theta$ is a threshold. The function $h$ is the modified Huber loss function and has the form:

$$h(z) = \begin{cases} -4z, & z \le -1 \\ (1-z)^2, & -1 < z < 1 \\ 0, & z \ge 1 \end{cases}$$

We can choose $C_+$ to be greater than $C_-$ to overcome the dominance of negative points in the decision process (here we set $C_+ = rC_-$ with the ratio r set to 1.5). To train these 20,000 classifiers, we used the MEDLINE articles that were indexed with MeSH terms between January 2009 and March 2014.

Input source #3: MTI results

MTI is used as one of the baselines in the BioASQ Task; it primarily uses MetaMap to map phrases in the text to UMLS (Unified Medical Language System) concepts [61]. We thus add all MeSH terms predicted by MTI as candidates and obtain feature vectors for those MeSH terms. This is useful because the MTI results can contain correct MeSH terms not found by the other two methods.

Learning to rank

Once the initial list of candidate MeSH terms from all three sources is obtained, we approach the task of MeSH indexing as a ranking problem. In our previous work, we trained the ranking function with ListNet [62], which sorts the results based on a list of scores. In this work we evaluated several other learning-to-rank algorithms [43] on the BioASQ test dataset, including MART [63], RankNet [64], Coordinate Ascent [65], AdaRank [66], and LambdaMART, all of which are available in RankLib v2.2, and found that LambdaMART achieved the best performance.

LambdaMART [67] is a combination of MART and LambdaRank: the MART algorithm can be viewed as a generalization of logistic regression [63], and LambdaRank is a method for optimizing arbitrary information retrieval measures [68]. To train such a model, LambdaMART uses gradient boosting to optimize a ranking cost function in which the base learners are limited-depth regression trees. New trees are added to the ensemble sequentially, each chosen to best account for the remaining regression error of the training samples; that is, each new tree greedily minimizes the cost function. LambdaMART uses MART with specified gradients and Newton's approximation. The main steps of LambdaMART [67] can be summarized as follows.
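The sketch below gives a simplified, self-contained version of this training loop in Python. It uses plain RankNet-style pairwise gradients in place of the metric-weighted lambda-gradients and omits the per-leaf Newton step, so it illustrates the structure of the algorithm rather than reproducing the exact listing in [67]; all function names are ours, not MeSH Now's.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def pairwise_gradients(scores, query_ids, labels, sigma=1.0):
    """Simplified RankNet-style gradients, grouped by query.

    A full LambdaMART implementation would additionally weight each pair
    by the change in the target IR metric (e.g. NDCG) when the two
    documents are swapped [68]."""
    grads = np.zeros_like(scores)
    for q in np.unique(query_ids):
        idx = np.where(query_ids == q)[0]
        for i in idx:
            for j in idx:
                if labels[i] > labels[j]:
                    # push the relevant document up and the other one down
                    rho = 1.0 / (1.0 + np.exp(sigma * (scores[i] - scores[j])))
                    grads[i] += sigma * rho
                    grads[j] -= sigma * rho
    return grads

def train_boosted_ranker(X, query_ids, labels, n_trees=500, max_depth=4, lr=0.1):
    """Gradient boosting: each round fits a limited-depth regression tree
    to the current gradients and adds it to the ensemble."""
    scores = np.zeros(X.shape[0])
    ensemble = []
    for _ in range(n_trees):
        grads = pairwise_gradients(scores, query_ids, labels)
        tree = DecisionTreeRegressor(max_depth=max_depth)  # limited-depth base learner
        tree.fit(X, grads)
        scores += lr * tree.predict(X)  # update the model scores
        ensemble.append(tree)
    return ensemble

def rank_scores(ensemble, X, lr=0.1):
    return lr * sum(tree.predict(X) for tree in ensemble)
```

In MeSH Now itself this component is provided by RankLib's LambdaMART implementation rather than custom code.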
First, we obtained a training set consisting of biomedical articles with human-assigned MeSH terms from MEDLINE. For each article, we obtain an initial list of MeSH terms from its neighbour documents. Each MeSH term is then represented as a feature vector. For the list of MeSH terms from the neighbour documents, denoted by {M1, M2, …, MN}, where N is the number of feature vectors and Mi is the i-th feature vector, we obtain a corresponding list of labels {y1, y2, …, yN}, where yi ∈ {0, 1} is the i-th class label: yi = 1 if the MeSH term was manually assigned to the target article by expert indexers at the NLM, and yi = 0 otherwise.

BioASQ provided approximately 12.6 million PubMed documents for system development. Since all PubMed documents can be used as training data, we randomly selected a set of 5,000 MEDLINE documents from the list of journals provided by BioASQ for training and optimizing our learning-to-rank algorithm.

Features

We reused many features developed previously: neighbourhood features, word unigram/bigram overlap features, translation probability features [69], query-likelihood features [70, 71], and synonym features.

For the neighbourhood features, we calculate both the neighbourhood frequency – the number of times the MeSH term appears in the neighbours – and the neighbourhood similarity – the sum of the similarity scores of these neighbours.

For the translation probability features, we use the IBM translation model [69], with the title and abstract as the source language and MeSH terms as the target language. We then use an EM-based algorithm to train the translation probabilities.

For the query-likelihood features, we treat each MeSH term as the query (Q) and the title and abstract as the document, and use two kinds of query models – the classic BM25 model [70] and a translation-based query model [71] – to calculate the probability that a MeSH term should be assigned to the article.

In this work, we added a new domain-specific knowledge feature: a binary feature indicating whether a candidate term is observed by MTI, which relies heavily on the domain-specific UMLS Metathesaurus [72] to generate its results.
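To make the representation concrete, the sketch below assembles one candidate term's feature vector from precomputed inputs. The data layout, dictionary keys and example values are illustrative assumptions for this sketch, not MeSH Now's actual code.

```python
def build_feature_vector(term, neighbours, scores, mti_terms):
    """Assemble one candidate MeSH term's feature vector.

    neighbours: list of (similarity, mesh_term_set) pairs from the k-NN step;
    scores: dict of precomputed per-term feature values (keys are illustrative);
    mti_terms: set of MeSH terms suggested by MTI for this article.
    """
    hits = [sim for sim, terms in neighbours if term in terms]
    return [
        len(hits),                             # neighbourhood frequency
        sum(hits),                             # neighbourhood similarity
        scores.get("unigram_overlap", 0.0),    # word unigram overlap
        scores.get("bigram_overlap", 0.0),     # word bigram overlap
        scores.get("translation_prob", 0.0),   # IBM translation model [69]
        scores.get("bm25", 0.0),               # classic BM25 query model [70]
        scores.get("translation_query", 0.0),  # translation-based query model [71]
        scores.get("svm_score", 0.0),          # binary classifier prediction score
        1.0 if term in mti_terms else 0.0,     # MTI binary knowledge feature
    ]

# Example: a term seen in two neighbour documents and also suggested by MTI.
features = build_feature_vector(
    "Humans",
    neighbours=[(0.83, {"Humans", "Adult"}), (0.55, {"Humans", "Mice"})],
    scores={"bm25": 7.1, "svm_score": 0.42},
    mti_terms={"Humans", "Liver"},
)
```

Each candidate term from the three input sources is converted to such a vector before being passed to the learning-to-rank step.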
To compute the average length of documents and the document frequency of each word, a set of approximately 60,000 PubMed documents was assembled, sampled from recent publications in the BioASQ Select Journal List. The translation model and the background language model were trained on this data set accordingly.

The intuition behind Formula (1) is that if the (i+1)-th MeSH term was assigned a score much smaller than the i-th MeSH term, the MeSH terms ranked lower than i should not be considered relevant to the target article. Formula (1) also accounts for the fact that the difference between lower-ranked MeSH terms is subtler than the difference between higher-ranked MeSH terms. The parameter λ was empirically set to 0.3 in this research; it can be tuned to generate predictions favouring either recall or precision.

… previous and current versions of MTI ("MTI 2011" and "MTI 2014"). It should be noted that here we used MeSH 2010 and retrieved neighbour documents published before the articles in NLM2007, and that our learning-to-rank model was trained with documents published before the articles in NLM2007, because newly published articles are assigned new MeSH terms that are not available in NLM2007. We can see that MeSH Now improves significantly over our previous method. We also notice that the results of MTI 2014 are much better than those of its previous version. Both MTI 2014 and the text classification results (the results of input source #2) contribute to MeSH Now's performance, with MTI generating better results than text classification.

Table 2 shows the results on the BioASQ5000 dataset. For comparison, we added the results of MTI First Line (MTIFL_2014) and MTI Default (MTIDEF_2014), both of which were used as baselines in the BioASQ challenge. This further verifies that our new approach outperforms existing methods.

Table 2 Evaluation results on the BioASQ5000 test set
Methods | Precision | Recall | F1
Huang et al. 2011 [3] | 0.357 | 0.701 | 0.473
Text Classification | 0.689 | 0.400 | 0.506
MTIFL – 2014 | 0.621 | 0.517 | 0.564
MTI – 2014 | 0.587 | 0.559 | 0.573
MeSH Now | 0.612 | 0.608 | 0.610
Bold values indicate the best result in each column.

System throughput

The time complexity of large-scale automatic indexing is crucial to real-world systems but has rarely been discussed in the past. In Table 3, we present the average processing time of each step of our method on BioASQ5000 using a single computer. We can see that text classification appears to be a bottleneck, given the large number of classifiers (20,000). However, this step can be performed in parallel so that the overall time is greatly reduced. For example, our current system takes approximately 9 h to process 700,000 articles on a computer cluster where 500 jobs can run concurrently.

Table 3 Processing time analysis for different steps
Key steps in MeSH Now | Average time per document (ms)
Obtaining candidate terms via k-NN | 1890.82
Obtaining candidate terms via MTI | 570.33
Obtaining classification results from each binary text classifier | 25.63
Learning to rank | 103.86
Post-processing and list pruning | 1.85

Discussion and conclusions

To better understand the differences between the computer-predicted and human-indexed results, we conducted an error analysis of the MeSH Now results on the BioASQ5000 dataset. First, we found that the predicted MeSH terms with the lowest performance belong to MeSH Category E, "Analytical, Diagnostic and Therapeutic Techniques and Equipment", especially the "Statistics as Topic" subcategory, with terms such as "Chi-Square Distribution", "Survival Analysis", etc. This is most likely due to the lack of sufficient positive instances in the training set (i.e. the numbers of these indexed terms in the gold standard are relatively small). On the other hand, the most frequently mispredicted MeSH terms are Check Tags (e.g. "Male", "Female", "Adult", "Young Adult"), even though the F1 scores of these individual Check Tags are reasonably high (most are above the average). Because of their prevalence in the indexing results, however, improving their prediction is critical for increasing the overall performance.

As mentioned before, MeSH Now was developed in 2014 based on the learning-to-rank framework we first proposed in 2010 [3] for automatic MeSH indexing. At the same time, our ranking framework was adopted by several other state-of-the-art systems such as MeSHLabeler [73] and DeepMeSH [74]. MeSHLabeler is very similar to MeSH Now, the major difference being that it uses a machine learning model, rather than heuristics, to predict the number of MeSH terms. DeepMeSH further incorporates deep semantic representation into MeSHLabeler for improved performance (0.63 in the latest BioASQ challenge in 2016).

There are some limitations and remaining challenges in this work on the automatic MeSH indexing task. First, our previous work revealed that 85% of the gold-standard MeSH annotations should be present in the candidate list based on the nearest 20 neighbours. However, our current best recall is below 65%, suggesting there is still room to improve the learning-to-rank algorithm so that relevant MeSH terms are promoted higher in the ranked list. Second, our current binary text classification results are lower than previously reported [35], partly because we simply used the same training data for all classifiers, which is quite imbalanced. We believe that the performance of MeSH Now could be further improved if better text classification results were available to be integrated. Finally, we are interested in exploring opportunities for using MeSH Now in practical applications.
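As a concrete illustration of the list-pruning step described earlier, the sketch below cuts a ranked candidate list at the first large score gap. Formula (1) itself is not reproduced in this text, so the relative-drop test used here is an assumption; only the parameter value λ = 0.3 follows the paper's empirical setting.

```python
def prune_ranked_list(ranked, lam=0.3):
    """Cut a descending-scored list of (term, score) pairs at the first
    large gap: if the (i+1)-th score drops by more than lam times the
    i-th score, all lower-ranked terms are considered irrelevant.
    This gap test is an assumed reading of Formula (1)."""
    if not ranked:
        return []
    kept = [ranked[0]]
    for (_, prev_score), (term, score) in zip(ranked, ranked[1:]):
        if prev_score - score > lam * prev_score:
            break  # everything below this rank is dropped
        kept.append((term, score))
    return kept

# A sharp drop after the third term truncates the list there:
# prune_ranked_list([("Humans", 0.95), ("Mice", 0.90),
#                    ("Liver", 0.85), ("Adult", 0.30)])
# -> [("Humans", 0.95), ("Mice", 0.90), ("Liver", 0.85)]
```

Lowering lam prunes more aggressively (favouring precision), while raising it keeps longer lists (favouring recall), matching the tuning behaviour described above.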
References

38. Wilbur WJ, Kim W. Stochastic gradient descent and the prediction of MeSH for PubMed records. In: AMIA Annual Symposium Proceedings. 2014.
39. Jimeno-Yepes A, Mork JG, Demner-Fushman D, Aronson AR. A one-size-fits-all indexing method does not exist: automatic selection based on meta-learning. JCSE. 2012;6(2):151–60.
40. Yang Y, Chute CG. An application of Expert Network to clinical classification and MEDLINE indexing. In: The 18th Annual Symposium on Computer Applications in Medical Care. Bethesda: American Medical Informatics Association; 1994. pp. 157–161.
41. Trieschnigg D, Pezik P, Lee V, De Jong F, Kraaij W, Rebholz-Schuhmann D. MeSH Up: effective MeSH text classification for improved document retrieval. Bioinformatics. 2009;25(11):1412–8.
42. Delbecque T, Zweigenbaum P. Using co-authoring and cross-referencing information for MEDLINE indexing. In: AMIA Annual Symposium Proceedings. Washington DC: American Medical Informatics Association; 2010. pp. 147–151.
43. Liu T-Y. Learning to rank for information retrieval. Found Trends Inf Retr. 2009;3(3):225–331.
44. Mao Y, Wei C-H, Lu Z. NCBI at the 2014 BioASQ challenge task: large-scale biomedical semantic indexing and question answering. In: Proceedings of Question Answering Lab at CLEF. 2014.
45. Balikas G, Partalas I, Ngomo A-CN, Krithara A, Gaussier E, Paliouras G. Results of the BioASQ Track of the Question Answering Lab at CLEF 2014. In: Proceedings of Question Answering Lab at CLEF. 2014. pp. 1181–1193.
46. Tsatsaronis G, Balikas G, Malakasiotis P, Partalas I, Zschunke M, Alvers MR, Weissenborn D, Krithara A, Petridis S, Polychronopoulos D. An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition. BMC Bioinformatics. 2015;16(1):138.
47. Liu K, Wu J, Peng S, Zhai C, Zhu S. The Fudan-UIUC participation in the BioASQ Challenge Task 2a: the Antinomyra system. In: Working Notes for CLEF 2014 Conference. 2014.
48. Kavuluru R, Lu Y. Leveraging output term co-occurrence frequencies and latent associations in predicting medical subject headings. Data Knowl Eng. 2014;94:189–201.
49. Mork JG, Jimeno-Yepes A, Aronson AR. The NLM Medical Text Indexer system for indexing biomedical literature. In: BioASQ@CLEF. 2013.
50. Ruch P. Automatic assignment of biomedical categories: toward a generic approach. Bioinformatics. 2006;22(6):658–64.
51. Aronson AR, Mork JG, Gay CW, Humphrey SM, Rogers WJ. The NLM Indexing Initiative's Medical Text Indexer. Medinfo. 2004;11(Pt 1):268–72.
52. Névéol A, Shooshan SE, Humphrey SM, Mork JG, Aronson AR. A recent advance in the automatic indexing of the biomedical literature. J Biomed Inform. 2009;42(5):814–23.
53. Mork JG, Demner-Fushman D, Schmidt SC, Aronson AR. Recent enhancements to the NLM Medical Text Indexer. In: Working Notes for CLEF 2014 Conference, Sheffield, UK. 2014. pp. 1328–36.
54. Partalas I, Gaussier É, Ngomo A-CN. Results of the First BioASQ Workshop. In: BioASQ@CLEF. 2013. pp. 1–8.
55. Funk ME, Reid CA. Indexing consistency in MEDLINE. Bull Med Libr Assoc. 1983;71(2):176.
56. Lin J, Wilbur WJ. PubMed related articles: a probabilistic topic-based model for content similarity. BMC Bioinformatics. 2007;8(1):423.
57. Tang L, Rajan S, Narayanan VK. Large scale multi-label classification via metalabeler. In: Proceedings of the 18th International Conference on World Wide Web. New York: ACM; 2009. pp. 211–220.
58. Thai-Nghe N, Gantner Z, Schmidt-Thieme L. Cost-sensitive learning methods for imbalanced data. In: Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN 2010), Barcelona, Spain. 2010. pp. 1–8.
59. Huber PJ. Robust estimation of a location parameter. Ann Math Stat. 1964;35(1):73–101.
60. Kim W, Yeganova L, Comeau DC, Wilbur WJ. Identifying well-formed biomedical phrases in MEDLINE® text. J Biomed Inform. 2012;45(6):1035–41.
61. Yepes AJJ, Mork JG, Demner-Fushman D, Aronson AR. Comparison and combination of several MeSH indexing approaches. In: AMIA Annual Symposium Proceedings. Washington DC: American Medical Informatics Association; 2013. pp. 709–718.
62. Cao Z, Qin T, Liu T-Y, Tsai M-F, Li H. Learning to rank: from pairwise approach to listwise approach. In: Proceedings of the 24th International Conference on Machine Learning. New York: ACM; 2007. pp. 129–136.
63. Friedman JH. Greedy function approximation: a gradient boosting machine. Ann Stat. 2001;29(5):1189–1232.
64. Burges C, Shaked T, Renshaw E, Lazier A, Deeds M, Hamilton N, Hullender G. Learning to rank using gradient descent. In: Proceedings of the 22nd International Conference on Machine Learning. New York: ACM; 2005. pp. 89–96.
65. Metzler D, Croft WB. Linear feature-based models for information retrieval. Inf Retr. 2007;10(3):257–74.
66. Xu J, Li H. AdaRank: a boosting algorithm for information retrieval. In: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM; 2007. pp. 391–398.
67. Wu Q, Burges CJ, Svore KM, Gao J. Adapting boosting for information retrieval measures. Inf Retr. 2010;13(3):254–70.
68. Burges CJ, Ragno R, Le QV. Learning to rank with nonsmooth cost functions. In: Advances in Neural Information Processing Systems, vol. 19. 2007. pp. 193–200.
69. Brown PF, Pietra VJD, Pietra SAD, Mercer RL. The mathematics of statistical machine translation: parameter estimation. Comput Linguist. 1993;19(2):263–311.
70. Robertson SE, Walker S, Jones S, Hancock-Beaulieu MM, Gatford M. Okapi at TREC-3. Gaithersburg: NIST Special Publication; 1995. pp. 109–126.
71. Berger A, Lafferty J. Information retrieval as statistical translation. In: Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM; 1999. pp. 222–229.
72. Humphreys BL, Lindberg DA. The UMLS project: making the conceptual connection between users and the information they need. Bull Med Libr Assoc. 1993;81(2):170–7.
73. Liu K, Peng S, Wu J, Zhai C, Mamitsuka H, Zhu S. MeSHLabeler: improving the accuracy of large-scale MeSH indexing by integrating diverse evidence. Bioinformatics. 2015;31(12):i339–47.
74. Peng S, You R, Wang H, Zhai C, Mamitsuka H, Zhu S. DeepMeSH: deep semantic representation for improving large-scale MeSH indexing. Bioinformatics. 2016;32(12):i70–9.