Workshop on Recommender Systems
HEC Montréal, August 20-23, 2019
Neural Learning to Rank
Bhaskar Mitra
Principal Applied Scientist, Microsoft
PhD student, University College London
@UnderdogGeek
Objectives
A quick recap of neural networks
The fundamentals of learning to rank
A quick recap of deep neural networks
Learning to rank with deep neural networks
Reading material
An Introduction to
Neural Information Retrieval
Foundations and Trends® in Information Retrieval
(December 2018)
Download PDF: http://bit.ly/fntir-neural
Most information retrieval
(IR) systems present a ranked
list of retrieved artifacts
Learning to Rank (LTR)
“… the task to automatically construct a ranking model using training data, such that the model can sort new objects according to their degrees of relevance, preference, or importance.”
- Liu [2009]
Tie-Yan Liu. Learning to rank for information retrieval. Foundations and Trends in Information Retrieval, 2009.
Image source: https://storage.googleapis.com/pub-tools-public-publication-data/pdf/45530.pdf
A quick recap of
neural networks
Neural networks
Chains of parameterized linear transforms (e.g., multiply by a weight, add a bias) followed by non-linear functions (σ)
Popular choices for σ: Tanh, ReLU
Parameters trained using backpropagation
E2E training over millions of samples in batched mode
Many choices of architecture and hyper-parameters
[Figure: input → linear transform → non-linearity → linear transform → non-linearity → predicted output; the forward pass computes the loss against the expected output and the backward pass propagates gradients]
Stochastic Gradient Descent (SGD)
Task: regression
Training data: (x, y) pairs
Model: NN (1 feature, 1 hidden layer, 1 hidden node)
Learnable parameters: w1, b1, w2, b2
Forward pass: y1 = tanh(w1·x + b1), y2 = tanh(w2·y1 + b2), loss l = (y − y2)²
Goal: iteratively update the learnable parameters such that the loss l is minimized
Compute the gradient of the loss l w.r.t. each parameter (e.g., w1):
∂l/∂w1 = ∂l/∂y2 × ∂y2/∂y1 × ∂y1/∂w1
Update the parameter value based on the gradient, with η as the learning rate:
w1_new = w1_old − η × ∂l/∂w1
…and repeat
Stochastic Gradient Descent (SGD)
Task: regression
Training data: (x, y) pairs
Model: NN (1 feature, 1 hidden layer, 1 hidden node)
Learnable parameters: w1, b1, w2, b2
Forward pass: y1 = tanh(w1·x + b1), y2 = tanh(w2·y1 + b2), loss l = (y − y2)²
Goal: iteratively update the learnable parameters such that the loss l is minimized
Compute the gradient of the loss l w.r.t. each parameter (e.g., w1):
∂l/∂w1 = ∂(y − y2)²/∂y2 × ∂y2/∂y1 × ∂y1/∂w1
Update the parameter value based on the gradient, with η as the learning rate:
w1_new = w1_old − η × ∂l/∂w1
…and repeat
Stochastic Gradient Descent (SGD)
Task: regression
Training data: (x, y) pairs
Model: NN (1 feature, 1 hidden layer, 1 hidden node)
Learnable parameters: w1, b1, w2, b2
Forward pass: y1 = tanh(w1·x + b1), y2 = tanh(w2·y1 + b2), loss l = (y − y2)²
Goal: iteratively update the learnable parameters such that the loss l is minimized
Compute the gradient of the loss l w.r.t. each parameter (e.g., w1):
∂l/∂w1 = −2(y − y2) × ∂y2/∂y1 × ∂y1/∂w1
Update the parameter value based on the gradient, with η as the learning rate:
w1_new = w1_old − η × ∂l/∂w1
…and repeat
Stochastic Gradient Descent (SGD)
Task: regression
Training data: (x, y) pairs
Model: NN (1 feature, 1 hidden layer, 1 hidden node)
Learnable parameters: w1, b1, w2, b2
Forward pass: y1 = tanh(w1·x + b1), y2 = tanh(w2·y1 + b2), loss l = (y − y2)²
Goal: iteratively update the learnable parameters such that the loss l is minimized
Compute the gradient of the loss l w.r.t. each parameter (e.g., w1):
∂l/∂w1 = −2(y − y2) × ∂tanh(w2·y1 + b2)/∂y1 × ∂y1/∂w1
Update the parameter value based on the gradient, with η as the learning rate:
w1_new = w1_old − η × ∂l/∂w1
…and repeat
Stochastic Gradient Descent (SGD)
Task: regression
Training data: (x, y) pairs
Model: NN (1 feature, 1 hidden layer, 1 hidden node)
Learnable parameters: w1, b1, w2, b2
Forward pass: y1 = tanh(w1·x + b1), y2 = tanh(w2·y1 + b2), loss l = (y − y2)²
Goal: iteratively update the learnable parameters such that the loss l is minimized
Compute the gradient of the loss l w.r.t. each parameter (e.g., w1):
∂l/∂w1 = −2(y − y2) × (1 − tanh²(w2·y1 + b2)) × w2 × ∂y1/∂w1
Update the parameter value based on the gradient, with η as the learning rate:
w1_new = w1_old − η × ∂l/∂w1
…and repeat
Stochastic Gradient Descent (SGD)
Task: regression
Training data: (x, y) pairs
Model: NN (1 feature, 1 hidden layer, 1 hidden node)
Learnable parameters: w1, b1, w2, b2
Forward pass: y1 = tanh(w1·x + b1), y2 = tanh(w2·y1 + b2), loss l = (y − y2)²
Goal: iteratively update the learnable parameters such that the loss l is minimized
Compute the gradient of the loss l w.r.t. each parameter (e.g., w1):
∂l/∂w1 = −2(y − y2) × (1 − tanh²(w2·y1 + b2)) × w2 × ∂tanh(w1·x + b1)/∂w1
Update the parameter value based on the gradient, with η as the learning rate:
w1_new = w1_old − η × ∂l/∂w1
…and repeat
Stochastic Gradient Descent (SGD)
Task: regression
Training data: (x, y) pairs
Model: NN (1 feature, 1 hidden layer, 1 hidden node)
Learnable parameters: w1, b1, w2, b2
Forward pass: y1 = tanh(w1·x + b1), y2 = tanh(w2·y1 + b2), loss l = (y − y2)²
Goal: iteratively update the learnable parameters such that the loss l is minimized
Compute the gradient of the loss l w.r.t. each parameter (e.g., w1):
∂l/∂w1 = −2(y − y2) × (1 − tanh²(w2·y1 + b2)) × w2 × (1 − tanh²(w1·x + b1)) × x
Update the parameter value based on the gradient, with η as the learning rate:
w1_new = w1_old − η × ∂l/∂w1
…and repeat
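The chain-rule expansion above can be checked numerically. Below is a minimal NumPy sketch of the toy setup; the target function, learning rate, and number of epochs are illustrative choices, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
w1, b1, w2, b2 = 0.1, 0.0, 0.1, 0.0
eta = 0.1  # learning rate (illustrative)

def forward(x):
    y1 = np.tanh(w1 * x + b1)
    y2 = np.tanh(w2 * y1 + b2)
    return y1, y2

# illustrative regression target (not from the slides)
xs = rng.uniform(-1.0, 1.0, 200)
ys = np.tanh(0.5 * xs)

def mse():
    return float(np.mean([(y - forward(x)[1]) ** 2 for x, y in zip(xs, ys)]))

loss_before = mse()
for epoch in range(5):
    for x, y in zip(xs, ys):
        y1, y2 = forward(x)
        # chain rule, exactly as expanded on the slides
        dl_dy2 = -2.0 * (y - y2)
        dy2_dy1 = (1.0 - y2 ** 2) * w2
        dy1_dw1 = (1.0 - y1 ** 2) * x
        dy1_db1 = (1.0 - y1 ** 2)
        dy2_dw2 = (1.0 - y2 ** 2) * y1
        dy2_db2 = (1.0 - y2 ** 2)
        # SGD update: w_new = w_old - eta * dl/dw
        w1 -= eta * dl_dy2 * dy2_dy1 * dy1_dw1
        b1 -= eta * dl_dy2 * dy2_dy1 * dy1_db1
        w2 -= eta * dl_dy2 * dy2_dw2
        b2 -= eta * dl_dy2 * dy2_db2
loss_after = mse()
```

After a few passes over the data the loss decreases, which is all the slides' "…and repeat" step promises.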
Neural models for
non-ranking tasks
The softmax function
In neural classification models, the softmax function is popularly used to normalize the neural network output scores across all the classes:
softmax(s)_i = e^(s_i) / Σ_j e^(s_j)
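A minimal NumPy sketch of this normalization; the max-subtraction is a standard numerical-stability trick and does not change the result.

```python
import numpy as np

def softmax(scores):
    # subtract the max for numerical stability; the output is unchanged
    z = np.asarray(scores, dtype=float) - np.max(scores)
    e = np.exp(z)
    return e / e.sum()

p = softmax([2.0, 1.0, 0.1])  # non-negative, sums to 1
```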
Cross entropy
The cross entropy between two probability distributions p and q over a discrete set of events is given by:
CE(p, q) = −Σ_i p_i × log(q_i)
If p_correct = 1 and p_i = 0 for all other values of i, then:
CE(p, q) = −log(q_correct)
Cross entropy with softmax loss
Cross entropy with softmax is a popular loss function for classification:
ℒ = −log( e^(s_correct) / Σ_i e^(s_i) )
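The two pieces combine into a single loss. A small sketch using the standard log-sum-exp form (the function name is illustrative):

```python
import numpy as np

def cross_entropy_with_softmax(scores, correct):
    # -log softmax(scores)[correct], computed stably via log-sum-exp
    z = np.asarray(scores, dtype=float)
    z = z - z.max()
    return float(np.log(np.exp(z).sum()) - z[correct])

loss = cross_entropy_with_softmax([1.0, 2.0, 3.0], correct=2)
```

The loss shrinks as the score of the correct class grows relative to the others.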
Questions?
The fundamentals of
learning to rank
Problem formulation
LTR models represent a rankable item—e.g., a document, a movie, or a song—given some context—e.g., a user-issued query or the user’s historical interactions with other items—as a numerical vector x ∈ ℝⁿ.
The ranking model f: x → ℝ is trained to map the vector to a real-valued score such that relevant items are scored higher.
Examples of ranking metrics
Discounted Cumulative Gain (DCG):
DCG@k = Σ_{i=1}^{k} (2^{rel_i} − 1) / log₂(i + 1)
Reciprocal Rank (RR):
RR@k = max_{1 ≤ i ≤ k} rel_i / i
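Both metrics can be sketched directly from the formulas above; `rels` is a list of relevance labels in ranked order, and the function names are illustrative.

```python
import numpy as np

def dcg_at_k(rels, k):
    # DCG@k = sum_{i=1..k} (2^rel_i - 1) / log2(i + 1)
    r = np.asarray(rels[:k], dtype=float)
    i = np.arange(1, len(r) + 1)
    return float(np.sum((2 ** r - 1) / np.log2(i + 1)))

def rr_at_k(rels, k):
    # RR@k = max_{1 <= i <= k} rel_i / i; for binary labels this is the
    # reciprocal of the rank of the first relevant item
    return max((rel / i for i, rel in enumerate(rels[:k], start=1)), default=0.0)
```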
Why is ranking challenging?
Rank-based metrics, such as DCG or MRR, are non-smooth and non-differentiable
Approaches
Liu [2009] categorizes different LTR approaches based on their training objectives:
Pointwise approach
Relevance label y_{q,d} is a number—derived from binary or graded human judgments or implicit user feedback (e.g., CTR). Typically, a regression or classification model is trained to predict y_{q,d} given x_{q,d}.
Pairwise approach
Pairwise preference between documents for a query (d_i ≻ d_j w.r.t. q) is the label. Reduces to binary classification to predict the more relevant document.
Listwise approach
Directly optimize for a rank-based metric, such as NDCG—difficult because these metrics are often not differentiable w.r.t. model parameters.
Tie-Yan Liu. Learning to rank for information retrieval. Foundations and Trends in Information Retrieval, 2009.
Features
Traditional LTR models employ hand-crafted features that encode IR insights. They can often be categorized as:
Query-independent or static features, e.g., incoming link count and document length
Query-dependent or dynamic features, e.g., BM25
Query-level features, e.g., query length
Features
Tao Qin, Tie-Yan Liu, Jun Xu, and Hang Li. LETOR: A Benchmark Collection for Research on Learning to Rank for Information Retrieval, Information Retrieval Journal, 2010
Pointwise objectives
Regression loss
Given ⟨q, d⟩, predict the value of y_{q,d}
e.g., square loss for binary or categorical labels:
ℒ = (y_{q,d} − f(x_{q,d}))²
where y_{q,d} is the one-hot representation [Fuhr, 1989] or the actual value [Cossock and Zhang, 2006] of the label
Norbert Fuhr. Optimum polynomial retrieval functions based on the probability ranking principle. ACM TOIS, 1989.
David Cossock and Tong Zhang. Subset ranking using regression. In COLT, 2006.
Pointwise objectives
Classification loss
Given ⟨q, d⟩, predict the class y_{q,d}
e.g., cross-entropy with softmax over categorical labels Y [Li et al., 2008]:
ℒ = −log( e^{s_{y_{q,d}}} / Σ_{y∈Y} e^{s_y} )
where s_{y_{q,d}} is the model’s score for label y_{q,d}
Ping Li, Qiang Wu, and Christopher J Burges. McRank: Learning to rank using multiple classification and gradient boosting. In NIPS, 2008.
Pairwise objectives
Pairwise loss minimizes the average number of inversions in ranking—i.e., cases where d_i ≻ d_j w.r.t. q but d_j is ranked higher than d_i
Given ⟨q, d_i, d_j⟩, predict the more relevant document
For ⟨q, d_i⟩ and ⟨q, d_j⟩:
Feature vectors: x_i and x_j
Model scores: s_i = f(x_i) and s_j = f(x_j)
Pairwise loss generally has the following form [Chen et al., 2009]:
ℒ = φ(s_i − s_j)
where φ can be:
• Hinge function φ(z) = max(0, 1 − z) [Herbrich et al., 2000]
• Exponential function φ(z) = e^{−z} [Freund et al., 2003]
• Logistic function φ(z) = log(1 + e^{−z}) [Burges et al., 2005]
• Others…
Wei Chen, Tie-Yan Liu, Yanyan Lan, Zhi-Ming Ma, and Hang Li. Ranking measures and loss functions in learning to rank. In NIPS, 2009.
Ralf Herbrich, Thore Graepel, and Klaus Obermayer. Large margin rank boundaries for ordinal regression. 2000.
Yoav Freund, Raj Iyer, Robert E Schapire, and Yoram Singer. An efficient boosting algorithm for combining preferences. In JMLR, 2003.
Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender. Learning to rank using gradient descent. In ICML, 2005.
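The three choices of φ above can be written down directly as functions of the margin z = s_i − s_j (positive when the preferred document scores higher):

```python
import numpy as np

def hinge(z):
    # phi(z) = max(0, 1 - z)
    return np.maximum(0.0, 1.0 - z)

def exponential(z):
    # phi(z) = e^{-z}
    return np.exp(-z)

def logistic(z):
    # phi(z) = log(1 + e^{-z})
    return np.log(1.0 + np.exp(-z))
```

All three decrease as the margin grows, so minimizing any of them pushes the preferred document's score above the other's.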
Pairwise objectives
RankNet loss
Pairwise loss function proposed by Burges et al. [2005]—an industry favourite [Burges, 2015]
Predicted probabilities: p_ij = p(s_i > s_j) ≡ e^{γ·s_i} / (e^{γ·s_i} + e^{γ·s_j}) = 1 / (1 + e^{−γ(s_i − s_j)})
Desired probabilities: p̄_ij = 1 and p̄_ji = 0
Computing the cross entropy between p̄ and p:
ℒ_RankNet = −p̄_ij·log(p_ij) − p̄_ji·log(p_ji) = −log(p_ij) = log(1 + e^{−γ(s_i − s_j)})
Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender. Learning to rank using gradient descent. In ICML, 2005.
Chris Burges. RankNet: A ranking retrospective. https://www.microsoft.com/en-us/research/blog/ranknet-a-ranking-retrospective/. 2015.
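The final RankNet expression transcribes directly to code; γ defaults to 1 here for illustration.

```python
import numpy as np

def ranknet_loss(s_i, s_j, gamma=1.0):
    # L = log(1 + exp(-gamma * (s_i - s_j))), where d_i is preferred to d_j
    return float(np.log1p(np.exp(-gamma * (s_i - s_j))))
```

When the two scores are equal the loss is log 2; it shrinks toward 0 as the preferred document pulls ahead and grows when the order is inverted.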
A generalized cross-entropy loss
An alternative loss function assumes a single relevant document d⁺ and compares it against the full collection D
Predicted probabilities: p(d⁺|q) = e^{γ·s(q,d⁺)} / Σ_{d∈D} e^{γ·s(q,d)}
The cross-entropy loss is then given by:
ℒ_CE(q, d⁺, D) = −log p(d⁺|q) = −log( e^{γ·s(q,d⁺)} / Σ_{d∈D} e^{γ·s(q,d)} )
Computing the softmax over the full collection is prohibitively expensive—LTR models typically consider a few negative candidates [Huang et al., 2013, Shen et al., 2014, Mitra et al., 2017]
Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. Learning deep structured semantic models for web search using clickthrough data. In CIKM, 2013.
Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Gregoire Mesnil. A latent semantic model with convolutional-pooling structure for information retrieval. In CIKM, 2014.
Bhaskar Mitra, Fernando Diaz, and Nick Craswell. Learning to match using local and distributed representations of text for web search. In WWW, 2017.
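A sketch of the sampled variant described above: the softmax is taken over one positive score and a handful of sampled negative scores instead of the full collection (the function name is illustrative).

```python
import numpy as np

def ce_loss_sampled(s_pos, s_negs, gamma=1.0):
    # -log p(d+ | q), with the softmax restricted to the positive
    # document plus a few sampled negatives
    scores = gamma * np.concatenate(([s_pos], s_negs))
    scores = scores - scores.max()  # numerical stability
    return float(np.log(np.exp(scores).sum()) - scores[0])
```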
Listwise objectives
[Figure: two rankings; blue: relevant, gray: non-relevant. NDCG and ERR are higher for the left ranking, but the right ranking has fewer pairwise errors]
Due to the strong position-based discounting in IR measures, errors at higher ranks are much more problematic than errors at lower ranks
But listwise metrics are non-continuous and non-differentiable [Burges, 2010]
Christopher JC Burges. From RankNet to LambdaRank to LambdaMART: An overview. Learning, 2010.
Listwise objectives
LambdaRank loss
Burges et al. [2006] make two observations:
1. To train a model we don’t need the costs themselves, only the gradients of the costs w.r.t. the model scores
2. It is desirable that the gradient be bigger for pairs of documents whose swap produces a bigger impact on NDCG
LambdaRank therefore multiplies the actual (pairwise) gradients by the change in NDCG obtained by swapping the rank positions of the two documents
Christopher JC Burges, Robert Ragno, and Quoc Viet Le. Learning to rank with nonsmooth cost functions. In NIPS, 2006.
Listwise objectives
According to the Luce model [Luce, 1959], given four items {d1, d2, d3, d4}, the probability of observing a particular rank-order, say ⟨d2, d1, d4, d3⟩, is given by:
P(π) = φ(s2)/(φ(s1)+φ(s2)+φ(s3)+φ(s4)) × φ(s1)/(φ(s1)+φ(s3)+φ(s4)) × φ(s4)/(φ(s3)+φ(s4)) × φ(s3)/φ(s3)
where π is a particular permutation and φ is a transformation (e.g., linear, exponential, or sigmoid) over the score s_i corresponding to item d_i
ListNet loss
Cao et al. [2007] propose to compute the probability distribution over all possible permutations based on model scores and ground-truth labels. The loss is then given by the KL divergence between these two distributions.
This is computationally very costly; computing permutations of only the top-K items makes it slightly less prohibitive.
ListMLE loss
Xia et al. [2008] propose to compute the probability of the ideal permutation based on the ground truth. However, with categorical labels more than one ideal permutation is possible.
R Duncan Luce. Individual choice behavior. 1959.
Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. Learning to rank: from pairwise approach to listwise approach. In ICML, 2007.
Fen Xia, Tie-Yan Liu, Jue Wang, Wensheng Zhang, and Hang Li. Listwise approach to learning to rank: theory and algorithm. In ICML, 2008.
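The Luce model probability is a product over rank positions, which is easy to sketch; the toy scores below are illustrative, and the sum over all 3! permutations checks that the model defines a valid distribution.

```python
import numpy as np
from itertools import permutations

def luce_permutation_prob(scores, perm, phi=np.exp):
    # P(perm) = prod over positions i of
    #   phi(s_perm[i]) / sum_{k >= i} phi(s_perm[k])
    v = phi(np.asarray(scores, dtype=float))[list(perm)]
    p = 1.0
    for i in range(len(v)):
        p *= v[i] / v[i:].sum()
    return float(p)

scores = [0.5, 1.0, -0.3]
# probabilities over all permutations should sum to 1
total = sum(luce_permutation_prob(scores, p) for p in permutations(range(3)))
```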
Listwise objectives
Smooth DCG
Wu et al. [2009] compute a “smooth” rank of documents as a function of their scores
This “smooth” rank can be plugged into a ranking metric, such as MRR or DCG, to produce a smooth ranking loss
Mingrui Wu, Yi Chang, Zhaohui Zheng, and Hongyuan Zha. Smoothing DCG for learning to rank: A novel approach using smoothed hinge functions. In CIKM, 2009.
Questions?
A quick recap of
deep neural networks
Types of vector representations
Local (or one-hot) representation
Every term in vocabulary T is represented by a
binary vector of length |T|, where one position in
the vector is set to one and the rest to zero
Distributed representation
Every term in vocabulary T is represented by a
real-valued vector of length k. The vector can be
sparse or dense. The vector dimensions may be
observed (e.g., hand-crafted features) or latent
(e.g., embedding dimensions).
Different modalities of input text representation
Shift-invariant
neural operations
Detecting a pattern in one part of the input space is similar to
detecting it in another
Leverage redundancy by moving a window over the whole
input space and then aggregate
On each instance of the window a kernel—also known as a
filter or a cell—is applied
Different aggregation strategies lead to different architectures
Convolution
Move the window over the input space each time applying
the same cell over the window
A typical cell operation can be,
ℎ = 𝜎 𝑊𝑋 + 𝑏
Full Input [words x in_channels]
Cell Input [window x in_channels]
Cell Output [1 x out_channels]
Full Output [1 + (words – window) / stride x out_channels]
Pooling
Move the window over the input space, each time applying an aggregate function over each dimension within the window
h_j = max_{i∈win} X_{i,j} (max-pooling) or h_j = avg_{i∈win} X_{i,j} (average-pooling)
Full Input [words x channels]
Cell Input [window x channels]
Cell Output [1 x channels]
Full Output [1 + (words – window) / stride x channels]
Convolution w/
Global Pooling
Stacking a global pooling layer on top of a convolutional layer
is a common strategy for generating a fixed length embedding
for a variable length text
Full Input [words x in_channels]
Full Output [1 x out_channels]
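The shape bookkeeping above can be verified with a small NumPy sketch of convolution followed by global max-pooling; the dimensions are illustrative, and tanh stands in for σ.

```python
import numpy as np

def conv1d(X, W, b, stride=1):
    # X: [words x in_channels], W: [window x in_channels x out_channels]
    window = W.shape[0]
    out = []
    for start in range(0, X.shape[0] - window + 1, stride):
        patch = X[start:start + window]               # [window x in_channels]
        out.append(np.tanh(np.einsum('wi,wio->o', patch, W) + b))
    return np.stack(out)   # [(1 + (words - window) / stride) x out_channels]

rng = np.random.default_rng(0)
words, in_ch, out_ch, window = 10, 8, 4, 3
X = rng.standard_normal((words, in_ch))
W = rng.standard_normal((window, in_ch, out_ch))
b = np.zeros(out_ch)

H = conv1d(X, W, b)           # [8 x 4]: one row per window position
embedding = H.max(axis=0)     # global max-pooling -> fixed length [4]
```

However many words the input has, the pooled embedding always has `out_channels` dimensions, which is the point of the construction.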
Recurrent neural network
Similar to a convolution layer, but with an additional dependency on the previous hidden state
A simple cell operation is shown below, but cells like LSTMs and GRUs are more popular in practice:
h_i = σ(W·X_i + U·h_{i−1} + b)
Full Input [words x in_channels]
Cell Input [window x in_channels] + [1 x out_channels]
Cell Output [1 x out_channels]
Full Output [1 x out_channels]
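A sketch of the simple cell above, showing that the final hidden state has a fixed length regardless of the input length (dimensions are illustrative):

```python
import numpy as np

def simple_rnn(X, W, U, b):
    # h_i = sigma(W x_i + U h_{i-1} + b); returns the final hidden state,
    # a fixed-length [out_channels] summary of a variable-length input
    h = np.zeros(U.shape[0])
    for x in X:                 # X: [words x in_channels]
        h = np.tanh(W @ x + U @ h + b)
    return h

rng = np.random.default_rng(0)
in_ch, out_ch = 8, 4
W = rng.standard_normal((out_ch, in_ch)) * 0.1
U = rng.standard_normal((out_ch, out_ch)) * 0.1
b = np.zeros(out_ch)

h5 = simple_rnn(rng.standard_normal((5, in_ch)), W, U, b)
h9 = simple_rnn(rng.standard_normal((9, in_ch)), W, U, b)
```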
Questions?
Learning to rank with
deep neural networks
Representation learning for IR
Many IR scenarios—e.g., web search and content-based filtering for recommender systems—involve matching items based on their descriptions
Deep learning models can be useful for learning good representations of items for matching
i.e., LTR with raw inputs instead of hand-engineered features
DNN for YouTube
recommendation
Input: user profile
Output: probability distribution over
items to be recommended
Paul Covington, Jay Adams, and Emre Sargin. Deep neural networks for youtube recommendations. In RecSys, 2016.
Siamese networks
Relevance estimated by cosine similarity
between item embeddings
Input: character trigraph counts (bag of
words assumption)
Minimizes cross-entropy loss against
randomly sampled negative documents
Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. Learning deep structured semantic models for web search using clickthrough data. In CIKM, 2013.
Wide and deep model
Deep model for representation
learning and wide model for
memorization
Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, et al. Wide & deep learning for recommender systems. In workshop on deep learning for recommender systems, 2016.
Lexical and semantic matching networks
Mitra et al. [2017] argue that both lexical and semantic matching are important for document ranking
The Duet model is a linear combination of two DNNs—focusing on lexical and semantic matching, respectively—jointly trained on labelled data
Get the code: http://bit.ly/duetv2code
Bhaskar Mitra, Fernando Diaz, and Nick Craswell. Learning to match using local and distributed representations of text for web search. In WWW, 2017.
Large-scale pretrained language models
BERT and other large-scale pretrained language models are demonstrating dramatic performance improvements on many IR tasks
Jacob Devlin, Ming-Wei Chang, et al. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL, 2019.
Rodrigo Nogueira and Kyunghyun Cho. Passage re-ranking with BERT. arXiv, 2019.
Dealing with multiple fields
Real-world items have multiple sources of descriptions—this creates unique challenges for representation learning models
Hamed Zamani, Bhaskar Mitra, Xia Song, Nick Craswell, and Saurabh Tiwary. Neural ranking models with multiple document fields. In WSDM, 2018.
Juan Li, Zhicheng Dou, Yutao Zhu, Xiaochen Zuo, and Ji-Rong Wen. Deep cross-platform product matching in e-commerce. In IRJ, 2019.
Questions?
Key takeaways
• Learning to rank is effective on a broad range of IR tasks
• Optimization of non-smooth rank-based metrics is challenging, but loss functions such as RankNet and LambdaRank are effective in practice
• Learning to rank models can operate over hand-engineered features or employ deep architectures to learn useful representations from raw input
• Large-scale pretraining of language models demonstrates strong performance on tasks that involve text matching
Thank you
@UnderdogGeek bmitra@microsoft.com
More Related Content

What's hot (20)

PDF
Introduction to data mining and machine learning
Tilani Gunawardena PhD(UNIBAS), BSc(Pera), FHEA(UK), CEng, MIESL
 
PPT
Lect4
sumit621
 
PDF
Cluster Analysis for Dummies
Venkata Reddy Konasani
 
PDF
Machine Learning Basics
Humberto Marchezi
 
PPTX
Machine learning
Sukhwinder Singh
 
PDF
Dual Learning for Machine Translation (NIPS 2016)
Toru Fujino
 
PPTX
K-means Clustering
Anna Fensel
 
PDF
Dbm630 lecture09
Tokyo Institute of Technology
 
PPT
Chapter 09 class advanced
Houw Liong The
 
PPTX
InfoGAN: Interpretable Representation Learning by Information Maximizing Gen...
Shuhei Yoshida
 
PDF
Getting Started with Machine Learning
Humberto Marchezi
 
PPTX
K-means Clustering
Sajib Sen
 
PDF
Gradient Boosted Regression Trees in scikit-learn
DataRobot
 
PPTX
Kmeans
Nikita Goyal
 
PPTX
Support Vector Machines Simply
Emad Nabil
 
PDF
Safe and Efficient Off-Policy Reinforcement Learning
mooopan
 
PPTX
Machine Learning Algorithms (Part 1)
Zihui Li
 
PPT
Clustering
NLPseminar
 
PPTX
K-Means Clustering Simply
Emad Nabil
 
PDF
Machine learning in science and industry — day 2
arogozhnikov
 
Introduction to data mining and machine learning
Tilani Gunawardena PhD(UNIBAS), BSc(Pera), FHEA(UK), CEng, MIESL
 
Lect4
sumit621
 
Cluster Analysis for Dummies
Venkata Reddy Konasani
 
Machine Learning Basics
Humberto Marchezi
 
Machine learning
Sukhwinder Singh
 
Dual Learning for Machine Translation (NIPS 2016)
Toru Fujino
 
K-means Clustering
Anna Fensel
 
Chapter 09 class advanced
Houw Liong The
 
InfoGAN: Interpretable Representation Learning by Information Maximizing Gen...
Shuhei Yoshida
 
Getting Started with Machine Learning
Humberto Marchezi
 
K-means Clustering
Sajib Sen
 
Gradient Boosted Regression Trees in scikit-learn
DataRobot
 
Kmeans
Nikita Goyal
 
Support Vector Machines Simply
Emad Nabil
 
Safe and Efficient Off-Policy Reinforcement Learning
mooopan
 
Machine Learning Algorithms (Part 1)
Zihui Li
 
Clustering
NLPseminar
 
K-Means Clustering Simply
Emad Nabil
 
Machine learning in science and industry — day 2
arogozhnikov
 

Similar to Neural Learning to Rank (20)

PPTX
Neural Learning to Rank
Bhaskar Mitra
 
PDF
super-cheatsheet-artificial-intelligence.pdf
ssuser089265
 
PDF
Deep learning concepts
Joe li
 
PPT
Machine Learning and Inductive Inference
butest
 
PDF
Fundamentals of Deep Recommender Systems
WQ Fan
 
PPTX
lecture15-neural-nets (2).pptx
anjithaba
 
PDF
Machine learning cheat sheet
Hany Sewilam Abdel Hamid
 
PPTX
Recommender Systems from A to Z – Model Training
Crossing Minds
 
PDF
CS229_MachineLearning_notes.pdfkkkkkkkkkk
lenhan070903
 
PDF
machine learning notes by Andrew Ng and Tengyu Ma
Vijayabaskar Uthirapathy
 
PPTX
Introduction to deep Learning Fundamentals
VishalGour25
 
PDF
GTC 2021: Counterfactual Learning to Rank in E-commerce
GrubhubTech
 
PDF
Boston ML - Architecting Recommender Systems
James Kirk
 
PDF
Deep learning MindMap
Ashish Patel
 
PPTX
cnn.pptx Convolutional neural network used for image classication
SakkaravarthiShanmug
 
PDF
Relational machine-learning
Bhushan Kotnis
 
PPTX
Chapter10.pptx
adnansbp
 
PPT
Neural tool box
Mohan Raj
 
PDF
Deep learning architectures
Joe li
 
PPTX
GRU4Rec v2 - Recurrent Neural Networks with Top-k Gains for Session-based Rec...
Balázs Hidasi
 
Neural Learning to Rank
Bhaskar Mitra
 
super-cheatsheet-artificial-intelligence.pdf
ssuser089265
 
Deep learning concepts
Joe li
 
Machine Learning and Inductive Inference
butest
 
Fundamentals of Deep Recommender Systems
WQ Fan
 
lecture15-neural-nets (2).pptx
anjithaba
 
Machine learning cheat sheet
Hany Sewilam Abdel Hamid
 
Recommender Systems from A to Z – Model Training
Crossing Minds
 
CS229_MachineLearning_notes.pdfkkkkkkkkkk
lenhan070903
 
machine learning notes by Andrew Ng and Tengyu Ma
Vijayabaskar Uthirapathy
 
Introduction to deep Learning Fundamentals
VishalGour25
 
GTC 2021: Counterfactual Learning to Rank in E-commerce
GrubhubTech
 
Boston ML - Architecting Recommender Systems
James Kirk
 
Deep learning MindMap
Ashish Patel
 
cnn.pptx Convolutional neural network used for image classication
SakkaravarthiShanmug
 
Relational machine-learning
Bhushan Kotnis
 
Chapter10.pptx
adnansbp
 
Neural tool box
Mohan Raj
 
Deep learning architectures
Joe li
 
GRU4Rec v2 - Recurrent Neural Networks with Top-k Gains for Session-based Rec...
Balázs Hidasi
 
Ad

More from Bhaskar Mitra (20)

PPTX
Emancipatory Information Retrieval (Invited Talk at UCC)
Bhaskar Mitra
 
PPTX
Emancipatory Information Retrieval (SWIRL 2025)
Bhaskar Mitra
 
PPTX
Sociotechnical Implications of Generative AI for Information Access
Bhaskar Mitra
 
PDF
Bias and Beyond: On Generative AI and the Future of Search and Society
Bhaskar Mitra
 
PPTX
Search and Society: Reimagining Information Access for Radical Futures
Bhaskar Mitra
 
PPTX
Joint Multisided Exposure Fairness for Search and Recommendation
Bhaskar Mitra
 
PPTX
What’s next for deep learning for Search?
Bhaskar Mitra
 
PDF
So, You Want to Release a Dataset? Reflections on Benchmark Development, Comm...
Bhaskar Mitra
 
PPTX
Efficient Machine Learning and Machine Learning for Efficiency in Information...
Bhaskar Mitra
 
PPTX
Multisided Exposure Fairness for Search and Recommendation
Bhaskar Mitra
 
PPTX
Neural Information Retrieval: In search of meaningful progress
Bhaskar Mitra
 
PPTX
Conformer-Kernel with Query Term Independence @ TREC 2020 Deep Learning Track
Bhaskar Mitra
 
PPTX
Duet @ TREC 2019 Deep Learning Track
Bhaskar Mitra
 
PPTX
Benchmarking for Neural Information Retrieval: MS MARCO, TREC, and Beyond
Bhaskar Mitra
 
PPTX
Deep Neural Methods for Retrieval
Bhaskar Mitra
 
PPTX
Deep Learning for Search
Bhaskar Mitra
 
PPTX
Dual Embedding Space Model (DESM)
Bhaskar Mitra
 
PPTX
Adversarial and reinforcement learning-based approaches to information retrieval
Bhaskar Mitra
 
PPTX
5 Lessons Learned from Designing Neural Models for Information Retrieval
Bhaskar Mitra
 
PPTX
A Simple Introduction to Neural Information Retrieval
Bhaskar Mitra
 
Emancipatory Information Retrieval (Invited Talk at UCC)
Bhaskar Mitra
 
Emancipatory Information Retrieval (SWIRL 2025)
Bhaskar Mitra
 
Sociotechnical Implications of Generative AI for Information Access
Bhaskar Mitra
 
Bias and Beyond: On Generative AI and the Future of Search and Society
Bhaskar Mitra
 
Search and Society: Reimagining Information Access for Radical Futures
Bhaskar Mitra
 
Joint Multisided Exposure Fairness for Search and Recommendation
Bhaskar Mitra
 
What’s next for deep learning for Search?
Bhaskar Mitra
 
So, You Want to Release a Dataset? Reflections on Benchmark Development, Comm...
Bhaskar Mitra
 
Efficient Machine Learning and Machine Learning for Efficiency in Information...
Bhaskar Mitra
 
Multisided Exposure Fairness for Search and Recommendation
Bhaskar Mitra
 
Neural Information Retrieval: In search of meaningful progress
Bhaskar Mitra
 
Conformer-Kernel with Query Term Independence @ TREC 2020 Deep Learning Track
Bhaskar Mitra
 
Duet @ TREC 2019 Deep Learning Track
Bhaskar Mitra
 
Benchmarking for Neural Information Retrieval: MS MARCO, TREC, and Beyond
Bhaskar Mitra
 
Deep Neural Methods for Retrieval
Bhaskar Mitra
 
Deep Learning for Search
Bhaskar Mitra
 
Dual Embedding Space Model (DESM)
Bhaskar Mitra
 
Adversarial and reinforcement learning-based approaches to information retrieval
Bhaskar Mitra
 
5 Lessons Learned from Designing Neural Models for Information Retrieval
Bhaskar Mitra
 
A Simple Introduction to Neural Information Retrieval
Bhaskar Mitra
 

Neural Learning to Rank

  • 1. Workshop on Recommender Systems HEC Montréal, August 20-23, 2019 Neural Learning to Rank Bhaskar Mitra Principal Applied Scientist, Microsoft PhD student, University College London @UnderdogGeek
  • 2. Objectives A quick recap of neural networks The fundamentals of learning to rank A quick recap of deep neural networks Learning to rank with deep neural networks
  • 3. Reading material An Introduction to Neural Information Retrieval Foundations and Trends® in Information Retrieval (December 2018) Download PDF: http://bit.ly/fntir-neural
  • 4. Most information retrieval (IR) systems present a ranked list of retrieved artifacts
  • 5. Learning to Rank (LTR) “... the task to automatically construct a ranking model using training data, such that the model can sort new objects according to their degrees of relevance, preference, or importance.” - Liu [2009] Tie-Yan Liu. Learning to rank for information retrieval. Foundations and Trends in Information Retrieval, 2009. Image source: https://storage.googleapis.com/pub-tools-public-publication-data/pdf/45530.pdf
  • 6. A quick recap of neural networks
  • 7. Neural networks Chains of parameterized linear transforms (e.g., multiply by a weight, add a bias) followed by non-linear functions (σ). Popular choices for σ: Tanh, ReLU. Parameters trained using backpropagation. E2E training over millions of samples in batched mode. Many choices of architecture and hyper-parameters. (Diagram: input → linear transform → non-linearity → linear transform → non-linearity → predicted output; the forward pass computes the prediction, the backward pass propagates the loss against the expected output.)
  • 8. Stochastic Gradient Descent (SGD) Task: regression. Training data: $(x, y)$ pairs. Model: NN (1 feature, 1 hidden layer, 1 hidden node): $y_1 = \tanh(w_1 x + b_1)$, $y_2 = \tanh(w_2 y_1 + b_2)$, loss $l = (y - y_2)^2$. Learnable parameters: $w_1, b_1, w_2, b_2$. Goal: iteratively update the learnable parameters such that the loss $l$ is minimized. Compute the gradient of the loss $l$ w.r.t. each parameter (e.g., $w_1$) by the chain rule: $\frac{\partial l}{\partial w_1} = \frac{\partial l}{\partial y_2} \times \frac{\partial y_2}{\partial y_1} \times \frac{\partial y_1}{\partial w_1}$. Update the parameter value based on the gradient, with $\eta$ as the learning rate: $w_1^{new} = w_1^{old} - \eta \times \frac{\partial l}{\partial w_1}$ …and repeat.
  • 9.–14. Expanding the chain rule term by term: $\frac{\partial l}{\partial y_2} = -2(y - y_2)$; $\frac{\partial y_2}{\partial y_1} = (1 - \tanh^2(w_2 y_1 + b_2)) \times w_2$; $\frac{\partial y_1}{\partial w_1} = (1 - \tanh^2(w_1 x + b_1)) \times x$. Altogether: $\frac{\partial l}{\partial w_1} = -2(y - y_2) \times (1 - \tanh^2(w_2 y_1 + b_2)) \times w_2 \times (1 - \tanh^2(w_1 x + b_1)) \times x$.
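The SGD recipe on these slides can be sketched end to end in plain Python (a toy illustration; the function names, data, and hyper-parameters are mine, not from the deck):

```python
import math
import random

def sgd_toy_regression(data, lr=0.1, epochs=5000, seed=0):
    """Train y2 = tanh(w2*tanh(w1*x + b1) + b2) with squared loss via SGD."""
    rng = random.Random(seed)
    w1, b1, w2, b2 = (rng.uniform(-0.5, 0.5) for _ in range(4))
    for _ in range(epochs):
        for x, y in data:
            # forward pass
            y1 = math.tanh(w1 * x + b1)
            y2 = math.tanh(w2 * y1 + b2)
            # backward pass: the chain rule, exactly as expanded on the slides
            dl_dy2 = -2 * (y - y2)
            dy2_dpre2 = 1 - y2 ** 2              # tanh'(z) = 1 - tanh(z)^2
            dl_dw2 = dl_dy2 * dy2_dpre2 * y1
            dl_db2 = dl_dy2 * dy2_dpre2
            dy1_dpre1 = 1 - y1 ** 2
            dl_dw1 = dl_dy2 * dy2_dpre2 * w2 * dy1_dpre1 * x
            dl_db1 = dl_dy2 * dy2_dpre2 * w2 * dy1_dpre1
            # parameter update with learning rate lr
            w1 -= lr * dl_dw1; b1 -= lr * dl_db1
            w2 -= lr * dl_dw2; b2 -= lr * dl_db2
    return w1, b1, w2, b2

def predict(params, x):
    w1, b1, w2, b2 = params
    return math.tanh(w2 * math.tanh(w1 * x + b1) + b2)
```

Running `sgd_toy_regression([(0.0, 0.5), (1.0, -0.5)])` fits the two points closely, which is all a one-hidden-node network needs to demonstrate the update rule.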
  • 15. Neural models for non-ranking tasks
  • 16. The softmax function In neural classification models, the softmax function is popularly used to normalize the neural network output scores across all the classes: $p_i = \frac{e^{s_i}}{\sum_j e^{s_j}}$
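The normalization can be sketched in a few lines of Python (an illustrative helper, not from the deck; the max-subtraction is a standard numerical-stability trick):

```python
import math

def softmax(scores):
    """Normalize raw scores into a probability distribution over classes."""
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```

The outputs are non-negative, sum to one, and preserve the ordering of the raw scores.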
  • 17. Cross entropy The cross entropy between two probability distributions $p$ and $q$ over a discrete set of events is given by $CE(p, q) = -\sum_i p_i \log(q_i)$. If $p_{correct} = 1$ and $p_i = 0$ for all other values of $i$, then $CE(p, q) = -\log(q_{correct})$.
  • 18. Cross entropy with softmax loss Cross entropy with softmax is a popular loss function for classification: $\mathcal{L} = -\log\left(\frac{e^{s_{correct}}}{\sum_i e^{s_i}}\right)$
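Combining the two slides, cross entropy with softmax reduces to log-sum-exp minus the correct class's score (a sketch; the function name and log-sum-exp form are my choices, not from the deck):

```python
import math

def cross_entropy_with_softmax(scores, correct_idx):
    """-log of the softmax probability of the correct class."""
    m = max(scores)                       # stabilize the log-sum-exp
    log_z = m + math.log(sum(math.exp(s - m) for s in scores))
    return log_z - scores[correct_idx]   # = -log(e^{s_correct} / sum_i e^{s_i})
```

With two equal scores the correct class has probability 1/2, so the loss is log 2; raising the correct score lowers the loss.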
  • 19. Questions?
  • 20. The fundamentals of learning to rank
  • 21. Problem formulation LTR models represent a rankable item—e.g., a document or a movie or a song—given some context—e.g., a user-issued query or the user's historical interactions with other items—as a numerical vector $x \in \mathbb{R}^n$. The ranking model $f: x \to \mathbb{R}$ is trained to map the vector to a real-valued score such that relevant items are scored higher.
  • 22. Examples of ranking metrics Discounted Cumulative Gain (DCG): $DCG@k = \sum_{i=1}^{k} \frac{2^{rel_i} - 1}{\log_2(i + 1)}$ Reciprocal Rank (RR): $RR@k = \max_{1 \le i \le k} \frac{rel_i}{i}$
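Both metrics above are easy to compute given a ranked list of relevance labels (a sketch; function names are mine, and RR is written for binary labels as the rank of the first relevant item):

```python
import math

def dcg_at_k(rels, k):
    """DCG@k over graded relevance labels given in ranked order."""
    return sum((2 ** rel - 1) / math.log2(i + 2)   # i is 0-based, so rank = i + 1
               for i, rel in enumerate(rels[:k]))

def rr_at_k(rels, k):
    """Reciprocal rank of the first relevant item in the top k (0 if none)."""
    for i, rel in enumerate(rels[:k]):
        if rel > 0:
            return 1.0 / (i + 1)
    return 0.0
```

Note the position discount: the same relevant document contributes less to DCG the lower it is ranked.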
  • 23. Why is ranking challenging? Rank-based metrics, such as DCG or MRR, are non-smooth/non-differentiable
  • 24. Approaches Liu [2009] categorizes different LTR approaches based on training objectives: Pointwise approach: Relevance label $y_{q,d}$ is a number—derived from binary or graded human judgments or implicit user feedback (e.g., CTR). Typically, a regression or classification model is trained to predict $y_{q,d}$ given $x_{q,d}$. Pairwise approach: Pairwise preference between documents for a query ($d_i \succ d_j$ w.r.t. $q$) as label. Reduces to binary classification to predict the more relevant document. Listwise approach: Directly optimize for a rank-based metric, such as NDCG—difficult because these metrics are often not differentiable w.r.t. model parameters. Tie-Yan Liu. Learning to rank for information retrieval. Foundations and Trends in Information Retrieval, 2009.
  • 25. Features Traditional LTR models employ hand-crafted features that encode IR insights. They can often be categorized as: query-independent or static features (e.g., incoming link count and document length), query-dependent or dynamic features (e.g., BM25), and query-level features (e.g., query length)
  • 26. Features Tao Qin, Tie-Yan Liu, Jun Xu, and Hang Li. LETOR: A Benchmark Collection for Research on Learning to Rank for Information Retrieval, Information Retrieval Journal, 2010
  • 27. Pointwise objectives Regression loss Given $\langle q, d \rangle$, predict the value of $y_{q,d}$—e.g., square loss for binary or categorical labels, $\mathcal{L}_{squared} = \lVert y_{q,d} - s \rVert^2$, where $y_{q,d}$ is the one-hot representation [Fuhr, 1989] or the actual value [Cossock and Zhang, 2006] of the label and $s$ is the model's predicted score. Norbert Fuhr. Optimum polynomial retrieval functions based on the probability ranking principle. ACM TOIS, 1989. David Cossock and Tong Zhang. Subset ranking using regression. In COLT, 2006.
  • 28. Pointwise objectives Classification loss Given $\langle q, d \rangle$, predict the class $y_{q,d}$—e.g., cross-entropy with softmax over categorical labels $Y$ [Li et al., 2008]: $\mathcal{L}_{CE} = -\log\left( \frac{e^{s_{y_{q,d}}}}{\sum_{y \in Y} e^{s_y}} \right)$, where $s_{y_{q,d}}$ is the model's score for label $y_{q,d}$. Ping Li, Qiang Wu, and Christopher J Burges. Mcrank: Learning to rank using multiple classification and gradient boosting. In NIPS, 2008.
  • 29. Pairwise objectives Pairwise loss minimizes the average number of inversions in ranking—i.e., $d_i \succ d_j$ w.r.t. $q$ but $d_j$ is ranked higher than $d_i$. Given $\langle q, d_i, d_j \rangle$, predict the more relevant document. For $\langle q, d_i \rangle$ and $\langle q, d_j \rangle$: feature vectors $x_i$ and $x_j$; model scores $s_i = f(x_i)$ and $s_j = f(x_j)$. Pairwise loss generally has the form $\mathcal{L} = \phi(s_i - s_j)$ [Chen et al., 2009], where $\phi$ can be: the hinge function $\phi(z) = max(0, 1 - z)$ [Herbrich et al., 2000], the exponential function $\phi(z) = e^{-z}$ [Freund et al., 2003], the logistic function $\phi(z) = \log(1 + e^{-z})$ [Burges et al., 2005], and others. Wei Chen, Tie-Yan Liu, Yanyan Lan, Zhi-Ming Ma, and Hang Li. Ranking measures and loss functions in learning to rank. In NIPS, 2009. Ralf Herbrich, Thore Graepel, and Klaus Obermayer. Large margin rank boundaries for ordinal regression. 2000. Yoav Freund, Raj Iyer, Robert E Schapire, and Yoram Singer. An efficient boosting algorithm for combining preferences. In JMLR, 2003. Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender. Learning to rank using gradient descent. In ICML, 2005.
  • 30. Pairwise objectives RankNet loss Pairwise loss function proposed by Burges et al. [2005]—an industry favourite [Burges, 2015]. Predicted probabilities: $p_{ij} = p(s_i > s_j) \equiv \frac{e^{\gamma s_i}}{e^{\gamma s_i} + e^{\gamma s_j}} = \frac{1}{1 + e^{-\gamma (s_i - s_j)}}$. Desired probabilities: $\bar{p}_{ij} = 1$ and $\bar{p}_{ji} = 0$. Computing cross-entropy between $\bar{p}$ and $p$: $\mathcal{L}_{RankNet} = -\bar{p}_{ij} \log(p_{ij}) - \bar{p}_{ji} \log(p_{ji}) = -\log(p_{ij}) = \log(1 + e^{-\gamma (s_i - s_j)})$. Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender. Learning to rank using gradient descent. In ICML, 2005. Chris Burges. RankNet: A ranking retrospective. https://www.microsoft.com/en-us/research/blog/ranknet-a-ranking-retrospective/. 2015.
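The RankNet formulas translate directly to code (a sketch of the two closed forms above; function names are mine):

```python
import math

def pair_probability(s_i, s_j, gamma=1.0):
    """Predicted probability that document i should rank above document j."""
    return 1.0 / (1.0 + math.exp(-gamma * (s_i - s_j)))

def ranknet_loss(s_i, s_j, gamma=1.0):
    """RankNet loss for a pair where i is the preferred document."""
    return math.log(1.0 + math.exp(-gamma * (s_i - s_j)))
```

When the two scores are equal the loss is log 2, and it decays toward zero as the preferred document's margin grows, which is what makes it a smooth surrogate for pairwise inversions.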
  • 31. A generalized cross-entropy loss An alternative loss function assumes a single relevant document $d^+$ and compares it against the full collection $D$. Predicted probability: $p(d^+|q) = \frac{e^{\gamma \cdot s(q, d^+)}}{\sum_{d \in D} e^{\gamma \cdot s(q, d)}}$. The cross-entropy loss is then given by $\mathcal{L}_{CE}(q, d^+, D) = -\log(p(d^+|q)) = -\log\left( \frac{e^{\gamma \cdot s(q, d^+)}}{\sum_{d \in D} e^{\gamma \cdot s(q, d)}} \right)$. Computing the softmax over the full collection is prohibitively expensive—LTR models typically consider few negative candidates [Huang et al., 2013, Shen et al., 2014, Mitra et al., 2017]. Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. Learning deep structured semantic models for web search using clickthrough data. In CIKM, 2013. Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Gregoire Mesnil. A latent semantic model with convolutional-pooling structure for information retrieval. In CIKM, 2014. Bhaskar Mitra, Fernando Diaz, and Nick Craswell. Learning to match using local and distributed representations of text for web search. In WWW, 2017.
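In practice the sum over the full collection $D$ is replaced by a handful of sampled negatives, which can be sketched as (an illustrative helper, not the cited papers' exact implementation):

```python
import math

def ce_with_sampled_negatives(s_pos, s_negs, gamma=1.0):
    """-log softmax of the positive document's score against sampled negatives."""
    scores = [gamma * s_pos] + [gamma * s for s in s_negs]
    m = max(scores)                       # log-sum-exp for numerical stability
    log_z = m + math.log(sum(math.exp(s - m) for s in scores))
    return log_z - gamma * s_pos
```

With one negative of equal score the positive gets probability 1/2 (loss log 2); adding more competitive negatives raises the loss, pushing the model to separate the relevant document from the sample.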
  • 32. Listwise objectives Due to strong position-based discounting in IR measures, errors at higher ranks are much more problematic than at lower ranks. Example: with blue items relevant and gray items non-relevant, NDCG and ERR are higher for the left ranking, but the right ranking has fewer pairwise errors. But listwise metrics are non-continuous and non-differentiable [Burges, 2010]. Christopher JC Burges. From ranknet to lambdarank to lambdamart: An overview. Learning, 2010.
  • 33. Listwise objectives Burges et al. [2006] make two observations: 1. To train a model we don't need the costs themselves, only the gradients (of the costs w.r.t. model scores). 2. It is desired that the gradient be bigger for pairs of documents that produce a bigger impact in NDCG by swapping positions. LambdaRank loss: multiply the actual gradients by the change in NDCG from swapping the rank positions of the two documents. Christopher JC Burges, Robert Ragno, and Quoc Viet Le. Learning to rank with nonsmooth cost functions. In NIPS, 2006.
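The LambdaRank weighting factor, the |ΔDCG| obtained by swapping two documents, has a closed form under the DCG definition used earlier in the deck (a sketch; the function name is mine):

```python
import math

def delta_dcg(rels, i, j):
    """|change in DCG| from swapping 0-based rank positions i and j.

    Used as the per-pair weight on the RankNet gradient in LambdaRank.
    """
    gain = lambda r: 2 ** r - 1               # DCG gain of a relevance label
    disc = lambda pos: 1 / math.log2(pos + 2)  # discount at 0-based position pos
    return abs((gain(rels[i]) - gain(rels[j])) * (disc(i) - disc(j)))
```

Swapping a relevant and a non-relevant document near the top of the list yields a much larger weight than the same swap lower down, which is exactly the behaviour observation 2 asks for.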
  • 34. Listwise objectives According to the Luce model [Luce, 1959], given four items $\{d_1, d_2, d_3, d_4\}$ the probability of observing a particular rank-order, say $[d_2, d_1, d_4, d_3]$, is given by: $p(\pi) = \frac{\phi(s_2)}{\phi(s_1) + \phi(s_2) + \phi(s_3) + \phi(s_4)} \times \frac{\phi(s_1)}{\phi(s_1) + \phi(s_3) + \phi(s_4)} \times \frac{\phi(s_4)}{\phi(s_3) + \phi(s_4)}$, where $\pi$ is a particular permutation and $\phi$ is a transformation (e.g., linear, exponential, or sigmoid) over the score $s_i$ corresponding to item $d_i$. ListNet loss: Cao et al. [2007] propose to compute the probability distribution over all possible permutations based on model scores and ground-truth labels. The loss is then given by the K-L divergence between these two distributions. This is computationally very costly; computing permutations of only the top-K items makes it slightly less prohibitive. ListMLE loss: Xia et al. [2008] propose to compute the probability of the ideal permutation based on the ground truth. However, with categorical labels more than one ideal permutation is possible. R Duncan Luce. Individual choice behavior. 1959. Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. Learning to rank: from pairwise approach to listwise approach. In ICML, 2007. Fen Xia, Tie-Yan Liu, Jue Wang, Wensheng Zhang, and Hang Li. Listwise approach to learning to rank: theory and algorithm. In ICML, 2008.
  • 35. Listwise objectives Smooth DCG: Wu et al. [2009] compute a “smooth” rank of documents as a function of their scores. This smooth rank can be plugged into a ranking metric, such as MRR or DCG, to produce a smooth ranking loss. Mingrui Wu, Yi Chang, Zhaohui Zheng, and Hongyuan Zha. Smoothing DCG for learning to rank: A novel approach using smoothed hinge functions. In CIKM, 2009.
  • 36. Questions?
  • 37. A quick recap of deep neural networks
  • 38. Types of vector representations Local (or one-hot) representation: every term in vocabulary T is represented by a binary vector of length |T|, where one position in the vector is set to one and the rest to zero. Distributed representation: every term in vocabulary T is represented by a real-valued vector of length k. The vector can be sparse or dense. The vector dimensions may be observed (e.g., hand-crafted features) or latent (e.g., embedding dimensions).
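The two representation types can be contrasted in a couple of lines (an illustration; the vocabulary and the embedding values below are made up):

```python
def one_hot(term, vocab):
    """Local representation: a |vocab|-length binary vector with a single 1."""
    return [1 if t == term else 0 for t in vocab]

vocab = ["banana", "mango", "dog"]
local = one_hot("mango", vocab)           # one dimension per vocabulary term

# A distributed representation is instead a dense k-dimensional real vector,
# e.g. a learned embedding (k=2 here; values invented for illustration):
distributed = {"banana": [0.9, 0.1], "mango": [0.8, 0.2], "dog": [0.1, 0.9]}
```

In the local view no two distinct terms share any dimension; in the distributed view similar terms (banana, mango) can have nearby vectors.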
  • 39.–42. Different modalities of input text representation (four slides of illustrative figures)
  • 43. Shift-invariant neural operations Detecting a pattern in one part of the input space is similar to detecting it in another. Leverage redundancy by moving a window over the whole input space and then aggregating. On each instance of the window a kernel—also known as a filter or a cell—is applied. Different aggregation strategies lead to different architectures.
  • 44. Convolution Move the window over the input space, each time applying the same cell over the window. A typical cell operation can be $h = \sigma(WX + b)$. Cell input: [window × in_channels]; cell output: [1 × out_channels]. Full input: [words × in_channels]; full output: [1 + (words − window) / stride × out_channels].
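The sliding-window computation and the output-shape formula can be sketched without any framework (a toy 1-D convolution; the non-linearity σ is omitted and the function name is mine):

```python
def conv1d(X, W, b, stride=1):
    """Slide a window over X ([words x in_channels]), applying h = W.x_win + b.

    W has shape [out_channels x (window * in_channels)]; output length is
    1 + (words - window) // stride, matching the slide's shape formula.
    """
    window = len(W[0]) // len(X[0])
    out = []
    for start in range(0, len(X) - window + 1, stride):
        # flatten the current window into one vector, then apply each kernel row
        flat = [v for row in X[start:start + window] for v in row]
        out.append([sum(w * v for w, v in zip(row, flat)) + b_k
                    for row, b_k in zip(W, b)])
    return out
```

For example, a length-2 summing kernel over the sequence 1, 2, 3, 4 produces the windowed sums 3, 5, 7.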
  • 45. Pooling Move the window over the input space, each time applying an aggregate function over each dimension within the window: $h_j = \max_{i \in win} X_{i,j}$ (max-pooling) or $h_j = avg_{i \in win} X_{i,j}$ (average-pooling). Cell input: [window × channels]; cell output: [1 × channels]. Full input: [words × channels]; full output: [1 + (words − window) / stride × channels].
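Pooling is the same sliding window with an aggregate instead of a learned kernel (a sketch; names are mine, and setting the window to the full input length gives the global pooling used on the next slide):

```python
def pool1d(X, window, stride=1, agg=max):
    """Apply agg (max or average) per channel over each window of X."""
    out = []
    for start in range(0, len(X) - window + 1, stride):
        win = X[start:start + window]
        out.append([agg(row[j] for row in win) for j in range(len(X[0]))])
    return out

def avg(values):
    """Average aggregate, usable as the agg argument of pool1d."""
    vals = list(values)
    return sum(vals) / len(vals)
```

Unlike convolution, pooling has no parameters: it only reduces resolution per channel.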
  • 46. Convolution w/ Global Pooling Stacking a global pooling layer on top of a convolutional layer is a common strategy for generating a fixed-length embedding for a variable-length text. Full input: [words × in_channels]; full output: [1 × out_channels].
  • 47. Recurrent neural network Similar to a convolution layer, but with an additional dependency on the previous hidden state. A simple cell operation is shown below, but cells like LSTMs and GRUs are more popular in practice: $h_i = \sigma(WX_i + Uh_{i-1} + b)$. Cell input: [window × in_channels] + [1 × out_channels]; cell output: [1 × out_channels]; full output: [1 × out_channels].
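The recurrence can be unrolled as a simple loop (a toy tanh cell, not an LSTM/GRU; the function name is mine):

```python
import math

def simple_rnn(X, W, U, b):
    """h_i = tanh(W.x_i + U.h_{i-1} + b); returns the final hidden state."""
    h = [0.0] * len(b)                    # h_0 initialized to zeros
    for x in X:                           # one recurrence step per input word
        h = [math.tanh(sum(W[k][d] * x[d] for d in range(len(x)))
                       + sum(U[k][d] * h[d] for d in range(len(h)))
                       + b[k])
             for k in range(len(b))]
    return h
```

As the slide's shape annotation says, the output is a single [1 × out_channels] state regardless of how many words the input contains.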
  • 48. Questions?
  • 49. Learning to rank with deep neural networks
  • 50. Representation learning for IR Many IR scenarios—e.g., web search and content-based filtering for recommender systems—involve matching items based on their descriptions. Deep learning models can be useful for learning good representations of items for matching, i.e., LTR with raw inputs instead of hand-engineered features.
  • 51. DNN for YouTube recommendation Input: user profile. Output: probability distribution over items to be recommended. Paul Covington, Jay Adams, and Emre Sargin. Deep neural networks for youtube recommendations. In RecSys, 2016.
  • 52. Siamese networks Relevance estimated by cosine similarity between item embeddings. Input: character trigraph counts (bag-of-words assumption). Minimizes cross-entropy loss against randomly sampled negative documents. Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. Learning deep structured semantic models for web search using clickthrough data. In CIKM, 2013.
  • 53. Wide and deep model Deep model for representation learning and wide model for memorization. Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, et al. Wide & deep learning for recommender systems. In workshop on deep learning for recommender systems, 2016.
  • 54. Lexical and semantic matching networks Mitra et al. [2017] argue that both lexical and semantic matching is important for document ranking. The Duet model is a linear combination of two DNNs—focusing on lexical and semantic matching, respectively—jointly trained on labelled data. Get the code: http://bit.ly/duetv2code Bhaskar Mitra, Fernando Diaz, and Nick Craswell. Learning to match using local and distributed representations of text for web search. In WWW, 2017.
  • 55. Large scale pretrained language models BERT (and other large-scale unsupervised language models) are demonstrating dramatic performance improvements on many IR tasks. Jacob Devlin, Ming-Wei Chang, et al. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL, 2018. Nogueira, Rodrigo, and Kyunghyun Cho. Passage Re-ranking with BERT. In arXiv, 2019.
  • 56. Dealing with multiple fields Real-world items have multiple sources of descriptions—this creates unique challenges for representation learning models. Hamed Zamani, Bhaskar Mitra, Xia Song, Nick Craswell, and Saurabh Tiwary. Neural ranking models with multiple document fields. In WSDM, 2018. Juan Li, Zhicheng Dou, Yutao Zhu, Xiaochen Zuo, and Ji-Rong Wen. Deep cross-platform product matching in e-commerce. In IRJ, 2019.
  • 57. Questions?
  • 58. Key takeaways • Learning to Rank is effective on a broad range of IR tasks • Optimization of non-smooth rank-based metrics is challenging, but loss functions such as RankNet and LambdaRank are effective in practice • Learning to rank models can operate over hand-engineered features or employ deep architectures to learn useful representations from raw input • Large-scale pretraining of language models demonstrates strong performance on tasks that involve text matching
  • 59. Thank you @UnderdogGeek bmitra@microsoft.com