Lecture 9 - Probabilistic Information Retrieval, Language Models
Vasily Sidorov
Today’s Lecture
▪ Probabilistic Approach to Retrieval
▪ Probability Ranking Principle
▪ Overview of Language Models for IR
Why probabilities in IR?
[Diagram] The user's information need is translated into a query; this representation of the need is uncertain. Each document is likewise reduced to a representation, and whether a document actually has relevant content is an uncertain guess. Retrieval must match these two uncertain representations, which is why probabilities are a natural fit.
▪ Joint probability:

  P(A, B) = P(A ∩ B) = P(A|B) P(B) = P(B|A) P(A)

▪ Bayes' Rule (posterior from prior):

  P(A|B) = P(B|A) P(A) / P(B) = P(B|A) P(A) / Σ_{x ∈ {A, A̅}} P(B|x) P(x)

▪ Odds:

  O(A) = P(A) / P(A̅) = P(A) / (1 − P(A))
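These identities can be checked numerically; a minimal sketch, with made-up probabilities:

```python
# Minimal numeric check of Bayes' Rule and odds; all values are made up.
p_a = 0.3               # prior P(A)
p_b_given_a = 0.8       # likelihood P(B|A)
p_b_given_not_a = 0.2   # P(B|not A)

# Total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes' Rule: P(A|B) = P(B|A)P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b

# Odds: O(A) = P(A) / (1 - P(A))
odds_a = p_a / (1 - p_a)

print(p_b, p_a_given_b, odds_a)
```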
The Probability Ranking Principle
“If a reference retrieval system’s response to each request is a
ranking of the documents in the collection in order of decreasing
probability of relevance to the user who submitted the request,
where the probabilities are estimated as accurately as possible on
the basis of whatever data have been made available to the system
for this purpose, the overall effectiveness of the system to its user
will be the best that is obtainable on the basis of those data.”
P(R = 0 | x⃗) + P(R = 1 | x⃗) = 1
Probability Ranking Principle (PRP)
▪ Simple case: no selection costs or other utility
concerns that would differentially weight errors
▪ Bayes optimal decision rule:
  x⃗ is relevant iff P(R = 1 | x⃗) > P(R = 0 | x⃗)
▪ PRP in action: rank all documents by P(R = 1 | x⃗)
Probability Ranking Principle
▪ More complex case: retrieval costs
▪ Let d be a document
▪ 𝐶 – cost of not retrieving a relevant document
▪ 𝐶′ – cost of retrieving a non-relevant document
▪ Probability Ranking Principle: if

  C′ ⋅ P(R = 0 | d) − C ⋅ P(R = 1 | d) ≤ C′ ⋅ P(R = 0 | d′) − C ⋅ P(R = 1 | d′)

  for all documents d′ not yet retrieved, then d is the next document to be retrieved
▪ We won’t further consider cost/utility from now on
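The cost-based selection rule above can be sketched directly; the costs and relevance-probability estimates below are invented for illustration:

```python
# Sketch: pick the next document to retrieve under the cost-based PRP.
# C is the cost of missing a relevant document, C' of retrieving a
# non-relevant one; the probabilities are hypothetical model estimates.
C, C_prime = 2.0, 1.0

# Hypothetical P(R=1|d) for three not-yet-retrieved documents
p_rel = {"d1": 0.8, "d2": 0.5, "d3": 0.1}

def expected_loss(d):
    # C' * P(R=0|d) - C * P(R=1|d): lowest value is retrieved next
    return C_prime * (1 - p_rel[d]) - C * p_rel[d]

next_doc = min(p_rel, key=expected_loss)
print(next_doc)  # -> d1
```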
Probability Ranking Principle
▪ How do we compute all those probabilities?
▪ Do not know exact probabilities, have to use estimates
▪ Simplest model: Binary Independence Model (BIM)
▪ (Questionable) Assumptions
▪ “Relevance” of each document is independent of
relevance of other documents
▪ How about duplicates?
▪ Boolean model of relevance
▪ Single step information need
▪ Seeing a range of results might let user refine query
Probabilistic Retrieval Strategy
▪ Estimate how terms contribute to relevance
▪ How do other things like term frequency and document
length influence your judgments about document
relevance?
▪ Not at all in BIM
▪ A more detailed approach is the Okapi (BM25) formula
Probabilistic Ranking
Basic concept:
“For a given query, if we know some documents that are
relevant, terms that occur in those documents should be
given greater weighting in searching for other relevant
documents.
By making assumptions about the distribution of terms
and applying Bayes Theorem, it is possible to derive
weights theoretically.”
Van Rijsbergen
Binary Independence Model (BIM)
▪ Traditionally used in conjunction with PRP
▪ “Binary” = Boolean: documents are represented as
binary incidence vectors of terms:
▪ x⃗ = (x_1, …, x_n), where x_i = 1 iff term i is present in document x
▪ Different documents can be modeled as the same vector
            Antony and   Julius    The       Hamlet   Othello   Macbeth
            Cleopatra    Caesar    Tempest
Antony          1           1         0         0        0         1
Brutus          1           1         0         1        0         0
Caesar          1           1         0         1        1         1
Calpurnia       0           1         0         0        0         0
Cleopatra       1           0         0         0        0         0
mercy           1           0         1         1        1         1
worser          1           0         1         1        1         0
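An incidence matrix like the one above is easy to build mechanically; a minimal sketch with invented toy documents:

```python
# Sketch: build binary term-incidence vectors (the documents are invented).
docs = {
    "d1": "antony and brutus praise caesar",
    "d2": "caesar shows mercy",
}
vocab = sorted({w for text in docs.values() for w in text.split()})

def incidence_vector(text):
    present = set(text.split())
    # x_i = 1 iff term i occurs in the document; frequency is ignored
    return [1 if term in present else 0 for term in vocab]

vectors = {d: incidence_vector(t) for d, t in docs.items()}
print(vocab)
print(vectors)
```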
Binary Independence Model (BIM)
▪ “Independence”: terms occur in documents
independently
▪ Not true, but works in practice
▪ The ‘naïve’ assumption of Naïve Bayes models
▪ Similar to Bernoulli Naïve Bayes model
▪ One feature 𝑋𝑤 for each word in dictionary
▪ 𝑋𝑤 = True in document 𝑑 if 𝑤 appears in 𝑑
▪ Naïve Bayes assumption
▪ Given the document’s topic, appearance of one word in the
document tells us nothing about the chances that another word
appears
Binary Independence Model (BIM)
▪ Queries: binary term incidence vectors
▪ Given query q⃗, for each document d we need to compute P(R | q⃗, d)
▪ Replace this with computing P(R | q⃗, x⃗), where x⃗ is the binary term incidence vector representing d
▪ Interested only in ranking
▪ Will use odds and Bayes' Rule:

  O(R | q⃗, x⃗) = P(R=1 | q⃗, x⃗) / P(R=0 | q⃗, x⃗)
              = [P(R=1 | q⃗) ⋅ P(x⃗ | R=1, q⃗) / P(x⃗ | q⃗)] / [P(R=0 | q⃗) ⋅ P(x⃗ | R=0, q⃗) / P(x⃗ | q⃗)]
Binary Independence Model
  O(R | q⃗, x⃗) = P(R=1 | q⃗, x⃗) / P(R=0 | q⃗, x⃗)
              = [P(R=1 | q⃗) / P(R=0 | q⃗)] ⋅ [P(x⃗ | R=1, q⃗) / P(x⃗ | R=0, q⃗)]

Writing p_i = P(x_i = 1 | R=1, q⃗) and r_i = P(x_i = 1 | R=0, q⃗), and assuming p_i = r_i for terms not in the query (so they cancel), term independence gives:

  O(R | q⃗, x⃗) = O(R | q⃗) ⋅ ∏_{x_i=1, q_i=1} (p_i / r_i) ⋅ ∏_{x_i=0, q_i=1} (1 − p_i) / (1 − r_i)
                          relevant (R = 1)    not relevant (R = 0)
  term present (x_i = 1)       p_i                   r_i
  term absent  (x_i = 0)     1 − p_i               1 − r_i
Binary Independence Model
  O(R | q⃗, x⃗) = O(R | q⃗) ⋅ ∏_{x_i=1, q_i=1} (p_i / r_i) ⋅ ∏_{x_i=0, q_i=1} (1 − p_i) / (1 − r_i)
                              (matching query terms)     (non-matching query terms)

Multiplying and dividing the first product by (1 − r_i)/(1 − p_i) lets the second product run over all query terms:

  O(R | q⃗, x⃗) = O(R | q⃗) ⋅ ∏_{x_i=1, q_i=1} [p_i (1 − r_i)] / [r_i (1 − p_i)] ⋅ ∏_{q_i=1} (1 − p_i) / (1 − r_i)
                              (all matching terms)                 (all query terms)
Binary Independence Model
  O(R | q⃗, x⃗) = O(R | q⃗) ⋅ ∏_{x_i=1, q_i=1} [p_i (1 − r_i)] / [r_i (1 − p_i)] ⋅ ∏_{q_i=1} (1 − p_i) / (1 − r_i)

O(R | q⃗) and the product over all query terms are constant for a given query, so only the first product matters for ranking. Its log is the Retrieval Status Value (RSV):

  RSV = log ∏_{x_i=q_i=1} [p_i (1 − r_i)] / [r_i (1 − p_i)] = Σ_{x_i=q_i=1} log [p_i (1 − r_i)] / [r_i (1 − p_i)]
Binary Independence Model
▪ It all boils down to computing the RSV:

  RSV = Σ_{x_i=q_i=1} log [p_i (1 − r_i)] / [r_i (1 − p_i)]

  RSV = Σ_{x_i=q_i=1} c_i,   where c_i = log [p_i (1 − r_i)] / [r_i (1 − p_i)]

▪ The c_i are log odds ratios
▪ They function as the term weights in this model
▪ With N documents in the collection, S of them relevant, df_t documents containing term i, and s relevant documents containing term i, the smoothed estimate is:

  c_i = log [ (s + 0.5) / (S − s + 0.5) ] / [ (df_t − s + 0.5) / (N − df_t − S + s + 0.5) ]
Estimation in practice
▪ 𝑟𝑖 — probability of term occurrence in non-relevant
documents for query
▪ If non-relevant documents are approximated by the whole collection, then r_i ≈ df_t / N and:

  log (1 − r_i) / r_i = log (N − df_t − S + s) / (df_t − s) ≈ log (N − df_t) / df_t ≈ log N / df_t = IDF
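As a quick numeric check, with no known relevant documents (S = s = 0) the smoothed c_i weight comes out close to plain idf; the collection size N and df_t below are invented:

```python
import math

# With S = s = 0, the smoothed c_i term weight reduces to an idf-like value.
# N (collection size) and df_t (document frequency) are invented numbers.
def c_i(N, df_t, S=0, s=0):
    return math.log(((s + 0.5) / (S - s + 0.5)) /
                    ((df_t - s + 0.5) / (N - df_t - S + s + 0.5)))

def idf(N, df_t):
    return math.log(N / df_t)

N, df_t = 100_000, 50
print(c_i(N, df_t), idf(N, df_t))  # the two values nearly coincide
```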
Estimation – key challenge
▪ 𝑝𝑖 — probability of term occurrence in relevant
documents — cannot be approximated as easily
▪ 𝑝𝑖 can be estimated in various ways:
▪ From relevant documents if you know some
▪ E.g., from relevance feedback
▪ Constant (e.g., 𝑝𝑖 = 0.5)
▪ proportional to probability of occurrence in collection
▪ See (Greiff, SIGIR 1998)
PRP and BIM
▪ One of the oldest formal models in IR (Maron & Kuhns, 1960)
▪ Requires restrictive assumptions:
▪ Term independence
▪ Out-of-query terms don’t affect the outcome
▪ Boolean representation of documents/queries/relevance
▪ Document relevance values are independent
▪ Some of these assumptions can be removed
▪ Problem: the model either requires partial relevance information or can seemingly derive only somewhat inferior term weights
Removing term independence
▪ In general, index terms aren’t
independent
▪ Dependencies can be complex
▪ van Rijsbergen (1979) proposed
simple model of dependencies as
a tree
▪ See also: Friedman and Goldszmidt's Tree Augmented Naïve Bayes (AAAI 1996)
▪ Each term dependent on one
other
▪ In 1970s, estimation problems
held back success of this model
Probabilistic Models
▪ The difference between ‘vector space’ and
‘probabilistic’ IR is not that big:
▪ In either case you build an information retrieval scheme in
the exact same way.
▪ Difference:
▪ for probabilistic IR, at the end, you score queries not by
cosine similarity and tf-idf in a vector space, but by a
slightly different formula motivated by probability theory
Okapi BM25: A Non-Binary Model
▪ The BIM was originally designed for short catalog
records of fairly consistent length, and it works
reasonably in these contexts
▪ For modern full-text search collections, a model
should pay attention to term frequency and
document length
▪ BestMatch25 (a.k.a. BM25 or Okapi) is sensitive to these quantities
▪ From 1994 until today, BM25 is one of the most
widely used and robust retrieval models
Okapi BM25: A Non-Binary Model
▪ The simplest score for document d is just idf weighting of the query terms present in the document:

  RSV_d = Σ_{t∈q} log (N / df_t)

▪ Improve this formula by factoring in the term frequency and document length:

  RSV_d = Σ_{t∈q} log (N / df_t) ⋅ (k1 + 1) ⋅ tf_td / ( k1 ⋅ ((1 − b) + b ⋅ L_d / L_avg) + tf_td )

▪ tf_td: term frequency in document d
▪ L_d, L_avg: length of document d, and the average document length in the collection
▪ k1: tuning parameter controlling the document term frequency scaling
▪ 0 ≤ b ≤ 1: tuning parameter controlling the scaling by document length
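The formula above can be sketched directly in code; the toy collection and the parameter choices k1 = 1.2, b = 0.75 are illustrative, not part of the slide:

```python
import math

# Sketch of the core BM25 score; toy documents and parameters are invented.
docs = {
    "d1": "the hobbit carries the ring to the mountain",
    "d2": "the ring is lost",
    "d3": "a hobbit said nothing",
}
N = len(docs)
k1, b = 1.2, 0.75
L_avg = sum(len(d.split()) for d in docs.values()) / N

def df(term):
    # document frequency: number of documents containing the term
    return sum(1 for d in docs.values() if term in d.split())

def bm25(query, doc_id):
    words = docs[doc_id].split()
    L_d = len(words)
    score = 0.0
    for t in query.split():
        if t not in words:
            continue
        tf = words.count(t)
        idf = math.log(N / df(t))
        # term-frequency saturation with document-length normalization
        score += idf * (k1 + 1) * tf / (k1 * ((1 - b) + b * L_d / L_avg) + tf)
    return score

ranked = sorted(docs, key=lambda d: bm25("hobbit ring", d), reverse=True)
print(ranked)  # d1 matches both query terms, so it ranks first
```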
Okapi BM25: A Non-Binary Model
▪ If the query is long, we might also use similar weighting for query terms:

  RSV_d = Σ_{t∈q} log (N / df_t) ⋅ (k1 + 1) ⋅ tf_td / ( k1 ⋅ ((1 − b) + b ⋅ L_d / L_avg) + tf_td ) ⋅ (k3 + 1) ⋅ tf_tq / (k3 + tf_tq)

▪ tf_tq: term frequency in the query q
▪ k3: tuning parameter controlling term frequency scaling of the query
▪ The tuning parameters should ideally be set to optimize performance on a development test collection
▪ In the absence of such optimization, experiments suggest setting k1 and k3 to a value between 1.2 and 2, and b = 0.75
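The extra query-term factor can be sketched on its own; k3 = 1.5 and the query string are illustrative choices:

```python
# Sketch of the query-term factor (k3 + 1) * tf_tq / (k3 + tf_tq);
# k3 and the example query are invented for illustration.
k3 = 1.5

def query_term_factor(tf_tq):
    # Saturates repeated query terms instead of weighting them linearly
    return (k3 + 1) * tf_tq / (k3 + tf_tq)

query = "cheap cheap flights cheap tickets".split()
for term in sorted(set(query)):
    print(term, query_term_factor(query.count(term)))
```

A term repeated three times gets weight 2.5 · 3 / 4.5 ≈ 1.67 rather than three times the weight of a single occurrence.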
Okapi BM25 + Relevance Info
▪ In the absence of relevance information we use the simple estimate RSV_d = Σ_{t∈q} log (N / df_t)
▪ Basically, assume S = s = 0
▪ If we have relevance judgements available, we can use the full form for RSV_d:

  RSV_d = Σ_{t∈q} [ log ( ((s + 0.5) / (S − s + 0.5)) / ((df_t − s + 0.5) / (N − df_t − S + s + 0.5)) )
          ⋅ (k1 + 1) ⋅ tf_td / ( k1 ⋅ ((1 − b) + b ⋅ L_d / L_avg) + tf_td )
          ⋅ (k3 + 1) ⋅ tf_tq / (k3 + tf_tq) ]
Language Models
▪ How to find a relevant document?
▪ Think about the document you are searching for
▪ What words are likely to appear in it?
▪ Language models
▪ A model that can generate or
recognize strings
  [Figure: a simple finite automaton LM that loops on "I wish", generating
  strings such as "I wish" and "I wish I wish I wish I wish I wish"]
Stochastic Language Models
▪ Model probability of generating strings (one word at
a time) in a language
▪ Typically, all strings over a certain alphabet Σ
▪ Example: a unigram model M

  P(the | M)     = 0.20
  P(a | M)       = 0.10
  P(hobbit | M)  = 0.01
  P(ring | M)    = 0.01
  P(said | M)    = 0.03
  P(carries | M) = 0.02
  …                        P(STOP | q1) = 0.0003

  s = "the hobbit carries the ring"
  P(s | M) = 0.2 × 0.01 × 0.02 × 0.2 × 0.01 = 0.00000008
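The unigram computation above can be sketched as follows, using the probabilities from the example model (words outside the model get probability 0 here):

```python
from functools import reduce

# Sketch of unigram scoring under the slide's model M; out-of-model
# words receive probability 0 (no smoothing), as in the example.
M = {"the": 0.20, "a": 0.10, "hobbit": 0.01, "ring": 0.01,
     "said": 0.03, "carries": 0.02}

def p_string(s, model):
    # Unigram assumption: word probabilities multiply independently
    return reduce(lambda acc, w: acc * model.get(w, 0.0), s.split(), 1.0)

print(p_string("the hobbit carries the ring", M))  # ~8e-08
```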
Stochastic Language Models
  Model M1             Model M2
  0.20    the          0.20    the
  0.01    class        0.0001  class
  0.0001  soon         0.03    soon
  0.0001  is           0.02    is
  0.0001  dismissed    0.1     dismissed
  0.0005  will         0.01    will
  0.01    be           0.0001  be

  For a typical string s over these words:  P(s | M2) > P(s | M1)
Language Models (LMs) for IR
▪ We view a document as a generative model that
generates the query
▪ Estimate P(q | M_d)
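A minimal sketch of this query-likelihood idea with maximum-likelihood estimates (the documents are invented, and a real system would smooth the estimates):

```python
# Sketch: score documents by P(q | M_d), where M_d is a unigram model
# estimated from the document by maximum likelihood. Toy documents only.
docs = {
    "d1": "the hobbit carries the ring",
    "d2": "the ring was lost long ago",
}

def p_query(query, doc):
    words = doc.split()
    p = 1.0
    for w in query.split():
        # MLE: P(w | M_d) = count(w, d) / |d|; zero if w is absent,
        # which is why smoothing matters in practice
        p *= words.count(w) / len(words)
    return p

for d_id, text in docs.items():
    print(d_id, p_query("the ring", text))
```

Here d1 scores higher because "the" occurs twice in a shorter document, giving it larger MLE probabilities.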
LM vs VSM