Lecture 9 - Probabilistic Information Retrieval, Language Models

The lecture discusses the probabilistic approach to information retrieval (IR), focusing on the Probability Ranking Principle (PRP) and various models such as the Binary Independence Model (BIM) and Okapi BM25. It emphasizes the importance of estimating document relevance probabilities to rank documents effectively in response to user queries. The presentation highlights the evolution and current relevance of probabilistic methods in IR systems.


CI-6226

Lecture 9. Probabilistic Information Retrieval, Language Models
Information Retrieval and Analysis

Vasily Sidorov

1
Today’s Lecture
▪ Probabilistic Approach to Retrieval
▪ Probability Ranking Principle
▪ Overview of Language Models for IR

2
Why probabilities in IR?

[Diagram: the user's Information Need becomes a Query; Documents become a
Document Representation. Understanding of the user's need is uncertain,
and whether a document has relevant content is an uncertain guess.
How to match the two?]

In traditional IR systems, matching between each document and query is
attempted in a semantically imprecise space of index terms. Probabilities
provide a principled foundation for uncertain reasoning. Can we use
probabilities to quantify our uncertainties?

3
Probabilistic IR topics
1. Classical probabilistic retrieval model
▪ Probability ranking principle, etc.
▪ Binary independence model (≈ Naïve Bayes text cat)
▪ (Okapi) BM25
2. Bayesian networks for text retrieval
3. Language model approach to IR
▪ An important emphasis in recent work

Probabilistic methods are one of the oldest but also one of the
currently hot topics in IR
▪ Traditionally: neat ideas, but didn’t win on performance
▪ It seems to be different now
4
Ranking the Documents
▪ We have a collection of documents
▪ User issues a query
▪ A list of documents needs to be returned
▪ Ranking method is the core of modern IR systems:
▪ In what order do we present documents to the user?
▪ We want the “best” document to be first, second best
second, etc…
▪ Idea: Rank by probability of relevance of the
document w.r.t. information need
▪ $P(R = 1 \mid \mathrm{document}_i,\ \mathrm{query})$
5
Recall a few probability basics
▪ For events $A$ and $B$:

$$P(A, B) = P(A \cap B) = P(A \mid B)\,P(B) = P(B \mid A)\,P(A)$$

▪ Bayes’ Rule (posterior $P(A \mid B)$ from prior $P(A)$):

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)} = \frac{P(B \mid A)\,P(A)}{\sum_{x \in \{A, \bar{A}\}} P(B \mid x)\,P(x)}$$

▪ Odds:

$$O(A) = \frac{P(A)}{P(\bar{A})} = \frac{P(A)}{1 - P(A)}$$
6
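
A quick numeric sketch of these identities in Python; all probability
values here are made up for illustration:

```python
# Minimal sketch: Bayes' Rule and odds with made-up numbers.
p_a = 0.3               # prior P(A)
p_b_given_a = 0.8       # P(B | A)
p_b_given_not_a = 0.2   # P(B | not A)

# Denominator via the law of total probability.
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Posterior P(A | B) by Bayes' Rule.
p_a_given_b = p_b_given_a * p_a / p_b

# Odds O(A) = P(A) / (1 - P(A)).
odds_a = p_a / (1 - p_a)

print(p_a_given_b)  # ~0.632
print(odds_a)       # ~0.429
```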
The Probability Ranking Principle
“If a reference retrieval system’s response to each request is a
ranking of the documents in the collection in order of decreasing
probability of relevance to the user who submitted the request,
where the probabilities are estimated as accurately as possible on
the basis of whatever data have been made available to the system
for this purpose, the overall effectiveness of the system to its user
will be the best that is obtainable on the basis of those data.”

[1960s/1970s] S. Robertson, W.S. Cooper, M.E. Maron;
van Rijsbergen (1979:113); Manning & Schütze (1999:538)

Assumption 1: Document relevance is independent.
Assumption 2: Accurate estimations of probabilities.
7
Probability Ranking Principle (PRP)
Let $x$ represent a document in the collection.
Let $R$ represent relevance of a document w.r.t. a given (fixed) query,
with $R = 1$ for relevant and $R = 0$ for not relevant.

Need to find $P(R = 1 \mid x)$, the probability that a document $x$ is relevant.

$$P(R = 1 \mid x) = \frac{P(x \mid R = 1)\,P(R = 1)}{P(x)} \qquad
P(R = 0 \mid x) = \frac{P(x \mid R = 0)\,P(R = 0)}{P(x)}$$

▪ $P(R = 1)$, $P(R = 0)$: prior probability of retrieving a relevant or
non-relevant document
▪ $P(x \mid R = 1)$, $P(x \mid R = 0)$: probability that if a relevant
(not relevant) document is retrieved, it is $x$

$$P(R = 0 \mid x) + P(R = 1 \mid x) = 1$$

8
Probability Ranking Principle (PRP)
▪ Simple case: no selection costs or other utility
concerns that would differentially weight errors
▪ Bayes optimal decision rule:
$x$ is relevant if $P(R = 1 \mid x) > P(R = 0 \mid x)$
▪ PRP in action: Rank all documents by $P(R = 1 \mid x)$

▪ Theorem: Using the PRP is optimal, in that it minimizes the loss
(Bayes risk)
▪ Provable if all probabilities are correct, etc. [e.g., Ripley 1996]

9
Probability Ranking Principle
▪ More complex case: retrieval costs
▪ Let $d$ be a document
▪ $C$: cost of not retrieving a relevant document
▪ $C'$: cost of retrieving a non-relevant document
▪ Probability Ranking Principle: if

$$C' \cdot P(R = 0 \mid d) - C \cdot P(R = 1 \mid d) \;\le\; C' \cdot P(R = 0 \mid d') - C \cdot P(R = 1 \mid d')$$

for all documents $d'$ not yet retrieved, then $d$ is the next
document to be retrieved
▪ We won’t further consider cost/utility from now on
10
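
A minimal sketch of this selection rule, assuming cost values and
relevance-probability estimates are already available (all numbers
below are hypothetical):

```python
# Choose the next document by minimizing the expected-cost expression
# C' * P(R=0 | d) - C * P(R=1 | d) over not-yet-retrieved documents.
C = 2.0        # cost of not retrieving a relevant document (made up)
C_prime = 1.0  # cost of retrieving a non-relevant document (made up)

# Hypothetical estimates of P(R=1 | d) for not-yet-retrieved documents.
p_relevant = {"d1": 0.9, "d2": 0.4, "d3": 0.7}

def expected_cost(p):
    return C_prime * (1 - p) - C * p

next_doc = min(p_relevant, key=lambda d: expected_cost(p_relevant[d]))
print(next_doc)  # d1: with fixed costs, the highest P(R=1 | d) wins
```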
Probability Ranking Principle
▪ How do we compute all those probabilities?
▪ Do not know exact probabilities, have to use estimates
▪ Simplest model: Binary Independence Model (BIM)
▪ (Questionable) Assumptions
▪ “Relevance” of each document is independent of
relevance of other documents
▪ How about duplicates?
▪ Boolean model of relevance
▪ Single step information need
▪ Seeing a range of results might let user refine query

11
Probabilistic Retrieval Strategy
▪ Estimate how terms contribute to relevance
▪ How do other things like term frequency and document
length influence your judgments about document
relevance?
▪ Not at all in BIM
▪ A more detailed approach is the Okapi (BM25) formula

▪ Combine to find document relevance probability

▪ Order documents by decreasing probability

12
Probabilistic Ranking
Basic concept:
“For a given query, if we know some documents that are
relevant, terms that occur in those documents should be
given greater weighting in searching for other relevant
documents.
By making assumptions about the distribution of terms
and applying Bayes Theorem, it is possible to derive
weights theoretically.”
Van Rijsbergen

13
Binary Independence Model (BIM)
▪ Traditionally used in conjunction with PRP
▪ “Binary” = Boolean: documents are represented as
binary incidence vectors of terms:
▪ $\vec{x} = (x_1, \ldots, x_n)$
▪ $x_i = 1$ iff term $i$ is present in document $x$
▪ Different documents can be modeled as the same vector
            Antony and  Julius   The      Hamlet  Othello  Macbeth
            Cleopatra   Caesar   Tempest
Antony      1           1        0        0       0        1
Brutus      1           1        0        1       0        0
Caesar      1           1        0        1       1        1
Calpurnia   0           1        0        0       0        0
Cleopatra   1           0        0        0       0        0
mercy       1           0        1        1       1        1
worser      1           0        1        1       1        0

14
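
In code, the binary representation reduces each document to the set of
terms it contains. A minimal sketch, using a few of the plays and terms
from the table above:

```python
# Binary term incidence: each document reduces to the set of terms
# it contains; counts and word order are discarded.
docs = {
    "Antony and Cleopatra": "antony brutus caesar cleopatra mercy worser",
    "Julius Caesar": "antony brutus caesar calpurnia",
    "The Tempest": "mercy worser",
}

vocab = sorted({t for text in docs.values() for t in text.split()})

def incidence_vector(text):
    terms = set(text.split())
    return [1 if t in terms else 0 for t in vocab]

for name, text in docs.items():
    print(name, incidence_vector(text))
```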
Binary Independence Model (BIM)
▪ “Independence”: terms occur in documents
independently
▪ Not true, but works in practice
▪ The ‘naïve’ assumption of Naïve Bayes models
▪ Similar to Bernoulli Naïve Bayes model
▪ One feature 𝑋𝑤 for each word in dictionary
▪ 𝑋𝑤 = True in document 𝑑 if 𝑤 appears in 𝑑
▪ Naïve Bayes assumption
▪ Given the document’s topic, appearance of one word in the
document tells us nothing about the chances that another word
appears

15
Binary Independence Model (BIM)
▪ Queries: binary term incidence vectors
▪ Given query $\vec{q}$:
▪ for each document $d$, we need to compute $P(R \mid \vec{q}, d)$
▪ replace with computing $P(R \mid \vec{q}, \vec{x})$, where $\vec{x}$ is the
binary term incidence vector representing $d$
▪ Interested only in ranking
▪ Will use odds and Bayes’ Rule:

$$O(R \mid \vec{q}, \vec{x}) = \frac{P(R = 1 \mid \vec{q}, \vec{x})}{P(R = 0 \mid \vec{q}, \vec{x})}
= \frac{P(R = 1 \mid \vec{q})}{P(R = 0 \mid \vec{q})} \cdot \frac{P(\vec{x} \mid R = 1, \vec{q})}{P(\vec{x} \mid R = 0, \vec{q})}$$

(the $P(\vec{x} \mid \vec{q})$ factors cancel)

16
Binary Independence Model
$$O(R \mid \vec{q}, \vec{x}) = \frac{P(R = 1 \mid \vec{q}, \vec{x})}{P(R = 0 \mid \vec{q}, \vec{x})}
= \underbrace{\frac{P(R = 1 \mid \vec{q})}{P(R = 0 \mid \vec{q})}}_{\text{constant for a given query}}
\cdot \underbrace{\frac{P(\vec{x} \mid R = 1, \vec{q})}{P(\vec{x} \mid R = 0, \vec{q})}}_{\text{needs estimation}}$$

▪ Using the Independence Assumption:

$$\frac{P(\vec{x} \mid R = 1, \vec{q})}{P(\vec{x} \mid R = 0, \vec{q})}
= \prod_{i=1}^{n} \frac{P(x_i \mid R = 1, \vec{q})}{P(x_i \mid R = 0, \vec{q})}
\;\Rightarrow\;
O(R \mid \vec{q}, \vec{x}) = O(R \mid \vec{q}) \cdot \prod_{i=1}^{n} \frac{P(x_i \mid R = 1, \vec{q})}{P(x_i \mid R = 0, \vec{q})}$$
17
Binary Independence Model
$$O(R \mid \vec{q}, \vec{x}) = O(R \mid \vec{q}) \cdot \prod_{i=1}^{n} \frac{P(x_i \mid R = 1, \vec{q})}{P(x_i \mid R = 0, \vec{q})}$$

▪ Since $x_i$ is either 0 or 1:

$$O(R \mid \vec{q}, \vec{x}) = O(R \mid \vec{q})
\cdot \prod_{x_i = 1} \frac{P(x_i = 1 \mid R = 1, \vec{q})}{P(x_i = 1 \mid R = 0, \vec{q})}
\cdot \prod_{x_i = 0} \frac{P(x_i = 0 \mid R = 1, \vec{q})}{P(x_i = 0 \mid R = 0, \vec{q})}$$

▪ Let $p_i = P(x_i = 1 \mid R = 1, \vec{q})$ and $r_i = P(x_i = 1 \mid R = 0, \vec{q})$
▪ For all terms not occurring in the query ($q_i = 0$), assume $p_i = r_i$:

$$O(R \mid \vec{q}, \vec{x}) = O(R \mid \vec{q})
\cdot \prod_{x_i = 1,\, q_i = 1} \frac{p_i}{r_i}
\cdot \prod_{x_i = 0,\, q_i = 1} \frac{1 - p_i}{1 - r_i}$$

18
document                  relevant (R = 1)    not relevant (R = 0)
term present (x_i = 1)    p_i                 r_i
term absent  (x_i = 0)    1 - p_i             1 - r_i

19
Binary Independence Model
$$O(R \mid \vec{q}, \vec{x}) = O(R \mid \vec{q})
\cdot \underbrace{\prod_{x_i = 1,\, q_i = 1} \frac{p_i}{r_i}}_{\text{all matching terms}}
\cdot \underbrace{\prod_{x_i = 0,\, q_i = 1} \frac{1 - p_i}{1 - r_i}}_{\text{non-matching query terms}}$$

Multiplying and dividing the matching-term product by
$\prod_{x_i = 1,\, q_i = 1} \frac{1 - r_i}{1 - p_i}$ gives:

$$O(R \mid \vec{q}, \vec{x}) = O(R \mid \vec{q})
\cdot \underbrace{\prod_{x_i = 1,\, q_i = 1} \frac{p_i (1 - r_i)}{r_i (1 - p_i)}}_{\text{all matching terms}}
\cdot \underbrace{\prod_{q_i = 1} \frac{1 - p_i}{1 - r_i}}_{\text{all query terms}}$$
20
Binary Independence Model
$$O(R \mid \vec{q}, \vec{x}) = O(R \mid \vec{q})
\cdot \prod_{x_i = 1,\, q_i = 1} \frac{p_i (1 - r_i)}{r_i (1 - p_i)}
\cdot \underbrace{\prod_{q_i = 1} \frac{1 - p_i}{1 - r_i}}_{\text{constant for each query}}$$

The middle product is the only part that must be estimated for ranking.

▪ Retrieval Status Value:

$$RSV = \log \prod_{x_i = q_i = 1} \frac{p_i (1 - r_i)}{r_i (1 - p_i)}
= \sum_{x_i = q_i = 1} \log \frac{p_i (1 - r_i)}{r_i (1 - p_i)}$$
21
Binary Independence Model
▪ It all boils down to computing the RSV:

$$RSV = \sum_{x_i = q_i = 1} \log \frac{p_i (1 - r_i)}{r_i (1 - p_i)}
= \sum_{x_i = q_i = 1} c_i, \qquad
c_i = \log \frac{p_i (1 - r_i)}{r_i (1 - p_i)}$$

▪ The $c_i$ are log odds ratios
▪ They function as the term weights in this model

So, how do we compute the $c_i$’s from our data?


22
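
A minimal sketch of this ranking step, assuming the weights $c_i$ have
already been estimated somehow (the values below are made up):

```python
# Rank by RSV = sum of c_i over terms present in both query and document.
# Hypothetical term weights c_i (in practice, estimated from counts).
c = {"probabilistic": 2.1, "retrieval": 1.3, "the": 0.01}

def rsv(query_terms, doc_terms):
    # Only terms with x_i = q_i = 1 contribute to the score.
    return sum(c[t] for t in query_terms & doc_terms if t in c)

doc = {"probabilistic", "retrieval", "model"}
print(rsv({"probabilistic", "retrieval"}, doc))  # 3.4 (2.1 + 1.3)
```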
Binary Independence Model
▪ Estimating RSV coeffs in theory
▪ For each term 𝑖 look at this table of document counts
Documents                 Relevant   Non-Relevant        Total
Term present (x_i = 1)    s          df_t - s            df_t
Term absent  (x_i = 0)    S - s      N - df_t - S + s    N - df_t
Total                     S          N - S               N

▪ Estimates: $p_i \approx \frac{s}{S}$, $r_i \approx \frac{df_t - s}{N - S}$
(assume no zero counts; remember smoothing)

$$c_i \approx \log \frac{s / (S - s)}{(df_t - s) / (N - df_t - S + s)}$$

23
Final Form of BIM
$$c_i = \log \frac{s / (S - s)}{(df_t - s) / (N - df_t - S + s)}$$

▪ What if every relevant document has a particular term, or none does?
▪ Smoothing!

$$c_i = \log \frac{(s + 0.5) / (S - s + 0.5)}{(df_t - s + 0.5) / (N - df_t - S + s + 0.5)}$$
24
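
A direct transcription of the smoothed estimator as a Python function;
the counts would come from relevance judgments, and the example numbers
here are illustrative:

```python
import math

def c_weight(N, df_t, S, s, k=0.5):
    """Smoothed BIM term weight c_i with add-0.5 smoothing, as above.

    N: total documents; df_t: documents containing the term;
    S: known relevant documents; s: relevant documents with the term.
    """
    rel_odds = (s + k) / (S - s + k)                  # p_i / (1 - p_i)
    nonrel_odds = (df_t - s + k) / (N - df_t - S + s + k)  # r_i / (1 - r_i)
    return math.log(rel_odds / nonrel_odds)

# With no relevance information (S = s = 0) this behaves much like IDF:
print(c_weight(N=100_000, df_t=100, S=0, s=0))  # ~6.90
print(math.log(100_000 / 100))                  # ~6.91
```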
Estimation in practice
▪ 𝑟𝑖 — probability of term occurrence in non-relevant
documents for query
▪ If non-relevant documents are approximated by the whole collection,
then $r_i = df_t / N$ and:

$$\log \frac{1 - r_i}{r_i} = \log \frac{N - df_t - S + s}{df_t - s}
\approx \log \frac{N - df_t}{df_t} \approx \log \frac{N}{df_t} = IDF$$

25
Estimation – key challenge
▪ 𝑝𝑖 — probability of term occurrence in relevant
documents — cannot be approximated as easily
▪ 𝑝𝑖 can be estimated in various ways:
▪ From relevant documents if you know some
▪ E.g., from relevance feedback
▪ Constant (e.g., 𝑝𝑖 = 0.5)
▪ Proportional to the probability of occurrence in the collection
▪ See (Greiff, SIGIR 1998)

26
PRP and BIM
▪ One of the oldest formal models in IR (Maron & Kuhns, 1960)
▪ Requires restrictive assumptions:
▪ Term independence
▪ Out-of-query terms don’t affect the outcome
▪ Boolean representation of documents/queries/relevance
▪ Document relevance values are independent
▪ Some of these assumptions can be removed
▪ Problem: either requires partial relevance information
or, seemingly, can only derive somewhat inferior term
weights
27
Removing term independence
▪ In general, index terms aren’t
independent
▪ Dependencies can be complex
▪ van Rijsbergen (1979) proposed
simple model of dependencies as
a tree
▪ See also: Friedman and
Goldszmidt’s Tree Augmented
Naïve Bayes (AAAI 13, 1996)
▪ Each term dependent on one
other
▪ In 1970s, estimation problems
held back success of this model

28
Probabilistic Models
▪ The difference between ‘vector space’ and
‘probabilistic’ IR is not that big:
▪ In either case you build an information retrieval scheme in
the exact same way.
▪ Difference:
▪ for probabilistic IR, at the end, you score queries not by
cosine similarity and tf-idf in a vector space, but by a
slightly different formula motivated by probability theory

29
Okapi BM25: A Non-Binary Model
▪ The BIM was originally designed for short catalog
records of fairly consistent length, and it works
reasonably in these contexts
▪ For modern full-text search collections, a model
should pay attention to term frequency and
document length
▪ BestMatch25 (a.k.a. BM25 or Okapi) is sensitive to
these quantities
▪ From 1994 until today, BM25 is one of the most
widely used and robust retrieval models
30
Okapi BM25: A Non-Binary Model
▪ The simplest score for document $d$ is just idf weighting of the
query terms present in the document:

$$RSV_d = \sum_{t \in q} \log \frac{N}{df_t}$$

▪ Improve this formula by factoring in the term frequency and
document length:

$$RSV_d = \sum_{t \in q} \log \frac{N}{df_t} \cdot
\frac{(k_1 + 1) \cdot tf_{td}}{k_1 \cdot \left( (1 - b) + b \cdot \frac{L_d}{L_{avg}} \right) + tf_{td}}$$

▪ $tf_{td}$: term frequency in document $d$
▪ $L_d$ ($L_{avg}$): length of document $d$ (average document length in the collection)
▪ $k_1$: tuning parameter controlling the document term frequency scaling
▪ $0 \le b \le 1$: tuning parameter controlling the scaling by document length

31
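
A minimal BM25 scorer following the formula above; the tokenized
document, document frequencies, and parameter defaults below are all
illustrative:

```python
import math

def bm25(query, doc, df, N, L_avg, k1=1.5, b=0.75):
    """Score one tokenized document for a query with the formula above.

    query: list of query terms; doc: list of document tokens;
    df: term -> document frequency; N: collection size;
    L_avg: average document length. k1, b: illustrative defaults.
    """
    L_d = len(doc)
    score = 0.0
    for t in set(query):
        if df.get(t, 0) == 0:
            continue  # term unseen in the collection contributes nothing
        tf = doc.count(t)
        idf = math.log(N / df[t])
        denom = k1 * ((1 - b) + b * L_d / L_avg) + tf
        score += idf * (k1 + 1) * tf / denom
    return score

doc = "the hobbit carries the ring to the mountain".split()
print(bm25(["hobbit", "ring"], doc, df={"hobbit": 50, "ring": 300},
           N=10_000, L_avg=8))  # ~8.8
```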
Okapi BM25: A Non-Binary Model
▪ If the query is long, we might also use similar weighting for
query terms:

$$RSV_d = \sum_{t \in q} \log \frac{N}{df_t} \cdot
\frac{(k_1 + 1) \cdot tf_{td}}{k_1 \cdot \left( (1 - b) + b \cdot \frac{L_d}{L_{avg}} \right) + tf_{td}} \cdot
\frac{(k_3 + 1) \cdot tf_{tq}}{k_3 + tf_{tq}}$$

▪ $tf_{tq}$: term frequency in the query $q$
▪ $k_3$: tuning parameter controlling term frequency scaling of the query
▪ The tuning parameters should ideally be set to optimize performance
on a development test collection
▪ In the absence of such optimization, experiments have shown that
reasonable values are $k_1$ and $k_3$ between 1.2 and 2, and $b = 0.75$

32
Okapi BM25 + Relevance Info
▪ We use the simple estimate $RSV_d = \sum_{t \in q} \log \frac{N}{df_t}$
in the absence of relevance information
▪ Basically, assume $S = s = 0$
▪ If we have relevance judgements available, we can use the full
form for $RSV_d$:

$$RSV_d = \sum_{t \in q} \left[
\log \frac{(s + 0.5) / (S - s + 0.5)}{(df_t - s + 0.5) / (N - df_t - S + s + 0.5)}
\cdot \frac{(k_1 + 1) \cdot tf_{td}}{k_1 \cdot \left( (1 - b) + b \cdot \frac{L_d}{L_{avg}} \right) + tf_{td}}
\cdot \frac{(k_3 + 1) \cdot tf_{tq}}{k_3 + tf_{tq}}
\right]$$

33
Language Models
▪ How to find a relevant document?
▪ Think about the document you are searching for
▪ What words are likely to appear there?
▪ Language models
▪ A model that can generate or recognize strings

[Diagram: a simple finite automaton LM looping over "I wish". It can
generate "I wish", "I wish I wish I wish I wish I wish", and so on,
but not "wish I wish", which is not from this model.]
34
Stochastic Language Models
▪ Model the probability of generating strings (one word at
a time) in a language
▪ Typically, all strings over a certain alphabet Σ
▪ Example: a unigram model

Model M:
  P(the) = 0.20    P(a) = 0.10      P(hobbit) = 0.01
  P(ring) = 0.01   P(said) = 0.03   P(carries) = 0.02   ...
  P(STOP | q_1) = 0.0003

s = "the hobbit carries the ring"
     0.2  0.01    0.02    0.2 0.01

P(s | M) = 0.2 × 0.01 × 0.02 × 0.2 × 0.01 = 0.00000008
35
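
A minimal sketch of scoring a string under a unigram model, using the
probabilities from the table above (the STOP probability is ignored
here, as in the slide's arithmetic):

```python
# Unigram model M: word -> P(word | M), from the table above.
M = {"the": 0.20, "a": 0.10, "hobbit": 0.01, "ring": 0.01,
     "said": 0.03, "carries": 0.02}

def string_prob(s, model):
    """P(s | M): product of per-word unigram probabilities."""
    p = 1.0
    for w in s.split():
        p *= model.get(w, 0.0)  # unseen word -> probability 0
    return p

print(string_prob("the hobbit carries the ring", M))  # 8e-08
```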
Stochastic Language Models
Word        Model M1   Model M2
the         0.20       0.20
class       0.01       0.0001
soon        0.0001     0.03
is          0.0001     0.02
dismissed   0.0001     0.1
will        0.0005     0.01
be          0.01       0.0001

s = "the class is soon dismissed"

P(s | M1) = 0.2 × 0.01 × 0.0001 × 0.0001 × 0.0001 = 2e-15
P(s | M2) = 0.2 × 0.0001 × 0.02 × 0.03 × 0.1 = 1.2e-09

P(s | M2) > P(s | M1)
36
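
The comparison on this slide can be checked directly with the same
unigram scoring helper as in the previous sketch:

```python
def string_prob(s, model):  # same helper as in the previous sketch
    p = 1.0
    for w in s.split():
        p *= model.get(w, 0.0)
    return p

M1 = {"the": 0.20, "class": 0.01, "soon": 0.0001, "is": 0.0001,
      "dismissed": 0.0001, "will": 0.0005, "be": 0.01}
M2 = {"the": 0.20, "class": 0.0001, "soon": 0.03, "is": 0.02,
      "dismissed": 0.1, "will": 0.01, "be": 0.0001}

s = "the class is soon dismissed"
print(string_prob(s, M1))                       # 2e-15
print(string_prob(s, M2))                       # 1.2e-09
print(string_prob(s, M2) > string_prob(s, M1))  # True
```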
Language Models (LMs) for IR
▪ We view a document as a generative model that
generates the query
▪ Estimate $P(q \mid M_d)$

▪ How do we create a model for a document? How do we
generate a query from the LM?
▪ Not in the scope of this lecture; details in IIR12
37
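
The details are left to IIR12, but a minimal sketch of the standard
query-likelihood approach follows; the linear interpolation
(Jelinek-Mercer) smoothing between document and collection frequencies
is one common choice and an assumption here, not something this slide
specifies:

```python
import math

def query_likelihood(query, doc, collection, lam=0.5):
    """log P(q | M_d) with Jelinek-Mercer smoothing (assumed choice):
    P(t | M_d) = lam * tf_td / |d| + (1 - lam) * cf_t / total_tokens."""
    total_tokens = sum(len(d) for d in collection)
    score = 0.0
    for t in query:
        p_doc = doc.count(t) / len(doc)           # document model
        p_col = sum(d.count(t) for d in collection) / total_tokens
        p = lam * p_doc + (1 - lam) * p_col       # mixed estimate
        score += math.log(p) if p > 0 else float("-inf")
    return score

d1 = "the hobbit carries the ring".split()
d2 = "the class is dismissed".split()
q = ["hobbit", "ring"]
print(query_likelihood(q, d1, [d1, d2]))  # higher (less negative)
print(query_likelihood(q, d2, [d1, d2]))
```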
LMs vs Vector Space Model
▪ The two have things in common
▪ Both use term frequencies
▪ Probabilities are inherently “length-normalized” [0,1]
▪ Mixing term frequencies in document and collection has
an effect similar to idf
▪ Terms that are rare in the collection but frequent in some
documents have a great influence on the ranking
▪ Differences
▪ LMs: based on probabilistic theory, VSM: linear algebra
notion
▪ Collection frequency vs. document frequency

38
LM vs VSM

[Table: results of a comparison of tf-idf with language modeling (LM)
term weighting, by Ponte and Croft (1998)]
39
Resources
▪ IIR Chapters 11 and 12
▪ Other approaches to smoothing probability distributions, such as Dirichlet priors,
discounting, etc.
▪ ChengXiang Zhai, John D. Lafferty: A Study of Smoothing Methods for Language
Models Applied to Ad Hoc Information Retrieval. SIGIR 2001: 334-342
▪ Further reading
▪ S. E. Robertson and K. Spärck Jones. 1976. Relevance Weighting of Search Terms.
Journal of the American Society for Information Sciences 27(3): 129–146
▪ C. J. van Rijsbergen. 1979. Information Retrieval. 2nd ed. London: Butterworths,
chapter 6. [Most details of math]
▪ N. Fuhr. 1992. Probabilistic Models in Information Retrieval. The Computer
Journal, 35(3): 243–255
▪ F. Crestani, M. Lalmas, C. J. van Rijsbergen, and I. Campbell. 1998. Is This
Document Relevant? ... Probably: A Survey of Probabilistic Models in Information
Retrieval. ACM Computing Surveys 30(4): 528–552.

40
