08 Language Models
Language Modeling
Introduction to N-grams
Dan Jurafsky
• More variables:
P(A,B,C,D) = P(A)P(B|A)P(C|A,B)P(D|A,B,C)
• The Chain Rule in General
P(x1,x2,x3,…,xn) = P(x1)P(x2|x1)P(x3|x1,x2)…P(xn|x1,…,xn-1)
The Chain Rule applied to compute the joint probability of words in a sentence:
P(w1 w2 … wn) = ∏i P(wi | w1 w2 … wi−1)
For example:
P("its water is so transparent") = P(its) × P(water | its) × P(is | its water) × P(so | its water is) × P(transparent | its water is so)
Markov Assumption
• Simplifying assumption:
P(the | its water is so transparent that) ≈ P(the | that)
(Andrei Markov)
• Or maybe:
P(the | its water is so transparent that) ≈ P(the | transparent that)
Markov Assumption
• In other words, we approximate each component in the product:
P(wi | w1 … wi−1) ≈ P(wi | wi−k … wi−1)
Simplest case: Unigram model
P(w1 w2 … wn) ≈ ∏i P(wi)
Some automatically generated sentences from a unigram model:
fifth, an, of, futures, the, an, incorporated, a, a, the, inflation, most, dollars, quarter, in, is, mass
Bigram model
Condition on the previous word:
P(wi | w1 … wi−1) ≈ P(wi | wi−1)
Some automatically generated sentences from a bigram model:
texaco, rose, one, in, this, issue, is, pursuing, growth, in, a, boiler, house, said, mr., gurria, mexico, 's, motion, control, proposal, without, permission, from, five, hundred, fifty, five, yen
N-gram models
• We can extend to trigrams, 4-grams, 5-grams
• In general this is an insufficient model of language, because language has long-distance dependencies:
"The computer which I had just put into the machine room on the fifth floor crashed."
• But we can often get away with N-gram models
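To make the bigram assumption concrete, here is a minimal sketch (not from the slides; the corpus and function names are illustrative) that counts bigrams in a toy corpus and then generates a word stream by conditioning only on the previous word, like the examples above:

    import random
    from collections import defaultdict

    def train_bigram_counts(sentences):
        """Count bigrams, padding each sentence with <s> and </s> markers."""
        counts = defaultdict(lambda: defaultdict(int))
        for sent in sentences:
            tokens = ["<s>"] + sent.split() + ["</s>"]
            for prev, cur in zip(tokens, tokens[1:]):
                counts[prev][cur] += 1
        return counts

    def generate(counts, max_len=20):
        """Sample the next word given only the previous word (Markov assumption)."""
        word, output = "<s>", []
        while len(output) < max_len:
            candidates = list(counts[word])
            weights = [counts[word][w] for w in candidates]
            word = random.choices(candidates, weights=weights)[0]
            if word == "</s>":
                break
            output.append(word)
        return " ".join(output)

    model = train_bigram_counts(["I am Sam", "Sam I am", "I do not like green eggs and ham"])
    print(generate(model))

With a corpus this small the output mostly reproduces the training sentences; trained on a large news corpus, the same procedure yields streams like the "texaco, rose, one, in, this, issue" example above.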
Language Modeling
Estimating N-gram Probabilities
An example
• The Maximum Likelihood Estimate:
P(wi | wi−1) = count(wi−1, wi) / count(wi−1)
• Example corpus: <s> I am Sam </s>   <s> Sam I am </s>   <s> I do not like green eggs and ham </s>
P(I | <s>) = 2/3   P(Sam | <s>) = 1/3   P(am | I) = 2/3   P(</s> | Sam) = 1/2
More examples: Berkeley Restaurant Project sentences
• e.g., "can you tell me about any good cantonese restaurants close by"
• Result: a table of raw bigram counts and, after normalizing by the unigram counts, bigram probabilities
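A minimal sketch of the MLE estimate above (the function name is mine; the toy corpus is the one from the slide):

    from collections import Counter

    def mle_bigram_probs(sentences):
        """P(w | w_prev) = count(w_prev, w) / count(w_prev)."""
        unigrams, bigrams = Counter(), Counter()
        for sent in sentences:
            tokens = ["<s>"] + sent.split() + ["</s>"]
            unigrams.update(tokens[:-1])            # every token that occurs as a context
            bigrams.update(zip(tokens, tokens[1:]))
        return {(prev, w): c / unigrams[prev] for (prev, w), c in bigrams.items()}

    probs = mle_bigram_probs(["I am Sam", "Sam I am", "I do not like green eggs and ham"])
    print(probs[("<s>", "I")])   # 2/3, matching the slide
    print(probs[("I", "am")])    # 2/3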
Practical Issues
• We do everything in log space:
  • avoids underflow
  • (also, adding is faster than multiplying)
log(p1 × p2 × p3 × p4) = log p1 + log p2 + log p3 + log p4
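A two-line illustration of the log-space trick (the probabilities are made-up values):

    import math

    probs = [0.1, 0.01, 0.001, 0.0001]           # per-word probabilities
    logprob = sum(math.log(p) for p in probs)    # add logs instead of multiplying
    print(logprob)                               # ≈ -23.026
    print(math.exp(logprob))                     # 1e-10, recovered only if needed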
Google N-Gram Release, August 2006
http://googleresearch.blogspot.com/2006/08/all-our-n-gram-are-belong-to-you.html
Intuition of Perplexity
• The Shannon Game: How well can we predict the next word?
I always order pizza with cheese and ____
The 33rd President of the US was ____
I saw a ____
• Possible continuations, with probabilities: mushrooms 0.1, pepperoni 0.1, anchovies 0.01, …, fried rice 0.0001, …, and 1e-100
• Unigrams are terrible at this game. (Why?)
• A better model of a text is one which assigns a higher probability to the word that actually occurs
Perplexity
The best language model is one that best predicts an unseen test set
• Gives the highest P(sentence)
Perplexity is the inverse probability of the test set, normalized by the number of words:
PP(W) = P(w1 w2 … wN)^(−1/N)
Chain rule:
PP(W) = ( ∏i=1..N 1 / P(wi | w1 … wi−1) )^(1/N)
For bigrams:
PP(W) = ( ∏i=1..N 1 / P(wi | wi−1) )^(1/N)
Minimizing perplexity is the same as maximizing probability
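A minimal sketch of bigram perplexity, computed in log space as recommended earlier (the lookup-table representation of the model is an assumption for illustration):

    import math

    def perplexity(tokens, bigram_prob):
        """PP(W) = exp(-(1/N) * sum_i log P(w_i | w_{i-1}))."""
        n = len(tokens) - 1                      # number of predicted words
        log_sum = sum(math.log(bigram_prob[(prev, w)])
                      for prev, w in zip(tokens, tokens[1:]))
        return math.exp(-log_sum / n)

    # Sanity check: a model that always assigns probability 1/4
    # has perplexity 4, whatever the text.
    uniform = {("<s>", "a"): 0.25, ("a", "b"): 0.25, ("b", "</s>"): 0.25}
    print(perplexity(["<s>", "a", "b", "</s>"], uniform))   # 4.0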
Approximating Shakespeare
Shakespeare as corpus
• N = 884,647 tokens, V = 29,066 word types
• Shakespeare produced 300,000 bigram types out of V² = 844 million possible bigrams
• So 99.96% of the possible bigrams were never seen (have zero entries in the table)
• Quadrigrams are worse: what's coming out looks like Shakespeare because it is Shakespeare
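The 99.96% figure follows directly from these counts; a quick check:

    V = 29_066
    possible = V ** 2            # 844,832,356 ≈ 844 million possible bigrams
    seen = 300_000
    print(1 - seen / possible)   # 0.99964... → 99.96% never observed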
Zeros
• Training set:
… denied the allegations
… denied the reports
… denied the claims
… denied the request
• Test set:
… denied the offer
… denied the loan
P("offer" | denied the) = 0
When we have sparse statistics:
P(w | denied the): 3 allegations, 2 reports, 1 claims, 1 request (7 total)
while other plausible words (outcome, attack, man, …) get zero
• Steal probability mass to generalize better:
P(w | denied the): 2.5 allegations, 1.5 reports, 0.5 claims, 0.5 request, 2 other (7 total)
(the "other" mass is spread over previously unseen words like outcome, attack, man, …)
Add-one estimation
• Also called Laplace smoothing: pretend we saw each word one more time than we did
• MLE estimate:
P_MLE(wi | wi−1) = c(wi−1, wi) / c(wi−1)
• Add-1 estimate:
P_Add-1(wi | wi−1) = (c(wi−1, wi) + 1) / (c(wi−1) + V)
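A minimal sketch contrasting the two estimators (the helper names are mine):

    from collections import Counter

    def bigram_estimators(sentences):
        unigrams, bigrams = Counter(), Counter()
        vocab = {"</s>"}
        for sent in sentences:
            tokens = ["<s>"] + sent.split() + ["</s>"]
            vocab.update(sent.split())
            unigrams.update(tokens[:-1])
            bigrams.update(zip(tokens, tokens[1:]))
        V = len(vocab)

        def p_mle(w, prev):
            return bigrams[(prev, w)] / unigrams[prev]

        def p_add1(w, prev):
            return (bigrams[(prev, w)] + 1) / (unigrams[prev] + V)

        return p_mle, p_add1

    p_mle, p_add1 = bigram_estimators(["I am Sam", "Sam I am"])
    print(p_mle("offer", "Sam"))    # 0.0 — an unseen bigram gets zero under MLE
    print(p_add1("offer", "Sam"))   # 1/6 ≈ 0.167 — small but nonzero after add-1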
Laplace-smoothed bigrams
Reconstituted counts
c*(wi−1, wi) = (c(wi−1, wi) + 1) × c(wi−1) / (c(wi−1) + V)
Linear Interpolation
• Simple interpolation:
P̂(wi | wi−2 wi−1) = λ1 P(wi | wi−2 wi−1) + λ2 P(wi | wi−1) + λ3 P(wi),  where λ1 + λ2 + λ3 = 1
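A minimal sketch of simple interpolation with fixed lambdas (the weights and the dummy estimators are illustrative; in practice the lambdas are tuned on held-out data):

    def interp_prob(w, u, v, p_tri, p_bi, p_uni, lambdas=(0.5, 0.3, 0.2)):
        """P^(w | u v) = λ1·P(w | u v) + λ2·P(w | v) + λ3·P(w)."""
        l1, l2, l3 = lambdas
        assert abs(l1 + l2 + l3 - 1.0) < 1e-9    # weights must sum to 1
        return l1 * p_tri(w, u, v) + l2 * p_bi(w, v) + l3 * p_uni(w)

    # Even when the trigram estimate is zero, the mixture stays nonzero:
    print(interp_prob("the", "of", "all",
                      p_tri=lambda w, u, v: 0.0,   # unseen trigram
                      p_bi=lambda w, v: 0.2,
                      p_uni=lambda w: 0.05))       # 0.5*0 + 0.3*0.2 + 0.2*0.05 = 0.07

Backing off to lower-order estimates this way is one simple route to the "steal probability mass" idea from the smoothing slides.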