Index Compression
Adapted from Lectures by Prabhakar Raghavan (Yahoo and Stanford) and Christopher Manning (Stanford)
Key step in indexing: sorting the (term, docID) pairs. This sort was implemented by exploiting disk-based (external) sorting.
Today
Reuters RCV1 statistics (symbols used throughout):
  N  documents (800,000 for RCV1)
  L  avg. # tokens per document
  M  terms, i.e. word types (400,000 for RCV1)
     avg. # bytes per token (incl. spaces/punct.)
Why compression?
[read compressed data and decompress] is faster than [read uncompressed data]
- Make it small enough to keep in main memory
- Reduce disk space needed; decrease time to read from disk
- Large search engines keep a significant part of postings in memory
[Table (not reproduced): dictionary and postings sizes for RCV1 after each preprocessing step, with Δ% and cumulative % columns.]
Exercise: give intuitions for all the 0 entries. Why do some zero entries correspond to big deltas in other columns?
Lossless vs. lossy compression: lossless compression preserves all information, and is what we mostly do in IR. Several of the preprocessing steps can be viewed as lossy compression: case folding, stop words, stemming, number elimination. Chap/Lecture 7: prune postings entries that are unlikely to turn up in the top-k list for any query.
Heaps' Law: M = kT^b, where M is the size of the vocabulary and T is the number of tokens in the collection. Typical values: 30 ≤ k ≤ 100 and b ≈ 0.5. In a log-log plot of vocabulary size M vs. T, Heaps' law is a line.
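A minimal sketch (assuming the RCV1 fit values k ≈ 44 and b = 0.49 discussed on the next slide) of how Heaps' law predicts vocabulary growth:

```python
# Heaps' law: M = k * T**b estimates vocabulary size M from token count T.
# k = 44 and b = 0.49 are the RCV1 least-squares fit values quoted below;
# treat them as illustrative, not as universal constants.

def heaps_vocabulary(num_tokens: int, k: float = 44.0, b: float = 0.49) -> int:
    """Predicted number of distinct terms for a collection of num_tokens tokens."""
    return round(k * num_tokens ** b)

for tokens in (10_000, 1_000_000, 100_000_000):
    print(f"T = {tokens:>11,}  ->  M ~ {heaps_vocabulary(tokens):,}")
```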
Heaps' Law
For RCV1, the dashed line log10 M = 0.49 log10 T + 1.64 is the best least-squares fit. Thus M = 10^1.64 T^0.49, so k = 10^1.64 ≈ 44 and b = 0.49.
Zipf's law
We also study the relative frequencies of terms. In natural language, there are a few very frequent terms and very many rare terms. Zipf's law: the ith most frequent term has frequency proportional to 1/i, i.e. cf_i ∝ 1/i, so cf_i = c/i where c is a normalizing constant. cf_i is the collection frequency: the number of occurrences of the term t_i in the collection.
Zipf consequences
If the most frequent term (the) occurs cf_1 times, then the second most frequent term (of) occurs cf_1/2 times, and the third most frequent term (and) occurs cf_1/3 times.
Equivalently: cf_i = c/i where c is a normalizing factor, so log cf_i = log c - log i: a linear relationship between log cf_i and log i.
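A tiny numeric check (the top-term count c is hypothetical, chosen only for illustration) that Zipf-distributed collection frequencies are linear in log-log space:

```python
import math

# Zipf's law: cf_i = c / i. With a hypothetical c = 1,000,000 occurrences for
# the top term, verify that log(cf_i) = log(c) - log(i).
c = 1_000_000
for i in (1, 2, 3, 10, 100, 1000):
    cf_i = c / i
    print(f"rank {i:>4}: cf = {cf_i:>12.1f}, "
          f"log cf = {math.log10(cf_i):.3f}, "
          f"log c - log i = {math.log10(c) - math.log10(i):.3f}")
```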
Compression
- First, we will consider space for dictionary and postings
- Basic Boolean index only
- No study of positional indexes, etc.
- We will devise compression schemes
DICTIONARY COMPRESSION
- Must keep in memory
- Search begins with the dictionary
- Memory footprint competition
- Embedded/mobile devices
Dictionary storage, first cut: an array of fixed-width entries. With ~400K terms at 20 bytes per term plus 4 bytes each for the document frequency and the postings pointer, this is 400K × 28 B = 11.2 MB.
Most of the bytes in the Term column are wasted: we allot 20 bytes even for 1-letter terms.
- Written English averages ~4.5 characters/word.
- Avg. dictionary word in English: ~8 characters.
- Short words dominate token counts but not the type (term) average.
Compress the term list: dictionary-as-a-string. Store the dictionary as one long string of characters, e.g.
  …systilesyzygeticsyzygialsyzygyszaibelyiteszczecinszomo…
The table then stores, per term: Freq. (e.g. 33, 29, 44, 126 for consecutive terms), a postings pointer, and a term pointer into the string.
Total string length = 400K × 8 B = 3.2 MB. Pointers must resolve 3.2M positions: log2 3.2M ≈ 22 bits ≈ 3 bytes.
Space for the dictionary as a string:
- 4 bytes per term for Freq.
- 4 bytes per term for the pointer to postings
- 3 bytes per term pointer (into the string)
- avg. 8 bytes per term in the term string
Now avg. 11 bytes/term of fixed-width fields, not 20. Total: 400K terms × 19 B ≈ 7.6 MB (against 11.2 MB for fixed width).
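A minimal sketch (illustrative helper names, not the lecture's reference code) of the dictionary-as-a-string idea: one concatenated term string plus an array of term-start offsets standing in for the 3-byte term pointers:

```python
# Dictionary-as-a-string sketch: terms are concatenated into one string and each
# entry stores only an offset into that string instead of a fixed 20-byte field.

def build_string_dictionary(sorted_terms):
    """Return (term_string, term_offsets); terms are assumed sorted."""
    term_string = "".join(sorted_terms)
    offsets, pos = [], 0
    for t in sorted_terms:
        offsets.append(pos)
        pos += len(t)
    return term_string, offsets

def term_at(term_string, offsets, i):
    """Recover the i-th term: it spans from its offset to the next term's offset."""
    end = offsets[i + 1] if i + 1 < len(offsets) else len(term_string)
    return term_string[offsets[i]:end]

s, offs = build_string_dictionary(["syzygetic", "syzygial", "syzygy", "szaibelyite"])
print(term_at(s, offs, 2))   # -> "syzygy"
```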
Blocking
- Store term pointers to every kth term string only; in the string, prefix each term with its length (1 extra byte per term).
- Net effect (k = 4): we trade 3 saved term pointers (3 bytes each) for 4 length bytes per block, shaving roughly another 0.5 MB (7.6 MB down to 7.1 MB, per the summary table at the end).
Exercise
Estimate the space usage (and savings compared to 7.6 MB) with blocking, for block sizes of k = 4, 8 and 16.
For k = 8: for every block of 8 terms we need to store an extra 8 bytes for the lengths, but we save 7 × 3 bytes of term pointers. Saving: (21 - 8)/8 × 400K ≈ 0.65 MB.
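A quick sketch (same assumptions as this exercise: 400K terms, 3-byte term pointers, 1 length byte per term) that evaluates the savings for the requested block sizes:

```python
# Blocked dictionary storage: keep a term pointer only for every k-th term,
# but spend 1 length byte per term inside the string.
NUM_TERMS = 400_000
PTR_BYTES = 3          # term-pointer size from the earlier slide
BASELINE_MB = 7.6      # dictionary-as-a-string without blocking

def blocking_savings_mb(k: int) -> float:
    """MB saved: per block, (k-1) pointers saved minus k length bytes spent."""
    saved_per_block = (k - 1) * PTR_BYTES - k * 1
    return saved_per_block / k * NUM_TERMS / 1e6

for k in (4, 8, 16):
    s = blocking_savings_mb(k)
    print(f"k = {k:2d}: save ~{s:.2f} MB  ->  ~{BASELINE_MB - s:.2f} MB total")
```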
Exercise
Estimate the impact on search performance (and slowdown compared to k=1) with blocking, for block sizes of k = 4, 8 and 16.
Binary search takes logarithmic time to get to one of the (n/k) blocks (leaves); then linear time, proportional to k/2 on average, for the subsequent search through the terms in the block. A closed-form solution for the slowdown is not obvious.
Front coding
Front coding: sorted words commonly have a long common prefix, so store the differences only (for the last k-1 terms in a block of k).
Before: 8automata 8automate 9automatic 10automation
After:  8automat*a 1e 2ic 3ion
This encodes the shared prefix automat once; * marks the end of the prefix, and each count gives the number of extra suffix characters that follow.
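A short sketch of one common front-coding variant (each term coded against its predecessor rather than a single block-wide prefix; helper names are mine):

```python
# Front coding sketch for one block of sorted terms: the first term is stored in
# full; each later term stores only (shared-prefix length, remaining suffix).

def front_encode(block):
    encoded = [block[0]]
    for prev, term in zip(block, block[1:]):
        p = 0                                   # length of common prefix with previous term
        while p < min(len(prev), len(term)) and prev[p] == term[p]:
            p += 1
        encoded.append((p, term[p:]))
    return encoded

def front_decode(encoded):
    terms = [encoded[0]]
    for p, suffix in encoded[1:]:
        terms.append(terms[-1][:p] + suffix)
    return terms

block = ["automata", "automate", "automatic", "automation"]
enc = front_encode(block)
print(enc)                                      # ['automata', (7, 'e'), (7, 'ic'), (8, 'on')]
assert front_decode(enc) == block
```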
POSTINGS COMPRESSION
Postings compression
The postings file is much larger than the dictionary, by a factor of at least 10. Key desideratum: store each posting compactly. A posting for our purposes is a docID. For Reuters (800,000 documents), we would use 32 bits per docID when using 4-byte integers. Alternatively, we can use log2 800,000 ≈ 20 bits per docID. Our goal: use a lot less than 20 bits per docID.
A term like arachnocentric occurs in maybe one doc out of a million; we would like to store this posting using log2 1M ≈ 20 bits. A term like the occurs in virtually every doc, so 20 bits/posting is too expensive.
We store the list of docs containing a term in increasing order of docID, so it suffices to store gaps (differences between consecutive docIDs). Hope: most gaps can be encoded/stored with far fewer than 20 bits.
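A minimal sketch of gap encoding (the docIDs are the ones used in the VB example a few slides below):

```python
# Gap encoding sketch: postings are stored as differences between consecutive
# docIDs; the first "gap" is the first docID itself.

def to_gaps(doc_ids):
    return [doc_ids[0]] + [b - a for a, b in zip(doc_ids, doc_ids[1:])]

def from_gaps(gaps):
    doc_ids, total = [], 0
    for g in gaps:
        total += g
        doc_ids.append(total)
    return doc_ids

postings = [824, 829, 215406]
print(to_gaps(postings))                    # [824, 5, 214577]
assert from_gaps(to_gaps(postings)) == postings
```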
Aim:
For arachnocentric, we will use ~20 bits/gap entry. For the, we will use ~1 bit/gap entry.
If the average gap for a term is G, we want to use ~ log2 G bits/gap entry. Key challenge: encode every integer (gap) with about as few bits as needed for that integer. Variable-length codes achieve this by using short codes for small numbers.
Variable Byte (VB) codes: for a gap value G, use close to the fewest bytes needed to hold log2 G bits.
- Begin with one byte to store G and dedicate 1 bit in it to be a continuation bit c.
- If G ≤ 127, binary-encode it in the 7 available bits and set c = 1.
- Else encode G's lower-order 7 bits and then use additional bytes to encode the higher-order bits using the same algorithm.
- At the end, set the continuation bit of the last byte to 1 (c = 1) and of the other bytes to 0 (c = 0).
Example
docIDs    824                 829        215406
gaps                          5          214577
VB code   00000110 10111000   10000101   00001101 00001100 10110001
Key property: VB-encoded postings are uniquely prefix-decodable. For a small gap (5), VB uses a whole byte.
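A runnable sketch of this VB scheme (the helper names are mine; the algorithm follows the description above):

```python
# Variable Byte (VB) encoding: 7 payload bits per byte; the continuation bit is
# set to 1 only on the last byte of each encoded number.

def vb_encode_number(g: int) -> list[int]:
    bytes_ = []
    while True:
        bytes_.insert(0, g % 128)       # prepend the lower-order 7 bits
        if g < 128:
            break
        g //= 128
    bytes_[-1] += 128                   # set continuation bit on the last byte
    return bytes_

def vb_decode(byte_stream: list[int]) -> list[int]:
    numbers, n = [], 0
    for b in byte_stream:
        if b < 128:                     # continuation bit 0: more bytes follow
            n = 128 * n + b
        else:                           # continuation bit 1: last byte of this number
            numbers.append(128 * n + (b - 128))
            n = 0
    return numbers

gaps = [824, 5, 214577]
stream = [b for g in gaps for b in vb_encode_number(g)]
print([format(b, "08b") for b in stream])   # matches the table above
assert vb_decode(stream) == gaps
```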
Instead of bytes, we can also use a different unit of alignment: 32 bits (words), 16 bits, 4 bits (nibbles), etc. Variable byte alignment wastes space if you have many small gaps; nibbles do better in such cases.
Gamma codes
Represent a gap G as a pair (length, offset). Offset is G in binary, with the leading bit cut off.
For example, 13 → 1101 → offset 101. Length is the length of the offset; for 13 (offset 101), this is 3. Encode length in unary code: 1110. The γ-code of 13 is the concatenation of length and offset: 1110,101.
number   length          offset        γ-code
0                                      none
1        0                             0
2        10              0             10,0
3        10              1             10,1
4        110             00            110,00
9        1110            001           1110,001
13       1110            101           1110,101
24       11110           1000          11110,1000
511      111111110       11111111      111111110,11111111
1025     11111111110     0000000001    11111111110,0000000001
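A small encoder/decoder sketch (helper names are mine) that reproduces the γ-codes in the table above:

```python
# Gamma code: unary(length of offset) followed by offset, where offset is the
# binary representation of G with its leading 1 removed. Requires G >= 1.

def gamma_encode(g: int) -> str:
    offset = bin(g)[3:]                      # binary of g without the '0b1' prefix
    return "1" * len(offset) + "0" + offset  # unary length, then offset

def gamma_decode_stream(bits: str) -> list[int]:
    numbers, i = [], 0
    while i < len(bits):
        n_ones = 0
        while bits[i] == "1":                # read unary length up to the 0
            n_ones += 1
            i += 1
        i += 1                               # skip the terminating 0
        offset = bits[i:i + n_ones]
        i += n_ones
        numbers.append(int("1" + offset, 2) if offset else 1)
    return numbers

for g in (1, 2, 13, 24, 511, 1025):
    print(g, gamma_encode(g))
assert gamma_decode_stream("".join(gamma_encode(g) for g in (13, 24, 1025))) == [13, 24, 1025]
```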
Exercise
Given the following sequence of γ-coded gaps, reconstruct the postings sequence:
1110001110101011111101101111011
From these γ-codes, decode and reconstruct the gaps, then the full postings.
Gamma code properties:
- Uniquely prefix-decodable, like VB
- All gamma codes have an odd number of bits
- G is encoded using 2 ⌊log2 G⌋ + 1 bits
- Machines have word boundaries: 8, 16, 32 bits
- Compressing and manipulating at individual-bit granularity will slow down query processing
- Variable byte alignment is potentially more efficient
- Regardless of efficiency, variable byte is conceptually simpler at little additional space cost
RCV1 compression
Data structure                              Size in MB
dictionary, fixed-width                         11.2
dictionary, term pointers into string            7.6
  with blocking, k = 4                           7.1
  with blocking & front coding                   5.9
collection (text, xml markup etc.)            3,600.0
collection (text)                               960.0
term-doc incidence matrix                    40,000.0
postings, uncompressed (32-bit words)           400.0
postings, uncompressed (20 bits)                250.0
postings, variable byte encoded                 116.0
postings, γ-encoded                             101.0
- We can now create an index for highly efficient Boolean retrieval that is very space-efficient
- Only 4% of the total size of the collection
- Only 10-15% of the total size of the text in the collection
- However, we've ignored positional information
- Hence, space savings are less for indexes used in practice
Text properties/model
- Zipf's Law: the frequency of the ith most frequent word is 1/i times that of the most frequent word. About 50% of the words in running text are stopwords.
- The probability of occurrence of a symbol depends on the previous symbol (finite-context or Markovian model).
- The number of distinct words in a document (vocabulary) grows as the square root of the size of the document (Heaps' Law with b ≈ 0.5).
- The average length of non-stop words is 6 to 7 letters.
Similarity
- Hamming distance between a pair of strings of the same length is the number of positions that have different characters.
- Levenshtein (edit) distance is the minimum number of character insertions, deletions, and substitutions needed to make two strings the same. (Extensions include transposition, weighted operations, etc.)
- The UNIX diff utility uses the longest common subsequence, obtained by deletion only, to align strings/words/lines.
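A compact dynamic-programming sketch of Levenshtein distance (the standard textbook recurrence, not tied to any particular library):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))          # prev[j] = distance between a[:i-1] and b[:j]
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion from a
                            curr[j - 1] + 1,      # insertion into a
                            prev[j - 1] + cost))  # substitution (or match)
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))     # -> 3
```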
Consider N = 1M documents, each with about L = 1K terms, at an average of 6 bytes/term incl. spaces/punctuation: 10^6 × 10^3 × 6 B = 6 GB of data.
A 500K × 1M term-document incidence matrix has half a trillion 0s and 1s (500 billion). But it has no more than one billion 1s, since each of the 1M documents contains at most 1K distinct terms; the matrix is extremely sparse.
The ith most frequent term has frequency proportional to 1/i. Let this (relative) frequency be c/i. Then Σ_{i=1}^{500,000} c/i = 1.
The kth harmonic number is H_k = Σ_{i=1}^{k} 1/i. Thus c = 1/H_m, which is ≈ 1/ln m = 1/ln(500K) ≈ 1/13. So the ith most frequent term has frequency roughly 1/(13i).
Expected number of occurrences of the ith most frequent term in a doc of length L is Lc/i ≈ L/(13i) ≈ 76/i for L = 1000.
Let J = Lc ≈ 76. Then the J most frequent terms are likely to occur in every document. Now imagine the term-document incidence matrix with rows sorted in decreasing order of term frequency:
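A quick numeric check of these approximations (M and L as assumed above; the exact harmonic sum gives a slightly smaller constant than the ln-based 1/13):

```python
import math

M = 500_000                                    # vocabulary size assumed above
L = 1_000                                      # terms per document

H_m = sum(1.0 / i for i in range(1, M + 1))    # exact harmonic number
c = 1.0 / H_m
print(f"H_m = {H_m:.2f}  (ln M = {math.log(M):.2f}),  c ~ 1/{1 / c:.1f}")

J = L * c                                      # ~73 exactly; ~76 with the ln approximation
print(f"J = Lc = {J:.1f}")
for i in (1, 2, 76, 152):
    print(f"term rank {i:>4}: expected occurrences per doc ~ {L * c / i:.2f}")
```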
Informal Observations
- The most frequent term appears approx. 76 times in each document.
- The 2nd most frequent term appears approx. 38 times in each document.
- The 76th most frequent term appears approx. once in each document.
- The first 76 terms appear at least once in each document.
- The next 76 terms appear at least once in every two documents.
- etc.
J-row blocks
In the ith of these J-row blocks, we have J rows, each with N/i gaps of about i each. Encoding a gap of i using gamma codes takes 2 log2 i + 1 bits. So such a row uses space ~ (2N log2 i)/i bits. For the entire block, ~ (2NJ log2 i)/i bits, which in our case is ~ 1.5 × 10^8 (log2 i)/i bits. Sum this over i from 1 up to m/J = 500K/76 ≈ 6500 (since there are m/J blocks).
Exercise
Work out the above sum and show it adds up to about 55 × 150 Mbits, which is about 1 GByte. So we've taken 6 GB of text and produced from it a 1 GB index that can handle Boolean queries!
Neat! (The index is about 16.7% of the collection size.)
Make sure you understand all the approximations in our probabilistic calculation.
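A short sketch of the sum (using the quantities assumed above), if you want to check the exercise numerically:

```python
import math

N = 1_000_000                 # documents
J = 76                        # rows per block (J = Lc)
NUM_BLOCKS = 6_500            # ~ m / J = 500K / 76

# Each block i costs ~ (2 N J log2 i) / i bits; i = 1 contributes 0 since log2(1) = 0.
total_bits = sum(2 * N * J * math.log2(i) / i for i in range(1, NUM_BLOCKS + 1))
print(f"~{total_bits / 8 / 1e9:.2f} GB for the gamma-coded postings")
```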