Lecture 4 - Index Construction & Compression
Vasily Sidorov
Plan
• Last lecture:
—Dictionary data structures
—Tolerant retrieval
◦ Wildcards
◦ Spell correction
◦ Soundex
—Index construction
Let’s Recall

Dictionary:
Term        TermID
friend      1
roman       2
countryman  3
lend        4
i           5
you         6
ear         7

Inverted Index:
TermID  Freq.  Postings List (DocIDs)
1       4      → 1 → 5 → 6 → 12
5       2      → 1 → 8
7       6      → 1 → 2 → 6 → 8 → 12 → 13

• Can also be positional
Ch. 4
Index construction
• How do we construct an index?
Sec. 4.1
Hardware basics
• Many design decisions in information retrieval are
based on the characteristics of hardware
Sec. 4.1
Hardware basics
• Access to data in memory (RAM) is much faster
than access to data on disk.
• Disk seeks: No data is transferred from disk while
the disk head is being positioned.
• Therefore: Transferring one large chunk of data
from disk to memory is faster than transferring
many small chunks.
• Disk I/O is block-based: Reading and writing of
entire blocks (as opposed to smaller chunks).
• Block sizes: 8 KB to 256 KB.
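To make the block-transfer point concrete, here is a generic Python sketch (not tied to any particular system) of reading a file sequentially in fixed-size blocks; the file name in the commented-out usage line is made up.

def read_in_blocks(path, block_size=64 * 1024):
    """Read a file sequentially in fixed-size blocks (here 64 KB), the access
    pattern that disks and operating systems handle most efficiently."""
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            yield block

# total = sum(len(b) for b in read_in_blocks("postings.bin"))  # hypothetical file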
Hardware basics
• Solid State Drives (SSD) are mitigating some of the
problems:
—~100 times faster access time than HDDs
—1–2 orders of magnitude faster I/O than HDDs
—Random access: (almost) no “disk seek” delay
• Still much slower than RAM
• Still reads/writes in blocks
• Still expensive compared to HDDs
Sec. 4.1
Hardware basics
• Servers used in IR systems now typically have dozens of GB of main memory, sometimes hundreds of GB.
Sec. 4.2
• 4.5 bytes per token vs. 7.5 bytes per term: why?
Sec. 4.2
Bottleneck
• Parse and build postings entries one doc at a time
• Now sort postings entries by term (then by doc
within each term)
• Doing this with random disk seeks would be too
slow – must sort T=100M records
Sec. 4.2
(Figure: sorted runs on disk being merged into a single merged run.)
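A minimal Python sketch of the blocked, sort-based approach (BSBI, mentioned below alongside SPIMI): sort postings block by block in memory, write each sorted run to a temporary file, then merge the runs with sequential reads only. File handling is simplified; heapq.merge stands in for the multi-way merge of runs, and for brevity each run is loaded back whole rather than streamed.

import heapq, pickle, tempfile

def external_sort_postings(pairs, block_size=10_000_000):
    """Sort a stream of (termID, docID) pairs that is too large for memory:
    in-memory sorted runs written to disk, followed by a k-way merge."""
    runs, block = [], []
    for pair in pairs:
        block.append(pair)
        if len(block) == block_size:
            runs.append(_write_run(sorted(block)))   # sort by term, then by doc
            block = []
    if block:
        runs.append(_write_run(sorted(block)))
    # merging sorted runs needs only sequential reads, no random disk seeks
    return heapq.merge(*(_read_run(path) for path in runs))

def _write_run(sorted_block):
    f = tempfile.NamedTemporaryFile(delete=False)
    pickle.dump(sorted_block, f)
    f.close()
    return f.name

def _read_run(path):
    with open(path, "rb") as f:
        yield from pickle.load(f)    # a real system would stream the run instead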
Sec. 4.3
SPIMI:
Single-Pass In-Memory Indexing
• Key idea 1: Generate separate dictionaries for each
block – no need to maintain term-termID mapping
across blocks.
• Key idea 2: Don’t sort. Accumulate postings in
postings lists as they occur.
• With these two ideas we can generate a complete
inverted index for each block.
• These separate indexes can then be merged into
one big index.
Sec. 4.3
SPIMI-Invert
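The SPIMI-Invert pseudocode figure is not reproduced here; below is a minimal Python sketch of the two key ideas above (a fresh dictionary per block, postings accumulated as they occur, terms sorted only when the block is written out). The plain-text block format is illustrative only.

from collections import defaultdict

def spimi_invert(token_stream, block_file):
    """Process one block of (term, docID) pairs and write a complete mini-index.

    token_stream yields (term, docID) pairs for as many documents as fit in memory;
    documents are assumed to arrive in docID order, so duplicate pairs are adjacent.
    """
    dictionary = defaultdict(list)          # key idea 1: fresh dictionary per block
    for term, doc_id in token_stream:       # key idea 2: don't sort; just accumulate
        postings = dictionary[term]
        if not postings or postings[-1] != doc_id:
            postings.append(doc_id)
    with open(block_file, "w") as f:        # terms are sorted only at write-out time
        for term in sorted(dictionary):
            f.write(term + ":" + ",".join(map(str, dictionary[term])) + "\n")
    return block_file

# The per-block files (each a complete, sorted mini-index) are then merged
# into one big index, as described above.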
SPIMI: Compression
• Compression makes SPIMI even more efficient.
—Compression of terms
—Compression of postings
• We’ll discuss later today
Sec. 4.4
Distributed indexing
• For web-scale indexing (don’t try this at home!):
must use a distributed computing cluster
Google Data Center in Jurong West
Sec. 4.4
Distributed indexing
• Maintain a master machine directing the indexing
job – considered “safe”.
• Break up indexing into sets of (parallel) tasks.
• Master machine assigns each task to an idle
machine from a pool.
Sec. 4.4
Parallel tasks
• We will use two sets of parallel tasks
—Parsers
—Inverters
• Break the input document collection into splits
• Each split is a subset of documents (corresponding
to blocks in BSBI/SPIMI)
Sec. 4.4
Parsers
• Master assigns a split to an idle parser machine
• Parser reads one document at a time and emits
(term, doc) pairs
• Parser writes pairs into j partitions
• Each partition is for a range of terms’ first letters
—(e.g., a-f, g-p, q-z) – here j = 3.
• Now to complete the index inversion
Sec. 4.4
Inverters
• An inverter collects all (term,doc) pairs (= postings)
for one term-partition.
• Sorts and writes to postings lists
Sec. 4.4
Data flow
(Figure: splits are read by parsers in the map phase, which write segment files; inverters in the reduce phase turn the segment files into postings lists.)
Sec. 4.4
MapReduce
• The index construction algorithm we just described
is an instance of MapReduce.
• MapReduce (Dean & Ghemawat 2004) is a robust
and conceptually simple framework for distributed
computing …
• … without having to write code for the distribution
part.
• They describe the Google indexing system (ca.
2002) as consisting of a number of phases, each
implemented in MapReduce.
Sec. 4.4
MapReduce
• Index construction was just one phase.
• Another phase: transforming a term-partitioned
index into a document-partitioned index.
—Term-partitioned: one machine handles a subrange
of terms
—Document-partitioned: one machine handles a
subrange of documents
• As we’ll discuss in the web part of the course, most
search engines use a document-partitioned index
for better load balancing, etc.
Example for index construction
Map:
—d1 : C came, C c’ed.
—d2 : C died.
—→ <C,d1>, <came,d1>, <C,d1>, <c’ed, d1>, <C, d2>,
<died,d2>
Reduce:
—(<C,(d1,d2,d1)>, <died,(d2)>, <came,(d1)>, <c’ed,(d1)>)
—→ (<C,(d1:2,d2:1)>, <died,(d2:1)>, <came,(d1:1)>,
<c’ed,(d1:1)>)
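A single-process Python sketch mirroring this example; in the real framework the map calls run on parser machines, the reduce calls on inverter machines, and the grouping between the two phases is done by MapReduce itself.

from collections import defaultdict

def map_fn(doc_id, text):
    """Parser: emit one (term, docID) pair per token."""
    return [(tok.strip(".,"), doc_id) for tok in text.split()]

def reduce_fn(term, doc_ids):
    """Inverter: collect the docIDs for one term into (docID, term frequency) postings."""
    counts = defaultdict(int)
    for d in doc_ids:
        counts[d] += 1
    return term, sorted(counts.items())

docs = {"d1": "C came, C c'ed.", "d2": "C died."}

# map phase
pairs = [pair for doc_id, text in docs.items() for pair in map_fn(doc_id, text)]

# grouping of values by key, done by the framework between the phases
grouped = defaultdict(list)
for term, doc_id in pairs:
    grouped[term].append(doc_id)

# reduce phase, one call per term
index = dict(reduce_fn(term, doc_ids) for term, doc_ids in grouped.items())
print(index)   # {'C': [('d1', 2), ('d2', 1)], 'came': [('d1', 1)], "c'ed": [('d1', 1)], 'died': [('d2', 1)]}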
Sec. 4.5
Dynamic indexing
• Up to now, we have assumed that collections are
static.
• They rarely are:
—Documents come in over time and need to be
inserted.
—Documents are deleted and modified.
• This means that the dictionary and postings lists
have to be modified:
—Postings updates for terms already in dictionary
—New terms added to dictionary
◦ #ValentinesDay
Sec. 4.5
Simplest approach
• Maintain “big” main index
• New docs go into “small” auxiliary index
• Search across both, merge results
• Deletions
—Invalidation bit-vector for deleted docs
—Filter docs output on a search result by this
invalidation bit-vector
• Periodically, re-index into one main index
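A toy sketch of the query path under this scheme; a Python set stands in for the invalidation bit-vector, and the term and docIDs are made up.

def search(term, main_index, aux_index, deleted):
    """Query both indexes, then filter out invalidated (deleted) documents."""
    hits = main_index.get(term, []) + aux_index.get(term, [])
    return sorted(d for d in set(hits) if d not in deleted)

main = {"brutus": [1, 4, 7]}      # "big" main index
aux = {"brutus": [9]}             # newly added docs live in the "small" auxiliary index
print(search("brutus", main, aux, deleted={4}))   # [1, 7, 9]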
Sec. 4.5
Logarithmic merge
• Maintain a series of indexes, each twice as large as
the previous one
—At any time, some of these powers of 2 are
instantiated
—Keep smallest (Z0) in memory
—Larger ones (I0, I1, …) on disk
• If Z0 gets too big (> n), write it to disk as I0, or merge it with I0 (if I0 already exists) into Z1
• Then either write Z1 to disk as I1 (if there is no I1), or merge it with I1 to form Z2
• etc…
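A minimal sketch of logarithmic merging, with plain dicts standing in for the indexes Z0 and I0, I1, …; the capacity n is counted here simply as a number of postings.

def _merge(a, b):
    """Merge two indexes (term -> sorted list of docIDs)."""
    out = {t: list(p) for t, p in a.items()}
    for t, p in b.items():
        out[t] = sorted(set(out.get(t, []) + p))
    return out

def add_document(z0, disk, doc_id, terms, n=1000):
    """Add one document; returns the (possibly fresh) in-memory index Z0.

    disk[i] holds on-disk index I_i of roughly n * 2**i postings, or None if empty.
    """
    for t in terms:
        z0.setdefault(t, []).append(doc_id)
    if sum(len(p) for p in z0.values()) <= n:
        return z0
    # Z0 is full: push it down the I0, I1, ... chain, merging whenever a slot is taken
    z, i = z0, 0
    while i < len(disk) and disk[i] is not None:
        z = _merge(disk[i], z)        # Z_{i+1} = I_i merged with Z_i
        disk[i] = None
        i += 1
    if i == len(disk):
        disk.append(None)
    disk[i] = z                       # written out as I_i
    return {}                         # start with a fresh, empty Z0

# usage: z0, disk = {}, []; then z0 = add_document(z0, disk, doc_id, terms) per document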
Next up: Index compression (Ch. 5)
Sec. 5.1
Exercise: give intuitions for all the ‘0’ entries. Why do some zero entries
correspond to big deltas in other columns?
Sec. 5.1
Heaps’ law: M = kT^b
• M is the size of the vocabulary (number of distinct terms), T is the number of tokens in the collection.
• For RCV1, the best least-squares fit is log10 M = 0.49 log10 T + 1.64.
• Thus, M = 10^1.64 T^0.49, so k = 10^1.64 ≈ 44 and b = 0.49.
Sec. 5.1
Exercises
• What is the effect of including spelling errors, vs.
automatically correcting spelling errors on Heaps’
law?
• Compute the vocabulary size M for this scenario:
—Looking at a collection of web pages, you find that
there are 3000 different terms in the first 10,000
tokens and 30,000 different terms in the first
1,000,000 tokens.
—Assume a search engine indexes a total of
20,000,000,000 (2 × 1010) pages, containing 200
tokens on average
—What is the size of the vocabulary of the indexed
collection as predicted by Heaps’ law?
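One possible way to work the second exercise in code; the numbers are the ones given above, and the script simply fits k and b from the two observations and applies Heaps’ law to the full collection.

import math

# two observations from the web-page sample: (tokens T, vocabulary M)
t1, m1 = 10_000, 3_000
t2, m2 = 1_000_000, 30_000

# Heaps' law M = k * T**b  =>  log M = log k + b * log T; solve from the two points
b = math.log(m2 / m1) / math.log(t2 / t1)
k = m1 / t1 ** b

# full collection: 2e10 pages with 200 tokens each
T = 20_000_000_000 * 200
M = k * T ** b
print(f"b = {b:.2f}, k = {k:.0f}, predicted vocabulary M = {M:,.0f}")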
Sec. 5.1
Zipf’s law
• The ith most frequent term has collection frequency cf_i proportional to 1/i.
Sec. 5.1
Zipf consequences
• If the most frequent term (the) occurs cf_1 times
—then the second most frequent term (of) occurs cf_1/2 times
—the third most frequent term (and) occurs cf_1/3 times …
• Equivalently: cf_i = K/i where K is a normalizing factor, so
—log cf_i = log K - log i
—Linear relationship between log cf_i and log i
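A small sketch for checking the claimed linear relationship: estimate the least-squares slope of log cf_i against log i, which should be close to -1 if the collection is Zipfian. The input below is an artificial, exactly Zipfian frequency list.

import math

def zipf_slope(freqs):
    """Least-squares slope of log(cf_i) vs. log(i) for a descending frequency list."""
    xs = [math.log(i) for i in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

K = 1_000_000
print(zipf_slope([K / i for i in range(1, 101)]))   # -1.0 for exactly Zipfian counts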
Ch. 5
Compression
• Now, we will consider compressing the space for the dictionary and postings
—Basic Boolean index only
—Not considering positional indexes, etc.
—We will consider compression schemes
Sec. 5.2
Blocking
• Store pointers to every kth term in the dictionary string.
—Example below: k = 4.
• Need to store term lengths (1 extra byte per term)
….7systile9syzygetic8syzygial6syzygy11szaibelyite8szczecin9szomo….
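A minimal sketch of the blocked dictionary string: one pointer per block of k terms, a length marker before each term (a Python character standing in for the 1-byte term length), and a lookup that binary-searches the block-leading terms and then scans at most k terms. The term list is illustrative.

def build_blocked_string(sorted_terms, k=4):
    """Pack sorted terms into one string; keep a pointer only for every kth term."""
    parts, pointers, pos = [], [], 0
    for i, term in enumerate(sorted_terms):
        if i % k == 0:
            pointers.append(pos)              # one pointer per block of k terms
        parts.append(chr(len(term)) + term)   # length marker, then the term itself
        pos += 1 + len(term)
    return "".join(parts), pointers

def _term_at(s, pos):
    length = ord(s[pos])
    return s[pos + 1 : pos + 1 + length]

def lookup(term, s, pointers, k=4):
    """Binary-search the block-leading terms, then scan within that block."""
    lo, hi, b = 0, len(pointers) - 1, -1
    while lo <= hi:
        mid = (lo + hi) // 2
        if _term_at(s, pointers[mid]) <= term:
            b, lo = mid, mid + 1
        else:
            hi = mid - 1
    if b < 0:
        return False
    pos = pointers[b]
    for _ in range(k):                        # extra scan cost grows with k
        if pos >= len(s):
            break
        if _term_at(s, pos) == term:
            return True
        pos += 1 + ord(s[pos])
    return False

terms = ["syncopate", "systile", "syzygetic", "syzygial",
         "syzygy", "szaibelyite", "szczecin", "szomolnokite"]
s, ptrs = build_blocked_string(terms)
print(lookup("syzygy", s, ptrs), lookup("syzyg", s, ptrs))   # True False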
Sec. 5.2
Net savings
• Example for block size k = 4
• Where we used 3 bytes/pointer without blocking (3 × 4 = 12 bytes for four terms), we now use 3 + 4 = 7 bytes (one 3-byte pointer plus four 1-byte term lengths).
• Shaved another ~0.5 MB: this reduces the size of the dictionary from 7.6 MB to 7.1 MB.
• We can save more with larger k, at the cost of slower term lookup.
Exercise: Why is lookup slower? Estimate the performance impact of k.
Exercise: Estimate the space usage (and savings compared to 7.6 MB) with blocking, for block sizes of k = 4, 8 and 16.
Sec. 5.2
Front coding
• Front-coding:
—Sorted words commonly have a long common prefix
– store differences only
—(for the last k-1 terms in a block of k)
8automata8automate9automatic10automation
→8automat*a1e2ic3ion
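A small sketch that reproduces the front-coding example above for one block; os.path.commonprefix finds the shared prefix, and the output format simply mirrors the example rather than any particular on-disk layout.

import os

def front_code_block(terms):
    """Front-code one block of sorted terms: the first term is written in full
    (its length, the shared prefix, a *, then its remainder); for every other term
    only the length and the part after the shared prefix are written."""
    prefix = os.path.commonprefix(terms)
    first = terms[0]
    out = f"{len(first)}{prefix}*{first[len(prefix):]}"
    for t in terms[1:]:
        suffix = t[len(prefix):]
        out += f"{len(suffix)}{suffix}"
    return out

print(front_code_block(["automata", "automate", "automatic", "automation"]))
# 8automat*a1e2ic3ion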
Sec. 5.3
Postings compression
• The postings file is much larger than the dictionary,
factor of at least 10
• Key goal: store each posting compactly.
• A posting for our purposes is a docID.
—For Reuters (800,000 documents), we would use 32
bits per docID when using 4-byte integers.
—Alternatively, we can use log2 800,000 ≈ 20 bits per
docID.
• Our goal: use far fewer than 20 bits per docID.
Sec. 5.3
Example: VB-encoded postings list
• Store gaps between successive docIDs instead of the docIDs themselves; each gap is encoded in as few bytes as possible, with the high (continuation) bit set to 1 only in the last byte of a gap.

docIDs   824                829       215406
gaps                        5         214577
VB code  00000110 10111000  10000101  00001101 00001100 10110001
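A compact sketch of gap encoding plus variable byte encoding and decoding; it reproduces the byte patterns in the table above.

def vb_encode_number(n):
    """Encode one gap as a list of byte values; high bit set only on the last byte."""
    out = []
    while True:
        out.insert(0, n % 128)
        if n < 128:
            break
        n //= 128
    out[-1] += 128                       # continuation bit on the final byte
    return out

def vb_encode_postings(doc_ids):
    """Gap-encode a sorted docID list, then VB-encode each gap."""
    data, prev = [], 0
    for d in doc_ids:
        data.extend(vb_encode_number(d - prev))
        prev = d
    return bytes(data)

def vb_decode(data):
    """Rebuild the gaps from the byte stream, then prefix-sum them back to docIDs."""
    doc_ids, n, prev = [], 0, 0
    for byte in data:
        if byte < 128:
            n = 128 * n + byte
        else:
            n = 128 * n + (byte - 128)
            prev += n
            doc_ids.append(prev)
            n = 0
    return doc_ids

encoded = vb_encode_postings([824, 829, 215406])
print(" ".join(f"{b:08b}" for b in encoded))
# 00000110 10111000 10000101 00001101 00001100 10110001
print(vb_decode(encoded))                # [824, 829, 215406]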
Sec. 5.3
Gamma codes
• We can compress better with bit-level codes
—The Gamma code is the best known of these.
• Represent a gap G as a pair (length, offset)
• offset is G in binary, with the leading bit cut off
—For example 13 → 1101 → 101
• length is the length of offset
—For 13 (offset 101), this is 3.
• We encode length with unary code: 1110.
• Gamma code of 13 is the concatenation of length
and offset: 1110101
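A small sketch of gamma encoding and decoding, with the bit stream represented as a Python string of '0'/'1' characters for readability.

def unary(n):
    """Unary code for n: n ones followed by a zero."""
    return "1" * n + "0"

def gamma_encode(gap):
    """Gamma code of a gap G >= 1: unary(length of offset) followed by offset,
    where offset is G in binary with the leading 1 removed."""
    offset = bin(gap)[3:]                # bin(13) == '0b1101' -> offset '101'
    return unary(len(offset)) + offset

def gamma_decode_stream(bits):
    """Decode a concatenation of gamma codes back into the gaps."""
    gaps, i = [], 0
    while i < len(bits):
        length = 0
        while bits[i] == "1":            # read the unary-coded length
            length += 1
            i += 1
        i += 1                           # skip the terminating 0
        offset = bits[i:i + length]
        i += length
        gaps.append(int("1" + offset, 2) if length else 1)
    return gaps

print(gamma_encode(13))                                          # 1110101
print(gamma_decode_stream(gamma_encode(13) + gamma_encode(5)))   # [13, 5]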
Sec. 5.3
RCV1 compression

Data structure                              Size in MB
dictionary, fixed-width                         11.2
dictionary, term pointers into string            7.6
  with blocking, k = 4                           7.1
  with blocking & front coding                   5.9
collection (text, XML markup etc.)           3,600.0
collection (text)                              960.0
Term-doc incidence matrix                   40,000.0
postings, uncompressed (32-bit words)          400.0
postings, uncompressed (20 bits)               250.0
postings, variable byte encoded                116.0
postings, γ-encoded                            101.0