
Introduction to
Information Retrieval
Ch 19 Web Search Basics

Modified by Dongwon Lee from slides by
Christopher Manning and Prabhakar Raghavan
Lab #6 (DUE: Nov. 6, 11:55 PM)

• https://online.ist.psu.edu/ist516/labs

• Tasks:
  – Individual lab / 3 tasks
  – Search-engine-related hands-on lab
  – URL frontier exercise question

• Turn-in:
  – Solution document to ANGEL
Brief (non-technical) history

• Early keyword-based engines, ca. 1995-1997
  – Altavista, Excite, Infoseek, Inktomi, Lycos
• Paid search ranking: Goto (morphed into Overture.com → Yahoo!)
  – CPM vs. CPC
  – Your search ranking depended on how much you paid
  – Auction for keywords: casino was expensive!
Brief (non-technical) history

• 1998+: Link-based ranking pioneered by Google
  – Blew away all early engines
  – Great user experience in search of a business model
  – Meanwhile Goto/Overture's annual revenues were nearing $1 billion
• Result: Google added paid search "ads" to the side, independent of search results
  – Yahoo followed suit, acquiring Overture (for paid placement) and Inktomi (for search)
• 2005+: Google gains search share, dominating in Europe and very strong in North America
• 2009: Yahoo! and Microsoft propose combined paid search offering
Paid Search Ads

[Screenshot: a results page with paid search ads displayed alongside the algorithmic results.]

Web search basics (Sec. 19.4.1)

[Diagram: the user issues a query to the search engine; a web spider crawls The Web and feeds the indexer, which builds the indexes and ad indexes used to serve results. The sample results page for the query "miele" shows sponsored links (CG Appliance Express, www.vacuums.com, www.best-vacuum.com) next to "Web Results 1 - 10 of about 7,310,000 for miele (0.12 seconds)", led by www.miele.com, www.miele.co.uk, www.miele.de, and www.miele.at.]
User Needs (Sec. 19.4.1)

• Need [Brod02, RL04]
  – Informational – want to learn about something (~40% / 65%), e.g., low hemoglobin
  – Navigational – want to go to that page (~25% / 15%), e.g., United Airlines
  – Transactional – want to do something, web-mediated (~35% / 20%)
    – Access a service: Seattle weather
    – Downloads: Mars surface images
    – Shop: Canon S410
  – Gray areas, e.g., car rental Brasil
    – Find a good hub
    – Exploratory search: "see what's there"
How far do people look for results?

[Chart omitted. Source: iprospect.com, WhitePaper_2006_SearchEngineUserBehavior.pdf]

Users' empirical evaluation of results

• Quality of pages varies widely
  – Relevance is not enough
  – Other desirable qualities (non-IR!)
    – Content: trustworthy, diverse, non-duplicated, well maintained
    – Web readability: displays correctly & fast
    – No annoyances: pop-ups, etc.
• Precision vs. recall
  – On the web, recall seldom matters
  – What matters: precision at 1? precision within top k?
  – Comprehensiveness – must be able to deal with obscure queries
    – Recall matters when the number of matches is very small
• User perceptions may be unscientific, but are significant over a large aggregate
Users' empirical evaluation of engines

• Relevance and validity of results
• UI – simple, no clutter, error tolerant
• Trust – results are objective
• Coverage of topics for polysemic queries
• Pre/post-process tools provided
  – Mitigate user errors (auto spell check, search assist, …)
  – Explicit: search within results, more like this, refine, …
  – Anticipative: related searches, instant search (next slide)
• Impact on stemming, spell-check, etc.
  – Web addresses typed in the search box
2010: Instant Search

[Screenshots: results updating as the query is typed.]
The Web document collection (Sec. 19.2)

• No design/co-ordination
• Distributed content creation, linking, democratization of publishing
• Content includes truth, lies, obsolete information, contradictions, …
• Unstructured (text, HTML, …), semi-structured (XML, annotated photos), structured (databases), …
• Scale much larger than previous text collections … but corporate records are catching up
• Growth – slowed down from the initial "volume doubling every few months," but still expanding
• Content can be dynamically generated
The trouble with paid search ads … (Sec. 19.2.2)

• It costs money. What's the alternative?
• Search Engine Optimization (SEO):
  – "Tuning" your web page to rank highly in the algorithmic search results for select keywords
  – Alternative to paying for placement
  – Thus, intrinsically a marketing function
• Performed by companies, webmasters and consultants ("search engine optimizers") for their clients
• Some perfectly legitimate, some very shady
Search engine optimization (Spam) (Sec. 19.2.2)

• Motives
  – Commercial, political, religious, lobbies
  – Promotion funded by advertising budget
• Operators
  – Contractors (search engine optimizers) for lobbies, companies
  – Web masters
  – Hosting services
• Forums
  – E.g., Web master world (www.webmasterworld.com)
Simplest forms: Keyword Stuffing (Sec. 19.2.2)

• First-generation engines relied heavily on tf-idf
  – The top-ranked pages for the query maui resort were the ones containing the most maui's and resort's
• SEOs responded with dense repetitions of chosen terms
  – e.g., maui resort maui resort maui resort
  – Often, the repetitions would be in the same color as the background of the web page
    – Repeated terms got indexed by crawlers
    – But not visible to humans on browsers

• Pure word density cannot be trusted as an IR signal
E.g., Heaven's Gate web site

http://www.ariadne.ac.uk/issue10/search-engines/
Cloaking (Sec. 19.2.2)

• Serve fake content to the search engine spider
• DNS cloaking: switch IP address; impersonate

[Diagram: the cloaking server asks "Is this a search engine spider?" – if Y, it serves the SPAM page; if N, it serves the real doc.]
The war against spam

• Quality signals – prefer authoritative pages based on:
  – Votes from authors (linkage signals)
  – Votes from users (usage signals)
• Policing of URL submissions
  – Anti-robot test
• Limits on meta-keywords
• Robust link analysis
  – Ignore statistically implausible linkage (or text)
  – Use link analysis to detect spammers (guilt by association)
• Spam recognition by machine learning
  – Training set based on known spam
• Family-friendly filters
  – Linguistic analysis, general classification techniques, etc.
  – For images: flesh-tone detectors, source text analysis, etc.
• Editorial intervention
  – Blacklists
  – Top queries audited
  – Complaints addressed
  – Suspect pattern detection
Size of the web

• We covered some related topics in Week #1
• Refer to web measuring problems in the following slides:
  – http://pike.psu.edu/classes/ist516/latest/slides/web-size.ppt
• More related topics in Ch 19 of the IIR book
Relative Size from Overlap (Sec. 19.5)

Given two engines A and B:
• Sample URLs randomly from A
• Check if contained in B, and vice versa

A ∩ B = (1/2) × Size(A)
A ∩ B = (1/6) × Size(B)

⇒ (1/2) × Size(A) = (1/6) × Size(B)
∴ Size(A) / Size(B) = (1/6) / (1/2) = 1/3

Each test involves: (i) sampling, (ii) checking
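
To make the arithmetic concrete, here is a minimal Python sketch of the estimator (the function and argument names are invented for illustration):

def relative_size(frac_a_in_b, frac_b_in_a):
    """Estimate Size(A) / Size(B) from sampled overlap fractions.

    frac_a_in_b * Size(A) and frac_b_in_a * Size(B) both estimate |A ∩ B|,
    so Size(A) / Size(B) = frac_b_in_a / frac_a_in_b.
    """
    return frac_b_in_a / frac_a_in_b

# The slide's numbers: 1/2 of A's sample is found in B, 1/6 of B's sample in A.
print(relative_size(1 / 2, 1 / 6))  # 0.333..., i.e., Size(A) / Size(B) = 1/3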


Sampling URLs (Sec. 19.5)

• Ideal strategy: generate a random URL and check for containment in each index
• Problem: random URLs are hard to find!
• Four approaches are discussed in Ch 19
1. Random searches (Sec. 19.5)

• Choose random searches extracted from a local query log
• Use only queries with small result sets
• Count normalized URLs in result sets
• Use ratio statistics
1. Random searches (Sec. 19.5)

• 575 & 1050 queries from the NEC RI employee logs
• 6 engines in 1998, 11 in 1999
• Implementation:
  – Restricted to queries with < 600 results in total
  – Counted URLs from each engine after verifying query match
  – Computed size ratio & overlap for individual queries
  – Estimated index size ratio & overlap by averaging over all queries
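
A hedged sketch of the ratio-statistics step (the per-query counts below are invented for illustration):

def size_ratio(per_query_counts):
    """per_query_counts: one (count_a, count_b) pair per query, where
    count_x is the number of verified, normalized result URLs engine X
    returned. Averaging the per-query ratios estimates the index size ratio."""
    ratios = [a / b for a, b in per_query_counts if b > 0]
    return sum(ratios) / len(ratios)

# Three hypothetical small-result queries:
print(size_ratio([(120, 300), (40, 90), (15, 50)]))  # ≈ 0.38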
2. Random IP addresses (Sec. 19.5)

• Generate random IP addresses
• Find a web server at the given address
• If there's one:
  – Collect all pages from the server
  – From this, choose a page at random
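
A minimal sketch of the probing step, under the simplifying assumption that a successful TCP connection on port 80 counts as "finding a web server" (real studies also skipped reserved address ranges and then fetched pages):

import random
import socket

def random_ip():
    # Uniform 32-bit address; a real study would exclude reserved/private ranges.
    return ".".join(str(random.randint(0, 255)) for _ in range(4))

def has_web_server(ip, timeout=2.0):
    # Treat a completed TCP connect on port 80 as evidence of a web server.
    try:
        with socket.create_connection((ip, 80), timeout=timeout):
            return True
    except OSError:
        return False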
2. Random IP addresses (Sec. 19.5)

• HTTP requests to random IP addresses
  – Ignored: empty, authorization required, or excluded
• [Lawr99] estimated 2.8 million IP addresses running crawlable web servers (16 million total) from observing 2500 servers
  – OCLC, using IP sampling, found 8.7M hosts in 2001
  – Netcraft [Netc02] accessed 37.2 million hosts in July 2002
• [Lawr99] exhaustively crawled 2500 servers and extrapolated
  – Estimated size of the web to be 800 million pages
3. Random walks (Sec. 19.5)

• View the Web as a directed graph
• Build a random walk on this graph
  – Includes various "jump" rules back to visited sites
    – Does not get stuck in spider traps!
    – Can follow all links!
  – Converges to a stationary distribution
    – Must assume the graph is finite and independent of the walk
    – Conditions are not satisfied (cookie crumbs, flooding)
    – Time to convergence not really known
• Sample from the stationary distribution of the walk
• Use the "strong query" method to check coverage by a search engine
4. Random queries (Sec. 19.5)

• Generate a random query: how?
  – Lexicon: 400,000+ words from a web crawl (not an English dictionary)
  – Conjunctive queries: w1 AND w2, e.g., vocalists AND rsi
• Get the top-100 result URLs from engine A
• Choose a random URL as the candidate to check for presence in engine B
• Use 6-8 low-frequency terms as the conjunctive query
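
A sketch of the query-generation and candidate-selection steps; search_engine_a is a hypothetical callable standing in for a real engine's query interface:

import random

def random_conjunctive_query(lexicon, num_terms=2):
    # lexicon: low-frequency words harvested from a web crawl.
    return " AND ".join(random.sample(lexicon, num_terms))

def candidate_url(search_engine_a, query):
    """search_engine_a(query) is assumed to return ranked result URLs.
    Pick a random URL from the top 100 to check for presence in engine B."""
    top = search_engine_a(query)[:100]
    return random.choice(top) if top else None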
Duplicate documents (Sec. 19.6)

• The web is full of duplicated content
• Strict duplicate detection = exact match
  – Not as common
• But many, many cases of near duplicates
  – E.g., the last-modified date is the only difference between two copies of a page
E.g., near-duplicate videos

[Figure: an original video alongside near-duplicate copies produced by editing – contrast, brightness, crop, color enhancement, color change, TV-size change, multi-editing, low resolution, noise/blur, small logo.]

E.g., near-duplicate videos

[Figure: an original video and a copied video that has been elongated.]
Duplicate/Near-Duplicate Detection (Sec. 19.6)

• Duplication: exact match can be detected with fingerprints
• Near-duplication: approximate match
  – Compute syntactic similarity with an edit-distance measure
  – Use a similarity threshold to detect near-duplicates
    – E.g., similarity > 80% ⇒ documents are "near duplicates"
    – Not transitive, though sometimes used transitively
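
For the exact-match case, a minimal fingerprinting sketch (here a cryptographic hash of the normalized text serves as the fingerprint):

import hashlib

def fingerprint(text):
    # Hash the full (already normalized) document text.
    return hashlib.sha1(text.encode("utf-8")).hexdigest()

def exact_duplicate_groups(docs):
    """docs: dict doc_id -> text. Returns groups of ids whose texts match exactly."""
    by_print = {}
    for doc_id, text in docs.items():
        by_print.setdefault(fingerprint(text), []).append(doc_id)
    return [ids for ids in by_print.values() if len(ids) > 1]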
Computing Similarity (Sec. 19.6)

• Features:
  – Segments of a document (natural or artificial breakpoints)
  – Shingles (word n-grams)
  – E.g., the 4-shingles of "a rose is a rose is a rose" (vs. "my rose is a rose is yours"):
    a_rose_is_a
    rose_is_a_rose
    is_a_rose_is
    a_rose_is_a (a repeat – the shingle set has 3 distinct elements)
• Similarity measure between two docs (= sets of shingles)
  – Set intersection
  – Specifically, size_of_intersection / size_of_union (the Jaccard coefficient)
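
A minimal sketch of both steps (4-word shingles, exact Jaccard):

def shingles(text, n=4):
    words = text.split()
    return {"_".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(s1, s2):
    return len(s1 & s2) / len(s1 | s2)

d1 = shingles("a rose is a rose is a rose")  # 3 distinct shingles
d2 = shingles("my rose is a rose is yours")  # 4 distinct shingles
print(jaccard(d1, d2))  # 2 shared / 5 total = 0.4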
Shingles + Set Intersection (Sec. 19.6)

• Issue: computing the exact set intersection of shingles between all pairs of documents is expensive
• Solution: approximate using a cleverly chosen subset of shingles from each document (called a sketch)
  – Estimate size_of_intersection / size_of_union based on a short sketch

[Diagram: Doc A → shingle set A → sketch A; Doc B → shingle set B → sketch B; the Jaccard coefficient is estimated from the two sketches.]
Sketch of a document (Sec. 19.6)

• Create a "sketch vector" (of size ~200) for each document
• Documents that share ≥ t (say 80%) of corresponding vector elements are near duplicates
• For doc D, sketch_D[i] is computed as follows:
  – Let f map all shingles in the universe to 0..2^m (e.g., f = fingerprinting)
  – Let p_i be a random permutation on 0..2^m
  – sketch_D[i] = MIN { p_i(f(s)) } over all shingles s in D
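
A hedged sketch of the construction, in which a seeded hash plays the combined role of the fingerprint f and the permutation p_i (real implementations choose these functions more carefully):

import hashlib

def perm_hash(shingle, i):
    # Seeded 64-bit hash standing in for p_i(f(s)).
    digest = hashlib.blake2b(f"{i}:{shingle}".encode(), digest_size=8).digest()
    return int.from_bytes(digest, "big")

def sketch(shingle_set, size=200):
    return [min(perm_hash(s, i) for s in shingle_set) for i in range(size)]

def estimated_jaccard(sketch1, sketch2):
    # The fraction of agreeing positions estimates the Jaccard coefficient.
    return sum(a == b for a, b in zip(sketch1, sketch2)) / len(sketch1)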
Computing Sketch[i] for Doc1 (Sec. 19.6)

[Figure: Document 1's shingles plotted on the number line 0..2^64.]
• Start with the 64-bit values f(shingles)
• Permute on the number line with p_i
• Pick the min value

Test if Doc1.Sketch[i] = Doc2.Sketch[i] (Sec. 19.6)

[Figure: Documents 1 and 2 on the number line 0..2^64; A and B mark each document's minimum value under the same permutation.]
• Are these equal?
• Test for 200 random permutations: p_1, p_2, …, p_200

Sketch example (|shingle| = 4, |U| = 5, |sketch| = 2) (Sec. 19.6)

• "a rose is a rose is a rose": a_rose_is_a, rose_is_a_rose, is_a_rose_is, a_rose_is_a
• "my rose is a rose is yours": my_rose_is_a, rose_is_a_rose, is_a_rose_is, a_rose_is_yours
Min-Hash Technique:
Theory behind the "sketch vector" idea
Set Similarity of sets C_i, C_j (Sec. 19.6)

Jaccard(C_i, C_j) = |C_i ∩ C_j| / |C_i ∪ C_j|

• View sets as columns of a matrix A, with one row for each element of the universe; a_ij = 1 indicates that item i is present in set j
• Example:

   C1  C2
    0   1
    1   0
    1   1      Jaccard(C1, C2) = 2/5 = 0.4
    0   0
    1   1
    0   1
Key Observation (Sec. 19.6)

• For columns C_i, C_j, there are four types of rows:

        C_i  C_j
   A     1    1
   B     1    0
   C     0    1
   D     0    0

• Claim: Jaccard(C_i, C_j) = A / (A + B + C), where A, B, C denote the number of rows of each type
"Min" Hashing (Sec. 19.6)

• Randomly permute the rows
• Hash h(C_i) = index of the first row with a 1 in column C_i
• Surprising property: P[ h(C_i) = h(C_j) ] = Jaccard(C_i, C_j)
• Why?
  – Both equal |A| / (|A| + |B| + |C|)
  – Look down columns C_i, C_j until the first non-type-D row
  – h(C_i) = h(C_j) ⇔ that row is type A
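
A quick empirical check of this property (a sketch only; real min-hashing never materializes explicit permutations like this):

import random

def collision_rate(set1, set2, universe, trials=10_000):
    """Estimate P[h(C_i) = h(C_j)] by drawing random row permutations."""
    rows = list(universe)
    hits = 0
    for _ in range(trials):
        random.shuffle(rows)  # a random permutation of the rows
        h1 = next(i for i, r in enumerate(rows) if r in set1)
        h2 = next(i for i, r in enumerate(rows) if r in set2)
        hits += (h1 == h2)
    return hits / trials

# The columns of the earlier 6-row example, where Jaccard(C1, C2) = 2/5:
C1 = {"r2", "r3", "r5"}
C2 = {"r1", "r3", "r5", "r6"}
print(collision_rate(C1, C2, universe={f"r{k}" for k in range(1, 7)}))  # ≈ 0.4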
Example (Sec. 19.6)

Task: find near-duplicate pairs among D1, D2, and D3 using similarity threshold ≥ 0.5

   Index       C1  C2  C3
   1    R1      1   0   1
   2    R2      0   1   1
   3    R3      1   0   0
   4    R4      1   0   1
   5    R5      0   1   0
Example (Sec. 19.6)

   Index       C1  C2  C3
   1    R1      1   0   1
   2    R2      0   1   1
   3    R3      1   0   0
   4    R4      1   0   1
   5    R5      0   1   0

Signatures (h = the original index of the first row, in permuted scan order, with a 1):

                        S1  S2  S3
   Perm 1 = (12345)      1   2   1
   Perm 2 = (54321)      4   5   4
   Perm 3 = (34512)      3   5   4

Similarities:
               1-2    1-3    2-3
   Col-Col    0.00   0.50   0.25
   Sig-Sig    0.00   0.67   0.00

⇒ At the ≥ 0.5 threshold, only the pair (D1, D3) is reported as a near duplicate
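
A short script reproducing the table; here each permutation lists the order in which rows are scanned, and minhash returns the original index of the first row containing a 1:

docs = {"D1": [1, 0, 1, 1, 0], "D2": [0, 1, 0, 0, 1], "D3": [1, 1, 0, 1, 0]}
perms = [(1, 2, 3, 4, 5), (5, 4, 3, 2, 1), (3, 4, 5, 1, 2)]

def minhash(column, perm):
    # Scan rows in permuted order; return the original (1-based) index
    # of the first row whose entry in this column is 1.
    return next(row for row in perm if column[row - 1] == 1)

sigs = {d: [minhash(col, p) for p in perms] for d, col in docs.items()}
print(sigs)  # {'D1': [1, 4, 3], 'D2': [2, 5, 5], 'D3': [1, 4, 4]}

def sig_similarity(s1, s2):
    return sum(a == b for a, b in zip(s1, s2)) / len(s1)

print(sig_similarity(sigs["D1"], sigs["D3"]))  # 0.67 → near duplicate at the 0.5 threshold

With only three permutations the estimates are coarse (the exact Jaccard of D1 and D3 is 0.50); longer signatures tighten them.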
More resources

• IIR Chapter 19
• http://en.wikipedia.org/wiki/Locality-sensitive_hashing
• http://people.csail.mit.edu/indyk/vldb99.ps
