Lecture 9
Today
• Hash Table Continued
• Binary Search Tree
Linear Probing (f(i) = i)
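To make the probe sequence concrete: with open addressing, probe 𝑖 examines slot (h(k) + f(i)) mod m, so f(i) = i tries consecutive slots. A minimal Python sketch (the table size m = 11 and the hash function h(k) = k mod m are illustrative assumptions, not from the slides):

def linear_probe_insert(table, key):
    # Probe slots h(key), h(key)+1, h(key)+2, ... (mod m), i.e., f(i) = i.
    m = len(table)
    h = key % m                     # illustrative hash function h(k) = k mod m
    for i in range(m):
        slot = (h + i) % m
        if table[slot] is None:     # first empty slot wins
            table[slot] = key
            return slot
    raise RuntimeError("hash table is full")

table = [None] * 11
for k in (5, 16, 27):               # all hash to slot 5, so they land in 5, 6, 7
    linear_probe_insert(table, k)

Note how colliding keys pile up in consecutive runs (primary clustering), which quadratic probing, below, mitigates.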
Linear Probing Example
Quadratic Probing
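Same setup with f(i) = i²: probe 𝑖 examines slot (h(k) + i²) mod m. A sketch under the same illustrative assumptions as the linear-probing example:

def quadratic_probe_insert(table, key):
    # Probe slots h(key), h(key)+1, h(key)+4, h(key)+9, ... (mod m), i.e., f(i) = i^2.
    m = len(table)
    h = key % m
    for i in range(m):
        slot = (h + i * i) % m
        if table[slot] is None:
            table[slot] = key
            return slot
    raise RuntimeError("no empty slot found along the probe sequence")

Unlike linear probing, the quadratic probe sequence does not in general visit every slot; a standard guarantee is that if m is prime and the table is less than half full, an empty slot is always found.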
Quadratic Probing Example
Motivation To Use Hash Table
Chapter 11
• Sorted linked lists: HEAD → 1 → 2 → 3 → 4 → 5 → 7 → 8
• (Sorted) arrays: 1 2 3 4 5 7 8

Sorted linked lists
• O(1) insert/delete (assuming we have a pointer to the location of the insert/delete), e.g., splicing 6 in between 5 and 7:
HEAD → 1 → 2 → 3 → 4 → 5 → 6 → 7 → 8
• O(n) search/select:
HEAD → 1 → 2 → 3 → 4 → 5 → 7 → 8

Sorted Arrays (1 2 3 4 5 7 8)
• O(n) insert/delete, e.g., inserting 4.5 shifts every later element:
1 2 3 4 5 7 8 → 1 2 3 4 4.5 5 7 8
• Search: binary search to see if 3 is in A (see the binary search sketch below).
Binary Search
[Comparison table of sorted arrays, linked lists, and trees*]
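A minimal Python sketch of binary search on a sorted array, e.g., checking whether 3 is in A = [1, 2, 3, 4, 5, 7, 8]; the function name and the driver line are illustrative:

def binary_search(A, target):
    # Repeatedly halve the search range; O(log n) comparisons on a sorted array.
    lo, hi = 0, len(A) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if A[mid] == target:
            return mid              # found: return its index
        elif A[mid] < target:
            lo = mid + 1            # target can only be in the right half
        else:
            hi = mid - 1            # target can only be in the left half
    return -1                       # not present

binary_search([1, 2, 3, 4, 5, 7, 8], 3)   # returns index 2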
Review
Binary Search Tree
[Figure: two trees built from the keys 1, 2, 3, 4, 5, 7, 8, each rooted at 5 with children 3 and 7. The left tree satisfies the binary-search-tree property (a Binary Search Tree); the right tree does not (NOT a Binary Search Tree).]
Inorder Walk

How INORDER-TREE-WALK works:
• Check to make sure that 𝑥 is not NIL.
• Recursively print the keys of the nodes in 𝑥's left subtree.
• Print 𝑥's key.
• Recursively print the keys of the nodes in 𝑥's right subtree.

Time: Intuitively, the walk takes Θ(𝑛) time.
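A direct Python transcription of those steps; the Node class with key/left/right attributes is an assumption about the representation (CLRS's NIL becomes Python's None):

class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def inorder_tree_walk(x):
    if x is not None:                # check that x is not NIL
        inorder_tree_walk(x.left)    # print keys in x's left subtree
        print(x.key)                 # print x's key
        inorder_tree_walk(x.right)   # print keys in x's right subtree

On a binary search tree, this prints the keys in sorted order.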
BST Search
Example: Search for values 𝐸, 𝐿, 𝐹, and 𝐻 in the example tree.

Time: The algorithm recurses, visiting nodes on a downward path from the root. Thus, the running time is O(ℎ), where ℎ is the height of the tree.
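A sketch of this recursive search, using the Node representation introduced above:

def tree_search(x, k):
    # Follow one downward path from the root; O(h) time on a tree of height h.
    if x is None or k == x.key:
        return x                     # found, or ran off the tree (None)
    if k < x.key:
        return tree_search(x.left, k)
    return tree_search(x.right, k)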
BST Search
Iterative version: The iterative version unrolls the recursion into a while loop. It's usually more efficient than the recursive version, since it avoids the overhead of recursive calls.
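The same search with the recursion unrolled into a while loop, as the slide describes:

def iterative_tree_search(x, k):
    # Same downward walk, but without recursive call overhead.
    while x is not None and k != x.key:
        x = x.left if k < x.key else x.right
    return x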
Maximum and Minimum
The binary-search-tree property guarantees that
• the minimum key of a binary search tree is located at the leftmost node, and
• the maximum key of a binary search tree is located at the rightmost node.

Traverse the appropriate pointers (left or right) until NIL is reached. In the following procedures, the parameter 𝑥 is the root of a subtree; the first call has 𝑥 = 𝑇.𝑟𝑜𝑜𝑡.

Time: Both procedures visit nodes that form a downward path from the root to a leaf, so both run in O(ℎ) time, where ℎ is the height of the tree.
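Sketches of both procedures; each simply follows one pointer direction until NIL:

def tree_minimum(x):
    while x.left is not None:        # keep going left
        x = x.left
    return x                         # leftmost node holds the minimum key

def tree_maximum(x):
    while x.right is not None:       # keep going right
        x = x.right
    return x                         # rightmost node holds the maximum key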
Successor and Predecessor
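The slides' details for these procedures aren't reproduced above, but a standard CLRS-style successor sketch looks as follows; it assumes each node also stores a parent pointer p:

def tree_successor(x):
    if x.right is not None:
        return tree_minimum(x.right)   # successor is the smallest key in x's right subtree
    # Otherwise, climb until we move up from a left child; that ancestor
    # is the lowest one whose left subtree contains x.
    y = x.p
    while y is not None and x is y.right:
        x = y
        y = y.p
    return y

The predecessor is symmetric (swap left and right, and use TREE-MAXIMUM). Both run in O(ℎ) time.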
BST Insert
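A standard TREE-INSERT sketch in the same style; it assumes 𝑇 carries a root attribute and that z is a new node with its key set and both children NIL:

def tree_insert(T, z):
    # Walk down from the root, remembering the parent y of the current node x.
    y = None
    x = T.root
    while x is not None:
        y = x
        x = x.left if z.key < x.key else x.right
    z.p = y                          # y becomes the parent of the new node
    if y is None:
        T.root = z                   # tree was empty
    elif z.key < y.key:
        y.left = z
    else:
        y.right = z

Like search, this traces a single downward path, so it runs in O(ℎ) time.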
BST Deletion
BST Deletion, Transplant
TRANSPLANT(𝑇, 𝑢, 𝑣) replaces the subtree rooted at 𝑢 by the subtree rooted at 𝑣:
• Makes 𝑢's parent become 𝑣's parent (unless 𝑢 is the root, in which case it makes 𝑣 the root).
• 𝑢's parent gets 𝑣 as either its left or right child, depending on whether 𝑢 was a left or right child.
• Doesn't update 𝑣.𝑙𝑒𝑓𝑡 or 𝑣.𝑟𝑖𝑔ℎ𝑡, leaving that up to TRANSPLANT's caller.
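A sketch that mirrors the three bullets above:

def transplant(T, u, v):
    if u.p is None:
        T.root = v                   # u was the root, so v becomes the root
    elif u is u.p.left:
        u.p.left = v                 # u was a left child
    else:
        u.p.right = v                # u was a right child
    if v is not None:
        v.p = u.p                    # updating v.left / v.right is the caller's job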
BST Deletion
TREE-DELETE(𝑇, 𝑧): deletes node 𝑧 from binary search tree 𝑇 (cases 1 and 2 from the previous slide).
BST Deletion
Otherwise, 𝑧 has two children. Find 𝑧's successor 𝑦; 𝑦 must lie in 𝑧's right subtree and have no left child (why?). The goal is to replace 𝑧 by 𝑦, splicing 𝑦 out of its current location.
BST Deletion Example
BST Deletion
Note that the last three lines execute when 𝑧 has two children, regardless of whether 𝑦 is 𝑧's right child.
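Putting the cases together, a sketch of TREE-DELETE built on the transplant and tree_minimum sketches above; the final three statements are the ones the note refers to:

def tree_delete(T, z):
    if z.left is None:
        transplant(T, z, z.right)        # case 1: z has no left child
    elif z.right is None:
        transplant(T, z, z.left)         # case 2: z has no right child
    else:
        y = tree_minimum(z.right)        # z's successor; it has no left child
        if y.p is not z:
            transplant(T, y, y.right)    # splice y out of its current location
            y.right = z.right
            y.right.p = y
        transplant(T, z, y)              # replace z by y
        y.left = z.left
        y.left.p = y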
BST Deletion
On this binary search tree 𝑇, run the following 4 examples:
TREE-DELETE(𝑇, 𝐼)
TREE-DELETE(𝑇, 𝐺)
TREE-DELETE(𝑇, 𝐾)
TREE-DELETE(𝑇, 𝐵)

Time: 𝑂(ℎ) on a tree of height ℎ. Everything is 𝑂(1) except for the call to TREE-MINIMUM.
Chapter 15
Dynamic Programming
Three Principles to Design Algorithms
Overview
• Dynamic programming is a design method, just like divide-and-conquer, that solves problems by combining the solutions to subproblems.
• D&C algorithms partition the problem into disjoint subproblems, solve the subproblems recursively, and combine their solutions.
• Dynamic programming applies when subproblems overlap, i.e., subproblems share subsubproblems.
Overview
• A DP algorithm solves each subsubproblem just once and saves its answer in a table, avoiding recomputing the answer every time the subsubproblem reappears.
• Typically, DP is applied to optimization problems, which can have many possible solutions.
• Each solution has a value, and we wish to find an optimal solution (minimum or maximum).
• We speak of "an" optimal solution instead of "the" optimal solution, since several solutions may achieve the same optimal value.
DP vs. Greedy Algorithms
• Dynamic programming (DP) and greedy algorithms (GA) both apply to optimization problems.
• Both break problems into subproblems.
• DP reuses the optimal solutions to the subproblems and combines them into the final solution.
• Can transform exponential-time algorithms into polynomial-time algorithms.
• GA makes the locally optimal choice at each subproblem and eventually arrives at a solution to the overall problem.
• Can arrive at an optimal solution faster than a DP approach.
Dynamic Programming
• Strategy:
• Solve the subproblems and store their results.
• Large problems are decomposed into subproblems, but solutions to subproblems are looked up (from a table).
• Four-step method:
• Characterize the structure of an optimal solution.
• Recursively define the value of an optimal solution.
• Compute the value of an optimal solution, typically in a bottom-up fashion.
• Construct an optimal solution from computed information.
• Can transform exponential-time algorithms into polynomial-time algorithms.
Example: Longest Common Subsequence
• A subsequence of a string S is a sequence of characters that appear in left-to-right order, but not necessarily consecutively.
• Example: S = ACTTGCG
• ACT, ATTC, T, ACTTGC are all subsequences.
• TTA is not a subsequence.
Example: DNA Sequencing
LCS
• Given two sequences x[1..m] and y[1..n], find a longest subsequence common to both.
• There might be several of them.
• x: A B C B D A B
• y: B D C A B A
• For this pair, BCBA, BCAB, and BDAB are all common subsequences of maximum length 4.
Optimal Substructure
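The optimal substructure yields the standard recurrence, where c[i, j] denotes the length of an LCS of the prefixes x[1..i] and y[1..j]:

c[i, j] = 0                              if i = 0 or j = 0
c[i, j] = c[i-1, j-1] + 1                if i, j > 0 and x[i] = y[j]
c[i, j] = max(c[i, j-1], c[i-1, j])      if i, j > 0 and x[i] ≠ y[j]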
Recursive Algorithm for LCS
Overlapping Subproblems
Memoization
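A top-down Python sketch of memoization: the naive recursion is kept, but each (i, j) answer is cached so every subproblem is solved only once, giving O(mn) time:

from functools import lru_cache

def lcs_length_memoized(x, y):
    @lru_cache(maxsize=None)          # the table of saved answers
    def c(i, j):
        if i == 0 or j == 0:
            return 0                  # an empty prefix has an empty LCS
        if x[i - 1] == y[j - 1]:
            return c(i - 1, j - 1) + 1
        return max(c(i, j - 1), c(i - 1, j))
    return c(len(x), len(y))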
Dynamic Programming
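The bottom-up version fills the same table iteratively, from smaller prefixes to larger ones; this sketch returns the whole table so an optimal solution can be reconstructed afterwards:

def lcs_length(x, y):
    m, n = len(x), len(y)
    # c[i][j] = length of an LCS of x[1..i] and y[1..j]; row and column 0 stay 0.
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c

c = lcs_length("ABCBDAB", "BDCABA")
c[7][6]   # 4: the example pair from the earlier slide has an LCS of length 4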
Reconstruct LCS
Print LCS
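One way to print an LCS is to walk the finished table backwards from c[m][n]; this variant retraces the c table directly rather than keeping the separate table of arrows used in CLRS:

def print_lcs(c, x, y, i, j):
    if i == 0 or j == 0:
        return                                   # empty prefix: nothing to print
    if x[i - 1] == y[j - 1]:
        print_lcs(c, x, y, i - 1, j - 1)         # the match is part of the LCS
        print(x[i - 1], end="")
    elif c[i - 1][j] >= c[i][j - 1]:
        print_lcs(c, x, y, i - 1, j)             # optimum came from dropping x[i]
    else:
        print_lcs(c, x, y, i, j - 1)             # optimum came from dropping y[j]

print_lcs(c, "ABCBDAB", "BDCABA", 7, 6)          # prints BCBA, one LCS of the pair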