CSC312 -- Lecture 4
Techniques
The CSC312 Team
Reference Textbooks for this Topic
Lecture Objective
Lecture Outline
▪ Recursion
▪ Divide and Conquer
▪ Backtracking
▪ Dynamic Programming
Recursion
Introduction
▪ Repetition can be achieved by writing loops, such as for loops and while loops.
▪ Another way to achieve repetition is through recursion, which occurs when a function refers to itself in its own definition.
▪ This capability provides an elegant and powerful alternative for performing repetitive tasks.

int factorial(int n) {
    // Base case: factorial of 0 or 1 is 1
    if (n == 0 || n == 1) {
        return 1;
    }
    // Recursive case: n * factorial of (n - 1)
    return n * factorial(n - 1);
}
Introduction
▪ Recursion is the concept of defining a function that makes a call to itself.
▪ When a function calls itself, we refer to this as a recursive call.
▪ We also consider a function 𝑀 to be recursive if it calls another function that ultimately leads to a call back to 𝑀; this is known as indirect recursion.
▪ The main benefit of a recursive approach to algorithm design is that it allows us to take
advantage of the repetitive structure present in many problems.
▪ By making our algorithm description exploit this repetitive structure in a recursive way, we can
often avoid complex case analyses and nested loops.
▪ This approach can lead to more readable algorithm descriptions, while still being quite efficient.
Introduction
// Forward declaration needed for mutual (indirect) recursion
int factorialB(int n);

// Function A: calls Function B
int factorialA(int n) {
    // Base case: factorial of 0 or 1 is 1
    if (n == 0 || n == 1) {
        return 1;
    }
    // Recursive case: n * factorial of (n - 1), via factorialB
    return n * factorialB(n - 1);
}

// Function B: calls Function A
int factorialB(int n) {
    return factorialA(n); // Calls factorialA
}

Trace of this indirect recursion:
▪ factorialA(3) is called.
▪ factorialA(3) calls factorialB(2).
▪ factorialB(2) calls factorialA(2).
▪ factorialA(2) calls factorialB(1).
▪ factorialB(1) calls factorialA(1), which hits the base case and returns 1.
▪ The results are multiplied back up: 3 × 2 × 1 = 6.
BENEFITS OF RECURSION
▪ Simplifies Complex Problems: Recursion breaks down complex problems into simpler sub-problems, ideal for hierarchical structures like trees and graphs.
▪ Readable and Maintainable Code: Recursive solutions result in cleaner, more understandable code, especially for tasks like directory traversal and mathematical sequences.
▪ Direct Mapping to Mathematical Concepts: Recursion directly implements naturally recursive mathematical concepts like factorials and the Fibonacci series.
▪ Solves Divide-and-Conquer Problems: Recursion is efficient for divide-and-conquer algorithms like QuickSort and MergeSort, breaking problems into sub-problems and combining results.
▪ Backtracking Solutions: Essential for backtracking problems (e.g., maze-solving, Sudoku, N-Queens), allowing exploration of multiple possibilities and backtracking when needed.
▪ Stack-Based Execution: Recursion uses the call stack to manage intermediate states, supporting backtracking and state maintenance across calls.
The Factorial Function (1 of 3)
▪ The factorial of a positive integer n, denoted 𝑛!, is defined as the product
of the integers from 1 to 𝑛.
▪ If 𝑛 = 0, then 𝑛! is defined as 1 by convention. More formally, for any
integer 𝑛 ≥ 0,
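𝑛! = 1 if 𝑛 = 0, and 𝑛! = 𝑛 · (𝑛 − 1) · (𝑛 − 2) ⋯ 2 · 1 if 𝑛 ≥ 1.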
The Factorial Function (3 of 3)
▪ In general, for a positive integer 𝑛, we can define 𝑓𝑎𝑐𝑡𝑜𝑟𝑖𝑎𝑙(𝑛) to be
𝑛 · 𝑓𝑎𝑐𝑡𝑜𝑟𝑖𝑎𝑙(𝑛 − 1)
▪ This leads to the following recursive definition:
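𝑓𝑎𝑐𝑡𝑜𝑟𝑖𝑎𝑙(𝑛) = 1 if 𝑛 = 0, and 𝑓𝑎𝑐𝑡𝑜𝑟𝑖𝑎𝑙(𝑛) = 𝑛 · 𝑓𝑎𝑐𝑡𝑜𝑟𝑖𝑎𝑙(𝑛 − 1) if 𝑛 ≥ 1.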
▪ It contains one or more base cases, which are defined non-recursively in terms of fixed
quantities.
▪ In this case, 𝑛 = 0 is the base case.
▪ It also contains one or more recursive cases.
A Recursive Implementation of the Factorial
Function
▪ Consider a C++ implementation of the factorial function shown in Code
Fragment below under the name recursiveFactorial.
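A minimal sketch of such an implementation:

int recursiveFactorial(int n) {
    if (n == 0) {
        return 1;                             // base case: 0! = 1
    }
    return n * recursiveFactorial(n - 1);     // recursive case
}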
Recursive Trace
▪ The execution of a recursive function
definition can be illustrated by means of a
recursion trace.
▪ Each entry of the trace corresponds to a
recursive call.
▪ Each new recursive function call is
indicated by an arrow to the newly called
function.
▪ When the function returns, an arrow showing this return is drawn, and the return value may be indicated with this arrow.

A recursion trace for the call recursiveFactorial(4).
Types of Recursive Algorithms
Linear Recursion (1 of 3)
▪ This is the simplest form of recursion, where a function is defined so that it
makes at most one recursive call each time it is invoked.
▪ An example is summing the elements of an array recursively
▪ Suppose, for example, we are given an array, 𝐴, of 𝑛 integers that we want to sum
together.
▪ We can solve this summation problem using linear recursion by observing that the
sum of all 𝑛 integers in 𝐴 is equal to 𝐴[0], if 𝑛 = 1, or the sum of the first 𝑛 − 1
integers in 𝐴 plus the last element in 𝐴.
Types of Recursive Algorithms
Linear Recursion (2 of 3)
▪ The Code Fragment below describes how to solve this array element summation problem using the linear recursive algorithm.
▪ An important property that a recursive function should always possess is that the function terminates.
▪ In the given algorithm, this is done by writing a non-recursive statement for the case 𝑛 = 1.

Algorithm LinearSum(𝐴, 𝑛):
  Input: An integer array 𝐴 and an integer 𝑛 ≥ 1, such that 𝐴 has at least 𝑛 elements
  Output: The sum of the first 𝑛 integers in 𝐴
  if 𝑛 = 1 then
    return 𝐴[0]
  else
    return LinearSum(𝐴, 𝑛 − 1) + 𝐴[𝑛 − 1]
Example 1
Example 1 Cont’d
Let’s say the array A is:
A = [3, 5, 2, 7, 1]. Compute the sum of all these elements using Linear Recursion.
Step-by-Step Linear Recursion:
▪ Recursive Call 1: Sum of all 5 elements
▪ We want to sum all 5 elements:
𝑠𝑢𝑚(𝐴) = 𝑠𝑢𝑚(𝐴[0], 𝐴[1], 𝐴[2], 𝐴[3], 𝐴[4])
▪ Using recursion:
𝑠𝑢𝑚(𝐴) = 𝑠𝑢𝑚(𝐴[0], 𝐴[1], 𝐴[2], 𝐴[3]) + 𝐴[4]
▪ Recursive Call 2: Sum of the first 4 elements
▪ Now, we want to sum the first 4 elements:
𝑠𝑢𝑚(𝐴[0], 𝐴[1], 𝐴[2], 𝐴[3]) = 𝑠𝑢𝑚(𝐴[0], 𝐴[1], 𝐴[2]) + 𝐴[3]
Example 1 Cont’d
▪ Recursive Call 3: Sum of the first 3 elements
▪ Now, we want to sum the first 3 elements:
𝑠𝑢𝑚(𝐴[0], 𝐴[1], 𝐴[2]) = 𝑠𝑢𝑚(𝐴[0], 𝐴[1]) + 𝐴[2]
▪ Recursive Call 4: Sum of the first 2 elements
▪ Now, we want to sum the first 2 elements:
𝑠𝑢𝑚(𝐴[0], 𝐴[1]) = 𝑠𝑢𝑚(𝐴[0]) + 𝐴[1]
▪ Base Case: Sum of the first element
▪ Finally, the sum of just the first element is simply:
𝑠𝑢𝑚(𝐴[0]) = 𝐴[0]
Example 1 Cont’d
▪ Now, we start returning from the base case and summing up:
▪ 𝑠𝑢𝑚(𝐴[0]) = 3
▪ 𝑠𝑢𝑚(𝐴[0], 𝐴[1]) = 3 + 5 = 8
▪ 𝑠𝑢𝑚(𝐴[0], 𝐴[1], 𝐴[2]) = 8 + 2 = 10
▪ 𝑠𝑢𝑚(𝐴[0], 𝐴[1], 𝐴[2], 𝐴[3]) = 10 + 7 = 17
▪ 𝑠𝑢𝑚(𝐴) = 17 + 1 = 18
▪ Thus, the sum of all the elements in A = [3, 5, 2, 7, 1] is 18
Example 1 based on the LinearSum Algorithm
▪ Let's say we call LinearSum(A, 5) where:
int A[] = {3, 5, 2, 7, 1}; // array A
int n = 5; // number of elements in A
▪ The function call LinearSum(A, 5) will sum the first 5 elements of the array.
Step-by-Step Execution:
▪ First Call: LinearSum(A, 5)
▪ Since n != 1, we go to the recursive case:
▪ It calls LinearSum(A, 4) and adds A[4] (which is 1).
▪ Second Call: LinearSum(A, 4)
▪ Since n != 1, we go to the recursive case:
▪ It calls LinearSum(A, 3) and adds A[3] (which is 7).
▪ Third Call: LinearSum(A, 3)
▪ It calls LinearSum(A, 2) and adds A[2] (which is 2).
▪ Fourth Call: LinearSum(A, 2)
▪ It calls LinearSum(A, 1) and adds A[1] (which is 5).
▪ Fifth Call: LinearSum(A, 1)
▪ Base case: since n == 1, it returns A[0] (which is 3).
▪ The returns then unwind: 3, then 3 + 5 = 8, then 8 + 2 = 10, then 10 + 7 = 17, and finally 17 + 1 = 18.
Example 3
Example 3 Cont’d
Given: Array: A=[3,5,2,7,1]. Compute the sum of all these elements using Binary
Recursion.
▪ Initial values: 𝑖 = 0 (start from the first element), 𝑛 = 5 (total number of elements)
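A sketch of the BinarySum algorithm assumed by this example, written in the same style as LinearSum:

Algorithm BinarySum(𝐴, 𝑖, 𝑛):
  Input: An array 𝐴 and integers 𝑖 ≥ 0 and 𝑛 ≥ 1, such that 𝐴 has at least 𝑖 + 𝑛 elements
  Output: The sum of the 𝑛 integers in 𝐴 starting at index 𝑖
  if 𝑛 = 1 then
    return 𝐴[𝑖]
  else
    return BinarySum(𝐴, 𝑖, ⌈𝑛/2⌉) + BinarySum(𝐴, 𝑖 + ⌈𝑛/2⌉, ⌊𝑛/2⌋)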
Step-by-Step Binary Recursion:
▪ First Call: BinarySum(A,0,5)
▪ Divide into two halves:
▪ First half: BinarySum(A, 0, 3) (⌈5/2⌉ = 3 elements)
▪ Second half: BinarySum(A, 3, 2) (⌊5/2⌋ = 2 elements)
Solving the First Half: BinarySum(A, 0, 3) (elements: [3, 5, 2])
▪ Divide into two halves:
▪ First half: BinarySum(A, 0, 2) (⌈3/2⌉ = 2 elements)
▪ Second half: BinarySum(A, 2, 1) (⌊3/2⌋ = 1 element)
Example 3 Cont’d
▪ Solving BinarySum(A, 0, 2) (elements: [3, 5]):
▪ Divide into two halves:
▪ First half: BinarySum(A, 0, 1) (1 element: 3) → returns 3
▪ Second half: BinarySum(A, 1, 1) (1 element: 5) → returns 5
▪ Combine: 3 + 5 = 8
▪ Solving BinarySum(A, 2, 1):
▪ Base case: 1 element → returns 2
▪ Combine results for BinarySum(A, 0, 3):
▪ 8(𝑓𝑟𝑜𝑚 [3,5]) + 2(𝑓𝑟𝑜𝑚 [2]) = 10
Example 3 Cont’d
Solving the Second Half: BinarySum(A, 3, 2) (elements: [7, 1])
▪ Divide into two halves:
▪ First half: BinarySum(A, 3, 1) (1 element: 7) → returns 7
▪ Second half: BinarySum(A, 4, 1) (1 element: 1) → returns 1
▪ Combine: 7 + 1 = 8
▪ Final Combination: 10(𝑓𝑖𝑟𝑠𝑡 ℎ𝑎𝑙𝑓) + 8(𝑠𝑒𝑐𝑜𝑛𝑑 ℎ𝑎𝑙𝑓) = 18
▪ The sum of the array A=[3,5,2,7,1] using binary recursion is 18.
Example 3 Cont’d
sum(A, 0, 5) # Sum entire array: [3, 5, 2, 7, 1]
├── sum(A, 0, 3) # First half: [3, 5, 2]
│ ├── sum(A, 0, 2) # First half of first half: [3, 5]
│ │ ├── sum(A, 0, 1) → 3 # Single element: 3
│ │ └── sum(A, 1, 1) → 5 # Single element: 5
│ │ └── Combine: 3 + 5 = 8 # Result for [3, 5]
│ └── sum(A, 2, 1) → 2 # Single element: 2
│ └── Combine: 8 + 2 = 10 # Result for [3, 5, 2]
└── sum(A, 3, 2) # Second half: [7, 1]
├── sum(A, 3, 1) → 7 # Single element: 7
└── sum(A, 4, 1) → 1 # Single element: 1
└── Combine: 7 + 1 = 8 # Result for [7, 1]
Algorithm for solving a combinatorial puzzle by enumerating and testing all possible configurations.
Types of Recursive Algorithms
Multiple Recursion (3 of 3)
▪ In Figure below, we show a recursion trace of a call to PuzzleSolve(3, 𝑆, 𝑈), where 𝑆 is empty and
𝑈 = {𝑎, 𝑏, 𝑐}.
▪ During the execution, all the permutations of the three characters are generated and tested.
▪ Note that the initial call makes three recursive calls, each of which in turn makes two more.
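A sketch of PuzzleSolve consistent with the trace above, where 𝑘 counts the positions still to fill, 𝑆 is the sequence built so far, and 𝑈 is the set of unused elements:

Algorithm PuzzleSolve(𝑘, 𝑆, 𝑈):
  for each 𝑒 in 𝑈 do
    Remove 𝑒 from 𝑈                      // 𝑒 is now being used
    Add 𝑒 to the end of 𝑆
    if 𝑘 = 1 then
      Test whether 𝑆 solves the puzzle
    else
      PuzzleSolve(𝑘 − 1, 𝑆, 𝑈)           // recur to fill the remaining positions
    Remove 𝑒 from the end of 𝑆           // backtrack
    Add 𝑒 back to 𝑈                      // 𝑒 is now unused again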
Tail Recursion

#include <iostream>
using namespace std;

// Tail-recursive function to calculate factorial:
// the recursive call is the last operation performed,
// so no work remains after it returns.
int factorial(int n, int result = 1) {
    if (n == 0) {
        return result;
    }
    return factorial(n - 1, n * result); // Tail-recursive call
}

int main() {
    int num = 5;
    cout << "Factorial of " << num << " is " << factorial(num) << endl;
    return 0;
}
Divide and Conquer
Introduction
Merge Sort
▪ Merge-sort is based on an algorithmic design pattern called divide-and-conquer.
▪ To sort a sequence 𝑆 with 𝑛 elements using the three divide-and-conquer steps, the merge-sort algorithm proceeds as follows:
▪ Divide: If 𝑆 has zero or one element, return 𝑆 immediately; it is already sorted.
▪ Otherwise (𝑆 has at least two elements), remove all the elements from 𝑆 and put them into
two sequences, 𝑆1 and 𝑆2 , each containing about half of the elements of 𝑆; that is,
▪ 𝑆1 contains the first ⌈𝑛/2⌉ elements of 𝑆, and 𝑆2 contains the remaining ⌊𝑛/2⌋ elements.
▪ Recur: Recursively sort sequences 𝑆1 and 𝑆2 .
▪ Conquer: Put back the elements into 𝑆 by merging the sorted sequences 𝑆1 and 𝑆2 into a
sorted sequence.
Merge Sort Algorithm
MERGE-SORT(𝐴, 𝑝, 𝑟)
  if 𝑝 ≥ 𝑟                   // zero or one element?
    return
  𝑞 = ⌊(𝑝 + 𝑟)/2⌋            // midpoint of A[p : r]
  MERGE-SORT(𝐴, 𝑝, 𝑞)        // recursively sort A[p : q]
  MERGE-SORT(𝐴, 𝑞 + 1, 𝑟)    // recursively sort A[q + 1 : r]
  MERGE(𝐴, 𝑝, 𝑞, 𝑟)          // merge A[p : q] and A[q + 1 : r] into A[p : r]
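MERGE itself is not spelled out above; a C++ sketch of one way to merge the two sorted halves, using a temporary buffer:

#include <vector>

// Merge the sorted ranges A[p..q] and A[q+1..r] into A[p..r].
void merge(std::vector<int>& A, int p, int q, int r) {
    std::vector<int> tmp;
    tmp.reserve(r - p + 1);
    int i = p, j = q + 1;
    // Repeatedly take the smaller front element of the two runs
    while (i <= q && j <= r) {
        if (A[i] <= A[j]) tmp.push_back(A[i++]);
        else              tmp.push_back(A[j++]);
    }
    // Copy whatever remains of either run
    while (i <= q) tmp.push_back(A[i++]);
    while (j <= r) tmp.push_back(A[j++]);
    // Write the merged result back into A[p..r]
    for (int k = 0; k < (int)tmp.size(); ++k) A[p + k] = tmp[k];
}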
Merge Sort Example
▪ We can visualize an execution of the merge-sort algorithm by means of a binary tree T,
called the merge-sort tree.
▪ Each node of T represents a recursive invocation (or call) of the merge-sort algorithm.
Quick Sort
▪ The quick-sort algorithm consists of the following three steps:
▪ Divide: If 𝑆 has at least two elements (nothing needs to be done if 𝑆 has zero or one
element), select a specific element 𝑥 from 𝑆, which is called the pivot.
▪ As is common practice, choose the pivot 𝑥 to be the last element in 𝑆.
▪ Remove all the elements from 𝑆 and put them into three sequences:
▪ 𝐿, storing the elements in 𝑆 less than x
▪ 𝐸, storing the elements in 𝑆 equal to x
▪ 𝐺, storing the elements in 𝑆 greater than x.
▪ If the elements of 𝑆 are all distinct, then 𝐸 holds just one element— the pivot
itself.
Quick Sort
▪ Recur: Recursively sort sequences 𝐿 and 𝐺.
▪ Conquer: Put back the elements into 𝑆 in order by first inserting the elements of 𝐿, then those of 𝐸, and finally those of 𝐺.
▪ Like merge-sort, the execution of quick-sort can be visualized by means of a binary recursion tree, called the quick-sort tree.

A visual schematic of the quick-sort algorithm.
Quick Sort Algorithm
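A C++ sketch of an in-place quick-sort that always uses the last element of the current subarray as the pivot (the names partition and quickSort are illustrative):

#include <vector>
#include <utility> // std::swap

// Partition A[p..r] around the pivot A[r]; return the pivot's final index.
int partition(std::vector<int>& A, int p, int r) {
    int x = A[r];      // pivot: last element, as in the text
    int i = p - 1;     // boundary of the "less than or equal to pivot" region
    for (int j = p; j < r; ++j) {
        if (A[j] <= x) {
            std::swap(A[++i], A[j]);
        }
    }
    std::swap(A[i + 1], A[r]); // put the pivot between the two regions
    return i + 1;
}

// Recursively sort A[p..r].
void quickSort(std::vector<int>& A, int p, int r) {
    if (p < r) {
        int q = partition(A, p, r);
        quickSort(A, p, q - 1);  // recur on elements before the pivot
        quickSort(A, q + 1, r);  // recur on elements after the pivot
    }
}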
Quick Sort Example
Quick-sort tree T for an execution of the quick-sort algorithm on a sequence with eight elements: (a) input sequences processed at each node of T; (b) output sequences generated at each node of T. The pivot used at each level of the recursion is shown in bold.
Backtracking
Introduction
▪ Backtracking is a general algorithmic technique for solving problems by exploring all
possible solutions.
▪ For example, N-queens problem, sudoku
▪ Backtracking uses recursion to explore solutions by making choices, solving smaller
subproblems, and backtracking to try other options if a choice leads to an invalid state.
▪ It builds the solution incrementally and abandons (or "backtracks") as soon as it determines
that the current solution cannot be extended to a valid one.
▪ Its key characteristics include:
▪ Trial and Error: It tries possible solutions one by one and abandons them if they lead to an invalid state.
▪ Recursive Exploration: The process involves recursion to explore different possibilities.
Advantages and Disadvantages of Backtracking
▪ Advantages:
▪ Simple to implement.
▪ Useful for problems with many possible solutions but where only one or a few need to
be found.
▪ Disadvantages:
▪ Can be slow for large problem sizes due to exponential time complexity.
▪ May require significant memory if the problem space is large.
Example of Backtracking
N-Queens Problem
▪ The goal is to place queens on a
chessboard such that no two queens
threaten each other.
▪ A queen can attack another if they
share the same row, column,
or diagonal.
▪ To solve this, we place queens one by one, ensuring each queen is positioned so it doesn’t threaten any previously placed queens.

Example of an 8-queens problem
Pseudocode for N-queens problem
putQueen(row)
for every position 𝑐𝑜𝑙 on the same 𝑟𝑜𝑤
if position 𝑐𝑜𝑙 is available
place the next queen in position 𝑐𝑜𝑙;
if (row < N)
putQueen(row+1);
else success;
remove the queen from position 𝑐𝑜𝑙;
C++ Code Snippet for N-queens problem
void putQueen(int row) {
for (int col = 0; col < N; col++) { // Iterate over columns
if (isPositionAvailable(row, col)) { // Check if the position is available
placeQueen(row, col); // Place the queen in the current position
if (row < N - 1) {
putQueen(row + 1); // Recursively place queens on the next row
}
else {
success(); // If all rows are filled, we have a solution
}
removeQueen(row, col); // Backtrack: remove the queen from the current position
}
}
}
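The helpers isPositionAvailable, placeQueen, success, and removeQueen are assumed above; a self-contained sketch of one way to realize the same search for 𝑁 = 4, tracking each row's queen by its column (all names other than putQueen are illustrative):

#include <iostream>
#include <vector>
using namespace std;

const int N = 4;
vector<int> queenCol(N, -1); // queenCol[row] = column of the queen on that row

// A position is available if no earlier queen shares its column or a diagonal.
bool isPositionAvailable(int row, int col) {
    for (int r = 0; r < row; ++r) {
        int c = queenCol[r];
        if (c == col || r - c == row - col || r + c == row + col) return false;
    }
    return true;
}

void printBoard() {
    for (int r = 0; r < N; ++r) {
        for (int c = 0; c < N; ++c) cout << (queenCol[r] == c ? 'Q' : '.');
        cout << '\n';
    }
    cout << '\n';
}

// Try every column of the current row; recurse or report, then backtrack.
void putQueen(int row) {
    for (int col = 0; col < N; ++col) {
        if (isPositionAvailable(row, col)) {
            queenCol[row] = col;             // place the queen
            if (row < N - 1) putQueen(row + 1);
            else printBoard();               // all rows filled: a solution
            queenCol[row] = -1;              // backtrack: remove the queen
        }
    }
}

int main() {
    putQueen(0); // prints both solutions of the 4-queens problem
    return 0;
}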
Example of Backtracking using 4-Queens
▪ The 4-Queens Problem involves placing four queens on a 4×4 chessboard such that no two queens threaten each other, meaning they cannot share the same row, column, or diagonal.
▪ Queens are placed row by row, checking for clashes in columns and diagonals.
▪ If no safe position is found, the algorithm backtracks to undo the previous placement and continues searching.

(Figure: an empty 4×4 board with rows and columns indexed 0–3.)
One valid solution places the queens at columns 1, 3, 0, 2 of rows 0–3:

  0 1 2 3
0 . Q . .
1 . . . Q
2 Q . . .
3 . . Q .

Solution to the 4-Queens Problem    Solution to the 8-Queens Problem
Dynamic Programming
What is Dynamic Programming?
▪ Dynamic programming, like the divide-and-conquer method, solves problems by combining
the solutions to subproblems.
▪ Divide-and-conquer algorithms partition the problem into disjoint subproblems, solve the
subproblems recursively, and then combine their solutions to solve the original problem.
▪ In contrast, dynamic programming applies when the subproblems overlap, that is, when subproblems share sub-subproblems.
▪ A dynamic-programming algorithm solves each sub-subproblem just once and then saves its answer in a table, thereby avoiding the work of recomputing the answer every time the sub-subproblem reappears; this technique is called memoization.
Concept of Memoization
▪ Memoization in dynamic programming is a technique used to optimize recursive
algorithms by storing the results of expensive function calls and reusing them when
the same computation is needed again.
▪ The goal is to avoid recalculating the same result multiple times, which can significantly
reduce the time complexity of a recursive solution.
▪ Dynamic programming typically applies to optimization problems. Such problems
can have many possible solutions.
▪ Each solution has a value, and you want to find a solution with the optimal (minimum or maximum) value, i.e., an optimal solution to the problem.
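A small C++ sketch of memoization applied to the Fibonacci numbers (developed further in the reading exercises at the end):

#include <unordered_map>

// Fibonacci with memoization: each F(n) is computed once and cached.
long long fib(int n, std::unordered_map<int, long long>& memo) {
    if (n <= 1) return n;                    // base cases: F(0) = 0, F(1) = 1
    auto it = memo.find(n);
    if (it != memo.end()) return it->second; // reuse a stored result
    long long result = fib(n - 1, memo) + fib(n - 2, memo);
    memo[n] = result;                        // save the answer in the table
    return result;
}
// Without the cache the recursion takes exponential time;
// with it, each subproblem is solved once, giving O(n) calls.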
Steps for Dynamic Programming
Elements of Dynamic Programming
▪ The two key elements an optimization problem must have in order for dynamic
programming to apply are:
▪ Optimal substructure: means that an optimal solution to a problem can be built from
the optimal solutions of its subproblems.
▪ This property enables constructing the overall solution by combining solutions to smaller
subproblems.
▪ Overlapping subproblems: This means the same subproblems are solved multiple
times. Instead of recomputing them, dynamic programming solves each subproblem once,
stores the results, and retrieves them in constant time.
▪ This reuse of solutions improves efficiency, making dynamic programming faster than naive
recursion.
Example of Dynamic programming
Matrix-chain Multiplication
▪ The goal is to multiply a chain of matrices while performing the fewest total scalar multiplications.
▪ Matrix multiplication is associative, but the computational cost depends on the matrix
dimensions.
▪ The matrix chain-product problem is to determine the parenthesization of the
expression defining the product A that minimizes the total number of scalar
multiplications performed.
▪ Example: Multiply the following matrices: 𝐵, a 2 × 10 matrix, 𝐶, a 10 × 50 matrix, and 𝐷, a 50 × 20 matrix. Computing 𝐵 · (𝐶 · 𝐷) takes 10 · 50 · 20 + 2 · 10 · 20 = 10400 scalar multiplications, whereas (𝐵 · 𝐶) · 𝐷 takes 2 · 10 · 50 + 2 · 50 · 20 = 3000.
Example of Dynamic programming
Matrix-chain Multiplication (Algorithm)
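A C++ sketch of the standard bottom-up dynamic-programming formulation, where dims[i-1] × dims[i] gives the dimensions of matrix 𝐴ᵢ (the names m, s, and matrixChainOrder are illustrative):

#include <vector>
#include <climits>
using namespace std;

// Bottom-up matrix-chain order: m[i][j] = minimum scalar multiplications
// needed to compute A_i .. A_j; s[i][j] records the best split point k.
long long matrixChainOrder(const vector<int>& dims, vector<vector<int>>& s) {
    int n = (int)dims.size() - 1;            // number of matrices
    vector<vector<long long>> m(n + 1, vector<long long>(n + 1, 0));
    s.assign(n + 1, vector<int>(n + 1, 0));
    for (int len = 2; len <= n; ++len) {     // chain length
        for (int i = 1; i + len - 1 <= n; ++i) {
            int j = i + len - 1;
            m[i][j] = LLONG_MAX;
            for (int k = i; k < j; ++k) {    // try each split point
                long long cost = m[i][k] + m[k + 1][j]
                               + (long long)dims[i - 1] * dims[k] * dims[j];
                if (cost < m[i][j]) { m[i][j] = cost; s[i][j] = k; }
            }
        }
    }
    return m[1][n]; // minimum cost for the whole chain
}

For dims = {10, 20, 30, 40, 30}, the four matrices of the example below, this returns 30000.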
Example of Dynamic programming
Matrix-chain Multiplication
▪ Given four matrices:
▪ 𝐴1 of size 10 × 20
▪ 𝐴2 of size 20 × 30
▪ 𝐴3 of size 30 × 40
▪ 𝐴4 of size 40 × 30
▪ Use the matrix chain multiplication algorithm to determine the optimal order of multiplication for 𝑨𝟏 ⋅ 𝑨𝟐 ⋅ 𝑨𝟑 ⋅ 𝑨𝟒, i.e., the order that minimizes the total number of scalar multiplications. Determine the minimum number of scalar multiplications required.
Solution:
▪ The minimum number of scalar multiplications required to compute 𝐴1 ⋅ 𝐴2 ⋅ 𝐴3 ⋅ 𝐴4 is 30,000.
▪ The optimal parenthesization is ((𝐴1 ⋅ 𝐴2) ⋅ 𝐴3) ⋅ 𝐴4.
Example of Dynamic programming
Matrix-chain Multiplication
▪ Final matrix chain multiplication table 𝑚 (after chain length 4):

  m |    1      2      3      4
  1 |    0   6000  18000  30000
  2 |           0  24000  48000
  3 |                  0  36000
  4 |                         0

▪ Table for parenthesization (best split point 𝑘):

  p |    1      2      3      4
  1 |           1      2      3
  2 |                  2      3
  3 |                         3
  4 |
Comparison between Recursion and Dynamic Programming
▪ Definition. Recursion: solves a problem by breaking it into smaller subproblems and solving them independently. Dynamic programming: solves a problem by breaking it into subproblems and storing the results of subproblems to avoid redundant work.
▪ Redundancy. Recursion: may involve redundant work, as the same subproblems are solved multiple times. Dynamic programming: eliminates redundancy by storing solutions to subproblems, ensuring each subproblem is solved only once.
▪ Efficiency. Recursion: less efficient for problems with overlapping subproblems because it recalculates the same results multiple times. Dynamic programming: more efficient, as it reuses already computed results, saving time.
▪ Speed. Recursion: generally slower, especially for problems with overlapping subproblems. Dynamic programming: faster due to memoization (top-down) or tabulation (bottom-up) techniques.
▪ Problem Solving. Recursion: breaks down the problem recursively and solves each part until it reaches the base case. Dynamic programming: solves subproblems iteratively and uses previously solved subproblems to build up the final solution.
Comparison between Recursion and Dynamic Programming
▪ Implementation. Recursion: easier to implement, often more intuitive for simple problems. Dynamic programming: requires extra code to store results and iterate through the subproblems, making it more complex to implement.
▪ Optimal Substructure. Recursion: yes, typically recursion works for problems with optimal substructure. Dynamic programming: yes, dynamic programming works for problems with optimal substructure and overlapping subproblems.
▪ Overlapping Subproblems. Recursion: not optimized; each subproblem might be recomputed multiple times. Dynamic programming: optimized by storing the results of subproblems to avoid recalculation.
▪ Examples. Recursion: factorial, Fibonacci sequence, Tower of Hanoi. Dynamic programming: Fibonacci sequence (with memoization), matrix chain multiplication, knapsack problem, longest common subsequence.
Reading Exercises
▪ Computing Fibonacci Numbers via Linear and Binary Recursion.
▪ Study the computation of Fibonacci numbers using both linear and binary recursion methods.
▪ Implementation of Sudoku using the Backtracking Algorithm:
▪ Focus on understanding how the backtracking algorithm is applied to solve Sudoku puzzles.
▪ C++ Implementations of Algorithms:
▪ Review the C++ implementations of all data structure algorithms and examples discussed,
including stacks and different types of queues.
▪ Exercises on Stacks and Queues:
▪ Complete all exercises at the end of the lecture notes on stacks and queues.
ASSIGNMENT (1 of 2)
1) Explain the step-by-step process of how the Merge Sort algorithm works on the following
array:
𝑎𝑟𝑟 = [45,12,89,33,67,90,11,5]
▪ Include the intermediate steps showing how the array is divided and merged back in sorted order.
2) Explain the step-by-step process of how the Quick Sort algorithm works on the following
array:
𝑎𝑟𝑟 = [56,23,78,19,46,90,15,32]
▪ Assume that the pivot is always the last element of the current subarray. Clearly show the
partitioning steps and the resulting subarrays after each recursive call.
ASSIGNMENT (2 of 2)