
Algorithms

Lecture 1

Father of Algorithms
Abu Jafar Mohammed ibn Musa
al-Khwarizmi

Material

• Lecture slides.

• Textbook by Cormen et al. (Introduction to Algorithms).

• Tutorials on Web.
Introduction
➢The word "algorithm" comes from the name of the author Abu Jafar Mohammed ibn Musa al-Khwarizmi, who wrote a textbook entitled "Algorithmi de numero indorum". The term "Algorithmi" in the title of the book led to the term Algorithm.

➢An algorithm is an effective method for finding the solution to a given problem: a sequence of instructions that conveys the method to address the problem.

➢Algorithm: A step-by-step procedure to solve a computational problem.

or

➢An Algorithm is a step-by-step plan for a computational procedure that possibly begins with an input and yields an output value in a finite number of steps in order to solve a particular problem.

Introduction
➢ An algorithm is a set of steps of operations to solve a problem by performing calculation, data processing, and automated reasoning tasks.

➢ An algorithm is an efficient method that can be expressed within a finite amount of time and space.

➢ The important aspects of algorithm design include creating an efficient algorithm that solves the problem using minimum time and space.

➢ To solve a problem, different approaches can be followed. Some of them can be efficient with respect to time consumption, whereas other approaches may be memory efficient.

PROPERTIES OF ALGORITHM

TO EVALUATE AN ALGORITHM WE HAVE TO SATISFY THE FOLLOWING CRITERIA:

1. INPUT: The algorithm should be given zero or more inputs.

2. OUTPUT: At least one quantity is produced. For each input, the algorithm produces at least one value for the specified task.

3. DEFINITENESS: Each instruction is clear and unambiguous.

4. FINITENESS: If we trace out the instructions of an algorithm, then for all cases the algorithm terminates after a finite number of steps.

5. EFFECTIVENESS: Every instruction must be very basic so that it can be carried out, in principle, by a person using only pencil & paper.
➢ A well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output.

➢ Written in pseudo code, which can be implemented in the language of the programmer's choice.

PSEUDO CODE: A notation resembling a simplified programming language, used in program design.

How To Write an Algorithm

Version 1:

Step-1: start
Step-2: Read a, b, c
Step-3: if a > b
            if a > c
                print a is largest
            else
                if b > c
                    print b is largest
                else
                    print c is largest
Step-4: stop

Version 2:

Step-1: start
Step-2: Read a, b, c
Step-3: if a > b then go to step 4, otherwise go to step 5
Step-4: if a > c then print a is largest, otherwise print c is largest; go to step 6
Step-5: if b > c then print b is largest, otherwise print c is largest
Step-6: stop
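
A runnable Python sketch of the same steps (illustrative; not part of the slides):

def largest_of_three(a, b, c):
    # Mirrors the step-by-step algorithm above.
    if a > b:
        if a > c:
            return a        # a is largest
        return c            # c is largest
    if b > c:
        return b            # b is largest
    return c                # c is largest

print(largest_of_three(3, 7, 5))   # prints 7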

Differences

Algorithm                                  Program
1. Written at the design phase             1. Written at the implementation phase
2. Expressed in natural language           2. Written in any programming language
3. The writer should have domain           3. Written by a programmer
   knowledge
4. Analyzed                                4. Tested
ALGORITHM SPECIFICATION
Algorithm can be described (Represent) in four ways.

1. Natural language like English:

When this way is chosen, care should be taken: we should ensure that each & every statement is definite (no ambiguity).

2. Graphic representation called flowchart:

This method works well when the algorithm is small & simple.
3. Pseudo-code Method:
In this method, we typically describe algorithms as programs, which resemble languages like Pascal & Algol (Algorithmic Language).
4. Programming Language:
We use a programming language to write algorithms, like C, C++, JAVA etc.

PSEUDO-CODE CONVENTIONS
1. Comments begin with // and continue until the end of line.

2. Blocks are indicated with matching braces { and }.

3. An identifier begins with a letter. The data types of variables are not explicitly declared.
4. There are two Boolean values TRUE and FALSE.
Logical Operators
AND, OR, NOT
Relational Operators
<, <=,>,>=, =, !=

5. Assignment of values to variables is done using the assignment statement.


<Variable>:= <expression>;

6. Compound data types can be formed with records. Here is an example:

node = record
{
    data type - 1 data-1;
    .
    .
    .
    data type - n data-n;
    node *link;
}

Here link is a pointer to the record type node. Individual data items of a record can be
accessed with → and period.
Contd…
7. The following looping statements are employed: for, while and repeat-until.

While Loop:
while <condition> do
{
    <statement-1>
    .
    .
    <statement-n>
}

For Loop:
for variable := value-1 to value-2 step step do
{
    <statement-1>
    .
    .
    <statement-n>
}

repeat-until:

repeat
    <statement-1>
    .
    .
    <statement-n>
until <condition>

8. A conditional statement has the following forms:

→ if <condition> then <statement>

→ if <condition> then <statement-1>
  else <statement-2>

Case statement:

case
{
    :<condition-1>: <statement-1>
    .
    .
    .
    :<condition-n>: <statement-n>
    :else: <statement-n+1>
}

9. Input and output are done using the instructions read & write. No format is used to specify the size of input or output quantities.

10. There is only one type of procedure: Algorithm. The heading takes the form:
Algorithm Name(Parameter list)

Consider an example: the following algorithm finds & returns the maximum of n given numbers:

algorithm Max(A, n)
// A is an array of size n
{
    Result := A[1];
    for i := 2 to n do
        if A[i] > Result then
            Result := A[i];
    return Result;
}
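
A runnable Python equivalent of this pseudocode (illustrative; not part of the slides):

def find_max(a):
    # a is a non-empty list; mirrors algorithm Max(A, n).
    result = a[0]
    for x in a[1:]:
        if x > result:
            result = x
    return result

print(find_max([3, 9, 2, 7]))   # prints 9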

Issues in the study of algorithms

1. How to create an algorithm.
2. How to validate an algorithm.
3. How to analyze an algorithm.
4. How to test a program.

1. How to create an algorithm: To create an algorithm we have the following design techniques:
a) Divide & Conquer
b) Greedy method
c) Dynamic Programming
d) Branch & Bound
e) Backtracking

2. How to validate an algorithm: Once an algorithm is created, it is necessary to show that it computes the correct output for all possible legal inputs. This process is called algorithm validation.

3. How to analyze an algorithm: Analysis of an algorithm (performance analysis) refers to the task of determining how much computing time & storage the algorithm requires.
a) Computing time - time complexity: frequency or step-count method.
b) Storage space - space complexity: determined by the amount of storage the algorithm needs for its inputs and working variables.

4. How to test a program: A program is nothing but an expression of the algorithm in a programming language. To test a program we need the following:
a) Debugging: the process of executing programs on sample data sets to determine whether faulty results occur and, if so, to correct them.
b) Profiling (performance measurement): the process of executing a correct program on data sets and measuring the time & space it takes to compute the results.
ANALYSIS OF ALGORITHM

A priori analysis                            A posteriori analysis
1. Done prior to running the algorithm       1. Done after running the algorithm
   on a specific system                         on a system
2. Hardware independent                      2. Dependent on hardware
3. Approximate analysis                      3. Actual statistics of the algorithm
4. Based on the number of times              4. Based on actual measurements taken
   statements are executed                      during execution

Problem: Suppose there are 60 students in the class. How will you calculate the
number of absentees in the class?

Pseudo Approach
1. Initialize a variable called Count to zero, absent to zero, total to 60.
2. FOR EACH student PRESENT DO the following:
       increase the Count by one.
3. Then subtract Count from total and store the result in absent.
4. Display the number of absent students.

Problem: Suppose there are 60 students in the class. How will you calculate the
number of absentees in the class?

Algorithmic Approach:
1.Count <- 0, absent <- 0, total <- 60
2.REPEAT till all students counted
Count <- Count + 1
3.absent <- total - Count
4.Print "Number absent is:" , absent
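
A small runnable Python sketch of these steps (illustrative; the class list is made up):

def absentees(present_students, total=60):
    # Count the present students, then subtract from the total.
    count = 0
    for _ in present_students:
        count += 1
    return total - count

print("Number absent is:", absentees(["s%d" % i for i in range(53)]))   # 7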
Need of Algorithm
1. To understand the basic idea of the problem.
2. To find an approach to solve the problem.
3. To improve the efficiency of existing techniques.
4. To understand the basic principles of designing algorithms.
5. To compare the performance of the algorithm with respect to other techniques.
6. It is the best method of description without describing the implementation detail.
7. The algorithm gives a clear description of the requirements and goal of the problem to the designer.
8. A good design can produce a good solution.
9. To understand the flow of the problem.

PERFORMANCE ANALYSIS
Performance Analysis: An algorithm is said to be efficient and fast if it takes less time to execute and consumes less memory space at run time.
1. SPACE COMPLEXITY:
The space complexity of an algorithm is the amount of memory space required by the algorithm during the course of its execution. There are three types of space:
a) Instruction space: to store the executable program.
b) Data space: required to store all the constant and variable data.
c) Environment space: required to store the environment information needed to resume suspended executions.
2. TIME COMPLEXITY:
The time complexity of an algorithm is the total amount of time required by the algorithm to complete its execution.

Algorithms

Lecture 2
Many Definitions of “Algorithm”
• An algorithm is any well-defined computational procedure that
takes some value, or set of values, as input and produces some
value, or set of values as output.

• An algorithm is a list of steps (sequence of unambiguous


instructions ) for solving a problem that transforms the input
into the output.

• A finite set of instructions that specifies a sequence of


operation to be carried out in order to solve a specific problem
is called an algorithm

Role of “Algorithm”
Example:
• Algorithms for the same problem can be based on very different ideas
and can solve the problem with dramatically different speeds.

• The greatest common divisor of two nonnegative, not-both-zero integers m and n, denoted gcd(m, n), is defined as the largest integer that divides both m and n evenly.

• In modern terms, Euclid's algorithm is based on applying repeatedly the equality

    gcd(m, n) = gcd(n, m mod n)

• where m mod n is the remainder of the division of m by n, until m mod n is equal to 0.

• Since gcd(m, 0) = m, the last value of m is also the greatest common divisor of the initial m and n.

Euclid's algorithm for computing gcd(m, n)

• Step 1 If n = 0, return the value of m as the answer and stop; otherwise, proceed to Step 2.

• Step 2 Divide m by n and assign the value of the remainder to r.

• Step 3 Assign the value of n to m and the value of r to n. Go to Step 1.

For example, gcd(60, 24) can be computed as follows:
gcd(60, 24) = gcd(24, 12) = gcd(12, 0) = 12.

Euclid's algorithm for computing gcd(m, n)

• Alternatively, we can express the same algorithm in pseudocode:

ALGORITHM Euclid(m, n)
//Input: Two nonnegative, not-both-zero integers m and n
//Output: Greatest common divisor of m and n
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m
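
A runnable Python sketch of the same loop (illustrative):

def gcd(m, n):
    # Euclid's algorithm: repeatedly replace (m, n) by (n, m mod n).
    while n != 0:
        m, n = n, m % n
    return m

print(gcd(60, 24))   # prints 12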
Middle-school procedure for computing gcd(m, n)

• Step 1 Find the prime factors of m.

• Step 2 Find the prime factors of n.

• Step 3 Identify all the common factors in the two prime expansions found in Step 1 and Step 2. (If p is a common factor occurring p_m and p_n times in m and n, respectively, it should be repeated min{p_m, p_n} times.)

• Step 4 Compute the product of all the common factors and return it as the greatest common divisor of the numbers given.

Middle-school procedure for computing gcd(m, n)

• Example: For the numbers 60 and 24:

• 60 = 2 · 2 · 3 · 5

• 24 = 2 · 2 · 2 · 3

• gcd(60, 24) = 2 · 2 · 3 = 12.
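
For contrast with Euclid's algorithm, an illustrative Python sketch of the middle-school procedure (trial-division factorization; far slower than Euclid's method for large inputs):

from collections import Counter

def prime_factors(n):
    # Trial division; fine for small n.
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def gcd_middle_school(m, n):
    fm, fn = prime_factors(m), prime_factors(n)
    result = 1
    for p in fm:                          # common factors, repeated min times
        result *= p ** min(fm[p], fn[p])
    return result

print(gcd_middle_school(60, 24))   # prints 12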

Characteristics of an Algorithm
• Input

• Output

• Definiteness

• Finiteness

• Effectiveness

• Correctness

• Simplicity

• Unambiguous

• Feasibility

• Portable

• Independent
Algorithm Expectation
• Correctness:
• Algorithms must produce correct results.

• Less resource usage:


• Algorithms should use less resources (time and space).

• Analysis of an algorithm: (Performance)

• Two areas are important for performance:

1. Space efficiency - the memory required, also called, space


complexity

2. Time efficiency - the time required, also called time complexity

Time efficiency
The actual running time depends on many factors:

• The speed of the computer: cpu (not just clock speed), I/O, etc.

• The compiler, compiler options

• The quantity of data - e.g., searching a long list vs. a short one.

• The actual data - e.g., in sequential search, whether the target name is first or last.

Algorithm as a Technology
• Algorithms are just like a technology. We may all use the latest and greatest processors, but we need to run implementations of good algorithms on those computers in order to properly benefit from the money we spent on them.

• Example:

• A faster computer (computer A) running a sorting algorithm whose running time on n values grows like n², against a slower computer (computer B) running a sorting algorithm whose running time grows like n lg n.

• They each must sort an array of 10 million numbers.


Algorithm as a Technology

• Computer A executes 10 billion instructions per second (faster than


any single sequential computer at the time of this writing).

• Computer B executes only 10 million instructions per second, so that


computer A is 1000 times faster than computer B in raw computing
power.

Algorithm as a Technology

Computer A                                      Computer B
• Running time grows like n².                   • Running time grows like n lg n.
• 10 billion instructions per sec.              • 10 million instructions per sec.
• Thus, 2·n² instructions.                      • Thus, 50·n·lg n instructions.
• Time taken = 2·(10^7)² / 10^10 = 20,000 sec.  • Time taken = 50·10^7·lg(10^7) / 10^7 ≈ 1163 sec.
• It is more than 5.5 hrs.                      • It is under 20 mins.

Algorithm as a Technology
• A faster algorithm running on a slower machine will always win for large enough instances.

• Suppose algorithm S1 sorts n keys in 2n² instructions.

• Suppose computer C1 executes 1 billion instruc/sec.

• When n = 1 million, S1 takes 2·(10^6)² / 10^9 = 2000 sec.

• Suppose algorithm S2 sorts n keys in 50·n·lg n instructions.

• Suppose computer C2 executes 10 million instruc/sec.

• When n = 1 million, S2 takes 50·10^6·lg(10^6) / 10^7 ≈ 100 sec.
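
A quick Python check of these figures (illustrative; it assumes the instruction counts 2n² and 50·n·lg n used above):

import math

def seconds(instructions, ips):
    # time = instruction count / instructions per second
    return instructions / ips

n = 10**6
print(seconds(2 * n**2, 10**9))                # 2000.0 sec (S1 on C1)
print(seconds(50 * n * math.log2(n), 10**7))   # ~99.7 sec (S2 on C2)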


Algorithms

Lecture 3

Lecture #2
• Mathematical Background.

• Insertion sort algorithm

Mathematical Basics
• Polynomial: Powers of n, such as n^5 and √n = n^(1/2).

• Polylogarithmic: Powers of log n, such as (log n)^7. We will usually write this as log^7 n.

• Exponential: A constant (not 1) raised to the power n, such as 3^n.

• An important fact is that polylogarithmic functions are strictly asymptotically smaller than polynomial functions, which are strictly asymptotically smaller than exponential functions (assuming the base of the exponent is bigger than 1).

• For example, if we let ≺ mean "asymptotically smaller", then for constants a, b > 0 and c > 1: log^a n ≺ n^b ≺ c^n.
Mathematical Basics
• Logarithm Simplification: It is a good idea to first simplify terms
involving logarithms. For example, the following formulas are
useful. Here a; b; c are constants:

• Avoid using log n in exponents. The last rule above can be used to
achieve this. For example, rather than saying ,

express this as

Mathematical Basics
• Summations: they naturally arise in the analysis of iterative
algorithms. Also, more complex forms of analysis, such as
recurrences, are often solved by reducing them to summations.

• Solving a summation means reducing it to a closed form formula,


that is, one having no summations, recurrences, integrals, or other
complex operators.

• In algorithm design it is often not necessary to solve a summation


exactly, since an asymptotic approximation or close upper bound is
usually good enough. Here are some common summations and some
tips to use in solving summations.

Mathematical Basics
• Constant Series: For integers a and b,

    Σ_{i=a..b} 1 = b − a + 1   (when b ≥ a − 1)

• Notice that when b = a − 1, there are no terms in the summation (since the index is assumed to count upwards only), and the result is 0. Be careful to check that b ≥ a − 1 before applying this formula blindly.

• Arithmetic Series: For n ≥ 0,

    Σ_{i=1..n} i = 1 + 2 + ... + n = n(n + 1)/2

• This is Θ(n²). As we will see later in more detail!


Mathematical Basics

• Geometric Series: Let x ≠ 1 be any constant (independent of n); then for n ≥ 0,

    Σ_{i=0..n} x^i = 1 + x + x² + ... + x^n = (x^(n+1) − 1) / (x − 1)

• If 0 < x < 1 then this is Θ(1). If x > 1, then this is Θ(x^n); that is, the entire sum is proportional to the last element of the series.

Mathematical Basics
• Quadratic Series: For n ≥ 0,

    Σ_{i=1..n} i² = 1 + 4 + 9 + ... + n² = (2n³ + 3n² + n)/6

• Harmonic Series: This arises often in probabilistic analyses of algorithms. It does not have an exact closed-form solution, but it can be closely approximated. For n > 0,

    H_n = Σ_{i=1..n} 1/i ≈ ln n

Mathematical Basics
• Summations with general bounds: When a summation does not start at 1 or 0, as most of the above formulas assume, you can just split it up into the difference of two summations. For example, for 1 ≤ a ≤ b,

    Σ_{i=a..b} f(i) = Σ_{i=0..b} f(i) − Σ_{i=0..a−1} f(i)

• Linearity of Summation: Constant factors and added terms can be split out to make summations simpler:

    Σ (c·f(i) + g(i)) = c·Σ f(i) + Σ g(i)
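
A short Python sanity check of these closed forms (illustrative):

n = 100
assert sum(range(n + 1)) == n * (n + 1) // 2                               # arithmetic series
assert sum(i * i for i in range(1, n + 1)) == (2*n**3 + 3*n**2 + n) // 6   # quadratic series
x = 3
assert sum(x**i for i in range(n + 1)) == (x**(n + 1) - 1) // (x - 1)      # geometric series
print("all closed forms verified")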
The problem of sorting
• Input: sequence 〈a1, a2, …, an〉 of numbers.

• Output: permutation 〈a'1, a'2, …, a'n〉 such that

a'1 ≤ a'2 ≤ … ≤ a'n.

• What is the difference between sequence and permutation?

• Does this difference affect your formulation and solution?

Insertion Sort

Example of insertion sort
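
The slides' figure with the insertion-sort pseudocode and its worked example is not reproduced here; the following is an illustrative Python sketch of the algorithm:

def insertion_sort(a):
    # Grow a sorted prefix; insert each key into place by shifting
    # larger elements one position to the right.
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]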


Insertion Sort Time

Running time

• The running time depends on the input: an already sorted

sequence is easier to sort.

• Parameterize the running time by the size of the input, since short

sequences are easier to sort than long ones.

• Generally, we seek upper bounds on the running time, because

everybody likes a guarantee.

Kinds of analyses
• Worst-case: (usually)

T(n) =maximum time of algorithm on any input of size n.

• Average-case: (sometimes)

T(n) =expected time of algorithm over all inputs of size n.

• Need assumption of statistical distribution of inputs.

Best-case: (do not care !)

• We care about average-case and worst-case analysis (similar in

many cases.)
Best case analysis

• The array is already sorted.

• Always find that A[i ] ≤ key upon the first time the while loop
test is run (when i = j − 1).

• All tj are 1.

• Running time is: T(n) = c1·n + c2(n − 1) + c4(n − 1) + c5(n − 1) + c8(n − 1)
                        = (c1 + c2 + c4 + c5 + c8)·n − (c2 + c4 + c5 + c8).

• Can express T(n) as an + b for constants a and b (that depend on the statement costs ci), so T(n) is a linear function of n.

Worst case analysis


• The array is in reverse sorted order.

• Always find that A[i ] > key in while loop test.

• Have to compare key with all elements to the left of the j th

position ⇒compare with (j − 1) elements.

Worst case analysis

• Summing over all positions j, the worst-case running time is a quadratic function of n: T(n) = an² + bn + c for constants a, b, c.

Machine-independent time

Order of growth
• Another abstraction to ease analysis and focus on the important
features.

• Look only at the leading term of the formula for running time.
• Drop lower-order terms.

• Ignore the constant coefficient in the leading term.

• Example: For insertion sort:

• The worst-case running time is an² + bn + c.

• Drop lower-order terms ⇒ an².

• Ignore the constant coefficient ⇒ n².

Order of growth
• But we can't say that the worst-case running time T(n) equals n².

• It grows like n². But it doesn't equal n².

• We say that the running time is Θ(n²) to capture the notion that the order of growth is n².

• We usually consider one algorithm to be more efficient than another if its worst-case running time has a smaller order of growth.

• Notice that the n² time of insertion sort can be explained by the existence of two nested loops (do you notice?!)
Θ-notation

Asymptotic performance

Insertion sort analysis


• Is insertion sort a fast sorting algorithm?

• Moderately so, for small n.

• Not at all, for large n.


Algorithms

Lecture 4

Lecture #3

• Merge Sort

• Divide and conquer approach

• Recurrences

• Asymptotic notations

Merge sort

May be better understood as follows:


Branching continues until p<r
becomes FALSE
Merging two sorted arrays

• Time = Θ(n) to merge a total of n elements (linear time).

Pseudo-code for Merging

Pseudo-code for Merging


• The procedure assumes that the subarrays A[p…q] and

A[q + 1…r] are in sorted order.

• Output: a single sorted subarray that replaces the current

subarray A[p…r].

• MERGE procedure takes time Θ(n), where n = r - p + 1

• The ∞ sentinel is used so that, once one subarray is exhausted, all remaining elements of the other array are copied to the output without extra end-of-array checks. (We can get rid of it.)

• The final lines of the pseudocode (lines 12 to 17 in the slide's figure, not reproduced here) do that copying task.
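
Since the slides' MERGE and MERGE-SORT figures are not reproduced here, the following is an illustrative Python sketch that avoids the ∞ sentinel:

def merge(left, right):
    # Merge two sorted lists in Θ(n) time, n = len(left) + len(right).
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:]); out.extend(right[j:])   # one side may have leftovers
    return out

def merge_sort(a):
    if len(a) <= 1:               # branching stops when p < r becomes false
        return a
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))   # [1, 2, 2, 3, 4, 5, 6, 7]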


Returning back up the recursion, until the initial call of Merge-Sort() is reached.

Divide-and-conquer approach
• Merge sort belongs to this family.

1. Divide the problem into a number of subproblems.

2. Conquer the subproblems by solving them recursively.

3. Combine the solutions to the subproblems into the solution


for the original problem.

• The main advantage is to avoid comparing all elements with each other: if you look closely, you can see that some elements are never compared to other elements in the merging step.
Analyzing Merge Sort
• Sloppiness: It should be T(⌈n/2⌉) + T(⌊n/2⌋), but it turns out not to matter asymptotically. (The array is recursively divided into two nearly equal parts, whose sizes may be odd or even.)

Recurrence for merge sort

    T(n) = Θ(1)               if n = 1
    T(n) = 2T(n/2) + Θ(n)     if n > 1

• We shall usually omit stating the base case when T(n) = Θ(1) for sufficiently small n, but only when it has no effect on the asymptotic solution to the recurrence.

• There are several ways to find a good upper bound on T(n) (a recursion tree, for example).

Recursion tree
• Solve T(n) = 2T(n/2) + cn, where c > 0 is constant.
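
A quick sketch of the standard recursion-tree argument for this recurrence: the tree has lg n + 1 levels, and each level contributes a total cost of cn, so

    T(n) = cn · (lg n + 1) = Θ(n lg n).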

Merge sort
• Merge sort running time is Θ(n lg n).

• Θ(n lg n) grows more slowly than Θ(n²).

• Therefore, merge sort asymptotically beats insertion sort in the


worst case.

• In practice, merge sort beats insertion sort for n> 30 or so.

• Try coding both algorithms:


• Choose different sizes for n (i.e., 5, 10, 50, 100, 1000).

• Count the number of comparisons.

• Compare the count against theoretical results.
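
One illustrative way to run this experiment in Python, counting key comparisons in both sorts (function names are mine, not the slides'):

def insertion_sort_count(a):
    a, comps = a[:], 0
    for j in range(1, len(a)):
        key, i = a[j], j - 1
        while i >= 0:
            comps += 1                  # one key comparison
            if a[i] <= key:
                break
            a[i + 1] = a[i]; i -= 1
        a[i + 1] = key
    return comps

def merge_sort_count(a):
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    left, cl = merge_sort_count(a[:mid])
    right, cr = merge_sort_count(a[mid:])
    out, i, j, comps = [], 0, 0, cl + cr
    while i < len(left) and j < len(right):
        comps += 1                      # one key comparison per merge step
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out += left[i:] + right[j:]
    return out, comps

import random
for n in (5, 10, 50, 100, 1000):
    data = [random.randrange(n) for _ in range(n)]
    print(n, insertion_sort_count(data), merge_sort_count(data)[1])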

Why Asymptotic notation?

• A way to describe behavior of functions in the limit.

• Describe growth of functions.

• Focus on what’s important by abstracting away low-order terms

and constant factors.

• How we indicate running times of algorithms.

• A way to compare “sizes” of functions:


O Notation

Set definition of O-notation

• EXAMPLE:

• 2n² ∈ O(n³)   (c = 1 and n0 = 2)

More Examples
Ω-notation (lower bounds)

• EXAMPLE: √n = Ω(lg n)   (c = 1, n0 = 16)

Ω-notation

More Examples
Θ-notation (tight bounds)

Example: (1/2)n² − 2n = Θ(n²), with c1 = 1/4, c2 = 1/2, and n0 = 8.
Θ-notation

Theorem

f(n) = Θ(g(n))
if and only if
f(n) = O(g(n)) and f(n) = Ω(g(n))
o-notation

ω-notation

So… Remember
Algorithms

Lecture 5

Lecture #4
• Quick sort.

• Sorting in linear time:


• Counting sort.

• Bucket sort.

Quick Sort Algorithm

• Proposed by C.A.R. Hoare in 1962.

• Divide-and-conquer algorithm.

• Sorts "in place" (like insertion sort, but not like merge sort).

• "In place" means the algorithm rearranges the elements within the array itself, using only a constant amount of extra storage. (Preserving the order of occurrence of equal elements is a different property, stability, which quicksort does not guarantee.)

• Very practical (with tuning).


Divide and conquer

Quicksort: given an n-element array:

1.Divide: Partition the array into two subarrays around a pivot x


such that elements in lower subarray ≤x≤elements in upper
subarray.

2. Conquer: Recursively sort the two subarrays.

3.Combine:Trivial.

Partitioning subroutine
• Key: Linear-time partitioning subroutine.

Example of partitioning

(The slide's step-by-step partitioning figure is omitted here.)
Pseudo code for quick sort

QUICKSORT(A, p, r)
    if p < r
        then q ← PARTITION(A, p, r)
             QUICKSORT(A, p, q − 1)
             QUICKSORT(A, q + 1, r)

• Initial call: QUICKSORT(A, 1, n)
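
An illustrative runnable Python version; the slides' PARTITION figure is not reproduced, so a Lomuto-style partition (one common linear-time choice) is used here:

def partition(a, p, r):
    # Lomuto partition: pivot = a[r]; returns the pivot's final index.
    x = a[r]
    i = p - 1
    for j in range(p, r):
        if a[j] <= x:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[r] = a[r], a[i + 1]
    return i + 1

def quicksort(a, p=0, r=None):
    if r is None:
        r = len(a) - 1
    if p < r:
        q = partition(a, p, r)
        quicksort(a, p, q - 1)
        quicksort(a, q + 1, r)
    return a

print(quicksort([2, 8, 7, 1, 3, 5, 6, 4]))   # [1, 2, 3, 4, 5, 6, 7, 8]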

Analysis of quick sort


• Assume that all input elements are distinct.

• In practice, there are better partitioning algorithms for when

duplicate input elements may exist.

• Let T(n)=worst-case running time on an array of n elements.

• As you may notice the concept of divide and conquer

approach is very clear in quick sort. After partitioning,

elements of left array won’t be compared to any element of

right array, and so on…

Worst-case of quick sort

• What is the worst case of quick sort?


• Input sorted or reverse sorted.

• Partition around min or max element.

• One side of partition always has no elements.


Worst-case recursion tree

T(n) = T(0) + T(n − 1) + cn, which solves to Θ(n²) (an arithmetic series).

Best-case analysis

• What is the best case analysis

• If we’re lucky, PARTITION splits the array evenly:

• What is the solution to this recurrence?

    T(n) = 2T(n/2) + Θ(n) = Θ(n lg n)   (same as merge sort)

• What if the split is always 1/10 : 9/10?

    T(n) = T(n/10) + T(9n/10) + Θ(n)

• What is the solution to this recurrence? Still Θ(n lg n)!

Sorting in linear time

• Counting sort:

• No comparisons between elements.

• Input: A[1 . . n], where A[j]∈{1, 2, …, k}.

• Output: B[1 . . n], sorted.

• Auxiliary storage: C[1 . . k].


Counting sort

LOOP1:  for i ← 1 to k
            do C[i] ← 0

LOOP2:  for j ← 1 to n
            do C[A[j]] ← C[A[j]] + 1        ⊳ C[i] = |{key = i}|

LOOP3:  for i ← 2 to k
            do C[i] ← C[i] + C[i − 1]       ⊳ C[i] = |{key ≤ i}|

LOOP4:  for j ← n downto 1
            do B[C[A[j]]] ← A[j]
               C[A[j]] ← C[A[j]] − 1
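
An illustrative runnable Python version of the four loops (keys in {1, ..., k}; LOOP1 corresponds to initializing the counts to zero):

def counting_sort(a, k):
    # a: list of integers in {1, ..., k}; returns a sorted copy (stable).
    c = [0] * (k + 1)
    for x in a:                   # LOOP2: count each key
        c[x] += 1
    for i in range(2, k + 1):     # LOOP3: prefix sums -> number of keys <= i
        c[i] += c[i - 1]
    b = [0] * len(a)
    for x in reversed(a):         # LOOP4: place keys from the right (stability)
        b[c[x] - 1] = x
        c[x] -= 1
    return b

print(counting_sort([4, 1, 3, 4, 3], 4))   # [1, 3, 3, 4, 4]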
Analysis

• LOOP1 and LOOP3 each take Θ(k) time; LOOP2 and LOOP4 each take Θ(n) time; the total is Θ(n + k).

• If k= O(n), then counting sort takes Θ(n) time.

• But, sorting takes Ω(nlgn) time!

• Where’s the fallacy?

Answer:

• Comparison sorting takes Ω(nlgn) time.

• Counting sort is not a comparison sort.

• In fact, not a single comparison between elements occurs!

Radix sort

• Digit by digit Sort.

• Idea:

• Sort on least significant digit first with auxiliary stable sort

• Stable sorting algorithms maintain the relative order of


records with equal keys. (A key is that portion of the record
which is the basis for the sort; it may or may not include all of
the record.) If all keys are different then this distinction is not
necessary.
Rules to sort

• Assume that the numbers are sorted by their low-order t − 1 digits.

• Two numbers that differ in digit t are correctly sorted.

• Two numbers equal in digit t are put in the same order as the input ⇒ correct order.

Example
Sort {329,457,657,839,436,720,355}
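
An illustrative Python LSD radix sort for this example, using stable bucketing per decimal digit:

def radix_sort(a, digits=3):
    # Sort nonnegative integers least-significant digit first,
    # using a stable pass for each decimal digit.
    for d in range(digits):
        buckets = [[] for _ in range(10)]
        for x in a:
            buckets[(x // 10**d) % 10].append(x)   # stable: preserves order
        a = [x for bucket in buckets for x in bucket]
    return a

print(radix_sort([329, 457, 657, 839, 436, 720, 355]))
# [329, 355, 436, 457, 657, 720, 839]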

Analysis for Radix Sort


• Assume counting sort is the auxiliary stable sort.

• Sort n computer words of b bits each.

• Each word can be viewed as having b/r base-2^r digits.

• Example: 32-bit word

• r = 8 ⇒ b/r = 4 passes of counting sort on base-2^8 digits;

• or r = 16 ⇒ b/r = 2 passes of counting sort on base-2^16 digits.


Analysis for Radix Sort
• Recall: counting sort takes Θ(n + k) time to sort n numbers in the

range from 0 to k –1.

• If each b-bit word is broken into r-bit pieces, each pass of counting

sort takes Θ(n + 2r) time.

• Since there are b/r passes, we have

    T(n, b) = Θ((b/r)(n + 2^r))

• Choose r to minimize T(n, b):

• Increasing r means fewer passes,

• but as r > lg n, the time grows exponentially.

Analysis for Radix Sort

• Minimize T(n,b) by differentiating and setting to 0.

• Or just observe that we don’t want 2r> n, and there’s no

harm asymptotically in choosing r as large as possible

subject to this constraint.

• Choosing r= lg n implies T(n,b)=Θ(bn/lgn).

Algorithms

Lecture 6
Lecture #5
• Linear search

• Binary search

• Recurrences Examples: powering a number

Linear Search
• Find an element in an array.

• Scan the array from the first element to the last one, then report the index at which the element was found, or "NIL".

• Time complexity: O(n)

Remember: Consider the searching problem:


• Input: A sequence of n numbers A = a1, a2, . . . , an
and a value v.
• Output: An index i such that v = A[i] or the special value NIL if v
does not appear in A.
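
A minimal Python sketch of linear search (illustrative; None plays the role of NIL):

def linear_search(a, v):
    # Scan left to right; return the first index where v occurs, else None.
    for i, x in enumerate(a):
        if x == v:
            return i
    return None

print(linear_search([7, 3, 9, 3], 9))   # prints 2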

Binary Search
• Find an element in a sorted array:

• Divide : Check middle element.

• Conquer: Recursively search sub array.

• Combine: Trivial.
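
An illustrative recursive Python sketch of this divide-and-conquer search (assumes a sorted list):

def binary_search(a, v, lo=0, hi=None):
    # Returns an index of v in sorted list a, or None.
    if hi is None:
        hi = len(a) - 1
    if lo > hi:
        return None
    mid = (lo + hi) // 2          # Divide: check the middle element
    if a[mid] == v:
        return mid
    if v < a[mid]:                # Conquer: recursively search one half
        return binary_search(a, v, lo, mid - 1)
    return binary_search(a, v, mid + 1, hi)

print(binary_search([3, 5, 7, 8, 9, 12, 15], 9))   # prints 4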

Recurrence for binary search

    T(n) = T(n/2) + Θ(1)  ⇒  T(n) = Θ(lg n)

Self study: Master Theorem (go to the reference!)

Powering a number
• Problem: Compute a^n, where n ∈ N.

• Naive algorithm: Θ(n).

• Divide-and-conquer algorithm:

    a^n = a^(n/2) · a^(n/2)               if n is even;
    a^n = a^((n−1)/2) · a^((n−1)/2) · a   if n is odd.

    T(n) = T(n/2) + Θ(1)  ⇒  T(n) = Θ(lg n)
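
An illustrative Python sketch of the divide-and-conquer recursion:

def power(a, n):
    # Recursive squaring: Θ(lg n) multiplications.
    if n == 0:
        return 1
    half = power(a, n // 2)
    if n % 2 == 0:
        return half * half
    return half * half * a

print(power(2, 10))   # prints 1024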

Binary Search Tree


What is a tree?

• A tree is a kind of data structure that is used to represent


the data in hierarchical form.

• Items are called nodes.

• Each node (other than the root) has exactly one parent; in a binary tree each node has at most two children. The root node has no parent. Leaf nodes have no children.

• A tree is a non-linear data structure, as the data in a tree is not stored linearly or sequentially.

Binary Search Tree

A binary search tree:

• The value of left node must be smaller than the


parent node, and the value of right node must be
greater than the parent node.

• This rule is applied recursively to the left and right


subtrees of the root
Binary Search Tree

Example:

Binary Search Tree

Example: Creating a binary search tree for the data elements: 45, 15, 79, 90, 10, 55, 12, 20, 50

Step 1 - Insert 45.
As the tree is empty, 45 becomes the root node.

Binary Search Tree

Step 2 - Insert 15.


As 15 is smaller than 45, so insert it as the root node of the left subtree.
Binary Search Tree

Step 3 - Insert 79.


As 79 is greater than 45, so insert it as the root node of the right subtree.

Binary Search Tree

Step 4 - Insert 90.


90 is greater than 45 and 79, so it will be inserted as the right subtree of 79.

Binary Search Tree

Step 5 - Insert 10.


10 is smaller than 45 and 15, so it will be inserted as a left subtree of 15.
Binary Search Tree

Step 6 - Insert 55.


55 is larger than 45 and smaller than 79, so it will be inserted as the left subtree of 79.

Binary Search Tree

Step 7 - Insert 12.


12 is smaller than 45 and 15 but greater than 10, so it will be inserted as the right subtree of 10

Binary Search Tree

Step 8 - Insert 20.


20 is smaller than 45 but greater than 15, so it will be inserted as the right subtree of 15
Binary Search Tree

Step 9 - Insert 50
50 is greater than 45 but smaller than 79 and 55. So, it will be inserted as a left subtree of 55

Binary Search Tree

Searching in Binary search tree

1.First, compare the element to be searched with the root element of the tree.

2.If root is matched with the target element, then return the node's location.

3.If it is not matched, then check whether the item is less than the root element, if it is smaller

than the root element, then move to the left subtree.

4.If it is larger than the root element, then move to the right subtree.

5.Repeat the above procedure recursively until the match is found.

6.If the element is not found or not present in the tree, then return NULL.

Binary Search Tree


Algorithm to search an element in Binary search tree
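
The slide's algorithm figure is not reproduced here; an illustrative Python sketch of BST insertion and search:

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    # Smaller keys go left, larger go right (applied recursively).
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    # Follow one root-to-leaf path; returns the node or None (NULL).
    if root is None or root.key == key:
        return root
    if key < root.key:
        return search(root.left, key)
    return search(root.right, key)

root = None
for k in [45, 15, 79, 90, 10, 55, 12, 20, 50]:
    root = insert(root, k)
print(search(root, 20) is not None)   # True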
Binary Search Tree

Time Complexity:
• Best-case: O(1), the element is found at the root.

• Average-case: O(log n), the tree is almost balanced.

• Worst-case: O(n), the tree is just one path (elements were inserted in sorted order).

Algorithms

Lecture 7

Lecture #6
• Graph Algorithms:
• Graph representation

• Traversing and searching

• Shortest-path
Graphs
• Graph G = (V, E)
• V = set of vertices
• E = set of edges ⊆ (V × V)
• Types of graphs
• Undirected: edge (u, v) = (v, u); for all v, (v, v) ∉ E (no self-loops).
• Directed: (u, v) is an edge from u to v, denoted as u → v. Self-loops are allowed.
• Weighted: each edge has an associated weight, given by a weight function w : E → R.

Representations of graphs
• Let G = (V, E), |V| = n and |E| = m.
• The adjacency matrix of G, written A(G), is the n-by-n matrix in which entry a_ij is the number of edges in G with endpoints {v_i, v_j}.

Example (graph with vertices w, x, y, z and edges a, b, c, d, e; the drawing is omitted):

        w  x  y  z
    w   0  1  1  0
    x   1  0  2  0
    y   1  2  0  1
    z   0  0  1  0

Representations of graphs
• Let G = (V, E), |V| = n and |E| = m.
• The incidence matrix M(G) is the n-by-m matrix in which entry m_ij is 1 if v_i is an endpoint of e_j and otherwise is 0.

Example (same graph; the drawing is omitted):

        a  b  c  d  e
    w   1  1  0  0  0
    x   1  0  1  1  0
    y   0  1  1  1  1
    z   0  0  0  0  1
Representations of graphs
• Adjacency matrix of a multigraph with vertices v1, ..., v5 and edges e1, ..., e6 (the drawing is omitted; e6 is a self-loop at v1, and e3, e5 are parallel edges between v3 and v5):

         v1  v2  v3  v4  v5
    v1    2   1   0   0   0
    v2    1   0   1   0   1
    v3    0   1   0   0   2
    v4    0   0   0   0   0
    v5    0   1   2   0   0

Representations of graphs
• Incidence matrix of the same multigraph (a self-loop contributes 2 in its column):

         e1  e2  e3  e4  e5  e6
    v1    1   0   0   0   0   2
    v2    1   1   0   1   0   0
    v3    0   1   1   0   1   0
    v4    0   0   0   0   0   0
    v5    0   0   1   1   1   0

Graph-searching Algorithms

• Searching a graph:
• Systematically follow the edges of a graph
to visit the vertices of the graph.

• Used to discover the structure of a graph.

• Standard graph-searching algorithms.


• Breadth-first Search (BFS).

• Depth-first Search (DFS).


Breadth-first Search
• Input: Graph G = (V, E), either directed or undirected, and source vertex s ∈ V.

• Output:
• d[v] = distance (smallest number of edges, i.e., shortest path) from s to v, for all v ∈ V. d[v] = ∞ if v is not reachable from s.

• π[v] = u such that (u, v) is the last edge on a shortest path from s to v.

• u is v's predecessor.

• Builds a breadth-first tree with root s that contains all reachable vertices.

Breadth-First Search (BFS)


pseudo-code BFS

Set all nodes to "not visited";
q = new Queue();
q.enqueue(initial node);
while (q ≠ empty) do
{
    x = q.dequeue();
    if (x has not been visited)
    {
        visited[x] = true;            // Visit node x!
        for (every edge (x, y))       // we are using all edges!
            if (y has not been visited)
                q.enqueue(y);         // Use the edge (x, y)!!!
    }
}
BFS
Rule 0- Insert initial node in the queue.

Rule 1- Remove the head of the queue and Mark it as visited.

Rule 2 − Insert all adjacent nodes (to removed one) into queue.

Rule 3 − If no adjacent vertex is found, then stop.

Rule 4 − Repeat Rule 1 to Rule 3 until the queue is empty.


Example (BFS)

The example graph (from CLRS) has vertices r, s, t, u on the top row and v, w, x, y on the bottom row, with source s. The slides' animation frames are summarized here; each vertex is shown with its distance d, and Q is the queue after each step:

    Q: s(0)
    Q: w(1) r(1)
    Q: r(1) t(2) x(2)
    Q: t(2) x(2) v(2)
    Q: x(2) v(2) u(3)
    Q: v(2) u(3) y(3)
    Q: u(3) y(3)
    Q: y(3)
    Q: (empty)

Final distances: d[s] = 0; d[r] = d[w] = 1; d[t] = d[x] = d[v] = 2; d[u] = d[y] = 3. The edges along which vertices were discovered form the breadth-first (BF) tree.

Analysis of BFS
• Initialization takes O(V).
• Traversal loop:
• After initialization, each vertex is enqueued and dequeued at most once, and each queue operation takes O(1). So the total time for queuing is O(V).
• The adjacency list of each vertex is scanned at most once. The sum of the lengths of all adjacency lists is Θ(E).

• Summing up over all vertices ⇒ the total running time of BFS is O(V + E), linear in the size of the adjacency-list representation of the graph.
Example 2
• Show the BFS tree for the following graph, starting from u (the drawing is omitted; vertices u, x on top, v, y in the middle, w, z at the bottom):

Resulting distances: d[u] = 0, d[x] = 1, d[v] = 1, d[y] = 2, d[w] = 3, d[z] = 4. The discovery edges form the BF tree.

Another pseudo-code

BFS(G, s)
    for each vertex u in V[G] − {s}
        do color[u] ← WHITE
           d[u] ← ∞
           π[u] ← NIL
    color[s] ← GRAY
    d[s] ← 0
    π[s] ← NIL
    Q ← ∅
    ENQUEUE(Q, s)
    while Q ≠ ∅
        do u ← DEQUEUE(Q)
           for each v in Adj[u]
               do if color[v] = WHITE
                     then color[v] ← GRAY
                          d[v] ← d[u] + 1
                          π[v] ← u
                          ENQUEUE(Q, v)
           color[u] ← BLACK

white: undiscovered; gray: discovered; black: finished.
Q: a queue of discovered vertices; color[v]: color of v; d[v]: distance from s to v; π[v]: predecessor of v.
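
An illustrative Python version of this BFS, using collections.deque as the queue (the demo graph matches the example above):

from collections import deque

def bfs(adj, s):
    # adj: dict mapping vertex -> list of neighbours; s: source vertex.
    dist = {s: 0}
    parent = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:          # "white" vertex: not yet discovered
                dist[v] = dist[u] + 1
                parent[v] = u
                q.append(v)
    return dist, parent

adj = {'s': ['r', 'w'], 'r': ['s', 'v'], 'v': ['r'],
       'w': ['s', 't', 'x'], 't': ['w', 'x', 'u'],
       'x': ['w', 't', 'y'], 'u': ['t', 'y'], 'y': ['x', 'u']}
print(bfs(adj, 's')[0])   # matches the distances in the example above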
Depth-first Search (DFS)
• Explore edges out of the most recently discovered
vertex v.
• When all edges of v have been explored, backtrack to
explore other edges leaving the vertex from which v
was discovered (its predecessor).
• “Search as deep as possible first.”
• Continue until all vertices reachable from the original
source are discovered.
• If any undiscovered vertices remain, then one of them
is chosen as a new source and search is repeated from
that source.

Depth-First Search (DFS)


Pseudo-code DFS(G, v)   (v is the vertex where the search starts)

Stack S := {};                          (start with an empty stack)
for each vertex u, set visited[u] := false;
push S, v;
while (S is not empty) do
    u := pop S;
    if (not visited[u]) then
        visited[u] := true;
        for each unvisited neighbour w of u
            push S, w;
    end if
end while
END DFS()

(In the slide's example figure, omitted here, the nodes are visited in the order 1 2 4 5 3.)
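
An illustrative Python version of this stack-based DFS (the demo graph reproduces the slide's visiting order 1 2 4 5 3):

def dfs(adj, v):
    # adj: dict vertex -> list of neighbours; returns the visit order.
    visited, order, stack = set(), [], [v]
    while stack:
        u = stack.pop()
        if u not in visited:
            visited.add(u)
            order.append(u)
            for w in reversed(adj[u]):   # reversed: visit neighbours in list order
                if w not in visited:
                    stack.append(w)
    return order

adj = {1: [2, 3], 2: [1, 4, 5], 3: [1], 4: [2], 5: [2]}
print(dfs(adj, 1))   # [1, 2, 4, 5, 3]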

Algorithms

Lecture 8

Dijkstra's algorithm
Author : Edsger Wybe Dijkstra

"Computer Science is no more about computers than


astronomy is about telescopes."

http://www.cs.utexas.edu/~EWD/
Edsger Wybe Dijkstra
- May 11, 1930 – August 6, 2002

- Received the 1972 A. M. Turing Award, widely considered


the most prestigious award in computer science.

- The Schlumberger Centennial Chair of Computer Sciences


at The University of Texas at Austin from 1984 until 2000

- Made a strong case against use of the GOTO statement in


programming languages and helped lead to its deprecation.

- Known for his many essays on programming.

Single-Source Shortest Path Problem

Single-Source Shortest Path Problem - The problem


of finding shortest paths from a source vertex v to all
other vertices in the graph.

Dijkstra's algorithm
Dijkstra's algorithm - is a solution to the single-source
shortest path problem in graph theory.

Works on both directed and undirected graphs. However, all


edges must have nonnegative weights.

Approach: Greedy

Input: Weighted graph G={E,V} and source vertex v∈V, such


that all edge weights are nonnegative

Output: Lengths of shortest paths (or the shortest paths


themselves) from a given source vertex v∈V to all other
vertices
Dijkstra's algorithm - Pseudocode

dist[s] ← 0                            (distance to source vertex is zero)
for all v ∈ V − {s}
    do dist[v] ← ∞                     (set all other distances to infinity)
S ← ∅                                  (S, the set of visited vertices, is initially empty)
Q ← V                                  (Q, the queue, initially contains all vertices)
while Q ≠ ∅                            (while the queue is not empty)
    do u ← mindistance(Q, dist)        (select the element of Q with the min. distance)
       S ← S ∪ {u}                     (add u to the list of visited vertices)
       for all v ∈ neighbors[u]
           do if dist[v] > dist[u] + w(u, v)        (if a new shortest path is found)
                 then dist[v] ← dist[u] + w(u, v)   (set the new value of the shortest path)
                 (if desired, add traceback code)
return dist
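
An illustrative Python implementation of the adjacency-list + binary-heap variant mentioned below, using heapq (the demo graph and names are made up):

import heapq

def dijkstra(adj, s):
    # adj: dict u -> list of (v, weight), with nonnegative weights.
    dist = {u: float('inf') for u in adj}
    dist[s] = 0
    pq = [(0, s)]                      # priority queue of (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                   # stale entry; u already finalized
        for v, w in adj[u]:
            if dist[v] > d + w:        # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

adj = {'a': [('b', 4), ('c', 2)], 'b': [('c', 5), ('d', 10)],
       'c': [('e', 3)], 'd': [('f', 11)], 'e': [('d', 4)], 'f': []}
print(dijkstra(adj, 'a'))   # {'a': 0, 'b': 4, 'c': 2, 'e': 5, 'd': 9, 'f': 20}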

Dijkstra Animated Example

(The slides step through an animated example here; the animation figures are omitted.)

Implementations and Running Times

The simplest implementation is to store vertices in an array or linked list. This will produce a running time of

    O(|V|² + |E|)

For sparse graphs, or graphs with very few edges and many nodes, it can be implemented more efficiently by storing the graph in an adjacency list and using a binary heap or priority queue. This will produce a running time of

    O((|E| + |V|) log |V|)
Applications of Dijkstra's Algorithm
- Traffic Information Systems are most prominent use
- Mapping (Map Quest, Google Maps)
- Routing Systems


Algorithms
Lecture 9

Minimum Spanning Trees


Definition
• A Minimum Spanning Tree (MST) is a
subgraph of an undirected graph such that
the subgraph spans (includes) all nodes, is
connected, is acyclic, and has minimum
total edge weight.

Algorithm Characteristics
• Both Prim’s and Kruskal’s Algorithms work
with undirected graphs.
• Both work with weighted and unweighted
graphs but are more interesting when edges
are weighted
• Both are greedy algorithms that produce
optimal solutions

Kruskal’s Algorithm

• Work with edges, rather than nodes


• Two steps:
– Sort edges by increasing edge weight
– Select the first |V| – 1 edges that do not generate
a cycle.

Walk-Through
Consider an undirected, weighted graph with vertices A through H (the drawing is omitted). Its edges, sorted by increasing edge weight, are:

    edge    weight        edge    weight
    (D,E)   1             (B,E)   4
    (D,G)   2             (B,F)   4
    (E,G)   3             (B,H)   4
    (C,D)   3             (A,H)   5
    (G,H)   3             (D,F)   6
    (C,F)   3             (A,B)   8
    (B,C)   4             (A,F)   10

Select the first |V| − 1 = 7 edges that do not generate a cycle, taking them in sorted order:

    (D,E) 1   - accepted
    (D,G) 2   - accepted
    (E,G) 3   - rejected: accepting edge (E,G) would create a cycle
    (C,D) 3   - accepted
    (G,H) 3   - accepted
    (C,F) 3   - accepted
    (B,C) 4   - accepted
    (B,E) 4   - rejected (cycle)
    (B,F) 4   - rejected (cycle)
    (B,H) 4   - rejected (cycle)
    (A,H) 5   - accepted
    (D,F) 6, (A,B) 8, (A,F) 10   - not considered

Done: 7 edges accepted.
Total cost = Σ dv = 1 + 2 + 3 + 3 + 3 + 4 + 5 = 21
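
An illustrative Python Kruskal for this walk-through; union-find is used for cycle detection (a standard choice, not detailed in the slides):

def kruskal(vertices, edges):
    # edges: list of (weight, u, v); union-find detects cycles.
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    mst, total = [], 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                        # no cycle: accept the edge
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [(1,'D','E'), (2,'D','G'), (3,'E','G'), (3,'C','D'), (3,'G','H'),
         (3,'C','F'), (4,'B','C'), (4,'B','E'), (4,'B','F'), (4,'B','H'),
         (5,'A','H'), (6,'D','F'), (8,'A','B'), (10,'A','F')]
print(kruskal('ABCDEFGH', edges)[1])   # total cost: 21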

Algorithms

Lecture 10

Lecture #9
• Greedy Algorithms.
• Making Change.

• Fractional Knapsack Problem.

• Task Scheduling.
The Greedy Method Technique

• The greedy method is a general algorithm design paradigm, built


on the following elements:
• configurations: different choices, collections, or values to find.

• objective function: a score assigned to configurations, which


we want to either maximize or minimize.

• It works best when applied to problems with the greedy-choice


property:
• A globally-optimal solution can always be found by a series of
local improvements from a starting configuration.

The Greedy Method Technique


• In this context, we are talking about optimization problems.

• There are some important terminology:

• Solution: feasible solution meets the constraints of the problem.

• Solution space: all possible feasible solutions.

• Optimization function: the criteria of solutions. It gives each


solution a score to judge the solution if it is better or worse than
others.

• Optimization problems may be minimization problems (i.e., low


value for objective function is better) or maximization.

The Greedy Method Technique


• Greedy methodology involves making the best choice in
the current step of algorithm (this is what we call local
improvement).

• However, the result may not be the globally optimal solution!

• This tells us the following fact:

There is no guarantee that a greedy algorithm will find the optimal solution on each run, but it usually finds a solution within a factor of the optimal one.
Example 1: Making Change

• Problem: A dollar amount to reach and a collection of coin

amounts to use to get there.

• Configuration: A dollar amount yet to return to a customer plus

the coins already returned

• Objective function: Minimize number of coins returned.

• Greedy solution: Always return the largest coin you can.

Example 1: Making Change


• Example 1: Coins are valued $.32, $.08, $.01
• Has the greedy-choice property, since no amount over $.32
can be made with a minimum number of coins by omitting
a $.32 coin (similarly for amounts over $.08, but under
$.32).

• Example 2: Coins are valued $.30, $.20, $.05, $.01

• Does not have the greedy-choice property, since $.40 is best made with two $.20's, but the greedy solution will pick three coins (which ones? $.30, $.05, and $.05).
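
An illustrative Python sketch of the greedy rule, showing the failure on the second coin system (amounts in cents):

def greedy_change(amount_cents, coins):
    # Greedy choice: always return the largest coin that fits.
    result = []
    for c in sorted(coins, reverse=True):
        while amount_cents >= c:
            amount_cents -= c
            result.append(c)
    return result

print(greedy_change(40, [30, 20, 5, 1]))   # [30, 5, 5]: 3 coins; optimal is [20, 20]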

Example 2: The Fractional Knapsack Problem

• Given: A set S of n items, with each item i having


• bi - a positive benefit

• wi - a positive weight

• Goal: Choose items with maximum total benefit but with


weight at most W.

• If we are allowed to take fractional amounts, then this is the


fractional knapsack problem.
Example 2: The Fractional Knapsack Problem
• The fractional knapsack problem:

• In this case, we let xi denote the amount we take of item i.

• Objective: maximize Σ_{i∈S} bi (xi / wi)

• Constraint: Σ_{i∈S} xi ≤ W

In the Knapsack problem (also called the Integer knapsack problem) no fractions are allowed. Can you formulate this problem?

Example 2
• Given: A set S of n items, with each item i having

• bi - a positive benefit

• wi - a positive weight

• Goal: Choose items with maximum total benefit but with weight at most W ("knapsack" capacity: 10 ml).

    Item:            1      2      3      4      5
    Weight:          4 ml   8 ml   2 ml   6 ml   1 ml
    Benefit:         $12    $32    $40    $30    $50
    Value ($/ml):    3      4      20     5      50

Solution:
• 1 ml of item 5
• 2 ml of item 3
• 6 ml of item 4
• 1 ml of item 2

Algorithm fractionalKnapsack(S, W)

Input: set S of items with benefit bi and weight wi; max. weight W
Output: amount xi of each item i to maximize benefit with weight at most W

for each item i in S
    xi ← 0
    vi ← bi / wi                  {value}
w ← 0                             {total weight}
while w < W
    remove item i with highest vi
    xi ← min{wi, W − w}
    w ← w + min{wi, W − w}

Run time: O(n log n)
The Fractional Knapsack Algorithm
• Greedy choice: Keep taking the item with the highest value (benefit-to-weight ratio).

• Since Σ_{i∈S} bi (xi / wi) = Σ_{i∈S} (bi / wi) xi

• Run time: O(n log n).

• Correctness: Suppose there is a better solution:

• Then there is an item i with higher value than a chosen item j (i.e., vi > vj), but xi < wi and xj > 0. If we substitute some of j with i, we get a better solution.

• How much of i: min{wi − xi, xj}

• Thus, there is no better solution than the greedy one.

Example 3: Task Scheduling

• Given: a set T of n tasks, each having:


• A start time, si

• A finish time, fi (where si < fi)

• Goal: Perform all the tasks using a minimum number of


“machines.”

(Slide figure: tasks laid out on Machine 1, Machine 2, Machine 3 along a timeline from 1 to 9; omitted.)

Algorithm taskSchedule(T)

Input: set T of tasks with start time si and finish time fi
Output: non-conflicting schedule with minimum number of machines

m ← 0                                 {number of machines}
while T is not empty
    remove task i with smallest si
    if there's a machine j with no conflict for i then
        schedule i on machine j
    else
        m ← m + 1
        schedule i on machine m
Task Scheduling Algorithm
• Greedy choice: consider tasks by their start time and use as few machines as possible with this order.
• Run time: O(n log n).

• Correctness: Suppose there is a better schedule.

• That is, suppose we could use k − 1 machines while the algorithm uses k.

• Let i be the first task scheduled on machine k.

• Task i must conflict with k − 1 other tasks, one on each of the other machines.

• But that means there is no non-conflicting schedule using k − 1 machines.

Example3
• Given: a set T of n tasks, each having:
• A start time, si

• A finish time, fi (where si < fi)

• [1,4], [1,3], [2,5], [3,7], [4,7], [6,9], [7,8] (ordered by start)

• Goal: Perform all tasks on min. number of machines.

(Slide figure: the seven tasks laid out on Machines 1-3 along a timeline from 1 to 9; omitted. Three machines suffice.)
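
An illustrative Python sketch of the greedy scheduler, run on this instance (tasks treated as half-open intervals [s, f)):

def task_schedule(tasks):
    # tasks: list of (start, finish); returns (machine count, assignments).
    machines = []                       # machines[j] = finish time of last task on j
    schedule = []
    for s, f in sorted(tasks):          # consider tasks by start time
        for j, free_at in enumerate(machines):
            if free_at <= s:            # machine j is free: reuse it
                machines[j] = f
                schedule.append(((s, f), j + 1))
                break
        else:                           # all machines conflict: open a new one
            machines.append(f)
            schedule.append(((s, f), len(machines)))
    return len(machines), schedule

tasks = [(1,4), (1,3), (2,5), (3,7), (4,7), (6,9), (7,8)]
print(task_schedule(tasks)[0])   # 3 machines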

Task Scheduling Algorithm

• Second formulation of this problem:

Given: one machine and n tasks.

Goal: to minimize the average waiting time of tasks.

• Third formulation of the problem:

Given: m machine and n tasks.

Goal: assign each machine just one task in order to minimize

the total operation time. (i.e., parallel operation)


Algorithms

Lecture 11

Lecture 10

• Dynamic Programming Paradigm.

• Basic Technique.

• Area of Applications.

• Examples.

Dynamic Programming Methodology

• Dynamic Programming model works when the problem can be


divided into small subproblems (or stages) where the output of each
subproblem is the input to the next one.

• Dynamic Programming (DP) determines the optimum solution to


an n-variable problem by decomposing it into n stages with each
stage constituting a single-variable sub problem.

•Invented by American mathematician Richard Bellman in the 1950s


to solve optimization problems
Recursive Nature of Computations in DP
• Computations in DP are done recursively, in the sense that the
optimum solution of one sub problem is used as an input to the
next sub problem.

• By the time the last sub problem is solved, the optimum

solution for the entire problem is at hand.

•The sub problems are normally linked by common constraints. As


we move from one sub problem to the next, the feasibility of these
common constraints must be maintained.

Recursive Nature of Computations in DP

• Two main properties of a problem that suggest that the given

problem can be solved using Dynamic Programming:

 Overlapping Subproblems

 Optimal Substructure

How To Devise a Dynamic Programming Approach

• Given a problem that is solvable by a Divide & Conquer method

• Prepare a table to store results of sub-problems

• Replace base case by filling the start of the table

• Replace recursive calls by table lookups

• Devise for-loops to fill the table with sub-problem solutions instead


of returning values
• Solution is at the end of the table

• Notice that previous table locations also contain valid (optimal) sub-
problem solutions
Example

• We illustrate with the famous STAGECOACH problem.

• It concerns a mythical fortune seeker in Missouri who decided to go west to join the gold rush in California during the mid-19th century. The journey would require travelling by stagecoach through different states. The possible choices are shown in the figure below. Each state is represented by a circled letter and the direction of travel is always from left to right in the diagram.

(The slide's diagram is omitted. Reconstructed from the solution that follows, the road distances are:
A-B 2, A-C 4, A-D 3;
B-E 7, B-F 4, B-G 6; C-E 3, C-F 2, C-G 4; D-E 4, D-F 1, D-G 3;
E-H 1, E-I 4; F-H 6, F-I 3; G-H 3, G-I 3;
H-J 3, I-J 4.)

Example

• Thus, four stages were required to travel from the point of

embarkation in state A (Missouri) to his destination in state J

(California). The distances between two states are also shown.

Thus the problem is to find the shortest route the fortune-seeker

should take.

• Let’s solve it in a smart way!


Solution (working backwards, one stage at a time):

Stage 4: from H: cost 3 (H → J); from I: cost 4 (I → J).

Stage 3: from E: min(1 + 3, 4 + 4) = 4, via H;
         from F: min(6 + 3, 3 + 4) = 7, via I;
         from G: min(3 + 3, 3 + 4) = 6, via H.

Stage 2: from B: min(7 + 4, 4 + 7, 6 + 6) = 11, via E;
         from C: min(3 + 4, 2 + 7, 4 + 6) = 7, via E;
         from D: min(4 + 4, 1 + 7, 3 + 6) = 8, via E or F.

Stage 1: from A: min(2 + 11, 4 + 7, 3 + 8) = 11, via C or D.

Solution:
Thus the optimum route will be

    A → C → E → H → J
or  A → D → E → H → J
or  A → D → F → I → J

with optimum value 11.
Problem Formulation
Thus

    F_n(x_n) = min over y_n of f_n(x_n, y_n) = f_n(x_n, y_n*)

where

    f_n(x_n, y_n) = immediate cost (stage n) + minimum future cost (stages n+1 onward)
                  = c_{x_n, y_n} + F_{n+1}(x_{n+1})

and x_{n+1} is the state into which the system is transformed by the choice of y_n.

Characteristics of DP problems

We pay special attention to the three basic elements of a DP

model:

 Definition of the stages

 Definition of the alternatives at each stage

 Definition of the states for each stage

Knapsack Problem
• The value and mass of each item are given.
• Maximize profit.
• Subject to the mass constraint of the knapsack: 15 kg.
• Being a smart kid, you apply dynamic programming.
Knapsack Problem
• Given: A set S of n items, with each item i having

– wi - a positive weight

– bi - a positive benefit

• Goal: Choose items with maximum total benefit but with weight at most W.
• If we are not allowed to take fractional amounts, then this is the 0/1 knapsack problem.

– In this case, we let T denote the set of items we take.

– Objective: maximize Σ_{i∈T} bi

– Constraint: Σ_{i∈T} wi ≤ W

Knapsack Problem

Example 1: "knapsack" is a box of width 9 in.

    Item:      1      2      3      4      5
    Weight:    4 in   2 in   2 in   6 in   2 in
    Benefit:   $20    $3     $6     $25    $80

Solution:
• item 5 ($80, 2 in)
• item 3 ($6, 2 in)
• item 1 ($20, 4 in)

Knapsack Problem

• Define B[k, w] to be the best selection from the first k items with weight at most w.

• Good news: this does have subproblem optimality:

    B[k, w] = B[k−1, w]                                  if wk > w
    B[k, w] = max{B[k−1, w], B[k−1, w − wk] + bk}        otherwise

• This is the recurrence relationship.

Knapsack Problem

• Pseudo code:

Algorithm 01Knapsack(S, W):
  Input: set S of n items with benefit bi and weight wi; maximum weight W
  Output: benefit of best subset of S with weight at most W
  let A and B be arrays of length W + 1
  for w ← 0 to W do
      B[w] ← 0
  for k ← 1 to n do
      copy array B into array A
      for w ← wk to W do
          if A[w − wk] + bk > A[w] then
              B[w] ← A[w − wk] + bk
  return B[W]

Running time: O(n·W)

Knapsack Problem

• Pseudo code (same algorithm with explicit indices i, j):

Algorithm ALG(S, W):
  let A and B be arrays of length W + 1
  for w ← 0 to W do
      B[w] ← 0
  for i = 1 to n do
      copy array B into array A
      for j = wi to W do
          if A[j − wi] + bi > A[j] then
              B[j] ← A[j − wi] + bi
  return B[W]

Running time: O(n·W)
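
An illustrative Python version of this table-filling algorithm, run on Example 1:

def knapsack_01(items, W):
    # items: list of (benefit, weight); B[w] = best benefit with capacity w.
    B = [0] * (W + 1)
    for b, wk in items:
        A = B[:]                        # copy the previous row
        for w in range(wk, W + 1):
            if A[w - wk] + b > B[w]:
                B[w] = A[w - wk] + b
    return B[W]

items = [(20, 4), (3, 2), (6, 2), (25, 6), (80, 2)]
print(knapsack_01(items, 9))   # 106: items 1, 3 and 5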

Knapsack Problem
• Example 1 solution (each cell shows the benefit, with the chosen item set in parentheses):

                       Weight w
    k  wk/bk   1    2      3      4        5        6          7          8            9
    0  0/0     0    0      0      0        0        0          0          0            0
    1  4/20    0    0      0      20(1)    20(1)    20(1)      20(1)      20(1)        20(1)
    2  2/3     0    3(2)   3(2)   20(1)    20(1)    23(1,2)    23(1,2)    23(1,2)      23(1,2)
    3  2/6     0    6(3)   6(3)   20(1)    20(1)    26(1,3)    26(1,3)    29(1,2,3)    29(1,2,3)
    4  6/25    0    6(3)   6(3)   20(1)    20(1)    26(1,3)    26(1,3)    31(3,4)      31(3,4)
    5  2/80    0    80(5)  80(5)  86(3,5)  86(3,5)  100(1,5)   100(1,5)   106(1,3,5)   106(1,3,5)

Knapsack Problem

Example 1 Solution: "knapsack" is a box of width 9 in.

    Item:      1      2      3      4      5
    Weight:    4 in   2 in   2 in   6 in   2 in
    Benefit:   $20    $3     $6     $25    $80

Solution:
• item 5 ($80, 2 in)
• item 3 ($6, 2 in)
• item 1 ($20, 4 in)

Knapsack Problem

• Example 2: four items and capacity W = 8
