Chapter 3

Discrete Mathematics, Chapter 3


3.0 Algorithms

• Algorithm
• Pseudocode
• Computer Program
• Flow Chart
3.0 Algorithms
The term algorithm is a corruption of the name al-Khowarizmi, a
mathematician of the ninth century, whose book on Hindu numerals is the
basis of modern decimal notation. Originally, the word algorism was used for
the rules for performing arithmetic using decimal notation.
An algorithm is a finite sequence of precise instructions for performing a
computation or for solving a problem.
Example 1: Describe an algorithm for finding the maximum (largest) value in a finite
sequence of integers.

Solution of Example 1: We perform the following steps.

1. Set the temporary maximum equal to the first integer in the sequence. (The temporary
maximum will be the largest integer examined at any stage of the procedure.)

2. Compare the next integer in the sequence to the temporary maximum, and if it is larger
than the temporary maximum, set the temporary maximum equal to this integer.

3. Repeat the previous step if there are more integers in the sequence.
4. Stop when there are no integers left in the sequence. The temporary maximum at this point
is the largest integer in the sequence.
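A minimal Python sketch of this procedure (the function name max_in_sequence is ours, not part of the text):

def max_in_sequence(seq):
    """Return the largest integer in a nonempty finite sequence."""
    temp_max = seq[0]              # step 1: the temporary maximum is the first integer
    for value in seq[1:]:          # steps 2-3: examine each remaining integer in turn
        if value > temp_max:
            temp_max = value       # a larger integer becomes the new temporary maximum
    return temp_max                # step 4: nothing is left, so temp_max is the answer

print(max_in_sequence([8, 3, 12, 7, 12, 5]))   # prints 12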
PROPERTIES OF ALGORITHMS
There are several properties that algorithms generally share. They are useful to
keep in mind when algorithms are described. These properties are:

▶ Input. An algorithm has input values from a specified set.


▶ Output. From each set of input values an algorithm produces output values
from a specified set. The output values are the solution to the problem.
▶ Definiteness. The steps of an algorithm must be defined precisely.
▶ Correctness. An algorithm should produce the correct output values for each
set of input values.
▶ Finiteness. An algorithm should produce the desired output after a finite (but
perhaps large) number of steps for any input in the set.
▶ Effectiveness. It must be possible to perform each step of an algorithm exactly
and in a finite amount of time.
3.1 Searching Algorithms
The problem of locating an element in an ordered list occurs in many contexts. For
instance, a program that checks the spelling of words searches for them in a dictionary,
which is just an ordered list of words. Problems of this kind are called searching problems.
Linear search (also called sequential search)
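A minimal Python sketch of linear search, returning the location of the target or -1 if it is absent (the function name linear_search is ours):

def linear_search(a, x):
    """Linear (sequential) search: compare x with each element of a in turn."""
    for i, value in enumerate(a):
        if value == x:
            return i               # location of x in the list
    return -1                      # x is not in the list

print(linear_search([4, 1, 9, 7], 9))   # prints 2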
3.1 Searching Algorithms
Binary Search
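The binary search procedure itself is not reproduced on this slide; a minimal Python sketch for a list sorted in increasing order (the function name binary_search is ours) is:

def binary_search(a, x):
    """Binary search in a sorted list: repeatedly halve the interval
    that must contain x, if x is present at all."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if a[mid] == x:
            return mid             # found x at position mid
        if a[mid] < x:
            low = mid + 1          # x can only lie in the upper half
        else:
            high = mid - 1         # x can only lie in the lower half
    return -1                      # x is not in the list

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # prints 3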
3.1 Searching Algorithms, Example
3.1.3 Sorting
Ordering the elements of a list is a problem that occurs in many contexts. For
example, to produce a telephone directory it is necessary to
alphabetize the names of subscribers. Similarly, producing a directory of
songs available for downloading requires that their titles be put in alphabetic
order. Putting addresses in order in an e-mail mailing list can determine
whether there are duplicated addresses. Creating a useful dictionary requires
that words be put in alphabetical order. Similarly, generating a parts list
requires that we order them according to increasing part number.
Bubble Sort

Insertion Sort
Bubble Sort, Example
Insertion Sort, Example
ALGORITHM, Naive String Matcher.

The steps of the naive string matcher with P = eye in T = eceyeye.


Matches are identified with a solid line and mismatches with a jagged
line. The algorithm finds two valid shifts, s = 2 and s = 4.
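A minimal Python sketch of the naive string matcher (the function name naive_string_matcher is ours); it reproduces the two valid shifts found above:

def naive_string_matcher(T, P):
    """Naive string matching: try every shift s and report those at which
    the pattern P matches the text T character by character."""
    n, m = len(T), len(P)
    shifts = []
    for s in range(n - m + 1):     # candidate shifts 0, 1, ..., n - m
        if T[s:s + m] == P:
            shifts.append(s)       # s is a valid shift
    return shifts

print(naive_string_matcher("eceyeye", "eye"))   # prints [2, 4]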
Greedy Algorithms
Many algorithms we will study in this book are designed to solve optimization
problems. The goal of such problems is to find a solution to the given problem
that either minimizes or maximizes the value of some parameter. Optimization
problems studied later in this text include finding a route between two cities with
least total mileage, determining a way to encode messages using the fewest bits
possible, and finding a set of fiber links between network nodes using the least
amount of fiber.
Cashier’s Algorithm
• 100 cents = 1 dollar
• 25 cents = 1 quarter
• 10 cents = 1 dime
• 5 cents = 1 nickel
• 1 cent = 1 penny
Greedy Algorithms
cashier’s algorithm
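A minimal Python sketch of the cashier's (greedy change-making) algorithm with the denominations listed above (the function name make_change is ours):

def make_change(cents, denominations=(25, 10, 5, 1)):
    """Cashier's algorithm: repeatedly hand out the largest coin
    that does not exceed the amount still owed."""
    change = {}
    for coin in denominations:             # denominations in decreasing order
        count, cents = divmod(cents, coin) # how many of this coin fit
        if count:
            change[coin] = count
    return change

print(make_change(67))   # prints {25: 2, 10: 1, 5: 1, 1: 2}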
Greedy Algorithms
Greedy Algorithm for Scheduling Talks.
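The standard greedy strategy for scheduling as many talks as possible in one lecture hall is to keep choosing the compatible talk that ends earliest. A minimal sketch, assuming talks are given as (start, end) pairs (the function name schedule_talks is ours):

def schedule_talks(talks):
    """Greedy talk scheduling: repeatedly add the talk with the earliest
    end time that does not overlap the talks already chosen."""
    scheduled = []
    last_end = float("-inf")
    for start, end in sorted(talks, key=lambda t: t[1]):   # earliest end time first
        if start >= last_end:          # compatible with everything chosen so far
            scheduled.append((start, end))
            last_end = end
    return scheduled

print(schedule_talks([(9, 10), (9.5, 11), (10, 12), (11, 12)]))   # prints [(9, 10), (10, 12)]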
3.1.6 The Halting Problem
We will now describe a proof of one of the most famous theorems in
computer science. We will show that there is a problem that cannot be solved
using any procedure. That is, we will show that there are unsolvable
problems. The problem we will study is the halting problem. It asks whether
there is a procedure that does this: It takes as input a computer program and
input to the program and determines whether the program will eventually
stop when run with this input. It would be convenient to have such a
procedure, if it existed. Certainly being able to test whether a program
entered into an infinite loop would be helpful when writing and debugging
programs.
Halting means that the program, on a given input, either accepts it and halts or
rejects it and halts; it never goes into an infinite loop. In short, halting means
terminating. So can we have an algorithm that tells us whether a given program will
halt or not? In terms of Turing machines: will the machine terminate when run with
some particular input string?
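The contradiction at the heart of the proof can be sketched in Python. Both names below are hypothetical: halts is the procedure assumed (for the sake of contradiction) to decide the halting problem, and K is the diagonal program built from it.

# Assume, for contradiction, that halts(program, program_input) always returns
# True when program(program_input) eventually stops, and False otherwise.
def halts(program, program_input):
    raise NotImplementedError("hypothetical: the argument shows no such procedure exists")

# Diagonal construction: K does the opposite of what halts predicts
# for a program P run on its own description.
def K(P):
    if halts(P, P):
        while True:        # if P(P) would halt, K(P) loops forever
            pass
    return "halted"        # if P(P) would not halt, K(P) halts immediately

# Feeding K to itself is contradictory: K(K) halts exactly when
# halts(K, K) says it does not, so halts cannot exist.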
3.2 The Growth of Functions, Asymptotic Notations
Big-O (O), Big-Omega (Ω), Big-Theta (Θ)
3.2 The Growth of Functions

WORKING WITH THE DEFINITION OF BIG-O NOTATION


A useful approach for finding a pair of witnesses is to first
select a value of k for which the size of |f(x)| can be readily
estimated when x > k and to see whether we can use this
estimate to find a value of C for which
|f(x)| ≤ C|g(x)| for x > k.
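For example, to show that f(x) = x² + 2x + 1 is O(x²), pick k = 1: when x > 1 we have x < x² and 1 < x², so x² + 2x + 1 ≤ x² + 2x² + x² = 4x². Hence C = 4 and k = 1 are witnesses, and f(x) is O(x²).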
3.2 The Growth of Functions
• Show that n² is not O(n).

• Show that 7x² is O(x³). Is it also true that x³ is O(7x²)?
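Brief arguments: if n² were O(n), there would be constants C and k with n² ≤ Cn, that is, n ≤ C, for every n > k, which fails as soon as n exceeds C. On the other hand, when x > 7 we have 7x² < x · x² = x³, so C = 1 and k = 7 witness that 7x² is O(x³); but x³ is not O(7x²), since x³ ≤ 7Cx² would force x ≤ 7C for all x > k.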


3.2.3 Big-O Estimates for Some Important Functions
Polynomials can often be used to estimate the growth of functions. Instead of
analyzing the growth of polynomials each time they occur, we would like a
result that can always be used to estimate the growth of a polynomial.
Theorem 1 does this. It shows that the leading term of a polynomial dominates
its growth by asserting that a polynomial of degree n or less is O(xⁿ).

Let f(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ⋯ + a₁x + a₀, where a₀, a₁, …, aₙ are real numbers.
Using the triangle inequality, if x > 1 we have
|f(x)| = |aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ⋯ + a₁x + a₀|
≤ |aₙ|xⁿ + |aₙ₋₁|xⁿ⁻¹ + ⋯ + |a₁|x + |a₀|
≤ xⁿ(|aₙ| + |aₙ₋₁| + ⋯ + |a₁| + |a₀|),
so f(x) is O(xⁿ) with witnesses C = |aₙ| + |aₙ₋₁| + ⋯ + |a₀| and k = 1.
3.2.3 Big-O Estimates for Some Important Functions
3.2.4 The Growth of Combinations of Functions
Many algorithms are made up of two or more separate subprocedures. The number of
steps used by a computer to solve a problem with input of a specified size using such an
algorithm is the sum of the number of steps used by these subprocedures. To give a
big-O estimate for the number of steps needed, it is necessary to find big-O estimates for
the number of steps used by each subprocedure and then combine these estimates.
Big-O estimates of combinations of functions can be provided if care is taken when
different big-O estimates are combined. In particular, it is often necessary to estimate the
growth of the sum and the product of two functions. What can be said if big-O estimates
for each of two functions are known? To see what sort of estimates hold for the sum and
the product of two functions, suppose that f1(x) is O(g1(x)) and f2(x) is O(g2(x)). From the
definition of big-O notation, there are constants C1, C2, k1, and k2 such that |f1(x)| ≤ C1|g1(x)| whenever x > k1, and |f2(x)| ≤ C2|g2(x)| whenever x > k2.
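For x > max(k1, k2), |f1(x) + f2(x)| ≤ |f1(x)| + |f2(x)| ≤ C1|g1(x)| + C2|g2(x)| ≤ (C1 + C2) max(|g1(x)|, |g2(x)|). Hence f1(x) + f2(x) is O(max(|g1(x)|, |g2(x)|)), with witnesses C1 + C2 and max(k1, k2). A similar calculation shows that the product f1(x)f2(x) is O(g1(x)g2(x)).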
3.2.4 The Growth of Combinations of Functions, Examples
3.2 The Growth of Functions, Asymptotic Notations
Big-O (O), Big-Omega (Ω), Big-Theta (Θ)
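For reference, the companion definitions: f(x) is Ω(g(x)) if there are positive constants C and k with |f(x)| ≥ C|g(x)| whenever x > k, and f(x) is Θ(g(x)) if f(x) is both O(g(x)) and Ω(g(x)), that is, if f and g have the same order of growth.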
3.3 Complexity of Algorithms
• Time complexity
• Space complexity

WORST-CASE, BEST-CASE AND AVERAGE-CASE COMPLEXITY


What is the worst-case complexity of the bubble sort in terms of the number of
comparisons made?

The bubble sort described before Example 4 in Section 3.1 sorts a list by performing a
sequence of passes through the list. During each pass the bubble sort successively compares
adjacent elements, interchanging them if necessary. When the ith pass begins, the i − 1
largest elements are guaranteed to be in the correct positions. During this pass, n − i
comparisons are used. Consequently, the total number of comparisons used by the bubble
sort to order a list of n elements is (n − 1) + (n − 2) + ⋯ + 2 + 1 = n(n − 1)/2, which is Θ(n²).
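A minimal Python sketch of the bubble sort just described (sorting the list in place and returning it for convenience):

def bubble_sort(a):
    """Bubble sort: on the i-th pass, compare adjacent elements and swap
    them when out of order; after pass i the i largest elements are in place."""
    n = len(a)
    for i in range(1, n):              # passes 1, 2, ..., n - 1
        for j in range(n - i):         # n - i comparisons in pass i
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

print(bubble_sort([3, 2, 4, 1, 5]))    # prints [1, 2, 3, 4, 5]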

What is the worst-case complexity of the insertion sort in terms of the number of
comparisons made?

Solution: The insertion sort (described in Section 3.1) inserts the jth element into the correct
position among the first j − 1 elements that have already been put into the correct order. It
does this by using a linear search technique, successively comparing the jth element with
successive terms until a term that is greater than or equal to it is found or it compares aⱼ with
itself and stops because aⱼ is not less than itself. Consequently, in the worst case, j
comparisons are required to insert the jth element into the correct position. Therefore, the
total number of comparisons used by the insertion sort to sort a list of n elements is
2 + 3 + ⋯ + n = n(n + 1)/2 − 1, which is Θ(n²).
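A minimal Python sketch of the insertion sort just described, using a left-to-right linear search to find the insertion point (the function name insertion_sort is ours):

def insertion_sort(a):
    """Insertion sort: insert the j-th element into its correct position
    among the first j - 1 elements, which are already in order."""
    for j in range(1, len(a)):
        key = a[j]
        i = 0
        while i < j and a[i] < key:    # linear search: stop at the first term >= key
            i += 1
        for m in range(j, i, -1):      # shift a[i..j-1] one position to the right
            a[m] = a[m - 1]
        a[i] = key                     # insert the j-th element at position i
    return a

print(insertion_sort([3, 2, 4, 1, 5]))   # prints [1, 2, 3, 4, 5]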
3.3.3 Complexity of Matrix Multiplication
The definition of the product of two matrices can be expressed as an algorithm for
computing the product of two matrices. Suppose that C = [cij] is the m × n matrix that is the
product of the m × k matrix A = [aij] and the k × n matrix B = [bij]. The algorithm based on
the definition of the matrix product is expressed in pseudocode in Algorithm 1.
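The pseudocode of Algorithm 1 is not reproduced on this slide; a minimal Python version of the definition-based product (the function name matrix_product is ours) is:

def matrix_product(A, B):
    """Definition-based matrix product: C[i][j] is the sum over q of A[i][q] * B[q][j].
    A is m x k and B is k x n, so the result C is m x n."""
    m, k, n = len(A), len(B), len(B[0])
    C = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            for q in range(k):         # k multiplications per entry, with k - 1 essential additions
                C[i][j] += A[i][q] * B[q][j]
    return C

print(matrix_product([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # prints [[19, 22], [43, 50]]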

Example: How many multiplications and additions of integers are used by Algorithm 1 to
compute the product of two n × n matrices A and B?

Solution: There are n² entries in the product of A and B. To find each entry
requires a total of n multiplications and n − 1 additions. Hence, a total of
n³ multiplications and n²(n − 1) additions are used.
3.3.3 Complexity of Matrix Multiplication

The number of bit operations used to find the Boolean product of two n × n matrices can
be easily determined.

EXAMPLE 8 How many bit operations are used to find A ⊙ B, where A and
B are n × n zero–one matrices?

Solution: There are n² entries in A ⊙ B. Using Algorithm 2, a total of n ORs
and n ANDs are used to find each entry of A ⊙ B. Hence, 2n bit operations are
used to find each entry, and 2n³ bit operations are used to find A ⊙ B.
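The pseudocode of Algorithm 2 is not reproduced on this slide; a minimal Python sketch of the Boolean product of zero-one matrices (the function name boolean_product is ours) is:

def boolean_product(A, B):
    """Boolean product: C[i][j] is the OR over q of (A[i][q] AND B[q][j])."""
    m, k, n = len(A), len(B), len(B[0])
    C = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            for q in range(k):         # one AND and one OR per value of q
                C[i][j] = C[i][j] or (A[i][q] and B[q][j])
    return C

print(boolean_product([[1, 0], [0, 1]], [[0, 1], [1, 0]]))   # prints [[0, 1], [1, 0]]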
3.3.4 Algorithmic Paradigms
• Greedy algorithms provide an example of an algorithmic paradigm.
• BRUTE-FORCE ALGORITHMS: Brute force is an important, and basic, algorithmic paradigm.
3.3.5 Understanding the Complexity of Algorithms
TRACTABILITY A problem that is solvable using an algorithm with polynomial
(or better) worst-case complexity is called tractable, because the
expectation is that the algorithm will produce the solution to the problem
for reasonably sized input in a relatively short time. However, if the
polynomial in the big-Θ estimate has high degree (such as degree 100) or if
the coefficients are extremely large, the algorithm may take an extremely
long time to solve the problem.

The situation is much worse for problems that cannot be solved using an
algorithm with worst-case polynomial time complexity. Such problems are
called intractable.
P versus NP problem: the (still open) question of whether every problem whose solution can be verified in polynomial time (the class NP) can also be solved in polynomial time (the class P).
3.3.5 Understanding the Complexity of Algorithms
