Algorithm and Complexity
An algorithm is a specific procedure for solving a well-defined computational problem.
The development and analysis of algorithms is fundamental to all
aspects of computer science: artificial intelligence, databases,
graphics, networking, operating systems, security, and so on.
The (computational) complexity of an algorithm is a measure of the
amount of computing resources (time and space) that a particular
algorithm consumes when it runs.
Computer scientists use mathematical measures of complexity that
allow them to predict, before writing the code, how fast an
algorithm will run and how much memory it will require.
Such predictions are important guides for
programmers implementing and selecting algorithms for real-world
applications.
The complexity of an algorithm describes its efficiency in terms of the amount of memory required to process the data and the processing time.
Complexity is therefore analyzed from two perspectives: time and space.
Time Complexity:
It’s a function describing the amount of time required to run an
algorithm in terms of the size of the input.
"Time" can mean the number of memory accesses performed, the
number of comparisons between integers, the number of times
some inner loop is executed, or some other natural unit related to
the amount of real time the algorithm will take.
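As a concrete illustration (a minimal sketch added here, not from the slides), the Python function below sums a list while counting one addition per element; the count, and hence the running time, grows linearly with the input size n, i.e., O(n) time.

def sum_with_count(values):
    # Sums a list while counting the additions performed.
    total = 0
    additions = 0
    for v in values:
        total = total + v
        additions = additions + 1   # one basic operation per element
    return total, additions

# For an input of size n, exactly n additions are performed: O(n) time.
print(sum_with_count([3, 1, 4, 1, 5]))   # (14, 5)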
Space Complexity:
It’s a function describing the amount of memory an algorithm takes
in terms of the size of input to the algorithm.
We often speak of "extra" memory needed, not counting the
memory needed to store the input itself. Again, we use natural (but
fixed-length) units to measure this.
Space complexity is sometimes ignored because the space used is minimal and/or obvious; however, it can become as important an issue as time.
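As a hedged illustration (added here; the example functions are ours): reversing a list can be done with O(n) extra space by building a copy, or with O(1) extra space by swapping elements in place.

def reversed_copy(values):
    # Builds a new list of n elements: O(n) extra space.
    return values[::-1]

def reverse_in_place(values):
    # Swaps pairs inside the input list: O(1) extra space.
    i, j = 0, len(values) - 1
    while i < j:
        values[i], values[j] = values[j], values[i]
        i, j = i + 1, j - 1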
Cases in Algorithm Analysis
We can analyze an algorithm in three cases (illustrated by the sketch after this list):
1) Worst Case
2) Average Case
3) Best Case
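A minimal linear-search sketch (an illustration added here; the function name is ours) makes the three cases concrete.

def linear_search(alist, key):
    # Returns the index of key in alist, or -1 if absent.
    for i, item in enumerate(alist):
        if item == key:
            return i     # best case: key at index 0, 1 comparison
    return -1            # worst case: key absent, n comparisons

# Best case: 1 comparison. Worst case: n comparisons. Average case: if
# the key is equally likely to be at any position, about (n + 1)/2
# comparisons are expected, which is still O(n).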
Binary search, for example, repeatedly halves a sorted search interval: if the value of the search key is less than the item in the middle of the interval, narrow the interval to the lower half; otherwise, narrow it to the upper half. Repeat until the key is found or the interval is empty.
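A minimal iterative sketch of this halving step (added here in Python for illustration; it assumes the input list is already sorted):

def binary_search(alist, key):
    low, high = 0, len(alist) - 1
    while low <= high:
        mid = (low + high) // 2
        if alist[mid] == key:
            return mid           # key found
        elif key < alist[mid]:
            high = mid - 1       # narrow to the lower half
        else:
            low = mid + 1        # narrow to the upper half
    return -1                    # key not present

# Each iteration halves the interval, so the search takes O(log n) time.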
The name "Quick Sort" comes from the fact that, quick sort is
capable of sorting a list of data elements significantly faster (twice
or thrice faster) than any of the common sorting algorithms.
Like Merge sort, Quick sort also falls into the category of divide
and conquer approach of problem-solving methodology.
Implementation
def quickSort(alist):
    quickSortHelper(alist, 0, len(alist) - 1)

def quickSortHelper(alist, first, last):
    if first < last:
        splitpoint = partition(alist, first, last)
        quickSortHelper(alist, first, splitpoint - 1)
        quickSortHelper(alist, splitpoint + 1, last)

def partition(alist, first, last):
    pivotvalue = alist[first]      # first element serves as the pivot
    leftmark = first + 1
    rightmark = last
    done = False
    while not done:
        # Advance leftmark past elements <= pivot.
        while leftmark <= rightmark and alist[leftmark] <= pivotvalue:
            leftmark = leftmark + 1
        # Retreat rightmark past elements >= pivot.
        while rightmark >= leftmark and alist[rightmark] >= pivotvalue:
            rightmark = rightmark - 1
        if rightmark < leftmark:
            done = True            # marks crossed: partition finished
        else:
            alist[leftmark], alist[rightmark] = alist[rightmark], alist[leftmark]
    # Place the pivot at its final position.
    alist[first], alist[rightmark] = alist[rightmark], alist[first]
    return rightmark
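A short usage example (added for illustration):

alist = [54, 26, 93, 17, 77, 31, 44, 55, 20]
quickSort(alist)
print(alist)   # [17, 20, 26, 31, 44, 54, 55, 77, 93]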
Complexity Analysis
The total running time of the QUICKSORT function is the sum of the time taken by PARTITION(A, start, end) and the time of the two recursive calls to itself.
The comparison (if start < end) takes only a constant amount of time, so we ignore it.
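Written as a recurrence (a note added for clarity): if PARTITION splits the array into parts of sizes k and n − k − 1, then

T(n) = T(k) + T(n − k − 1) + Θ(n),

where the Θ(n) term is the cost of PARTITION itself.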
Worst Case Complexity [Big-O]: O(n²)
It occurs when the pivot element picked is always either the greatest or the smallest element.
This leads to the case in which the pivot lies at an extreme end of the sorted array: one sub-array is always empty, the other contains n − 1 elements, and quicksort is called again only on this larger sub-array.
In practice, however, quicksort performs well when the pivot values are scattered (for example, on randomly ordered input).
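In this worst case (a note added for clarity) the recurrence becomes T(n) = T(n − 1) + Θ(n), which solves to Θ(n²): the linear partition cost is paid on sub-arrays of sizes n, n − 1, ..., 1.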
Best Case Complexity [Big-omega]: Ω(n log n)
It occurs when the pivot element is always the middle element or near to the middle element.
Consider the case when the partition is always balanced, i.e., each of the two halves separated by the pivot has no more than n/2 elements: one half has ⌊n/2⌋ elements and the other has ⌈n/2⌉ − 1 elements.
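The corresponding recurrence (added for clarity) is T(n) = 2T(n/2) + Θ(n), which solves to Θ(n log n) by case 2 of the master theorem.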
Average Case Complexity [Big-theta]: Θ(n log n)
First, let's imagine that we don't always get evenly balanced partitions, but that we always get at worst a 3-to-1 split.
That is, imagine that each time we partition, one side gets 3n/4 elements and the other side gets n/4.
(To keep the math clean, let's not worry about the pivot.)
Now consider the tree of subproblem sizes and the partitioning time spent at each of its levels, worked out below.
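A worked version of this argument (added here, since the original recursion-tree figure is not reproduced): the 3-to-1 split gives the recurrence

T(n) = T(n/4) + T(3n/4) + cn.

Each level of the recursion tree performs at most cn total partitioning work, and the longest root-to-leaf path shrinks n by a factor of 4/3 per level, so there are at most log_(4/3) n = Θ(log n) levels. The total is therefore at most cn · log_(4/3) n = O(n log n), and a truly random pivot gives the same Θ(n log n) bound in expectation.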
Advantages
It is in-place since it uses only a small auxiliary stack.
It requires only O(n log n) time, on average, to sort n items.
It has an extremely short inner loop.
The algorithm has been subjected to thorough mathematical analysis, so very precise statements can be made about its performance.
Disadvantages
It is recursive; where recursion is not available, the implementation becomes extremely complicated.
It requires quadratic (i.e., n²) time in the worst case.
It is fragile, i.e., a simple mistake in the implementation can go unnoticed and cause it to perform badly.
Strassen’s Matrix Multiplication
Given two square matrices A and B of size n × n each, compute their product matrix C = A × B.
Naive Method
Following is a simple way to multiply two matrices.
#define N 4   /* matrix dimension (example size; any fixed size works) */

/* C = A x B for N x N matrices: three nested loops, O(N^3) time. */
void multiply(int A[][N], int B[][N], int C[][N])
{
    for (int i = 0; i < N; i++)
    {
        for (int j = 0; j < N; j++)
        {
            C[i][j] = 0;
            for (int k = 0; k < N; k++)
            {
                C[i][j] += A[i][k] * B[k][j];
            }
        }
    }
}
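The naive method performs N multiplications for each of the N² entries of C, so it runs in Θ(N³) time. Strassen's idea (summarized here, since the slides stop before the details) is to multiply two 2 × 2 block matrices with 7 block multiplications instead of 8, giving the recurrence T(n) = 7T(n/2) + Θ(n²), which solves to Θ(n^(log₂ 7)) ≈ Θ(n^2.81).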