Unit 1
Stacks and queues are dynamic sets in which the element removed from the set by
the DELETE operation is prespecified. In a stack, the element deleted from the set is the
one most recently inserted: the stack implements a last-in, first-out, or LIFO, policy.
Similarly, in a queue , the element deleted is always the one that has been in the set
for the longest time: the queue implements a first-in, first-out, or FIFO, policy. There
are several efficient ways to implement stacks and queues on a computer. In this
section we show how to use a simple array to implement each.
1.1 Stacks
The INSERT operation on a stack is often called PUSH, and the DELETE operation, which
does not take an element argument, is often called POP. These names are allusions to
physical stacks, such as the spring-loaded stacks of plates used in cafeterias. The order
in which plates are popped from the stack is the reverse of the order in which they
were pushed onto the stack, since only the top plate is accessible.
When top[S] = 0, the stack contains no elements and is empty. The stack can be tested
for emptiness by the query operation STACK-EMPTY. If an empty stack is popped, we
say the stack underflows, which is normally an error. If top[S] exceeds n, the
stack overflows. (In our pseudocode implementation, we don't worry about stack
overflow.)
1.2Queues
We call the INSERT operation on a queue ENQUEUE, and we call the
DELETE operation DEQUEUE; like the stack operation POP, DEQUEUE takes no element
argument. The FIFO property of a queue causes it to operate like a line of people in
the registrar's office. The queue has a head and a tail. When an element is enqueued,
it takes its place at the tail of the queue, like a newly arriving student takes a place at
the end of the line. The element dequeued is always the one at the head of the queue,
like the student at the head of the line who has waited the longest. (Fortunately, we
don't have to worry about computational elements cutting into line.)
In the table above, 4 (the lowest priority number) has the highest priority, and 45 (the highest priority number) has the lowest, because in this representation a lower priority number means a higher priority.
Representation of priority queue
Now, we will see how to represent the priority queue through a one-way list.
We will create the priority queue by using the list given below in which INFO list contains the
data elements, PRN list contains the priority numbers of each data element available in
the INFO list, and LINK basically contains the address of the next node.
In the case of priority queue, lower priority number is considered the higher priority,
i.e., lower priority number = higher priority.
Step 1: In the list, the lowest priority number is 1, whose data value is 333, so it is inserted
first, as shown in the diagram below:
Step 2: After inserting 333, priority number 2 is the next highest priority, and the data values
associated with this priority are 222 and 111. Data with equal priority is inserted on the FIFO
principle, so 222 is added first and then 111.
Step 3: After inserting the elements of priority 2, the next highest priority number is 4, and the data
elements associated with priority number 4 are 444, 555, and 777. Again the elements are
inserted on the FIFO principle, so 444 is added first, then 555, and then 777.
Step 4: After inserting the elements of priority 4, the next higher priority number is 5, and the
value associated with priority 5 is 666, so it will be inserted at the end of the queue.
The methods for representing lists given in the previous section extend to any
homogeneous data structure. In this section, we look specifically at the problem of
representing rooted trees by linked data structures. We first look at binary trees, and
then we present a method for rooted trees in which nodes can have an arbitrary
number of children.
We represent each node of a tree by an object. As with linked lists, we assume that
each node contains a key field. The remaining fields of interest are pointers to other
nodes, and they vary according to the type of tree.
Binary trees
As shown in Figure 11.9, we use the fields p, left, and right to store pointers to the
parent, left child, and right child of each node in a binary tree T. If p[x] = NIL, then x is
the root. If node x has no left child, then left[x] = NIL, and similarly for the right child.
The root of the entire tree T is pointed to by the attribute root[T]. If root [T] = NIL,
then the tree is empty.
Figure 11.9 The representation of a binary tree T. Each node x has the fields p[x]
(top), left[x] (lower left), and right[x] (lower right). The key fields are not shown.
Fortunately, there is a clever scheme for using binary trees to represent trees with
arbitrary numbers of children. It has the advantage of using only O(n) space for any n-
node rooted tree. The left-child, right-sibling representation is shown in Figure
11.10. As before, each node contains a parent pointer p, and root[T] points to the root
of tree T. Instead of having a pointer to each of its children, however, each node x has
only two pointers:
If node x has no children, then left-child[x] = NIL, and if node x is the rightmost child
of its parent, then right-sibling[x] = NIL.
Basic Operations
The following are the basic primary operations of a graph −
Add Vertex − Adds a vertex to the graph.
Add Edge − Adds an edge between the two vertices of the graph.
Display Vertex − Displays a vertex of the graph.
2. What is an Algorithm?
Characteristics of an Algorithm
Not all procedures can be called an algorithm. An algorithm should have the following
characteristics −
Unambiguous − An algorithm should be clear and unambiguous. Each of its steps (or
phases), and their inputs/outputs, should be clear and must lead to only one meaning.
Input − An algorithm should have 0 or more well-defined inputs.
Output − An algorithm should have 1 or more well-defined outputs, and should match
the desired output.
Finiteness − Algorithms must terminate after a finite number of steps.
Feasibility − Should be feasible with the available resources.
Independent − An algorithm should have step-by-step directions, which should be
independent of any programming code.
3. ALGORITHM SPECIFICATION
4. PERFORMANCE ANALYSIS
In this chapter, we will discuss the complexity of computational problems with respect to the
amount of space an algorithm requires.
Space complexity shares many of the features of time complexity and serves as a further way of
classifying problems according to their computational difficulties.
4.1 What are the Different Types of Time Complexity Notation Used?
As we have seen, time complexity is given as a function of the length of the input:
there is a relation between the input data size (n) and the number of operations
performed (N) with respect to time. This relation is called the order of growth of the time
complexity and is written O(n), where O denotes the order of growth and n is the length of the
input. It is also called 'Big-O notation'.
Big-O notation expresses the running time of an algorithm in terms of how quickly it grows
relative to the input size n, by counting the number of operations performed on it. The
time complexity of an algorithm is then the combination of the O(·) costs assigned to each
line of the function.
There are different types of time complexities; the most common are:
Constant time − O(1)
Logarithmic time − O(log n)
Linear time − O(n)
Quadratic time − O(n²)
Many more notations, such as exponential time, quasilinear time, factorial time,
etc., are used depending on the type of function involved.
4.2 Space Complexity
Space complexity is a function describing the amount of memory (space) an algorithm takes in
terms of the amount of input to the algorithm.
We often speak of extra memory needed, not counting the memory needed to store the input
itself. Again, we use natural (but fixed-length) units to measure this.
We can use bytes, but it's easier to use, say, the number of integers used, the number of fixed-
sized structures, etc.
In the end, the function we come up with will be independent of the actual number of bytes
needed to represent the unit.
Space complexity is sometimes ignored because the space used is minimal and/or obvious;
however, sometimes it becomes as important an issue as time complexity.
Important Note: the best algorithm/program should have the least space
complexity. The less space used, the faster it executes.
Example #1
#include <stdio.h>

int main()
{
    int a = 5, b = 5, c;  /* three integer variables */
    c = a + b;            /* c = 10 */
    printf("%d", c);
    return 0;
}
In the above program, 3 integer variables are used. The size of the integer
data type is 2 or 4 bytes, depending on the compiler. Let's assume
the size is 4 bytes. So the total space occupied by the above-given program
is 4 * 3 = 12 bytes. Since no additional variables are used, no extra space is
required.
The following 3 asymptotic notations are mostly used to represent the time complexity
of algorithms:
1. Big-O Notation (O-notation)
2. Omega Notation (Ω-notation)
3. Theta Notation (Θ-notation)
1) Θ Notation:
Theta notation encloses the function from above and below. Since it represents the
upper and the lower bound of the running time of an algorithm, it is used for analyzing
the average-case complexity of an algorithm.
Let f and g be functions from the set of natural numbers to itself. The function f is
said to be Θ(g) if there are constants c1, c2 > 0 and a natural number n0 such that
c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0.
Figure: Theta notation.
2) Big O Notation:
Big-O notation represents the upper bound of the running time of an algorithm.
Therefore, it gives the worst-case complexity of an algorithm.
If f(n) describes the running time of an algorithm, then f(n) is O(g(n)) if there exist positive
constants c and n0 such that 0 ≤ f(n) ≤ c * g(n) for all n ≥ n0.