
ADA Unit 3

Brute Force Approach and its pros and cons


● It is an intuitive, direct, and straightforward problem-solving technique in which all the possible solutions to a given problem are enumerated and checked.


● Many day-to-day problems are solved using the brute force strategy; for example, exploring all the paths to a nearby market to find the shortest one.

● Arranging the books in a rack by trying all the possibilities to optimize the rack space, etc.

● In fact, many daily-life activities are brute force in nature, even though optimal algorithms are also possible.

PROS AND CONS OF BRUTE FORCE ALGORITHM:

Pros:

● The brute force approach is a guaranteed way to find the correct

solution by listing all the possible candidate solutions for the

problem.

● It is a generic method and not limited to any specific domain of

problems.

● The brute force method is ideal for solving small and simpler

problems.

● It is known for its simplicity and can serve as a comparison

benchmark.

Cons:

● The brute force approach is inefficient. For real-world problem sizes, the order of growth can reach O(N!) or beyond.


● This method relies more on the raw computing power of the system than on good algorithm design.

● Brute force algorithms are slow.

● Brute force algorithms are not constructive or creative compared to algorithms built with other design paradigms.

Selection Sort

Selection sort is a simple and intuitive sorting algorithm that works by repeatedly selecting the smallest (or largest) element from the unsorted portion of the list and moving it to the sorted portion of the list.

The algorithm repeatedly selects the smallest (or largest) element from the

unsorted portion of the list and swaps it with the first element of the unsorted

part. This process is repeated for the remaining unsorted portion until the entire

list is sorted.

How does Selection Sort Algorithm work?

Let's consider the following array as an example: arr[] = {64, 25, 12, 22, 11}

First pass:

● For the first position in the sorted array, the whole array is traversed from index 0 to 4 sequentially. The first position currently holds 64; after traversing the whole array, it is clear that 11 is the lowest value.

● Thus, swap 64 with 11. After one iteration, 11, which happens to be the least value in the array, appears in the first position of the sorted list.

Selection Sort Algorithm | Swapping 1st element with the minimum in array

Second Pass:

● For the second position, where 25 is present, again traverse the rest of

the array in a sequential manner.

● After traversing, we find that 12 is the second lowest value in the array and should appear at the second place; thus, swap these values.
Selection Sort Algorithm | swapping i=1 with the next minimum element

Third Pass:

● Now, for the third place, where 25 is present, again traverse the rest of the array and find the third least value present in the array.

● While traversing, 22 came out to be the third least value, and it should appear at the third place; thus, swap 22 with the element present at the third position.

Selection Sort Algorithm | swapping i=2 with the next minimum element
Fourth pass:

● Similarly, for the fourth position, traverse the rest of the array and find the fourth least element.

● As 25 is the 4th lowest value, it will be placed at the fourth position.

Selection Sort Algorithm | swapping i=3 with the next minimum element

Fifth Pass:

● At last, the largest value present in the array is automatically placed at the last position.

● The resulting array is the sorted array.

Selection Sort Algorithm | Required sorted array


Complexity Analysis of Selection Sort

Time Complexity: The time complexity of Selection Sort is O(N²) as there are two nested loops:

● One loop to select an element of the array one by one = O(N)

● Another loop to compare that element with every other array element = O(N)

● Therefore, overall complexity = O(N) * O(N) = O(N²)

Auxiliary Space: O(1) as the only extra memory used is for temporary variables

while swapping two values in Array. The selection sort never makes more than

O(N) swaps and can be useful when memory writing is costly.

Advantages of Selection Sort Algorithm

● Simple and easy to understand.

● Works well with small datasets.

Disadvantages of the Selection Sort Algorithm

● Selection sort has a time complexity of O(N²) in the worst and average case.

● Does not work well on large datasets.


● Does not preserve the relative order of items with equal keys which

means it is not stable.

BUBBLE SORT

Bubble Sort is the simplest sorting algorithm that works by repeatedly

swapping the adjacent elements if they are in the wrong order. This algorithm is

not suitable for large data sets as its average and worst-case time complexity is

quite high.

Bubble Sort Algorithm

In this algorithm,

● Traverse from the left, comparing adjacent elements; the larger one is placed on the right side.

● In this way, the largest element is moved to the rightmost end first.

● This process is then continued to find the second largest and place it, and so on, until the data is sorted.

How does Bubble Sort Work?


Let us understand the working of bubble sort with the help of the following

illustration:

Input: arr[] = {6, 3, 0, 5}

First Pass:

The largest element is placed in its correct position, i.e., the end of the array.

Bubble Sort Algorithm : Placing the largest element at correct position

Second Pass:

Place the second largest element at correct position


Bubble Sort Algorithm : Placing the second largest element at correct position

Third Pass:

Place the remaining two elements at their correct positions.

Bubble Sort Algorithm : Placing the remaining elements at their correct positions

● Total no. of passes: n-1

● Total no. of comparisons: n*(n-1)/2


Complexity Analysis of Bubble Sort:
Time Complexity: O(N²)

Auxiliary Space: O(1)

Advantages of Bubble Sort:


● Bubble sort is easy to understand and implement.

● It does not require any additional memory space.

● It is a stable sorting algorithm, meaning that elements with the same

key value maintain their relative order in the sorted output.

Disadvantages of Bubble Sort:


● Bubble sort has a time complexity of O(N²), which makes it very slow for large data sets.

● Bubble sort is a comparison-based sorting algorithm, which means

that it requires a comparison operator to determine the relative order

of elements in the input data set. It can limit the efficiency of the

algorithm in certain cases.


SEQUENTIAL SEARCH/ Linear Search
Algorithm

Linear Search is defined as a sequential search algorithm that starts at one end

and goes through each element of a list until the desired element is found,

otherwise the search continues till the end of the data set.

Linear Search Algorithm

How Does Linear Search Algorithm Work?

In Linear Search Algorithm,

● Every element is considered as a potential match for the key and


checked for the same.
● If any element is found equal to the key, the search is successful and
the index of that element is returned.
● If no element is found equal to the key, the search yields “No match
found”.
For example: Consider the array arr[] = {10, 50, 30, 70, 80, 20, 90, 40} and key =

30

Step 1: Start from the first element (index 0) and compare key with each element

(arr[i]).

● Comparing key with the first element arr[0]. Since they are not equal, the iterator moves to the next element as a potential match.

Compare key with arr[0]

● Comparing key with the next element arr[1]. Since they are not equal, the iterator moves to the next element as a potential match.
Compare key with arr[1]

Step 2: Now when comparing arr[2] with key, the value matches. So the Linear

Search Algorithm will yield a successful message and return the index of the

element when key is found (here 2).


// C code to linearly search x in arr[].

#include <stdio.h>

int search(int arr[], int N, int x)


{
for (int i = 0; i < N; i++)
if (arr[i] == x)
return i;
return -1;
}

// Driver code
int main(void)
{
int arr[] = { 2, 3, 4, 10, 40 };
int x = 10;
int N = sizeof(arr) / sizeof(arr[0]);

// Function call
int result = search(arr, N, x);
(result == -1)
? printf("Element is not present in array")
: printf("Element is present at index %d", result);
return 0;
}

Complexity Analysis of Linear Search:

Time Complexity:

● Best Case: In the best case, the key might be present at the first index.
So the best case complexity is O(1)
● Worst Case: In the worst case, the key might be present at the last
index i.e., opposite to the end from which the search has started in the
list. So the worst-case complexity is O(N) where N is the size of the list.
● Average Case: O(N)

Auxiliary Space: O(1) as except for the variable to iterate through the list, no

other variable is used.

Advantages of Linear Search:

● Linear search can be used irrespective of whether the array is sorted or


not. It can be used on arrays of any data type.
● Does not require any additional memory.
● It is a well-suited algorithm for small datasets.

Drawbacks of Linear Search:

● Linear search has a time complexity of O(N), which in turn makes it


slow for large datasets.
● Not suitable for large arrays.

Exhaustive Search Algorithm:


Exhaustive Search is a brute-force algorithm that systematically enumerates all
possible solutions to a problem and checks each one to see if it is a valid
solution. This algorithm is typically used for problems that have a small and
well-defined search space, where it is feasible to check all possible solutions.
Travelling Salesman Problem using Greedy Approach (tutorialspoint.com)

The travelling salesman problem is a graph computational problem where the salesman needs to visit all cities (represented as nodes in a graph) in a list just once, and the distances (represented as edges in the graph) between all these cities are known. The solution to be found is the shortest possible route in which the salesman visits all the cities and returns to the origin city.

If you look at the graph below, considering that the salesman starts from the
vertex ‘a’, they need to travel through all the remaining vertices b, c, d, e, f and
get back to ‘a’ while making sure that the cost taken is minimum.

There are various approaches to find the solution to the travelling salesman
problem: naïve approach, greedy approach, dynamic programming approach,
etc. In this tutorial we will be learning about solving travelling salesman
problem using greedy approach.
Travelling Salesperson Algorithm
As the definition for greedy approach states, we need to find the best optimal
solution locally to figure out the global optimal solution. The inputs taken by the
algorithm are the graph G {V, E}, where V is the set of vertices and E is the set of
edges. The shortest path of graph G starting from one vertex returning to the
same vertex is obtained as the output.

Algorithm

● The travelling salesman problem takes a graph G {V, E} as input and declares another graph as the output (say G’), which will record the path the salesman takes from one node to another.

● The algorithm begins by sorting all the edges in the input graph G from the least distance to the largest distance.

● The first edge selected is the edge with the least distance, with one of its two vertices (say A and B) being the origin node (say A).

● Then, among the adjacent edges of the node other than the origin node (B), find the least cost edge and add it to the output graph.

● Continue the process with further nodes, making sure there are no cycles in the output graph and that the path reaches back to the origin node A.

● However, if the origin is mentioned in the given problem, then the solution must always start from that node. Let us look at some example problems to understand this better.

Examples

Consider the following graph with six cities and the distances between them −
From the given graph, since the origin is already mentioned, the solution must
always start from that node. Among the edges leading from A, A → B has the
shortest distance.

Then, B → C is the shortest (and only) edge from B, therefore it is included in the output graph.

There is only one edge between C → D, therefore it is added to the output graph.

There are two outward edges from D. Even though D → B has a lower distance than D → E, B has already been visited and adding it would form a cycle in the output graph. Therefore, D → E is added to the output graph.

There is only one edge from E, that is E → F. Therefore, it is added to the output graph.

Again, even though F → C has a lower distance than F → A, C has already been visited and adding F → C would form a cycle, so F → A is added to the output graph.

The shortest path that originates and ends at A is A → B → C → D → E → F → A.

The cost of the path is: 16 + 21 + 12 + 15 + 16 + 34 = 114.

The cost of the path could be lower if the tour originated from another node, but the problem is not posed that way.

Example

The complete implementation of the Travelling Salesman Problem using the greedy approach produces the following output −

Output

Shortest Path: 1 5 4 3 2 1

Minimum Cost: 99
DFS traversal of a Tree


Given a binary tree, traverse it using DFS with recursion.

Unlike linear data structures (Array, Linked List, Queues, Stacks, etc) which have

only one logical way to traverse them, trees can be traversed in different ways.

Generally, there are 2 widely used ways for traversing trees:

● DFS or Depth-First Search

● BFS or Breadth-First Search

What is a Depth-first search?

DFS (Depth-first search) is a technique used for traversing trees or graphs, in which backtracking is used. The traversal first visits the deepest node and then backtracks to its parent node when no unvisited sibling of that node exists.

DFS Traversal of a Graph vs Tree:

In a graph, there might be cycles and disconnected components. Unlike a graph, a tree contains no cycles and is always connected, so DFS of a tree is relatively easier. We can simply begin from a node and traverse its adjacent nodes (or children) without worrying about cycles. If we begin from a single node (the root) and traverse this way, we are guaranteed to traverse the whole tree, as there is no disconnectivity.

Examples:

Input Tree:

Therefore, the Depth First Traversals of this Tree will be:

1. Inorder: 4 2 5 1 3

2. Preorder: 1 2 4 5 3

3. Postorder: 4 5 2 3 1

Below are the Tree traversals through DFS using recursion:

1. Inorder Traversal :

Follow the below steps to solve the problem:


● Traverse the left subtree, i.e., call Inorder(left-subtree)

● Visit the root

● Traverse the right subtree, i.e., call Inorder(right-subtree)

2. Preorder Traversal :

Follow the below steps to solve the problem:

● Visit the root

● Traverse the left subtree, i.e., call Preorder(left-subtree)

● Traverse the right subtree, i.e., call Preorder(right-subtree)

3. Postorder Traversal:

Follow the below steps to solve the problem:

● Traverse the left subtree, i.e., call Postorder(left-subtree)

● Traverse the right subtree, i.e., call Postorder(right-subtree)

● Visit the root

DFS for Tree

Time Complexity: O(N)

Auxiliary Space: O(h), where h is the height of the tree (O(log N) for a balanced tree, O(N) for a skewed one).

Uses of Inorder traversal:


In the case of binary search trees (BST), Inorder traversal gives the nodes in non-decreasing order. To get the nodes of a BST in non-increasing order, reverse Inorder traversal can be used.

Uses of Preorder:

Preorder traversal is used to create a copy of the tree. Preorder traversal is also

used to get prefix expressions of an expression tree.

Uses of Postorder:

Postorder traversal is used to delete the tree, since each node must be freed only after both its subtrees. Postorder traversal is also useful for getting the postfix expression of an expression tree.

// C program for different tree traversals

#include <stdio.h>
#include <stdlib.h>

/* A binary tree node has data, a pointer to the left child
and a pointer to the right child */
struct node {
    int data;
    struct node* left;
    struct node* right;
};

/* Helper function that allocates a new node with the
given data and NULL left and right pointers. */
struct node* newNode(int data)
{
    struct node* node
        = (struct node*)malloc(sizeof(struct node));
    node->data = data;
    node->left = NULL;
    node->right = NULL;
    return (node);
}

/* Given a binary tree, print its nodes according to the
"bottom-up" postorder traversal. */
void printPostorder(struct node* node)
{
    if (node == NULL)
        return;

    // first recur on left subtree
    printPostorder(node->left);

    // then recur on right subtree
    printPostorder(node->right);

    // now deal with the node
    printf("%d ", node->data);
}

/* Given a binary tree, print its nodes in inorder */
void printInorder(struct node* node)
{
    if (node == NULL)
        return;

    /* first recur on left child */
    printInorder(node->left);

    /* then print the data of node */
    printf("%d ", node->data);

    /* now recur on right child */
    printInorder(node->right);
}

/* Given a binary tree, print its nodes in preorder */
void printPreorder(struct node* node)
{
    if (node == NULL)
        return;

    /* first print data of node */
    printf("%d ", node->data);

    /* then recur on left subtree */
    printPreorder(node->left);

    /* now recur on right subtree */
    printPreorder(node->right);
}

/* Driver code */
int main()
{
    struct node* root = newNode(1);
    root->left = newNode(2);
    root->right = newNode(3);
    root->left->left = newNode(4);
    root->left->right = newNode(5);

    printf("\nPreorder traversal of binary tree is \n");
    printPreorder(root);

    printf("\nInorder traversal of binary tree is \n");
    printInorder(root);

    printf("\nPostorder traversal of binary tree is \n");
    printPostorder(root);

    return 0;
}

Preorder traversal of binary tree is

1 2 4 5 3

Inorder traversal of binary tree is

4 2 5 1 3

Postorder traversal of binary tree is

4 5 2 3 1

Level Order Traversal (Breadth First


Search or BFS) of Binary Tree

How does Level Order Traversal work?


The main idea of level order traversal is to traverse all the nodes of a lower level

before moving to any of the nodes of a higher level. This can be done in any of

the following ways:

● the naive one (finding the height of the tree and traversing each level

and printing the nodes of that level)

● efficiently using a queue.


// Recursive C program for level
// order traversal of Binary Tree

#include <stdio.h>
#include <stdlib.h>

// A binary tree node has data,
// a pointer to the left child
// and a pointer to the right child
struct node {
    int data;
    struct node *left, *right;
};

// Function prototypes
void printCurrentLevel(struct node* root, int level);
int height(struct node* node);
struct node* newNode(int data);

// Function to print the level order traversal of a tree
void printLevelOrder(struct node* root)
{
    int h = height(root);
    int i;
    for (i = 1; i <= h; i++)
        printCurrentLevel(root, i);
}

// Print nodes at the current level
void printCurrentLevel(struct node* root, int level)
{
    if (root == NULL)
        return;
    if (level == 1)
        printf("%d ", root->data);
    else if (level > 1) {
        printCurrentLevel(root->left, level - 1);
        printCurrentLevel(root->right, level - 1);
    }
}

// Compute the "height" of a tree -- the number of
// nodes along the longest path from the root node
// down to the farthest leaf node
int height(struct node* node)
{
    if (node == NULL)
        return 0;
    else {
        // Compute the height of each subtree
        int lheight = height(node->left);
        int rheight = height(node->right);

        // Use the larger one
        if (lheight > rheight)
            return (lheight + 1);
        else
            return (rheight + 1);
    }
}

// Helper function that allocates a new node with the
// given data and NULL left and right pointers.
struct node* newNode(int data)
{
    struct node* node
        = (struct node*)malloc(sizeof(struct node));
    node->data = data;
    node->left = NULL;
    node->right = NULL;
    return (node);
}

// Driver program to test the above functions
int main()
{
    struct node* root = newNode(1);
    root->left = newNode(2);
    root->right = newNode(3);
    root->left->left = newNode(4);
    root->left->right = newNode(5);

    printf("Level Order traversal of binary tree is \n");
    printLevelOrder(root);

    return 0;
}

Output
Level Order traversal of binary tree is
1 2 3 4 5

Time Complexity: O(N²) in the worst case (a skewed tree), where N is the number of nodes, since each of the h levels is traversed from the root.

Auxiliary Space: O(1); if the recursion stack is considered, the space used is O(N).
