CSE DAA LAB MANUAL

DEPARTMENT OF

COMPUTER SCIENCE & ENGINEERING

LAB HANDBOOK

DESIGN AND ANALYSIS OF ALGORITHM LAB
(KCS-553)

Ambalika Institute of Management and Technology,


Lucknow
Name of Student: Asheesh Kumar
Roll No: 2203630139003    Branch: IT    Session: 2023-24
KCS-553 Design and Analysis of Algorithm Lab

Objective:

1. Program for Insertion Sort.


2. Program for Quick Sort.
3. Program for Merge Sort.
4. Program for Selection Sort.
5. Program for Heap Sort.
6. Program for Recursive Binary & Linear Search.
7. Knapsack Problem using Greedy Solution
8. Perform Travelling Salesman Problem
9. Find Minimum Spanning Tree using Kruskal’s Algorithm
10. Implement N Queen Problem using Backtracking
INDEX

SL NAME OF LAB EXPERIMENT DATE SIGN REMARK


1 Program for Insertion Sort
2 Program for Quick Sort
3 Program for Merge Sort
4 Program for Selection Sort
5 Program for Heap Sort
6 Program for Recursive Binary &
Linear Search
7 Knapsack Problem using Greedy
Solution
8 Perform Travelling Salesman Problem
9 Find Minimum Spanning Tree using
Kruskal’s Algorithm
10 Implement N Queen Problem using
Backtracking

Appendices
I. Institute Vision
II. Institute Mission
III. CSE Department Vision
IV. CSE Department Mission
V. CSE Department PEOs
VI. Program Outcomes (POs)

PRACTICAL-1
OBJECT– TO IMPLEMENT THE INSERTION SORT.

THEORY – Insertion sort is a simple sorting algorithm, a comparison


sort in which the sorted array (or list) is built one entry at a time. It is much less efficient
on large lists than more advanced algorithms such as quick sort, heap sort, or merge sort.

However, insertion sort provides several advantages:

 simple implementation; efficient for (quite) small data sets

 efficient for data sets that are already substantially sorted: the running time is
O(n + d), where d is the number of inversions

 more efficient in practice than most other simple quadratic (i.e., O(n²)) algorithms
such as selection sort or bubble sort: the average running time is n²/4, and the
running time is linear in the best case

 stable (i.e., does not change the relative order of elements with equal keys)

 in-place (i.e., only requires a constant amount O(1) of additional memory space)
 online (i.e., can sort a list as it receives it)

Class: Sorting algorithm


Data Structure: Array
Time Complexity: Θ(n²)
Space Complexity: O(n) total, O(1) auxiliary
Optimal: Not usually

Program –
#include<stdio.h>
#include<conio.h>
void main()
{
    int arr[5]={25,23,17,14,89};
    int i,j,k,temp;
    clrscr();
    printf("INSERTION SORT \n\n");
    printf("array before sorting :");
    for(i=0;i<5;i++)
        printf("%d\t",arr[i]);
    /* grow the sorted prefix arr[0..i-1] one element at a time */
    for(i=1;i<5;i++)
    {
        for(j=0;j<i;j++)
        {
            if(arr[j]>arr[i])
            {
                /* insert arr[i] at position j, shifting
                   arr[j..i-1] one place to the right */
                temp=arr[j];
                arr[j]=arr[i];
                for(k=i;k>j;k--)
                    arr[k]=arr[k-1];
                arr[k+1]=temp;
            }
        }
    }
    printf("\n\n array after sorting :");
    for(i=0;i<5;i++)
        printf("%d\t",arr[i]);
    getch();
}

OUTPUT –
INSERTION SORT
array before sorting : 25  23  17  14  89
array after sorting : 14  17  23  25  89

PRACTICAL-2

OBJECT– TO IMPLEMENT THE QUICK SORT.


THEORY – Quicksort is a well-known sorting algorithm developed
by C. A. R. Hoare that, on average, makes O(n log n) comparisons to sort
n items. However, in the worst case, it makes O(n²) comparisons. Typically, quicksort is
significantly faster in practice than other O(n log n) algorithms, because its inner loop can
be efficiently implemented on most architectures, and for most real-world data it is
possible to make design choices which minimize the probability of requiring quadratic
time.

Quicksort is a comparison sort and, in efficient implementations, is not a stable sort.

Quicksort sorts by employing a divide and conquer strategy to divide a list into two sub-
lists.

The steps are:

 Pick an element, called a pivot, from the list.


 Reorder the list so that all elements which are less than the pivot come before the
pivot and so that all elements greater than the pivot come after it (equal values can
go either way). After this partitioning, the pivot is in its final position. This is
called the partition operation.
 Recursively sort the list of lesser elements and the list of greater elements in
sequence.

The base cases of the recursion are lists of size zero or one, which are always sorted.

Program –
#include<stdio.h>
#include<conio.h>
void quicksort(int,int);
int partition(int,int);
int arr[10]={11,2,9,13,57,25,17,1,90,3};
void main()
{
    int i;
    clrscr();
    printf("QUICK SORT\n");
    printf("array before sorting :");
    for(i=0;i<10;i++)
        printf("%d ",arr[i]);
    quicksort(0,9);
    printf("\narray after sorting :");
    for(i=0;i<10;i++)
        printf("%d ",arr[i]);
    getch();
}
void quicksort(int p,int q)
{
    int i;
    if(q>p)
    {
        i=partition(p,q);   /* pivot lands at its final index i */
        quicksort(p,i-1);
        quicksort(i+1,q);
    }
}
/* Hoare-style partition with arr[p] as pivot: scan i up past elements
   <= pivot and j down past elements > pivot, swapping out-of-place
   pairs; finally swap the pivot into position j. */
int partition(int p,int q)
{
    int pivot=arr[p],i=p,j=q,t;
    while(i<j)
    {
        while(i<q && arr[i]<=pivot)
            i++;
        while(arr[j]>pivot)
            j--;
        if(i<j)
        {
            t=arr[i]; arr[i]=arr[j]; arr[j]=t;
        }
    }
    t=arr[p]; arr[p]=arr[j]; arr[j]=t;
    return j;
}

OUTPUT – QUICK SORT

Array before sorting : 11 2 9 13 57 25 17 1 90 3
Array after sorting : 1 2 3 9 11 13 17 25 57 90

PRACTICAL-3

OBJECT– TO IMPLEMENT THE MERGE SORT


THEORY : Like QuickSort, Merge Sort is a Divide and Conquer algorithm. It divides
input array in two halves, calls itself for the two halves and then merges the two sorted
halves. The merge() function is used for merging two halves. The merge(arr, l, m, r) is
key process that assumes that arr[l..m] and arr[m+1..r] are sorted and merges the two
sorted sub-arrays into one. See following C implementation for details.
MergeSort(arr[], l, r)
If r > l
1. Find the middle point to divide the array into two halves:
middle m = (l+r)/2
2. Call mergeSort for first half:
Call mergeSort(arr, l, m)
3. Call mergeSort for second half:
Call mergeSort(arr, m+1, r)
4. Merge the two halves sorted in step 2 and 3:
Call merge(arr, l, m, r)

Time Complexity: Merge Sort is a recursive algorithm
and its time complexity can be expressed by the following recurrence relation:
T(n) = 2T(n/2) + Θ(n)
The above recurrence can be solved using either the Recurrence Tree method or the Master method.
It falls in case II of the Master method, and the solution of the recurrence is Θ(n log n).
The time complexity of Merge Sort is Θ(n log n) in all 3 cases (worst, average and best), as
merge sort always divides the array in two halves and takes linear time to merge the two halves.
Auxiliary Space: O(n)
Algorithmic Paradigm: Divide and Conquer
Sorting In Place: No in a typical implementation
Stable: Yes
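The recurrence can also be unrolled by hand to see where the Θ(n log n) bound comes from, writing the linear merge cost as cn for some constant c:

    T(n) = 2T(n/2) + cn
         = 4T(n/4) + 2cn
         = 8T(n/8) + 3cn
         ...
         = 2^k T(n/2^k) + k·cn

The expansion bottoms out when n/2^k = 1, i.e. k = log₂ n, giving T(n) = nT(1) + cn·log₂ n = Θ(n log n).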

Program
#include<stdlib.h>
#include<stdio.h>
// Merges two subarrays of arr[].
// First subarray is arr[l..m]
// Second subarray is arr[m+1..r]
void merge(int arr[], int l, int m, int r)
{
int i, j, k;
int n1 = m - l + 1;
int n2 = r - m;

/* create temp arrays */


int L[n1], R[n2];

/* Copy data to temp arrays L[] and R[] */


for (i = 0; i < n1; i++)
L[i] = arr[l + i];
for (j = 0; j < n2; j++)
R[j] = arr[m + 1+ j];

/* Merge the temp arrays back into arr[l..r]*/


i = 0; // Initial index of first subarray
j = 0; // Initial index of second subarray
k = l; // Initial index of merged subarray
while (i < n1 && j < n2)
{
if (L[i] <= R[j])
{
arr[k] = L[i];
i++;
}
else
{
arr[k] = R[j];
j++;
}
k++;
}
/* Copy the remaining elements of L[], if there
are any */
while (i < n1)
{
arr[k] = L[i];
i++;
k++;
}

/* Copy the remaining elements of R[], if there


are any */
while (j < n2)
{
arr[k] = R[j];
j++;
k++;
}
}

/* l is for left index and r is right index of the


sub-array of arr to be sorted */
void mergeSort(int arr[], int l, int r)
{
if (l < r)
{
// Same as (l+r)/2, but avoids overflow for
// large l and h
int m = l+(r-l)/2;

// Sort first and second halves


mergeSort(arr, l, m);
mergeSort(arr, m+1, r);

merge(arr, l, m, r);
}
}

/* UTILITY FUNCTIONS */
/* Function to print an array */
void printArray(int A[], int size)
{
int i;
for (i=0; i < size; i++)
printf("%d ", A[i]);
printf("\n");
}

/* Driver program to test above functions */


int main()
{
int arr[] = {12, 11, 13, 5, 6, 7};
int arr_size = sizeof(arr)/sizeof(arr[0]);

printf("Given array is \n");


printArray(arr, arr_size);

mergeSort(arr, 0, arr_size - 1);

printf("\nSorted array is \n");


printArray(arr, arr_size);
return 0;
}

Output:
Given array is
12 11 13 5 6 7

Sorted array is
5 6 7 11 12 13

PRACTICAL-4

OBJECT– TO IMPLEMENT THE SELECTION SORT


THEORY :
The selection sort algorithm sorts an array by repeatedly finding the minimum element
(considering ascending order) from unsorted part and putting it at the beginning. The
algorithm maintains two subarrays in a given array.

1) The subarray which is already sorted.


2) Remaining subarray which is unsorted.
In every iteration of selection sort, the minimum element (considering ascending order) from
the unsorted subarray is picked and moved to the sorted subarray.
Following example explains the above steps:

arr[] = 64 25 12 22 11

// Find the minimum element in arr[0...4]

// and place it at beginning

11 25 12 22 64

// Find the minimum element in arr[1...4]


// and place it at beginning of arr[1...4]
11 12 25 22 64

// Find the minimum element in arr[2...4]


// and place it at beginning of arr[2...4]
11 12 22 25 64

// Find the minimum element in arr[3...4]


// and place it at beginning of arr[3...4]
11 12 22 25 64

Time Complexity: O(n²) as there are two nested loops.

Auxiliary Space: O(1)
The good thing about selection sort is that it never makes more than O(n) swaps and can be
useful when a memory write is a costly operation.
Stability: The default implementation is not stable. However, it can be made stable (see
stable selection sort for details).
In Place: Yes, it does not require extra space.

PROGRAM
#include <stdio.h>
void swap(int *xp, int *yp)
{
int temp = *xp;
*xp = *yp;
*yp = temp;
}
void selectionSort(int arr[], int n)
{
int i, j, min_idx;
// One by one move boundary of unsorted subarray
for (i = 0; i < n-1; i++)
{
// Find the minimum element in unsorted array
min_idx = i;
for (j = i+1; j < n; j++)
if (arr[j] < arr[min_idx])
min_idx = j;
// Swap the found minimum element with the first element
swap(&arr[min_idx], &arr[i]);
}
}
/* Function to print an array */
void printArray(int arr[], int size)
{
int i;
for (i=0; i < size; i++)
printf("%d ", arr[i]);
printf("\n");
}
// Driver program to test above functions
int main()
{
int arr[] = {64, 25, 12, 22, 11};
int n = sizeof(arr)/sizeof(arr[0]);
selectionSort(arr, n);
printf("Sorted array: \n");
printArray(arr, n);
return 0;
}

Output:
Sorted array:
11 12 22 25 64

PRACTICAL-5

OBJECT– TO IMPLEMENT THE HEAP SORT.

THEORY – Heapsort is a comparison-based sorting
algorithm, and is part of the selection sort family. Although somewhat slower in practice
on most machines than a good implementation of quicksort, it has the advantage of a
worst-case O(n log n) runtime. Heapsort is an in-place algorithm, but is not a stable sort.

Heapsort inserts the input list elements into a heap data structure. The largest value (in a
max-heap) or the smallest value (in a min-heap) are extracted until none remain, the
values having been extracted in sorted order. The heap's invariant is preserved after each
extraction, so the only cost is that of extraction.
During extraction, the only space required is that needed to store the heap. In order to
achieve constant space overhead, the heap is stored in the part of the input array that has
not yet been sorted.
(The structure of this heap is described at Binary heap:
Heap Implementation.)

Heapsort uses two heap operations: insertion and root deletion. Each extraction places an
element in the last empty location of the array. The remaining prefix of the array stores
the unsorted elements.

Program -
#include<stdio.h>
#include<conio.h>
void heapify(int arr[],int n,int i)
{
    /* restore the max-heap property at index i, assuming the
       subtrees rooted at its children 2i+1 and 2i+2 are heaps */
    int largest=i,l=2*i+1,r=2*i+2,temp;
    if(l<n && arr[l]>arr[largest])
        largest=l;
    if(r<n && arr[r]>arr[largest])
        largest=r;
    if(largest!=i)
    {
        temp=arr[i]; arr[i]=arr[largest]; arr[largest]=temp;
        heapify(arr,n,largest);
    }
}
void heapsort(int arr[],int n)
{
    int i,temp;
    /* build a max-heap bottom-up; last internal node is n/2 - 1 */
    for(i=n/2-1;i>=0;i--)
        heapify(arr,n,i);
    /* repeatedly move the root (maximum) to the end of the
       unsorted region and re-heapify the shrunken heap */
    for(i=n-1;i>0;i--)
    {
        temp=arr[0]; arr[0]=arr[i]; arr[i]=temp;
        heapify(arr,i,0);
    }
}
void main()
{
    int arr[10],i;
    clrscr();
    printf("HEAP SORT \n\n");
    printf("enter the array: ");
    for(i=0;i<10;i++)
        scanf("%d",&arr[i]);
    heapsort(arr,10);
    printf("the sorted array is :");
    for(i=0;i<10;i++)
        printf("%d\t",arr[i]);
    getch();
}
OUTPUT – Enter the array:
The sorted array is :

PRACTICAL-6

OBJECT– TO IMPLEMENT RECURSIVE BINARY & LINEAR SEARCH.

THEORY – Given a sorted array arr[] of n elements, write a function to search a given
element x in arr[].
A simple approach is to do a linear search. The time complexity of that algorithm is
O(n). Another approach to perform the same task is using Binary Search.

Binary Search: Search a sorted array by repeatedly dividing the search interval in half.
Begin with an interval covering the whole array. If the value of the search key is less
than the item in the middle of the interval, narrow the interval to the lower half.
Otherwise narrow it to the upper half. Repeatedly check until the value is found or the
interval is empty.

We basically ignore half of the elements just after one comparison.


1. Compare x with the middle element.
2. If x matches with middle element, we return the mid index.

3. Else If x is greater than the mid element, then x can only lie in right half subarray
after the mid element. So we recur for right half.

4. Else (x is smaller) recur for the left half.


Recursive implementation of Binary Search

PROGRAM:
#include <stdio.h>

// A recursive binary search function. It returns
// the location of x in the given array arr[l..r] if
// present, otherwise -1
int binarySearch(int arr[], int l, int r, int x)
{
if (r >= l)
{
int mid = l + (r - l)/2;
// If the element is present at the middle
// itself
if (arr[mid] == x)
return mid;

// If element is smaller than mid, then


// it can only be present in left subarray
if (arr[mid] > x)
return binarySearch(arr, l, mid-1, x);

// Else the element can only be present


// in right subarray
return binarySearch(arr, mid+1, r, x);
}

// We reach here when element is not


// present in array
return -1;
}

int main(void)
{
int arr[] = {2, 3, 4, 10, 40};
int n = sizeof(arr)/ sizeof(arr[0]);
int x = 10;
int result = binarySearch(arr, 0, n-1, x);
(result == -1)? printf("Element is not present in array")
: printf("Element is present at index %d",
result);
return 0;
}

Output :
Element is present at index 3

PRACTICAL-7

OBJECT– TO IMPLEMENT OF KNAPSACK PROBLEM USING GREEDY


SOLUTION.

THEORY – In Fractional Knapsack, we can break items for maximizing the total value of
knapsack. This problem in which we can break an item is also called the fractional knapsack
problem.
Input :

Knapsack capacity W = 50; items as (value, weight) pairs:
(60, 10), (100, 20), (120, 30)

Output :

Maximum possible value = 240

By taking the full items of 10 kg and 20 kg and
2/3rd of the last item of 30 kg

An efficient solution is to use the Greedy approach. The basic idea of the greedy approach is to
calculate the ratio value/weight for each item and sort the items on the basis of this ratio. Then
take the items with the highest ratio and add them until we can't add the next item as a whole,
and at the end add as much of the next item as we can. This always gives the optimal
solution to the fractional version of the problem.
A simple code with our own comparison function can be written as follows; please look at the
sort function closely: its third argument is our comparison function, which
sorts the items according to the value/weight ratio in non-increasing order.
After sorting we need to loop over these items and add them to our knapsack, satisfying the
above-mentioned criteria.
As the main time-taking step is sorting, the whole problem can be solved in O(n log n) only.

PROGRAM
// C++ program to solve fractional Knapsack Problem
#include <bits/stdc++.h>
using namespace std;
// Structure for an item which stores weight and corresponding
// value of Item
struct Item
{
int value, weight;

// Constructor
Item(int value, int weight) : value(value), weight(weight)
{}
};
// Comparison function to sort Item according to val/weight ratio
bool cmp(struct Item a, struct Item b)
{
double r1 = (double)a.value / a.weight;
double r2 = (double)b.value / b.weight;
return r1 > r2;
}
double fractionalKnapsack(int W, struct Item arr[], int n)
{
sort(arr, arr + n, cmp);
/* Uncomment to see the new order of Items with their ratio:
for (int i = 0; i < n; i++)
{
cout << arr[i].value << " " << arr[i].weight << " : "
<< ((double)arr[i].value / arr[i].weight) << endl;
}
*/
int curWeight = 0; // Current weight in knapsack
double finalvalue = 0.0; // Result (value in Knapsack)
for (int i = 0; i < n; i++)
{
if (curWeight + arr[i].weight <= W)
{
curWeight += arr[i].weight;
finalvalue += arr[i].value;
}
// If we can't add current Item, add fractional part of it
else
{
int remain = W - curWeight;
finalvalue += arr[i].value * ((double) remain / arr[i].weight);
break;
}
}
return finalvalue;
}

// driver program to test above function


int main()
{
int W = 50; // Weight of knapsack
Item arr[] = {{60, 10}, {100, 20}, {120, 30}};

int n = sizeof(arr) / sizeof(arr[0]);

cout << "Maximum value we can obtain = "


<< fractionalKnapsack(W, arr, n);
return 0;
}

Output :
Maximum value we can obtain = 240

PRACTICAL-8

OBJECT– PERFORM TRAVELLING SALESMAN PROBLEM .

THEORY – Travelling Salesman Problem (TSP): Given a set of cities and distance
between every pair of cities, the problem is to find the shortest possible route that visits
every city exactly once and returns back to the starting point.
Note the difference between the Hamiltonian Cycle problem and TSP. The Hamiltonian cycle
problem is to find whether there exists a tour that visits every city exactly once. Here we know
that a Hamiltonian tour exists (because the graph is complete), and in fact many such tours
exist; the problem is to find a minimum weight Hamiltonian cycle.
For example, consider the complete graph on four cities used in the program below. A TSP
tour in the graph is 1-2-4-3-1. The cost of the tour is 10+25+30+15, which is 80.
The problem is a famous NP-hard problem. There is no known polynomial-time solution
for this problem.
1. Consider city 1 as the starting and ending point. Since route is cyclic, we can
consider any point as starting point.
2. Generate all (n-1)! permutations of cities.
3. Calculate cost of every permutation and keep track of minimum cost permutation.
4. Return the permutation with minimum cost.
PROGRAM
// CPP program to implement traveling salesman
// problem using naive approach.
#include <bits/stdc++.h>
using namespace std;
#define V 4

// implementation of traveling Salesman Problem


int travllingSalesmanProblem(int graph[][V], int s)
{
// store all vertex apart from source vertex
vector<int> vertex;
for (int i = 0; i < V; i++)
if (i != s)
vertex.push_back(i);

// store minimum weight Hamiltonian Cycle.


int min_path = INT_MAX;
do {

// store current Path weight(cost)


int current_pathweight = 0;

// compute current path weight


int k = s;
for (int i = 0; i < vertex.size(); i++) {
current_pathweight += graph[k][vertex[i]];
k = vertex[i];
}
current_pathweight += graph[k][s];

// update minimum
min_path = min(min_path, current_pathweight);
} while (next_permutation(vertex.begin(), vertex.end()));

return min_path;
}

// driver program to test above function


int main()
{
// matrix representation of graph
int graph[][V] = { { 0, 10, 15, 20 },
{ 10, 0, 35, 25 },
{ 15, 35, 0, 30 },
{ 20, 25, 30, 0 } };
int s = 0;
cout << travllingSalesmanProblem(graph, s) << endl;
return 0;
}
Output:
80

PRACTICAL-9

OBJECT– FIND MINIMUM SPANNING TREE USING KRUSKAL’S


ALGORITHM.

THEORY – Kruskal's algorithm is an algorithm in graph theory that finds a


minimum spanning tree for a connected weighted graph. This means it finds a subset of
the edges that forms a tree that includes every vertex, where the total weight of all the
edges in the tree is minimized. If the graph is not connected, then it finds a minimum
spanning forest (a minimum spanning tree for each connected component). Kruskal's
algorithm is an example of a greedy algorithm.

It works as follows:

 create a forest F (a set of trees), where each vertex in the graph is a separate tree
 create a set S containing all the edges in the graph
 while S is nonempty
 remove an edge with minimum weight from S
 if that edge connects two different trees, then add it to the forest, combining two
trees into a single tree
 otherwise discard that edge
 At the termination of the algorithm, the forest has only one component and forms
a minimum spanning tree of the graph.

This algorithm first appeared in Proceedings of the American Mathematical Society, pp.
48–50 in 1956, and was written by Joseph Kruskal.
Other algorithms for this problem include Prim's algorithm, the Reverse-Delete algorithm,
and Borůvka's algorithm.

Where E is the number of edges in the graph and V is the number of vertices, Kruskal's
algorithm can be shown to run in O(E log E) time, or equivalently, O(E log V) time, all
with simple data structures. These running times are equivalent because:

E is at most V² and log V² = 2 log V is O(log V).

If we ignore isolated vertices, which will each be their own component of the minimum
spanning tree anyway, V ≤ E + 1, so log V is O(log E).

Program –
#include<stdio.h>
#include<conio.h>
#include<alloc.h>
struct lledge
{
int v1,v2;
float cost;
struct lledge *next;
};
int stree[5],count[5],mincost;
struct lledge *kminstree(struct lledge*,int);
int getrval(int);
void combine(int,int);
void del(struct lledge*);
void main()
{
struct lledge *temp,*root;
int i;
clrscr();
root=(struct lledge*)malloc(sizeof(struct lledge));
root->v1=4;
root->v2=3;
root->cost=1;
temp=root->next=(struct lledge*)malloc(sizeof(struct lledge));
temp->v1=4;
temp->v2=2;
temp->cost=2;
temp->next=(struct lledge*)malloc(sizeof(struct lledge));
temp=temp->next;
temp->v1=3;
temp->v2=2;
temp->cost=3;
temp->next=(struct lledge*)malloc(sizeof(struct lledge));
temp=temp->next;
temp->v1=4;
temp->v2=1;
temp->cost=4;
temp->next=NULL;
root=kminstree(root,5);
for(i=1;i<=4;i++)
printf("\n stree[%d]->%d",i,stree[i]);
printf("\n the minimum cost of spanning tree is %d ",mincost);
del(root);
getch();
}
/* builds the MST; assumes the edge list is already sorted by cost */
struct lledge* kminstree(struct lledge *root,int n)
{
struct lledge *temp=NULL;
struct lledge *p,*q;
int noofedges=0;
int i,p1,p2;

for(i=0;i<n;i++)
stree[i]=i;
for(i=0;i<n;i++)
count[i]=0;

while((noofedges<(n-1))&&(root!=NULL))
{
p=root;
root=root->next;

p1=getrval(p->v1);
p2=getrval(p->v2);

if(p1!=p2)
{
combine(p->v1,p->v2);
noofedges++;
mincost+=p->cost;
if(temp==NULL)
{
temp=p;
q=temp;
}
else
{
q->next=p;
q=q->next;
}
q->next=NULL;
}
}
return temp;
}
int getrval(int i)
{
int j,k,temp;
k=i;
while(stree[k]!=k)
k=stree[k];
j=i;
while(j!=k)
{
temp=stree[j];
stree[j]=k;
j=temp;
}
return k;
}
void combine(int i,int j)
{
if(count[i]<count[j])
stree[i]=j;
else
{
stree[j]=i;
if(count[i]==count[j])
count[j]++;
}
}
void del(struct lledge *root)
{
struct lledge *temp;
while(root!=NULL)
{
temp=root->next;
free(root);
root=temp;
}
}

OUTPUT –
stree[1]->4
stree[2]->4
stree[3]->4
stree[4]->4
the minimum cost of spanning tree is 7

PRACTICAL-10

OBJECT– TO IMPLEMENT N-QUEEN PROBLEM USING BACK TRACKING

THEORY – Knight's Tour and Rat in a Maze are other classic problems that can be solved
with this technique. Let us discuss N Queen as another example problem that can be
solved using Backtracking.
The N Queen is the problem of placing N chess queens on an N×N chessboard so that no two
queens attack each other. For example, following is a solution for 4 Queen problem.

The expected output is a binary matrix which has 1s for the blocks where queens are placed.
For example, following is the output matrix for above 4 queen solution.

{ 0, 1, 0, 0}

{ 0, 0, 0, 1}

{ 1, 0, 0, 0}

{ 0, 0, 1, 0}

The idea is to place queens one by one in different columns, starting from the leftmost
column. When we place a queen in a column, we check for clashes with already placed
queens. In the current column, if we find a row for which there is no clash, we mark this row
and column as part of the solution. If we do not find such a row due to clashes then we
backtrack and return false.

1) Start in the leftmost column


2) If all queens are placed
return true
3) Try all rows in the current column. Do the following for every tried row.
a) If the queen can be placed safely in this row then mark this [row, column] as part of the solution and
recursively check if placing the queen here leads to a solution.
b) If placing the queen in [row, column] leads to a solution then return true.
c) If placing the queen doesn't lead to a solution then unmark this [row, column] (Backtrack) and go to step (a) to
try other rows.
4) If all rows have been tried and nothing worked, return false to trigger backtracking.

PROGRAM
/* C/C++ program to solve N Queen Problem using backtracking */
#define N 4
#include<stdio.h>
#include<stdbool.h>

/* A utility function to print solution */


void printSolution(int board[N][N])
{
for (int i = 0; i < N; i++)
{
for (int j = 0; j < N; j++)
printf(" %d ", board[i][j]);
printf("\n");
}
}
/* A utility function to check if a queen can be placed on board[row][col].
Note that this function is called when "col" queens are already placed in
columns from 0 to col -1.
So we need to check only left side for attacking queens */
bool isSafe(int board[N][N], int row, int col)
{
int i, j;

/* Check this row on left side */


for (i = 0; i < col; i++)
if (board[row][i])
return false;
/* Check upper diagonal on left side */
for (i=row, j=col; i>=0 && j>=0; i--, j--)
if (board[i][j])
return false;
/* Check lower diagonal on left side */
for (i=row, j=col; j>=0 && i<N; i++, j--)
if (board[i][j])
return false;
return true;
}
/* A recursive utility function to solve N Queen problem */
bool solveNQUtil(int board[N][N], int col)
{
/* base case: If all queens are placed then return true */
if (col >= N)
return true;
/* Consider this column and try placing
this queen in all rows one by one */
for (int i = 0; i < N; i++)
{
/* Check if the queen can be placed on
board[i][col] */
if ( isSafe(board, i, col) )
{
/* Place this queen in board[i][col] */
board[i][col] = 1;
/* recur to place rest of the queens */
if ( solveNQUtil(board, col + 1) )
return true;

/* If placing queen in board[i][col] doesn't lead to


a solution, then remove queen from board[i][col] */
board[i][col] = 0; // BACKTRACK
}
}
/* If the queen cannot be placed in any row in this column col then
return false */
return false;
}
/* This function solves the N Queen problem using Backtracking. It mainly uses
solveNQUtil() to solve the problem. It returns false if queens cannot be
placed; otherwise, it returns true and prints the placement of queens in the
form of 1s. Please note that there may be more than one solution; this function
prints one of the feasible solutions. */

bool solveNQ()
{
int board[N][N] = { {0, 0, 0, 0},
{0, 0, 0, 0},
{0, 0, 0, 0},
{0, 0, 0, 0}
};
if ( solveNQUtil(board, 0) == false )
{
printf("Solution does not exist");
return false;
}
printSolution(board);
return true;
}
// driver program to test above function
int main()
{
solveNQ();
return 0;
}

Output:
 0  0  1  0
 1  0  0  0
 0  0  0  1
 0  1  0  0
AMBALIKA INSTITUTE OF MANAGEMENT & TECHNOLOGY, LUCKNOW
DEPARTMENT OF INFORMATION TECHNOLOGY

AIMT Vision (Institute)

To nourish the students, blossom them into tomorrow’s world class professionals and good human
beings by inculcating the qualities of sincerity, integrity and social ethics.

AIMT Mission (Institute)

1. To provide the finest infrastructure and an excellent environment for the academic growth of
the students, bridging the gap between academia and the demands of industry.

2. To expose students to various co-curricular activities to convert them into skilled professionals.

3. To groom every enthusiastic engineering and management student into a hard-working,
committed professional with the zeal to excel, upholding the values of devotion, concern and honesty.

4. To involve the students in extracurricular activities to make them responsible citizens.

CSE Department Vision

To nurture students into computer professionals having problem-solving skills and leadership
qualities, who foster research and innovative ideas while inculcating moral values and social concerns.

CSE Department Mission

1. To provide state of art facilities for high quality academic practices.


2. To focus on advancing the quality and impact of research for the betterment of society.
3. To nurture extra-curricular skills and ethical values in students to meet the challenges of building a
strong nation.

CSE Department PEOs

PEO1:- All the graduates will become high class software professionals who could be absorbed in the
software industry on the basis of sound academic and technical knowledge gained by them on account
of adopting state of the art academic practices.
PEO2:-All the graduates will demonstrate their talent in research and development activities involving
themselves in such researches which could alleviate the existing problem of the society.
PEO3:-All the graduates shall be committed for high moral and ethical standards in solving the societal
problems by means of their exposure to various co-curricular and extra-curricular activities.
PROGRAM OUTCOME

• PO 1 Engineering Knowledge: Apply the knowledge of mathematics, science, engineering


fundamentals, and an engineering specialization to the solution of complex engineering problems.
• PO 2 Problem Analysis: Identify, formulate, review research literature, and analyze complex
engineering problems reaching substantiated conclusions using first principles of mathematics, natural
sciences, and engineering sciences.
• PO 3 Design/development of solutions: Design solutions for complex engineering problems and
design system components or processes that meet the specified needs with appropriate consideration
for the public health and safety, and the cultural, societal, and environmental considerations.
• PO 4 Conduct investigations of complex problems: Use research-based knowledge and research
methods including design of experiments, analysis and interpretation of data, and synthesis of the
information to provide valid conclusions
• PO 5 Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools including prediction and modeling to complex engineering activities with an
understanding of the limitations.
• PO 6 The engineer and society: Apply reasoning informed by the contextual knowledge to assess
societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to the
professional engineering practice.
• PO 7 Environment and sustainability: Understand the impact of the professional engineering
solutions in societal and environmental contexts, and demonstrate the knowledge of, and need for
sustainable development.
• PO 8 Ethics: Apply ethical principles and commit to professional ethics and responsibilities and
norms of the engineering practice
• PO 9 Individual and team work: Function effectively as an individual, and as a member or leader in
diverse teams, and in multidisciplinary settings.
• PO 10 Communication: Communicate effectively on complex engineering activities with the
engineering community and with society at large, such as, being able to comprehend and write
effective reports and design documentation, make effective presentations, and give and receive clear
instructions.
• PO 11 Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one’s own work, as a member and leader in
a team, to manage projects and in multidisciplinary environments.
• PO 12 Life-long learning: Recognize the need for, and have the preparation and ability to engage in
independent and life-long learning in the broadest context of technological change.
