
Data structures and algorithms

Assignment – 1:
1) Big O:
Y(x) is a non-negative function defined over non-negative x values. We say Y(x) is Big-Oh
of f(x) if there are positive constants a and x0 such that the inequality Y(x) <= a·f(x) holds for all x >= x0.

Big Omega:

Y(x) is a non-negative function defined over non-negative x values. We say Y(x) is Big-
Omega of f(x) if there are positive constants a and x0 such that the inequality Y(x) >= a·f(x) holds for all x >= x0.

Small o:

f(x) = o(g(x)) (small-oh) means that the growth rate of f(x) is asymptotically strictly less than the
growth rate of g(x).

Small Omega:

f(x) = ω(g(x)) (small-omega) means that the growth rate of f(x) is asymptotically strictly greater than
the growth rate of g(x).

Theta:

We say a function Y(x) is Theta(f(x)) if it is both Big O(f(x)) and Big Omega(f(x)), i.e.
Y(x) = O(f(x)) and Y(x) = Ω(f(x)).
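
A short worked example of these definitions (illustrative, not part of the original assignment): take Y(x) = 3x^2 + 5x. Since 3x^2 + 5x <= 8x^2 for all x >= 1, we have Y(x) = O(x^2); and since 3x^2 + 5x >= 3x^2 for all x >= 0, we also have Y(x) = Ω(x^2). Together these give Y(x) = Θ(x^2).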

2) Using matplotlib:

import matplotlib.pyplot as plt
import numpy as np

# Plot 4n^(2.8) + 25 over n = 0..99
x1 = np.arange(0, 100)
y1 = 4 * (x1 ** 2.8) + 25
plt.plot(x1, y1, label="4n^(2.8)+25")

# Plot n^3 over the same range
x2 = np.arange(0, 100)
y2 = x2 ** 3
plt.plot(x2, y2, label="n^3")

plt.legend()
plt.xlabel("X")
plt.ylabel("Y")
plt.show()
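
Over the plotted range the curve 4n^(2.8)+25 stays above n^3: ignoring the constant 25, the two cross only where n^0.2 = 4, i.e. around n = 4^5 = 1024. Asymptotically, however, n^3 dominates, since 2.8 < 3 means 4n^(2.8)+25 = o(n^3).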

3) Finding square root of a number using C:

#include <stdio.h>

int main()
{
    int number;
    float temp, sqrt;

    printf("Number: \n");
    scanf("%d", &number);

    // Babylonian (Newton's) method: start from an initial guess and
    // refine it until the estimate stops changing
    sqrt = number / 2.0;
    temp = 0;

    while (sqrt != temp) {
        temp = sqrt;
        sqrt = (number / temp + temp) / 2;
    }

    printf("The square root of '%d' is '%.4f'", number, sqrt);
    return 0;
}
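
Note: comparing floats with != relies on the iteration reaching an exact fixed point. This works for typical inputs, but a more robust stopping rule is a tolerance test such as while (fabs(sqrt - temp) > 0.0001), with <math.h> included.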
4) Program to find GCD of two numbers using C:

#include <stdio.h>

int main()
{
    int n1, n2, i, gcd = 1;

    printf("Enter two integers: ");
    scanf("%d %d", &n1, &n2);

    for (i = 1; i <= n1 && i <= n2; ++i) {
        // Checks if i is a factor of both integers
        if (n1 % i == 0 && n2 % i == 0)
            gcd = i;
    }

    printf("G.C.D of %d and %d is %d", n1, n2, gcd);
    return 0;
}
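
The trial-division loop above takes O(min(n1, n2)) time. As a sketch of a faster alternative (not part of the original assignment; the function name gcd_euclid is illustrative), the Euclidean algorithm computes the same result in O(log(min(n1, n2))) steps:

#include <stdio.h>

// Euclidean algorithm: gcd(a, b) = gcd(b, a mod b), and gcd(a, 0) = a
int gcd_euclid(int a, int b)
{
    while (b != 0) {
        int r = a % b;
        a = b;
        b = r;
    }
    return a;
}

int main()
{
    printf("%d", gcd_euclid(48, 18));  // prints 6
    return 0;
}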

5) Bubble sort: The worst-case time complexity is O(n^2). It occurs on a reverse-sorted
array, since in that case every pass must be made and every pass performs the maximum number of swaps.
Selection sort: The time complexity is O(n^2) in every case, since finding the minimum element at each
iteration requires traversing the entire unsorted portion of the array.
Insertion sort: The worst-case time complexity is O(n^2). It occurs on a reverse-sorted
array, since finding the correct position for each element then requires traversing the whole
sorted portion every time (see the sketch below).

Space complexity is O(1) for all three algorithms, as no extra space beyond a few variables is
needed, making them in-place algorithms.
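
A minimal insertion sort sketch in C illustrating the worst case described above (not part of the original assignment):

#include <stdio.h>

// Insertion sort: on a reverse-sorted input the inner loop shifts every
// element of the sorted prefix, which gives the O(n^2) worst case
void insertion_sort(int a[], int n)
{
    for (int i = 1; i < n; i++) {
        int key = a[i];
        int j = i - 1;
        while (j >= 0 && a[j] > key) {  // shift larger elements right
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;
    }
}

int main()
{
    int a[] = {5, 4, 3, 2, 1};  // reverse-sorted: the worst case
    insertion_sort(a, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", a[i]);
    return 0;
}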

6) Program to merge two sorted arrays using C:

#include <stdio.h>

int main()
{
    int n1, n2, n3;
    int a[10000], b[10000], c[20000];

    printf("Enter the size of first array: ");
    scanf("%d", &n1);
    printf("Enter the array elements: ");
    for (int i = 0; i < n1; i++)
        scanf("%d", &a[i]);

    printf("Enter the size of second array: ");
    scanf("%d", &n2);
    printf("Enter the array elements: ");
    for (int i = 0; i < n2; i++)
        scanf("%d", &b[i]);

    // Concatenate the two arrays into c
    n3 = n1 + n2;
    for (int i = 0; i < n1; i++)
        c[i] = a[i];
    for (int i = 0; i < n2; i++)
        c[i + n1] = b[i];

    printf("The merged array: ");
    for (int i = 0; i < n3; i++)
        printf("%d ", c[i]);

    // Sort the concatenated array by exchanging out-of-order pairs
    for (int i = 0; i < n3; i++) {
        for (int j = i + 1; j < n3; j++) {
            if (c[i] > c[j]) {
                int temp = c[i];
                c[i] = c[j];
                c[j] = temp;
            }
        }
    }

    printf("\nFinal array after sorting: ");
    for (int i = 0; i < n3; i++)
        printf(" %d ", c[i]);

    return 0;
}
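
Since both input arrays are already sorted, the concatenate-then-sort approach above does more work than necessary. A two-pointer merge produces the sorted result directly in O(n1 + n2) time; a minimal sketch (the function name merge is illustrative, not from the original):

#include <stdio.h>

// Two-pointer merge: repeatedly copy the smaller front element,
// then append whatever remains of either input
void merge(const int a[], int n1, const int b[], int n2, int c[])
{
    int i = 0, j = 0, k = 0;
    while (i < n1 && j < n2)
        c[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
    while (i < n1)
        c[k++] = a[i++];
    while (j < n2)
        c[k++] = b[j++];
}

int main()
{
    int a[] = {1, 3, 5};
    int b[] = {2, 4, 6, 8};
    int c[7];
    merge(a, 3, b, 4, c);
    for (int k = 0; k < 7; k++)
        printf("%d ", c[k]);  // 1 2 3 4 5 6 8
    return 0;
}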

7) Implementation of quick sort using Hoare partitioning approach in C++:

#include <iostream>
#include <ctime>
#include <cstdlib>
using namespace std;

#define N 15

// Partition using Hoare's partitioning scheme
int partition(int a[], int low, int high)
{
    int pivot = a[low];
    int i = low - 1;
    int j = high + 1;

    while (1) {
        // Move i right past elements smaller than the pivot
        do {
            i++;
        } while (a[i] < pivot);

        // Move j left past elements larger than the pivot
        do {
            j--;
        } while (a[j] > pivot);

        if (i >= j) {
            return j;
        }
        swap(a[i], a[j]);
    }
}

void quicksort(int a[], int low, int high)
{
    if (low >= high) {
        return;
    }
    int pivot = partition(a, low, high);

    // With Hoare partitioning the returned index may still hold the
    // pivot, so it is included in the left recursive call
    quicksort(a, low, pivot);
    quicksort(a, pivot + 1, high);
}

int main()
{
    int arr[N];
    srand(time(NULL));

    // Fill the array with random values in [-50, 49]
    for (int i = 0; i < N; i++) {
        arr[i] = (rand() % 100) - 50;
    }

    quicksort(arr, 0, N - 1);

    for (int i = 0; i < N; i++) {
        cout << arr[i] << " ";
    }
    return 0;
}

Implementation of quick sort using Lomuto partitioning approach:

// Swap two integers through pointers
void swap(int *i, int *j)
{
    int tmp = *i;
    *i = *j;
    *j = tmp;
}

// Lomuto partitioning: the last element is the pivot; i tracks the
// boundary of the region of elements smaller than the pivot
int partition(int *arr, int l, int r)
{
    int pivot = arr[r];
    int i = l;
    for (int j = l; j < r; ++j) {
        if (arr[j] < pivot) {
            swap(&arr[i], &arr[j]);
            ++i;
        }
    }
    swap(&arr[i], &arr[r]);
    return i;
}

void quicksort(int *arr, int l, int r)
{
    if (l >= r)
        return;
    int i = partition(arr, l, r);

    // The pivot ends at its final index i, so it is excluded
    // from both recursive calls
    quicksort(arr, l, i - 1);
    quicksort(arr, i + 1, r);
}
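
A minimal driver for the Lomuto version above (illustrative, not part of the original):

#include <stdio.h>

int main(void)
{
    int arr[] = {9, -3, 5, 2, 6, 8, -6, 1, 3};
    int n = sizeof(arr) / sizeof(arr[0]);

    quicksort(arr, 0, n - 1);   // sort the whole array in place

    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);  // -6 -3 1 2 3 5 6 8 9
    return 0;
}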

Quick sort runs in O(n log(n)) time on average; the worst case, e.g. when the pivot is always the smallest or largest element, is O(n^2). The extra space is the recursion stack: O(log(n)) on average and O(n) in the worst case.
