
OSD Lab File

Delhi Technological University, Rohini Delhi-110042


(Govt. of NCT of Delhi)

Operating System and Design


(CO – 204)

Submitted To: Rishabh Chakraborty
Submitted By: Dhruv Rajput (2K21/EP/34)
Index

S. No.  Objective                                                              Date    Signature

1.   To study functions, types of OS, Preemption and Scheduling Algorithms.
2.   To implement First Come First Serve and Shortest Job First Scheduling Algorithms.
3.   To implement Longest Job First Scheduling Algorithm.
4.   To implement Round Robin Scheduling Algorithm.
5.   To implement Priority Scheduling Algorithm.
6.   To implement Highest Response Ratio Next Scheduling Algorithm.
7.   To implement Fork System Call.
8.   To implement Banker’s Algorithm.
9.   To implement Dining Philosopher’s Problem.
10.  To implement First Fit, Best Fit and Worst Fit Memory Allocation Algorithms.
Experiment – 1
Functions, Types of OS, Preemption &
Scheduling Algorithms
What is an Operating System
An operating system (OS) is a crucial software component that manages computer hardware
and software resources and provides services for computer programs. It acts as an
intermediary between computer hardware and the applications running on it.

Functions of Operating System


1. Process Management: The OS manages processes, which are instances of executing
programs. It allocates resources, such as CPU time and memory, to processes and
ensures they run efficiently.

2. Memory Management: It handles memory allocation and deallocation, ensuring that
each process has enough memory to execute without interfering with other
processes.

3. File System Management: The OS provides mechanisms for storing, organizing, and
accessing files on storage devices such as hard drives and SSDs.

4. Device Management: It controls peripheral devices such as keyboards, mice,
printers, and network adapters, allowing applications to interact with them without
needing to know the specifics of each device.

5. User Interface: Operating systems provide a user interface (UI) through which users
can interact with the computer. This can range from a command-line interface (CLI)
to graphical user interfaces (GUIs) with windows, icons, buttons, and menus.

6. Security: Operating systems implement security measures to protect the system and
its resources from unauthorized access, viruses, malware, and other threats.

Types of Operating System


1. Batch Operating System
A batch operating system is a type of operating system that manages and executes
jobs in batches without any user interaction during the job's execution. In a batch
processing system, users submit their tasks (jobs) to the computer system in batches,
and the operating system executes these jobs sequentially without requiring user
intervention until all the jobs in the batch are completed.
2. Time Sharing (Multitasking) Operating System
A time-sharing operating system enables multiple users to share a computer's
resources simultaneously. It allows users to interact with the system in real-time
through terminals, provides fast response times, and uses CPU scheduling to allocate
time slices to each user or process. This system facilitates efficient resource sharing
and context switching between tasks, making it ideal for interactive computing
environments. Examples include UNIX, Linux, and modern versions of Windows.

3. Real Time Operating System (RTOS)
A real-time operating system (RTOS) is designed to execute tasks with precise timing
constraints. It guarantees that tasks meet deadlines by providing predictable
response times to events. RTOSs are commonly used in embedded systems,
industrial control systems, and other applications where timing accuracy is critical,
such as automotive systems, medical devices, and aerospace systems. Examples
include FreeRTOS, VxWorks, and QNX.
4. Distributed Operating System
A distributed operating system is a type of operating system that manages multiple
independent computers or nodes that are connected through a network. It provides
a unified interface for users and applications to access resources across the
distributed system, such as files, storage, and processing power. Distributed
operating systems handle tasks like process coordination, communication, and
resource allocation across the network. They are commonly used in large-scale
distributed computing environments, such as cloud computing platforms and peer-to-peer
networks. Examples include Amoeba and Plan 9.

Processes and Threads


A process is an instance of a program in execution. It consists of the program code,
data, and resources (such as memory, CPU time, and I/O devices) allocated by the
operating system.
Each process has its own address space, which includes the program code, data,
stack, and heap. Processes are isolated from each other, meaning one process cannot
directly access another process's memory.
Processes are managed by the operating system's process scheduler, which allocates
CPU time and other resources to processes.
Processes communicate with each other through inter-process communication (IPC)
mechanisms provided by the operating system, such as pipes, sockets, and shared
memory.

A thread is the smallest unit of execution within a process. Multiple threads can exist
within a single process and share the same memory space.
Threads within the same process share the process's resources, including memory
and open files, making communication and data sharing between threads easier and
more efficient compared to processes.
Threads allow for concurrent execution within a process, enabling tasks to be
executed simultaneously or concurrently. This concurrency can lead to improved
performance and responsiveness in multi-threaded applications.
Threads are scheduled for execution by the operating system's thread scheduler,
which allocates CPU time to individual threads.
Threads typically have their own stack space for storing local variables and function
call information, but they share the process's global variables and heap memory.
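
To make the shared-memory property of threads concrete, the short C++ sketch below
(illustrative only: the names shared_counter and worker are assumed for this example,
and it needs a C++11 compiler built with -pthread) runs two threads of the same
process that update one global variable, something separate processes could only do
through IPC:

#include <iostream>
#include <thread>
#include <mutex>

int shared_counter = 0;   // global data shared by every thread in the process
std::mutex counter_mutex; // protects shared_counter from concurrent updates

void worker(int increments) {
    for (int i = 0; i < increments; i++) {
        std::lock_guard<std::mutex> lock(counter_mutex); // serialize access
        shared_counter++;
    }
}

int main() {
    std::thread t1(worker, 1000); // two threads, one address space
    std::thread t2(worker, 1000);
    t1.join();
    t2.join();
    std::cout << "Final counter value: " << shared_counter << std::endl; // 2000
    return 0;
}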

Preemption and Non-Preemption


Preemption refers to the ability of an operating system to interrupt the execution of
a currently running process or thread to allocate the CPU to another process or
thread with a higher priority.
In preemptive scheduling, the operating system can preempt a running process or
thread and place it back into the ready queue to allow another process or thread to
run.
Preemption ensures that higher-priority tasks can be executed promptly, even if
lower-priority tasks are currently running. It helps improve system responsiveness
and can prevent tasks from monopolizing system resources.
Preemptive scheduling is commonly used in real-time operating systems and in
general-purpose operating systems to provide fairness and responsiveness to
interactive tasks.

Non-preemptive scheduling, also known as cooperative scheduling, does not allow the
operating system to forcibly take the CPU away from a running process or thread. In
non-preemptive scheduling, a process or thread continues to run until it voluntarily
yields control of the CPU, typically by blocking on I/O or by explicitly yielding to
another process or thread.
Non-preemptive scheduling relies on processes or threads to cooperate and
voluntarily relinquish CPU control, which can lead to issues such as priority inversion
and reduced system responsiveness.
Non-preemptive scheduling is simpler to implement and may be suitable for certain
embedded systems or environments where tasks are well-behaved and predictable.
Scheduling in OS
Operating system scheduling is a critical aspect of managing processes and threads
efficiently. It involves determining the order in which processes or threads are executed on
the CPU. Several factors influence scheduling decisions, including burst time, arrival time,
and waiting time.

- Burst Time: Burst time is the time taken by a process to complete its execution. It is
a crucial parameter for CPU scheduling algorithms, influencing decisions on process
prioritization.
- Arrival Time: Arrival time is the instance when a process enters the ready queue for
execution. It plays a significant role in determining the order in which processes are
scheduled.

- Waiting Time: Waiting time is the cumulative time a process spends waiting in the
ready queue before getting CPU time. It is computed as the turnaround time minus the
burst time.

Effective scheduling algorithms aim to minimize waiting times, maximize CPU utilization, and
optimize system performance, considering factors like burst time, arrival time, and priority.
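
As a small worked example (the numbers are assumed for illustration): a process that
arrives at time 2, needs 4 units of CPU time, and completes at time 9 has a turnaround
time of 9 - 2 = 7 and a waiting time of 7 - 4 = 3, since Turnaround Time = Completion
Time - Arrival Time and Waiting Time = Turnaround Time - Burst Time. The programs in
the following experiments use these same formulas.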

Submitted by : Dhruv Rajput (2K21/EP/34)


Experiment - 2
To implement First Come First Serve and
Shortest Job First Scheduling Algorithm.
Theory
Scheduling Algorithms:

1. First Come First Serve (FCFS):


The First-Come, First-Served (FCFS) scheduling algorithm is one of the simplest
strategies used by operating systems to manage tasks. It works by executing tasks in
the order they arrive in the system's ready queue. FCFS is non-preemptive, meaning
once a task starts executing, it continues until completion or voluntary
relinquishment of the CPU.

FCFS prioritizes tasks solely based on their arrival time, without considering their
priority levels or execution times. This simplicity makes FCFS easy to implement and
understand, which is advantageous for systems where complexity needs to be
minimized.

However, FCFS may not always lead to optimal performance. It can suffer from the
"convoy effect," where short tasks get delayed behind long-running tasks, leading to
inefficient CPU utilization and longer response times. Despite this drawback, FCFS
ensures fairness, as all tasks eventually get CPU time, and it may be suitable for
systems where predictability and simplicity are prioritized over efficiency.

2. Shortest Job First (SJF):


Shortest Job First (SJF) is a CPU scheduling algorithm that selects the task with the
shortest burst time (execution time) from the ready queue for execution. This
algorithm aims to minimize average waiting time and turnaround time by prioritizing
shorter tasks. SJF can be preemptive or non-preemptive. In preemptive SJF, if a new
task with a shorter burst time arrives while a task is already executing, the currently
running task may be preempted. In non-preemptive SJF, the running task continues
until completion. SJF is optimal in terms of average waiting time among all
scheduling algorithms but requires knowledge of burst times, which may not always
be available in practical scenarios.
First Come First Serve Scheduling:

Code:

#include<bits/stdc++.h>
using namespace std;

class Process {
public:
int pid;
int at;
int bt;
};

static bool cmp(Process& a, Process& b) {


if(a.at == b.at) {
return a.bt < b.bt;
}
return a.at < b.at;
}

void fcfsScheduler(vector<Process> &processes) {


int currTime = 0;

int tat;
int wt;

int n = processes.size();

vector<int> CT(n+1);

float avgtat = 0;
float avgwt = 0;

for(int i = 0; i < n; i++) {


Process currProcess = processes[i];

if(currTime < currProcess.at) {


currTime = currProcess.at;
}

cout<<"Process "<<currProcess.pid<<" is executing from time "<<currTime;


currTime += currProcess.bt;
cout<<" to "<<currTime<<endl;
CT[processes[i].pid] = currTime;
tat=CT[processes[i].pid]-currProcess.at;
avgtat += tat;
wt=tat-currProcess.bt;
avgwt += wt;

cout<<"turn around time "<<tat<<endl;


cout<<"waiting time "<<wt<<endl;
}
cout<<endl;
cout<<"average tat is : "<<avgtat/n<<endl;
cout<<"avg wt is : "<<avgwt/n<<endl;
}

int main() {

int numOfProcesses;
cin>>numOfProcesses;

vector<Process> processes;
for(int i = 0; i < numOfProcesses; i++) {
Process p;
p.pid = i+1;
cin>>p.at>>p.bt;
processes.push_back(p);
}

sort(processes.begin(), processes.end(), cmp);

fcfsScheduler(processes);

return 0;
}

Output:
Shortest Job First Scheduling:

Code:

#include<bits/stdc++.h>
using namespace std;

// Structure to represent each process


struct process_struct {
int pid; // Process ID
int at; // Arrival Time
int bt; // Burst Time
int ct, wt, tat, rt;// Completion Time, Waiting Time, Turnaround Time, Response Time
int start_time; // Start Time of execution
} ps[100];

int main() {
int n;
bool is_completed[100] = {false}; // Array to track completion status of each process
int current_time = 0;
int completed = 0;
cin >> n;
int sum_tat = 0, sum_wt = 0, sum_rt = 0, total_idle_time = 0, prev = 0;

cout << fixed << setprecision(2); // Setting floating point precision

// Input arrival times for each process


for (int i = 0; i < n; i++) {
cin >> ps[i].at;
ps[i].pid = i;
}

// Input burst times for each process


for (int i = 0; i < n; i++) {
cin >> ps[i].bt;
}

// Process scheduling loop


while (completed != n) {
// Find process with min. burst time in ready queue at current time
int min_index = -1;
int minimum = INT_MAX;
for (int i = 0; i < n; i++) {
if (ps[i].at <= current_time && !is_completed[i]) {
if (ps[i].bt < minimum) {
minimum = ps[i].bt;
min_index = i;
} else if (ps[i].bt == minimum && min_index != -1 &&
ps[i].at < ps[min_index].at) {
// tie on burst time: prefer the process that arrived earlier
min_index = i;
}
}
}

if (min_index == -1) {
current_time++;
} else {
// Update process details for completion
ps[min_index].start_time = current_time;
ps[min_index].ct = ps[min_index].start_time + ps[min_index].bt;
ps[min_index].tat = ps[min_index].ct - ps[min_index].at;
ps[min_index].wt = ps[min_index].tat - ps[min_index].bt;
ps[min_index].rt = ps[min_index].wt;

// Update statistics
sum_tat += ps[min_index].tat;
sum_wt += ps[min_index].wt;
sum_rt += ps[min_index].rt;
total_idle_time += (prev == 0) ? 0 : (ps[min_index].start_time - prev);

completed++;
is_completed[min_index] = true;
current_time = ps[min_index].ct;
prev = current_time;
}
}

// Output
cout << "\nProcess No.\tAT\tBurst Time\t\tCT\t\tTAT\t\tWT\t\tRT\n";
for (int i = 0; i < n; i++)
cout << i + 1 << "\t\t" << ps[i].at << "\t\t" << ps[i].bt << "\t\t" << ps[i].ct << "\t\t" <<
ps[i].tat << "\t\t" << ps[i].wt << "\t\t" << ps[i].rt << endl;
cout << endl;

cout << "\nAverage Turn Around time= " << (float)sum_tat / n;


cout << "\nAverage Waiting Time= " << (float)sum_wt / n;
cout << "\nAverage Response Time= " << (float)sum_rt / n;
return 0;
}

Output:
Inference
In this experiment, C++ programs were developed to implement the First-Come, First-
Served (FCFS) and Shortest Job First (SJF) scheduling algorithms for operating
systems. The FCFS algorithm prioritizes tasks based on their arrival order, while the
SJF algorithm selects tasks with the shortest execution time. Through this
implementation, insights into process scheduling efficiency and algorithm
performance were gained, aiding in understanding the practical implications of
different scheduling strategies in operating systems.

The average Turn Around Time for FCFS is 5.4


The average Waiting Time for FCFS is 2.8

The average Turn Around Time for SJF is 5


The average Waiting Time for SJF is 2.4

Submitted by : Dhruv Rajput (2K21/EP/34)


Experiment - 3
To implement Longest Job First Scheduling
Algorithm.
Theory
Scheduling Algorithms:

1. Longest Job First (LJF):


The Longest Job First (LJF) scheduling algorithm, also known as Longest Job Next
(LJN), is a non-preemptive scheduling algorithm used in operating systems for task
scheduling. In this algorithm, the processor selects the task with the longest
estimated run time to execute first, hence the name "longest job first." Once a task is
started, it runs to completion without interruption unless it voluntarily relinquishes
the CPU or completes its execution.

Theoretical explanation of the LJF scheduling algorithm:

Task Arrival: As tasks arrive, they are added to the ready queue.

Selection of Task: When the CPU becomes available (either because it's idle or the
current task finishes execution), the scheduler selects the task with the longest
estimated run time from the ready queue. This is the distinguishing feature of the LJF
algorithm.

Task Execution: The selected task is then executed by the CPU. Since LJF is non-
preemptive, the task runs until it completes its execution or voluntarily relinquishes
the CPU.

Completion of Task: Once the task finishes execution, it is removed from the system.

Repeat: Steps 2-4 are repeated until there are no more tasks remaining in the
system.
Code:

#include <iostream>
#include <vector>
#include <algorithm>

using namespace std;

struct Process {
int id;
int arrivalTime;
int burstTime;
int completionTime;
int waitingTime;
int turnaroundTime;
};

// Comparator function to sort processes based on arrival time and then burst
// time in descending order
bool compare(Process p1, Process p2) {
if (p1.arrivalTime != p2.arrivalTime)
return p1.arrivalTime < p2.arrivalTime;
return p1.burstTime > p2.burstTime;
}
void calculateCompletionTimes(vector<Process>& processes) {
int currentTime = 0;
for (int i = 0; i < processes.size(); ++i) {
currentTime = max(currentTime, processes[i].arrivalTime);
processes[i].completionTime = currentTime + processes[i].burstTime;
processes[i].turnaroundTime = processes[i].completionTime -
processes[i].arrivalTime;
processes[i].waitingTime = processes[i].turnaroundTime -
processes[i].burstTime;
currentTime = processes[i].completionTime;
}
currentTime=0;
cout << "Order of execution:\n";
for (int i = 0; i < processes.size(); ++i) {
cout << "Process " << processes[i].id << " executes from time " <<
max(currentTime, processes[i].arrivalTime) << " to " << max(currentTime,
processes[i].arrivalTime) + processes[i].burstTime << endl;
currentTime = max(currentTime, processes[i].arrivalTime) +
processes[i].burstTime;
}
}

int main() {
int n;
cout << "Enter the number of processes: ";
cin >> n;

vector<Process> processes(n);

cout << "Enter arrival time and burst time for each process:\n";
for (int i = 0; i < n; ++i) {
processes[i].id = i + 1;
cout << "Process " << processes[i].id << ":\n";
cout << "Arrival Time: ";
cin >> processes[i].arrivalTime;
cout << "Burst Time: ";
cin >> processes[i].burstTime;
}

// Sort processes according to arrival time and then burst time


sort(processes.begin(), processes.end(), compare);

// Calculate completion times, waiting times, and turnaround times


calculateCompletionTimes(processes);

// Calculate average turnaround time and average waiting time


double avgTurnaroundTime = 0, avgWaitingTime = 0;
for (int i = 0; i < n; ++i) {
avgTurnaroundTime += processes[i].turnaroundTime;
avgWaitingTime += processes[i].waitingTime;
}
avgTurnaroundTime /= n;
avgWaitingTime /= n;

// Output average turnaround time and average waiting time


cout << "Average Turnaround Time: " << avgTurnaroundTime << endl;
cout << "Average Waiting Time: " << avgWaitingTime << endl;

return 0;
}

Output:
Inference
The code calculates the average turnaround time (TAT) and average waiting time
(WT) for the Longest Job First (LJF) scheduling algorithm. It iterates through a list of
processes, prioritizing tasks with longer estimated run times. It then calculates the
completion time, turnaround time, and waiting time for each process. Finally, it
computes the averages of these metrics and prints them.

The average Turn Around Time is 5.8


The average Waiting Time is 3.2

Submitted by : Dhruv Rajput (2K21/EP/34)


Experiment – 4, 5
To implement Round Robin and Priority
Scheduling Algorithms.
Theory
Scheduling Algorithms:

1. Round Robin (RR):


Round robin scheduling, a fundamental CPU scheduling algorithm in operating
systems, operates on the principle of fairness and time slicing. It allocates CPU time
in a cyclic manner to processes awaiting execution. Each process receives a fixed
time quantum, during which it can utilize the CPU. Once a process's time quantum
expires, it is preempted and moved to the end of the ready queue, allowing the next
process in line to execute. This approach ensures that no process monopolizes the
CPU for an extended period, promoting fairness in resource allocation.

One of the key advantages of round robin scheduling lies in its simplicity and ease of
implementation. It requires maintaining a ready queue of processes and a timer to
enforce time quanta. Additionally, it offers relatively low response times, particularly
for processes with short CPU bursts, as they can quickly execute within their allotted
time slices.

2. Priority Scheduling
Priority scheduling is a CPU scheduling algorithm used in operating systems to
determine which processes should be executed next based on their priority levels. In
this algorithm, each process is assigned a priority, and the scheduler selects the
process with the highest priority for execution. Processes with higher priorities are
given precedence over those with lower priorities, ensuring that critical tasks are
handled promptly.

One of the main advantages of priority scheduling is its ability to prioritize important
or time-sensitive tasks, such as real-time processes or system-critical operations. By
assigning appropriate priorities to processes, the system can ensure that vital tasks
are completed efficiently, thus improving overall system performance and
responsiveness.
1. Round Robin Code

Code:
#include <iostream>
#include <algorithm>
#include <queue>
#include <iomanip>
#include <climits>
using namespace std;
struct process_struct
{
int pid;
int at;
int bt;
int ct, wt, tat, rt, start_time;
int bt_remaining;
} ps[100];
bool comparatorAT(struct process_struct a, struct process_struct b)
{
int x = a.at;
int y = b.at;
return x < y;
}
bool comparatorPID(struct process_struct a, struct process_struct b)
{
int x = a.pid;
int y = b.pid;
return x < y;
}
int main()
{
int n, index;
float cpu_utilization;
queue<int> q;
bool visited[100] = {false}, is_first_process = true;
int current_time = 0, max_completion_time;
int completed = 0, tq, total_idle_time = 0, length_cycle;
cout << "Enter total number of processes: ";
cin >> n;
float sum_tat = 0, sum_wt = 0, sum_rt = 0;
cout << fixed << setprecision(2);
for (int i = 0; i < n; i++)
{
cout << "\nEnter Process " << i + 1 << " Arrival Time: ";
cin >> ps[i].at;
ps[i].pid = i;
}
for (int i = 0; i < n; i++)
{
cout << "\nEnter Process " << i + 1 << " Burst Time: ";
cin >> ps[i].bt;
ps[i].bt_remaining = ps[i].bt;
}
cout << "\nEnter time quanta: ";
cin >> tq;
// sort structure on the basis of Arrival time in increasing order
sort(ps, ps + n, comparatorAT);
q.push(0);
visited[0] = true;
while (completed != n)
{
index = q.front();
q.pop();
if (ps[index].bt_remaining == ps[index].bt)
{
ps[index].start_time =
max(current_time, ps[index].at);
total_idle_time += (is_first_process == true) ? 0 :
ps[index].start_time - current_time;
current_time = ps[index].start_time;
is_first_process = false;
}
if (ps[index].bt_remaining - tq > 0)
{
ps[index].bt_remaining -= tq;
current_time += tq;
}
else
{
current_time += ps[index].bt_remaining;
ps[index].bt_remaining = 0;
completed++;
ps[index].ct = current_time;
ps[index].tat = ps[index].ct - ps[index].at;
ps[index].wt = ps[index].tat - ps[index].bt;
ps[index].rt = ps[index].start_time -
ps[index].at;
sum_tat += ps[index].tat;
sum_wt += ps[index].wt;
sum_rt += ps[index].rt;
}
// check which new Processes need to be pushed to Ready Queue from
// the Input list
for (int i = 1; i < n; i++)
{
if (ps[i].bt_remaining > 0 && ps[i].at <= current_time &&
visited[i] == false)
{
q.push(i);
visited[i] = true;
}
}
// check if Process on CPU needs to be pushed to Ready Queue
if (ps[index].bt_remaining > 0)
q.push(index);
// if queue is empty, just add one process from list, whose remaining
// burst time > 0
if (q.empty())
{
for (int i = 1; i < n; i++)
{
if (ps[i].bt_remaining > 0)
{
q.push(i);
visited[i] = true;
break;
}
}
}
} // end of while
// Calculate Length of Process completion cycle
max_completion_time = INT_MIN;
for (int i = 0; i < n; i++)
max_completion_time =
max(max_completion_time, ps[i].ct);
length_cycle = max_completion_time - ps[0].at; // or ps[0].start_time
cpu_utilization = (float)(length_cycle - total_idle_time) /
length_cycle;
// sort so that process ID in output comes in Original order (just for
// interactivity - not needed otherwise)
sort(ps, ps + n, comparatorPID);
// Output
cout << "\nProcess No.\tAT\tCPU Burst Time\tStart Time\tCT\tTAT\tWT\tRT\n
";
for (int i = 0; i < n; i++)
cout
<< i << "\t\t" << ps[i].at << "\t" << ps[i].bt << "\t\t" <<
ps[i].start_time << "\t\t" << ps[i].ct << "\t" << ps[i].tat << "\t" <<
ps[i].wt << "\t"
<< ps[i].rt << endl;
cout << endl;
cout << "\nAverage Turn Around time= " << (float)sum_tat / n;
cout << "\nAverage Waiting Time= " << (float)sum_wt / n;
cout << "\nAverage Response Time= " << (float)sum_rt / n << endl;
return 0;
}

Output:
2. Priority Scheduling Code
Code :

#include<bits/stdc++.h>
using namespace std;

struct process
{
int pid, arrival_time, burst_time, priority, start_time;
int completion_time, turnaround_time, waiting_time, response_time;
};
int main()
{
int n, total_turnaround_time = 0, total_waiting_time = 0,
total_response_time = 0, total_idle_time = 0;
struct process p[100];
float avg_turnaround_time, avg_waiting_time, avg_response_time,
cpu_utilisation, throughput;
int burst_remaining[100];
int is_completed[100];
memset(is_completed, 0, sizeof(is_completed));
cout << setprecision(2) << fixed;
cout << "Enter the number of processes: ";
cin >> n;
for (int i = 0; i < n; i++) {
cout << "Enter arrival time of process " << i + 1 << ": ";
cin >> p[i].arrival_time;
cout << "Enter burst time of process " << i + 1 << ": ";
cin >> p[i].burst_time;
cout << "Enter priority of the process " << i + 1 << ": ";
cin >> p[i].priority;
p[i].pid = i + 1;
burst_remaining[i] = p[i].burst_time;
cout << endl;
}
int current_time = 0;
int completed = 0;
int prev = 0;
while (completed != n)
{
int idx = -1;
int min_priority = INT_MAX; // Initialize min_priority with the maximum possible value
for (int i = 0; i < n; i++)
{
if (p[i].arrival_time <= current_time && is_completed[i] == 0 &&
p[i].priority < min_priority)
{
min_priority = p[i].priority;
idx = i;
}
}
if (idx != -1)
{
if (burst_remaining[idx] == p[idx].burst_time)
{
p[idx].start_time = current_time;
total_idle_time += p[idx].start_time - prev;
}
burst_remaining[idx] -= 1;
current_time++;
prev = current_time;
if (burst_remaining[idx] == 0)
{
p[idx].completion_time = current_time;
p[idx].turnaround_time = p[idx].completion_time -
p[idx].arrival_time;
p[idx].waiting_time = p[idx].turnaround_time -
p[idx].burst_time;
p[idx].response_time = p[idx].start_time -
p[idx].arrival_time;
total_turnaround_time += p[idx].turnaround_time;
total_waiting_time += p[idx].waiting_time;
total_response_time += p[idx].response_time;
is_completed[idx] = 1;
completed++;
}
}
else
{
current_time++;
}
}
int min_arrival_time = 10000000;
int max_completion_time = -1;
for (int i = 0; i < n; i++)
{
min_arrival_time = min(min_arrival_time, p[i].arrival_time);
max_completion_time =
max(max_completion_time, p[i].completion_time);
}
avg_turnaround_time = (float)total_turnaround_time / n;
avg_waiting_time = (float)total_waiting_time / n;

avg_response_time = (float)total_response_time / n;

cpu_utilisation = ((max_completion_time - total_idle_time) /
(float)max_completion_time) * 100;

throughput = float(n) / (max_completion_time - min_arrival_time);

cout << endl


<< endl;

cout << "#P\t"


<< "AT\t"
<< "BT\t"
<< "PRI\t"
<< "ST\t"
<< "CT\t"
<< "TAT\t"
<< "WT\t"
<< "RT\t"
<< "\n"
<< endl;
for (int i = 0; i < n; i++)
{
cout << p[i].pid << "\t" << p[i].arrival_time << "\t" <<
p[i].burst_time
<< "\t" << p[i].priority << "\t" << p[i].start_time << "\t" <<
p[i].completion_time << "\t" << p[i].turnaround_time << "\t" <<
p[i].waiting_time << "\t" << p[i].response_time << "\t"
<< "\n"
<< endl;
}
cout << "Average Turnaround Time = " << avg_turnaround_time << endl;
cout << "Average Waiting Time = " << avg_waiting_time << endl;
cout << "Average Response Time = " << avg_response_time << endl;
cout << "CPU Utilization = " << cpu_utilisation << "%" << endl;
cout << "Throughput = " << throughput << " process/unit time" << endl;
}

Output:
Inference
The code calculates the average turnaround time (TAT) and average waiting time
(WT) for the Round Robin and Priority scheduling algorithms. Round Robin cycles
through the ready queue, giving each process a fixed time quantum, while Priority
scheduling always runs the ready process with the highest priority. Both programs
then calculate the completion time, turnaround time, and waiting time for each
process and compute the averages of these metrics.

The average Turn Around Time for round robin is 6.8


The average Waiting Time for round robin is 4.2

The average Turn Around Time for priority scheduling is 42.4


The average Waiting Time for priority scheduling is 29

Submitted by : Dhruv Rajput (2K21/EP/34)


Experiment – 6, 7
To implement Highest Response Ratio
Next Scheduling Algorithm and Fork
System call.
Theory
Scheduling Algorithms:

1. HRRN Scheduling (Highest Response Ratio Next):


The Highest Response Ratio Next (HRRN) algorithm is a CPU scheduling algorithm
used in operating systems to decide the order in which processes should be executed
on the CPU. Unlike many other scheduling algorithms, HRRN prioritizes processes
based on their response ratios, which take into account both the waiting time and
the burst time of a process.

In the HRRN algorithm, each process is assigned a response ratio, calculated as the
ratio of the sum of its waiting time and burst time to its burst time. Higher response
ratios indicate a higher urgency for CPU time. The algorithm selects the process with
the highest response ratio for execution, allowing for optimal utilization of CPU
resources.
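
As a quick illustration (a minimal sketch; the helper name responseRatio and the
sample values are assumed here and are not part of the HRRN program below), the
response ratio can be computed as follows:

#include <iostream>

// Response Ratio = (Waiting Time + Burst Time) / Burst Time
float responseRatio(int waiting_time, int burst_time) {
    return (float)(waiting_time + burst_time) / burst_time;
}

int main() {
    // A process that has waited 6 units and needs 3 units of CPU time:
    std::cout << responseRatio(6, 3) << std::endl; // prints 3, i.e. (6 + 3) / 3
    return 0;
}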

2. Fork() system call


The fork() system call is a fundamental operation in Unix-like operating systems,
including Linux. It is used to create a new process, known as the child process, which
is a copy of the calling process, known as the parent process. Here's a brief
explanation of the fork() system call:

When a fork() call is made, the operating system creates a new process by
duplicating the existing process. This duplication includes copying the entire address
space of the parent process, including its code, data, stack, and heap. Essentially, the
child process starts as an exact copy of the parent process.

After the fork() call, both the parent and child processes continue execution from the
point of the fork() call. However, they each receive a different return value from the
fork() call to distinguish between them. In the parent process, the return value is the
process ID (PID) of the newly created child process, while in the child process, the
return value is 0. This allows the processes to differentiate between themselves and
execute different code paths if needed.
1. Highest Response Ratio Next

Code:
#include <iostream>
#include <iomanip>
using namespace std;

struct Process {
char name;
int arrival_time, burst_time, completion_time, waiting_time,
turnaround_time;
int completed;
float normalized_turnaround_time;
} processes[10];

int num_processes;

void sortByArrival() {
struct Process temp;
int i, j;

for (i = 0; i < num_processes - 1; i++) {


for (j = i + 1; j < num_processes; j++) {
if (processes[i].arrival_time > processes[j].arrival_time) {
temp = processes[i];
processes[i] = processes[j];
processes[j] = temp;
}
}
}
}

int main() {
int i, j, total_burst_time = 0;
char name;
float current_time, average_waiting_time = 0, average_turnaround_time = 0;
num_processes = 5;

int arrival_times[] = { 0, 2, 4, 5, 7 };
int burst_times[] = { 2, 6, 7, 3, 5 };

for (i = 0, name = 'A'; i < num_processes; i++, name++) {


processes[i].name = name;
processes[i].arrival_time = arrival_times[i];
processes[i].burst_time = burst_times[i];
processes[i].completed = 0;
total_burst_time += processes[i].burst_time;
}

sortByArrival();
cout << "P_No.\tAT\tBT\tWT\tTAT\tNTT";

for (current_time = processes[0].arrival_time; current_time < total_burst_time;) {
float highest_response_ratio = -9999;
float temp;
int loc = -1;
for (i = 0; i < num_processes; i++) {
if (processes[i].arrival_time <= current_time &&
processes[i].completed != 1) {
temp = (processes[i].burst_time + (current_time -
processes[i].arrival_time)) / processes[i].burst_time;
if (highest_response_ratio < temp) {
highest_response_ratio = temp;
loc = i;
}
}
}
if (loc == -1) { current_time++; continue; } // no arrived process: CPU stays idle
current_time += processes[loc].burst_time;
processes[loc].waiting_time = current_time -
processes[loc].arrival_time - processes[loc].burst_time;
processes[loc].turnaround_time = current_time -
processes[loc].arrival_time;
average_turnaround_time += processes[loc].turnaround_time;
processes[loc].normalized_turnaround_time =
((float)processes[loc].turnaround_time / processes[loc].burst_time);
processes[loc].completed = 1;
average_waiting_time += processes[loc].waiting_time;
cout << "\n" << processes[loc].name << "\t" <<
processes[loc].arrival_time;
cout << "\t" << processes[loc].burst_time << "\t" <<
processes[loc].waiting_time;
cout << "\t" << processes[loc].turnaround_time << "\t" <<
processes[loc].normalized_turnaround_time;
}
cout << "\n\nAverage waiting time: " << average_waiting_time << endl;
cout << "Average Turn Around time: " << average_turnaround_time << endl;
}

Output:

2. Fork System Call

Code :
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main() {
pid_t pid;

// Fork a child process


pid = fork();

if (pid < 0) { // Error occurred


fprintf(stderr, "Fork failed");
return 1;
} else if (pid == 0) { // Child process
printf("This is the child process, with PID: %d\n", getpid());
} else { // Parent process
printf("This is the parent process, with PID: %d\n", getpid());
printf("Child process PID: %d\n", pid);
}

return 0;
}

Output:

Inference
The HRRN algorithm prioritizes processes based on their response ratios, optimizing
average response time and preventing starvation.
It implements a non-preemptive scheduling approach, selecting processes with the
highest response ratios for execution.

The fork() system call in C++ creates a child process identical to the parent process.
Upon successful execution, fork() returns the process ID (PID) of the child to the
parent and returns 0 to the child process.
Parent and child processes continue execution independently from the point of the
fork() call, allowing for concurrent execution of code paths.

Submitted by : Dhruv Rajput (2K21/EP/34)


Experiment – 8, 9
To implement Banker’s Algorithm and
Dining Philosopher’s Problem.
Theory

1. Banker’s Algorithm:
The Banker's Algorithm is a pivotal method for resource allocation and deadlock
avoidance in operating systems. It ensures that processes can securely request and
release resources without causing deadlock.

Conceptually, the algorithm simulates the allocation of resources to processes and


assesses if this allocation would result in a safe state, where all processes can
complete their execution without deadlock.

To use the Banker's Algorithm, processes must declare their maximum resource
requirements in advance. The system maintains key data structures such as the
allocation matrix, maximum matrix, and available vector representing the current
availability of resources.
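
For instance, with the data used in the program below, the remaining need of each
process is Need[i][j] = Max[i][j] - Allocation[i][j]; P0, with a maximum demand of
(7, 5, 3) and a current allocation of (0, 1, 0), still needs (7, 4, 3), and a process
can be scheduled safely only if this need fits within the currently available resources.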

2. Dining Philosopher’s Problem


The Dining Philosophers Problem is a classic synchronization challenge in computer
science, illustrating the complexities of resource allocation in concurrent systems. It
features a group of philosophers seated around a dining table, with each philosopher
alternating between thinking and eating. To eat, a philosopher must acquire two
adjacent forks, but if all philosophers attempt to acquire their left fork
simultaneously, they may deadlock. This problem highlights the importance of
proper resource management and synchronization techniques to prevent deadlocks
and ensure efficient utilization of shared resources in multi-threaded environments.
Various solutions have been proposed, including resource ordering, deadlock
detection and recovery, or introducing a central authority for resource allocation,
each addressing the problem's nuances and trade-offs.

1. Banker’s Algorithm

Code:
#include <iostream>
using namespace std;

int main() {
// P0, P1, P2, P3, P4 are the Process names here
int num_processes, num_resources, i, j, k;
num_processes = 5; // Number of processes
num_resources = 3; // Number of resources
int allocation[5][3] = { { 0, 1, 0 }, // P0 // Allocation Matrix
{ 2, 0, 0 }, // P1
{ 3, 0, 2 }, // P2
{ 2, 1, 1 }, // P3
{ 0, 0, 2 } }; // P4

int max_demand[5][3] = { { 7, 5, 3 }, // P0 // MAX Matrix


{ 3, 2, 2 }, // P1
{ 9, 0, 2 }, // P2
{ 2, 2, 2 }, // P3
{ 4, 3, 3 } }; // P4

int available_resources[3] = { 3, 3, 2 }; // Available Resources

int finished[num_processes], sequence[num_processes], ind = 0;


for (k = 0; k < num_processes; k++) {
finished[k] = 0;
}
int need[num_processes][num_resources];
for (i = 0; i < num_processes; i++) {
for (j = 0; j < num_resources; j++)
need[i][j] = max_demand[i][j] - allocation[i][j];
}
int y = 0;
for (k = 0; k < num_processes; k++) {
for (i = 0; i < num_processes; i++) {
if (finished[i] == 0) {

int flag = 0;
for (j = 0; j < num_resources; j++) {
if (need[i][j] > available_resources[j]){
flag = 1;
break;
}
}

if (flag == 0) {
sequence[ind++] = i;
for (y = 0; y < num_resources; y++)
available_resources[y] += allocation[i][y];
finished[i] = 1;
}
}
}
}

int safe = 1;

// To check if sequence is safe or not


for(int i = 0; i < num_processes; i++) {
if(finished[i] == 0) {
safe = 0;
cout << "The given sequence is not safe";
break;
}
}

if(safe == 1) {
cout << "Following is the SAFE Sequence" << endl;
for (i = 0; i < num_processes - 1; i++)
cout << " P" << sequence[i] << " ->";
cout << " P" << sequence[num_processes - 1] <<endl;
}

return 0;
}

Output:

2. Dining Philosopher’s Problem

Code :
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <unistd.h>

#define NUM_PHILOSOPHERS 5

pthread_mutex_t forks[NUM_PHILOSOPHERS];
pthread_t philosophers[NUM_PHILOSOPHERS];

void *philosopher(void *arg) {


int id = *(int *)arg;
int right = id;
int left = (id + 1) % NUM_PHILOSOPHERS;

// Infinite loop for each philosopher's actions


while (1) {
// Think
printf("Philosopher %d is thinking\n", id);
sleep(1);

// Pick up forks
pthread_mutex_lock(&forks[right]);
printf("Philosopher %d picks up fork %d (right)\n", id, right);
pthread_mutex_lock(&forks[left]);
printf("Philosopher %d picks up fork %d (left)\n", id, left);

// Eat
printf("Philosopher %d is eating\n", id);
sleep(2);

// Put down forks


pthread_mutex_unlock(&forks[left]);
printf("Philosopher %d puts down fork %d (left)\n", id, left);
pthread_mutex_unlock(&forks[right]);
printf("Philosopher %d puts down fork %d (right)\n", id, right);
}
}

int main() {
int i;
int ids[NUM_PHILOSOPHERS];

// Initialize mutexes for each fork


for (i = 0; i < NUM_PHILOSOPHERS; i++) {
pthread_mutex_init(&forks[i], NULL);
}

// Create threads for each philosopher


for (i = 0; i < NUM_PHILOSOPHERS; i++) {
ids[i] = i;
pthread_create(&philosophers[i], NULL, philosopher, &ids[i]);
}

// Join threads
for (i = 0; i < NUM_PHILOSOPHERS; i++) {
pthread_join(philosophers[i], NULL);
}

// Destroy mutexes
for (i = 0; i < NUM_PHILOSOPHERS; i++) {
pthread_mutex_destroy(&forks[i]);
}

return 0;
}

Output:
Inference
The Banker's Algorithm is a deadlock avoidance technique in operating systems,
ensuring safe resource allocation by simulating resource requests to maintain system
integrity. In contrast, the Dining Philosophers Problem is a classic synchronization
issue representing the challenges of resource allocation in concurrent systems. It
involves philosophers seated around a table, alternating between eating and thinking
but facing potential deadlock if each philosopher attempts to acquire both forks
simultaneously. Solutions to the Dining Philosophers Problem include strategies like
resource ordering, centralized resource management, or implementing timeouts to
prevent deadlock. Both concepts highlight critical aspects of managing finite
resources in complex computing environments, emphasizing the importance of
efficient resource allocation and deadlock avoidance strategies.
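
One concrete way to apply the resource-ordering strategy mentioned above is sketched
below (an assumed variation, not the code used in this experiment; it reuses the forks
array and NUM_PHILOSOPHERS constant declared earlier). Every philosopher always locks
the lower-numbered fork first, so a circular wait cannot form:

void *philosopher_ordered(void *arg) {
    int id = *(int *)arg;
    int first = id;                            // candidate forks for this philosopher
    int second = (id + 1) % NUM_PHILOSOPHERS;
    if (first > second) {                      // last philosopher: swap so the
        int tmp = first;                       // lower-numbered fork is taken first
        first = second;
        second = tmp;
    }
    while (1) {
        printf("Philosopher %d is thinking\n", id);
        sleep(1);
        pthread_mutex_lock(&forks[first]);     // always acquire the lower index first
        pthread_mutex_lock(&forks[second]);
        printf("Philosopher %d is eating\n", id);
        sleep(2);
        pthread_mutex_unlock(&forks[second]);
        pthread_mutex_unlock(&forks[first]);
    }
}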

Submitted by : Dhruv Rajput (2K21/EP/34)


Experiment – 10
To implement First Fit, Best Fit and
Worst Fit Memory Allocation Algorithms.
Theory

Memory Allocation Algorithms:

1. First Fit
This algorithm assigns memory to processes by scanning from the beginning of
the available memory space and allocating the first block that is large enough to
accommodate the process. While it's straightforward and fast in terms of
implementation, it may lead to increased fragmentation over time as smaller
memory blocks get scattered throughout the memory space, making it
challenging to allocate contiguous blocks for larger processes.

2. Best Fit
Best Fit algorithm meticulously searches the entire memory space to find the
smallest block that can satisfy the process's memory requirements. By selecting
the most suitable block for each process, it aims to minimize wasted memory and
reduce fragmentation. However, this exhaustive search can be time-consuming
and resource-intensive, especially in systems with large memory sizes.

3. Worst Fit
In contrast to Best Fit, Worst Fit allocates the largest available memory block to
the requesting process. This approach often results in more fragmentation as it
leaves behind smaller holes in memory. While it may seem counterintuitive, it
can be beneficial in scenarios where processes frequently request large memory
blocks, as it reduces the likelihood of rejecting requests due to insufficient
available memory.

Code:
1. First Fit
#include <iostream>
#include <vector>

using namespace std;

void implementFirstFit(vector<int>& blockSize, int blocks, vector<int>&


processSize, int processes) {
vector<int> allocate(processes, -1);
vector<int> occupied(blocks, 0);

for (int i = 0; i < processes; i++) {


for (int j = 0; j < blocks; j++) {
if (!occupied[j] && blockSize[j] >= processSize[i]) {
allocate[i] = j;
occupied[j] = 1;
break;
}
}
}

cout << "\nProcess No.\tProcess Size\tBlock no.\n";


for (int i = 0; i < processes; i++) {
cout << i + 1 << "\t\t\t" << processSize[i] << "\t\t\t";
if (allocate[i] != -1)
cout << allocate[i] + 1 << endl;
else
cout << "Not Allocated" << endl;
}
}

int main() {
vector<int> blockSize = {30, 5, 10};
vector<int> processSize = {10, 6, 9};
int m = blockSize.size();
int n = processSize.size();

implementFirstFit(blockSize, m, processSize, n);

return 0;
}

Output:
2. Best Fit

#include <iostream>
#include <vector>

using namespace std;

void implementBestFit(vector<int>& blockSize, int blocks, vector<int>&


processSize, int processes) {
vector<int> allocation(processes, -1);

for (int i = 0; i < processes; i++) {


int indexPlaced = -1;
for (int j = 0; j < blocks; j++) {
if (blockSize[j] >= processSize[i]) {
if (indexPlaced == -1 || blockSize[j] <
blockSize[indexPlaced])
indexPlaced = j;
}
}
if (indexPlaced != -1) {
allocation[i] = indexPlaced;
blockSize[indexPlaced] -= processSize[i];
}
}

cout << "\nProcess No.\tProcess Size\tBlock no.\n";


for (int i = 0; i < processes; i++) {
cout << i + 1 << "\t\t\t" << processSize[i] << "\t\t\t";
if (allocation[i] != -1)
cout << allocation[i] + 1 << endl;
else
cout << "Not Allocated" << endl;
}
}

int main() {
vector<int> blockSize = {50, 20, 100, 90};
vector<int> processSize = {10, 30, 60, 30};
int blocks = blockSize.size();
int processes = processSize.size();

implementBestFit(blockSize, blocks, processSize, processes);

return 0;
}

Output:

3. Worst Fit

#include <iostream>
#include <vector>

using namespace std;

void implementWorstFit(vector<int>& blockSize, int blocks, vector<int>&


processSize, int processes) {
vector<int> allocation(processes, -1);
vector<int> occupied(blocks, 0);

for (int i = 0; i < processes; i++) {


int indexPlaced = -1;
for (int j = 0; j < blocks; j++) {
if (blockSize[j] >= processSize[i] && !occupied[j]) {
if (indexPlaced == -1 || blockSize[indexPlaced] <
blockSize[j])
indexPlaced = j;
}
}

if (indexPlaced != -1) {
allocation[i] = indexPlaced;
occupied[indexPlaced] = 1;
blockSize[indexPlaced] -= processSize[i];
}
}

cout << "\nProcess No.\tProcess Size\tBlock no.\n";


for (int i = 0; i < processes; i++) {
cout << i + 1 << "\t\t\t" << processSize[i] << "\t\t\t";
if (allocation[i] != -1)
cout << allocation[i] + 1 << endl;
else
cout << "Not Allocated" << endl;
}
}

int main() {
vector<int> blockSize = {100, 50, 30, 120, 35};
vector<int> processSize = {40, 10, 30, 60};
int blocks = blockSize.size();
int processes = processSize.size();

implementWorstFit(blockSize, blocks, processSize, processes);

return 0;
}

Output:

Inference
First Fit:

The First Fit algorithm allocates memory to a process by scanning memory from the
beginning and assigning the first available block that is large enough to
accommodate the process.
It is simple to implement and efficient in terms of time complexity, but it may lead to
increased fragmentation over time as smaller blocks are allocated first, leaving
scattered gaps in memory.

Best Fit:

Best Fit meticulously searches the entire memory space to find the smallest block
that can accommodate the process. It aims to minimize memory wastage by
selecting the most fitting block for each process.
While it reduces fragmentation by allocating the most suitable block, the exhaustive
search required can be time-consuming and resource-intensive, particularly in
systems with large memory sizes.

Worst Fit:

The Worst Fit algorithm allocates memory by selecting the largest available block in
the memory space that can accommodate the process.
It may result in increased fragmentation as larger blocks are allocated, leaving behind
smaller, unusable gaps in memory. Despite this, it can be beneficial in scenarios
where processes frequently request large memory blocks, as it minimizes the need
for frequent memory allocations.

Submitted by : Dhruv Rajput (2K21/EP/34)
