OSD Lab File
3. File System Management: The OS provides mechanisms for storing, organizing, and
accessing files on storage devices such as hard drives and SSDs.
5. User Interface: Operating systems provide a user interface (UI) through which users
can interact with the computer. This can range from a command-line interface (CLI)
to graphical user interfaces (GUIs) with windows, icons, buttons, and menus.
6. Security: Operating systems implement security measures to protect the system and
its resources from unauthorized access, viruses, malware, and other threats.
A thread is the smallest unit of execution within a process. Multiple threads can exist
within a single process and share the same memory space.
Threads within the same process share the process's resources, including memory
and open files, making communication and data sharing between threads easier and
more efficient compared to processes.
Threads allow for concurrent execution within a process, enabling tasks to be
executed simultaneously or concurrently. This concurrency can lead to improved
performance and responsiveness in multi-threaded applications.
Threads are scheduled for execution by the operating system's thread scheduler,
which allocates CPU time to individual threads.
Threads typically have their own stack space for storing local variables and function
call information, but they share the process's global variables and heap memory.
- Burst Time: Burst time is the time taken by a process to complete its execution. It is
a crucial parameter for CPU scheduling algorithms, influencing decisions on process
prioritization.
- Arrival Time: Arrival time is the instance when a process enters the ready queue for
execution. It plays a significant role in determining the order in which processes are
scheduled.
- Waiting Time: Waiting time is the cumulative time a process spends in the ready
queue before getting CPU time; it equals turnaround time minus burst time.
Effective scheduling algorithms aim to minimize waiting times, maximize CPU utilization, and
optimize system performance, considering factors like burst time, arrival time, and priority.
First Come First Served (FCFS) Scheduling:
FCFS prioritizes tasks solely based on their arrival time, without considering their
priority levels or execution times. This simplicity makes FCFS easy to implement and
understand, which is advantageous for systems where complexity needs to be
minimized.
However, FCFS may not always lead to optimal performance. It can suffer from the
"convoy effect," where short tasks get delayed behind long-running tasks, leading to
inefficient CPU utilization and longer response times. Despite this drawback, FCFS
ensures fairness, as all tasks eventually get CPU time, and it may be suitable for
systems where predictability and simplicity are prioritized over efficiency.
Code:
#include <bits/stdc++.h>
using namespace std;
class Process {
public:
    int pid;
    int at;
    int bt;
};
void fcfsScheduler(vector<Process>& processes) {
    int n = processes.size();
    // FCFS: run processes strictly in arrival order
    sort(processes.begin(), processes.end(),
         [](const Process& a, const Process& b) { return a.at < b.at; });
    vector<int> CT(n + 1);
    float avgtat = 0;
    float avgwt = 0;
    int current_time = 0;
    cout << "PID\tAT\tBT\tCT\tTAT\tWT\n";
    for (int i = 0; i < n; i++) {
        // CPU may sit idle until the next process arrives
        current_time = max(current_time, processes[i].at) + processes[i].bt;
        CT[i] = current_time;
        int tat = CT[i] - processes[i].at; // turnaround = completion - arrival
        int wt = tat - processes[i].bt;    // waiting = turnaround - burst
        avgtat += tat;
        avgwt += wt;
        cout << processes[i].pid << "\t" << processes[i].at << "\t" << processes[i].bt
             << "\t" << CT[i] << "\t" << tat << "\t" << wt << "\n";
    }
    cout << "Average TAT = " << avgtat / n << "\nAverage WT = " << avgwt / n << endl;
}
int main() {
    int numOfProcesses;
    cin >> numOfProcesses;
    vector<Process> processes;
    for (int i = 0; i < numOfProcesses; i++) {
        Process p;
        p.pid = i + 1;
        cin >> p.at >> p.bt;
        processes.push_back(p);
    }
    fcfsScheduler(processes);
    return 0;
}
Output:
Shortest Job First Scheduling:
Code:
#include <bits/stdc++.h>
using namespace std;
struct Process {
    int at, bt, ct, tat, wt, rt, start_time;
} ps[100];
int main() {
    int n;
    bool is_completed[100] = {false}; // Array to track completion status of each process
    int current_time = 0;
    int completed = 0;
    cin >> n;
    for (int i = 0; i < n; i++)
        cin >> ps[i].at >> ps[i].bt;
    int sum_tat = 0, sum_wt = 0, sum_rt = 0, total_idle_time = 0, prev = 0;
    while (completed != n) {
        // Among arrived, unfinished processes, pick the one with the shortest burst
        int min_index = -1, min_bt = INT_MAX;
        for (int i = 0; i < n; i++) {
            if (ps[i].at <= current_time && !is_completed[i] && ps[i].bt < min_bt) {
                min_bt = ps[i].bt;
                min_index = i;
            }
        }
        if (min_index == -1) {
            current_time++; // no process has arrived yet: CPU idles one unit
        } else {
            // Update process details for completion
            ps[min_index].start_time = current_time;
            ps[min_index].ct = ps[min_index].start_time + ps[min_index].bt;
            ps[min_index].tat = ps[min_index].ct - ps[min_index].at;
            ps[min_index].wt = ps[min_index].tat - ps[min_index].bt;
            ps[min_index].rt = ps[min_index].wt;
            // Update statistics
            sum_tat += ps[min_index].tat;
            sum_wt += ps[min_index].wt;
            sum_rt += ps[min_index].rt;
            total_idle_time += (prev == 0) ? 0 : (ps[min_index].start_time - prev);
            completed++;
            is_completed[min_index] = true;
            current_time = ps[min_index].ct;
            prev = current_time;
        }
    }
    // Output
    cout << "\nProcess No.\tAT\tBurst Time\tCT\tTAT\tWT\tRT\n";
    for (int i = 0; i < n; i++)
        cout << i + 1 << "\t\t" << ps[i].at << "\t\t" << ps[i].bt << "\t\t" << ps[i].ct
             << "\t\t" << ps[i].tat << "\t\t" << ps[i].wt << "\t\t" << ps[i].rt << endl;
    cout << endl;
    return 0;
}
Output:
Inference
In this experiment, C++ codes were developed to implement the First-Come, First-
Served (FCFS) and Shortest Job First (SJF) scheduling algorithms for operating
systems. The FCFS algorithm prioritizes tasks based on their arrival order, while the
SJF algorithm selects tasks with the shortest execution time. Through this
implementation, insights into process scheduling efficiency and algorithm
performance were gained, aiding in understanding the practical implications of
different scheduling strategies in operating systems.
Longest Job First (LJF) Scheduling:
Task Arrival: As tasks arrive, they are added to the ready queue.
Selection of Task: When the CPU becomes available (either because it's idle or the
current task finishes execution), the scheduler selects the task with the longest
estimated run time from the ready queue. This is the distinguishing feature of the LJF
algorithm.
Task Execution: The selected task is then executed by the CPU. Since LJF is non-
preemptive, the task runs until it completes its execution or voluntarily relinquishes
the CPU.
Completion of Task: Once the task finishes execution, it is removed from the system.
Repeat: Steps 2-4 are repeated until there are no more tasks remaining in the
system.
Code:
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;
struct Process {
int id;
int arrivalTime;
int burstTime;
int completionTime;
int waitingTime;
int turnaroundTime;
};
// Comparator function to sort processes by arrival time, then by burst time in descending order
bool compare(Process p1, Process p2) {
if (p1.arrivalTime != p2.arrivalTime)
return p1.arrivalTime < p2.arrivalTime;
return p1.burstTime > p2.burstTime;
}
void calculateCompletionTimes(vector<Process>& processes) {
int currentTime = 0;
for (int i = 0; i < processes.size(); ++i) {
currentTime = max(currentTime, processes[i].arrivalTime);
processes[i].completionTime = currentTime + processes[i].burstTime;
processes[i].turnaroundTime = processes[i].completionTime -
processes[i].arrivalTime;
processes[i].waitingTime = processes[i].turnaroundTime -
processes[i].burstTime;
currentTime = processes[i].completionTime;
}
currentTime=0;
cout << "Order of execution:\n";
for (int i = 0; i < processes.size(); ++i) {
cout << "Process " << processes[i].id << " executes from time " <<
max(currentTime, processes[i].arrivalTime) << " to " << max(currentTime,
processes[i].arrivalTime) + processes[i].burstTime << endl;
currentTime = max(currentTime, processes[i].arrivalTime) +
processes[i].burstTime;
}
}
int main() {
int n;
cout << "Enter the number of processes: ";
cin >> n;
vector<Process> processes(n);
cout << "Enter arrival time and burst time for each process:\n";
for (int i = 0; i < n; ++i) {
processes[i].id = i + 1;
cout << "Process " << processes[i].id << ":\n";
cout << "Arrival Time: ";
cin >> processes[i].arrivalTime;
cout << "Burst Time: ";
cin >> processes[i].burstTime;
}
// Run the scheduler on the entered processes
sort(processes.begin(), processes.end(), compare);
calculateCompletionTimes(processes);
return 0;
}
Output:
Inference
The code calculates the average turnaround time (TAT) and average waiting time
(WT) for the Longest Job First (LJF) scheduling algorithm. It iterates through a list of
processes, prioritizing tasks based on their estimated run times. It then calculates the
completion time, turnaround time, and waiting time for each process. Finally, it
computes the averages of these metrics and prints them.
1. Round Robin Scheduling
Round robin scheduling gives every process in the ready queue a fixed time slice
(quantum) of CPU time in cyclic order. One of the key advantages of round robin
scheduling lies in its simplicity and ease of
implementation. It requires maintaining a ready queue of processes and a timer to
enforce time quanta. Additionally, it offers relatively low response times, particularly
for processes with short CPU bursts, as they can quickly execute within their allotted
time slices.
2. Priority Scheduling
Priority scheduling is a CPU scheduling algorithm used in operating systems to
determine which processes should be executed next based on their priority levels. In
this algorithm, each process is assigned a priority, and the scheduler selects the
process with the highest priority for execution. Processes with higher priorities are
given precedence over those with lower priorities, ensuring that critical tasks are
handled promptly.
One of the main advantages of priority scheduling is its ability to prioritize important
or time-sensitive tasks, such as real-time processes or system-critical operations. By
assigning appropriate priorities to processes, the system can ensure that vital tasks
are completed efficiently, thus improving overall system performance and
responsiveness.
1. Round Robin Code
Code:
#include <iostream>
#include <algorithm>
#include <queue>
#include <iomanip>
#include <climits>
using namespace std;
struct process_struct
{
int pid;
int at;
int bt;
int ct, wt, tat, rt, start_time;
int bt_remaining;
} ps[100];
bool comparatorAT(struct process_struct a, struct process_struct b)
{
return a.at < b.at; // sort by arrival time in increasing order
}
bool comparatorPID(struct process_struct a, struct process_struct b)
{
int x = a.pid;
int y = b.pid;
return x < y;
}
int main()
{
int n, index;
float cpu_utilization; // float: an int would truncate the utilization ratio to 0 or 1
queue<int> q;
bool visited[100] = {false}, is_first_process = true;
int current_time = 0, max_completion_time;
int completed = 0, tq, total_idle_time = 0, length_cycle;
cout << "Enter total number of processes: ";
cin >> n;
float sum_tat = 0, sum_wt = 0, sum_rt = 0;
cout << fixed << setprecision(2);
for (int i = 0; i < n; i++)
{
cout << "\nEnter Process " << i + 1 << " Arrival Time: ";
cin >> ps[i].at;
ps[i].pid = i;
}
for (int i = 0; i < n; i++)
{
cout << "\nEnter Process " << i + 1 << " Burst Time: ";
cin >> ps[i].bt;
ps[i].bt_remaining = ps[i].bt;
}
cout << "\nEnter time quanta: ";
cin >> tq;
// sort structure on the basis of Arrival time in increasing order
sort(ps, ps + n, comparatorAT);
q.push(0);
visited[0] = true;
while (completed != n)
{
index = q.front();
q.pop();
if (ps[index].bt_remaining == ps[index].bt)
{
ps[index].start_time =
max(current_time, ps[index].at);
total_idle_time += (is_first_process == true) ? 0 :
ps[index].start_time - current_time;
current_time = ps[index].start_time;
is_first_process = false;
}
if (ps[index].bt_remaining - tq > 0)
{
ps[index].bt_remaining -= tq;
current_time += tq;
}
else
{
current_time += ps[index].bt_remaining;
ps[index].bt_remaining = 0;
completed++;
ps[index].ct = current_time;
ps[index].tat = ps[index].ct - ps[index].at;
ps[index].wt = ps[index].tat - ps[index].bt;
ps[index].rt = ps[index].start_time -
ps[index].at;
sum_tat += ps[index].tat;
sum_wt += ps[index].wt;
sum_rt += ps[index].rt;
}
// check which new Processes need to be pushed to Ready Queue from Input list
for (int i = 1; i < n; i++)
{
if (ps[i].bt_remaining > 0 && ps[i].at <= current_time &&
visited[i] == false)
{
q.push(i);
visited[i] = true;
}
}
// check if Process on CPU needs to be pushed to Ready Queue
if (ps[index].bt_remaining > 0)
q.push(index);
// if queue is empty, just add one process from list, whose remaining burst time > 0
if (q.empty())
{
for (int i = 1; i < n; i++)
{
if (ps[i].bt_remaining > 0)
{
q.push(i);
visited[i] = true;
break;
}
}
}
} // end of while
// Calculate Length of Process completion cycle
max_completion_time = INT_MIN;
for (int i = 0; i < n; i++)
max_completion_time =
max(max_completion_time, ps[i].ct);
length_cycle = max_completion_time - ps[0].at;
cpu_utilization = (float)(length_cycle - total_idle_time) /
length_cycle;
// sort so that process ID in output comes in original order (just for readability - not needed otherwise)
sort(ps, ps + n, comparatorPID);
// Output
cout << "\nProcess No.\tAT\tCPU Burst Time\tStart Time\tCT\tTAT\tWT\tRT\n";
for (int i = 0; i < n; i++)
cout
<< i << "\t\t" << ps[i].at << "\t" << ps[i].bt << "\t\t" <<
ps[i].start_time << "\t\t" << ps[i].ct << "\t" << ps[i].tat << "\t" <<
ps[i].wt << "\t"
<< ps[i].rt << endl;
cout << endl;
cout << "\nAverage Turn Around time= " << (float)sum_tat / n;
cout << "\nAverage Waiting Time= " << (float)sum_wt / n;
cout << "\nAverage Response Time= " << (float)sum_rt / n << endl;
return 0;
}
Output:
2. Priority Scheduling Code
Code :
#include<bits/stdc++.h>
using namespace std;
struct process
{
int pid, arrival_time, burst_time, priority, start_time;
int completion_time, turnaround_time, waiting_time, response_time;
};
int main()
{
int n, total_turnaround_time = 0, total_waiting_time = 0,
total_response_time = 0, total_idle_time = 0;
struct process p[100];
float avg_turnaround_time, avg_waiting_time, avg_response_time, cpu_utilisation, throughput;
int burst_remaining[100];
int is_completed[100];
memset(is_completed, 0, sizeof(is_completed));
cout << setprecision(2) << fixed;
cout << "Enter the number of processes: ";
cin >> n;
for (int i = 0; i < n; i++) {
cout << "Enter arrival time of process " << i + 1 << ": ";
cin >> p[i].arrival_time;
cout << "Enter burst time of process " << i + 1 << ": ";
cin >> p[i].burst_time;
cout << "Enter priority of the process " << i + 1 << ": ";
cin >> p[i].priority;
p[i].pid = i + 1;
burst_remaining[i] = p[i].burst_time;
cout << endl;
}
int current_time = 0;
int completed = 0;
int prev = 0;
while (completed != n)
{
int idx = -1;
int min_priority = INT_MAX; // Initialize min_priority with the maximum possible value
for (int i = 0; i < n; i++)
{
if (p[i].arrival_time <= current_time && is_completed[i] == 0 &&
p[i].priority < min_priority)
{
min_priority = p[i].priority;
idx = i;
}
}
if (idx != -1)
{
if (burst_remaining[idx] == p[idx].burst_time)
{
p[idx].start_time = current_time;
total_idle_time += p[idx].start_time - prev;
}
burst_remaining[idx] -= 1;
current_time++;
prev = current_time;
if (burst_remaining[idx] == 0)
{
p[idx].completion_time = current_time;
p[idx].turnaround_time = p[idx].completion_time -
p[idx].arrival_time;
p[idx].waiting_time = p[idx].turnaround_time -
p[idx].burst_time;
p[idx].response_time = p[idx].start_time -
p[idx].arrival_time;
total_turnaround_time += p[idx].turnaround_time;
total_waiting_time += p[idx].waiting_time;
total_response_time += p[idx].response_time;
is_completed[idx] = 1;
completed++;
}
}
else
{
current_time++;
}
}
int min_arrival_time = 10000000;
int max_completion_time = -1;
for (int i = 0; i < n; i++)
{
min_arrival_time = min(min_arrival_time, p[i].arrival_time);
max_completion_time =
max(max_completion_time, p[i].completion_time);
}
avg_turnaround_time = (float)total_turnaround_time / n;
avg_waiting_time = (float)total_waiting_time / n;
avg_response_time = (float)total_response_time / n;
cpu_utilisation = ((max_completion_time - min_arrival_time) - total_idle_time) /
(float)(max_completion_time - min_arrival_time);
throughput = (float)n / (max_completion_time - min_arrival_time);
cout << "\nAverage Turnaround Time = " << avg_turnaround_time << endl;
cout << "Average Waiting Time = " << avg_waiting_time << endl;
cout << "Average Response Time = " << avg_response_time << endl;
cout << "CPU Utilisation = " << cpu_utilisation * 100 << "%" << endl;
cout << "Throughput = " << throughput << " process/unit time" << endl;
return 0;
}
Output:
Inference
The code calculates the average turnaround time (TAT) and average waiting time
(WT) for the Round Robin and Priority scheduling algorithms. Round Robin cycles
through the ready queue in fixed time quanta, while Priority scheduling always selects
the ready process with the highest priority. Each program then calculates the
completion time, turnaround time, and waiting time for every process and prints the
averages of these metrics.
In the HRRN algorithm, each process is assigned a response ratio, calculated as the
ratio of the sum of its waiting time and burst time to its burst time. Higher response
ratios indicate a higher urgency for CPU time. The algorithm selects the process with
the highest response ratio for execution, allowing for optimal utilization of CPU
resources.
When a fork() call is made, the operating system creates a new process by
duplicating the existing process. This duplication includes copying the entire address
space of the parent process, including its code, data, stack, and heap. Essentially, the
child process starts as an exact copy of the parent process.
After the fork() call, both the parent and child processes continue execution from the
point of the fork() call. However, they each receive a different return value from the
fork() call to distinguish between them. In the parent process, the return value is the
process ID (PID) of the newly created child process, while in the child process, the
return value is 0. This allows the processes to differentiate between themselves and
execute different code paths if needed.
1. Highest Response Ratio Next
Code:
#include <iostream>
#include <iomanip>
#include <algorithm>
using namespace std;
struct Process {
char name;
int arrival_time, burst_time, completion_time, waiting_time, turnaround_time;
int completed;
float normalized_turnaround_time;
} processes[10];
int num_processes;
void sortByArrival() {
sort(processes, processes + num_processes,
[](const Process& a, const Process& b) { return a.arrival_time < b.arrival_time; });
}
int main() {
int total_burst_time = 0;
float current_time = 0, average_waiting_time = 0, average_turnaround_time = 0;
num_processes = 5;
int arrival_times[] = { 0, 2, 4, 5, 7 };
int burst_times[] = { 2, 6, 7, 3, 5 };
for (int i = 0; i < num_processes; i++) {
processes[i] = { (char)('A' + i), arrival_times[i], burst_times[i], 0, 0, 0, 0, 0.0f };
total_burst_time += burst_times[i];
}
sortByArrival();
cout << "P_No.\tAT\tBT\tWT\tTAT\tNTT\n";
while (current_time < total_burst_time) {
// Pick the ready process with the highest response ratio (W + B) / B
float max_ratio = -1; int idx = -1;
for (int i = 0; i < num_processes; i++) {
if (processes[i].completed || processes[i].arrival_time > current_time) continue;
float ratio = (current_time - processes[i].arrival_time + processes[i].burst_time) / processes[i].burst_time;
if (ratio > max_ratio) { max_ratio = ratio; idx = i; }
}
if (idx == -1) { current_time++; continue; } // CPU idles until the next arrival
Process& p = processes[idx];
current_time += p.burst_time; // non-preemptive: run to completion
p.completion_time = current_time;
p.turnaround_time = p.completion_time - p.arrival_time;
p.waiting_time = p.turnaround_time - p.burst_time;
p.normalized_turnaround_time = (float)p.turnaround_time / p.burst_time;
p.completed = 1;
average_waiting_time += p.waiting_time;
average_turnaround_time += p.turnaround_time;
cout << p.name << "\t" << p.arrival_time << "\t" << p.burst_time << "\t"
<< p.waiting_time << "\t" << p.turnaround_time << "\t" << p.normalized_turnaround_time << "\n";
}
cout << "Average waiting time: " << average_waiting_time / num_processes << "\n";
cout << "Average turnaround time: " << average_turnaround_time / num_processes << "\n";
return 0;
}
Output:
Code :
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
int main() {
pid_t pid;
pid = fork(); // create a child process that duplicates this one
if (pid < 0) {
perror("fork failed");
return 1;
} else if (pid == 0) {
printf("Child process: PID = %d, parent PID = %d\n", getpid(), getppid());
} else {
printf("Parent process: PID = %d, child PID = %d\n", getpid(), pid);
}
return 0;
}
Output:
Inference
The HRRN algorithm prioritizes processes based on their response ratios, optimizing
average response time and preventing starvation.
It implements a non-preemptive scheduling approach, selecting processes with the
highest response ratios for execution.
The fork() system call in C++ creates a child process identical to the parent process.
Upon successful execution, fork() returns the process ID (PID) of the child to the
parent and returns 0 to the child process.
Parent and child processes continue execution independently from the point of the
fork() call, allowing for concurrent execution of code paths.
1. Banker’s Algorithm:
The Banker's Algorithm is a pivotal method for resource allocation and deadlock
avoidance in operating systems. It ensures that processes can securely request and
release resources without causing deadlock.
To use the Banker's Algorithm, processes must declare their maximum resource
requirements in advance. The system maintains key data structures such as the
allocation matrix, maximum matrix, and available vector representing the current
availability of resources.
1. Banker’s Algorithm
Code:
#include <iostream>
using namespace std;
int main() {
// P0, P1, P2, P3, P4 are the Process names here
int num_processes = 5; // Number of processes
int num_resources = 3; // Number of resources
int allocation[5][3] = { { 0, 1, 0 }, // P0 // Allocation Matrix
{ 2, 0, 0 }, // P1
{ 3, 0, 2 }, // P2
{ 2, 1, 1 }, // P3
{ 0, 0, 2 } }; // P4
// Maximum demand matrix and Available vector (assumed textbook values;
// the original fragment omitted them)
int maximum[5][3] = { { 7, 5, 3 },
{ 3, 2, 2 },
{ 9, 0, 2 },
{ 2, 2, 2 },
{ 4, 3, 3 } };
int available_resources[3] = { 3, 3, 2 };
int need[5][3], sequence[5], finished[5] = { 0 };
// Need = Maximum - Allocation
for (int i = 0; i < num_processes; i++)
for (int j = 0; j < num_resources; j++)
need[i][j] = maximum[i][j] - allocation[i][j];
int ind = 0;
// Safety algorithm: make up to n passes over the process list
for (int k = 0; k < num_processes; k++) {
for (int i = 0; i < num_processes; i++) {
if (finished[i]) continue;
int flag = 0;
for (int j = 0; j < num_resources; j++) {
if (need[i][j] > available_resources[j]) {
flag = 1;
break;
}
}
if (flag == 0) {
// Process i can finish: add it to the sequence and reclaim its resources
sequence[ind++] = i;
for (int y = 0; y < num_resources; y++)
available_resources[y] += allocation[i][y];
finished[i] = 1;
}
}
}
int safe = 1;
for (int i = 0; i < num_processes; i++)
if (!finished[i]) safe = 0;
if (safe == 1) {
cout << "Following is the SAFE Sequence" << endl;
for (int i = 0; i < num_processes - 1; i++)
cout << " P" << sequence[i] << " ->";
cout << " P" << sequence[num_processes - 1] << endl;
} else {
cout << "System is NOT in a safe state" << endl;
}
return 0;
}
Output:
Code :
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <unistd.h>
#define NUM_PHILOSOPHERS 5
pthread_mutex_t forks[NUM_PHILOSOPHERS];
pthread_t philosophers[NUM_PHILOSOPHERS];
void *philosopher(void *arg) {
int id = *(int *)arg;
int left = id;
int right = (id + 1) % NUM_PHILOSOPHERS;
// Pick up forks: always lock the lower-numbered fork first
// (resource ordering prevents the circular wait that causes deadlock)
int first = left < right ? left : right;
int second = left < right ? right : left;
pthread_mutex_lock(&forks[first]);
printf("Philosopher %d picks up fork %d\n", id, first);
pthread_mutex_lock(&forks[second]);
printf("Philosopher %d picks up fork %d\n", id, second);
// Eat
printf("Philosopher %d is eating\n", id);
sleep(2);
// Put down forks
pthread_mutex_unlock(&forks[second]);
pthread_mutex_unlock(&forks[first]);
printf("Philosopher %d puts down the forks\n", id);
return NULL;
}
int main() {
int i;
int ids[NUM_PHILOSOPHERS];
// Initialize mutexes
for (i = 0; i < NUM_PHILOSOPHERS; i++) {
pthread_mutex_init(&forks[i], NULL);
}
// Create threads
for (i = 0; i < NUM_PHILOSOPHERS; i++) {
ids[i] = i;
pthread_create(&philosophers[i], NULL, philosopher, &ids[i]);
}
// Join threads
for (i = 0; i < NUM_PHILOSOPHERS; i++) {
pthread_join(philosophers[i], NULL);
}
// Destroy mutexes
for (i = 0; i < NUM_PHILOSOPHERS; i++) {
pthread_mutex_destroy(&forks[i]);
}
return 0;
}
Output:
Inference
The Banker's Algorithm is a deadlock avoidance technique in operating systems,
ensuring safe resource allocation by simulating resource requests to maintain system
integrity. In contrast, the Dining Philosophers Problem is a classic synchronization
issue representing the challenges of resource allocation in concurrent systems. It
involves philosophers seated around a table, alternating between eating and thinking
but facing potential deadlock if each philosopher attempts to acquire both forks
simultaneously. Solutions to the Dining Philosophers Problem include strategies like
resource ordering, centralized resource management, or implementing timeouts to
prevent deadlock. Both concepts highlight critical aspects of managing finite
resources in complex computing environments, emphasizing the importance of
efficient resource allocation and deadlock avoidance strategies.
1. First Fit
This algorithm assigns memory to processes by scanning from the beginning of
the available memory space and allocating the first block that is large enough to
accommodate the process. While it's straightforward and fast in terms of
implementation, it may lead to increased fragmentation over time as smaller
memory blocks get scattered throughout the memory space, making it
challenging to allocate contiguous blocks for larger processes.
2. Best Fit
Best Fit algorithm meticulously searches the entire memory space to find the
smallest block that can satisfy the process's memory requirements. By selecting
the most suitable block for each process, it aims to minimize wasted memory and
reduce fragmentation. However, this exhaustive search can be time-consuming
and resource-intensive, especially in systems with large memory sizes.
3. Worst Fit
In contrast to Best Fit, Worst Fit allocates the largest available memory block to
the requesting process. This approach often results in more fragmentation as it
leaves behind smaller holes in memory. While it may seem counterintuitive, it
can be beneficial in scenarios where processes frequently request large memory
blocks, as it reduces the likelihood of rejecting requests due to insufficient
available memory.
Code:
1. First Fit
#include <iostream>
#include <vector>
using namespace std;
int main() {
vector<int> blockSize = {30, 5, 10};
vector<int> processSize = {10, 6, 9};
int m = blockSize.size();
int n = processSize.size();
// For each process, take the first block large enough to hold it
for (int i = 0; i < n; i++) {
int allocated = -1;
for (int j = 0; j < m && allocated == -1; j++)
if (blockSize[j] >= processSize[i]) { allocated = j; blockSize[j] -= processSize[i]; }
if (allocated == -1) cout << "Process " << i + 1 << " -> Not Allocated\n";
else cout << "Process " << i + 1 << " -> Block " << allocated + 1 << "\n";
}
return 0;
}
Output:
2. Best Fit
#include <iostream>
#include <vector>
using namespace std;
int main() {
vector<int> blockSize = {50, 20, 100, 90};
vector<int> processSize = {10, 30, 60, 30};
int blocks = blockSize.size();
int processes = processSize.size();
// For each process, pick the smallest block that still fits it
for (int i = 0; i < processes; i++) {
int best = -1;
for (int j = 0; j < blocks; j++)
if (blockSize[j] >= processSize[i] && (best == -1 || blockSize[j] < blockSize[best]))
best = j;
if (best == -1) { cout << "Process " << i + 1 << " -> Not Allocated\n"; continue; }
blockSize[best] -= processSize[i];
cout << "Process " << i + 1 << " -> Block " << best + 1 << "\n";
}
return 0;
}
Output:
3. Worst Fit
#include <iostream>
#include <vector>
using namespace std;
int main() {
vector<int> blockSize = {100, 50, 30, 120, 35};
vector<int> processSize = {40, 10, 30, 60};
int blocks = blockSize.size();
int processes = processSize.size();
vector<int> allocation(processes, -1), occupied(blocks, 0);
// For each process, choose the largest free block that can hold it
for (int i = 0; i < processes; i++) {
int indexPlaced = -1;
for (int j = 0; j < blocks; j++)
if (!occupied[j] && blockSize[j] >= processSize[i] &&
(indexPlaced == -1 || blockSize[j] > blockSize[indexPlaced]))
indexPlaced = j;
if (indexPlaced != -1) {
allocation[i] = indexPlaced;
occupied[indexPlaced] = 1;
blockSize[indexPlaced] -= processSize[i];
}
}
for (int i = 0; i < processes; i++) {
if (allocation[i] == -1) cout << "Process " << i + 1 << " -> Not Allocated\n";
else cout << "Process " << i + 1 << " -> Block " << allocation[i] + 1 << "\n";
}
return 0;
}
Output:
Inference
First Fit:
The First Fit algorithm allocates memory to a process by scanning memory from the
beginning and assigning the first available block that is large enough to
accommodate the process.
It is simple to implement and efficient in terms of time complexity, but it may lead to
increased fragmentation over time, as the blocks near the start of memory are
repeatedly split and small leftover gaps accumulate there.
Best Fit:
Best Fit meticulously searches the entire memory space to find the smallest block
that can accommodate the process. It aims to minimize memory wastage by
selecting the most fitting block for each process.
While it reduces fragmentation by allocating the most suitable block, the exhaustive
search required can be time-consuming and resource-intensive, particularly in
systems with large memory sizes.
Worst Fit:
The Worst Fit algorithm allocates memory by selecting the largest available block in
the memory space that can accommodate the process.
It may result in increased fragmentation over time, since carving processes out of the
largest blocks leaves behind progressively smaller holes. Despite this, it can be
beneficial in scenarios where processes frequently request large memory blocks, as
the leftover holes it creates are more likely to remain large enough to satisfy future
requests.