BSC Os Sem4
UNIT- I
What is Operating System? History and Evolution of OS, Basic OS functions,
Resource Abstraction, Types of Operating Systems– Multiprogramming Systems,
Batch Systems, Time Sharing Systems; Operating Systems for Personal Computers,
Workstations and Hand-held Devices, Process Control & Real time Systems.
UNIT- II
Processor and User Modes, Kernels, System Calls and System Programs, System
View of the Process and Resources, Process Abstraction, Process Hierarchy,
Threads, Threading Issues, Thread Libraries; Process Scheduling, Non-Preemptive
and Preemptive Scheduling Algorithms.
UNIT III
Process Management: Deadlock, Deadlock Characterization, Necessary and
Sufficient Conditions for Deadlock, Deadlock Handling Approaches: Deadlock
Prevention, Deadlock Avoidance and Deadlock Detection and Recovery.
Concurrent and Dependent Processes, Critical Section, Semaphores, Methods for
Inter-process Communication; Process Synchronization, Classical Process
Synchronization Problems: Producer-Consumer, Reader-Writer.
UNIT IV
Memory Management: Physical and Virtual Address Space; Memory Allocation
Strategies– Fixed and Variable Partitions, Paging, Segmentation, Virtual Memory.
UNIT V
File and I/O Management, OS Security: Directory Structure, File Operations, File
Allocation Methods, Device Management, Pipes, Buffer, Shared Memory, Security
Policy Mechanism, Protection, Authentication and Internal Access Authorization
REFERENCE BOOKS:
1. Operating System Principles by Abraham Silberschatz, Peter Baer Galvin and
Greg Gagne (7th Edition) Wiley India Edition.
2. Operating Systems: Internals and Design Principles by Stallings (Pearson)
3. Operating Systems by J. Archer Harris (Author), Jyoti Singh (Author) (TMH)
UNIT-1
Operating System Introduction
What is an Operating System:
Definition: An operating system is system software that acts as an
interface between the user and the computer. It controls resources
such as the CPU, memory, and input/output devices, and manages the
overall operation of the computer system.
An operating system provides an environment in which a
user can execute programs efficiently and conveniently. It is the
first program loaded during booting and remains in
memory the whole time.
4. Error Handling:
Various types of errors can occur while a computer system is running.
These include internal and external hardware errors, such as memory errors and
device failures.
In each case, the OS is responsible for handling the error without affecting
running applications.
5. Resource Manager:
A computer has a set of resources for storing and processing data, and for
controlling these functions.
The OS is responsible for managing these resources.
6. User Interface:
The OS provides an environment such as a CUI (Character User Interface) or a GUI
(Graphical User Interface) so that the user can operate the computer easily.
It also translates the various instructions given by the user.
Hence it acts as an interface between the user and the computer.
7. Multitasking:
The OS automatically switches from one task to another when multiple tasks are
executed.
For example, typing text, listening to music, printing information and so on.
The operating system is responsible for executing all these tasks seemingly at the
same time.
8. Security:
The OS provides security to protect various resources against unauthorized users.
It also uses a timer, which prevents unauthorized processes from monopolizing the CPU.
9. Networking:
Networking is used for exchanging information between different computers.
These computers are connected by various communication links, such as
telephone lines or buses.
Resource Abstraction & Types of Operating Systems (Evolution of OS):
Resource abstraction is the process of hiding the details of how the
hardware operates, thereby making computer hardware relatively easy for an
application programmer to use.
Operating systems are classified into different categories. The following are some of
the most widely used types of operating system.
1. Simple Batch System
2. Multiprogramming System
3. Distributed Systems
4. Real Time Systems
5. Time sharing Operating Systems
1. Simple Batch Systems:
In a Batch Processing System, computer programs are executed as 'batches'. In this
system, programs are collected, grouped, and executed together.
The user has to submit a job (written on cards or tape) to a computer operator.
The computer operator groups all the jobs together serially into batches and places
a batch of several jobs into the input device.
A special program called the Monitor then executes each program in the batch.
The batches are executed one after another at defined time intervals.
Finally, the operator receives the output of all the jobs and returns it to the
concerned users.
2. Multiprogramming (Multitasking) Systems:
In a multiprogramming system, two or more programs are loaded into main memory.
Only one program is executed at a time by the CPU; all the other programs are waiting
for execution.
[Figure: several processes resident in memory, sharing a single CPU]
The operating system picks one job from memory and begins to execute it.
When this job needs an I/O operation, the operating system switches to another
job, so the CPU and OS are always busy.
The jobs in memory are always fewer than the jobs on disk (the job pool).
If several jobs are ready to run at the same time, the OS chooses which one to run
based on CPU scheduling methods.
In a multiprogramming system, the CPU is never idle and keeps on processing.
3. Distributed Operating System:
In Distributed Operating System, the workload is shared between two or more
computers, linked together by a network.
A network is a communication path between two or more computer systems.
In a distributed operating system, the computers are called nodes.
It gives its users the illusion that they are using a single computer.
The different computers are linked together by a communication network, e.g., a LAN
(Local Area Network) or a WAN (Wide Area Network).
Distributed systems are commonly organized using one of two models:
1. Client-Server Model
2. Peer-to-Peer Model
1. Client-Server Model: In this model, the Client sends a resource request to the
Server, and the server provides the requested resource to the Client. The
following diagram shows the Client-Server Model.
[Figure: Client-Server Model, with several clients connected to a server through a network]
Operating Systems for Personal Computers
1. Microsoft Windows
Microsoft created the Windows operating system in the mid-1980s.
There have been many different versions of Windows, but the most recent ones are
Windows 10 (released in 2015), Windows 8 (2012), Windows 7 (2009), and Windows
Vista (2007).
Windows comes pre-loaded on most new PCs, which helps to make it the most
popular operating system in the world.
2. macOS
macOS (previously called OS X) is a line of operating systems created by Apple.
It comes preloaded on all Macintosh computers, or Macs.
Some of the specific versions include Mojave (released in 2018), High Sierra (2017),
and Sierra (2016).
3. Solaris
Best for large workload processing, managing multiple databases, etc.
Solaris is a UNIX based operating system which was originally developed by Sun
Microsystems in the mid-’90s.
In 2010 it was renamed Oracle Solaris after Oracle acquired Sun Microsystems. It is
known for its scalability and for several innovative features such as
DTrace, ZFS and Time Slider.
4. Linux
Linux was introduced by Linus Torvalds and the Free Software Foundation (FSF).
Linux (pronounced LINN-ux) is a family of open-source operating systems,
which means they can be modified and distributed by anyone around the world.
This is different from proprietary software like Windows, which can only be modified
by the company that owns it.
The advantages of Linux are that it is free, and there are many different distributions
—or versions—you can choose from.
5. Chrome OS
Best for web applications.
Chrome OS is another Linux-kernel-based operating system, designed by
Google. As it is derived from the free Chromium OS, it uses the Google Chrome web
browser as its principal user interface. This OS primarily supports web applications.
WORKSTATIONS
UNIT-2
Processor
A processor is a hardware component that controls all operations of the
computer system. It is commonly referred to as the Central Processing Unit (CPU).
A processor is an integrated electronic circuit that performs the calculations that run a
computer.
A processor performs arithmetical, logical, input/output (I/O) and other basic
instructions that are passed from an operating system (OS).
Most other processes are dependent on the operations of a processor.
The CPU is just one of the processors inside a personal computer (PC).
The Graphics Processing Unit (GPU) is another processor, and even some hard drives
are technically capable of performing some processing.
User Mode and Kernel Mode.
There are two modes of operation in the operating system to make sure it
works correctly. These are
1. User mode
2. Kernel mode.
1. User Mode
The system is in user mode when the operating system is running a user
application such as handling a text editor.
While in the User Mode, the CPU executes the processes that are given by
the user in the User Space.
The mode bit is set to 1 in the user mode. It is changed from 1 to 0 when
switching from user mode to kernel mode.
2. Kernel Mode
A Kernel is a computer program that is the heart of an Operating System.
The system starts in kernel mode when it boots and after the operating system is
loaded, it executes applications in user mode.
There are certain instructions that need to be executed by the kernel only, so the CPU
executes these instructions in kernel mode only.
Example: memory management should be done in kernel mode only.
The mode bit is set to 0 in the kernel mode. It is changed from 0 to 1 when switching
from kernel mode to user mode.
The kernel has control over everything in the system and remains
in memory until the operating system is shut down.
It provides an interface between the user and the hardware
components of the system. When a process makes a request to
the kernel, that request is called a system call.
Functions of a Kernel:
Access to computer resources
Resource management
Memory management
Device management
The user process executes in user mode until it needs a system call. Then a
system trap is generated, and the mode bit is set to 0. The
system call is executed in kernel mode. After the execution is completed, another
system trap is generated, and the mode bit is set back to 1. Control returns
to user mode and the process execution continues.
System Call
A system call is a way for programs to interact with the operating system.
A computer program makes a system call when it makes a request to the operating
system’s kernel.
System calls provide the services of the operating system to user programs via the
Application Program Interface (API).
A system call provides an interface between a process and the operating system. All
programs needing resources must use system calls.
System Programs
System Programming can be defined as the act of building Systems Software using
System Programming Languages.
In the computer hierarchy, hardware comes at the bottom; above it are
the operating system, then the system programs, and finally the application programs.
Process Concepts
Process:
A process is a program in execution. A system consists of a collection of
processes, each of which executes sequentially. Operating-system
processes execute system code, and user processes execute user code.
Process Hierarchy
Process States:
The process state is defined as the current activity of the process.
A process goes through various states, during its execution.
The operating system places the processes in a FIFO (First In, First Out) queue for
execution.
A dispatcher is a program that switches the processor from one process to another for
execution.
The different process states are as follows.
[Figure: process state diagram: New -(admitted)-> Ready -(dispatch)-> Running -(complete)-> Terminated, with Running -(timeout)-> Ready]
1. New State: the process is being created (admitted) by the operating system.
2. Ready State: the process is ready to execute and is waiting for a chance to run.
3. Running State: the instructions of the process are being executed.
4. Waiting State: the process is waiting for some event to occur, such as the
completion of an I/O operation. It is also known as the Blocked state.
5. Terminated State: the process has finished its execution, either completed
normally or aborted for some reason.
State Transitions of a Process
The process states are divided into different combinations
1. Null -> New
2. New -> Ready
3. Ready -> Running
4. Running -> Terminated
5. Running -> Ready
6. Running -> Waiting
7. Waiting -> Ready
Process Scheduling (or) CPU Scheduling
Process scheduling assigns processes to the processor for execution.
It is the method of executing multiple processes at a time in a multiprogramming
system.
CPU scheduling helps to achieve system objectives such as response time, CPU
utilization, and waiting time. In many systems, the scheduling task is divided
into three separate functions. They are
1. Long-Term Scheduler
2. Short-Term Scheduler
3. Medium-Term Scheduler
[Figure: queueing diagram: the long-term scheduler admits New processes to the Ready (or Ready/Suspend) queue, the medium-term scheduler moves processes between Ready and Ready/Suspend, and the short-term scheduler dispatches Ready processes to Running until Exit]
1. Long-Term Scheduler:
A Long-Term Scheduler determines, which programs are admitted to the system for
processing.
Once a program is admitted, it becomes a process and is added to the queue.
It controls the degree of multiprogramming, i.e., the number of processes present in
the ready state at any time.
The Long-Term Scheduler is also called as Job Scheduler.
2. Short-Term Scheduler:
The Short-Term Scheduler is also known as CPU Scheduler or Dispatcher.
It decides which process will execute next on the CPU, i.e., which process moves
from the Ready state to the Running state.
It also preempts the currently running process, to execute another process.
The main aim of this scheduler is, to enhance CPU performance and increase
process execution rate.
3. Medium-Term Scheduler:
The Medium-Term Scheduler is responsible for suspending and resuming the
processes.
It mainly does Swapping. i.e., moving processes from Main memory to secondary
memory and vice versa.
The Medium-Term Scheduler reduces the degree of multi-programming.
Page 13 of 38
Process Scheduling Algorithms
Scheduling algorithms are used to decide which of the processes in the queue should be
allocated to the CPU. The operating system uses a dispatcher, which assigns the selected
process to the CPU.
Types of Scheduling Algorithms:
The scheduling algorithms are classified into two types. They are as follows:
I. Non-Preemptive Algorithms:
A non-preemptive algorithm does not interrupt the currently running process.
Once a process enters the CPU, it cannot be preempted
until it completes its execution.
Ex: (1). First Come First Serve (FCFS)
(2). Shortest Job First (SJF)
II. Preemptive Algorithms:
A preemptive algorithm may interrupt the currently running process and move it
back to the Ready state. The preemption decision is made when a new process
arrives, when an interrupt occurs, or when a time-out occurs.
Ex: Round Robin (RR)
1) First Come First Serve [FCFS] Algorithm:
The FCFS algorithm is the simplest and most straightforward scheduling algorithm.
It is a non-preemptive scheduling algorithm.
In this algorithm, processes are executed on a first-come, first-served basis.
This algorithm is easy to understand and implement.
The problem with this algorithm is that the average waiting time can be very long.
Example: Consider the following processes that arrive at time 0.
Process    Burst Time (milliseconds)
P1         24
P2         3
P3         3
If the processes arrive in the order P1, P2, P3, then Gantt chart of this scheduling
is as follows.
P1 (0-24) | P2 (24-27) | P3 (27-30)
2) Shortest Job First [SJF] Algorithm:
It is also called Shortest Process Next (SPN).
It is a non-preemptive scheduling algorithm.
The SJF algorithm gives a shorter average waiting time than FCFS.
The process with the least burst time is selected from the ready queue for execution.
This is the best approach to minimize waiting time.
The problem with SJF is that it requires prior knowledge of the burst time of each
process.
Example: Consider the following processes that arrive at time 0.
Process    Burst Time (milliseconds)
P1         6
P2         8
P3         7
P4         3

P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24)
3) Round Robin [RR] Algorithm:
The Round Robin scheduling algorithm is used in time-sharing systems.
It is one of the most widely used algorithms.
A fixed time slice (quantum) is allotted to each process for execution.
If the running process does not complete within its quantum, the process is
preempted,
and the next process in the ready queue is allocated the CPU for execution.
The problem with this algorithm is that the average waiting time can be long.
Example: Consider the following processes that arrive at time 0.
Process    Burst Time (milliseconds)
P1         24
P2         3
P3         3

With a time quantum of 4 milliseconds:
P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-14) | P1 (14-18) | P1 (18-22) | P1 (22-26) | P1 (26-30)
Threads
A thread is also called a 'lightweight process'; it is a single unit of execution
within a process.
Each thread has its own program counter (PC), register set, and stack.
It shares the process code, data, and open files with the other threads of the
same process.
A traditional process has a single thread of control; it is also called a
'heavyweight process'.
If a process contains multiple threads of control, it can do more than one task
at a time.
Many software packages that run on modern computers are multi-threaded.
For example, MS-Word uses multiple threads, performing spelling and
grammar checking in the background, auto-saving, and so on.
Threading Issues
The main threading issues are:
a) The fork() and exec() system call
b) Signal handling
c) Thread cancelation
d) Thread Pools
e) Thread local storage
Page 16 of 38
a. The fork() and exec() system calls
The fork() system call is used to create a duplicate process. The semantics of the
fork() and exec() system calls change in a multithreaded program.
If a thread calls fork(), does the new process duplicate all threads, or is the
new process single-threaded?
If a thread calls exec(), the program specified in the parameter to
exec() will replace the entire process, including all threads.
b. Signal Handling
Generally, a signal is used in UNIX systems to notify a process that a
particular event has occurred.
A signal is received either synchronously or asynchronously, depending on the
source of, and the reason for, the event being signaled.
All signals, whether synchronous or asynchronous, follow the same
pattern, as given below:
A signal is generated by the occurrence of a particular event.
The signal is delivered to a process.
Once delivered, the signal must be handled.
c. Thread Cancellation
Termination of a thread in the middle of its execution is called 'thread
cancellation'.
Threads that are no longer required can be cancelled by another thread using one
of two techniques:
1. Asynchronous cancellation
2. Deferred cancellation
1. Asynchronous Cancellation
The target thread is cancelled immediately.
2. Deferred Cancellation
A flag is set indicating that the thread should cancel itself
when it is feasible to do so.
For example, if multiple database threads are concurrently searching
through a database and one thread returns the result, the remaining threads might
be cancelled.
d. Thread Pools
In a multithreaded web server, whenever the server receives a request, it creates a
separate thread to service the request.
The idea of a thread pool is to create a number of threads at process start-up and
place them into a pool, where they sit and wait for work.
Page 17 of 38
e. Thread Local Storage
The benefit of using threads in the first place is that most data is shared
among the threads; however, sometimes threads also need their own copies of
certain data, called thread-specific data.
The major thread libraries (Pthreads, Win32 and Java) provide
support for thread-specific data, also known as thread-local storage (TLS).
Thread Libraries
Thread libraries provide programmers with an Application Program Interface for
creating and managing threads.
Thread libraries may be implemented either in user space or in kernel space
There are two primary ways of implementing a thread library:
The first is to provide a library entirely in user space, with no kernel support.
The second is to implement a kernel-level library supported directly by the
operating system.
There are Three Main Thread Libraries in use today:
1. POSIX Pthreads - may be provided as either a user-level or a kernel-level library,
as an extension to the POSIX standard.
Pthreads are available on Solaris, Linux, Mac OS X, Tru64, and via public-domain
shareware for Windows.
Global variables are shared amongst all threads.
One thread can wait for the others to rejoin before continuing.
2. Win32 threads - provided as a kernel-level library on Windows systems. The API is
similar to Pthreads.
3. Java threads - since Java generally runs on a Java Virtual Machine, the
implementation of threads depends on whatever OS and hardware the JVM is
running on, i.e., either Pthreads or Win32 threads, depending on the system.
UNIT-3
Process Management
Deadlock
Deadlock: “Deadlock is a situation where a set of processes is blocked, because each
process is holding a resource and waiting for another resource acquired by some other
process.” (or) Deadlock is a situation that can arise when several processes compete
for a finite number of resources.
In a multiprogramming system, a process requests a resource, and if the
resource is not available the process enters a waiting state. The waiting
process may never change state again, because the resources it needs are held by
other waiting processes. This situation is called a deadlock.
Consider the following resource-allocation graphs:
[Figure: deadlock: P1 holds R1 and waits for R2, while P2 holds R2 and waits for R1, forming a circular wait]
[Figure: no deadlock: the assignment and request edges do not form a cycle]
Deadlock Avoidance:
With deadlock avoidance, a decision is made dynamically whether the current
resource-allocation request, if granted, could potentially lead to a deadlock.
Deadlock avoidance requires knowledge of future process resource requests.
There are two approaches to deadlock avoidance:
Do not start a process if its demands may lead to deadlock.
Do not grant an incremental resource request by a process if this allocation
might lead to deadlock.
A deadlock-avoidance algorithm ensures that a process never enters an unsafe
or deadlocked state.
Each process declares in advance the maximum number of resources of each type that
it may need; the OS tracks the available resources, the allocated resources, and
the maximum demand of each process.
If the resources can be allocated to each process, in some order, according to its
requirements without a deadlock occurring, the state is called a safe state.
A safe state is not a deadlocked state, and not all unsafe states are deadlocked;
but in an unsafe state, a deadlock may occur.
We can check for a safe state by using the Banker's algorithm.
[Figure: deadlock states are a subset of the unsafe states; safe states lie outside the unsafe region]
Resource allocation:
Consider a system with a finite number of processes and a finite number of
resources. At any time, a process may have zero or more resources allocated to it. The
state of the system is reflected by the current allocation of resources to processes.
The state may be a safe state or an unsafe state.
Page 22 of 38
Safe State:
Unsafe State:
Page 23 of 38
1. Deadlock Detection: Deadlock detection is the process of determining whether a
deadlock exists, and of identifying the processes and resources involved in it. The
basic idea is to check the resource allocation and availability, and to determine
whether the system is in a deadlocked state.
Detection strategies do not restrict process actions. With deadlock detection,
requested resources are granted to processes whenever possible. Periodically, the OS
performs an algorithm, to detect the circular wait condition.
1. A deadlock exists, if and only if, there are unmarked processes at the end of the
algorithm.
2. Each unmarked process is deadlocked.
3. The strategy in this algorithm is to find a process, whose request can be
satisfied with the available resources.
2. Deadlock Recovery: When a detection algorithm finds that a deadlock exists, one of
several recovery methods is used.
a) Process Termination: To eliminate deadlocks by aborting a process, we use one of
two methods. In both methods, the system reclaims all resources allocated to the
terminated processes.
1. Abort all deadlocked processes: This method clearly breaks the deadlock
cycle. However, these processes may have computed for a long time, and the results
of these partial computations must be discarded and recomputed later.
2. Abort one process at a time, until the deadlock cycle is eliminated: This
method incurs considerable overhead, since after each process is aborted a
deadlock-detection algorithm must be run to determine whether any processes are
still deadlocked.
b) Resource Preemption: Resources are preempted from the processes that are
involved in deadlock. Then preempted resources are allocated to other processes.
So that, there is a possibility of recovering the system from deadlock.
Process Synchronization
Process synchronization means sharing system resources among processes in
such a way that concurrent access to shared data is coordinated, thereby minimizing
the chance of inconsistent data. Maintaining data consistency demands
mechanisms to ensure synchronized execution of cooperating processes. Process
synchronization was introduced to handle problems that arise during multiple
process executions. Some of the problems are discussed below.
Critical Section: A critical section is a code segment in which a process accesses
shared resources. Among a group of cooperating processes, at a given point of time,
only one process must be executing its critical section. If any other process also
wants to execute its critical section, it must wait until the first one finishes.
Any solution to the critical-section problem must satisfy the following requirements:
1. Mutual Exclusion: Out of a group of cooperating processes, only one process can
be in its critical section at a given point of time.
2. Progress: If no process is in its critical section, and one or more processes want
to execute their critical sections, then one of them must be allowed to enter
its critical section.
3. Bounded Waiting: After a process makes a request to enter its critical
section, there is a limit on how many other processes can enter their critical
sections before this process's request is granted. Once the limit is reached, the
system must grant the process permission to enter its critical section.
Synchronization Hardware
Many systems provide hardware support for critical-section code. The critical-section
problem could be solved easily in a single-processor environment if we
could disable interrupts while a shared variable or resource is being
modified.
In this manner, we could be sure that the current sequence of instructions
would be allowed to execute in order, without preemption. Unfortunately, this
solution is not feasible in a multiprocessor environment.
Disabling interrupts in a multiprocessor environment can be time-consuming,
as the message must be passed to all the processors. This message-transmission
lag delays the entry of threads into their critical sections, and system
efficiency decreases.
1. CREATING A FILE WITH RECORDS
PROGRAM:
/* A program to write data into a file */
#include <stdio.h>
#include <conio.h>
#include <stdlib.h>
main()
{
    int stno, sub1, sub2, n, i;
    char stname[10];
    FILE *fp;
    fp = fopen("bca.txt", "w");
    if(fp == NULL)
    {
        printf("Can't open that file!");
        exit(0);
    }
    clrscr();
    printf("How many students: ");
    scanf("%d", &n);
    for(i = 1; i <= n; i++)
    {
        printf("\nEnter Student %d Details\n", i);
        printf("Student Number:");
        scanf("%d", &stno);
        fflush(stdin);   /* clear the newline left in the input buffer */
        printf("Student Name:");
        gets(stname);
        printf("Marks in two subjects:");
        scanf("%d%d", &sub1, &sub2);
        fprintf(fp, "\n%d %s %d %d", stno, stname, sub1, sub2);
    }
    printf("Record(s) Created Successfully");
    fclose(fp);
    getch();
}
OUTPUT:
How many students: 2
Enter Student 1 Details Student Number:101 Student Name:ABC
Marks in two subjects:45 56
Page 27 of 38
Enter Student 2 Details
Student Number:102
Student Name:XYZ
Marks in two subjects:67 55
Record(s) Created Successfully
wait_time[i] = wait_time[i] + ser_time[j];
}
sum = sum + wait_time[i];
}
avg_wait_time = (float)sum / n;
sum = 0;
printf("\nProcess ID\t\tService Time\t Waiting Time\t Turnaround Time\n");
for(i = 0; i < n; i++)
{
turn_time[i] = ser_time[i] + wait_time[i];
sum = sum + turn_time[i];
printf("\nProcess[%d]\t\t%d\t\t %d\t\t %d",process[i],ser_time[i],wait_time[i],
turn_time[i]);
}
avg_turn_time = (float)sum / n;
printf("\n\n Average Waiting Time: %f", avg_wait_time);
printf("\nAverage Turnaround Time: %f", avg_turn_time);
getch();
}
OUTPUT:
How many Processes?3
Enter execution time for Process[1]: 3
Enter execution time for Process[2]: 6
Enter execution time for Process[3]: 4
Process ID Service Time Waiting Time Turnaround Time
Process[1] 3 0 3
Process[3] 4 3 7
Process[2] 6 7 13
Average Waiting Time: 3.333333
Average Turnaround Time: 7.666667
}
printf("\nEnter Quantum Time:");
scanf("%d", &q_time);
printf("\nProcess Name\t\tService Time\t Turnaround Time\t Waiting Time\n");
total=0;
i=0;
while (x!=0)
{
if(temp[i] <= q_time && temp[i] > 0)
{
total = total + temp[i];
temp[i] = 0;
counter = 1;
}
else if(temp[i] > 0)
{
temp[i] = temp[i] - q_time;
total = total + q_time;
}
if(temp[i] == 0 && counter == 1)
{
x--;
printf("\nProcess %s\t\t%d\t\t %d\t\t\t %d",p[i],ser_time[i],
total-arr_time[i], total-arr_time[i]-ser_time[i]);
wait_time = wait_time + total - arr_time[i] - ser_time[i];
turn_time = turn_time + total - arr_time[i];
counter = 0;
}
if(i == n - 1)
{
i = 0;
}
else if(arr_time[i+1] <= total)
{
i++;
}
else
{
i=0;
}
}
avg_wait_time = wait_time * 1.0 / n;
avg_turn_time = turn_time * 1.0 / n;
printf("\n\n Average Waiting Time: %f", avg_wait_time);
printf("\n Average Turnaround Time: %f", avg_turn_time);
getch();
}
OUTPUT:
How many Processes?3
Enter Quantum Time:1
Process Name Service Time Turnaround Time Waiting Time
Process A 3 4 1
Process C 4 8 4
Process B 6 11 5
Average Waiting Time: 3.333333
Average Turnaround Time: 7.666667
5. DEADLOCK DETECTION
Aim: A program to implement a deadlock detection algorithm
PROGRAM:
#include<stdio.h>
#include<conio.h>
void main()
{
int found,flag,l,p[4][5],tp,tr,c[4][5],i,j,k=1,m[5],r[5],a[5],temp[5],sum=0;
clrscr();
printf("How many processes:");
scanf("%d",&tp);
printf("\n How many resources:");
scanf("%d",&tr);
printf("Enter number of resource units for each resource:\n");
for(i=1;i<=tr;i++)
scanf("%d",&r[i]);
printf("enter maximum resources for each process\n");
for(i=1;i<=tp;i++)
for(j=1;j<=tr;j++)
{
scanf("%d",&c[i][j]);
}
printf("Enter allocated resources for each process\n");
for(i=1;i<=tp;i++)
for(j=1;j<=tr;j++)
{
scanf("%d",&p[i][j]);
}
printf("enter availability vector:\n");
for(i=1;i<=tr;i++)
{
scanf("%d",&a[i]);
temp[i]=a[i];
}
for(i=1;i<=tp;i++)
{
sum=0;
for(j=1;j<=tr;j++)
sum+=p[i][j];
if(sum==0)
{
Page 32 of 38
m[k]=i;
k++;
}
}
for(i=1;i<=tp;i++)
{
for(l=1;l<k;l++)
if(i!=m[l])
{
flag=1;
for(j=1;j<=tr;j++)
if(c[i][j]>temp[j])
{
flag=0;
break;
}
}
if(flag==1)
{
m[k]=i;
k++;
for(j=1;j<=tr;j++)
temp[j]+=p[i][j];
}
}
printf("deadlock causing processes are:");
for(j=1;j<=tp;j++)
{
found=0;
for(i=1;i<k;i++)
{
if(j==m[i])
found=1;
}
if(found==0)
printf("%d\t",j);
}
getch();
}
OUTPUT:
How many processes:2
How many resources:3
Enter number of resource units for each resource:
3 3 3
enter maximum resources for each process
2 2 2
2 2 2
Enter allocated resources for each process
1 1 1
1 1 1
enter availability vector:
1 1 1
deadlock causing processes are: 1 2
6. DEADLOCK AVOIDANCE
AIM: A program to implement deadlock avoidance algorithm
PROGRAM:
/* A program to implement deadlock Avoidance algorithm */
#include<stdio.h>
#include<conio.h>
#include<stdlib.h>
void main()
{
int allocated[15][15],max[15][15],need[15][15];
int avail[15],tres[15],work[15],flag[15];
int pno,rno,i,j,prc,count,t,total;
count=0;
clrscr();
printf("Enter number of processes:");
scanf("%d",&pno);
printf("Enter number of resources:");
scanf("%d",&rno);
for(i=1;i<=pno;i++)
flag[i]=0;
printf("Enter number of resource units for each resource:");
for(i=1;i<= rno;i++)
scanf("%d",&tres[i]);
printf("Enter Maximum resources for each process:");
for(i=1;i<= pno;i++)
{
printf("\n For process%d :",i);
for(j=1;j<= rno;j++)
scanf("%d",&max[i][j]);
}
printf("Enter allocated resources for each process:");
for(i=1;i<= pno;i++)
{
printf("\n For process%d :",i);
for(j=1;j<= rno;j++)
scanf("%d",&allocated[i][j]);
}
printf("Available resources:\n");
for(j=1;j<= rno;j++)
{
avail[j]=0;
total=0;
for(i=1;i<= pno;i++)
total+=allocated[i][j];
avail[j]=tres[j]-total;
work[j]=avail[j];
printf("%d \t",work[j]);
}
do
{
for(i=1;i<= pno;i++)
{
for(j=1;j<= rno;j++)
need[i][j]=max[i][j]-allocated[i][j];
}
printf("\n Allocated matrix \t Max \t Need");
for(i=1;i<= pno;i++)
{
printf("\n");
for(j=1;j<= rno;j++)
printf("%4d",allocated[i][j]);
printf("\t\t|");
for(j=1;j<= rno;j++)
printf("%4d",max[i][j]);
printf("\t|");
for(j=1;j<= rno;j++)
printf("%4d",need[i][j]);
}
prc=0;
for(i=1;i<= pno;i++)
{
if(flag[i]==0)
{
prc=i;
for(j=1;j<= rno;j++)
{
if(work[j]< need[i][j])
{
prc=0;
break;
}
}
}
if(prc!=0)
break;
}
if(prc!=0)
{
printf("\n Process %d completed",i);
count++;
printf("\n Available matrix:");
for(j=1;j<= rno;j++)
{
work[j]+=allocated[prc][j];
allocated[prc][j]=0;
max[prc][j]=0;
flag[prc]=1;
printf("%d",work[j]);
}
}
} while(count!=pno && prc!=0);
if(count==pno)
printf("\nThe system is in a safe state!!");
else
printf("\nThe system is in an unsafe state!!");
getch();
}
OUTPUT:
Enter number of processes:2
Enter number of resources:3
Enter number of resource units for each resource:3 3 3
Enter Maximum resources for each process:
For process1 :1 1 1
For process2 :1 1 1
Enter allocated resources for each process:
For process1 :1 1 1
For process2 :1 1 1
Available resources:
1 1 1
Allocated matrix Max Need
1 1 1 | 1 1 1 | 0 0 0
1 1 1 | 1 1 1 | 0 0 0
Process 1 completed
Available matrix: 2 2 2
Allocated matrix Max Need
0 0 0 | 0 0 0 | 0 0 0
1 1 1 | 1 1 1 | 0 0 0
Process 2 completed
Available matrix: 3 3 3
The system is in a safe state!!
else
{
printf("File in the index is already allocated \n");
printf("Enter another file indexed");
goto y;
}
printf("Do you want to enter more file(Yes - 1/No - 0)");
scanf("%d", &c);
if(c==1)
goto x;
else
exit(0);
getch();
}
OUTPUT:
Enter the index block: 5
Enter no of blocks needed and no of files for the index 5 on the disk :
4
1234
Allocated
File Indexed
5-------->1 : 1
5-------->2 : 1
5-------->3 : 1
5-------->4 : 1
Do you want to enter more file(Yes - 1/No - 0)1
Enter the index block: 4
4 index is already allocated
Enter the index block: 6
Enter no of blocks needed and no of files for the index 6 on the disk :
2
78
Allocated
File Indexed
6-------->7 : 1
6-------->8 : 1
Do you want to enter more file(Yes - 1/No - 0)0
Dear Students, no one can predict your future. Don't look back; always step
forward with confidence.