
OPERATING SYSTEMS

DISCLAIMER: THIS IS AS PER SYLLABUS BOOK


UNIT – I
1. What is an Operating System?
An operating system belongs to the category of system software. It manages all the resources of the computer and acts as an interface between the software and the computer hardware. The operating system is designed in such a way that it can manage the overall resources and operations of the computer.
An operating system is a fully integrated set of specialized programs that handle all the operations of the computer. It controls and monitors the execution of all other programs that reside in the computer, including application programs and other system software. Examples of operating systems are Windows, Linux, macOS, etc.
An Operating System (OS) is a collection of software that manages computer hardware
resources and provides common services for computer programs. The operating system is the
most important type of system software in a computer system.

2. What an Operating System Does


 Resource Management: The operating system manages and allocates memory, CPU
time, and other hardware resources among the various programs and processes
running on the computer.
 Process Management: The operating system is responsible for starting, stopping, and
managing processes and programs. It also controls the scheduling of processes and
allocates resources to them.
 Memory Management: The operating system manages the computer’s primary
memory and provides mechanisms for optimizing memory usage.
 Security: The operating system provides a secure environment for the user,
applications, and data by implementing security policies and mechanisms such as
access controls and encryption.
 Job Accounting: It keeps track of time and resources used by various jobs or users.
 File Management: The operating system is responsible for organizing and managing
the file system, including the creation, deletion, and manipulation of files and
directories.
 Device Management: The operating system manages input/output devices such as
printers, keyboards, mice, and displays. It provides the necessary drivers and
interfaces to enable communication between the devices and the computer.
 Networking: The operating system provides networking capabilities such as
establishing and managing network connections, handling network protocols, and
sharing resources such as printers and files over a network.
 User Interface: The operating system provides a user interface that enables users to
interact with the computer system. This can be a Graphical User Interface (GUI), a
Command-Line Interface (CLI), or a combination of both.
 Backup and Recovery: The operating system provides mechanisms for backing up
data and recovering it in case of system failures, errors, or disasters.
 Virtualization: The operating system provides virtualization capabilities that allow
multiple operating systems or applications to run on a single physical machine. This
can enable efficient use of resources and flexibility in managing workloads.
 Performance Monitoring: The operating system provides tools for monitoring and
optimizing system performance, including identifying bottlenecks, optimizing
resource usage, and analyzing system logs and metrics.
 Time-Sharing: The operating system enables multiple users to share a computer
system and its resources simultaneously by providing time-sharing mechanisms that
allocate resources fairly and efficiently.
 System Calls: The operating system provides a set of system calls that enable
applications to interact with the operating system and access its resources. System
calls provide a standardized interface between applications and the operating system,
enabling portability and compatibility across different hardware and software
platforms.
 Error-detecting Aids: These contain methods that include the production of dumps,
traces, error messages, and other debugging and error-detecting methods.

3. Uniprogramming vs Multiprogramming
Uniprogramming:
 Only one task runs at a time, so the system runs smoothly and can function even on a slow processor.
 The main memory needs less space, as only one task resides in it at a time.
 A fixed-size partition is used.
 Examples: the operating systems in old mobile phones, batch processing in old computers, etc.

Multiprogramming:
 The processor needs to be faster.
 The main memory needs more space, as several tasks reside in it at a time.
 Both fixed and variable size partitions can be used.
 Examples: the operating systems used in modern computers, such as Windows 10.
4. Operating Systems Operations
An operating system is a construct that allows the user application programs to interact with the system hardware. The operating system by itself does not perform useful work; rather, it provides an environment in which different applications and programs can do useful work.
The major operations of the operating system are process management, memory
management, device management and file management. These are given in detail as follows:

Process Management
The operating system is responsible for managing processes, i.e., assigning the processor to one process at a time. This is known as process scheduling. The different algorithms used for
process scheduling are FCFS (first come first served), SJF (shortest job first), priority
scheduling, round robin scheduling etc.
There are many scheduling queues that are used to handle processes in process management.
When the processes enter the system, they are put into the job queue. The processes that are
ready to execute in the main memory are kept in the ready queue. The processes that are
waiting for the I/O device are kept in the device queue.
Memory Management
Memory management plays an important part in an operating system. It deals with memory and with moving processes from disk to primary memory for execution and back again.
The activities performed by the operating system for memory management are −
 The operating system assigns memory to the processes as required. This can be done
using best fit, first fit and worst fit algorithms.
 All the memory is tracked by the operating system, i.e., it notes which parts of memory are in use by processes and which are free.
 The operating system deallocates memory from processes as required. This may happen when a process has been terminated or when it no longer needs the memory.

Device Management
There are many I/O devices handled by the operating system such as mouse,
keyboard, disk drive etc. There are different device drivers that can be connected to
the operating system to handle a specific device. The device controller is an interface
between the device and the device driver. The user applications can access all the I/O
devices using the device drivers, which are device specific codes.

File Management
Files are used by the operating system to provide a uniform view of data storage. All files are mapped onto physical devices that are usually non-volatile, so data is safe in case of system failure.
The files can be accessed by the system in two ways i.e. sequential access and direct access −
 Sequential Access
The information in a file is processed in order using sequential access. The file's records are accessed one after another. Most programs, such as editors and compilers, use sequential access.
 Direct Access
In direct access (or relative access), files can be accessed randomly for read and write operations. The direct access model is based on the disk model of a file, since a disk allows random access.

5. Operating System Services


Services of Operating System
 Program execution
 Input Output Operations
 Communication between Process
 File Management
 Memory Management
 Process Management
 Security and Privacy
 Resource Management
 User Interface
 Networking
 Error handling
 Time Management

6. System Calls
A system call is a method by which a computer program requests a service from the kernel of the operating system on which it is running. In other words, it is the way programs interact with the operating system: a request from software to the operating system's kernel.
How are system calls made?
When computer software needs to access the operating system's kernel, it makes a system call. The system call uses an API to expose the operating system's services to user programs. It is the only method of accessing the kernel. All programs or processes that require resources for execution must use system calls, as they serve as the interface between the operating system and user programs.
Below are some examples of how a system call varies from a user function.
1. A system call function may create and use kernel processes to execute asynchronous processing.
2. A system call has greater authority than a standard subroutine. A system call with
kernel-mode privilege executes in the kernel protection domain.
3. System calls are not permitted to use shared libraries or any symbols that are not
present in the kernel protection domain.
4. The code and data for system calls are stored in global kernel memory.
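As a concrete illustration, here is a minimal sketch (assuming a Unix-like system; the file name "input.txt" is an illustrative assumption) in which user code invokes the open, read, write, and close system calls directly:

/* Minimal sketch: user code requesting kernel services through
 * POSIX system calls. Assumes a Unix-like OS and that the file
 * "input.txt" exists; both are illustrative assumptions. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = open("input.txt", O_RDONLY);     /* system call: open a file */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    char buf[128];
    ssize_t n = read(fd, buf, sizeof(buf));   /* system call: read bytes */
    if (n > 0)
        write(STDOUT_FILENO, buf, (size_t)n); /* system call: write to stdout */
    close(fd);                                /* system call: release descriptor */
    return 0;
}

Each of these calls crosses from user mode into kernel mode and back, which is exactly the standardized interface described above.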

7. Types of System Calls


There are commonly five types of system calls. These are as follows:
1. Process Control
2. File Management
3. Device Management
4. Information Maintenance
5. Communication
Process Control
Process control system calls are used to direct processes. Examples include creating a process, terminating a process, loading and executing a program, aborting or ending a process, etc.
File Management
File management system calls are used to handle files. Examples include creating a file, deleting a file, open, close, read, write, etc.
Device Management
Device management system calls are used to deal with devices. Examples include requesting a device, releasing a device, read, write, getting device attributes, etc.
Information Maintenance
Information maintenance system calls are used to maintain information. Examples include getting or setting the time or date, getting or setting system data, etc.
Communication
Communication system calls are used for inter-process communication. Examples include creating and deleting communication connections, sending and receiving messages, etc.
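To make the process-control category concrete, the hedged sketch below (for a POSIX system; the choice of the ls program is illustrative) uses fork to create a process, execlp to load a new program into it, and wait to collect it:

/* Sketch of process-control system calls on a POSIX system:
 * fork() creates a process, execlp() executes a program in it,
 * wait() blocks until the child terminates. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* process control: create a process */
    if (pid < 0) {
        perror("fork");
        exit(1);
    }
    if (pid == 0) {                     /* child process */
        execlp("ls", "ls", "-l", (char *)NULL); /* process control: execute */
        perror("execlp");               /* reached only if exec failed */
        _exit(127);
    }
    wait(NULL);                         /* process control: wait for the child */
    printf("child finished\n");
    return 0;
}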

8. Operating System Structure


A system structure for an operating system is like a blueprint of how the OS is organized and how its different parts interact with each other. Because operating systems have complex structures, we want a structure that is easy to understand so that we can adapt an operating system to meet our specific needs. Just as we break down larger problems into smaller, more manageable subproblems, building an operating system in pieces is simpler, with each piece being a well-defined component of the system. The strategy for integrating the different operating system components within the kernel can be thought of as the operating system structure.

Types of Operating Systems Structures


Depending on how the components are integrated, we have the following structures in the operating system:
 Simple/Monolithic Structure
 Micro-Kernel Structure
 Hybrid-Kernel Structure
 Exo-Kernel Structure
 Layered Structure
 Modular Structure
 Virtual Machines
1. Simple/Monolithic structure
Such operating systems do not have well-defined structures and are small, simple, and
limited. The interfaces and levels of functionality are not well separated. MS-DOS is an
example of such an operating system. In MS-DOS, application programs are able to access
the basic I/O routines. These types of operating systems cause the entire system to crash if
one of the user programs fails.

2. Micro-Kernel Structure
This structure designs the operating system by removing all non-essential components from the kernel and implementing them as system and user programs. This results in a smaller kernel called the micro-kernel. An advantage of this structure is that new services are added in user space and do not require the kernel to be modified. It is thus more secure and reliable: if a service fails, the rest of the operating system remains untouched. Mac OS is an example of this type of OS.
3. Hybrid-Kernel Structure
The hybrid-kernel structure is a combination of the monolithic-kernel structure and the micro-kernel structure. It combines the speed and simple design of a monolithic kernel with the modularity and stability of a micro-kernel, making a more advanced and practical approach.
4. Exo-Kernel Structure
Exokernel is an operating system developed at MIT to provide application-level management
of hardware resources. By separating resource management from protection, the exokernel
architecture aims to enable application-specific customization. Due to its limited operability,
exokernel size typically tends to be minimal.
The OS always has an impact on the functionality, performance, and scope of the applications that are developed on it, because it sits between the software and the hardware. The exokernel operating system attempts to address this problem by rejecting the notion that an operating system must provide abstractions upon which to base applications. The objective is to impose as few abstractions as possible on developers while still giving them freedom.

5. Layered Structure
An OS can be broken into pieces and retain much more control over the system. In this
structure, the OS is broken into a number of layers (levels). The bottom layer (layer 0) is the
hardware, and the topmost layer (layer N) is the user interface. These layers are so designed
that each layer uses only the functions of the lower-level layers. This simplifies the debugging process: if the lower-level layers have already been debugged and an error occurs, the error must be in the layer currently being debugged, as the lower-level layers have already been verified.
The main disadvantage of this structure is that at each layer, the data needs to be modified
and passed on which adds overhead to the system. Moreover, careful planning of the layers is
necessary, as a layer can use only lower-level layers. UNIX is an example of this structure.

6. Modular Structure
This is considered the best approach for an OS. It involves designing a modular kernel. The kernel has only a set of core components, and other services are added to the kernel as dynamically loadable modules, either at boot time or at run time. It resembles the layered structure in that each kernel module has defined and protected interfaces, but it is more flexible than a layered structure, as a module can call any other module. For example, the Solaris OS is organized as shown in the figure.
7. Virtual Machines (VMs)
Based on our needs, a virtual machine abstracts the hardware of our personal computer, including the CPU, disk drives, RAM, and NIC (Network Interface Card), into a variety of different execution contexts, giving us the impression that each execution environment is a separate computer. VirtualBox is an example of this.
An operating system enables us to run multiple processes concurrently while making it
appear as though each one is using a different processor and virtual memory by using CPU
scheduling and virtual memory techniques.
The fundamental issue with the virtual machine technique is disk systems. Suppose the physical machine has only three disk drives but needs to host seven virtual machines. The program that creates virtual machines would need a significant amount of disk space in order to provide virtual memory and spooling, so it is clearly impossible to assign a disk drive to every virtual machine. The solution is to provide virtual disks.

9. Process Concepts
A process is basically a program in execution. The execution of a process must progress in a
sequential fashion.
A process is defined as an entity which represents the basic unit of work to be implemented in
the system.
To put it in simple terms, we write our computer programs in a text file and when we execute
this program, it becomes a process which performs all the tasks mentioned in the program.
When a program is loaded into the memory and it becomes a process, it can be divided into
four sections ─ stack, heap, text and data. The following image shows a simplified layout of a
process inside main memory −
Component & Description
1. Stack: The process stack contains temporary data such as method/function parameters, return addresses, and local variables.
2. Heap: This is memory that is dynamically allocated to a process during its run time.
3. Text: This includes the current activity, represented by the value of the program counter and the contents of the processor's registers.
4. Data: This section contains the global and static variables.
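To make the four sections concrete, here is a small illustrative C program (the variable names are made up for the example) annotated with the section in which each object lives:

/* Illustrative only: which process section each object lives in.
 * Exact addresses and layout vary by OS and compiler. */
#include <stdio.h>
#include <stdlib.h>

int global_counter = 7;      /* data section: initialized global variable */
static int static_flag;      /* data (BSS) section: static variable */

int main(void) {             /* the compiled instructions form the text section */
    int local = 3;                        /* stack: local variable */
    int *dynamic = malloc(sizeof(int));   /* heap: allocated at run time */
    if (dynamic == NULL)
        return 1;
    *dynamic = global_counter + local + static_flag;
    printf("heap value = %d\n", *dynamic);
    free(dynamic);           /* heap memory is released explicitly */
    return 0;                /* the stack frame is torn down on return */
}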
10. Operations on Processes
The execution of a process is a complex activity involving various operations. The following operations are performed during the execution of a process:

Creation
This is the initial step of the process execution activity. Process creation means the construction of a new process for execution. This might be performed by the system, by the user, or by an existing process itself. Several events can lead to process creation. Some of these events are the following:
1. When we start the computer, the system creates several background processes.
2. A user may request to create a new process.
3. A process can create a new process itself while executing.
4. A batch system initiates a batch job.
Scheduling/Dispatching
This is the event or activity in which the state of the process changes from ready to running; the operating system moves the process from the ready state into the running state. Dispatching
is done by the operating system when the resources are free or the process has higher priority
than the ongoing process. There are various other cases in which the process in the running
state is preempted and the process in the ready state is dispatched by the operating system.
Blocking
When a process invokes an input-output system call that blocks it, the process is put into blocked mode, in which it waits for the input-output to complete. On the demand of the process itself, the operating system blocks the process and dispatches another process to the processor. Hence, in the process-blocking operation, the operating system puts the process in a 'waiting' state.
Preemption
When a timeout occurs, meaning the process has not completed within the allotted time interval and the next process is ready to execute, the operating system preempts the process. This operation is only valid where CPU scheduling supports preemption. Basically,
this happens in priority scheduling where on the incoming of high priority process the
ongoing process is preempted. Hence, in process preemption operation, the operating system
puts the process in a ‘ready’ state.
Process Termination
Process termination is the activity of ending a process. In other words, process termination is the release of the computer resources taken by the process for its execution. As with creation, several events may lead to process termination. Some of them are:
1. The process completes its execution fully and it indicates to the OS that it has
finished.
2. The operating system itself terminates the process due to service errors.
3. There may be a problem in hardware that terminates the process.
4. One process can be terminated by another process.

11. Inter-Process Communication


Inter-process communication (IPC) serves as a means for transmitting data among multiple
threads situated within one or more processes or programs. These processes may be active on
a solitary computer or distributed across a network of machines.
It is a set of programming interfaces that enable a programmer to coordinate actions across
multiple processes that can run concurrently in an operating system. This enables a given
program to handle several user requests at the same time.
Because each user request may cause multiple processes to operate in the operating system,
the processes may need to communicate with one another. Because each IPC protocol
technique has its own set of advantages and disadvantages, it is not uncommon for a single
program to use many protocols.
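As one concrete IPC mechanism, the sketch below (assuming a POSIX system; the message text is illustrative) connects a parent and child process through a pipe:

/* Sketch: IPC through a POSIX pipe between parent and child. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];                        /* fds[0] = read end, fds[1] = write end */
    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }
    if (fork() == 0) {                 /* child: writes into the pipe */
        close(fds[0]);
        const char *msg = "hello from child";
        write(fds[1], msg, strlen(msg) + 1);
        close(fds[1]);
        _exit(0);
    }
    close(fds[1]);                     /* parent: reads from the pipe */
    char buf[64];
    if (read(fds[0], buf, sizeof(buf)) > 0)
        printf("parent received: %s\n", buf);
    close(fds[0]);
    wait(NULL);
    return 0;
}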
12. Threads and Multithreading Models
What is Thread in Operating Systems?
In a process, a thread refers to a single sequential flow of activity being executed; this is also known as a thread of execution or thread of control. Any operating system process can execute threads, and a process can have multiple threads.
Why Do We Need Thread?
 Threads run in parallel improving the application performance. Each such thread has
its own CPU state and stack, but they share the address space of the process and the
environment.
 Threads can share common data so they do not need to use inter-process
communication. Like the processes, threads also have states like ready, executing,
blocked, etc.
 Priority can be assigned to the threads just like the process, and the highest priority
thread is scheduled first.
 Each thread has its own Thread Control Block (TCB). Like the process, a context switch occurs for the thread, and the register contents are saved in the TCB. As threads share the same address space and resources, synchronization is also required for the various activities of the threads; the sketch after this list shows two threads sharing data safely.
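Here is a minimal sketch of these points using POSIX threads (pthreads); the counter and iteration count are illustrative assumptions, and the mutex provides the synchronization mentioned above. Compile with -pthread.

/* Two threads share the process's address space; a mutex keeps
 * their updates to the shared counter consistent. */
#include <pthread.h>
#include <stdio.h>

static long shared_total = 0;                  /* shared by all threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);             /* synchronized update */
        shared_total++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);   /* threads may run in parallel */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_total = %ld\n", shared_total); /* prints 200000 */
    return 0;
}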
Multithreading model:
Multithreading allows the execution of multiple parts of a program at the same time. These
parts are known as threads and are lightweight processes available within the process.
Therefore, multithreading leads to maximum utilization of the CPU by multitasking.
The main models for multithreading are one to one model, many to one model and many to
many model. Details about these are given as follows −
One to One Model
The one to one model maps each of the user threads to a kernel thread. This means that many
threads can run in parallel on multiprocessors and other threads can run when one thread
makes a blocking system call.
A disadvantage of the one to one model is that the creation of a user thread requires a
corresponding kernel thread. Since many kernel threads burden the system, there is a restriction on the number of threads in the system.
A diagram that demonstrates the one to one model is given as follows −
Many to One Model
The many to one model maps many user threads to a single kernel thread. This model is quite efficient, as thread management is done in user space.
A disadvantage of the many to one model is that a thread blocking system call blocks the
entire process. Also, multiple threads cannot run in parallel as only one thread can access the
kernel at a time.
A diagram that demonstrates the many to one model is given as follows −

Many to Many Model


The many to many model maps many user threads to an equal or smaller number of kernel threads. The number of kernel threads depends on the application or machine.
The many to many model does not have the disadvantages of the one to one model or the many to one model. There can be as many user threads as required, and their corresponding kernel threads can run in parallel on a multiprocessor.
A diagram that demonstrates the many to many model is given as follows −
UNIT – II
1. Process Scheduling – Basic Concepts
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
Categories of Scheduling
There are two categories of scheduling:
1. Non-preemptive: Here the resource can’t be taken from a process until the process
completes execution. The switching of resources occurs when the running process
terminates and moves to a waiting state.
2. Preemptive: Here the OS allocates resources to a process for a fixed amount of time. During resource allocation, a process switches from the running state to the ready state or from the waiting state to the ready state. This switching occurs because the CPU may give priority to other processes and replace the running process with a higher-priority one.

2. Scheduling Criteria

CPU utilization- The objective of any CPU scheduling algorithm is to keep the CPU as busy as possible and to maximize its usage. In theory, CPU utilization can range from 0 to 100 percent, but in a real system it typically lies between 50 and 90 percent, depending on the system's load.
Throughput- This is a measure of the work done by the CPU, expressed as the number of processes executed and completed per unit of time. It varies depending on the duration or length of the processes.
Turnaround time- An important scheduling criterion for any process is how long it takes to execute. Turnaround time is the time elapsed from the submission of a process to its completion. It is the sum of the time spent waiting to get into memory, waiting in the ready queue, performing I/O, and executing on the CPU. The formula for calculating it is:
Turnaround Time = Completion Time − Arrival Time
Waiting time- Once execution starts, the scheduling process does not change the time required for the process to complete. The only thing it affects is the waiting time of the process, i.e., the time the process spends waiting in the ready queue. The formula for calculating it is:
Waiting Time = Turnaround Time − Burst Time
Response time- Turnaround time is not considered the best criterion for comparing scheduling algorithms in an interactive system, because a process may produce some output early while continuing to compute other results. Another criterion is the time taken from process submission until the first response is produced. This is called response time, and the formula for calculating it is:
Response Time = Time of First Response − Arrival Time
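A quick worked example (the numbers are chosen purely for illustration): suppose a process arrives at time 0, first gets the CPU at time 2, has a total CPU burst of 5 units, and completes at time 9. Then Turnaround Time = 9 − 0 = 9, Waiting Time = 9 − 5 = 4, and Response Time = 2 − 0 = 2.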

3. Scheduling Algorithms
Prepare on your own
A Process Scheduler schedules different processes to be assigned to the CPU based on
particular scheduling algorithms. There are six popular process scheduling algorithms.
 First-Come, First-Served (FCFS) Scheduling (a sketch follows this list)
 Shortest-Job-Next (SJN) Scheduling
 Priority Scheduling
 Shortest Remaining Time
 Round Robin(RR) Scheduling
 Multiple-Level Queues Scheduling
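As an illustration of the criteria above, here is a minimal sketch of the FCFS arithmetic in C, assuming three processes that all arrive at time 0 with illustrative burst times:

/* FCFS: processes run in arrival order; with all arrivals at
 * time 0, each process waits for the bursts before it. */
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};                 /* illustrative burst times */
    int n = 3;
    int waiting = 0, total_wait = 0, total_tat = 0;
    for (int i = 0; i < n; i++) {
        int turnaround = waiting + burst[i];  /* TAT = WT + burst when arrival = 0 */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, waiting, turnaround);
        total_wait += waiting;
        total_tat += turnaround;
        waiting += burst[i];                  /* next process starts after this one */
    }
    printf("average waiting=%.2f average turnaround=%.2f\n",
           (double)total_wait / n, (double)total_tat / n);
    return 0;
}

For these values the program reports an average waiting time of 17.00 and an average turnaround time of 27.00.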
4. Thread Scheduling
Scheduling of threads involves scheduling at two boundaries:
1. Scheduling of user-level threads (ULT) to kernel-level threads (KLT) via lightweight
process (LWP) by the application developer.
2. Scheduling of kernel-level threads by the system scheduler to perform different
unique OS functions.
User-Level Thread
User-level threads are implemented by user-level software. They are created and managed by the thread library that the OS provides as an API for creating, managing, and synchronizing threads. They are faster than kernel-level threads and are basically represented by a program counter, stack, registers, and a PCB.
Example – user threads library includes POSIX threads, Mach C-Threads
Kernel-Level Thread
In operating systems, threads are the unit of execution within a process. Kernel-level threads are handled directly by the OS, and their management is done by the kernel.
With kernel-level threads, each thread is managed independently, and the kernel provides each thread with its own context, holding information about the thread's status, such as its name, group, and priority.
Example – The example of Kernel-level threads are Java threads, POSIX threads, etc.
The two main types of threads are user-level threads and kernel-level threads. A diagram that
demonstrates these is as follows −
5. Process Synchronization: Background
Process synchronization (or simply synchronization) is the way in which processes that share the same memory space are managed in an operating system. It helps maintain the consistency of data by using variables or hardware so that only one process can make changes to the shared memory at a time. There are various solutions for this, such as semaphores, mutex locks, synchronization hardware, etc.
What is Process Synchronization in OS?
An operating system is software that manages all applications on a device and basically helps
in the smooth functioning of our computer. Because of this reason, the operating system has
to perform many tasks, sometimes simultaneously. This isn't usually a problem unless these
simultaneously occurring processes use a common resource.
For example, consider a bank that stores the account balance of each customer in the same
database. Now suppose you initially have x rupees in your account. Now, you take out some
amount of money from your bank account, and at the same time, someone tries to look at the
amount of money stored in your account. As you are taking out some money from your
account, after the transaction, the total balance left will be lower than x. But, the transaction
takes time, and hence the person reads x as your account balance which leads to inconsistent
data. If in some way, we could make sure that only one process occurs at a time, we could
ensure consistent data.

In the above image, if Process 1 and Process 2 happen at the same time, user 2 will get the wrong account balance Y, because Process 1 is still being transacted while the balance is X.
Inconsistency of data can occur when various processes share a common resource in a system
which is why there is a need for process synchronization in the operating system.
How Process Synchronization in OS Works?
Let us take a look at why exactly we need process synchronization. For example, if process 1 is trying to read data present in a memory location while process 2 is trying to change the data present at the same location, there is a high chance that the data read by process 1 will be incorrect.
Let us look at the different elements/sections of a program:
 Entry Section: The entry section handles a process's request to enter the critical section.
 Critical Section: The critical section ensures that only one process at a time is modifying the shared data.
 Exit Section: The exit section handles the entry of other processes to the shared data after one process finishes its execution.
 Remainder Section: The remaining part of the code, which is not categorized as any of the above, is contained in the remainder section.

6. The Critical Section Problem


A critical section is a code segment that can be accessed by only one process at a time. The
critical section contains shared variables that need to be synchronized to maintain the
consistency of data variables. So the critical section problem means designing a way for
cooperative processes to access shared resources without creating data inconsistencies.

In the entry section, the process requests entry into the critical section.
Any solution to the critical section problem must satisfy three requirements:
 Mutual Exclusion: If a process is executing in its critical section, then no other
process is allowed to execute in the critical section.
 Progress: If no process is executing in the critical section and other processes are
waiting outside the critical section, then only those processes that are not executing in
their remainder section can participate in deciding which will enter the critical section
next, and the selection can not be postponed indefinitely.
 Bounded Waiting: A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its
critical section and before that request is granted.

7. Peterson's Solution
Peterson's Solution is a classic software-based solution to the critical section problem. In Peterson's solution, we have two shared variables (a sketch in C follows this list):
 boolean flag[i]: Initialized to FALSE, initially no one is interested in entering
the critical section
 int turn: The process whose turn is to enter the critical section.
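A hedged sketch in C of Peterson's algorithm for two processes (shown here as two threads) follows. On modern hardware the algorithm also needs sequentially consistent memory accesses, so the shared variables are C11 atomics; that detail is our assumption on top of the classical pseudocode. Compile with -pthread.

/* Peterson's solution for processes 0 and 1. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

atomic_bool flag[2];        /* flag[i]: process i wants to enter */
atomic_int turn;            /* whose turn it is to enter */
long counter = 0;           /* shared data guarded by the critical section */

void enter_cs(int i) {
    int other = 1 - i;
    atomic_store(&flag[i], true);      /* announce interest */
    atomic_store(&turn, other);        /* politely yield the turn */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;                              /* busy-wait until it is safe to enter */
}

void exit_cs(int i) {
    atomic_store(&flag[i], false);     /* no longer interested */
}

void *run(void *arg) {
    int i = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        enter_cs(i);
        counter++;                     /* critical section */
        exit_cs(i);
    }
    return NULL;
}

int main(void) {
    int id0 = 0, id1 = 1;
    pthread_t a, b;
    pthread_create(&a, NULL, run, &id0);
    pthread_create(&b, NULL, run, &id1);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter); /* 200000 if mutual exclusion held */
    return 0;
}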

Peterson’s Solution preserves all three conditions


 Mutual Exclusion is assured as only one process can access the critical section at any
time.
 Progress is also assured, as a process outside the critical section does not block other
processes from entering the critical section.

 Bounded Waiting is preserved as every process gets a fair chance.


8. Synchronization Hardware
What is Synchronization Hardware?
Synchronization hardware refers to the various hardware mechanisms that an operating
system uses to coordinate the access of multiple threads or processes to shared resources.
Shared resources include anything that multiple threads or processes can access, such as
memory, files, and input/output devices. Synchronization hardware is essential because
multiple threads or processes can interfere with each other when they access shared resources
simultaneously.
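One classic piece of synchronization hardware is the atomic test-and-set instruction. The sketch below builds a simple spinlock on C11's atomic_flag_test_and_set, which maps to such an instruction on common hardware; the spinlock shape is the standard textbook one, not something specific to this text.

/* Spinlock built on an atomic test-and-set primitive. */
#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;

void acquire(void) {
    /* Atomically set the flag and get its previous value; spin
     * until the previous value was clear (the lock was free). */
    while (atomic_flag_test_and_set(&lock))
        ;
}

void release(void) {
    atomic_flag_clear(&lock);   /* mark the lock free again */
}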

9. Semaphores
A semaphore is a signaling mechanism: a thread that is waiting on a semaphore can be signaled by another thread. This is different from a mutex, which can be unlocked only by the thread that called the wait (lock) function.
A semaphore uses two atomic operations, wait and signal for process synchronization.
A Semaphore is an integer variable, which can be accessed only through two operations
wait() and signal().
There are two types of semaphores: Binary Semaphores and Counting Semaphores.
 Binary Semaphores: They can take only the values 0 or 1. They are also known as mutex locks, as they can provide mutual exclusion. All the processes can share the same mutex semaphore, which is initialized to 1. A process must wait until the semaphore's value becomes 1; it then sets the value to 0 and enters its critical section. When it completes its critical section, it resets the value to 1 so that another process can enter its critical section (see the sketch after this list).
 Counting Semaphores: They can have any value and are not restricted to a certain
domain. They can be used to control access to a resource that has a limitation on the
number of simultaneous accesses. The semaphore can be initialized to the number of
instances of the resource. Whenever a process wants to use that resource, it checks if
the number of remaining instances is more than zero, i.e., the process has an instance
available. Then, the process can enter its critical section thereby decreasing the value
of the counting semaphore by 1. After the process is over with the use of the instance
of the resource, it can leave the critical section thereby adding 1 to the number of
available instances of the resource.
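The sketch below shows the wait() and signal() operations using POSIX unnamed semaphores on a Linux-style system (sem_wait and sem_post); initializing the semaphore to 1 makes it behave as the binary semaphore described above. Compile with -pthread.

/* Binary semaphore guarding a critical section. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex;
int shared = 0;

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 50000; i++) {
        sem_wait(&mutex);      /* wait(): decrement, block if the value is 0 */
        shared++;              /* critical section */
        sem_post(&mutex);      /* signal(): increment, wake one waiter */
    }
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);    /* initial value 1 -> binary semaphore */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);   /* prints 100000 */
    sem_destroy(&mutex);
    return 0;
}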

10. Classical Problems of Synchronization


Synchronization Problems
These problems are used for testing nearly every newly proposed synchronization scheme.
The following problems of synchronization are considered as classical problems:
1. Bounded-buffer (or Producer-Consumer) Problem,
2. Dining-Philosophers Problem,
3. Readers and Writers Problem,
4. Sleeping Barber Problem
These are summarized below.
Bounded-Buffer (or Producer-Consumer) Problem
The bounded-buffer problem is also called the producer-consumer problem. The solution to this problem is to create two counting semaphores, "full" and "empty", to keep track of the current number of full and empty buffer slots respectively. Producers produce items and consumers consume them, but both use one of the buffer slots at a time.
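A compact sketch of this solution follows, with the counting semaphores "empty" and "full" plus a mutex; the buffer size of 8 and the item count of 20 are illustrative assumptions. Compile with -pthread on a Linux-style POSIX system.

/* Bounded buffer with one producer and one consumer. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8
int buffer[N];
int in = 0, out = 0;
sem_t empty_slots, full_slots, mutex;

void *producer(void *arg) {
    (void)arg;
    for (int item = 1; item <= 20; item++) {
        sem_wait(&empty_slots);       /* wait for a free slot */
        sem_wait(&mutex);
        buffer[in] = item;            /* deposit the item */
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full_slots);        /* one more full slot */
    }
    return NULL;
}

void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 20; i++) {
        sem_wait(&full_slots);        /* wait for an available item */
        sem_wait(&mutex);
        int item = buffer[out];       /* remove the item */
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty_slots);       /* one more empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty_slots, 0, N);     /* all slots empty at the start */
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}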
Dining-Philosophers Problem
The dining philosophers problem states that K philosophers are seated around a circular table with one chopstick between each pair of philosophers. A philosopher may eat only if he can pick up the two chopsticks adjacent to him. A chopstick may be picked up by either of its adjacent philosophers, but not by both at once. This problem involves the allocation of limited resources to a group of processes in a deadlock-free and starvation-free manner.

Readers and Writers Problem


Suppose that a database is to be shared among several concurrent processes. Some of these
processes may want only to read the database, whereas others may want to update (that is, to
read and write) the database. We distinguish between these two types of processes by
referring to the former as readers and to the latter as writers. In OS terms, this situation is called the readers-writers problem. Problem parameters:
 One set of data is shared among a number of processes.
 Once a writer is ready, it performs its write. Only one writer may write at a time.
 If a process is writing, no other process can read it.
 If at least one reader is reading, no other process can write.
 Readers only read; they may not write.
Sleeping Barber Problem
 Consider a barber shop with one barber, one barber chair, and N chairs for waiting. When there are no customers, the barber goes to sleep in the barber chair and must be woken when a customer comes in. While the barber is cutting hair, new customers take empty seats to wait, or leave if there is no vacancy. This is basically the sleeping barber problem.

11. Monitors
A monitor in an operating system is a synchronization construct that enables multiple processes or threads to coordinate their actions and ensures that they do not interfere with each other or produce unexpected results. It also ensures that only one thread at a time executes within a critical code section.
The monitors’ concept was introduced in the programming language Concurrent Pascal by
Per Brinch Hansen in 1972. Since then, they have been implemented in various programming
languages. Monitors are dynamic tools that help to manage concurrent access to shared
resources in the operating system. Concurrent access means allowing more than one user to
access a computer simultaneously.
Why are Monitors Used?
Monitors in operating systems are used to manage access to shared resources, like files or
data, among multiple processes. They ensure that only one process can use a resource at a time, preventing conflicts and data corruption. Monitors simplify synchronization
and protect data integrity, making it easier for programmers to create reliable software.
They serve as "guards" for critical code sections, ensuring that no two processes can enter
them simultaneously. Monitors are like traffic lights that control access to resources,
preventing crashes and ensuring a smooth flow of data and tasks in an operating system.
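C has no built-in monitor construct, so the sketch below imitates one with a mutex (the monitor lock) and a condition variable, the usual building blocks; the bounded counter being guarded is an illustrative assumption. Compile with -pthread.

/* Monitor-style construct: only one thread runs inside the
 * procedures at a time, and a thread can wait on a condition. */
#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;     /* admits one thread inside at a time */
    pthread_cond_t not_zero;  /* wait here until the counter is positive */
    int count;
} counter_monitor;

counter_monitor m = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0 };

void monitor_increment(counter_monitor *cm) {
    pthread_mutex_lock(&cm->lock);        /* enter the monitor */
    cm->count++;
    pthread_cond_signal(&cm->not_zero);   /* wake one waiting thread */
    pthread_mutex_unlock(&cm->lock);      /* leave the monitor */
}

void monitor_decrement(counter_monitor *cm) {
    pthread_mutex_lock(&cm->lock);        /* enter the monitor */
    while (cm->count == 0)                /* sleep until signaled */
        pthread_cond_wait(&cm->not_zero, &cm->lock);
    cm->count--;
    pthread_mutex_unlock(&cm->lock);      /* leave the monitor */
}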
UNIT – III
1. Deadlocks:
A deadlock is a situation where a set of processes is blocked because each process is holding
a resource and waiting for another resource acquired by some other process. In this section, we will discuss deadlock, its necessary conditions, etc. in detail.
What is Deadlock?
Deadlock is a situation in computing where two or more processes are unable to proceed
because each is waiting for the other to release resources. Key concepts include mutual
exclusion, resource holding, circular wait, and no preemption.
Consider an example when two trains are coming toward each other on the same track and
there is only one track, none of the trains can move once they are in front of each other. This
is a practical example of deadlock.
How Does Deadlock occur in the Operating System?
Before going into detail about how deadlock occurs in the Operating System, let’s first
discuss how the Operating System uses the resources present. A process in an operating
system uses resources in the following way.
 Requests a resource
 Uses the resource
 Releases the resource
A situation occurs in operating systems when there are two or more processes that hold some
resources and wait for resources held by other(s). For example, in the below diagram, Process
1 is holding Resource 1 and waiting for resource 2 which is acquired by process 2, and
process 2 is waiting for resource 1.
Necessary Conditions for Deadlock in OS
Deadlock can arise if the following four conditions hold simultaneously (Necessary
Conditions)
 Mutual Exclusion: Two or more resources are non-shareable (Only one process can
use at a time).
 Hold and Wait: A process is holding at least one resource and waiting for resources.
 No Preemption: A resource cannot be taken from a process unless the process releases
the resource.
 Circular Wait: A set of processes waiting for each other in circular form.
What is Deadlock Detection?
Deadlock detection is a process in computing where the system checks if there are any sets of
processes that are stuck waiting for each other indefinitely, preventing them from moving
forward. In simple words, deadlock detection is the process of finding out whether any processes are stuck waiting in a cycle or not. There are several algorithms for this, such as:
 Resource Allocation Graph
 Banker's Algorithm
These algorithms help in the detection of deadlock in an operating system.
What are the Methods For Handling Deadlock?
There are three ways to handle deadlock
 Deadlock Prevention or Avoidance
 Deadlock Recovery
 Deadlock Ignorance
Deadlock Prevention or Avoidance
Deadlock prevention and avoidance is one of the methods for handling deadlock. First, we will discuss deadlock prevention, then deadlock avoidance.
What is Deadlock Prevention?
In deadlock prevention, the aim is to ensure that at least one of the four necessary conditions for deadlock can never hold. This can be done by the following methods:
(i) Mutual Exclusion
We use a lock only for non-shareable resources; if a resource is shareable (like a read-only file), we do not use a lock. This ensures that, for shareable resources, multiple processes can access the resource at the same time. Problem: we can do this only for shareable resources; for non-shareable resources like a printer, we still have to use mutual exclusion.
(ii) Hold and Wait
To ensure that hold and wait never occurs in the system, we must guarantee that whenever a process requests a resource, it does not hold any other resources.
 We can provide all the resources a process requires for its execution before it starts executing. Problem: if a process requires three resources and we allocate all of them before execution starts, there may be a situation in which it initially needs only two resources and requests the third one only after an hour. This causes starvation for another process that wants that resource, since during that waiting time the resource could have been allocated to the other process so it could complete its execution.
 We can ensure that when a process requests any resource, it does not hold any other resources at that time. Example: suppose there are three resources: a DVD drive, a file, and a printer. First the process requests the DVD drive and the file in order to copy data into the file; suppose this takes one hour. After that, the process frees all its resources and then requests the file and the printer in order to print the file.
(iii) No Preemption
If a process is holding some resources and requests other resources that are held by other processes and are not immediately available, then the resources the process is currently holding are preempted. After some time, the process requests the old resources again, together with the other required resources, in order to restart.
For example: process p1 holds resource r1 and requests r2, which is held by process p2. Then p1 is preempted from r1, and after some time it tries to restart by requesting both r1 and r2.
Problem: this can cause the livelock problem.
Livelock: a livelock is a situation in which two or more processes continuously change their state in response to each other without making any real progress.
Example:
 Suppose there are two processes p1 and p2 and two resources r1 and r2.
 Now p1 has acquired r1 and needs r2, while p2 has acquired r2 and needs r1.
 According to the method above, both p1 and p2 detect that they cannot acquire the second resource, so each releases the resource it is holding and then tries again.
 Continuous cycle: p1 again acquires r1 and requests r2, while p2 again acquires r2 and requests r1. There is no overall progress, yet the processes keep changing their state as they release resources and then acquire them again. This is a livelock.
(iv) Circular Wait
To remove circular wait from the system, we can impose an ordering on the resources in which processes must acquire them, as the sketch below illustrates.
Example: if there are processes p1 and p2 and resources r1 and r2, we can fix the acquisition order so that a process must first acquire resource r1 and then resource r2. The process that has acquired r1 is allowed to acquire r2; other processes must wait until r1 is free.
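A minimal sketch of this rule in C, with two pthread mutexes standing in for resources r1 and r2 (the function body is illustrative):

/* Resource ordering: every thread locks r1 before r2, so no
 * circular wait can form between threads using both resources. */
#include <pthread.h>

pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

void use_both_resources(void) {
    pthread_mutex_lock(&r1);    /* rule: always acquire r1 first */
    pthread_mutex_lock(&r2);    /* ...and only then r2 */
    /* work with both resources here */
    pthread_mutex_unlock(&r2);  /* release in reverse order */
    pthread_mutex_unlock(&r1);
}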
These are the deadlock prevention methods, but in practice only the fourth method is used, as removing the other three conditions has serious disadvantages.
What is Deadlock Avoidance?
Avoidance is somewhat futuristic: to use the strategy of avoidance, we must make an assumption. We need to ensure that all information about the resources a process will need is known to us before the process executes. We use the Banker's algorithm (which is in turn a gift from Dijkstra) to avoid deadlock.
In prevention and avoidance, we get correctness of data, but performance decreases.
What is Deadlock Recovery?
If deadlock prevention or avoidance is not applied in the software, we can handle deadlock by detection and recovery, which consists of two phases:
1. In the first phase, we examine the state of the processes and check whether there is a deadlock in the system.
2. If a deadlock is found in the first phase, we apply an algorithm to recover from it.
In Deadlock detection and recovery, we get the correctness of data but performance
decreases.
Methods of Deadlock Recovery
There are several Deadlock Recovery Techniques:
 Manual Intervention
 Automatic Recovery
 Process Termination
 Resource Preemption
1. Manual Intervention
When a deadlock is detected, one option is to inform the operator and let them handle the
situation manually. While this approach allows for human judgment and decision-making, it
can be time-consuming and may not be feasible in large-scale systems.
2. Automatic Recovery
An alternative approach is to enable the system to recover from deadlock automatically. This
method involves breaking the deadlock cycle by either aborting processes or preempting
resources. Let’s delve into these strategies in more detail.
3. Process Termination
 Abort all Deadlocked Processes
o This approach breaks the deadlock cycle, but it comes at a significant cost.
The processes that were aborted may have executed for a considerable amount
of time, resulting in the loss of partial computations. These computations may
need to be recomputed later.
 Abort one process at a time
o Instead of aborting all deadlocked processes simultaneously, this strategy
involves selectively aborting one process at a time until the deadlock cycle is
eliminated. However, this incurs overhead as a deadlock-detection algorithm
must be invoked after each process termination to determine if any processes
are still deadlocked.
o Factors for choosing the termination order:
o The process’s priority
o Completion time and the progress made so far
o Resources consumed by the process
o Resources required to complete the process
o Number of processes to be terminated
o Process type (interactive or batch)
4. Resource Preemption
 Selecting a Victim
o Resource preemption involves choosing which resources and processes should
be preempted to break the deadlock. The selection order aims to minimize the
overall cost of recovery. Factors considered for victim selection may include
the number of resources held by a deadlocked process and the amount of time
the process has consumed.
 Rollback
o If a resource is preempted from a process, the process cannot continue its
normal execution as it lacks the required resource. Rolling back the process to
a safe state and restarting it is a common approach. Determining a safe state
can be challenging, leading to the use of total rollback, where the process is
aborted and restarted from scratch.
 Starvation Prevention
o To prevent resource starvation, it is essential to ensure that the same process is
not always chosen as a victim. If victim selection is solely based on cost
factors, one process might repeatedly lose its resources and never complete its
designated task. To address this, it is advisable to limit the number of times a
process can be chosen as a victim, including the number of rollbacks in the
cost factor.
What is Deadlock Ignorance?
If deadlock is very rare, we simply let it happen and reboot the system. This is the approach that both Windows and UNIX take; it is known as the ostrich algorithm.
With deadlock ignorance, performance is better than with the above two methods, but correctness of data is not guaranteed.

UNIT – IV
1. Swapping
Swapping is a memory management scheme in which any process can be temporarily
swapped from main memory to secondary memory so that the main memory can be made
available for other processes. It is used to improve main memory utilization. In secondary
memory, the place where the swapped-out process is stored is called swap space.
The purpose of swapping in an operating system is to access data present on the hard disk and bring it into RAM so that application programs can use it. Remember that swapping is used only when the data is not present in RAM.
Although swapping affects system performance, it helps to run larger processes and more processes at once. This is why swapping is also referred to as memory compaction.
The concept of swapping is divided into two further concepts: swap-in and swap-out.
o Swap-out is a method of removing a process from RAM and adding it to the hard
disk.
o Swap-in is a method of removing a program from a hard disk and putting it back into
the main memory or RAM.
Example: Suppose the user process's size is 2048 KB and the standard hard disk on which swapping takes place has a data transfer rate of 1 MB per second (1024 KB per second). We can calculate how long it takes to transfer the process from main memory to secondary memory:
User process size = 2048 KB
Data transfer rate = 1024 KB per second
Time = process size / transfer rate
= 2048 / 1024
= 2 seconds
= 2000 milliseconds
Counting both swap-out and swap-in, the process will take 4000 milliseconds.
2. Contiguous Memory Allocation
Contiguous memory allocation is an operating system memory allocation method. What, however, is memory allocation? A program or process requires memory space in order to run, so a process must be given a specific amount of memory that corresponds to its needs. This procedure is called memory allocation.
Contiguous memory allocation is one of these memory allocation strategies. As the name suggests, in this strategy we allocate contiguous blocks of memory to each process. Therefore, whenever a process requests main memory, we allot it a contiguous segment from the free area according to its size.
Contiguous Memory Management Techniques
Below are two contiguous memory management techniques. Let's understand these in detail.
1. Fixed Partition Scheme
In the fixed partition scheme, memory is divided into a fixed number of partitions, and every partition accommodates only one process. The degree of multiprogramming is restricted by the number of partitions in memory, and the maximum size of a process is restricted by the maximum size of a partition. Every partition is associated with limit registers.
 Limit Registers hold two limits:
 Lower Limit: the starting address of the partition.
 Upper Limit: the ending address of the partition.

Internal fragmentation is found in the fixed partition scheme. To overcome the problem of internal fragmentation, the variable partition scheme is used instead of the fixed partition scheme.
Disadvantages of the fixed partition scheme:
 The maximum process size must be less than or equal to the maximum partition size.
 The degree of multiprogramming is limited by the number of partitions.
 Internal fragmentation, discussed above, is present.
 If a process of, say, 19 KB requests memory and the available free space is not contiguous, we are not able to allocate the space.
2. Variable Partition Scheme
In the variable partition scheme, memory is initially a single contiguous free block. Whenever a request from a process arrives, a partition of the corresponding size is made in the memory. If smaller processes keep arriving, the larger free blocks are broken into smaller partitions.
 In the variable partition scheme, the memory is initially one full contiguous free block.
 Memory is divided into partitions according to process size, and process sizes vary.
 One partition is allocated to each active process.

External fragmentation is found in the variable partition scheme. To overcome the problem of external fragmentation, the compaction technique is used, or non-contiguous memory management techniques are used.
3. Paging
Paging is a technique that divides memory into fixed-sized blocks. The main memory is
divided into blocks known as Frames and the logical memory is divided into blocks known as
Pages. Paging requires extra time for the address conversion, so we use a special hardware
cache memory known as TLB.
This concept of Paging in OS includes dividing each process in the form of pages of equal
size and also, the main memory is divided in the form of frames of fixed size. Now, each
page of the process when retrieved into the main memory, is stored in one frame of the
memory, and hence, it is also important to have the pages and frames of equal size for
mapping and maximum utilization of the memory.
Its main advantage is that the pages can be stored at different locations of the memory and not
necessarily in a contiguous manner, though priority is always set to firstly find the contiguous
frames for allocating the pages.

If a process has n pages in the secondary memory then there must be n frames available in the
main memory for mapping.
Example to understand Paging in OS
CASE-1 (Contiguous Allocation of Pages)
As we can see in the above image, we have main memory divided into 16 frames of the size
of 1KB each. Also, there are 4 processes available in the secondary (local) memory: P1, P2,
P3, and P4 of a size of 4KB each. Clearly, each process needs to be further subdivided into
pages of size of 1KB each, so that one page can be easily mapped to one frame of the main
memory. This divides each process into 4 pages and the total for 4 processes gives 16 pages
of 1KB each. Initially, all the frames were empty and therefore, pages will be allocated here
in a contiguous manner.
CASE-2 (Non-Contiguous Allocation of Pages)
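The address arithmetic behind this example can be sketched in a few lines of C; the page-table contents and the logical address below are made-up values, with 1 KB pages as in the example above:

/* Split a logical address into page number and offset, then
 * translate it through a (made-up) page table. */
#include <stdio.h>

#define PAGE_SIZE 1024                      /* 1 KB pages */

int main(void) {
    int page_table[4] = {5, 9, 2, 12};      /* page -> frame (illustrative) */
    unsigned logical = 3210;                /* some logical address */
    unsigned page = logical / PAGE_SIZE;    /* page number: 3210/1024 = 3 */
    unsigned offset = logical % PAGE_SIZE;  /* offset within page: 138 */
    unsigned physical = page_table[page] * PAGE_SIZE + offset;
    printf("logical %u -> page %u, offset %u -> physical %u\n",
           logical, page, offset, physical);
    return 0;
}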
4. Segmentation
Segmentation divides a process into smaller subparts known as modules or segments. The divided segments need not be placed in contiguous memory. Since the segments are of variable length, internal fragmentation does not take place. The length of a segment is decided by the purpose of that segment in the user program.
We can say that logical address space or the main memory is a collection of segments.

Types of Segmentation
Segmentation can be divided into two types:
1. Virtual Memory Segmentation: Virtual memory segmentation divides a process into n segments, but the segments are not all divided at one time. Virtual memory segmentation may or may not take place at the run time of a program.
2. Simple Segmentation: Simple segmentation also divides the processes into n segments, but the segmentation is done all at once. Simple segmentation takes place at the run time of a program. Simple segmentation may scatter the segments into memory such that one segment of the process can be at a different location than another (in a non-contiguous manner).

5. Virtual Memory Management – Demand Paging


Demand paging is a technique used in virtual memory systems where pages are brought into main memory only when they are requested or needed by the CPU. In demand paging, the operating system loads only the necessary pages of a program into memory at runtime, instead of loading the entire program into memory at the start. A page fault occurs when the program needs to access a page that is not currently in memory.
The operating system then loads the required page from the disk into memory and updates the page tables accordingly. This process is transparent to the running program, which continues to run as if the page had always been in memory.

6. Copy – on – write
Copy-on-write, or simply COW, is a resource management technique. One of its main uses is in the implementation of the fork system call, in which the parent and child share the virtual memory pages of the process.
In UNIX-like operating systems, the fork() system call creates a duplicate of the parent process, called the child process.
The idea behind copy-on-write is that when a parent process creates a child process, both processes initially share the same pages in memory, and these shared pages are marked as copy-on-write. If either process tries to modify a shared page, only then is a copy of that page created, and the modification is made on the copy, so the other process is unaffected.
Suppose there is a process P that creates a new process Q, and then process P modifies page 3. The figures below show what happens before and after process P modifies page 3.
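The effect visible to programs can be sketched as follows (assuming a Unix-like system; the variable stands in for the modified page):

/* After fork(), parent and child logically have separate copies;
 * with COW the kernel copies a page only when one of them writes. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int page_data = 3;                  /* stands in for "page 3" above */
    pid_t pid = fork();                 /* pages are shared, marked copy-on-write */
    if (pid == 0) {
        page_data = 99;                 /* write triggers a private copy */
        printf("child sees %d\n", page_data);    /* prints 99 */
        _exit(0);
    }
    wait(NULL);
    printf("parent still sees %d\n", page_data); /* prints 3 */
    return 0;
}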

7. Page replacement
Page replacement algorithms are techniques used in operating systems to
manage memory efficiently when the virtual memory is full. When a new page needs to be
loaded into physical memory, and there is no free space, these algorithms determine which
existing page to replace.
If no page frame is free, the virtual memory manager performs a page replacement operation
to replace one of the pages existing in memory with the page whose reference caused the
page fault. It is performed as follows: The virtual memory manager uses a page replacement
algorithm to select one of the pages currently in memory for replacement, accesses the page
table entry of the selected page to mark it as “not present” in memory, and initiates a page-out
operation for it if the modified bit of its page table entry indicates that it is a dirty page.
Common Page Replacement Techniques [ Solve on your own]
 First In First Out (FIFO) (a sketch follows this list)
 Optimal Page replacement
 Least Recently Used
 Most Recently Used (MRU)
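As a starting point for solving these, here is a compact sketch of the FIFO policy: the page resident longest is evicted on a fault. The reference string and the capacity of 3 frames are illustrative assumptions.

/* FIFO page replacement over a small reference string. */
#include <stdio.h>

int main(void) {
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4};
    int n = sizeof(refs) / sizeof(refs[0]);
    int frames[3] = {-1, -1, -1};       /* -1 marks an empty frame */
    int next = 0, faults = 0;           /* next: FIFO slot to replace */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < 3; f++)
            if (frames[f] == refs[i])
                hit = 1;                /* page already resident */
        if (!hit) {
            frames[next] = refs[i];     /* evict the oldest resident page */
            next = (next + 1) % 3;
            faults++;
        }
    }
    printf("page faults = %d\n", faults);  /* 7 for this reference string */
    return 0;
}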

8. Thrashing
Thrashing in OS is a phenomenon that occurs in computer operating systems when the
system spends an excessive amount of time swapping data between physical memory (RAM)
and virtual memory (disk storage) due to high memory demand and low available
resources.
Thrashing can occur when there are too many processes running on a system and not enough
physical memory to accommodate them all. As a result, the operating system
must constantly swap pages of memory between physical memory and virtual memory. This
can lead to a significant decrease in system performance, as the CPU is spending more time
swapping pages than it is actually executing code.
