OPERATING SYSTEM: 4TH SEMESTER

UNIT-I

Fundamentals of Operating Systems


Introduction to Operating System
An operating system acts as an intermediary between the user of a computer and computer
hardware. The purpose of an operating system is to provide an environment in which a user can
execute programs in a convenient and efficient manner.
An operating system is software that manages the computer hardware. The hardware must
provide appropriate mechanisms to ensure the correct operation of the computer system and to
prevent user programs from interfering with the proper operation of the system.

Operating System – Definition:


 An operating system is a program that controls the execution of application programs
and acts as an interface between the user of a computer and the computer hardware.
 A more common definition is that the operating system is the one program running at
all times on the computer (usually called the kernel), with all else being application
programs.
 An operating system is concerned with the allocation of resources and services, such
as memory, processors, devices, and information. The operating system
correspondingly includes programs to manage these resources, such as a traffic
controller, a scheduler, memory management module, I/O programs, and a file
system.
Applications of Operating System
Following are some of the important activities that an operating system performs:
 Security − By means of passwords and similar techniques, it prevents unauthorized access to programs and data.
 Control over system performance − Recording delays between a request for a service and the response from the system.
 Job accounting − Keeping track of time and resources used by various jobs and users.
 Error detecting aids − Production of dumps, traces, error messages, and other debugging and error-detecting aids.
 Coordination between other software and users − Coordination and assignment of compilers, interpreters, assemblers and other software to the various users of the computer system.
Functions of an operating system
The primary function of an operating system is to provide an environment for the execution of user programs. To do so, it is divided into a number of small modules, each of which performs a specialized task.
The various functions of operating system are:
1. Process Management
2. Main memory Management
3. Secondary storage Management
4. File Management
5. I/O Management
6. Protection and security
7. Networking
8. Command interpretation

1. Process Management
 Process management refers to the assignment of the processor to the different tasks being performed by the computer system. The process management module schedules the various processes of a system for execution.
 The operating system is responsible for the following activities in connection with process management:
1. Creating and deleting both user and system processes (a minimal creation/termination sketch follows this list).
2. Suspending and resuming processes.
3. Providing mechanisms for process synchronization, such as semaphores.
4. Providing mechanisms for process communication (i.e., communication between different processes in a system).
5. Providing mechanisms for deadlock handling. Deadlock is a condition in which a set of processes waits indefinitely for some shared resource.
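To make the first two activities concrete, the following is a minimal user-space sketch (in Python, assuming a POSIX system) of creating a process, letting it run, and reaping it when it terminates. A real operating system implements these operations inside the kernel; this only illustrates the idea through the system-call interface.

```python
import os
import time

# Minimal sketch of process creation and deletion (POSIX only):
# the parent creates a child with fork(), the child does some work
# and exits, and the parent suspends itself until the child terminates.
pid = os.fork()
if pid == 0:
    # Child process: pretend to do some work, then terminate.
    print(f"child {os.getpid()} running")
    time.sleep(1)
    os._exit(0)
else:
    # Parent process: wait (suspend) until the child terminates,
    # letting the OS reclaim the child's resources.
    finished, status = os.waitpid(pid, 0)
    print(f"parent reaped child {finished}")
```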
2. Memory Management
 The operating system manages the Primary Memory or Main Memory. Main memory
is made up of a large array of bytes or words where each byte or word is assigned a
certain address.
 Main memory is a fast storage medium that can be accessed directly by the CPU. For a program to be executed, it must first be loaded into main memory.
 An operating system performs the following activities for memory management:
1. It keeps track of primary memory, i.e., which bytes of memory are used by which user program.
2. Deciding which processes are to be loaded into memory when memory space becomes available.
3. Allocating and deallocating memory spaces as needed (a toy bookkeeping sketch follows this list).
 In multiprogramming, the OS decides the order in which processes are granted access to memory, and for how long.
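As a toy illustration of this bookkeeping (not how a production kernel implements it), the sketch below tracks which process owns which byte range using a first-fit free list; all names are invented for clarity.

```python
# Toy model of the OS's memory bookkeeping: a free list of holes and a
# table mapping each process to the block it owns.
free_blocks = [(0, 1024)]   # list of (start, size) holes, 1 KB of "memory"
allocations = {}            # pid -> (start, size)

def allocate(pid, size):
    """First-fit allocation: carve the block out of the first large-enough hole."""
    for i, (start, hole) in enumerate(free_blocks):
        if hole >= size:
            free_blocks[i] = (start + size, hole - size)
            allocations[pid] = (start, size)
            return start
    raise MemoryError("no hole large enough")

def deallocate(pid):
    """Return a terminated process's block to the free list."""
    start, size = allocations.pop(pid)
    free_blocks.append((start, size))

allocate(pid=1, size=100)   # process 1 owns bytes 0..99
allocate(pid=2, size=200)   # process 2 owns bytes 100..299
deallocate(1)               # bytes 0..99 become free again
```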

3. Secondary storage Management


 The main memory has a limited size and cannot store all the user programs at once. Moreover, when the power is lost, the data it holds is also lost. So the computer system provides secondary storage devices, such as magnetic disks and tapes, to back up main memory.
 The secondary storage devices store system programs, such as compilers, editors and assemblers, as well as user programs that are not used frequently.
 The operating system performs the following functions in connection with disk management:
1. Free space management, i.e., it manages free space on the disk by reclaiming space that is no longer in use.
2. Storage allocation, i.e., it allocates storage areas for storing new programs.
3. Disk scheduling.

4. File Management
A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories.
An operating system does the following activities for file management:
 Keeps track of information such as the location, usage and status of files; the collective facilities are often known as the file system.
 Decides who gets the resources.
 Allocates the resources.
 De-allocates the resources.

5. I/O Management
 I/O management refers to the coordination and assignment of the different input and
output devices to the various programs that are being executed.
 Thus, an OS is responsible for managing various I/O devices, such as keyboard,
mouse, printer and monitor.
 The I/O subsystem consists of the following components:
1. A memory management component that includes buffering, caching and spooling.
2. A general device-driver interface.
3. Drivers for specific hardware devices.
 The operating system performs the following functions related to I/O management:
1. Issuing commands to various input and output devices.
2. Capturing interrupts, such as those signalling hardware failures.
3. Handling errors that appear while reading from and writing to devices.
6. Protection and Security
 Security deals with protecting the various resources and information of a computer
system against destruction and unauthorized access.
 A total approach to computer security involves both external and internal security.
 External security deals with securing the computer system against external factors
such as fires, floods, earthquakes, stolen disks/tapes, leaking out of stored information
by a person who has access to the information.
 Internal security deals with users’ authentication, access control and cryptography.
7. Networking
 Networking is used for exchanging information among different computers that are distributed across various locations.
 Distributed systems consist of multiple processors, and each processor has its own memory and clock.
 The various processors communicate using communication links, such as telephone lines or buses.
 The processors in a distributed system vary in size and function. They may include small microprocessors, workstations, minicomputers and large general-purpose computer systems.
 Thus, a distributed system enables us to share the various resources of the network.
 This results in computation speedup, increased functionality, increased data
availability and better reliability.

8. Command Interpretation
 The command interpreter is the basic interface between the computer and the user.
 The command interpreter provides a set of commands using which the user can instruct the computer to get some job done.
 The command interpreter translates these commands into requests to the operating system, known as system calls.
 When a user gives instructions to the computer, the command interpreter takes care of interpreting these commands and directing the system resources to handle the requests.
 There are two different user interfaces supported by various operating systems:
1. Command line interface
2. Graphical user interface
 Command line interface (CLI): It is the textual user interface in which the user gives instructions to the computer by typing commands.
 Graphical user interface (GUI): The GUI presents the user with a screen full of graphical icons or menus and allows the user to make a rapid selection from the displayed icons or menus to give instructions to the computer.
Operating system as a resource manager
 A computer system usually has many hardware and software resources such as
processor, memory, disks, printers, I/O devices etc. The operating system acts as a
manager of these resources.
 The operating system is responsible for controlling and allocating various hardware
and software resources to different users in an optimal and efficient manner.
 The task of resource management becomes essential in multi-user operating systems, where different users compete for the same resources.
 As a resource manager, an operating system:
1. Keeps track of who is using which resource.
2. Grants resource requests.
3. Accounts for resource usage.
4. Mediates conflicting requests from different programs and users.
 The operating system manages resources in two ways:
1. Time Multiplexing: It defines the sharing of resources on the basis of fixed time slices. For example, the operating system allocates a resource, such as the CPU, to program A for a fixed time slice. After that time slice is over, the CPU is allocated to another program B, and so on.
2. Space Multiplexing: It defines the concurrent sharing of resources among different programs. For example, sharing of the hard disk and main memory is space multiplexing.
Structures of Operating Systems
Operating system can be implemented with the help of various structures. The structure of the OS
depends mainly on how the various common components of the operating system are
interconnected and melded into the kernel. Depending on this we have following structures of the
operating system:
Simple structure:
Such operating systems do not have a well-defined structure and are small, simple and limited systems. The interfaces and levels of functionality are not well separated. MS-DOS is an example of such an operating system. In MS-DOS, application programs are able to access the basic I/O routines directly. Such operating systems cause the entire system to crash if one of the user programs fails.

Layered structure:
An OS can be broken into pieces, retaining much more control over the system. In this structure the OS is broken into a number of layers (levels). The bottom layer (layer 0) is the hardware and the topmost layer (layer N) is the user interface. The layers are designed so that each layer uses the functions of the lower-level layers only. This simplifies debugging: if the lower-level layers have already been debugged and an error occurs, the error must be in the layer currently being tested.
The main disadvantage of this structure is that at each layer, the data needs to be modified and passed on, which adds overhead to the system. Moreover, careful planning of the layers is necessary, as a layer can use only lower-level layers. UNIX is an example of this structure.

Micro-kernel:
This structure designs the operating system by removing all non-essential components from the kernel and implementing them as system and user programs. This results in a smaller kernel called the micro-kernel.
The advantage of this structure is that new services are added in user space and do not require the kernel to be modified. It is thus more secure and reliable: if a service fails, the rest of the operating system remains untouched. Mac OS is an example of this type of OS.

Modular structure or approach:


It is considered the best approach for an OS. It involves designing a modular kernel. The kernel has only a set of core components, and other services are added as dynamically loadable modules, either at boot time or at run time. It resembles the layered structure in that each module has defined and protected interfaces, but it is more flexible than the layered structure because a module can call any other module. The Solaris OS, for example, is organized in this modular fashion.

Role of kernel and Shell


 Each process asks for system resources like computing power, memory, network connectivity, etc. The kernel is the bulk of the executable code in charge of handling such requests.
 The kernel is the main component of most computer operating systems. It is a bridge between applications and the actual data processing done at the hardware level.
The following are the major roles of the kernel:
Resource allocation
 The kernel’s primary function is to manage the computer’s resources and allow other
programs to run and use these resources. These resources are CPU, memory and I/O
devices.
Process Management
 The kernel is in charge of creating, destroying and handling the input output of the
process.
 Communications amongst the different processes is the responsibility of the kernel.
Memory Management
 The memory is a major resource of the computer, and the policy used to manage it is a critical one.
 The kernel builds a virtual address space for each process on top of the physical memory. The different parts of the kernel interact with the memory management subsystem through function calls.
File System
 The kernel builds a structured file system on top of the unstructured hardware.
 The kernel also supports multiple file system types, that is, different ways of organizing the data.
Device Control
 Almost every operation in the system eventually maps onto a physical device.
 All device control operations are performed by code that is specific to the device being addressed. This code is called a device driver.
Inter-Process Communication
 The kernel provides methods for synchronization and communication between processes, called inter-process communication (IPC).
 There are various IPC mechanisms, such as semaphores, shared memory, message queues and pipes (a pipe sketch follows below).
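As one concrete example from this list, here is a minimal pipe between a parent and a child process, sketched in Python for a POSIX system; the same idea underlies shell pipelines.

```python
import os

# Minimal IPC sketch (POSIX only): the child writes a message into a
# pipe and the parent reads it out the other end.
r, w = os.pipe()
pid = os.fork()
if pid == 0:
    os.close(r)                        # child only writes
    os.write(w, b"hello from child")
    os._exit(0)
else:
    os.close(w)                        # parent only reads
    msg = os.read(r, 1024)
    os.waitpid(pid, 0)
    print(msg.decode())                # prints: hello from child
```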
Security or Protection Management
 The kernel also provides protection from faults (error control) and from malicious behavior.
 One approach toward this is a language-based protection system, in which the kernel allows only code that has been produced by a trusted language compiler to execute.
Role of Shell
 The shell gathers input from the user and executes programs based on that input; when a program finishes executing, it displays that program's output.
 It is the primary interface between a user sitting at a terminal and the operating system, unless the user is using a graphical interface.
 A shell is an environment in which the user can run commands, programs and shell scripts.
 There are various kinds of shells, such as sh (Bourne shell), csh (C shell), ksh (Korn shell) and bash.
 When any user logs in, a shell is started.
 The shell has the terminal as standard input and standard output.
 It starts out by printing the prompt, a character such as $, which tells the user that the shell is waiting to accept a command.
 For example, if the user types the date command: $ date
 Tue Feb. 23 06:01:13 IST 2019
 The shell creates a child process and runs the date program as that child (see the sketch after this list).
 While the child process is running, the shell waits for it to terminate.
 When the child finishes, the shell prints the prompt again and tries to read the next input line.
 The shell works as an interface, a command interpreter and a programming language.
 Shell as an interface: it is the interface between the user and the computer. The user can directly interact with the shell, which provides a command prompt at which commands are executed.
 Shell as a command interpreter: it reads the commands entered by the user at the prompt and interprets them so the kernel can act on them.
 Shell as a programming language: it provides the features of a programming language, such as variables, control structures and loop structures.
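The read-fork-exec-wait cycle described above can be sketched in a few lines of Python (assuming a POSIX system). This is only a toy shell: it handles no pipes, redirection, built-ins, or error cases.

```python
import os
import shlex

# Toy shell: print a prompt, read a command, run it in a child
# process, and wait for the child before prompting again.
while True:
    line = input("$ ")              # prompt and read one command line
    args = shlex.split(line)
    if not args:
        continue                    # empty line: prompt again
    if args[0] == "exit":
        break
    pid = os.fork()                 # create a child process
    if pid == 0:
        os.execvp(args[0], args)    # child becomes e.g. the date program
    os.waitpid(pid, 0)              # shell waits for the child to finish
```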

Views of operating system


An operating system is a construct that allows the user application programs to interact with the system hardware. The operating system by itself does not perform any useful function of its own; rather, it provides an environment in which different applications and programs can do useful work.
The operating system can be observed from the point of view of the user or of the system. These are known as the user view and the system view, respectively. More details about these are given as follows:

User View
The user view depends on the system interface that is used by the users. The different types of
user view experiences can be explained as follows −
 If the user is using a personal computer, the operating system is largely designed to
make the interaction easy. Some attention is also paid to the performance of the
system, but there is no need for the operating system to worry about resource
utilization. This is because the personal computer uses all the resources available and
there is no sharing.
 If the user is using a system connected to a mainframe or a minicomputer, the
operating system is largely concerned with resource utilization. This is because there
may be multiple terminals connected to the mainframe and the operating system
makes sure that all the resources such as CPU, memory, I/O devices etc. are divided
uniformly between them.
 If the user is sitting at a workstation connected to other workstations through networks, then the operating system needs to focus on both individual usage of resources and sharing through the network. This happens because the workstation exclusively uses its own resources but also needs to share files etc. with other workstations across the network.
 If the user is using a handheld computer such as a mobile, then the operating system
handles the usability of the device including a few remote operations. The battery
level of the device is also taken into account.
Some devices involve very little or no user view because there is no interaction with users; examples are embedded computers in home devices, automobiles, etc.
System View
According to the computer system, the operating system is the bridge between applications and
hardware. It is most intimate with the hardware and is used to control it as required.
The different types of system view for operating system can be explained as follows:
 The system views the operating system as a resource allocator. There are many
resources such as CPU time, memory space, file storage space, I/O devices etc. that
are required by processes for execution. It is the duty of the operating system to
allocate these resources judiciously to the processes so that the computer system can
run as smoothly as possible.
 The operating system can also work as a control program. It manages all the
processes and I/O devices so that the computer system works smoothly and there are
no errors. It makes sure that the I/O devices work in a proper manner without creating
problems.
 Operating systems can also be viewed as a way to make using hardware easier.
 Computers were built to solve user problems easily. However, it is not easy to work directly with the computer hardware, so operating systems were developed to communicate with the hardware on the user's behalf.
 An operating system can also be considered as a program running at all times in the
background of a computer system (known as the kernel) and handling all the
application programs. This is the definition of the operating system that is generally
followed.

Evolution and types of operating systems


Evolution of OS
Evolution of OS since 1950 described in detail in this article. Here we will discuss six main
operating system types evaluated over the past 70 years.
History of the operating system started in 1950. Before 1950, the programmers directly
interact with the hardware there was no operating system at that time. If a programmer wishes
to execute a program on those days, the following serial steps are necessary.
Serial Processing
 Type the program onto punched cards.
 Feed the punched cards into a card reader.
 Submit the job to the computing machine; if there were any errors, they were indicated by lights.
 The programmer examined the registers and main memory to identify the cause of an error.
 Take the output from the printer.
 Then the machine was ready for the next program.
Drawback:
This type of processing is difficult for users: it takes much time, and the next program must wait for the completion of the previous one. The programs are submitted to the machine one after another, so the method is said to be serial processing.
Batch Processing
Before 1960, it was difficult to execute a program because the computer equipment was located in three different rooms: one room for the card reader, one room for executing the program and another room for printing the result.
The user/machine operator had to run between the three rooms to complete a job. This problem is solved by batch processing.
In the batch processing technique, jobs of the same type are batched together and executed at one time, and the operator carries the whole group of jobs from one room to another at once.
Therefore, the programmer need not run between these three rooms several times.
Multiprogramming
Multiprogramming is a technique to execute a number of programs simultaneously on a single processor. In multiprogramming, a number of processes reside in main memory at a time. The OS (operating system) picks and begins to execute one of the jobs in main memory. For example, main memory may hold five jobs at a time, and the CPU executes them one by one.
In a non-multiprogramming system, the CPU can execute only one program at a time; if the running program is waiting for any I/O device, the CPU becomes idle, which affects the performance of the CPU.
But in a multiprogramming environment, if any I/O wait occurs in a process, then the CPU switches from that job to another job in the job pool. So, the CPU is not idle at any time.
Advantages:
 Efficient memory utilization.
 The CPU is never idle, so the performance of the CPU increases.
 The throughput of the CPU may also increase.
 In a non-multiprogramming environment, the user/program has to wait a long time for the CPU; in multiprogramming, waiting time is limited.
Time Sharing System
Time-sharing or multitasking is a logical extension of multiprogramming. Multiple jobs are
executed by the CPU switching between them. The CPU scheduler selects a job from the
ready queue and switches the CPU to that job. When the time slot expires, the CPU switches
from this job to another.
In this method, the CPU time is shared by different processes, so it is said to be a "time-sharing system". Generally, time slots are defined by the operating system.
Advantages:
 The main advantage of the time-sharing system is efficient CPU utilization. It was developed to provide interactive use of a computer system at a reasonable cost. A time-shared OS uses CPU scheduling and multiprogramming to provide each user with a small portion of a time-shared computer.
 Another advantage of the time-sharing system over the batch processing system is that the user can interact with the job while it is executing, which is not possible in batch systems. (A toy round-robin simulation follows below.)
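The following toy simulation (plain Python, with invented job names and times) shows how round-robin time slicing shares the CPU: each job runs for at most one quantum before going to the back of the queue.

```python
from collections import deque

QUANTUM = 2                                     # time slice, in arbitrary units
jobs = deque([("A", 5), ("B", 3), ("C", 4)])    # (name, remaining CPU time)

while jobs:
    name, remaining = jobs.popleft()            # job at the front gets the CPU
    ran = min(QUANTUM, remaining)
    print(f"job {name} runs for {ran} unit(s)")
    remaining -= ran
    if remaining > 0:
        jobs.append((name, remaining))          # quantum expired: back of queue
    else:
        print(f"job {name} finished")
```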
Parallel System
There is a trend toward multiprocessor systems; such systems have more than one processor in close communication, sharing the computer bus, the clock, and sometimes memory and peripheral devices.
These systems are referred to as "tightly coupled" systems, and such a system is called a parallel system. In a parallel system, a number of processors execute their jobs in parallel.
Advantages:
 It increases throughput.
 By increasing the number of processors (CPUs), more work gets done in a shorter period of time.
Distributed System
In a distributed operating system, the processors do not share memory or a clock; each processor has its own local memory. The processors communicate with one another through various communication lines, such as high-speed buses. These systems are referred to as "loosely coupled" systems.
Advantages:
 If a number of sites are connected by high-speed communication lines, it is possible to share resources from one site to another. For example, suppose s1 and s2 are two sites connected by communication lines, and site s1 has a printer while site s2 does not. Users at s2 can then use the printer at s1 without moving. Therefore, resource sharing is possible in a distributed operating system.
 A big computation can be partitioned into a number of sub-computations, and these sub-computations run concurrently on different sites in a distributed system.
 If a resource or a system fails at one site due to technical problems, we can use systems and resources at other sites. So, reliability increases in a distributed system.
Types of Operating Systems
An operating system performs all the basic tasks like managing files, processes, and memory. It thus acts as the manager of all the resources, i.e. the resource manager, and becomes an interface between the user and the machine.
Some widely used types of operating systems are as follows:
1. Single user system
 In a single-user operating system, a single user can access the computer at a particular time.
 Computers based on this operating system have a single processor and execute programs for only one user at a time.
 This system provides all the resources, such as the CPU and I/O devices, to a single user at all times.
 Single-user operating systems are of two types:
1. Single user, single tasking operating system
2. Single user, multi-tasking operating system
 A single user, single tasking operating system allows a single user to execute one program at a particular time. For example, MS-DOS and Palm OS for Palm handheld computers are single user, single tasking operating systems.
 A single user, multi-tasking operating system allows a single user to execute multiple programs at the same time. For example, a user can perform different tasks such as making calculations in an Excel sheet, printing a Word document and downloading a file from the internet at the same time.
 The main disadvantage of this operating system is that the CPU sits idle for most of the time and is not utilized to its maximum.

2. Multi user System


 In a multi-user operating system, multiple users can access different resources of a computer at the same time.
 Access is provided using a network that consists of various personal computers attached to a mainframe computer system.
 The various personal computers can send information to and receive information from the mainframe computer system.
 Thus, the mainframe computer acts as the server and the other personal computers act as clients for that server.
 Examples of multi-user operating systems are UNIX, Windows 2000 and Novell NetWare.

Advantage of Multi user system

 It helps in the sharing of data and information among different users.


 It also helps in the sharing of hardware resources such as printers and modems.
Disadvantages of multi-user system
 It requires expensive hardware to set up a mainframe computer.
 When multiple users log on to or work on the same system, its overall performance is reduced.
3. Batch Operating System
This type of operating system does not interact with the computer directly. There is an
operator which takes similar jobs having the same requirement and groups them into batches.
It is the responsibility of the operator to sort jobs with similar needs.

Advantages of Batch Operating System:


 While it is normally very difficult to guess the time required for a job to complete, the processors of batch systems know how long a job will take while it is in the queue.
 Multiple users can share batch systems.
 The idle time for a batch system is very low.
 It is easy to manage large, repetitive workloads in batch systems.
Disadvantages of Batch Operating System:
 The computer operators must be familiar with batch systems.
 Batch systems are hard to debug.
 They are sometimes costly.
 The other jobs will have to wait for an unknown time if any job fails.
Examples of Batch based Operating System: Payroll System, Bank Statements, etc.
4. Multi Programming
In a multiprogramming system, multiple programs are loaded into main memory for execution. Only one program or process at a time can have the CPU to execute its instructions, while the other programs wait for their turn. The main goal of using a multiprogramming system is to overcome underutilization of the CPU and primary memory.
The main objective of multiprogramming is to manage the entire resources of the system. The primary components of a multiprogramming system are the command processor, the file system, the I/O control system, and the transient area.
Requirement of Multiprogramming system
1. Large memory
 For a multiprogramming to work satisfactorily, large main memory is required to
accommodate a good number of user programs along with operating system.
2. Memory Protection
 Computers designed for multiprogramming must provide some type of memory
protection mechanism to prevent a program in one memory partition from changing
information or instruction of a program in another memory partition
3. Job Status Preservation
 In multiprogramming, when one running job is blocked for an I/O operation, the CPU is taken away from that job and given to some other job. Later, when that job has finished its I/O operation, it needs to resume its execution.
 This requires preserving the status information of that job and restoring this information when the job resumes.
 For this, an operating system uses a process control block to save the status of each process.
4. Proper Job Mix
 A proper mix of I/O-bound and CPU-bound processes is required so that the operations of the CPU and the I/O devices are balanced.
 If all the loaded jobs need I/O at the same time, the CPU will be idle.
 Thus, the main memory should contain some CPU-bound and some I/O-bound jobs so that at least one job is always available to utilize the CPU.
5. CPU Scheduling
 In a multiprogramming system, there will often be situations in which two or more jobs are in the ready state, waiting for the CPU to be allocated for execution.
 In such a case, the operating system must decide which process or job the CPU should be allocated to.
Advantages
 High CPU utilization.
 It appears that many programs are allotted CPU almost concurrently.
Disadvantages
 CPU scheduling is required.
 To accommodate several jobs in memory, memory management is required.
5. Multitasking System
A multitasking operating system provides an interface for a single user to execute multiple program tasks at the same time on one computer system. For example, an editing task can be performed while other programs execute concurrently; likewise, a user can have Gmail and PowerPoint open at the same time.
Types of Multitasking Operating System
True Multitasking
True multitasking is the ability to execute and process multiple tasks concurrently, without delay, rather than merely switching tasks from one processor to another. It can perform a couple of tasks in parallel with the underlying hardware or software.
Preemptive Multitasking
In preemptive multitasking, the operating system decides how much time one task gets to spend before the next task is given its turn. Because the operating system controls this entire process, it is known as "preemptive".
Cooperative Multitasking
Cooperative multitasking is also known as "non-preemptive multitasking". The main idea of cooperative multitasking is that the currently running task itself releases the CPU to allow another task to run. This is performed by calling a yield function such as taskYIELD(); a context switch is executed when this function is called. (A generator-based sketch follows below.)
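The idea can be sketched with Python generators, where `yield` plays the role of the yield call: each task keeps the CPU until it voluntarily yields, and a trivial scheduler then picks the next task. (This is an analogy, not the FreeRTOS taskYIELD() mechanism itself.)

```python
# Cooperative multitasking sketch: tasks are generators that run until
# they voluntarily yield control back to the scheduler.
def task(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                        # voluntarily give up the CPU

ready = [task("A", 3), task("B", 2)]
while ready:
    current = ready.pop(0)           # pick the next ready task
    try:
        next(current)                # run it until it yields...
        ready.append(current)        # ...then put it back in the queue
    except StopIteration:
        pass                         # the task ran to completion
```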
Advantages of Multitasking Operating System
Time Shareable
All tasks are allocated a specific slice of time, so they do not have to wait indefinitely for the CPU.
Manage Several Users
This operating system can comfortably handle multiple users concurrently, and several programs can run smoothly without degrading the system's performance.
Secured Memory
A multitasking operating system has well-defined memory management, because it does not permit unwanted programs to waste memory.

Great Virtual Memory


A multitasking operating system contains a good virtual memory system. Due to virtual memory, a program does not need to wait long to complete its tasks; if main memory runs short, parts of programs are moved to virtual memory.

Background Processing
A multitasking operating system creates a better environment for executing background programs. These background programs are not visible to normal users, but they help other programs run smoothly, for example firewalls, antivirus software, and more.

Good Reliability
A multitasking operating system provides great flexibility to multiple users, leaving them more satisfied. Every user can operate single or multiple programs smoothly.

Use Multiple Programs


Users can operate multiple programs, such as an internet browser, PowerPoint, MS Excel, games and other utilities, concurrently.
Optimize Computer Resources
A multitasking operating system is able to smoothly handle multiple computer resources such as RAM, input/output devices, the CPU and the hard disk.
Disadvantages of Multitasking Operating System
Memory Limitation
The computer may perform slowly when multiple programs run at the same time, because main memory gets heavily loaded while holding multiple programs. The CPU cannot provide separate time to every program, so response time increases. The main cause of this problem is RAM of too small a capacity, so the solution is to increase the RAM capacity.
Processor Limitation
The computer may run programs slowly due to the slow speed of its processor, and response time can increase while handling multiple programs. Better processing power is needed to overcome this problem.
CPU Heat-up
The processor stays busy executing many tasks at a time in a multitasking system, so the CPU produces more heat.
Examples of Multitasking Operating System
Some examples of multitasking operating systems are:
 Windows XP
 Windows Vista
 Windows 7
 Windows 8
 Windows 10
 Windows 2000
 IBM’s OS/390
 Linux
 UNIX
Difference between Multiprogramming and multitasking

Sr.no | Multiprogramming | Multi-tasking
1. | Both of these concepts are for a single CPU. | Both of these concepts are for a single CPU.
2. | The concept of context switching is used. | The concepts of context switching and time sharing are used; the processor is typically used in time-sharing mode.
3. | In a multiprogrammed system, the operating system simply switches to, and executes, another job when the current job needs to wait. | Switching happens either when the allowed time expires or when there is some other reason for the current process to wait (for example, the process needs to do I/O).
4. | Multiprogramming increases CPU utilization by organising jobs. | Multitasking also increases CPU utilization, and it also increases responsiveness.
5. | The idea is to reduce the CPU idle time for as long as possible. | The idea is to further extend the CPU utilization concept by increasing responsiveness via time sharing.

6. Multiprocessing System
 A multiprocessor system is a system that contains two or more processors (CPUs) and has the ability to execute several programs simultaneously; hence the name multiprocessor.
 In such a system, multiple processors share the clock, bus, memory and peripheral devices.
 A multiprocessor system is also known as a parallel system.
 In such a system, instructions from different and independent programs can be processed at the same instant of time by different CPUs.
 The CPUs can also simultaneously execute different instructions from the same program.

Types of Multiprocessors

There are mainly two types of multiprocessors i.e. symmetric and asymmetric
multiprocessors. Details about them are as follows −
Symmetric Multiprocessors
In these types of systems, each processor runs an identical copy of the operating system, and the copies communicate with each other. All the processors are in a peer-to-peer relationship, i.e., no master-slave relationship exists between them.
An example of the symmetric multiprocessing system is the Encore version of UNIX for the
Multimax Computer.
Asymmetric Multiprocessors
In asymmetric systems, each processor is given a predefined task. There is a master processor that gives instructions to all the other processors; an asymmetric multiprocessor system thus contains a master-slave relationship.
Asymmetric multiprocessing was the only type of multiprocessing available before symmetric multiprocessors were created. Even now, it is the cheaper option.
Advantages of Multiprocessor Systems
There are multiple advantages to multiprocessor systems. Some of these are −
More reliable Systems
In a multiprocessor system, even if one processor fails, the system will not halt. This ability to continue working despite hardware failure is known as graceful degradation. For example, if there are 5 processors in a multiprocessor system and one of them fails, the remaining 4 processors keep working. So the system only becomes slower and does not grind to a halt.
Enhanced Throughput
If multiple processors are working in tandem, then the throughput of the system increases, i.e., the number of processes executed per unit time increases. If there are N processors, then the throughput increases by a factor just under N.
More Economic Systems
Multiprocessor systems are cheaper than single processor systems in the long run because
they share the data storage, peripheral devices, power supplies etc. If there are multiple
processes that share data, it is better to schedule them on multiprocessor systems with shared
data than have different computer systems with multiple copies of the data.
Disadvantages of Multiprocessor Systems
There are some disadvantages as well to multiprocessor systems. Some of these are:
Increased Expense
Even though multiprocessor systems are cheaper in the long run than using multiple
computer systems, still they are quite expensive. It is much cheaper to buy a simple single
processor system than a multiprocessor system.
Complicated Operating System Required
There are multiple processors in a multiprocessor system that share peripherals, memory, etc. So, it is much more complicated to schedule processes and impart resources to processes than in single-processor systems. Hence, a more complex and complicated operating system is required in multiprocessor systems.
Large Main Memory Required
All the processors in the multiprocessor system share the memory. So a much larger pool of
memory is required as compared to single processor systems.
Difference between Multiprocessing and multiprogramming
S.No. | Multiprocessing | Multiprogramming
1. | The availability of more than one processor per system, which can execute several sets of instructions in parallel, is known as multiprocessing. | The concurrent residence of more than one program in main memory is known as multiprogramming.
2. | The number of CPUs is more than one. | The number of CPUs is one.
3. | It takes less time for job processing. | It takes more time to process the jobs.
4. | More than one process can be executed at a time. | One process can be executed at a time.
5. | It is economical. | It is economical.
6. | The number of users can be one or more than one. | The number of users is one at a time.
7. | Throughput is maximum. | Throughput is less.
8. | Its efficiency is maximum. | Its efficiency is less.

7. Time-Sharing Operating Systems


Each task is given some time to execute so that all the tasks work smoothly. Each user gets a share of the CPU time, as they all use a single system. These systems are also known as multitasking systems. The tasks can be from a single user or from different users. The time that each task gets to execute is called a quantum. After this time interval is over, the OS switches over to the next task.
Advantages of Time-Sharing OS:
 Each task gets an equal opportunity
 Fewer chances of duplication of software
 CPU idle time can be reduced
Disadvantages of Time-Sharing OS:
 Reliability problem
 One must have to take care of the security and integrity of user programs and data
 Data communication problem

Examples of Time-Sharing OSs are: Multics, UNIX, etc.


8. Distributed Operating System
This type of operating system is a recent advancement in the world of computer technology and is being widely accepted all over the world, at a great pace. Various autonomous interconnected computers communicate with each other using a shared communication network. The independent systems possess their own memory unit and CPU, and are referred to as loosely coupled systems or distributed systems. These systems' processors differ in size and function. The major benefit of working with this type of operating system is that a user can always access files or software that are not actually present on his own system but on some other system connected within the network, i.e., remote access is enabled within the devices connected in that network.

Advantages of Distributed Operating System:


 Failure of one will not affect the other network communication, as all systems are
independent from each other
 Electronic mail increases the data exchange speed
 Since resources are being shared, computation is highly fast and durable
 Load on host computer reduces
 These systems are easily scalable as many systems can be easily added to the network
 Delay in data processing reduces
Disadvantages of Distributed Operating System:
 Failure of the main network will stop the entire communication
 The languages used to establish distributed systems are not well defined yet
 These types of systems are not readily available, as they are very expensive. Not only that, the underlying software is highly complex and not yet well understood
Examples of Distributed Operating System are- LOCUS, etc.
9. Network Operating System
These systems run on a server and provide the capability to manage data, users, groups, security, applications, and other networking functions. These types of operating systems allow shared access to files, printers, security, applications, and other networking functions over a small private network. One more important aspect of network operating systems is that all the users are well aware of the underlying configuration, of all other users within the network, and of their individual connections, which is why these computers are popularly known as tightly coupled systems.

Advantages of Network Operating System:


 Highly stable centralized servers
 Security concerns are handled through servers
 New technologies and hardware up-gradation are easily integrated into the system
 Server access is possible remotely from different locations and types of systems
Disadvantages of Network Operating System:
 Servers are costly
 User has to depend on a central location for most operations
 Maintenance and updates are required regularly
Examples of Network Operating System are: Microsoft Windows Server 2003, Microsoft
Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD, etc.

10. Real-Time Operating System


These types of OSs serve real-time systems. The time interval required to process and
respond to inputs is very small. This time interval is called response time.
Real-time systems are used when there are time requirements that are very strict like missile
systems, air traffic control systems, robots, etc.
There are two types of real-time operating systems, as follows:
 Hard Real-Time Systems:
These OSs are meant for applications where time constraints are very strict and even the shortest possible delay is not acceptable. These systems are built for saving lives, like automatic parachutes or airbags, which are required to be readily available in case of an accident. Virtual memory is rarely found in these systems.
 Soft Real-Time Systems:
These OSs are for applications where the time constraint is less strict.

Advantages of RTOS:
 Maximum consumption: maximum utilization of devices and the system, and thus more output from all the resources.
 Task shifting: the time assigned for shifting tasks in these systems is very short. For example, older systems take about 10 microseconds to shift from one task to another, while the latest systems take 3 microseconds.
 Focus on application: the focus is on running applications, with less importance given to applications waiting in the queue.
 Real-time operating systems in embedded systems: since the size of the programs is small, an RTOS can also be used in embedded systems, for example in transport and elsewhere.
 Error free: these types of systems are designed to be error-free.
 Memory allocation: memory allocation is best managed in these types of systems.
Disadvantages of RTOS:
 Limited tasks: very few tasks run at the same time, and concentration is kept on a few applications to avoid errors.
 Heavy use of system resources: sometimes the system resources are not so good, and they are expensive as well.
 Complex algorithms: the algorithms are very complex and difficult for the designer to write.
 Device drivers and interrupt signals: it needs specific device drivers and interrupt signals to respond to interrupts as early as possible.
 Thread priority: it is not good to set thread priority, as these systems are very little prone to switching tasks.
Examples of Real-Time Operating Systems are: Scientific experiments, medical imaging
systems, industrial control systems, weapon systems, robots, air traffic control systems, etc.

Difference between Hard real time and Soft real time system :
HARD REAL TIME SYSTEM | SOFT REAL TIME SYSTEM
In a hard real-time system, the size of the data file is small or medium. | In a soft real-time system, the size of the data file is large.
In this system, response time is in milliseconds. | In this system, response times are higher.
Peak load performance should be predictable. | In a soft real-time system, peak load can be tolerated.
In this system, safety is critical. | In this system, safety is not critical.
A hard real-time system is very restrictive. | A soft real-time system is less restrictive.
In case of an error in a hard real-time system, the computation is rolled back. | In case of an error in a soft real-time system, the computation is rolled back to a previously established checkpoint.
Examples: satellite launch, railway signalling system, etc. | Examples: DVD player, telephone switches, electronic games, etc.

11. Multi-threaded Operating System


 Multithreading is a technique in which a process executing an application is divided into threads that can run concurrently.
 A thread is a dispatchable unit of work. It includes a processor context and its own data area for a stack.
 A thread executes sequentially and is interruptible, so that the processor can turn to another thread.
 Thus, a thread represents a lightweight process and is the smallest unit of CPU utilization. It is like a mini-process.
 A process, on the other hand, is a collection of one or more threads and associated system resources.
 A thread is not a process by itself. It cannot run on its own; it always runs within a process.
 Thus, a multithreaded process may have multiple execution flows, different ones belonging to different threads.
 All the threads of a process share the same private address space of the process and share all the resources acquired by the process (see the sketch after this list).
 By breaking a single application into multiple threads, the programmer gains great control over the modularity of the application and the timing of application-related events.
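A minimal Python sketch of these points: two threads of the same process share its address space (the `counter` variable below), so access to the shared data must be synchronized with a lock.

```python
import threading

counter = 0                          # shared data: lives in the process's
lock = threading.Lock()              # address space, visible to all threads

def worker(name, n):
    global counter
    for _ in range(n):
        with lock:                   # serialize access to the shared counter
            counter += 1
    print(f"{name} done")

threads = [threading.Thread(target=worker, args=(f"t{i}", 10000))
           for i in range(2)]
for t in threads:
    t.start()                        # both threads run within one process
for t in threads:
    t.join()                         # wait for both threads to terminate
print("counter =", counter)          # 20000: the lock prevented lost updates
```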

The various states exhibited by a Windows thread are:


1. Ready: A ready thread may be scheduled for execution. The kernel dispatcher keeps track of all ready threads and schedules them in priority order.
2. Standby: A thread that has been selected to run next on a particular processor is said to be in the standby state. The thread waits in this state until the processor is made available. If the priority of the standby thread is higher than that of the thread currently running on the processor, the running thread may be preempted.
3. Running: The thread that is currently utilizing the CPU is in the running state. It keeps the processor until it is preempted by a higher-priority thread, gets blocked, or its time slice expires.
4. Waiting: A thread enters the waiting state when (a) it is blocked on an event, (b) it voluntarily waits for synchronization purposes, or (c) an environment subsystem directs the thread to suspend itself.
5. Transition: A thread enters this state after waiting if it is ready to run but resources are not available; for example, the thread's stack may be paged out of memory. When resources become available, the thread goes to the ready state.
6. Terminated: A thread can be terminated by itself, by another thread, or when its parent process terminates.
Difference between Real Time OS and Time sharing OS
S.NO | Time-Sharing Operating System | Real-Time Operating System
1. | In a time-sharing operating system, a quick response to each request is emphasized. | In a real-time operating system, completing computation tasks before their deadline is emphasized.
2. | In this operating system, a switching method/function is available. | In this operating system, a switching method/function is not available.
3. | In this operating system, any modification in the program can be made. | In this operating system, modification does not take place.
4. | In this OS, computer resources are shared externally. | In this OS, computer resources are not shared externally.
5. | It deals with more than one process or application simultaneously. | It deals with only one process or application at a time.
6. | In this OS, the response is provided to the user within a second. | In a real-time OS, the response is provided to the user within a time constraint.

LESSON 3: Process & Thread Management


Process & Thread Management
Process vs. Program
1. Program
When we execute a program that was just compiled, the OS will generate a process to execute the
program. Execution of the program starts via GUI mouse clicks, command line entry of its name,
etc. A program is a passive entity as it resides in the secondary memory, such as the contents of a
file stored on disk. One program can have several processes.
2. Process:
The term process (Job) refers to program code that has been loaded into a computer’s memory so
that it can be executed by the central processing unit (CPU). A process can be described as an
instance of a program running on a computer or as an entity that can be assigned to and executed
on a processor. A program becomes a process when loaded into memory and thus is an active
entity.

Difference between Program and Process:

Sr.No. | Program | Process
1. | A program contains a set of instructions designed to complete a specific task. | A process is an instance of an executing program.
2. | A program is a passive entity, as it resides in secondary memory. | A process is an active entity, as it is created during execution and loaded into main memory.
3. | A program exists in a single place and continues to exist until it is deleted. | A process exists for a limited span of time, as it gets terminated after the completion of its task.
4. | A program is a static entity. | A process is a dynamic entity.
5. | A program does not have any resource requirement; it only requires memory space for storing the instructions. | A process has a high resource requirement; it needs resources like CPU, memory address space and I/O during its lifetime.
6. | A program does not have any control block. | A process has its own control block, called the Process Control Block.


PCB (Process Control Block)


 The process control block is a data structure used by the operating system to store all the information about a process. It is also known as a process descriptor.
 When a process is created, the operating system creates a corresponding process control block.
 The information in a process control block is updated during each transition of process states.
 When the process terminates, its PCB is released to the pool of free cells from which new PCBs are drawn.
 Each process has a single PCB.
The PCB of a process contains the following information about the process (a data-structure sketch follows this list):

 Process state: A process can be new, ready, running, waiting, etc.
 Program counter: The program counter holds the address of the next instruction to be executed for that process.
 CPU registers: This component includes accumulators, index and general-purpose registers, and condition-code information.
 CPU scheduling information: This component includes the process priority, pointers to scheduling queues, and various other scheduling parameters.
 Accounting and business information: This includes the amount of CPU time and real time used, time limits, job or process numbers, etc.
 Memory-management information: This information includes the values of the base and limit registers and the page or segment tables, depending on the memory system used by the operating system.
 I/O status information: This block includes the list of open files, the list of I/O devices allocated to the process, etc.
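To fix the idea, here is the PCB rendered as a plain data structure in Python. The field names are illustrative only; a real kernel's PCB (for example, Linux's task_struct) is a C structure with many more members.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative process control block holding the fields listed above."""
    pid: int
    state: str = "new"               # new, ready, running, waiting, terminated
    program_counter: int = 0         # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                # CPU scheduling information
    cpu_time_used: float = 0.0       # accounting information
    base: int = 0                    # memory-management information
    limit: int = 0
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42, priority=5)
pcb.state = "ready"                  # updated on each state transition
```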

Process State Transition


 A state transition is a change from one state to another. A state transition is caused by
the occurrence of some event in the system.
 A process has to go through various states for performing its task.
 The transition of a process from one state to another occurs depending on the flow of
the execution of the process.
 A new process is added to data structure called a ready queue, also known as ready
pool or pool of executable processes. This queue stores all processes in a first in first
out (FIFO) manner. A new process is added into the ready queue from its rear end
and the process at the front of the ready queue is sent for execution.
 Each process is assigned a time slice for execution. A time slice is a very short
period of time and its duration varies across systems.
 If the process does not voluntarily release the CPU before the time slice expires, the
interval timer generates an interrupt, causing the operating system to regain
control.
 The CPU executes the process at the front of the ready queue and that process
makes a state transition from the ready to the running state. The assignment of the CPU
to the first process on the ready queue is called dispatching. This transition is
indicated as:
dispatch(process name): ready → running
 The operating system then adds the previously running process to the rear end of the
ready queue and allocates the CPU to the first process on the ready queue. This state
transition is indicated as:
timerunout(process name): running → ready
 If a running process initiates an input/output operation before its time slice expires,
it voluntarily releases the CPU. It is sent to the waiting queue and
the process state is marked as waiting/blocked. This state transition is indicated as:
block(process name): running → blocked
 After the completion of the I/O task, the blocked or waiting process is restored and
placed back in the ready queue, and the process state is marked as ready.
 When the execution of a process ends, the process state is marked as terminated and the
operating system reclaims all the resources allocated to the process. A minimal code
sketch of these transitions follows.
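The four transitions named above can be summarised as a tiny state machine. The following
C sketch is purely illustrative; the function and state names simply mirror the transition
labels used in the text.

/* Illustrative state-transition helpers for dispatch, timerunout,
 * block and wakeup. A hypothetical sketch, not a real scheduler. */
#include <stdio.h>

enum state { READY, RUNNING, BLOCKED, TERMINATED };

void dispatch(enum state *s)   { if (*s == READY)   *s = RUNNING; } /* ready -> running   */
void timerunout(enum state *s) { if (*s == RUNNING) *s = READY;   } /* running -> ready   */
void block(enum state *s)      { if (*s == RUNNING) *s = BLOCKED; } /* running -> blocked */
void wakeup(enum state *s)     { if (*s == BLOCKED) *s = READY;   } /* blocked -> ready   */

int main(void) {
    enum state p = READY;
    dispatch(&p);    /* the dispatcher gives the CPU to the process */
    block(&p);       /* the process issues an I/O request           */
    wakeup(&p);      /* the I/O completes; back to the ready queue  */
    printf("final state: %d (READY)\n", p);
    return 0;
}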
Scheduling Queues
 In multiprogramming when several processes are in waiting for I/O operation, they
form queues.
 The various queues maintained by operating system are:
1. Job Queue
 As processes enter the system, they are put into a job queue. This queue consists of all
the processes in the system.
2. Ready queue
 It is a doubly linked list of processes that are residing in the main memory and are
ready to run.
 The various processes in ready queue are placed according to their priority i.e. higher
priority process is at the front of the queue.
 The header of ready queue contains two pointers. The first pointer points to the PCB
of first process and the second pointer points to the PCB of last process in the queue.

Device Queue
 Device queue contains all those processes that are waiting for a particular I/O device.
 Each device has its own device queue.

Types of schedulers
Scheduler
 A scheduler is an operating system module that selects the next job or process to be
admitted into the system.
 Thus, a scheduler selects one of the processes from among the processes in the
memory that are ready to execute and allocates CPU to it.

In complex operating system three different types of schedulers may exist.


1) Long term scheduler
2) Medium term scheduler
3) Short term scheduler

Long term scheduler


 The job scheduler or long-term scheduler selects processes from the storage pool in
the secondary memory and loads them into the ready queue in the main memory for
execution.
 The long-term scheduler controls the degree of multiprogramming. It must select a
careful mixture of I/O bound and CPU bound processes to yield optimum system
throughput. If it selects too many CPU bound processes then the I/O devices are idle
and if it selects too many I/O bound processes then the processor has nothing to do.
 The job of the long-term scheduler is very important and directly affects the system
for a long time.
2) Medium term scheduler
 Medium-term scheduling is an important part of swapping. It handles the
swapped-out processes.
 A running process becomes suspended if it makes an I/O request. A suspended
process cannot make any progress towards completion. In order to remove the
process from memory and make space for other processes, the suspended process
is moved to secondary storage.
3) Short term scheduler
 The short-term scheduler is also known as the CPU scheduler. The main goal of this
scheduler is to boost system performance according to the chosen set of criteria. It
selects one process from the group of processes that are ready to execute and allocates
the CPU to it. The dispatcher gives control of the CPU to the process selected by the
short-term scheduler.

Concept of Thread
 A thread is a single sequential flow of execution of the tasks of a process.
 A thread is a lightweight process and the smallest unit of CPU utilization. Thus a
thread is like a little miniprocess.
 Each thread has a thread id, a program counter, a register set and a stack.
 A thread undergoes different states such as new, ready, running, waiting and
terminated similar to that of a process.
 However, a thread is not a program as it cannot run on its own. It runs within a
program.
Why Multithreading?
A thread is also known as lightweight process. The idea is to achieve parallelism by dividing a
process into multiple threads. For example, in a browser, multiple tabs can be different threads.
MS Word uses multiple threads: one thread to format the text, another thread to process inputs,
etc.
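As a concrete illustration of a process running several threads, here is a minimal
POSIX-threads (pthreads) sketch; the worker function and its messages are made up for the
example. Compile with: gcc demo.c -lpthread

/* One process, two threads doing independent work in parallel. */
#include <pthread.h>
#include <stdio.h>

void *worker(void *arg) {
    /* each thread has its own stack and registers, but shares the
     * process's address space with its siblings */
    printf("thread %s: working in parallel\n", (char *)arg);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "A");  /* e.g., format text    */
    pthread_create(&t2, NULL, worker, "B");  /* e.g., process inputs */
    pthread_join(t1, NULL);                  /* wait for completion  */
    pthread_join(t2, NULL);
    return 0;
}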
Types of Threads:
1. User Level thread (ULT)
User level threads are implemented in a user-level library; they are not created using
system calls. Thread switching does not need to call the OS or cause an interrupt to the
kernel. The kernel doesn't know about user level threads and manages them as if they
were single-threaded processes.
Advantages of ULT
 Can be implemented on an OS that doesn’t support multithreading.
 Simple representation since thread has only program counter, register set, stack space.
 Simple to create since no intervention of kernel.
 Thread switching is fast since no OS calls need to be made.
Disadvantages of ULT –
 No or less co-ordination among the threads and Kernel.
 If one thread causes a page fault, the entire process blocks.

2. Kernel Level Thread (KLT)


Kernel knows and manages the threads. Instead of thread table in each process, the
kernel itself has thread table (a master one) that keeps track of all the threads in the
system. In addition kernel also maintains the traditional process table to keep track of
the processes. OS kernel provides system call to create and manage threads.
Advantages of KLT
 Since kernel has full knowledge about the threads in the system, scheduler may
decide to give more time to processes having large number of threads.
 Good for applications that frequently block.
Disadvantages of KLT
 Slow and inefficient.
 It requires a thread control block, so it is an overhead.

Difference between Kernel level thread and User level thread

User level thread | Kernel level thread
User threads are implemented by user-level libraries. | Kernel threads are implemented by the OS.
The OS doesn't recognize user level threads. | Kernel threads are recognized by the OS.
Implementation of user threads is easy. | Implementation of kernel threads is complicated.
Context switch time is less. | Context switch time is more.
Context switching requires no hardware support. | Hardware support is needed.
If one user level thread performs a blocking operation, the entire process is blocked. | If one kernel thread performs a blocking operation, another thread can continue execution.
User level threads are designed as dependent threads. | Kernel level threads are designed as independent threads.
Example: Java threads, POSIX threads. | Example: Windows, Solaris.
Benefits of Threads
 Enhanced throughput of the system: When the process is split into many threads,
and each thread is treated as a job, the number of jobs done in unit time increases.
That is why the throughput of the system also increases.
 Effective utilization of a multiprocessor system: When you have more than one
thread in one process, you can schedule more than one thread on more than one
processor.
 Faster context switch: The context switching period between threads is less than for
process context switching. A process context switch means more overhead for the
CPU.
 Responsiveness: When the process is split into several threads, and a thread
completes its execution, that process can respond as soon as possible.
 Communication: Multiple-thread communication is simple because the threads share
the same address space, while for processes we must adopt special
communication strategies for communication between two processes.
 Resource sharing: Resources can be shared between all threads within a process,
such as code, data, and files. Note: the stack and registers cannot be shared between
threads; there is a stack and a register set for each thread.
Process Synchronization
 A co-operating process is one that can affect or be affected by other processes
executing in the system.
 Such co-operating processes may either directly share a logical address space or be
allowed to share data only through files.
 When co-operating processes concurrently share the data, it may result in data
inconsistency
 Maintaining data consistency requires mechanisms to ensure the orderly execution of
co-operating processes.
 Thus, process synchronization ensures a perfect co-ordination among the processes.
 Process synchronization can be provided by using several different tools like
semaphore, mutex and monitors.
 Synchronization is important for both user applications and implementation of
operating system.
Concept of Race Condition
 When several processes access and manipulate the same data at the same time, they
may enter into a race condition.
 A race condition is a flaw in a system of processes whereby the output of a process
depends on the sequence or timing of other processes.
 Race conditions occur among processes that share common storage and each process
can read and write on this shared common storage.
 Thus, a race condition occurs due to improper synchronization of shared memory
access.
 Race conditions can occur in poorly designed systems.
 If a race condition is allowed to happen in the system, the output of the processes
cannot be ascertained. A small demonstration follows.
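The classic demonstration of a race condition is two threads incrementing a shared counter
without synchronization. The following C/pthreads sketch (sample values made up) usually
prints a total below 2000000 because increments are lost; the commented-out mutex calls
show one way to restore orderly execution.

/* Race condition demo: two threads, one shared counter, no locking. */
#include <pthread.h>
#include <stdio.h>

long counter = 0;                        /* shared common storage        */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        /* pthread_mutex_lock(&lock);      uncomment to remove the race */
        counter++;                       /* read-modify-write, not atomic */
        /* pthread_mutex_unlock(&lock); */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}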

CPU Scheduling
CPU Scheduling
 Scheduling is a fundamental operating system function.
 Scheduling refers to set of policies and mechanisms built into the operating system
that governs the order in which the work to be done by a computer system is
completed.
 CPU scheduling is the basis of multiprogrammed operating system.
Why do we need Scheduling?
 In multiprogramming, if the long term scheduler picks more I/O bound processes
then most of the time the CPU remains idle. The task of the operating system is to
optimize the utilization of resources.
 If most of the running processes change their state from running to waiting then
there may always be a possibility of deadlock in the system. Hence to reduce this
overhead, the OS needs to schedule the jobs to get the optimal utilization of the CPU
and to avoid the possibility of deadlock.
CPU Scheduling: Dispatcher
Another component involved in the CPU scheduling function is the Dispatcher. The dispatcher is
the module that gives control of the CPU to the process selected by the short-term scheduler.
This function involves:
 Switching context
 Switching to user mode
 Jumping to the proper location in the user program to restart that program from
where it left off last time.
The dispatcher should be as fast as possible, given that it is invoked during every process switch.
The time taken by the dispatcher to stop one process and start another process is known as
the Dispatch Latency.

Types of CPU Scheduling


CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for an I/O request
or an invocation of wait for the termination of one of the child processes).
2. When a process switches from the running state to the ready state (for example,
when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example, on
completion of I/O).
4. When a process terminates.
In circumstances 1 and 4, there is no choice in terms of scheduling. A new process (if one exists in
the ready queue) must be selected for execution. There is a choice, however, in circumstances 2
and 3.
When Scheduling takes place only under circumstances 1 and 4, we say the scheduling scheme
is non-preemptive; otherwise, the scheduling scheme is preemptive.
Non-Preemptive Scheduling
Under non-preemptive scheduling, once the CPU has been allocated to a process, the process
keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state.
This scheduling method is used by the Microsoft Windows 3.1 and by the Apple Macintosh
operating systems.
It is the only method that can be used on certain hardware platforms because it does not require
the special hardware (for example, a timer) needed for preemptive scheduling.
Non-preemptive scheduling does not interrupt a process running on the CPU in the middle of its
execution. Instead, it waits till the process completes its CPU burst time, and only then can it
allocate the CPU to another process.
Some algorithms based on non-preemptive scheduling are: Shortest Job First (SJF, basically
non-preemptive) Scheduling and Priority (non-preemptive version) Scheduling, etc.
Preemptive Scheduling
In this type of Scheduling, the tasks are usually assigned with priorities. At times it is necessary
to run a certain task that has a higher priority before another task although it is running.
Therefore, the running task is interrupted for some time and resumed later when the priority task
has finished its execution.
Thus this type of scheduling is used mainly when a process switches either from running state to
ready state or from waiting state to ready state. The resources (that is CPU cycles) are mainly
allocated to the process for a limited amount of time and then are taken away, and after that, the
process is again placed back in the ready queue in the case if that process still has a CPU burst
time remaining. That process stays in the ready queue until it gets the next chance to execute.
Some Algorithms that are based on preemptive scheduling are Round Robin Scheduling (RR),
Shortest Remaining Time First (SRTF), Priority (preemptive version) Scheduling, etc.

CPU Scheduling: Scheduling Criteria


There are many different criteria to check when considering the “best” scheduling algorithm,
they are:
CPU Utilization
To make the best use of the CPU and not waste any CPU cycles, the CPU should be working
most of the time (ideally 100% of the time). In a real system, CPU usage should range
from 40% (lightly loaded) to 90% (heavily loaded).
Throughput
It is the total number of processes completed per unit of time or, in other words, the total amount of
work done in a unit of time. This may range from 10/second to 1/hour depending on the specific
processes.
Turnaround Time
It is the amount of time taken to execute a particular process, i.e. the interval from the time of
submission of the process to the time of completion of the process (wall clock time).
Waiting Time
It is the sum of the periods a process spends waiting in the ready queue to acquire
control of the CPU.
Load Average
It is the average number of processes residing in the ready queue waiting for their turn to get into
the CPU.
Response Time
Amount of time it takes from when a request was submitted until the first response is produced.
Remember, it is the time till the first response and not the completion of process execution (final
response).
In general, CPU utilization and throughput are maximized and the other factors are reduced for
proper optimization.
CPU-I/O Burst Cycle
 The success of CPU scheduling depends on an observed property of processes:
o Process execution consists of a cycle of CPU execution and I/O wait.
Processes alternate between these two states.
o Process execution begins with a CPU burst. That is followed by an I/O
burst, which is followed by another CPU burst, then another I/O burst,
and so on.
 Eventually, the final CPU burst ends with a system request to terminate execution

 The durations of CPU bursts have been measured extensively. They tend to have a
frequency curve similar to that shown in Fig. 11.

 The curve is generally characterized as exponential or hyperexponential, with a large
number of short CPU bursts and a small number of long CPU bursts.
o An I/O-bound program typically has many short CPU bursts.
o A CPU-bound program might have a few long CPU bursts.
 This distribution can be important in the selection of an appropriate CPU-scheduling
algorithm.
 Nearly all processes alternate bursts of computing with (disk) I/O requests, as shown
in Fig. 12.

 Some processes spend most of their time computing (CPU-bound), while others
spend most of their time waiting for I/O (I/O-bound).
 Having some CPU-bound processes and some I/O-bound processes in memory
together is a better idea than first loading and running all the CPU-bound jobs and
then when they are finished loading and running all the I/O-bound jobs (a careful
mix of processes).

Different scheduling criteria, scheduling algorithms
Scheduling Algorithms
To decide which process to execute first and which process to execute last to achieve
maximum CPU utilization, computer scientists have defined some algorithms, they are:
1. First Come First Serve(FCFS) Scheduling
2. Shortest-Job-First(SJF) Scheduling
3. Priority Scheduling
4. Round Robin(RR) Scheduling
5. Multilevel Queue Scheduling
6. Multilevel Feedback Queue Scheduling
7. Shortest Remaining Time First (SRTF)
8. Longest Remaining Time First (LRTF)
9. Highest Response Ratio Next (HRRN)
FCFS Scheduling
First come first serve (FCFS) scheduling algorithm simply schedules the jobs according to
their arrival time. The job which comes first in the ready queue will get the CPU first. The
earlier the arrival time of the job, the sooner the job will get the CPU. FCFS scheduling may
cause the problem of starvation if the burst time of the first process is the longest among all
the jobs.
Advantages of FCFS
 Simple
 Easy
 First come, First serve
Disadvantages of FCFS
1. The scheduling method is non-preemptive; the process will run to completion.
2. Due to the non-preemptive nature of the algorithm, the problem of starvation may
occur.
3. Although it is easy to implement, it is poor in performance since the average waiting
time is higher as compared to other scheduling algorithms.
Example
Let’s take an example of the FCFS scheduling algorithm. In the following schedule, there
are 5 processes with process IDs P0, P1, P2, P3 and P4. P0 arrives at time 0, P1 at time 1, P2
at time 2, P3 arrives at time 3 and process P4 arrives at time 4 in the ready queue. The
processes and their respective arrival and burst times are given in the following table.
The turnaround time and the waiting time are calculated by using the following formulas:
1. Turn Around Time = Completion Time – Arrival Time
2. Waiting Time = Turn Around Time – Burst Time
The average waiting time is determined by summing the respective waiting times of all the
processes and dividing the sum by the total number of processes.
Process ID | Arrival Time | Burst Time | Completion Time | Turn Around Time | Waiting Time
P0 | 0 | 2 | 2 | 2 | 0
P1 | 1 | 6 | 8 | 7 | 1
P2 | 2 | 4 | 12 | 10 | 6
P3 | 3 | 9 | 21 | 18 | 9
P4 | 4 | 12 | 33 | 29 | 17

Avg Waiting Time = (0+1+6+9+17)/5 = 33/5
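The table above can be reproduced mechanically. The following C sketch applies the two
formulas to the same arrival and burst times; it assumes the processes are already sorted
by arrival time.

/* FCFS: each process runs to completion in arrival order. */
#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2, 3, 4};
    int burst[]   = {2, 6, 4, 9, 12};
    int n = 5, clock = 0, total_wt = 0;

    for (int i = 0; i < n; i++) {
        if (clock < arrival[i]) clock = arrival[i];  /* CPU idle gap */
        clock += burst[i];                           /* completion time */
        int tat = clock - arrival[i];                /* turnaround time */
        int wt  = tat - burst[i];                    /* waiting time    */
        total_wt += wt;
        printf("P%d: CT=%2d TAT=%2d WT=%2d\n", i, clock, tat, wt);
    }
    printf("average waiting time = %d/%d\n", total_wt, n);  /* 33/5 */
    return 0;
}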


Shortest Job First (SJF) Scheduling
Till now, we were scheduling the processes according to their arrival time (in FCFS
scheduling). However, the SJF scheduling algorithm schedules the processes according to their
burst time.
In SJF scheduling, the process with the lowest burst time, among the list of available
processes in the ready queue, is going to be scheduled next.
However, it is very difficult to predict the burst time needed for a process hence this
algorithm is very difficult to implement in the system.
Advantages of SJF
1. Maximum throughput
2. Minimum average waiting and turnaround time
Example
In the following example, there are five jobs named as P1, P2, P3, P4 and P5. Their arrival
time and burst time are given in the table below.
PID | Arrival Time | Burst Time | Completion Time | Turn Around Time | Waiting Time
P1 | 1 | 7 | 8 | 7 | 0
P2 | 3 | 3 | 13 | 10 | 7
P3 | 6 | 2 | 10 | 4 | 2
P4 | 7 | 10 | 31 | 24 | 14
P5 | 9 | 8 | 21 | 12 | 4

Avg Waiting Time = (0+7+2+14+4)/5 = 27/5
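The same table can be generated by simulating non-preemptive SJF directly: at every
scheduling point, pick the arrived, unfinished process with the smallest burst time. A
minimal C sketch with the data above:

/* Non-preemptive SJF simulation. */
#include <stdio.h>

int main(void) {
    int arrival[] = {1, 3, 6, 7, 9};
    int burst[]   = {7, 3, 2, 10, 8};
    int done[5] = {0}, n = 5, clock = 0, total_wt = 0;

    for (int finished = 0; finished < n; finished++) {
        int pick = -1;
        for (int i = 0; i < n; i++)          /* shortest available job */
            if (!done[i] && arrival[i] <= clock &&
                (pick < 0 || burst[i] < burst[pick]))
                pick = i;
        if (pick < 0) { clock++; finished--; continue; }  /* CPU idle */
        clock += burst[pick];                /* completion time        */
        int wt = clock - arrival[pick] - burst[pick];
        total_wt += wt;
        printf("P%d: CT=%2d WT=%2d\n", pick + 1, clock, wt);
        done[pick] = 1;
    }
    printf("average waiting time = %d/%d\n", total_wt, n);  /* 27/5 */
    return 0;
}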


Round Robin Scheduling Algorithm
Round Robin scheduling algorithm is one of the most popular scheduling algorithms and one
which can actually be implemented in most operating systems. This is the preemptive
version of first come first serve scheduling. The algorithm focuses on time sharing. In this
algorithm, every process gets executed in a cyclic way. A certain time slice is defined in the
system which is called time quantum. Each process present in the ready queue is assigned
the CPU for that time quantum, if the execution of the process is completed during that time
then the process will terminate else the process will go back to the ready queue and waits
for the next turn to complete the execution.

Advantages
1. It is actually implementable in the system because it does not depend on the burst
time.
2. It doesn’t suffer from the problem of starvation or the convoy effect.
3. All the jobs get a fair allocation of CPU.
Disadvantages
1. The higher the time quantum, the higher the response time in the system.
2. The lower the time quantum, the higher the context switching overhead in the system.
3. Deciding a perfect time quantum is really a very difficult task in the system.

RR Scheduling Example
In the following example, there are six processes named as P1, P2, P3, P4, P5 and P6. Their
arrival time and burst time are given below in the table. The time quantum of the system is 4
units.
Process ID Arrival Time Burst Time

1 0 5

2 1 6

3 2 3

4 3 1

5 4 5

6 6 4

According to the algorithm, we have to maintain the ready queue and the Gantt chart. The
structure of both data structures will change after every scheduling decision.
Ready Queue:
Initially, at time 0, process P1 arrives which will be scheduled for the time slice 4 units.
Hence in the ready queue, there will be only one process P1 at starting with CPU burst time 5
units.
P1

GANTT chart
The P1 will be executed for 4 units first.
Ready Queue
During the execution of P1, four more processes P2, P3, P4 and P5 arrive in the ready
queue. P1 has not completed yet; it needs another 1 unit of time, hence it will also be added
back to the ready queue.
P2 P3 P4 P5 P1

6 3 1 5 1

GANTT chart
After P1, P2 will be executed for 4 units of time which is shown in the Gantt chart.
Ready Queue
During the execution of P2, one more process, P6, arrives in the ready queue. Since P2 has
not completed yet, P2 will also be added back to the ready queue with the remaining
burst time of 2 units.
P3 P4 P5 P1 P6 P2

3 1 5 1 4 2
GANTT chart
After P1 and P2, P3 will get executed for 3 units of time since its CPU burst time is only 3
units.
Ready Queue
Since P3 has been completed, it will be terminated and not added back to the ready
queue. The next process to be executed is P4.
P4 P5 P1 P6 P2

1 5 1 4 2

GANTT chart
After P1, P2 and P3, P4 will get executed. Its burst time is only 1 unit, which is less than
the time quantum, hence it will be completed.
Ready Queue
The next process in the ready queue is P5 with 5 units of burst time. Since P4 is completed
hence it will not be added back to the queue.
P5 P1 P6 P2

5 1 4 2

GANTT chart
P5 will be executed for the whole time slice because it requires 5 units of burst time which is
higher than the time slice.
Ready Queue
P5 has not been completed yet; it will be added back to the queue with the remaining burst
time of 1 unit.
P1 P6 P2 P5

1 4 2 1

GANTT Chart
The process P1 will be given the next turn to complete its execution. Since it only requires 1
unit of burst time hence it will be completed.
Ready Queue
P1 is completed and will not be added back to the ready queue. The next process P6 requires
only 4 units of burst time and it will be executed next.
P6 P2 P5

4 2 1

GANTT chart
P6 will be executed for 4 units of time till completion.
Ready Queue
Since P6 is completed, it will not be added again to the queue. There are only two
processes present in the ready queue. The next process, P2, requires only 2 units of time.
P2 P5
2 1

GANTT Chart
P2 will get executed again; since it requires only 2 units of time, it will be
completed.
Ready Queue
Now, the only available process in the queue is P5, which requires 1 unit of burst time. Since
the time slice is 4 units, it will be completed in the next burst.
P5

GANTT chart
P5 will get executed till completion.
The completion time, Turnaround time and waiting time will be calculated as shown in the
table below.
As, we know,
1. Turn Around Time = Completion Time – Arrival Time
2. Waiting Time = Turn Around Time – Burst Time

Process ID | Arrival Time | Burst Time | Completion Time | Turn Around Time | Waiting Time
P1 | 0 | 5 | 17 | 17 | 12
P2 | 1 | 6 | 23 | 22 | 16
P3 | 2 | 3 | 11 | 9 | 6
P4 | 3 | 1 | 12 | 9 | 8
P5 | 4 | 5 | 24 | 20 | 15
P6 | 6 | 4 | 21 | 15 | 11

Avg Waiting Time = (12+16+6+8+15+11)/6 = 68/6 units
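The whole walkthrough above can be condensed into a short simulation. The following C
sketch reproduces the completion times for quantum = 4; it assumes, as in the walkthrough,
that processes arriving during a burst are queued ahead of the preempted process, and that
arrivals keep the CPU busy (idle gaps are not handled).

/* Round Robin simulation with a simple FIFO ready queue. */
#include <stdio.h>

#define N 6
int arrival[N] = {0, 1, 2, 3, 4, 6};
int burst[N]   = {5, 6, 3, 1, 5, 4};

int queue[64], head = 0, tail = 0;
void enqueue(int p) { queue[tail++] = p; }

int main(void) {
    int rem[N], completion[N], clock = 0, next = 0, quantum = 4;
    for (int i = 0; i < N; i++) rem[i] = burst[i];

    enqueue(0);  next = 1;                   /* P1 arrives at time 0      */
    while (head < tail) {
        int p = queue[head++];
        int run = rem[p] < quantum ? rem[p] : quantum;
        clock += run;
        rem[p] -= run;
        while (next < N && arrival[next] <= clock)
            enqueue(next++);                 /* admit new arrivals first  */
        if (rem[p] > 0) enqueue(p);          /* re-queue if unfinished    */
        else completion[p] = clock;
    }
    for (int i = 0; i < N; i++) {
        int tat = completion[i] - arrival[i];
        printf("P%d: CT=%2d TAT=%2d WT=%2d\n",
               i + 1, completion[i], tat, tat - burst[i]);
    }
    return 0;
}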

Advantages of Round Robin Scheduling Algorithm


Some advantages of the Round Robin scheduling algorithm are as follows:
 While performing this scheduling algorithm, a particular time quantum is allocated to
different jobs.
 In terms of average response time, this algorithm gives the best performance.
 With the help of this algorithm, all the jobs get a fair allocation of CPU.
 In this algorithm, there are no issues of starvation or convoy effect.
 This algorithm deals with all processes without any priority.
 This algorithm is cyclic in nature.
 In this, the newly created process is added to the end of the ready queue.
 Also, in this, a round-robin scheduler generally employs time-sharing which means
providing each job a time slot or quantum.
 In this scheduling algorithm, each process gets a chance to reschedule after a particular
quantum time.
Disadvantages of Round Robin Scheduling Algorithm
Some disadvantages of the Round Robin scheduling algorithm are as follows:
 This algorithm spends more time on context switches.
 For small quantum, it is time-consuming scheduling.
 This algorithm offers a larger waiting time and response time.
 In this, there is low throughput.
 If time quantum is less for scheduling then its Gantt chart seems to be too big.
Multilevel Queue (MLQ) CPU Scheduling
It may happen that the processes in the ready queue can be divided into different classes, where
each class has its own scheduling needs. For example, a common division is foreground
(interactive) processes and background (batch) processes. These two classes have different
scheduling needs. For this kind of situation, Multilevel Queue Scheduling is used. Now, let us
see how it works.
The ready queue is divided into separate queues for each class of processes. For example, let us
take three different types of processes: system processes, interactive processes and batch
processes. All three process types have their own queue. Now, look at the figure below.

All three different types of processes have their own queue. Each queue has its own
scheduling algorithm. For example, queue 1 and queue 2 use Round Robin while queue 3
can use FCFS to schedule their processes.

Scheduling among the queues: What will happen if all the queues have some processes?
Which process should get the CPU? To determine this, scheduling among the queues is
necessary. There are two ways to do so –

1. Fixed priority preemptive scheduling method – Each queue has absolute priority over
lower priority queues. Let us consider the following priority order: queue 1 > queue 2 > queue
3. According to this algorithm, no process in the batch queue (queue 3) can run unless
queues 1 and 2 are empty. If any batch process (queue 3) is running and any system
(queue 1) or interactive process (queue 2) enters the ready queue, the batch process is
preempted.
2. Time slicing – In this method each queue gets a certain portion of CPU time and can use it
to schedule its own processes. For instance, queue 1 takes 50 percent of CPU time, queue
2 takes 30 percent and queue 3 gets 20 percent of CPU time.
Example Problem:
Consider the table of four processes below under multilevel queue scheduling. The queue number
denotes the queue of the process.
The priority of queue 1 is greater than that of queue 2. Queue 1 uses Round Robin (Time
Quantum = 2) and queue 2 uses FCFS.
At the start both queues have processes, so the processes in queue 1 (P1, P2) run first (because of
higher priority) in round robin fashion and complete after 7 units; then the process in queue 2
(P3) starts running (as there is no process in queue 1), but while it is running P4 arrives in
queue 1 and interrupts P3, running for 5 units, and after its completion P3 takes the
CPU and completes its execution.
Advantages:
 The processes are permanently assigned to the queue, so it has advantage of low
scheduling overhead.
Disadvantages:
 Some processes may starve for the CPU if the higher priority queues never become
empty.
 It is inflexible in nature.

UNIT-II
: Memory Management
Memory Management: Introduction
 A computer uses two types of storage: Main memory and secondary memory.
 Main memory temporarily stores data and instructions to be executed by the
computer.
 As main memory is usually too small to accommodate all the data & programs, the
information is stored in various secondary storage devices such as magnetic disks
and magnetic tapes in the form of files.
 The part of the operating system that manages memory is called the memory manager.
 The memory manager performs the following functions:
1. To keep track of which parts of memory are in use and which are free.
2. To allocate memory space to processes when they need it and to deallocate it when
they are done.
3. To manage swapping between main memory and disk when main memory is not big
enough to hold all the processes.
4. To protect the various processes so that they do not interfere with each other.
5. To enable sharing of memory space between processes whenever required.
6. To make the addressing of memory space as transparent as possible to the
programmer.
 In order to implement these functions memory manager uses variety of memory
management schemes
Contiguous, Real memory management system
 Single contiguous memory management system
 Fixed partitioned memory management system
 Variable partitioned memory management system
Non-Contiguous, real memory management system
 Paged memory management system
 Segmented memory management system
 Combined memory management system
Non-contiguous, virtual memory management system
 Virtual memory paging
 Virtual memory segmentation

Address binding
The association of program instructions and data with actual physical memory locations is called
address binding. Consider the example below for better understanding.
Consider a program P1 with the set of instructions I1, I2, I3, I4 bound to the memory locations
10, 20, 30, 40 respectively, so the program counter takes the values 10, 20, 30, 40:
Program P1:
I1 –> 10
I2 –> 20
I3 –> 30
I4 –> 40
Program Counter = 10, 20, 30, 40

Types of Address Binding:


Address Binding divided into three types as follows.
1. Compile-time Address Binding
2. Load time Address Binding
3. Execution time Address Binding

Compile-time Address Binding:


 If the compiler is responsible for performing address binding then it is called
compile-time address binding.
 It will be done before loading the program into memory.
 The compiler interacts with the OS memory manager to perform compile-time
address binding.

Load time Address Binding:


 It will be done after loading the program into memory.
 This type of address binding is done by the OS memory manager, i.e. the loader.

Execution time or dynamic Address Binding:


 Binding is postponed until run time, even after loading the program into memory.
 The program may keep changing its location in memory until the time of
program execution.
 This dynamic type of address binding is done by the processor at the time of
program execution.

Requirements of Memory Management System


Memory management keeps track of the status of each memory location, whether it is allocated
or free. It allocates memory dynamically to programs at their request and frees it for reuse
when it is no longer needed. Memory management is meant to satisfy some requirements that we
should keep in mind.
These Requirements of memory management are:
1. Relocation – The available memory is generally shared among a number of processes
in a multiprogramming system, so it is not possible to know in advance which other
programs will be resident in main memory at the time of execution of a given program.
Swapping the active processes in and out of main memory enables the operating
system to have a larger pool of ready-to-execute processes. When a program gets
swapped out to disk, it is not always possible for it to be swapped back into the same
main memory location, since that location may still be occupied by another process.
We may need to relocate the process to a different area of memory. Thus there is a
possibility that a program may be moved in main memory due to swapping.

The figure depicts a process image. The process image is occupying a continuous region of main
memory. The operating system will need to know many things including the location of process
control information, the execution stack, and the code entry. Within a program, there are memory
references in various instructions and these are called logical addresses.
After loading of the program into main memory, the processor and the operating system must be
able to translate logical addresses into physical addresses. Branch instructions contain the address
of the next instruction to be executed. Data reference instructions contain the address of byte or
word of data referenced.
(1) Static Relocation
The first basic method of relocation is static relocation. Static relocation is performed before or
during the loading of the program into memory, by a relocating linker/ loader.
In systems with static relocation, a swapped-out process must be swapped back into the
same partition from which it was evicted.
Software relocation must be repeated whenever the process is to be loaded into a different
partition.
Systems with static relocation are practically restricted to supporting only static binding of
processes to partitions.
This method is a slow process because it involves software translation. It is used only once,
before the initial loading of the program.
In this approach, the operating system adjusts the memory addresses of a process to reflect
its starting position in memory.
Once a process is assigned a starting address in memory, it executes within the space it has
been allocated.
(2) Dynamic Relocation
Another method of relocation is dynamic relocation. Here the mapping from the virtual address
space to the physical address space is performed at run time.
This run-time mapping from virtual address to physical address is performed by a hardware
device known as the memory management unit (MMU).
When the program is compiled, or when a process is executed by the CPU, a logical address is
generated. Call this address LA; a logical address can never be less than zero, it has to be zero
or more.
So whenever the CPU generates a logical address, the hardware adds this logical address to the
base register (relocation register) content, and that gives the physical address in main memory
where the instruction or data will be found. A minimal sketch of this translation is given below.
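The following C sketch shows this base/limit translation; the register values are made up
for illustration.

/* Dynamic relocation: MMU adds the relocation (base) register to every
 * CPU-generated logical address after checking it against the limit. */
#include <stdio.h>
#include <stdlib.h>

unsigned base = 14000, limit = 3000;     /* hypothetical register contents */

unsigned translate(unsigned logical) {
    if (logical >= limit) {              /* protection check              */
        fprintf(stderr, "trap: address %u out of range\n", logical);
        exit(1);
    }
    return base + logical;               /* physical address              */
}

int main(void) {
    printf("logical 346 -> physical %u\n", translate(346));  /* 14346 */
    return 0;
}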

2. Protection – There is always a danger when we have multiple programs at the same
time, as one program may write to the address space of another program. So every
process must be protected against unwanted interference when another process tries
to write into its address space, whether accidentally or incidentally. Between the
relocation and protection requirements a trade-off occurs, as satisfying the relocation
requirement increases the difficulty of satisfying the protection requirement.
Prediction of the location of a program in main memory is not possible,
which is why it is impossible to check absolute addresses at compile time to assure
protection. Most programming languages allow the dynamic calculation of
addresses at run time. The memory protection requirement must be satisfied by the
processor rather than the operating system, because the operating system can hardly
control a process when it occupies the processor. Thus it is possible to check the
validity of memory references at run time.
3. Sharing – A protection mechanism must allow several processes to access the
same portion of main memory. Allowing each process access to the same copy of
a program, rather than having its own separate copy, has an advantage. For
example, multiple processes may use the same system file, and it is natural to load
one copy of the file into main memory and let it be shared by those processes. It is the
task of memory management to allow controlled access to the shared areas of
memory without compromising protection. The mechanisms used to support
relocation also support sharing capabilities.

Loading and Linking in Memory Management


Loading and linking are two further basic concepts in memory management. Loading and
linking in memory management are defined as follows:
Static | Dynamic
Loading the entire program into main memory before the start of program execution is called static loading. | Loading the program into main memory on demand is called dynamic loading.
Inefficient utilization of memory, because the entire program is brought into main memory whether it is required or not. | Efficient utilization of memory.
Program execution will be faster. | Program execution will be slower.
A statically linked program takes a constant load time every time it is loaded into memory for execution. | Dynamic linking is performed at run time by the operating system.
If static loading is used then accordingly static linking is applied. | If dynamic loading is used then accordingly dynamic linking is applied.

Linking
Linking is performed at both compile time, when the source code is translated into machine code
and load time, when the program is loaded into memory by the loader. Linking is performed at
the last step in compiling a program.
 Source code -> compiler -> Assembler -> Object code -> Linker -> Executable file ->
Loader
Linking is of two types:
1. Static Linking –
It is performed during the compilation of the source program. In static linking, linking
is performed before execution. It takes a collection of relocatable object files and
command-line arguments and generates a fully linked object file that can be loaded and
run.
Static linkers perform two major tasks:
 Symbol resolution – It associates each symbol reference with exactly one symbol
definition. Every symbol has a predefined task.
 Relocation – It relocates code and data sections and modifies symbol references to the
relocated memory locations.
2. Dynamic linking
 Dynamic linking is performed at run time. This linking is accomplished by
placing the name of a shareable library in the executable image. There are more
chances of errors and failures. It requires less memory space, as multiple
programs can share a single copy of the library.
 Here we can perform code sharing: it means we are using the same object a number of
times in the program. Instead of linking the same object again and again into the library,
each module shares the information of an object with the other modules having the same
object. The shared library needed for linking is stored in virtual memory to save RAM. In
this linking we can also relocate the code for the smooth running of the code, but not all
the code is relocatable. It fixes the addresses at run time.

Paging
In Operating Systems, Paging is a storage mechanism used to retrieve processes from the
secondary storage into the main memory in the form of pages.
The main idea behind the paging is to divide each process in the form of pages. The main
memory will also be divided in the form of frames.
One page of the process is to be stored in one of the frames of the memory. The pages can be
stored at the different locations of the memory but the priority is always to find the contiguous
frames or holes.
Pages of the process are brought into the main memory only when they are required otherwise
they reside in the secondary storage.
Different operating systems define different frame sizes. The size of each frame must be equal.
Considering the fact that pages are mapped to frames in paging, the page size needs to be the
same as the frame size.

Let us consider the main memory size 16 Kb and Frame size is 1 KB therefore the main memory
will be divided into the collection of 16 frames of 1 KB each.
There are 4 processes in the system that is P1, P2, P3 and P4 of 4 KB each. Each process is
divided into pages of 1 KB each so that one page can be stored in one frame.
Initially, all the frames are empty therefore pages of the processes will get stored in the
contiguous way.
Frames, pages and the mapping between the two is shown in the image below.

Memory Management Unit


The purpose of Memory Management Unit (MMU) is to convert the logical address into the
physical address. The logical address is the address generated by the CPU for every page while
the physical address is the actual address of the frame where each page will be stored.
When a page is to be accessed by the CPU by using the logical address, the operating system
needs to obtain the physical address to access that page physically.
The logical address has two parts.
1. Page Number
2. Offset
Memory management unit of OS needs to convert the page number to the frame number.
Example
Considering the above image, let’s say that the CPU demands the 10th word of the 4th page of
process P3. Since page number 4 of process P3 is stored at frame number 9, the 10th word of
the 9th frame will be returned as the physical address.
Address generated by CPU is divided into
 Page number (p): Number of bits required to represent the pages in the Logical Address
Space, or the page number.
 Page offset (d): Number of bits required to represent a particular word in a page, or the
page size of the Logical Address Space, or the word number of a page, or the page offset.
Physical Address is divided into
 Frame number (f): Number of bits required to represent the frame of the Physical
Address Space, or the frame number.
 Frame offset (d): Number of bits required to represent a particular word in a frame, or the
frame size of the Physical Address Space, or the word number of a frame, or the frame
offset. A code sketch of this translation is given below.
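The translation just described can be sketched in a few lines of C. The page-table contents
below are hypothetical (page 3, counting pages from 0, maps to frame 9, echoing the example
above), and a 1 KB page and frame size is assumed.

/* Paging: split the logical address into page number p and offset d,
 * look p up in the page table, and join frame f with the same offset. */
#include <stdio.h>

#define PAGE_SIZE 1024                    /* 1 KB pages and frames        */

int page_table[4] = {5, 6, 7, 9};         /* hypothetical page -> frame   */

unsigned translate(unsigned logical) {
    unsigned p = logical / PAGE_SIZE;     /* page number                  */
    unsigned d = logical % PAGE_SIZE;     /* page offset                  */
    unsigned f = page_table[p];           /* frame number                 */
    return f * PAGE_SIZE + d;             /* physical address             */
}

int main(void) {
    unsigned la = 3 * PAGE_SIZE + 10;     /* word 10 of page 3            */
    printf("logical %u -> physical %u\n", la, translate(la));  /* 9226 */
    return 0;
}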
Benefits of Paging
The benefits of Paging are:
 In Paging, there is no need of external fragmentation.
 In Paging, the swapping between the equal-size pages and page frames is easy.
 Paging is a simple technique that we use for memory management.
Advantages:
· Allocating memory is easy and cheap
· Any free page is ok, OS can take first one out of list it keeps
· Eliminates external fragmentation
· Data (page frames) can be scattered all over PM
· Pages are mapped appropriately anyway
· Allows demand paging and prepaging
· More efficient swapping
· No need for considerations about fragmentation
· Just swap out page least likely to be used
Disadvantages:
· Longer memory access times (page table lookup)
· Can be improved using TLB
· Guarded page tables
· Inverted page tables
· Memory requirements (one entry per VM page)
· Improve using Multilevel page tables and variable page sizes (super-pages)
· Guarded page tables
· Page Table Length Register (PTLR) to limit virtual memory size
· Internal fragmentation
Segmentation
In Operating Systems, Segmentation is a memory management technique in which, the memory
is divided into the variable size parts. Each part is known as segment which can be allocated to a
process.
The details about each segment are stored in a table called as segment table. Segment table is
stored in one (or many) of the segments.
There are two types of segmentation:
1. Virtual memory segmentation –
Each process is divided into a number of segments, not all of which are resident at
any one point in time.
2. Simple segmentation –
Each process is divided into a number of segments, all of which are loaded into
memory at run time, though not necessarily contiguously.
Segment table contains mainly two information about segment:
1. Base: It is the base address of the segment
2. Limit: It is the length of the segment.
Translation of Logical address into physical address by segment table
CPU generates a logical address which contains two parts:
1. Segment Number
2. Offset
The Segment number is mapped to the segment table. The limit of the respective segment is
compared with the offset. If the offset is less than the limit then the address is valid otherwise it
throws an error as the address is invalid.
In the case of a valid address, the base address of the segment is added to the offset to get the
physical address of the actual word in main memory, as the sketch below illustrates.
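A minimal C sketch of this check-and-add translation, with a made-up segment table:

/* Segmentation: the segment number indexes the segment table; the
 * offset is valid only if it is below the segment's limit. */
#include <stdio.h>
#include <stdlib.h>

struct segment { unsigned base, limit; };

struct segment seg_table[3] = {           /* hypothetical entries */
    {1400, 1000}, {6300, 400}, {4300, 1100}
};

unsigned translate(unsigned s, unsigned offset) {
    if (offset >= seg_table[s].limit) {   /* invalid address -> trap */
        fprintf(stderr, "trap: offset %u exceeds limit\n", offset);
        exit(1);
    }
    return seg_table[s].base + offset;    /* physical address */
}

int main(void) {
    printf("segment 2, offset 53 -> physical %u\n", translate(2, 53)); /* 4353 */
    return 0;
}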

Advantages of Segmentation
1. No internal fragmentation
2. Average Segment Size is larger than the actual page size.
3. Less overhead
4. It is easier to relocate segments than entire address space.
5. The segment table is of smaller size as compared to the page table in paging.
Disadvantages
1. It can have external fragmentation.
2. It is difficult to allocate contiguous memory to variable sized partition.
3. Costly memory management algorithms.
Difference between paging and segmentation
S.No. | Paging | Segmentation
1. | In paging, the program is divided into fixed-size pages. | In segmentation, the program is divided into variable-size sections.
2. | For paging, the operating system is accountable. | For segmentation, the compiler is accountable.
3. | Page size is determined by the hardware. | Here, the section size is given by the user.
4. | Paging is faster in comparison to segmentation. | Segmentation is slower.
5. | Paging could result in internal fragmentation. | Segmentation could result in external fragmentation.
6. | In paging, the logical address is split into a page number and a page offset. | Here, the logical address is split into a section number and a section offset.
7. | Paging comprises a page table which encloses the base address of every page. | Segmentation comprises a segment table which encloses the segment number and segment offset.
8. | The page table is employed to keep up the page data. | The section table maintains the section data.
9. | In paging, the operating system must maintain a free frame list. | In segmentation, the operating system maintains a list of holes in main memory.
10. | Paging is invisible to the user. | Segmentation is visible to the user.
11. | In paging, the processor needs the page number and offset to calculate the absolute address. | In segmentation, the processor uses the segment number and offset to calculate the full address.

Virtual Memory
A computer can address more memory than the amount physically installed on the system. This
extra memory is actually called virtual memory and it is a section of a hard disk that’s set up to
emulate the computer’s RAM.
The main visible advantage of this scheme is that programs can be larger than physical memory.
Virtual memory serves two purposes. First, it allows us to extend the use of physical memory by
using disk. Second, it allows us to have memory protection, because each virtual address is
translated to a physical address.
Following are the situations when the entire program is not required to be fully loaded in main
memory.
 User written error handling routines are used only when an error occurred in the
data or computation.
 Certain options and features of a program may be used rarely.
 Many tables are assigned a fixed amount of address space even though only a small
amount of the table is actually used.
 The ability to execute a program that is only partially in memory would confer
many benefits.
 Fewer I/O operations would be needed to load or swap each user program into
memory.
 A program would no longer be constrained by the amount of physical memory that is
available.
 Each user program could take less physical memory, and more programs could be run at
the same time, with a corresponding increase in CPU utilization and throughput.
In modern microprocessors intended for general-purpose use, a memory management unit, or
MMU, is built into the hardware. The MMU’s job is to translate virtual addresses into physical
addresses.
Virtual memory is commonly implemented by demand paging. It can also be implemented in a
segmentation system. Demand segmentation can also be used to provide virtual memory.
Advantages of Virtual Memory
 Virtual memory helps to gain speed when only a particular segment of the program
is required for the execution of the program.
 It is very helpful in implementing a multiprogramming environment.
 It allows you to run more applications at once.
 It helps you to fit many large programs into a smaller physical memory.
 Common data or code may be shared between programs in memory.
 Process may become even larger than all of the physical memory.
 Data / code should be read from disk whenever required.
 The code can be placed anywhere in physical memory without requiring relocation.
 More processes should be maintained in the main memory, which increases the
effective use of CPU.
 Each page is stored on a disk until it is required; after that, it will be removed.
 It allows more applications to be run at the same time.
 There is no specific limit on the degree of multiprogramming.
 Large programs should be written, as virtual address space available is more
compared to physical memory.
Disadvantages of Virtual Memory
Here, are drawbacks/cons of using virtual memory:
 Applications may run slower if the system is using virtual memory.
 Likely takes more time to switch between applications.
 Offers lesser hard drive space for your use.
 It reduces system stability.
 It allows larger applications to run in systems that don’t offer enough physical RAM
alone to run them.
 It doesn’t offer the same performance as RAM.
 It negatively affects the overall performance of a system.
 Occupy the storage space, which may be used otherwise for long term data storage.
Demand Paging
A demand paging system is quite similar to a paging system with swapping where processes
reside in secondary memory and pages are loaded only on demand, not in advance. When a
context switch occurs, the operating system does not copy any of the old program’s pages out to
the disk or any of the new program’s pages into the main memory. Instead, it just begins
executing the new program after loading the first page and fetches that program’s pages as they
are referenced.
While executing a program, if the program references a page which is not available in the main
memory because it was swapped out a little while ago, the processor treats this invalid memory
reference as a page fault and transfers control from the program to the operating system to
bring the page back into memory.

Advantages
Following are the advantages of Demand Paging −
 Large virtual memory.
 More efficient use of memory.
 There is no limit on degree of multiprogramming.
Disadvantages
 Number of tables and the amount of processor overhead for handling page
interrupts are greater than in the case of the simple paged management techniques.
Page Replacement Algorithm
Page replacement algorithms are the techniques by which an operating system decides which
memory pages to swap out (write to disk) when a page of memory needs to be allocated. Page
replacement happens whenever a page fault occurs and a free page cannot be used for the
allocation, either because no page is available or because the number of free pages is lower
than the number of required pages.
When the page that was selected for replacement and was paged out is referenced again, it has to
be read in from disk, and this requires waiting for I/O completion. This determines the quality of
the page replacement algorithm: the less time spent waiting for page-ins, the better the algorithm.
A page replacement algorithm looks at the limited information about accessing the pages
provided by hardware, and tries to select which pages should be replaced to minimize the total
number of page misses, while balancing it with the costs of primary storage and processor time of
the algorithm itself. There are many different page replacement algorithms. We evaluate an
algorithm by running it on a particular string of memory references and computing the number of
page faults.
Reference String
The string of memory references is called the reference string. Reference strings are generated
artificially or by tracing a given system and recording the address of each memory reference. The
latter choice produces a large amount of data, from which we note two things.
 For a given page size, we need to consider only the page number, not the entire
address.
 If we have a reference to a page p, then any immediately following references to
page p will never cause a page fault. Page p will be in memory after the first
reference; the immediately following references will not fault.
 For example, consider the following sequence of addresses:
123, 215, 600, 1234, 76, 96
 If the page size is 100, then the reference string is 1, 2, 6, 12, 0, 0 (computed in the
sketch below).
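The derivation of the reference string is just integer division by the page size, as this
small C sketch shows:

/* Map each memory address to its page number: page = address / size. */
#include <stdio.h>

int main(void) {
    int addr[] = {123, 215, 600, 1234, 76, 96};
    int page_size = 100;
    for (int i = 0; i < 6; i++)
        printf("%d ", addr[i] / page_size);  /* prints: 1 2 6 12 0 0 */
    printf("\n");
    return 0;
}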
First In First Out (FIFO) algorithm
 Oldest page in main memory is the one which will be selected for replacement.
 Easy to implement, keep a list, replace pages from the tail and add new pages at the
head.

Advantages of FIFO page replacement


 It is simple and easy to implement
 It can easily be coded.

Disadvantages of FIFO page replacement


 The FIFO replacement algorithm may replace heavily used pages.
 It suffers from Belady’s anomaly. A fault-counting sketch follows.
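A minimal C sketch of FIFO replacement that counts page faults; the reference string and the
frame count are made up for illustration.

/* FIFO page replacement: the oldest resident page (tracked by a
 * rotating index) is the victim on every fault. */
#include <stdio.h>

int main(void) {
    int ref[] = {1, 3, 0, 3, 5, 6, 3};    /* hypothetical reference string */
    int n = 7, nframes = 3;
    int frames[3] = {-1, -1, -1};
    int oldest = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < nframes; j++)
            if (frames[j] == ref[i]) hit = 1;
        if (!hit) {                       /* page fault: replace oldest */
            frames[oldest] = ref[i];
            oldest = (oldest + 1) % nframes;
            faults++;
        }
    }
    printf("page faults = %d\n", faults); /* 6 for this string */
    return 0;
}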
Optimal Page algorithm
 An optimal page-replacement algorithm has the lowest page-fault rate of all
algorithms. An optimal page-replacement algorithm exists, and has been called OPT
or MIN.
 Replace the page that will not be used for the longest period of time. Use the time
when a page is to be used.
Advantages of Optimal page replacement
 It has the lowest rate of occurrence of page faults.
 It improves system performance by reducing the overhead of handling page
faults and of swapping pages in and out when a page fault occurs.
 It does not suffer from Belady’s anomaly.
Disadvantages of optimal page replacement
 It is difficult to implement, because it requires future knowledge of the reference
string.
Least Recently Used (LRU) algorithm
 Page which has not been used for the longest time in main memory is the one which
will be selected for replacement.
 Easy to implement: keep a list and replace pages by looking back in time. A small
fault-counting sketch follows.
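LRU can be sketched in the same way by stamping each resident page with the time of its
last use; the data below reuses the hypothetical string from the FIFO example.

/* LRU page replacement: on a fault, evict the resident page with the
 * smallest last-use timestamp. */
#include <stdio.h>

int main(void) {
    int ref[] = {1, 3, 0, 3, 5, 6, 3};
    int n = 7, nframes = 3;
    int frames[3] = {-1, -1, -1}, last_used[3] = {0};
    int faults = 0;

    for (int t = 0; t < n; t++) {
        int hit = -1, lru = 0;
        for (int j = 0; j < nframes; j++) {
            if (frames[j] == ref[t]) hit = j;
            if (last_used[j] < last_used[lru]) lru = j;  /* least recent */
        }
        if (hit >= 0) last_used[hit] = t + 1;            /* refresh on hit */
        else { frames[lru] = ref[t]; last_used[lru] = t + 1; faults++; }
    }
    printf("page faults = %d\n", faults);                /* 5 for this string */
    return 0;
}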

Page Buffering algorithm


 To get a process start quickly, keep a pool of free frames.
 On page fault, select a page to be replaced.
 Write the new page in the frame of free pool, mark the page table and restart the
process.
 Now write the dirty page out of disk and place the frame holding replaced page in
free pool.
Least frequently Used (LFU) algorithm
 The page with the smallest count is the one which will be selected for replacement.
 This algorithm suffers from the situation in which a page is used heavily during the
initial phase of a process, but then is never used again.
Most frequently Used (MFU) algorithm
 This algorithm is based on the argument that the page with the smallest count was
probably just brought in and has yet to be used.

UNIT-III
: I/O Device Management
I/O Device Management
 Many different I/O devices are used in modern computer system. For example, disks,
printers, keyboard, network interface cards etc.
 Some devices are purely input devices, some are purely output devices and others are
both input and output. For example, a keyboard is an input-only device, a display
monitor is an output-only device and a disk is an input/output device.
 The devices are normally manipulated by special software programs called device
drivers.
 Some devices store data temporarily and transfer it from one hardware component
to another, while others store data permanently and are called storage devices.
I/O Devices
The system I/O devices generally fall into three categories on the basis of how the devices are
managed and allocated by an operating system:
Dedicated Devices
 Dedicated devices are those which are assigned to a single process at a time by the
operating system.
 The process keeps this device until it terminates.
 There are certain devices which can be used only in a dedicated manner and cannot
be shared, for example, tape devices and plotters.
Shared devices
 Shared devices are shared amongst several processes at a time by interleaved
execution.
 This interleaving of process requests is controlled by device manager.
 However, if two processes request the same device at the same time, the conflict is
resolved by using some policy that decides which request should be entertained
first.
Virtual devices
 The SPOOLing (simultaneous peripheral operations on line) technique can be used to
convert dedicated devices like card readers and printers into shared devices.
 For example, a spooling program can read and copy all card input onto a disk at high
speed. Later, when a process tries to read a card, the spooling program intercepts the
request and converts it to a read from the disk.
 This technique can be applied to various other devices such as printers, teletypes
and most dedicated slow input/output devices.
Device Controllers
 Device drivers are software modules that can be plugged into an OS to handle a
particular device. Operating System takes help from device drivers to handle all I/O
devices.
 The Device Controller works like an interface between a device and a device driver.
I/O units (Keyboard, mouse, printer, etc.) typically consist of a mechanical
component and an electronic component where electronic component is called the
device controller.
 There is always a device controller and a device driver for each device to
communicate with the operating system. A device controller may be able to handle
multiple devices. As an interface, its main task is to convert a serial bit stream to a
block of bytes and perform error correction as necessary.
 Any device connected to the computer is connected by a plug and socket, and the
socket is connected to a device controller. Following is a model for connecting the
CPU, memory, controllers, and I/O devices where CPU and device controllers all use a
common bus for communication.
Device Drivers
A device driver is a special kind of software program that controls a specific hardware device
and enables different hardware devices to communicate with the computer’s operating system.
A device driver communicates with the computer hardware through the computer subsystem or
computer bus to which the hardware is connected.
Device drivers are essential for a computer system to work properly, because without its device
driver a particular piece of hardware fails to perform the function for which it was created.
Very commonly the term is shortened to just driver; the term hardware driver also refers to a
device driver.
Working of Device Driver:
Device drivers depend upon the operating system’s instructions to access the device and
perform any particular action. After the action they also report back by delivering output or a
status message from the hardware device to the operating system. For example, a printer driver
tells the printer in which format to print after getting instructions from the OS; similarly, a
sound card driver converts the 1’s and 0’s of an MP3 file into audio signals so that you can
enjoy the music. Card readers, controllers, modems, network cards, sound cards, printers,
video cards, USB devices, RAM, speakers, etc. need device drivers to operate.
The following figure illustrates the interaction between user, OS, Device driver and the devices:
Types of Device Driver:
For almost every device associated with the computer system there exists a device driver for
the particular hardware. Device drivers can be broadly classified into two types:
1. Kernel-mode Device Driver –
Kernel-mode device drivers support generic hardware that loads with the operating
system as part of the OS, such as the BIOS, motherboard, processor and other
hardware that is part of the kernel software. These include the minimum device
drivers required by each operating system.
2. User-mode Device Driver –
Devices other than those brought in by the kernel, which a user plugs in while using
the system, also need drivers to function; these fall under user-mode device
drivers. For example, any plug-and-play device falls under this category.
Disk Storage
 Disk is a secondary storage device that is used to store data.
 External devices such as hard disks, magnetic disks and floppy disks store data in
sectors.
 These devices provide the data to user programs by means of I/O requests.
 Disk management is an important function of operating system and includes disk
scheduling, disk buffering and management of swap area and disk cache.
Disk scheduling
Disk scheduling is done by operating systems to schedule I/O requests arriving for the disk. Disk
scheduling is also known as I/O scheduling.
Disk scheduling is important because:
 Multiple I/O requests may arrive from different processes, and only one I/O request can
be served at a time by the disk controller. Thus other I/O requests need to wait in the
waiting queue and need to be scheduled.
 Two or more requests may be far from each other and so can result in greater disk arm
movement.
 Hard drives are one of the slowest parts of the computer system and thus need to be
accessed in an efficient manner.
There are many Disk Scheduling Algorithms but before discussing them let’s have a quick look at
some of the important terms:
Seek Time
Seek time is the time taken to move the disk arm to the specified track where the read/write
request will be satisfied.
Rotational Latency
It is the time taken by the desired sector to rotate into position under the read/write (R/W)
head.
Transfer Time
It is the time taken to transfer the data.
Disk Access Time
Disk access time is given as,
Disk Access Time = Rotational Latency + Seek Time + Transfer Time
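For instance, with a seek time of 5 ms, a rotational latency of 4 ms and a transfer time of 1 ms
(illustrative figures, not from the text), Disk Access Time = 4 + 5 + 1 = 10 ms.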
Disk Response Time
It is the average time spent by each request waiting for the I/O operation.
Purpose of Disk Scheduling
The main purpose of a disk scheduling algorithm is to select a disk request from the queue of
I/O requests and decide when that request will be processed.
Goal of Disk Scheduling Algorithm
 Fairness
 High throughput
 Minimal traveling head time
Disk Scheduling Algorithms
The list of various disk scheduling algorithms is given below. Each algorithm carries its own
advantages and disadvantages, and the limitations of each algorithm led to the evolution of the
next.
 FCFS scheduling algorithm
 SSTF (shortest seek time first) algorithm
 SCAN scheduling
 C-SCAN scheduling
 LOOK Scheduling
 C-LOOK scheduling
FCFS Scheduling Algorithm
 It is the simplest form of disk scheduling algorithm.
 The request that arrives first will be served first.
 It processes the I/O requests in sequential order, i.e., in the same order as they
arrive.
 This method gives fair response times, as every request gets a response in a
reasonable amount of time.
 However, the throughput is not efficient and this algorithm fails to decrease the
average seek time.
 It involves a lot of random head movements and disk rotations.
Disadvantages
 The scheme does not optimize the seek time.
 Requests may come from different processes, so there is the possibility of
inappropriate movement of the head.
Example
Consider the following disk request sequence for a disk with 100 tracks 45, 21, 67, 90, 4, 50, 89,
52, 61, 87, 25
Head pointer starting at 50 and moving in left direction. Find the number of head movements in
cylinders using FCFS scheduling.
Solution: Total head movement = (50−45) + (45−21) + (67−21) + (90−67) + (90−4) + (50−4) +
(89−50) + (89−52) + (61−52) + (87−61) + (87−25) = 5 + 24 + 46 + 23 + 86 + 46 + 39 + 37 + 9 +
26 + 62 = 403 cylinders.
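A minimal sketch (plain Python, using the request sequence of the example above) that computes
total head movement under FCFS:

    def fcfs_movement(requests, head):
        total = 0
        for track in requests:
            total += abs(track - head)   # arm moves directly to the next request
            head = track
        return total

    requests = [45, 21, 67, 90, 4, 50, 89, 52, 61, 87, 25]
    print(fcfs_movement(requests, 50))   # 403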
Shortest seek time First (SSTF) scheduling
Shortest seek time first (SSTF) algorithm selects the disk I/O request which requires the least disk
arm movement from its current position regardless of the direction. It reduces the total seek time
as compared to FCFS.
It allows the head to move to the closest track in the service queue.
Advantages
 It provides better performance than FIFO and produces a smaller number of head
movements.
 Throughput is higher as compared to FIFO.
 SSTF is useful in batch processing systems where throughput is the major consideration.
Disadvantages
 It may cause starvation for some requests.
 Switching direction on a frequent basis slows the working of the algorithm.
 It is not the most optimal algorithm.
Example
Consider the following disk request sequence for a disk with 100 tracks
45, 21, 67, 90, 4, 89, 52, 61, 87, 25
Head pointer starting at 50. Find the number of head movements in cylinders using SSTF
scheduling.
Solution: The service order is 52, 61, 67, 87, 89, 90, 45, 25, 21, 4, giving a total head movement
of 2 + 9 + 6 + 20 + 2 + 1 + 45 + 20 + 4 + 17 = 126 cylinders.
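A matching sketch for SSTF (plain Python, same inputs as the example) that repeatedly services
the pending request closest to the current head position:

    def sstf_movement(requests, head):
        pending = list(requests)
        total = 0
        while pending:
            # Pick the pending request with the least distance from the head.
            track = min(pending, key=lambda t: abs(t - head))
            total += abs(track - head)
            head = track
            pending.remove(track)
        return total

    print(sstf_movement([45, 21, 67, 90, 4, 89, 52, 61, 87, 25], 50))   # 126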
SCAN Scheduling
It is also called the Elevator Algorithm. In this algorithm, the disk arm moves in a particular
direction till the end, satisfying all the requests coming in its path, and then it turns back and
moves in the reverse direction, satisfying requests coming in its path.
It works the way an elevator works: the elevator moves completely to the last floor in one
direction and then turns back.
Example
Consider the following disk request sequence for a disk with 200 tracks
98, 137, 122, 183, 14, 133, 65, 78
Head pointer starting at 54 and moving in left direction. Find the number of head movements in
cylinders using SCAN scheduling.
Number of Cylinders = 40 + 14 + 65 + 13 + 20 + 24 + 11 + 4 + 46 = 237
Advantages of SCAN Scheduling
 Throughput is better than FIFO.
 It also considers the starvation of request. In this regard it is better than SSTF.
Disadvantages of scan scheduling
 The major drawback of this policy is that the disk arm always travels to the end of
the disk, no matter whether any requests are present at the other end of the disk.
C-SCAN Scheduling
In the C-SCAN algorithm, the arm of the disk moves in a particular direction servicing requests
until it reaches the last cylinder, then it jumps to the last cylinder of the opposite end without
servicing any request, and then continues moving in the same direction servicing the remaining
requests.
Example
Consider the following disk request sequence for a disk with 200 tracks
98, 137, 122, 183, 14, 133, 65, 78
Head pointer starting at 54 and moving in left direction. Find the number of head movements in
cylinders using C-SCAN scheduling.
No. of cylinders crossed = 40 + 14 + 199 + 16 + 46 + 4 + 11 + 24 + 20 + 13 = 387
LOOK Scheduling
It is like the SCAN scheduling algorithm to some extent, except that in this scheduling
algorithm the arm of the disk stops moving inwards (or outwards) when no more requests exist
in that direction. This algorithm tries to overcome the overhead of the SCAN algorithm, which
forces the disk arm to move in one direction till the end regardless of whether any request
exists in that direction or not.
Example
Consider the following disk request sequence for a disk with 200 tracks
98, 137, 122, 183, 14, 133, 65, 78
Head pointer starting at 54 and moving in left direction. Find the number of head movements in
cylinders using LOOK scheduling.
Number of cylinders crossed = 40 + 51 + 13 + 20 + 24 + 11 + 4 + 46 = 209
C Look Scheduling
The C-LOOK algorithm is similar to the C-SCAN algorithm to some extent. In this algorithm,
the arm of the disk moves in one direction servicing requests until it reaches the last request
cylinder in that direction, then it jumps to the farthest request cylinder on the opposite side
without servicing any request, and again starts moving in the same direction servicing the
remaining requests.
It differs from C-SCAN in the sense that C-SCAN forces the disk arm to move till the last
cylinder regardless of whether any request is to be serviced on that cylinder or not.
Example
Consider the following disk request sequence for a disk with 200 tracks
98, 137, 122, 183, 14, 133, 65, 78
Head pointer starting at 54 and moving in the right direction. Find the number of head
movements in cylinders using C-LOOK scheduling.
Number of cylinders crossed = 11 + 13 + 20 + 24 + 11 + 4 + 46 + 169 = 298
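To tie the variants together, here is a minimal sketch (plain Python; the track range and inputs
mirror the examples above, and the jump in C-SCAN is counted as movement, matching the
arithmetic used in the example) of SCAN and C-SCAN head-movement computation; LOOK and
C-LOOK follow by stopping at the outermost pending requests instead of the disk ends:

    def scan_movement(requests, head):
        # Head moves left first (as in the example), then reverses at track 0.
        left = sorted(t for t in requests if t < head)
        right = sorted(t for t in requests if t >= head)
        total = 0
        for t in reversed(left):
            total += head - t
            head = t
        total += head              # continue down to track 0
        head = 0
        for t in right:
            total += t - head
            head = t
        return total

    def cscan_movement(requests, head, max_track=199):
        # Head moves left to track 0, jumps to max_track, then continues left.
        left = sorted(t for t in requests if t < head)
        right = sorted(t for t in requests if t >= head)
        total = 0
        for t in reversed(left):
            total += head - t
            head = t
        total += head              # down to track 0
        total += max_track         # jump to the far end (counted as movement here)
        head = max_track
        for t in reversed(right):
            total += head - t
            head = t
        return total

    reqs = [98, 137, 122, 183, 14, 133, 65, 78]
    print(scan_movement(reqs, 54))    # 237
    print(cscan_movement(reqs, 54))   # 387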
File Management
File Management
 File management is the process of storing, controlling and managing data stored on
disks or secondary storage in the form of files.
 File management ensures the consistency of data when multiple users access files
concurrently.
 File management also provides the measures for file security and protection.
File Concept
 A file is a logical collection of information stored on secondary storage such as hard
disk.
 Physically, a file is the smallest allotment of a secondary storage device, e.g. a disk.
Logically, a file is a sequence of logical records, i.e. a sequence of bits and bytes.
 Files can be used to contain data and programs.
 Data files can be numeric, alphabetic, alphanumeric or binary.
 A file has various attributes like name, type, location, size, protection, time and date
of creation etc.
File naming
 A file can be given a name for the convenience of its use by its creator.
 A name is attached to every file so as to uniquely identify it and access it through its
name.
 The exact rules for naming files vary from system to system, but most operating
systems allow strings of one to eight letters as legal file names.
 Digits and a few special characters are also allowed in file names.
 Some systems differentiate between uppercase and lower case characters in names,
whereas other systems consider the two cases equivalent.
File Attributes
A file has a name and data. Moreover, it also stores meta information like file creation date and
time, current size, last modified date, etc. All this information is called the attributes of the file.
Here are some important file attributes used in an OS:
 Name: It is the only information stored in a human-readable form.
 Identifier: Every file is identified by a unique tag number within a file system, known
as an identifier.
 Location: Points to the file location on the device.
 Type: This attribute is required for systems that support various types of files.
 Size: Attribute used to display the current file size.
 Protection: This attribute assigns and controls the access rights of reading, writing,
and executing the file.
 Time, date and security: Used for protection, security, and also for monitoring.
File operations
There are various operations which can be implemented on a file. We will see all of them in
detail.
1. Create
Creation of the file is the most important operation on the file. Different types of files are created
by different methods for example text editors are used to create a text file, word processors are
used to create a word file and Image editors are used to create the image files.
2. Write
Writing a file is different from creating it. The OS maintains a write pointer for every file,
which points to the position in the file at which the data needs to be written.
3. Read
A file can be opened in three different modes: read, write and append. A read pointer is
maintained by the OS, pointing to the position up to which the data has been read.
4. Re-position
Re-positioning is simply moving the file pointer forward or backward depending upon the
user’s requirement. It is also called seeking.
5. Delete
Deleting the file not only deletes all the data stored inside the file; it also deletes all the
attributes of the file. The space which was allocated to the file then becomes available and can
be allocated to other files.
6. Truncate
Truncating erases the data stored inside the file while keeping its attributes. The file itself is
not deleted, although the information stored inside it is discarded.
Access methods
Sequential Access
Most operating systems access files sequentially; in other words, most files need to be accessed
sequentially by the operating system.
In sequential access, the OS reads the file word by word. A pointer is maintained which initially
points to the base address of the file. If the user wants to read the first word of the file, then the
pointer provides that word to the user and advances by one word. This process continues till
the end of the file.
Modern systems do provide the concepts of direct access and indexed access, but the most used
method is sequential access, due to the fact that most files, such as text files, audio files and
video files, need to be accessed sequentially.
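As a toy illustration (plain Python; the "file" is just a list of words, an assumption for the
sketch), sequential access amounts to a read pointer that advances on every read:

    class SequentialFile:
        def __init__(self, words):
            self.words = words
            self.pos = 0                 # read pointer, starts at the base address

        def read(self):
            if self.pos >= len(self.words):
                return None              # end of file
            word = self.words[self.pos]
            self.pos += 1                # advance by one word
            return word

    f = SequentialFile(['alpha', 'beta', 'gamma'])
    print(f.read(), f.read(), f.read(), f.read())   # alpha beta gamma None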
Advantages of Sequential Access
· It is one of the simplest methods of file access and easy to implement.
· There is no need for any storage-space identification.
· It uses disk and memory efficiently.
· It also allows data to be stored on many types of media, in a device-independent manner.
· Errors in the files remain localized.
· It is also economical and easier to organize and maintain.
Disadvantages of sequential access method
· Searching a record is a time-consuming affair, as it allows only sequential access.
· New records can only be added at the end of the file.
· It requires the transactions to be stored in a particular sequence before processing.
· There is high data redundancy.
· It is not possible to handle random enquiries.
Direct Access
Direct access is mostly required in the case of database systems. In most cases, we need
filtered information from the database, and sequential access can be very slow and inefficient
in such cases.
Suppose every block of storage stores 4 records and we know that the record we need is stored
in the 10th block. In that case, sequential access is wasteful, because it would traverse all the
preceding blocks in order to reach the needed record.
Direct access gives the required result, even though the operating system has to perform some
complex tasks, such as determining the desired block number. It is generally implemented in
database applications.
Advantages
 System performance improves by direct transfer of data between memory and I/O
devices, saving the CPU this work.
 The CPU can utilize the saved time for performing operations that do not require the
system buses.
 It also allows random record deletion and insertion.
 It allows faster updating of several files.
Disadvantage
 The CPU is rendered inactive for long durations in case of burst mode data.
 Only one key is used.
 Data may be accidentally erased or overwritten unless special precautions are taken.
Indexed Access
If a file can be sorted on any of its fields, then an index can be assigned to a group of certain
records, and a particular record can be accessed by its index. The index is simply the address of
a record in the file.
With indexed access, searching in a large database becomes very quick and easy, but we need
some extra space in memory to store the index values.
Advantages:
 It combines both sequential and direct access.
 Suitable for sequential access and random access.
 Provides quick access to records.
Disadvantages:
 It uses special software and is expensive
 Extra time is taken to maintain index
 Extra storage for index files
 Expensive hardware is required
Directory structures and management
What is a directory?
A directory can be defined as the listing of the related files on the disk. The directory may store
some or all of the file attributes.
To get the benefit of different file systems on different operating systems, a hard disk can be
divided into a number of partitions of different sizes. The partitions are also called volumes or
minidisks.
Each partition must have at least one directory, in which all the files of the partition can be
listed. A directory entry is maintained for each file in the directory, storing all the information
related to that file.
A directory can be viewed as a file which contains the metadata of a bunch of files.
Every Directory supports a number of common operations on the file:
1. File Creation
2. Search for the file
3. File deletion
4. Renaming the file
5. Traversing Files
6. Listing of files
Logical Structure of a directory
A directory is a container that is used to contain folders and files. It organizes files and folders in
a hierarchical manner.
There are several logical structures of a directory; these are given below.
 Single-level directory –
The single-level directory is the simplest directory structure. In it, all files are contained in the
same directory, which makes it easy to support and understand.
A single-level directory has a significant limitation, however, when the number of files
increases or when the system has more than one user. Since all the files are in the same
directory, they must have unique names; if two users name their datasets test, the unique-name
rule is violated.
Advantages:
 Since it is a single directory, its implementation is very easy.
 If the files are smaller in size, searching will become faster.
 Operations like file creation, searching, deletion and updating are very easy in such a
directory structure.
Disadvantages:
 There may be a chance of name collision, because two files cannot have the same name.
 Searching will become time consuming if the directory is large.
 Files of the same type cannot be grouped together.
Two-level directory
As we have seen, a single-level directory often leads to confusion of file names among
different users. The solution to this problem is to create a separate directory for each user.
In the two-level directory structure, each user has their own user file directory (UFD). The
UFDs have similar structures, but each lists only the files of a single user. The system’s master
file directory (MFD) is searched whenever a new user logs in. The MFD is indexed by
username or account number, and each entry points to the UFD for that user.
Advantages:
 We can give a full path like /User-name/directory-name/.
 Different users can have the same directory name as well as the same file name.
 Searching of files becomes easier due to pathnames and user grouping.
Disadvantages:
 A user is not allowed to share files with other users.
 Still, it is not very scalable; two files of the same type cannot be grouped together for
the same user.
Tree-structured directory
Once we have seen a two-level directory as a tree of height 2, the natural generalization is to
extend the directory structure to a tree of arbitrary height.
This generalization allows users to create their own subdirectories and to organize their files
accordingly.
A tree structure is the most common directory structure. The tree has a root directory, and
every file in the system has a unique path.
Advantages:
 Very general, since a full pathname can be given.
 Very scalable; the probability of name collision is less.
 Searching becomes very easy; we can use both absolute and relative paths.
Disadvantages:
 Every file does not fit into the hierarchical model; files may need to be saved in
multiple directories.
 We cannot share files.
 It is inefficient, because accessing a file may require traversing multiple directories.
Acyclic graph directory –
An acyclic graph is a graph with no cycle and allows us to share subdirectories and files. The
same file or subdirectories may be in two different directories. It is a natural generalization of the
tree-structured directory.
It is used in situations such as when two programmers are working on a joint project and need
to access each other’s files. The associated files are stored in a subdirectory, separating them
from other projects and files of other programmers. Since they are working on a joint project,
they want that subdirectory to appear in both of their own directories; the common
subdirectory should be shared, and acyclic-graph directories allow exactly this.
It is important to note that a shared file is not the same as a copy of the file. If any programmer
makes changes in the shared subdirectory, the changes will be reflected in both directories.
Advantages:
 We can share files.
 Searching is easy due to the multiple available paths.
Disadvantages:
 Files are shared via linking, and deleting a shared file may create problems:
 If the link is a soft link, then after deleting the file we are left with a dangling pointer.
 In the case of a hard link, to delete the file we have to delete all the references
associated with it.
General graph directory structure
In the general graph directory structure, cycles are allowed within the directory structure, and
multiple directories can be derived from more than one parent directory.
The main problem with this kind of directory structure is calculating the total size or space
that has been taken by the files and directories.
Advantages:
 It allows cycles.
 It is more flexible than other directory structures.
Disadvantages:
 It is more costly than the others.
 It needs garbage collection.
Remote File System
Files can be shared across the network via a variety of methods:
 Using FTP, i.e., the file transfer protocol, to transfer files from one computer to
another.
 Using a distributed file system (DFS), in which remote directories are visible from the
local machine.
 Using a remote file system (RFS), in which the arrival of networks has allowed
communication between remote computers. These networks allow various hardware
and software resources to be shared throughout the world.
Client-server model
 Remote file system allows a computer to mount one or more file systems from one or
more remote machines.
 In this case, the machine containing the files is the server, and the machine wanting
access to the files is the client.
 The server specifies which file can be accessed by a particular client.
 Files are usually specified on a partition or subdirectory level.
 A server can serve multiple clients, and a client can access multiple servers,
depending on the implementation details of a given client server facility.
 Once the remote file system is mounted, file operation requests are sent on behalf of
the user to the server, over the network, using the DFS protocol.
Distributed Information System
 In order to manage client/server services easily, a system called a distributed
information system or distributed naming service exists that provides unified access to
the information needed for remote computing.
 The domain name system provides host-name-to-network-address translations for the
entire internet.
 Sun Microsystems developed Yellow Pages (later renamed Network Information
Service), which centralized the storage of user names, host names, printer information
etc.
 Nowadays the Lightweight Directory Access Protocol (LDAP) is being used. It is a
secure distributed naming mechanism.
Protection in File System
In computer systems, a lot of user information is stored; the objective of the operating system
is to keep the users’ data safe from improper access to the system. Protection can be provided
in a number of ways. For a single laptop system, we might provide protection by locking the
computer in a desk drawer or file cabinet. For multi-user systems, different mechanisms are
used for protection.
Types of Access:
Files which are directly accessible to other users need protection, while files which are not
accessible to other users do not require any kind of protection. The protection mechanism
provides controlled access by limiting the types of access that can be made to a file. Access
can be granted or denied to any user depending on several factors, one of which is the type of
access required. Several different types of operations can be controlled:
 Read
Reading from a file.
 Write
Writing or rewriting the file.
 Execute
Loading the file into memory; after loading, execution of the file begins.
 Append
Writing new information to an already existing file; writing can occur only at the
end of the existing file.
 Delete
Deleting the file which is of no use and using its space for another data.
 List
List the name and attributes of the file.
Operations like renaming, editing the existing file and copying can also be controlled. There
are many protection mechanisms. Each mechanism has its own advantages and disadvantages
and must be appropriate for the intended application.
Access Control
 It is the most common approach to protect files and directories, depending upon the
identity of the users.
 Access control limits who can access files and how they can access them.
 Typically, users or groups of users are granted certain access rights to a file (a
minimal sketch of such a check follows the list below).
Following are the different operations that are possible on a file
 Read – Reading information contained in the file.
 Write – Writing new information into a file at any point, or overwriting existing
information in a file.
 Append – Writing new information only at the end of the file. However, modifying or
deleting any content is not allowed under this operation.
 Delete – Deleting a file from the file system and releasing its storage space for use by
other files.
 List – Reading or listing the name and attributes of a file or directory.
 Execute – Loading the file into memory and executing it.
 Rename – Changing the name of a file.
 Update – Includes modifying, deleting and adding to the file.
 Copy – Copying the contents of the file to another file.
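As a toy illustration of access control (plain Python; the users, rights and file names below are
made up for the sketch), an access-control list maps each user to the set of rights they hold on
a file, and every operation is checked against it:

    # Hypothetical ACL: file -> user -> set of granted rights
    acl = {
        'report.txt': {'alice': {'read', 'write', 'append'},
                       'bob':   {'read'}},
    }

    def check_access(user, filename, right):
        # Grant the operation only if the ACL lists this right for this user.
        return right in acl.get(filename, {}).get(user, set())

    print(check_access('bob', 'report.txt', 'read'))    # True
    print(check_access('bob', 'report.txt', 'write'))   # False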
Password Protection
 Another approach to protect a file from unauthorized access is to associate a password
with each file.
 The owner of a file can control its access by assigning a password. Thus, only those
users who know the password can access a particular file.
UNIT-IV
Introduction to Distributed Operating system
Distributed Operating System
This type of operating system is a recent advancement in the world of computer technology and
is being widely accepted all over the world, at a great pace. Various autonomous interconnected
computers communicate with each other over a shared communication network. The
independent systems possess their own memory units and CPUs, and are referred to as loosely
coupled systems or distributed systems. These systems’ processors differ in size and function.
The major benefit of working with this type of operating system is that a user can access files
or software which are not actually present on his own system but on some other system
connected within the network, i.e., remote access is enabled within the devices connected to
that network.
Advantages of Distributed Operating System:
 Failure of one will not affect the other network communication, as all systems are
independent from each other
 Electronic mail increases the data exchange speed
 Since resources are being shared, computation is highly fast and durable
 Load on host computer reduces
 These systems are easily scalable as many systems can be easily added to the network
 Delay in data processing reduces
Disadvantages of Distributed Operating System:
 Failure of the main network will stop the entire communication
 The languages used to establish distributed systems are not well defined yet
 These types of systems are not readily available, as they are very expensive.
Moreover, the underlying software is highly complex and not well understood yet
Features of Distributed Operating System
There are various important goals that must be met to build a distributed system worth the
effort. A distributed system should easily connect users to resources; it should hide the fact
that resources are distributed across a network; it must be open; and it must be scalable.
1. Connecting Users and Resources:
The main goal of a distributed system is to make it easy for users to access remote
resources, and to share them with other users in a controlled manner. Resources can
be virtually anything, typical examples of resources are printers, storage facilities,
data, files, web pages, and networks. There are many reasons for sharing resources.
One reason is economics.
2. Transparency:
An important goal of a distributed system is to hide the fact that its processes and
resources are physically distributed across multiple computers. A distributed system
that is capable of presenting itself to users and applications as if it were only a
single computer system is called transparent.
The concept of transparency can be applied to many aspects of a distributed system as shown in
table.
Different Forms of Transparency –

S.No.  Transparency   Description
(1)    Access         Hide differences in data representation and how a resource is accessed.
(2)    Location       Hide where a resource is located.
(3)    Migration      Hide that a resource may move to another location.
(4)    Relocation     Hide that a resource may be moved while in use.
(5)    Replication    Hide that a resource is replicated.
(6)    Concurrency    Hide that a resource may be shared by several users.
(7)    Failure        Hide the failure and recovery of a resource.
(8)    Persistence    Hide whether a resource is in memory or on disk.
3. Openness:
Another important goal of distributed systems is openness. An open distributed
system is a system that offers services according to standard rules that describe the
syntax and semantics of those services. For instance, in computer networks, standard
rules govern the format, content, and meaning of messages sent and received. Such
rules are formalized in protocols. In distributed systems, services are typically
specified through interfaces, often written in an interface definition language (IDL).
Interface definitions written in an IDL almost always capture only the syntax of
services: they accurately specify the names of the functions that are available,
together with the types of their parameters, return values, possible exceptions that
can be raised, and so on.
4. Scalability:
The clear trend in distributed systems is towards larger systems. This observation
has implications for distributed file system design. Algorithms that work well for
systems with 100 machines may work poorly for systems with 1,000 machines and
not at all for systems with 10,000 machines. For starters, centralized algorithms do
not scale well. If opening a file requires contacting a single centralized server to
record the fact that the file is open, then that server will eventually become a
bottleneck as the system grows.
5. Reliability:
The original goal of building distributed systems was to make them more reliable
than single-processor systems. The idea is that if one machine goes down, some
other machine takes over the job. In other words, theoretically the reliability of the
overall system can be the Boolean OR of the component reliabilities. For example,
with four file servers, each with a 0.95 chance of being up at any instant, the
probability of all four being down simultaneously is 0.05⁴ ≈ 0.000006, so the
probability of at least one being available is (1 − 0.000006) = 0.999994, far better
than that of any individual server.
6. Performance:
Building a transparent, flexible, reliable distributed system is useless if it is slow as
molasses. In particular, running an application on a distributed system should not be
appreciably worse than running the same application on a single processor. Various
performance metrics can be used. Response time is one, but so are throughput,
system utilization, and the amount of network capacity consumed. Furthermore, the
results of any benchmark are often highly dependent on the nature of the benchmark.
A benchmark that involves a large number of independent, highly CPU-bound
computations gives radically different results from a benchmark that consists of
scanning a single large file for some pattern.
Architecture of Distributed Operating System
In a distributed system, the following things happen:
 All hardware and software components are located remotely; they coordinate and
communicate with each other by passing messages.
 Resource sharing is the most important aspect of a distributed system; resources are
managed by the servers, and clients use these resources.
A distributed operating system runs on a number of independent sites that are connected
through a communication network, but to users it feels like a single virtual machine running
its own operating system.
The figure below gives the architecture of a distributed system. It shows workstations,
terminals and different servers connected to a communication network, sharing services
together. Each computer node has its own memory. Real-life examples of distributed systems
are the Internet, intranets, mobile computing, etc.
Fig: Architecture of distributed system
Issues of Distributed System
The distributed information system is defined as “a number of interdependent computers linked
by a network for sharing information among them”. A distributed information system consists of
multiple autonomous computers that communicate or exchange information through a computer
network.
Design issues of distributed system –
1. Heterogeneity: Heterogeneity applies to the network, computer hardware,
operating systems and the implementations of different developers. A key
component of a heterogeneous distributed client-server environment is middleware.
Middleware is a set of services that enables applications and end-users to interact
with each other across a heterogeneous distributed system.
2. Openness: The openness of the distributed system is determined primarily by the
degree to which new resource sharing services can be made available to the users.
Open systems are characterized by the fact that their key interfaces are published. It is
based on a uniform communication mechanism and published interface for access to
shared resources. It can be constructed from heterogeneous hardware and software.
3. Scalability: The system should remain efficient even with a significant increase
in the number of users and resources connected.
4. Security: Security of an information system has three components: confidentiality,
integrity and availability. Encryption protects shared resources and keeps sensitive
information secret when transmitted.
5. Failure Handling: When faults occur in hardware or in a software program, they
may produce incorrect results or may stop before completing the intended
computation, so corrective measures should be implemented to handle such cases.
Failure handling is difficult in distributed systems because failure is partial, i.e.,
some components fail while others continue to function.
6. Concurrency: There is a possibility that several clients will attempt to access a
shared resource at the same time. Multiple users make requests on the same
resources, i.e., read, write and update. Each resource must be safe in a concurrent
environment. Any object that represents a shared resource in a distributed system
must ensure that it operates correctly in a concurrent environment.
7. Transparency: Transparency ensures that the distributed system is perceived as a
single entity by the users or the application programmers, rather than as a collection
of cooperating autonomous systems. The user should be unaware of where the
services are located, and transferring from a local machine to a remote one should
be transparent.
Communication & Synchronization
A distributed system is a collection of computers connected via a high-speed communication
network. In a distributed system, the hardware and software components communicate and
coordinate their actions by message passing. Each node in a distributed system can share its
resources with other nodes, so proper allocation of resources is needed to preserve the state of
resources and to help coordinate the several processes. To resolve such conflicts,
synchronization is used. Synchronization in distributed systems is achieved via clocks.
Physical clocks are used to adjust the time of nodes. Each node in the system can share its
local time with other nodes in the system. The time is set based on UTC (Coordinated
Universal Time). UTC is used as the reference time clock for the nodes in the system.
Clock synchronization can be achieved in 2 ways: External and Internal Clock
Synchronization.
1. External clock synchronization is the one in which an external reference clock is
present. It is used as a reference and the nodes in the system can set and adjust their
time accordingly.
2. Internal clock synchronization is the one in which each node shares its time with
other nodes and all the nodes set and adjust their times accordingly.
There are 2 types of clock synchronization algorithms: Centralized and Distributed.
1. Centralized is the one in which a time server is used as a reference. The single time
server propagates its time to the nodes, and all the nodes adjust their time
accordingly. It is dependent on a single time server, so if that node fails, the whole
system will lose synchronization. Examples of centralized algorithms are the
Berkeley Algorithm, Passive Time Server, Active Time Server etc.
2. Distributed is the one in which there is no centralized time server present. Instead,
the nodes adjust their time by using their local time and then taking the average of
the differences of time with the other nodes. Distributed algorithms overcome the
issues of centralized algorithms, such as scalability and single point of failure.
Examples of distributed algorithms are the Global Averaging Algorithm, Localized
Averaging Algorithm, NTP (Network Time Protocol) etc.
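As a rough illustration of the averaging idea (plain Python; the clock values are made up, and
this is only a sketch of the general approach, not any specific protocol):

    def average_clock_adjustment(local_time, peer_times):
        # The node averages the differences between its clock and its peers',
        # including its own zero offset, then shifts its clock by that average.
        diffs = [peer - local_time for peer in peer_times]
        return local_time + sum(diffs) / (len(diffs) + 1)

    # A node at t=100 hears peers reporting 104, 98 and 106.
    print(average_clock_adjustment(100, [104, 98, 106]))   # 102.0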
Introduction to Multiprocessor Operating System
Multiprocessing System
 A multiprocessor system is a system that contains two or more processors or CPUs
and has the ability to execute several programs simultaneously. Hence the name
multiprocessor.
 In such a system, multiple processors share the clock, bus, memory and peripheral
devices.
 A multiprocessor system is also known as parallel system.
 In such a system, instructions from different and independent programs can be
processed at the same instant of time by different CPU’s.
 In such a system, the CPUs can also simultaneously execute different instructions
from the same program.
Types of Multiprocessors
There are mainly two types of multiprocessors i.e. symmetric and asymmetric multiprocessors.
Details about them are as follows −
Symmetric Multiprocessors
In these types of systems, each processor contains a similar copy of the operating system and they
all communicate with each other. All the processors are in a peer to peer relationship i.e. no
master – slave relationship exists between them.
An example of the symmetric multiprocessing system is the Encore version of UNIX for the
Multimax Computer.
Asymmetric Multiprocessors
In asymmetric systems, each processor is given a predefined task. There is a master processor
that gives instructions to all the other processors; an asymmetric multiprocessor system thus
contains a master-slave relationship.
Asymmetric multiprocessors were the only type available before symmetric multiprocessors
were created. Even now, they are the cheaper option.
Advantages of Multiprocessor Systems
There are multiple advantages to multiprocessor systems. Some of these are −
More reliable Systems
In a multiprocessor system, even if one processor fails, the system will not halt. This ability to
continue working despite hardware failure is known as graceful degradation. For example: If
there are 5 processors in a multiprocessor system and one of them fails, the other 4 processors
keep working. So the system only becomes slower and does not grind to a halt.
Enhanced Throughput
If multiple processors are working in tandem, then the throughput of the system increases, i.e.,
the number of processes executed per unit time increases. If there are N processors, then the
throughput increases by a factor of slightly less than N.
More Economic Systems
Multiprocessor systems are cheaper than single processor systems in the long run because they
share the data storage, peripheral devices, power supplies etc. If there are multiple processes that
share data, it is better to schedule them on multiprocessor systems with shared data than have
different computer systems with multiple copies of the data.
Disadvantages of Multiprocessor Systems
There are some disadvantages as well to multiprocessor systems. Some of these are:
Increased Expense
Even though multiprocessor systems are cheaper in the long run than using multiple computer
systems, still they are quite expensive. It is much cheaper to buy a simple single processor system
than a multiprocessor system.
Complicated Operating System Required
There are multiple processors in a multiprocessor system that share peripherals, memory etc.,
so it is much more complicated to schedule processes and allocate resources to them than in
single processor systems. Hence, a more complex and complicated operating system is
required in multiprocessor systems.
Large Main Memory Required
All the processors in the multiprocessor system share the memory. So a much larger pool of
memory is required as compared to single processor systems.
Multiprocessor Operating System Architecture
Multiprocessor systems are divided into the following basic architectures:
1. Symmetric Multiprocessor System (SMP)
2. UMA (Uniform Memory Access)
3. NUMA (Non-Uniform Memory Access)
1. Symmetric Multiprocessor System (SMP)
In this architecture, two or more processors are connected to the same memory and have full
access to input and output devices with the same rights. Only one instance of the operating
system runs, treating all processors equally; no single processor is treated as special. Most
multiprocessors use the SMP architecture. The SMP structure is given below.
Fig: SMP Architecture
SMP is a tightly coupled system in which a number of homogeneous processors run
independently of each other; each processor may run a different program with a different data
set. The figure shows a pool of processors, each with its own cache, sharing the common main
memory as well as common I/O devices. When a CPU wants to read memory, it first checks
whether the bus is idle. If the bus is idle, it puts the address of the word it wants on the bus,
activates certain control signals and waits for memory to put the required word on the bus. If
the bus is busy, the CPU has to wait.
To solve this problem, a cache is used with each processor, so that many reads can be satisfied
locally. There is much less bus traffic, and the system can support more CPUs.
2. Uniform Memory Access (UMA)
In this type of architecture, all processors share a common (uniform) centralized primary
memory, and each CPU has the same memory access time. Such a system is also called a
shared memory multiprocessor (SMM). In the figure below, each processor has a cache at one
or more levels, and also shares a common memory as well as input/output systems.
Fig: UMA architecture
There are three types of UMA:
I) Using a crossbar switch
II) Using a multistage interconnection network
III) Using a bus-based symmetric multiprocessor
I) Using a crossbar switch
In the normal bus-based structure, we can extend the size up to a limit of only about 16 CPUs.
But sometimes we need to extend this limit, so we require a different kind of interconnection
network. One of them is the simple crossbar circuit, which connects 'n' CPUs to 'k' memories.
It is used mostly in telephone switches. Each intersection has a crosspoint, a switch that can be
opened and closed. The crossbar is a non-blocking network. E.g., the Sun Enterprise 10000
uses this technology; it consists of a single cabinet with up to 64 CPUs, and the crossbar switch
is packaged on a circuit board with eight plug-in slots on each side.
Fig: UMA crossbar switch
The figure shows the cross-connection switching of multiple CPUs and memories. Each
crosspoint has a switch which is either open or closed (as in (b) and (c)). The drawback of this
architecture is that 'n' CPUs and 'k' memories require n × k crosspoint switches; with n = k =
1000, that is 1,000,000 switches. The main advantage is that any CPU can access any available
memory with little blocking.
II) Using a multistage interconnection network
To go beyond the roughly 1000-CPU limit of machines like the Sun Enterprise, we need a
more powerful interconnection network. 2 x 2 switches can be used to build up a large
network; an example of this technology is the Omega network. In this network, the wiring
pattern is the perfect shuffle. Each memory is given a label which is used to find the route
through the network. The Omega network is a blocking network: requests arrive in sequence
but cannot always be served concurrently. Conflicts can arise in using a connection or a
switch, in accessing a memory block, or in answering a CPU request. Many techniques are
used to minimize conflicts.
Fig: Multistage interconnection network
In the figure, 8 CPUs are connected to 8 memories using 12 switches laid out in three stages.
In general, n CPUs and n memories require log2 n stages with n/2 switches per stage, giving a
total of (n/2) log2 n switches.
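A quick check of that formula (plain Python; the inputs are illustrative):

    import math

    def omega_switches(n):
        stages = int(math.log2(n))          # log2(n) stages
        return stages, (n // 2) * stages    # n/2 switches per stage

    print(omega_switches(8))     # (3, 12) -- matches the figure above
    print(omega_switches(1024))  # (10, 5120)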
III) Using a bus-based symmetric multiprocessor
Fig: Bus-based symmetric multiprocessor
The simplest multiprocessor system consists of a single bus. Two or more CPUs and one or
more memory modules all use the same bus for communication. If the bus is busy when a CPU
wants to access memory, it must wait. Adding more CPUs results in more waiting. This can be
mitigated to some degree by including processor cache support.
3. NUMA (Non-Uniform Memory Access)
If we want to raise the limit on the number of CPUs, UMA is not the best option, because it
accesses all memory uniformly. To solve this problem we have another technique, called Non-
Uniform Memory Access (NUMA). NUMA machines share a single address space across all
the CPUs, but provide faster access to local memory than to remote memory.
UMA programs can run without change on NUMA machines, but performance may be slower.
Memory access time depends on the location of the memory relative to a processor: a
processor can access its own local memory faster than non-local memory. NUMA is used in
symmetric multiprocessing systems.
SMP is obviously tightly coupled and shares everything among multiple processors using a
single operating system. NUMA has three key properties:
1. Access to remote memory is possible.
2. Accessing remote memory is slower than accessing local memory.
3. Remote access time is not hidden by caching.
The following figure shows the structure of NUMA. In it, each CPU and its memory are
connected to an MMU (memory management unit) via a local bus, and the local memory is
connected to the system bus via the local bus.
Fig: NUMA structure
It is used in SMP. A multiprocessor system without NUMA creates the problem of cache
misses, and only one processor can access the computer memory at a time. To solve this
problem, separate memory is provided for each processor.
There are 2 types of NUMA
1. NC-NUMA:
When the time to access remote memory is not hidden, the system is called NC-NUMA (Non-
Caching Non-Uniform Memory Access). In this type of NUMA, processors have no local
cache, so the cache coherence problem is not present and each memory item lives in a single
location. Because remote memory access is expensive, system software may relocate memory
pages from one block to another to improve performance: a page scanner daemon activates
every few seconds, examines statistics on memory usage, and moves pages from one block to
another. The figure shows the NC-NUMA structure.
2. CC-NUMA:
When coherent caches are present, the system is called CC-NUMA (Cache-Coherent Non-
Uniform Memory Access). CC-NUMA uses a directory-based protocol rather than snooping.
The basic idea is to maintain, for each node in the system, a directory for its RAM blocks: a
database telling in which cache each RAM block is located, and what its state is.
Synchronization & Scheduling
In multiple-processor scheduling, multiple CPUs are available and hence load sharing becomes
possible. However, multiple-processor scheduling is more complex than single-processor
scheduling. When the processors are identical, i.e., homogeneous in terms of their
functionality, we can use any available processor to run any process in the queue.
There are some extra functions which a multiprocessor OS performs:
 Process synchronization
 Resource management
 Scheduling
There are various organizations of multiprocessor operating system:
1. Each CPU has its own OS
In this type of organization there are many central processing units in the system, and each
CPU has its own private operating system; memory is shared among all the processors, and
the input-output systems are also shared. The whole system is connected by a single bus.
2. Master-slave multiprocessor
In this type of multiprocessor model, there is a single data structure which keeps track of the
ready processes. One central processing unit works as the master and the other central
processing units work as slaves: all the processors are handled by the single processor called
the master server. The master server runs the operating system processes and the slave servers
run the user processes. The memory and input-output devices are shared among all the
processors, and all the processors are connected to a common bus. This system is simple and
reduces data sharing, so it is called asymmetric multiprocessing.
3. Symmetric multiprocessor
Symmetric multiprocessing (SMP) is the third model. In this model, there is one copy of the
OS in memory, but any central processing unit can run it. When a system call is made, the
central processing unit on which the system call was made traps to the kernel and processes
the system call. This model balances processes and memory dynamically. Each processor is
self-scheduling: the scheduler for each processor examines the ready queue and selects a
process to execute. It is possible for all the processes to be in a common ready queue, or for
each processor to have its own private ready queue.
There are mainly three sources of contention that can be found in a multiprocessor
operating system.
 Locking system
As the resources are shared in a multiprocessor system, there is a need to protect
them for safe access among the multiple processors. The main purpose of a locking
scheme is to serialize access to resources by the multiple processors (a minimal
lock sketch appears after this list).
 Shared data
When multiple processors access the same data at the same time, there may be a
chance of inconsistency of data, so to protect against this we have to use some
protocol or locking scheme.
Cache coherence
Cache coherence concerns shared resource data that is stored in multiple local caches.
Suppose two clients have a cached copy of a memory block and one client changes it; the
other client could be left with an invalid cache without notification of the change. This kind of
conflict is resolved by maintaining a coherent view of the data.
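As a minimal illustration of serializing access with a lock (plain Python; the shared counter and
thread counts are made up for the sketch):

    import threading

    counter = 0
    lock = threading.Lock()

    def increment(n):
        global counter
        for _ in range(n):
            with lock:            # only one thread may update the counter at a time
                counter += 1

    threads = [threading.Thread(target=increment, args=(100000,)) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter)                # always 400000 when the lock is held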
Introduction to Real-Time Operating System
Real-Time Operating System
These types of OSs serve real-time systems. The time interval required to process and respond to
inputs is very small. This time interval is called response time.
Real-time systems are used when there are time requirements that are very strict like missile
systems, air traffic control systems, robots, etc.
Two types of Real-Time Operating System are as follows:
 Hard Real-Time Systems:
These OSs are meant for applications where time constraints are very strict and even
the shortest possible delay is not acceptable. These systems are built for saving lives,
like automatic parachutes or airbags, which are required to be readily available in
case of an accident. Virtual memory is rarely found in these systems.
 Soft Real-Time Systems:
These OSs are for applications where the time constraint is less strict.

Advantages of RTOS:
 Maximum Consumption: Maximum utilization of devices and the system, thus more output from all the resources.
 Task Shifting: The time needed for shifting between tasks in these systems is very small. For example, older systems take about 10 microseconds to shift from one task to another, while the latest systems take about 3 microseconds.
 Focus on Application: The system focuses on running applications and gives less importance to applications waiting in the queue.
 Real-time operating system in the embedded system: Since the size of programs is small, an RTOS can also be used in embedded systems, such as in transport and others.
 Error Free: These types of systems are error-free.
 Memory Allocation: Memory allocation is best managed in these types of systems.
Disadvantages of RTOS:
 Limited Tasks: Very few tasks run at the same time, and the system concentrates on only a few applications to avoid errors.
 Use heavy system resources: These systems sometimes use a lot of system resources, which are expensive as well.
 Complex Algorithms: The algorithms are very complex and difficult for the designer to write.
 Device driver and interrupt signals: An RTOS needs specific device drivers and interrupt signals so that it can respond to interrupts as quickly as possible.
 Thread Priority: It is not good to set thread priority, as these systems rarely switch tasks.
Examples of real-time systems are: scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, air traffic control systems, etc.
Characteristics of Real-time System:
Following are some of the characteristics of real-time systems:
1. Time Constraints:
Time constraints in real-time systems refer to the time interval allotted for the response of the ongoing program. This deadline means that the task must be completed within that interval. The real-time system is responsible for completing all tasks within their time intervals.
2. Correctness:
Correctness is one of the prominent aspects of real-time systems. A real-time system must produce the correct result within the given time interval; a result obtained outside that interval is not considered correct. In real-time systems, correctness means obtaining the correct result within the time constraint.
3. Embedded:
Nearly all real-time systems are embedded nowadays. An embedded system is a combination of hardware and software designed for a specific purpose. Real-time systems collect data from the environment and pass it to other components of the system for processing.
4. Safety:
Safety is necessary for any system, but real-time systems provide critical safety. They can run for long periods without failure, and when a failure does occur they recover quickly, without harming data and information.
5. Concurrency:
Real-time systems are concurrent, meaning they can respond to several processes at a time. Several different tasks go on within the system, and it responds to each of them within short intervals. This makes real-time systems concurrent systems.
6. Distributed:
In many real-time systems, the components are connected in a distributed way, with different components at different geographical locations. All the operations of such real-time systems are therefore carried out in a distributed fashion.
7. Stability:
Even when the load is very heavy, real-time systems respond within the time constraint; they do not delay the results of tasks even when several tasks are going on at the same time. This brings stability to real-time systems.

Scheduling in Real Time Systems


Real-time systems are systems that carry real-time tasks. These tasks need to be performed immediately, with a certain degree of urgency; in particular, they are related to controlling certain events or reacting to them. Real-time tasks can be classified as hard real-time tasks and soft real-time tasks.
A hard real-time task must be performed by a specified time; missing it could lead to huge losses. In soft real-time tasks, a specified deadline can be missed, because the task can be rescheduled or completed after the specified time.
In real-time systems, the scheduler is considered the most important component and is typically a short-term task scheduler. The main focus of this scheduler is to reduce the response time associated with each process rather than handling deadlines.
If a simple time-slice-based preemptive scheduler is used, a real-time task needs to wait until the currently running task's time slice completes. In the case of a non-preemptive scheduler, even if the highest priority is allocated to the task, it must wait until the current task completes; that task may be slow or of lower priority, leading to a longer wait.
A better approach is designed by combining both preemptive and non-preemptive scheduling. This can be done by introducing time-based interrupts into priority-based systems: the currently running process is interrupted at a fixed time interval, and if a higher-priority process is present in the ready queue, it is executed by preempting the current process, as the toy simulation below illustrates.
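The following C sketch is an illustration only, a toy simulation rather than a real kernel scheduler. Each loop iteration stands for one timer interrupt, at which the scheduler re-examines the ready tasks and preempts the running one if a higher-priority task has become ready. All task names, priorities, and times below are invented for the example.

#include <stdio.h>

/* Toy simulation of priority scheduling driven by timer interrupts. */
typedef struct {
    const char *name;
    int priority;   /* higher number = higher priority */
    int arrival;    /* tick at which the task becomes ready */
    int remaining;  /* remaining execution time, in ticks */
} task_t;

int main(void)
{
    task_t tasks[] = {
        { "logger",  1, 0, 3 },
        { "control", 3, 1, 2 },  /* arrives later, but preempts logger */
    };
    int n = 2, left = 3 + 2;     /* total ticks of work outstanding */

    /* Each iteration models one timer interrupt: the scheduler picks the
     * highest-priority ready task, so a newly arrived higher-priority task
     * preempts the one that was running in the previous tick. */
    for (int tick = 0; left > 0; tick++) {
        task_t *cur = NULL;
        for (int i = 0; i < n; i++)
            if (tasks[i].arrival <= tick && tasks[i].remaining > 0 &&
                (cur == NULL || tasks[i].priority > cur->priority))
                cur = &tasks[i];
        if (cur == NULL)
            continue;            /* CPU idle this tick */
        printf("tick %d: running %s\n", tick, cur->name);
        cur->remaining--;        /* run for one tick, then re-schedule */
        left--;
    }
    return 0;
}

Running it shows "logger" executing at tick 0, being preempted by "control" at ticks 1 and 2, and then resuming at ticks 3 and 4: the preemption happens exactly at a timer-interrupt boundary, as described above.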
Based on schedulability, implementation (static or dynamic), and the result (self or dependent) of analysis, the scheduling algorithms are classified as follows.

1. Static table-driven approaches:

These algorithms perform a static analysis of possible schedules and capture the ones that are advantageous. The result is a schedule that points out, at run time, the task with which execution must start.

2. Static priority-driven preemptive approaches:

Similar to the first approach, these algorithms also use a static analysis of the schedule. The difference is that instead of selecting a particular schedule, the analysis is used to assign priorities among the various tasks for preemptive scheduling.

3. Dynamic planning-based approaches:

Here, feasible schedules are identified dynamically (at run time). The analysis works over a certain fixed time interval, and a process is executed if and only if it satisfies its time constraint.

4. Dynamic best effort approaches:

These approaches consider deadlines rather than pre-computed feasible schedules, so a task is simply aborted once its deadline is reached. This approach is widely used in most real-time systems (a minimal sketch follows this list).
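As a minimal sketch of the dynamic best effort idea, the C program below runs, at each tick, the ready task with the earliest deadline and aborts any task whose deadline has already passed, with no feasibility analysis beforehand. Earliest-deadline-first selection is a common choice here but is an assumption of this sketch, and the task names, deadlines, and execution times are invented for illustration.

#include <stdio.h>

/* Toy "dynamic best effort" scheduler: run the ready task with the
 * earliest deadline; abort any task whose deadline has passed. */
typedef struct {
    const char *name;
    int deadline;   /* absolute deadline, in ticks (illustrative) */
    int remaining;  /* remaining execution time, in ticks */
} task_t;

int main(void)
{
    task_t tasks[] = {
        { "sample_sensor",  4, 3 },
        { "update_display", 9, 3 },
        { "log_data",       3, 2 },
    };
    int n = 3;

    for (int tick = 0; ; tick++) {
        task_t *next = NULL;
        for (int i = 0; i < n; i++) {
            if (tasks[i].remaining == 0)
                continue;
            if (tick >= tasks[i].deadline) {
                /* Best effort: the deadline was missed, so abort. */
                printf("tick %d: aborting %s (deadline missed)\n",
                       tick, tasks[i].name);
                tasks[i].remaining = 0;
                continue;
            }
            if (next == NULL || tasks[i].deadline < next->deadline)
                next = &tasks[i];
        }
        if (next == NULL)
            break;               /* no runnable tasks left */
        printf("tick %d: running %s\n", tick, next->name);
        next->remaining--;
    }
    return 0;
}

With these numbers, "log_data" and "sample_sensor" run first because their deadlines are nearest; "sample_sensor" cannot finish by tick 4 and is aborted, after which the remaining time goes to "update_display". No schedule was checked for feasibility in advance, which is exactly what distinguishes best effort from the planning-based approach.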

Case study of Linux operating system


 Unix V6, released in 1975, became very popular. Unix V6 was free and was distributed with its source code.
 In 1983, AT&T released Unix System V, which was a commercial version.
 Meanwhile, the University of California at Berkeley started the development of its own version of Unix. Berkeley was also involved in the inclusion of the Transmission Control Protocol/Internet Protocol (TCP/IP) networking protocol.
 The following were the major milestones in UNIX history in the early 1980s:
 AT&T was developing its System V Unix.
 Berkeley took the initiative with its own Unix, BSD (Berkeley Software Distribution) Unix.
 Sun Microsystems developed its own BSD-based Unix called SunOS, which was later renamed Sun Solaris.
 Microsoft and the Santa Cruz Operation (SCO) were involved in another version of UNIX called XENIX.
 Hewlett-Packard developed HP-UX for its workstations.
 DEC released ULTRIX.
 In 1986, IBM developed AIX (Advanced Interactive eXecutive).
What is Linux?
Just like Windows XP, Windows 7, Windows 8, and Mac OS X, Linux is an operating system.
An operating system is software that manages all of the hardware resources associated with your
desktop or laptop. To put it simply – the operating system manages the communication between
your software and your hardware. Without the operating system (often referred to as the “OS”),
the software wouldn’t function.
The OS comprises a number of pieces:
The Bootloader: The software that manages the boot process of your computer. For most users, this will simply be a splash screen that pops up and eventually goes away as the system boots into the operating system.
The kernel: This is the one piece of the whole that is actually called “Linux”. The kernel is the core of the system and manages the CPU, memory, and peripheral devices. The kernel is the “lowest” level of the OS.
Daemons: These are background services (printing, sound, scheduling, etc) that either start up
during boot, or after you log into the desktop.
The Shell: You’ve probably heard mention of the Linux command line. This is the shell – a command processor that allows you to control the computer via commands typed into a text interface. This is what, at one time, scared people away from Linux the most (assuming they had to learn a seemingly archaic command-line structure to make Linux work). This is no longer the case: with modern desktop Linux, there is no need to ever touch the command line. (A minimal sketch of how a shell runs a command follows this list.)
Graphical Server: This is the sub-system that displays the graphics on your monitor. It is
commonly referred to as the X server or just “X”.
Desktop Environment: This is the piece of the puzzle that the users actually interact with. There
are many desktop environments to choose from (Unity, GNOME, Cinnamon, Enlightenment,
KDE, XFCE, etc). Each desktop environment includes built-in applications (such as file
managers, configuration tools, web browsers, games, etc).
Applications: Desktop environments do not offer the full array of apps. Just like Windows and
Mac, Linux offers thousands upon thousands of high-quality software titles that can be easily
found and installed. Most modern Linux distributions (more on this in a moment) include App
Store-like tools that centralize and simplify application installation. For example, Ubuntu Linux has the Ubuntu Software Center, which allows you to quickly search among the thousands of apps and install them from one centralized location.
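To make the shell's role concrete, here is a minimal sketch in C of the classic pattern a Unix shell uses to run one command: fork a child process, exec the requested program in the child, and wait in the parent until the command finishes. A real shell such as bash also parses the command line and handles pipes, redirection, and job control; the fixed "ls -l" command below is just an example for this sketch.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    /* The command to run; a real shell would parse this from user input. */
    char *argv[] = { "ls", "-l", NULL };

    pid_t pid = fork();          /* create a child process */
    if (pid < 0) {
        perror("fork");
        exit(1);
    }
    if (pid == 0) {
        /* Child: replace this process image with the command. */
        execvp(argv[0], argv);
        perror("execvp");        /* only reached if exec fails */
        exit(1);
    }
    /* Parent (the "shell"): wait for the command to complete. */
    int status;
    waitpid(pid, &status, 0);
    printf("command exited with status %d\n", WEXITSTATUS(status));
    return 0;
}

Every command you type at a Linux prompt is handled in essentially this way: the shell itself stays alive, and each command runs in its own short-lived child process.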
Linux is also distributed under an open source license.
Open source follows the following key philosophies:
 The freedom to run the program, for any purpose.
 The freedom to study how the program works, and change it to make it do what you wish.
 The freedom to redistribute copies so you can help your neighbor.
 The freedom to distribute copies of your modified versions to others.
What is a “distribution”?
Linux has a number of different versions to suit nearly any type of user. From new users to hard-
core users, you’ll find a “flavor” of Linux to match your needs. These versions are called
distributions (or, in the short form, “distros.”) Nearly every distribution of Linux can be
downloaded for free, burned onto disk (or USB thumb drive), and installed (on as many machines
as you like).
The most popular Linux distributions are:
 Ubuntu Linux
 Linux Mint
 Arch Linux
 Deepin
 Fedora
 Debian
 openSUSE
Components of Linux System
The Linux operating system has primarily three components:
Kernel − The kernel is the core part of Linux. It is responsible for all major activities of this operating system. It consists of various modules and interacts directly with the underlying hardware. The kernel provides the required abstraction to hide low-level hardware details from system and application programs.
System Library − System libraries are special functions or programs through which application programs or system utilities access the kernel’s features. These libraries implement most of the functionalities of the operating system and do not require the kernel module’s code access rights (see the sketch after this list).
System Utility − System utility programs are responsible for specialized, individual-level tasks.
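As a small illustration of this split (a minimal sketch, not a complete picture of the library layer), the C program below calls the libc function write(), a system-library wrapper that in turn issues the kernel's write system call. The application reaches a kernel feature through the library without needing kernel-level access rights itself.

#include <unistd.h>
#include <string.h>

int main(void)
{
    const char *msg = "hello from a system call\n";
    /* write() is the system-library wrapper; it traps into the kernel's
     * write system call. File descriptor 1 is standard output. */
    if (write(1, msg, strlen(msg)) < 0)
        return 1;
    return 0;
}

Higher-level library routines such as printf() sit one layer further up: they buffer and format data in user space and eventually call the same write() wrapper underneath.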
Basic Features
Following are some of the important features of Linux Operating System.
Portable − Portability means the software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on any kind of hardware platform.
Open Source − Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.
Multi-User − Linux is a multi-user system, meaning multiple users can access system resources such as memory, RAM, and application programs at the same time.
Multiprogramming − Linux is a multiprogramming system, meaning multiple applications can run at the same time.
Hierarchical File System − Linux provides a standard file structure in which system files and user files are arranged.
Shell − Linux provides a special interpreter program that can be used to execute commands of the operating system. It can be used to perform various types of operations, call application programs, etc.
Security − Linux provides user security using authentication features such as password protection, controlled access to specific files, and encryption of data.
Architecture
The architecture of a Linux System consists of the following layers −
Hardware layer − Hardware consists of all peripheral devices (RAM, HDD, CPU, etc.).
Kernel − The core component of the operating system; it interacts directly with the hardware and provides low-level services to the upper-layer components.
Shell − An interface to the kernel that hides the complexity of the kernel’s functions from users. The shell takes commands from the user and executes the kernel’s functions.
Utilities − Utility programs that provide the user with most of the functionalities of an operating system.
