Lecture note on operating system_040632

The document provides an introduction to operating systems, detailing their definition, functions, goals, and types. It explains the roles of an operating system as an interface between users and hardware, as well as its responsibilities in resource management, memory management, and process scheduling. Additionally, it outlines the evolution of operating systems through generations and lists examples of popular operating systems.

INTRODUCTION TO OPERATING SYSTEM

OPERATING SYSTEM 1
CSC024
Table of Contents
OVERVIEW OF OPERATING SYSTEM
    Introduction
    Goals and objectives of the operating system
    Views of operating system
    Functions of the operating system
    Services provided by the operating systems
    Examples of operating systems
GENERATIONS AND HISTORY OF OPERATING SYSTEM
    Generations of Operating System
    The evolution of operating systems
TYPES OF OPERATING SYSTEMS
INTER-PROCESS COMMUNICATION
    The need for IPC
    IPC Methods
    Applications of IPC
PROCESS SCHEDULING
    Objectives of Scheduling
    Criteria for scheduling
    Scheduling Queues
    Schedulers
    Context switching
    Co-operating processes
INTERRUPTS
    Interrupt request
    TYPES OF INTERRUPTS
    Exceptions
    Classifications of Exception
THE KERNEL
    Difference between Operating System and Kernel
    The Basic Responsibilities of the Kernel
    Kernel Design Decisions
    The Kernel Design Approaches

OVERVIEW OF OPERATING SYSTEM

Introduction
An Operating System (OS) can be defined as a set of computer programs that manage
the hardware and software resources of a computer. It is the core piece of system software and
is closely tied to the computer architecture. The operating system is a program that serves
as an interface to coordinate the different resources of the computer. An operating system processes
raw system and user input and responds by allocating and managing tasks and internal system
resources as a service to users and programs of the system. In other words, an OS can be
defined as a set of programs implemented either in software or firmware (hardwired
instructions on chips, usually in ROM) or both, that makes the hardware usable.
At the foundation of all system software, an operating system performs basic tasks such
as controlling and allocating memory, prioritizing system requests, controlling input and output
devices, facilitating networking and managing file systems. Most operating systems come with
an application that provides an interface to the OS-managed resources. These applications were
traditionally command-line interpreters, but more recently have been implemented as graphical
user interfaces (GUIs) for ease of operation. Operating systems themselves have no user
interface; the direct user of an OS is an application, not a person. The
operating system forms a platform for other system software and for application software.
Windows, Linux, and Mac OS are some of the most popular OS's.
An operating system is a program that acts as an interface between the user and the
computer hardware and controls the execution of all kinds of programs. It is an intermediary
between users and computer hardware. It provides users with an environment in which they can
execute programs conveniently and efficiently. In technical terms, it is the software that
manages the hardware. The operating system controls the allocation of resources and services such
as memory, processors, devices and information.
[Figure: Diagram illustrating the Operating System]

Goals and objectives of the operating system


Operating Systems can be described by what they do. That is, by their functions, goals
and objectives.

1. Convenience for the User


An OS makes the computer convenient to use. Without an operating system, users of a
computer system would need to write machine-level programs in order to manipulate the
hardware. With an operating system, users can easily and conveniently use the computer
without the stress of directly programming the hardware. The OS provides a convenient
interface for using the computer system.

2. Efficiency
An OS allows computer system resources to be used in an efficient manner. This is particularly
important for large shared multi-user systems, which are usually expensive. In the past,
efficiency (i.e. optimal use of the computer resources) was often considered more
important than convenience.

3. Evolutionary Capabilities
Ability to evolve also happens to be one of the goals of the OS. An OS should be constructed in
such a way that it permits the effective development, testing and introduction of new system
functions without interfering with its service.

Views of operating system


The way an operating system is viewed depends largely on the perspective of the
user viewing it. The operating system can be viewed in many ways, but only two are discussed here:
1. OS as a User/Computer Interface
A computer system can be viewed as a layered or hierarchical structure consisting of the
hardware, operating system, utilities, application programs and users. The users of application
programs are called the end-users and are generally not concerned with the computer’s
architecture. The end-user views the computer system in terms of an application. The
application is developed by the application programmer who uses a programming language and
a language translator. A set of programs called the utilities is provided to assist the programmer
in program creation, file management and the control of Input/Output (I/O) devices. The most
important system program, the operating system, masks the details of the hardware from the
programmer and provides a convenient interface for using the system. It acts as a mediator,
making it easier for the programmer and for application programs to access and use the
available services and facilities.

2. Operating System as a Resource Manager


A computer system has a set of resources for the movement, storage and processing of data.
The OS is responsible for managing these resources. Note that resources include CPU, file
storage space, data, programs, memory space, I/O devices, etc. The OS is like any other
computer program in that it provides instructions for the processor. The key difference is in the
purpose of the program. The OS directs the processor in the use of the other system resources
and in the timing of its execution of other programs. The processor, in order to do any of these
things, must cease execution of the OS program to execute other programs. Thus, the OS
relinquishes control long enough to prepare the processor to do the next piece of work. The
portion of the OS that is always in main memory is called the kernel or nucleus and it contains
the most frequently used functions in the OS. The remainder of the main memory contains
other user programs and data. The allocation of this resource (i.e. main memory) is controlled
jointly by the OS and the memory management hardware in the processor.

In summary, an OS is designed to serve two basic purposes:

1. It provides an interface between the computer hardware and the programmer that
simplifies and makes feasible the coding, creation and debugging of application programs.

2. It controls the allocation and use of the computing system's resources among the
various users and tasks.
Functions of the operating system
The Following are some of important functions of an operating System.

1. Memory Management
Memory management refers to the management of primary memory or main memory (RAM).
Main memory is a large array of words or bytes where each word or byte
has its own address. Main memory provides fast storage that can be accessed directly by the
CPU, so for a program to be executed, it must be in main memory. The operating system does
the following activities for memory management:
 Keeps track of primary memory, i.e. what parts of it are in use and by whom, and what parts are not in
use.
 In multiprogramming, decides which process will get memory, when, and how much.
 Allocates memory when a process requests it.
 De-allocates memory when the process no longer needs it or has been terminated.
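These bookkeeping duties can be sketched in miniature. The toy allocator below (the process names, memory size and allocation strategy are illustrative assumptions, not from the text) tracks which part of a fixed main memory is in use and by whom, grants memory on request, and reclaims it on termination:

```python
class ToyMemoryManager:
    """Tracks which blocks of a fixed-size main memory are in use and by whom."""

    def __init__(self, size):
        self.size = size
        self.allocations = {}  # process name -> (start address, length)
        self.next_free = 0     # simplistic bump allocation; no compaction

    def allocate(self, process, length):
        """Grant memory to a requesting process, if enough remains."""
        if self.next_free + length > self.size:
            return None  # request denied: not enough free memory
        start = self.next_free
        self.allocations[process] = (start, length)
        self.next_free += length
        return start

    def deallocate(self, process):
        """Reclaim memory when a process terminates or no longer needs it."""
        self.allocations.pop(process, None)

    def in_use(self):
        """Report what part of memory is in use and by whom."""
        return dict(self.allocations)


mm = ToyMemoryManager(size=1024)
mm.allocate("editor", 400)
mm.allocate("compiler", 500)
print(mm.in_use())     # both processes currently hold memory
mm.deallocate("editor")
print(mm.in_use())     # only the compiler remains
```

A real memory manager must also handle fragmentation, protection and (in later generations) paging; this sketch only shows the keep-track/allocate/de-allocate cycle listed above.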

2. Processor Management
In a multiprogramming environment, the OS decides which process gets
the processor, when, and for how much time. This function is called process scheduling. The
operating system does the following activities for processor management:
 Keeps track of the processor and the status of each process. The program responsible for this task is
known as the traffic controller.
 Allocates the processor (CPU) to a process.
 De-allocates the processor when a process no longer requires it.
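The "which process gets the processor, when, and for how much time" decision is a scheduling policy. As a hedged illustration, the sketch below simulates one common policy, round-robin with a fixed time slice; the process names and burst times are invented for the example:

```python
from collections import deque

def round_robin(processes, quantum):
    """Simulate CPU allocation: each process runs for at most `quantum`
    time units, then the CPU is de-allocated and given to the next one."""
    ready = deque(processes.items())        # (name, remaining burst time)
    timeline = []
    while ready:
        name, remaining = ready.popleft()   # allocate the CPU
        run = min(quantum, remaining)
        timeline.append((name, run))        # process uses the CPU for `run` units
        remaining -= run
        if remaining > 0:
            ready.append((name, remaining)) # pre-empted: back of the ready queue
        # else: process finished, CPU de-allocated for good
    return timeline

schedule = round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2)
print(schedule)
# [('P1', 2), ('P2', 2), ('P3', 1), ('P1', 2), ('P2', 1), ('P1', 1)]
```

Round-robin is only one of many policies; later sections on process scheduling discuss the criteria (throughput, response time, fairness) that lead to other choices.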

3. Device Management
The OS manages device communication via the devices' respective drivers. The operating system does the
following activities for device management:
 Keeps track of all devices. The program responsible for this task is known as the I/O controller.
 Decides which process gets the device, when, and for how much time.
 Allocates each device in an efficient way.
 De-allocates devices.

Drivers are programs written for specific hardware devices.


Assembler: The input to an assembler is an assembly language program. The output is an object
program plus information that enables the loader to prepare the object program for execution.
At one time, the computer programmer had at his disposal a basic machine that interpreted,
through hardware, certain fundamental instructions. He would program this computer by
writing a series of ones and zeros (machine language) and placing them into the memory of the
machine.

Compiler: High-level languages, examples of which are FORTRAN, COBOL, ALGOL and PL/I, are
processed by compilers and interpreters. A compiler is a program that accepts a source program
in a high-level language and produces a corresponding object program. An interpreter is a
program that appears to execute a source program as if it were machine language. The same
name (FORTRAN, COBOL, etc.) is often used to designate both a compiler and its associated
language.
Loader: A loader is a routine that loads an object program and prepares it for execution. There
are various loading schemes: absolute, relocating and direct-linking. In general, the loader must
load, relocate and link the object program. The loader is a program that places programs into
memory and prepares them for execution. In a simple loading scheme, the assembler outputs
the machine language translation of a program to a secondary device and a loader places it in
core memory. The loader places into memory the machine language version of the user's program
and transfers control to it. Since the loader program is much smaller than the assembler, this
makes more core memory available to the user's program.

The operating system must support the following tasks:

1. Provide facilities to create and modify programs and data files using an editor.
2. Provide access to the compiler for translating the user program from high-level language to
machine language.
3. Provide a loader program to move the compiled program code to the computer's
memory for execution.
4. Provide routines that handle the details of I/O programming.

4. File Management
A file system is normally organized into directories for easy navigation and
usage. These directories may contain files and other directories. The operating system does the
following activities for file management:
 Keeps track of information, location, uses, status etc. These collective facilities are often known
as the file system.
 Decides who gets the resources.
 Allocates the resources.
 De-allocates the resources.
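The directory organization described above (directories containing files and other directories) can be sketched as a small in-memory tree. This is a toy model, not a real file system; all names and metadata fields are illustrative:

```python
class Directory:
    """A directory holds files and other directories, as described above."""

    def __init__(self, name):
        self.name = name
        self.files = {}    # file name -> metadata the OS keeps track of
        self.subdirs = {}  # directory name -> Directory

    def mkdir(self, name):
        """Create a sub-directory inside this one."""
        self.subdirs[name] = Directory(name)
        return self.subdirs[name]

    def create_file(self, name, size):
        # the OS keeps track of information such as size and status
        self.files[name] = {"size": size, "status": "closed"}

    def list_all(self, prefix=""):
        """Walk the tree, yielding full path names for easy navigation."""
        for fname in self.files:
            yield prefix + "/" + fname
        for d in self.subdirs.values():
            yield from d.list_all(prefix + "/" + d.name)


root = Directory("root")
root.create_file("notes.txt", size=120)
docs = root.mkdir("docs")
docs.create_file("report.docx", size=2048)
print(sorted(root.list_all()))  # ['/docs/report.docx', '/notes.txt']
```

A real file system adds ownership, permissions and on-disk layout; the sketch only shows the hierarchical structure the text describes.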

5. Security: By means of passwords and similar techniques, the OS prevents unauthorized access
to programs and data.

6. Control over system performance: Recording delays between a request for a service and the
response from the system.

7. Job accounting: Keeping track of the time and resources used by the various jobs and users.

8. Error-detecting aids: Production of dumps, traces, error messages and other debugging and
error-detecting aids.

9. Coordination between other software and users: Coordination and assignment of compilers,
interpreters, assemblers and other software to the various users of the computer system.
Services provided by the operating systems
The services provided by the OS can be categorized into two:

1. Convenience for the Programmer/User


The OS offers the user convenience in the following ways:
i. Program Creation: Although text editors and debuggers are not part of the OS, they are
accessed through the OS to assist programmers in creating programs.
ii. Program Execution: The OS ensures that programs are loaded into main memory, that I/O devices
and files are initialized, and that other resources are prepared. The program must be able to end its
execution either normally or abnormally; in the case of an abnormal end, it must indicate the
error.
iii. Access to I/O devices: Each I/O (input/output) device requires its own set of instructions or
control signals for operation. The OS takes care of the details so that the programmer can think
in terms of reads and writes.
iv. Controlled Access: In the case of files, control includes an understanding of the nature of the
I/O device (e.g. diskette drive, CDROM drive, etc.) as well as the file format of the storage
medium. The OS deals with these details. In the case of the multi-user system, the OS must
provide protection mechanisms to control access to the files.
v. Communications: There are many instances in which a process needs to exchange
information with another process. There are two major ways in which communication can
occur:
(a) It can take place between processes executing on the same computer.
(b) It can take place between processes executing on different computer systems that
are linked by a computer network.
Communications may be implemented through a shared memory or by a technique of message
passing in which packets of information are moved between processes by the OS.
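As a brief, hedged illustration of the message-passing style, the sketch below uses Python's multiprocessing pipe, through which the OS moves packets of information between a parent and a child process (the message contents are invented for the example; shared memory would be the alternative style):

```python
from multiprocessing import Process, Pipe

def worker(conn):
    """Child process: receive a request, send back a reply."""
    request = conn.recv()               # packet moved in by the OS
    conn.send("processed: " + request)  # reply goes back the same way
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()      # OS-provided message channel
    child = Process(target=worker, args=(child_end,))
    child.start()
    parent_end.send("job-1")            # message passing: no memory is shared
    reply = parent_end.recv()
    child.join()
    print(reply)                        # -> processed: job-1
```

The same pattern extends to processes on different machines, with a network socket taking the place of the pipe.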
vi. Error Detection: A variety of errors can occur while a computer system is running. These
errors include:
(a) CPU and memory hardware error: This encompasses memory error, power failure, a
device failure such as connection failure on a network, lack of paper in printer.
(b) Software errors: Arithmetic overflow, attempt to access forbidden memory
locations, inability of the OS to grant the request of an application.
In each case, the OS must make a response that has the least impact on running applications.
The response may range from ending the program that caused the error, to retrying the operation,
or simply reporting the error to the application.

2. Efficiency of System: Single and Multi-User


In the area of system efficiency, the OS offers the following services:
i. System Access or Protection: In the case of a shared or public system, the OS controls access
to the system and to specific system resources by ensuring that each user authenticates
him/herself to the system, usually by means of a password, before being allowed access to system
resources. Protection extends to defending external I/O devices, including modems and network adapters,
from invalid access attempts, and to recording all such connections for detection of break-ins.

ii. Resource Allocation: In an environment where there are multiple users or multiple jobs running
at the same time, resources must be allocated to each of them. Many different types of
resources are managed by the OS. Some (such as CPU cycles, main memory and file storage)
may have general request and release codes. For instance, in determining how best to use the
CPU, the OS has CPU-scheduling routines that take into account the speed of the CPU, the jobs
that must be executed, the number of registers available and other factors. These routines may
also be used to allocate plotters, modems and other peripheral devices.

iii. Accounting: This helps to keep track of how much of each type of computer resource
is used by each user. Today, this record keeping is often not for billing purposes but simply for
accumulating usage statistics. These statistics may be a valuable tool for researchers who want to
reconfigure the system to improve computing services.

iv. Ease of Evolution of the OS: A major OS will evolve over time for a number of reasons, such as
hardware upgrades and new types of hardware. For example, the use of graphics terminals may affect OS
design, because such a terminal may allow the user to view several applications at the
same time through 'windows' on the screen. This requires more sophisticated support in the
OS.

v. New Services: In response to user demands or the need of system managers, the OS may
expand to offer new services.

vi. Error correction: The OS may have faults which may be discovered over the course of time
and fixes will need to be made.

Other features provided by the OS include:


• Defining the user interface
• Sharing hardware among users
• Allowing users to share data
• Scheduling resources among users
• Facilitating I/O
• Recovering from errors
The OS interfaces with programs, hardware, and users such as administrative personnel, computer
operators, application programmers and system programmers.

Examples of operating systems


Some examples of operating systems are:
 Windows (GUI-based, PC)
 GNU/Linux (personal computers, workstations, ISPs, file and print servers, three-tier client/server systems)
 macOS (Macintosh), used for Apple's personal computers and workstations (MacBook,
iMac)
 Android (Google's operating system for smartphones/tablets/smartwatches)
 iOS (Apple's OS for iPhone, iPad and iPod Touch)

GENERATIONS AND HISTORY OF OPERATING SYSTEM

Generations of Operating System


Operating systems and computer architecture have a great deal of influence on each other. Operating
systems were designed to facilitate the use of the hardware. The history of the operating system
can be divided into five different periods called generations, characterized by changes in the
hardware components, software development and the mode of delivery of computer services.

1. The Zeroth Generation (1940s)


This refers to the period when there was no operating system. The hardware component
technology was the electronic vacuum tube. There was no benefit of operating systems, and
programs were written in machine language. The mode of operation was referred to as the
"Open Shop": users signed up for computer time, and when a user's time
arrived, the whole computer was turned over to that user, who took responsibility for the machine
set-up and subsequent clean-up for the next user. This system was inefficient and depended on
the competence of the user, who was usually a programmer.

Features of the Zeroth generation of Operating system


1. Computers were very large and had no operating system.
2. Machine language was used as a means of communication with the computer.
3. There was the Job-by-Job mode of processing.
4. Operators used systems by loading decks, pushing buttons, examining storage locations etc.
5. Batch processing was not done.
6. There was no multi user (Multi-programming) facility.
7. The setup and tear down operations consumed a lot of time.

NOTE
Setting up: This involves putting the machine (computer) in an active state and loading a job individually,
usually from punched cards (which were introduced around 1880). This job had the whole memory for its duration.
Tearing down: When a job runs to completion, or terminates because of some error situation, an
operator loads a program to dump the memory. The operator also removes the cards and printed
output, and takes the machine back to its initial state before another job can be processed.
Thus, a small program requiring only a little CPU time
would take a long time to complete because of the set-up and tear-down times.

2. THE FIRST GENERATION (1951-1956)


Computing systems had reduced in size (due to the introduction of transistors), though they were
still large. The first generation of operating systems
was designed to automate the set-up and tear-down of jobs (i.e. to smoothen the transition
between jobs). This was achieved through batch processing: jobs were gathered in 'batches'
such that one job was processed after the other without users interfering. This means that once a job
was running, it had total control of the machine. As each job terminated, control returned to the
OS, which performed the housekeeping operations, and the next job was read in.

Features of the First Generation of Operating System


1. Single-stream batch processing, i.e. program-to-program transition capabilities, in order to
reduce the overhead involved in starting a new job.
2. Error recovery techniques that automatically 'cleaned up' after a job terminated abnormally
and allowed the next job to be initiated with minimal operator intervention.
3. Job control languages that allowed users to specify many of the details for running their jobs,
the resources the jobs requested, and accounting.
4. Standard I/O routines, called IOCS (Input/Output Control System), so that users
did not have to be concerned with the messy details of machine-level programming of input
and output devices.
5. Paging and virtual storage concepts were introduced but not implemented.
6. Assembly language was introduced.

These systems are often heavily under-utilized. It is far more important for them to be available
when needed and to respond quickly than for them to be busy throughout the time.

3. THE SECOND GENERATION (1956-1964)

These operating systems were batch-oriented and concerned mostly with improving throughput,
i.e. the work processed per unit time, on expensive hardware. Multiprogramming was commonly
employed, in which several programs were resident in main memory at the same time and
the processor switched rapidly between them. This increased throughput and compensated for the
imbalance in speed between I/O devices and the processor.

Multiprocessing systems emerged, in which several processors cooperate, sometimes as
independent computer systems communicating with each other, and sometimes as multiple
processors sharing a common memory. Still in the early sixties, time-sharing systems using an
interactive mode were developed, in which users could interact directly with the computer
through typewriter terminals. This helped to eliminate the delays of hours or days
of the batch processing environment. Users could now share data and programs. This increased
productivity and encouraged creativity among users. Most time-sharing users then spent their
time developing programs or running specially designed application programs. Errors in the
earliest phases of projects were often not located until long after the projects were delivered to
customers. This led to the emergence of the field of software engineering, in order to facilitate a
disciplined and structured approach to the construction of reliable, understandable and
maintainable software. For example, Burroughs introduced an operating system called MCP (Master
Control Program) in 1960.

Features of the Second Generation of Operating Systems


1. Multiprogramming.
2. Multiprocessing.
3. Virtual storage.
4. They were written in a high-level language.
5. Eradication of Batch processing.
6. Error prone Application program.
7. Source language debugging facilities e.g. Language Translators like Interpreters.

4. THE THIRD GENERATION (1964 – 1979)


This generation began effectively with the introduction of the IBM System/360 family of computers.
Third-generation computers were designed to be general-purpose computers. These third-generation
operating systems were multimode systems, some of them simultaneously supporting batch
processing and time sharing. They were large and expensive. Throughout the early 70s, vendors sold
hardware and gave out OS support programs, application programs, documentation and educational
manuals for no charge; thus computer vendors didn't take much responsibility for the OS. IBM was
the first company to unbundle its software from its hardware, i.e. to charge for it separately,
although IBM continued to supply some basic software for no charge.
Features of the Third Generation of Operating Systems
1. The electronic component was the integrated circuit.
2. Customers took responsibility for the quality of their software.
3. Vendors began to design their software more modularly so that it could be sold as individual
units.
4. Users could now shop around for their software.
5. Other manufacturers unbundled rapidly.
6. Some concepts disappeared and later reappeared in different forms, e.g. the paging and virtual
storage concepts.
7. Operating systems were developed for families of machines.
8. The merging of the multiprogramming/batch philosophy with time-sharing technology formed
operating systems capable of handling both batch and time-sharing operations.

5. THE FOURTH GENERATION (1979 TO PRESENT DATE)


Very large scale integrated (VLSI) circuits were introduced, which led to the development of the
microcomputer. Software was developed for families of machines. In other words, improvements
in hardware miniaturization and technology evolved so fast that they led to the manufacture of
inexpensive workstation-class computers capable of supporting multiprogramming and time-sharing.
Hence the operating systems that support today's personal computers and workstations look much
like those which were available for the minicomputers of the third generation.

Networking was enhanced, such that users were no longer confined to communicating with a
single computer in a time-shared mode; rather, a user could communicate with geographically
dispersed systems. This gave room for security problems, with data passing over various
types of vulnerable communication lines. Encryption received attention: it became necessary
to encode data so that it is only useful to the receiver. The highly symbolic, mnemonic, acronym-
oriented user environment was replaced with menu-driven systems that guided users through the
various available options in plain English. The concept of virtual machines became widely used.
Today's user is not concerned with the internal functioning of the machine, but with accomplishing
work with the computer. Database systems have gained wide acceptance and importance. Thousands
of online databases have become available for access via terminals over communication networks.
The concept of distributed data processing has become firmly established.

Generation   Year          Electronic Device Used            Types of OS
Zeroth       1940s         Vacuum tubes                      No OS. Open Shop mode of operation, plug cards
First        1951-1956     Transistors                       Closed Shop, plugged cards, batch processing
Second       1956-1964     Integrated circuits               Sequential batch processing and experimental multiprocessing OS, also known as time-sharing OS
Third        1964-1979     Large integrated circuits         Batch processing, time sharing and multiprogramming OS
Fourth       1979 - date   Very large integrated circuits    Networked and distributed systems

The evolution of operating systems


The evolution of operating systems is directly dependent on the development of computer systems and
how users use them. Below is a brief timeline of the evolution of operating systems over the past fifty
years or so.

Early Evolution
1945: ENIAC, Moore School of Engineering, University of Pennsylvania.
1949: EDSAC and EDVAC
1949: BINAC - a successor to the ENIAC
1951: UNIVAC by Remington
1952: IBM 701
1956: The interrupt
1954-1957: FORTRAN was developed

Operating Systems - Late 1950s


By the late 1950s, operating systems were much improved and had begun to support the following:
1. Single-stream batch processing.
2. Common, standardized input/output routines for device access.
3. Program transition capabilities to reduce the overhead of starting a new job.
4. Error recovery to clean up after a job terminated abnormally.
5. Job control languages that allowed users to specify the job definition and resource requirements.

Operating Systems - In 1960s


1961: The dawn of minicomputers
1962: Compatible Time-Sharing System (CTSS) from MIT
1963: Burroughs Master Control Program (MCP) for the B5000 system
1964: IBM System/360
1960s: Disks became mainstream
1966: Minicomputers got cheaper, more powerful, and really useful.
1967-1968: The mouse was invented.
1964 and onward: Multics
1969: The UNIX Time-Sharing System from Bell Telephone Laboratories.

Supported OS Features by 1970s


1. Multi-user and multitasking systems were introduced.
2. Dynamic address translation hardware and virtual machines came into the picture.
3. Modular architectures came into existence.
4. Personal, interactive systems came into existence.

Accomplishments after 1970


1971: Intel announces the microprocessor
1972: IBM comes out with VM: the Virtual Machine Operating System
1973: UNIX 4th Edition is published
1973: Ethernet
1974: The Personal Computer Age begins
1974: Gates and Allen wrote BASIC for the Altair
1976: Apple II
August 12, 1981: IBM introduces the IBM PC
1983: Microsoft begins work on MS-Windows
1984: Apple Macintosh comes out
1990: Microsoft Windows 3.0 comes out
1991: GNU/Linux
1992: The first Windows virus comes out
1993: Windows NT
2007: iOS
2008: Android OS

As research and development work continues, we are seeing new operating systems being
developed and existing ones improved and modified to enhance the overall user experience,
making operating systems faster and more efficient than ever before. Also, with the onset of new
devices such as wearables, including smart watches, smart glasses and VR gear, the demand for
unconventional operating systems is also rising.
TYPES OF OPERATING SYSTEMS

Operating systems can be categorized based on various criteria.


1. Types of Operating Systems Based on the Types of Computer they Control
and the Kind of Applications they Support
a. Real-Time Operating Systems (RTOS): A real-time operating system is a data-processing
system in which the time interval required to process and respond to inputs is so
small that it controls the environment.

It is defined as an operating system that guarantees a maximum time for each of the
critical operations it performs, such as OS calls and interrupt handling.

Real-time processing is always online, whereas an online system need not be real-time.
The time taken by the system to respond to an input and display the required
updated information is termed the response time. In real-time systems this response
time is much shorter than in ordinary online processing.

Real-time systems are used when there are rigid time requirements on the
operation of a processor or the flow of data, and they can be used as control
devices in dedicated applications. A real-time operating system has well-defined,
fixed time constraints, otherwise the system will fail. Examples include scientific
experiments, medical imaging systems, industrial control systems, missile/weapon
systems, robots, home-appliance controllers and air traffic control systems.

There are two types of real-time operating systems.


Hard real-time systems
Hard real-time operating systems are meant for applications where time
constraints are very strict and even the shortest possible delay is not acceptable:
the system guarantees that critical tasks are completed on time. These systems are
built for life-critical uses, such as automatic parachutes or air bags, which must be
readily available in case of an accident. In hard real-time systems secondary storage
is limited or absent, with data stored in ROM, and virtual memory is almost never
found.

Soft real-time systems
Soft real-time systems are applied where the time constraint is less strict; they are
less restrictive. A critical real-time task gets priority over other tasks and retains
that priority until it completes.
Soft real-time systems have more limited utility than hard real-time systems.
Examples include multimedia, virtual reality, and advanced scientific projects such
as undersea exploration and planetary rovers.

Advantages of RTOS
 Maximum Consumption: Maximum utilization of devices and the system, and thus
more output from all the resources.
 Task Shifting: The time assigned for shifting between tasks is very short; for
example, older systems took about 10 microseconds to switch from one task to
another, while newer systems take about 3 microseconds.
 Focus on Application: The focus is on running applications, with less importance
given to applications waiting in the queue.
 Use in embedded systems: Since program sizes are small, an RTOS can also be
used in embedded systems, such as in transport and elsewhere.
 Error Free: These systems are designed to be as error-free as possible.
 Memory Allocation: Memory allocation is best managed in these types of systems.

Disadvantages of RTOS
 Limited Tasks: Very few tasks run at the same time, and the system concentrates
on only a few applications in order to avoid errors.
 Heavy use of system resources: An RTOS can consume significant system
resources, which are expensive.
 Complex Algorithms: The algorithms are very complex and difficult for the
designer to write.
 Device drivers and interrupt signals: It needs specific device drivers and interrupt
signals to respond to interrupts as quickly as possible.
 Thread Priority: Setting thread priorities is difficult, as these systems rarely
switch between tasks.

b. Single-User Operating System: A single-user operating system provides facilities to
only one user at a time; that is, it allows one user to access the computer system at
a time. This class of operating system is usually found in personal computers (PCs)
and laptops. They are of two types:
1. Single-User, Single-Task Operating Systems: This type of operating system
allows the user to perform only one task at a time. An example is the Palm OS in
handheld devices.
2. Single-User, Multitasking Operating Systems: This type of single-user operating
system allows the user to perform multiple tasks at the same time. Examples are
Microsoft Windows and Apple Mac OS, which allow a single user to work on
multiple programs at the same time; for example, a user can work in a text
editor, browse the World Wide Web and play music simultaneously. Most
modern personal computers and laptops run single-user multitasking operating
systems.
c. Multi-User Operating System: A multi-user operating system allows multiple users to
access the computer at the same time. The operating system manages memory and
resources among the various users according to their requirements, so the tasks of
one user do not affect the tasks of other users. UNIX, Linux, VMS and mainframe
operating systems such as MVS are examples of multi-user operating systems.
There are two types of multi-user operating systems:
1. Time-Sharing Operating System: A time-sharing operating system allows multiple
users in different locations to use a particular computer system at the same time.
2. Distributed Operating System: In a distributed operating system, the data
processing task is divided among the processors accordingly. These systems
can be located in different geographical locations, but tasks are shared amongst
them for quick delivery.
DIFFERENCE BETWEEN SINGLE USER AND MULTI USER OPERATING SYSTEMS

           | Single-User Operating System         | Multi-User Operating System
Definition | A type of operating system that      | A type of operating system that
           | provides facilities to only one user | provides resources and services to
           | at a time                            | multiple users at a time
Types      | Single-user single-task OS and       | Time-sharing OS and distributed OS
           | single-user multitask OS             |
Complexity | Simple                               | Complex
Examples   | Microsoft Windows, Apple Mac OS      | UNIX and Linux
           | and Palm OS                          |

NOTE
There are, however, multi-user operating systems and single-user operating systems
that support networking. For example, Windows 2000 and Novell NetWare can each
support hundreds or thousands of networked users, but the operating systems
themselves are not true multi-user operating systems: the system administrator is
the only 'user' of Windows 2000 or NetWare. The network support, and all the
remote user logins the network enables, are, in the overall plan of the operating
system, simply a program being run by the administrative user.

2. Types of Operating Systems Based on the Nature of Interaction that Takes
Place between the Computer User and the Program during Processing

Based on the interaction that takes place between the user and his or her program
during its processing, operating systems are grouped into three types: batch,
time-shared and real-time operating systems.
a. Batch Processing Operating Systems: In a batch processing operating system
environment, users submit jobs to a central place after they have been prepared on an
offline device such as punch cards. These jobs are collected into batches by the
operator and subsequently placed on an input queue at the computer, where they will
be run. Jobs are placed in batches in order to speed up processing; the batching is done
in such a way that jobs requiring the same resources are sorted and grouped together.
In this case, the user has no interaction with the job during its processing, and the
computer's response time is the turnaround time (i.e. the time from submission of the
job until execution is complete and the results are ready for return to the person who
submitted the job). The problems with batch systems are as follows:
1. Lack of interaction between the user and the job.
2. The CPU is often idle, because the mechanical I/O devices are slower than the
CPU.
3. It is difficult to provide the desired priority.
Batch Operating System

In Summary
The batch operating system does not interact with the computer directly. An
operator takes similar jobs having the same requirements and groups them into
batches; it is the responsibility of the operator to sort jobs with similar needs.

Advantages of Batch Operating System


 Although it is generally difficult to guess the time required for a job to complete,
the processors of batch systems know how long a job will take once it is in the
queue
 Multiple users can share the batch system
 The idle time for a batch system is very short
 It is easy to manage large, repetitive work in batch systems

Disadvantages of Batch Operating System


 Computer operators must be familiar with batch systems
 Batch systems are hard to debug
 They are sometimes costly
 If any job fails, the other jobs must wait for an unknown length of time

Examples of batch-based processing: payroll systems, bank statement generation, etc.

b. Time-sharing operating systems: Time sharing is a technique which enables many
users, located at various terminals, to use a particular computer system at the same
time. Time-sharing, or multitasking, is a logical extension of multiprogramming:
processor time shared among multiple users simultaneously is termed time-sharing.
The tasks to be processed can come from a single user or from different users. The
time that each task gets to execute is called the quantum; after this time interval is
over, the OS switches over to the next task.
The main difference between multiprogrammed batch systems and time-sharing
systems is that in multiprogrammed batch systems the objective is to maximize
processor use, whereas in time-sharing systems the objective is to minimize response
time. Multiple jobs are executed by the CPU by switching between them, but the
switches occur so frequently that the user can receive an immediate response.
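The quantum-based switching described above can be sketched as a small round-robin simulation. This is an illustrative model only: the task names, burst times and quantum value are invented for the example, and a real scheduler would of course operate on live processes rather than a dictionary.

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate time-sharing: each task runs for at most `quantum`
    time units, then the OS switches to the next ready task.
    Returns each task's completion time."""
    ready = deque(burst_times.items())   # ready queue of (task, remaining time)
    finish, clock = {}, 0
    while ready:
        task, remaining = ready.popleft()
        time_slice = min(quantum, remaining)
        clock += time_slice              # the task occupies the CPU
        if remaining > time_slice:
            # quantum expired: pre-empt the task and requeue it
            ready.append((task, remaining - time_slice))
        else:
            finish[task] = clock         # the task completed within its slice
    return finish

# Three user tasks sharing one CPU with a quantum of 2 time units.
print(round_robin({"A": 5, "B": 3, "C": 1}, quantum=2))
# -> {'C': 5, 'B': 8, 'A': 9}
```

Note how the short task C finishes early even though the long task A was submitted first; this is exactly the responsiveness that time-sharing aims for.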

The Time sharing Operating System

The operating system uses CPU scheduling and multiprogramming to provide each
user with a small portion of the processor's time. Computer systems that were
designed primarily as batch systems have been modified into time-sharing systems.

Advantages of Timesharing operating systems


1. Provide advantage of quick response.
2. Avoids duplication of software.
3. Reduces CPU idle time.

Disadvantages of Timesharing operating systems


1. Problem of reliability.
2. Question of security and integrity of user programs and data.
3. Problem of data communication.

Examples of Time-Sharing OSs are: Multics, Unix etc


c. Real-Time Operating Systems: The third class of operating systems, real-time
operating systems, are designed to service those applications where response time is
of the essence in order to prevent error, misrepresentation or even disaster. Examples
of real-time applications are airline reservation handling, machine tool control, and
monitoring of a nuclear power station. The systems, in this case, are designed to be
interrupted by external signals that require the immediate attention of the computer
system.
Most computer operating systems are hybrids, providing more than one of these types
of computing service simultaneously. It is especially common to have a background
batch system running in conjunction with one of the other two on the same computer.
3. Types of Operating Systems based on the Definition of the
System/Environment
a. Multiprogramming Operating System: A multiprogramming operating system is a
system that allows more than one active user program (or part of user program) to be
stored in main memory simultaneously. Thus, it is evident that a time-sharing system is a
multiprogramming system, but note that a multiprogramming system is not necessarily a
time-sharing system.
A batch or real time operating system could, and indeed usually does, have more than
one active user program simultaneously in main storage.
Another important, and confusingly similar, term is 'multiprocessing'. A
multiprocessing system is a computer hardware configuration that includes more
than one independent processing unit. The term multiprocessing is generally used to
refer to large computer hardware complexes found in major scientific or commercial
applications.

Multiprogrammed System

b. Network Operating System: A networked computing system is a collection of
physically interconnected computers. The operating system of each of the
interconnected computers must contain, in addition to its own stand-alone
functionality, provisions for handling communication and transfer of programs and
data among the other computers with which it is connected.

A network operating system runs on a server and provides the server with the
capability to manage data, users, groups, security, applications and other networking
functions. The primary purpose of a network operating system is to allow shared file
and printer access among multiple computers in a network, typically a local area
network (LAN) or private network, or to other networks.

Network operating system

In a network operating system, users are aware of the existence of multiple
computers, and can log in to remote machines and copy files from one machine to
another; such computers are known as loosely coupled systems. Each machine runs
its own local operating system and has its own user (or users). Network operating
systems are designed with more complex functional capabilities.

Advantages of network operating systems


1. Centralized servers are highly stable.
2. Security is server managed.
3. Upgrades to new technologies and hardware can be easily integrated into the
system.
4. Remote access to servers is possible from different locations and types of
systems.

Disadvantages of network operating systems


1. High cost of buying and running a server.
2. Dependency on a central location for most operations.
3. Regular maintenance and updates are required.

Examples of network operating systems: Microsoft Windows Server 2003, Microsoft


Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD.
c. Distributed Operating Systems: A distributed computing system consists of computers
that are connected and managed so that they automatically share the job processing
load among the constituent computers, or separate the job load, as appropriate, among
particularly configured processors. Independent systems possess their own memory
unit and CPU. They use multiple central processors, which may differ in size and
function, to serve multiple real-time applications and multiple users. Data processing
jobs are distributed among the processors according to which one can perform each
job most efficiently. The processors communicate with one another through various
communication lines (such as high-speed buses or telephone lines). These are referred
to as loosely coupled systems or distributed systems. Processors in a distributed
system may vary in size and function, and are referred to as sites, nodes or
computers. The major benefit of working with this type of operating system is that
a user can access files or software which are not actually present on his own system
but on some other system connected within the network; i.e., remote access is
enabled among the devices connected in that network.
Distributed System

The computer systems in a distributed system require an operating system which, in
addition to the typical stand-alone functionality, provides coordination of the
operations and information flow among the component computers. The distributed
computing environment and its operating systems, like the networking environment,
are designed with more complex functional capabilities. However, a distributed
operating system, in contrast to a network operating system, is one that appears to
its users as a traditional uniprocessor system, even though it is actually composed of
multiple processors. In a true distributed system, users should not be aware of where
their programs are being run or where their files are located; that should all be
handled automatically and efficiently by the operating system.

Advantages of Distributed Systems


1. With the resource sharing facility, a user at one site may be able to use resources
available at another.
2. Speeds up the exchange of data between sites, e.g. via electronic mail.
3. If one site fails in a distributed system, the remaining sites can potentially continue
operating.
4. Better service to the customers.
5. Reduction of the load on the host computer.
6. Reduction of delays in data processing.
7. Failure of one site will not affect communication among the others, as all systems
are independent of each other.

Disadvantages of Distributed Operating System


1. Failure of the main network will stop the entire communication.
2. The languages used to establish distributed systems are not yet well defined.
3. These systems are not readily available, as they are very expensive, and the
underlying software is highly complex and not yet well understood.

An example of a distributed operating system is LOCUS.


INTER-PROCESS COMMUNICATION

Inter-process communication (IPC) is a vital component of operating systems,
allowing processes to communicate and coordinate with each other efficiently. IPC is
crucial for multitasking, enabling processes to share data, synchronize their activities,
and collaborate effectively.

IPC refers to the mechanisms and techniques that operating systems use to facilitate
communication between different processes. In a multitasking environment, numerous
processes run concurrently, and IPC serves as the bridge that allows them to exchange
information and coordinate their actions. IPC and synchronization mechanisms help
coordinate processes to ensure data consistency in the operating system.

The need for IPC

1. Data Sharing: Processes often need to share data. For example, a text editor may need
to pass data to a printer process to generate a hard copy.
2. Synchronization: Processes may need to synchronize their activities. For instance, in a
multi-threaded environment, threads must coordinate to ensure data consistency.
3. Communication: Processes might need to communicate for a variety of purposes, such
as exchanging information, signaling, and error handling.
4. Resource Sharing: IPC helps manage and share resources, like file access, memory, or
hardware devices among processes.

IPC Methods

There are several methods of IPC used in modern operating systems. Each method has its
strengths and weaknesses, and the choice of method depends on the specific requirements of
the processes involved: (Pain, 2023)
1. Message Passing: In message passing, processes exchange data by sending and receiving
messages through a messaging system. This method is particularly useful for inter-
process communication in distributed systems.
2. Shared Memory: Processes can communicate by sharing a common memory region.
This method is efficient but requires synchronization mechanisms to avoid data
inconsistencies.
3. Sockets: Sockets are commonly used for IPC in networked systems. Processes can
communicate over a network or on the same machine by reading and writing data
through sockets.
4. Pipes and FIFOs: Pipes are unidirectional channels used for communication between
related processes, while FIFOs (named pipes) have a name in the filesystem and can
be used between unrelated processes.
5. Signals: Processes can send signals to each other to notify about events or requests.
Signals are lightweight and are often used for process management and error handling.
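Of the methods above, the pipe is the simplest to demonstrate. The sketch below creates an anonymous pipe and passes a message through it. For brevity it keeps the writer and reader in a single process; in real IPC the two ends would belong to separate (typically related) processes, for example a parent and a child created by fork.

```python
import os

# Create an anonymous pipe: bytes written to `w` can be read from `r`.
r, w = os.pipe()

# Writer side: send a message into the pipe.
os.write(w, b"hello from the writer")
os.close(w)                  # closing the write end signals end-of-data

# Reader side: receive the message from the pipe.
message = os.read(r, 1024)
os.close(r)
print(message.decode())      # -> hello from the writer
```

The pipe is unidirectional: data flows only from `w` to `r`, which is why two-way communication needs a second pipe (or a bidirectional mechanism such as a socket).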

Applications of IPC

IPC is a fundamental component of modern operating systems and finds applications in various
scenarios:

1. Shell Pipelines: In Unix-like systems, the shell uses pipes to connect the output of one
command to the input of another.
2. Graphical User Interfaces (GUIs): GUI applications use IPC for event handling, such as
sending messages between windows or processes.
3. Server-Client Communication: IPC is essential in client-server applications. Clients and
servers communicate over sockets, pipes, or other IPC mechanisms.
4. Multi-threading: In multi-threaded programs, threads within a process must
communicate and synchronize through IPC mechanisms like semaphores and mutexes.
5. Distributed Systems: IPC is crucial in distributed computing, where processes may run
on different machines. Message passing is commonly used in such scenarios.
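The shell-pipeline application above can also be driven from a program: the sketch below connects two commands with a pipe using Python's subprocess module, reproducing the shell pipeline `echo "ipc demo" | tr a-z A-Z`. It assumes a Unix-like system where the `echo` and `tr` utilities are available.

```python
import subprocess

# Producer: writes "ipc demo" to its standard output.
producer = subprocess.Popen(["echo", "ipc demo"], stdout=subprocess.PIPE)

# Consumer: reads the producer's output through a pipe and uppercases it.
consumer = subprocess.Popen(["tr", "a-z", "A-Z"],
                            stdin=producer.stdout,
                            stdout=subprocess.PIPE)

producer.stdout.close()           # let the consumer own the read end
out, _ = consumer.communicate()   # collect the pipeline's final output
print(out.decode().strip())       # -> IPC DEMO
```

This is exactly what the shell does for `cmd1 | cmd2`: it creates a pipe, attaches the write end to the first process's stdout and the read end to the second process's stdin.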
PROCESS SCHEDULING
Process scheduling is an important part of a multiprogramming operating system. Such operating
systems allow more than one process to be loaded into executable memory at a time, and the loaded
processes share the processor using time multiplexing.
Definition: Process scheduling is the activity of the process manager that handles the removal of the
running process from the processor and the selection of another process on the basis of a particular
strategy.

Objectives of Scheduling
The scheduling of processes affects overall system performance and behaviour. The objectives of
scheduling are to:
1. Maximize the system throughput.

2. Be ‘fair’ to all users. This does not mean all users must be treated equally, but consistently,
relative to the importance of the work being done.

3. Provide tolerable response (for on-line users) or turn-around time (for batch users).

4. Degrade performance gracefully. If the system becomes overloaded, it should not
‘collapse’, but should avoid further loading (e.g. by preventing new jobs or users) or temporarily
reduce the level of service (e.g. response time).

5. Be consistent and predictable. The response and turn-around time should be relatively stable
from day to day.

Criteria for scheduling


In making decisions about the scheduling of jobs for processing, some criteria need to be taken into
consideration by the operating system. The effect of these criteria becomes more evident with the
description of specific scheduling algorithms. The criteria to be considered include the following:
1. Priority assigned to job.

2. Class of job; i.e. batch or on-line or real-time. On-line users require a tolerable response time,
while real-time systems often demand instant service.

3. Resource requirements; e.g. expected run-time, memory required, I/O devices.

4. Whether the job is I/O-bound or CPU-bound; that is, whether it uses predominantly I/O time
or processor time. A CPU-bound process requires more CPU time, and so spends more time in
the running state; an I/O-bound process requires more I/O time and less CPU time, and spends
more time in the waiting state. This criterion is often of consequence because of the need to
balance the use of the processor and the I/O system: if the processor is overwhelmed with CPU-
intensive work, it is unlikely that the I/O devices are being serviced frequently enough to sustain
maximum throughput.

5. Resources used to date; i.e. the amount of processor time already consumed.

6. Waiting time to date; i.e. the amount of time spent waiting for service so far.

 Some of these criteria are ‘static’ characteristics which can be assessed prior to commencement
of process execution. One of these is the priority: a value which can be assigned to each
process and which indicates the relative ‘importance’ of the process, such that a high-priority
process will be selected for execution in preference to a lower-priority one. Scheduling on the
basis of a single priority value enables rapid decisions to be made by the scheduler.
An initial priority can be assigned to each process. In some schemes the priority is static and is
used as a basis for scheduling throughout the life of the process, while in other schemes the
priority is dynamic, being modified to reflect the changing importance of the process. The
priority can be supplied by a user, derived from the characteristics of the job, or both.
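Scheduling on a single static priority value, as described above, can be sketched with a priority queue. The process names and priority numbers below are illustrative, and this sketch adopts the common convention that a lower number means a higher priority.

```python
import heapq

def schedule_by_priority(processes):
    """Return the execution order when the scheduler always picks
    the ready process with the highest priority.
    `processes` is a list of (priority, name) pairs; in this sketch
    a lower number means a higher priority."""
    heapq.heapify(processes)     # ready queue ordered by priority
    order = []
    while processes:
        # The heap makes the selection a rapid O(log n) decision.
        priority, name = heapq.heappop(processes)
        order.append(name)
    return order

# Three ready processes with assigned priorities.
ready = [(3, "editor"), (1, "interrupt_handler"), (2, "compiler")]
print(schedule_by_priority(ready))
# -> ['interrupt_handler', 'compiler', 'editor']
```

A dynamic-priority scheme would differ only in that the (priority, name) pairs are re-inserted with updated priority values as processes age or consume resources.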

Scheduling Queues
Scheduling queues are queues of processes or devices. When processes enter the system, they are
put into a job queue, which consists of all processes in the system. The processes that are residing
in main memory, ready and waiting to execute, are kept on a list called the ready queue. This
queue is generally stored as a linked list: a ready-queue header contains pointers to the first and
final PCBs in the list, and each PCB includes a pointer field that points to the next PCB in the
ready queue.

The operating system also has other queues. When a process is allocated the CPU, it executes for a while
and eventually quits, is interrupted, or waits for the occurrence of a particular event, such as the
completion of an I/O request. An I/O request may be to a dedicated device, such as a tape drive, or to a
shared device, such as a disk. Since the system has many processes, the disk may be busy with the I/O
request of some other process, so the process may have to wait for the disk. The list of processes
waiting for a particular I/O device is called a device queue; each device has its own device queue.

A common way of representing process scheduling is the queuing diagram. The figure below
shows the queuing diagram of process scheduling:
 Queues are represented by rectangular boxes.
 Circles represent the resources that serve the queues.
 The arrows show the process flow in the system.
A Queuing Diagram
There are two types of queues: the ready queue and the set of device queues. A newly arrived process is
put in the ready queue. It waits in the ready queue until it is selected for execution (or dispatched). Once
the process is assigned to the CPU and is executing, one of several events could occur:
1. The process could issue an I/O request and then be placed in an I/O queue.
2. The process could create a new subprocess and wait for its termination.
3. The process could be removed forcibly from the CPU, as a result of an interrupt, and be put
back in the ready queue.
In the first two cases, the process eventually switches from the waiting state to the ready state
and is then put back in the ready queue. A process continues this cycle until it terminates, at
which time it is removed from all queues and has its PCB and resources deallocated.

The Two State Process Model


A process may be in one of two states:
a) Running: the process currently being executed on the processor.

b) Not Running: processes that are not running are kept in a queue, awaiting their turn to
execute. Each entry in the queue is a pointer to a particular process, and the queue is
implemented using a linked list. The dispatcher is used as follows: when a process is
interrupted, it is transferred back into the waiting queue; if the process has completed or
aborted, it is discarded. In either case, the dispatcher then selects a process from the queue
to execute.
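The two-state model and its dispatcher can be sketched as follows. The linked list described in the text is replaced here by Python's deque for brevity, and the process names are purely illustrative.

```python
from collections import deque

class Dispatcher:
    """Two-state process model: one process runs; all others wait
    in a FIFO queue of Not Running processes."""
    def __init__(self):
        self.not_running = deque()   # queue of waiting processes
        self.running = None

    def admit(self, process):
        """A newly created process joins the Not Running queue."""
        self.not_running.append(process)

    def dispatch(self):
        """Select the next process from the queue to execute."""
        self.running = self.not_running.popleft() if self.not_running else None
        return self.running

    def interrupt(self):
        """The running process is paused, requeued, and the
        dispatcher selects another process to run."""
        if self.running is not None:
            self.not_running.append(self.running)
        return self.dispatch()

d = Dispatcher()
for p in ["P1", "P2", "P3"]:
    d.admit(p)
print(d.dispatch())    # P1 runs first
print(d.interrupt())   # P1 is requeued; P2 runs
```

A completed or aborted process would simply not be requeued by `interrupt`, matching the "discarded" case in the text.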

Schedulers
A process moves between the various scheduling queues throughout its lifetime. The operating system
must select, for scheduling purposes, processes for these queues in a particular manner. The selection
process is carried out by the appropriate scheduler.

In a multiprogramming environment, several processes will be competing for the processor. At any
instant, only one process will be running while the others will be ready and waiting for the processor, or
in some other wait condition. The operating system has to determine the optimum sequence and timing
of assigning processes to the processor; this is referred to as SCHEDULING.
In batch systems, often more processes are submitted than can be executed immediately. These
processes are spooled to a mass-storage device, where they are kept for later execution. The long-term
scheduler, or job scheduler, selects processes from this pool and loads them into memory for execution.
The short-term scheduler, or CPU scheduler, selects from among the processes that are ready to
execute, and allocates the processor to one of them.

There are three types of Schedulers. They are:


1. Long-Term Scheduler (High-Level Scheduling (HLS))
It is also called the job scheduler. The long-term scheduler determines which programs are
admitted to the system for processing (which is why it is sometimes called admission
scheduling). The job scheduler selects processes from the queue and loads them into memory
for execution, making them available to the CPU scheduler. The primary objective of the job
scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound jobs. It
also controls the degree of multiprogramming: if the degree of multiprogramming is stable, then
the average rate of process creation must be equal to the average departure rate of processes
leaving the system. On some systems, the long-term scheduler may be absent or minimal;
time-sharing operating systems have no long-term scheduler. The long-term scheduler acts
when a process changes state from new to ready.

2. Medium-Term Scheduling (Intermediate or Medium-Level Scheduling)

It is concerned with the decision to temporarily remove a process from the system (in order to
reduce the system load), a process known as swapping.
Medium-term scheduling is part of the swapping function: it removes processes from memory
and so reduces the degree of multiprogramming. The medium-term scheduler is in charge of
handling the swapped-out processes.

Queuing Diagram showing Medium term scheduling

A running process may become suspended by making an I/O request. Suspended processes cannot make
any progress towards completion. In this condition, there is a need to remove the process from
memory and make space for other processes. Moving a suspended process to
secondary storage is called swapping, and the process is said to be swapped out or rolled out. Swapping
may be necessary to improve the process mix.
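The swap-out/swap-in cycle described above can be sketched as two operations that move a suspended process between memory and swap space. This is a toy model with hypothetical process names, not real memory-management code:

```python
# Toy model of medium-term scheduling: a suspended process is rolled
# out of memory to swap space, and later rolled back in as ready.

memory = {"P1": "running", "P2": "suspended", "P3": "ready"}
swap_space = {}

def swap_out(pid):
    """Move a suspended process from memory to secondary storage."""
    assert memory[pid] == "suspended"
    swap_space[pid] = memory.pop(pid)   # rolled out

def swap_in(pid):
    """Reintroduce a swapped-out process into memory as ready."""
    swap_space.pop(pid)
    memory[pid] = "ready"               # rolled in; execution can continue

swap_out("P2")
print(sorted(memory))   # P2 no longer occupies memory
swap_in("P2")
print(sorted(memory))   # P2 is back, ready to run
```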
3. Short Term Scheduler (Low level scheduling or processor scheduling)
It is also called the CPU scheduler. It decides which ready process is to be assigned to the processor. This level
is often called the dispatcher, but that term more accurately refers to the actual activity of
transferring control to the selected process.
The main objective of this scheduler is to increase system performance in accordance with the chosen set
of criteria. It handles the change of a process from the ready state to the running state. The CPU scheduler selects from
among the processes that are ready to execute and allocates the CPU to one of them. The short-term
scheduler is faster than the long-term scheduler.
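To make the dispatch loop concrete, here is a hedged sketch of one common short-term scheduling policy, round-robin. The process names and CPU burst times are invented for illustration:

```python
# Sketch of a round-robin short-term scheduler: repeatedly take the
# process at the head of the ready queue, run it for one time quantum,
# and requeue it if it still has CPU work remaining.

from collections import deque

def round_robin(bursts, quantum):
    """bursts: {pid: remaining CPU time}. Returns the dispatch order."""
    ready = deque(bursts.keys())
    remaining = dict(bursts)
    order = []
    while ready:
        pid = ready.popleft()          # dispatch: ready -> running
        order.append(pid)
        remaining[pid] -= quantum
        if remaining[pid] > 0:         # preempted: running -> ready
            ready.append(pid)
    return order

print(round_robin({"P1": 3, "P2": 1, "P3": 2}, quantum=1))
# ['P1', 'P2', 'P3', 'P1', 'P3', 'P1']
```

Real CPU schedulers use more elaborate criteria (priorities, multilevel queues), but the ready-queue-to-processor decision shown here is the core of short-term scheduling.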

Comparison between Schedulers


S/N | LONG TERM | SHORT TERM | MEDIUM TERM
1 | It is the job scheduler. | It is the CPU scheduler. | It is the process-swapping scheduler.
2 | Its speed is less than that of the short-term scheduler. | Its speed is very fast; it is the fastest of the three. | Its speed is in between those of the short-term and long-term schedulers.
3 | It controls the degree of multiprogramming. | It gives less control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
4 | Absent or minimal in time-sharing systems. | Minimal in time-sharing systems. | Time-sharing systems use a medium-term scheduler, i.e. it is part of a time-sharing system.
5 | It selects processes from the pool and loads them into memory for execution. | It selects from among the processes that are ready to execute. | A process can be reintroduced into memory and its execution continued.
6 | Process state change: New to Ready. | Process state change: Ready to Running. | -
7 | Selects a good process mix of I/O-bound and CPU-bound jobs. | Selects a new process for the CPU quite frequently. | -

Context switching
A context switch is the mechanism to store and restore the state or context of a CPU in the Process Control
Block so that a process's execution can be resumed from the same point at a later time. That is, the
process of saving the context of one process and loading the context of another process is known as
Context Switching. In simple terms, it is like unloading a process from the running state and loading
another from the ready state. Context Switching happens when:
1. A high-priority process comes to the ready state (i.e. with a higher priority than the running
process)
2. An interrupt occurs
3. A switch between user and kernel mode takes place (though this does not necessarily require it)
4. Preemptive CPU scheduling is used.
Using this technique, a context switcher enables multiple processes to share a single CPU. Context
switching is an essential feature of a multitasking operating system.

When the scheduler switches the CPU from executing one process to executing another, the context
switcher saves the contents of all processor registers for the process being removed from the CPU in its
process descriptor. The context of a process is represented in the process control block of that
process. Context switch time is pure overhead. Context switching can significantly affect performance,
since modern computers have many general and status registers to be saved.

Context switch times are highly dependent on hardware support. A context switch requires (n + m) × b × K
time units to save the state of a processor with n general registers and m status registers, assuming b store
operations are required to save each register and each store instruction requires K time units.
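Plugging illustrative numbers into this cost model gives a quick worked example. The register counts and timings below are assumptions for the exercise, not figures from any real processor:

```python
# Worked example of the context-switch save cost (n + m) * b * K:
# n general registers, m status registers, b store operations per
# register, K time units per store instruction.

def save_cost(n, m, b, K):
    return (n + m) * b * K

# e.g. 16 general registers, 8 status registers,
# 1 store per register, 2 time units per store:
print(save_cost(n=16, m=8, b=1, K=2))   # 48 time units
```

Restoring the incoming process's state costs roughly the same again, which is why context switch time is described above as pure overhead.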

A diagram illustrating Context Switching from process to process

Some hardware systems employ two or more sets of processor registers to reduce context-switching
time. When a process is switched, the information stored includes:
1. Program Counter
2. Scheduling Information
3. Base and limit register value
4. Currently used register
5. Changed State
6. I/O State
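A minimal sketch of saving and restoring this information through PCBs might look like the following. The field names are hypothetical and the "CPU" is just a dictionary standing in for the register set:

```python
# Toy context switch: save the outgoing process's CPU state into its
# PCB, then load the incoming process's saved state onto the CPU.

def context_switch(cpu, pcb_out, pcb_in):
    pcb_out.update(cpu)      # save program counter, registers, etc.
    cpu.clear()
    cpu.update(pcb_in)       # restore the next process's context
    return cpu

cpu = {"pc": 100, "regs": [1, 2]}
pcb_a, pcb_b = {}, {"pc": 500, "regs": [9, 9]}
context_switch(cpu, pcb_a, pcb_b)
print(cpu["pc"], pcb_a["pc"])   # CPU now runs B; A can resume at 100 later
```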

Mode Switch
A mode switch occurs when the CPU privilege level is changed, for example when a system call is made or a
fault occurs. The kernel works in a more privileged mode than a standard user task. If a user process
wants to access things which are only accessible to the kernel, a mode switch must occur. The currently
executing process need not be changed during a mode switch. A mode switch typically occurs for a
process context switch to occur. Only the kernel can cause a context switch.

Co-operating processes
A co-operating process is a process that can affect or be affected by the other processes while executing.
If a process is sharing data with other processes, then it is called a co-operating process.
The Advantages of the co-operating processes are:
1. Sharing of information: Co-operating processes share the information such as a file, memory and so
on. System must provide an environment to allow concurrent access to these types of resources
2. Increases computation speed: Computation speed will increase if the computer has multiple
processing elements that are connected together.
3. Modularity: System is constructed in a modular fashion. System function is divided into number of
modules.
4. Convenience

The behaviour of co-operating processes is nondeterministic, i.e. it depends on the relative execution
sequence and cannot be predicted.

Co-operating processes are also not reproducible. For example, suppose one process writes "ABC" and
another writes "CBA". Different interleaved outputs can be produced, and we cannot tell what came
from which process; for instance, which process output the first "C" in "ABCCBA"? This subtle state
sharing occurs through the terminal. Not just anything can happen, though: for example, "AABBCC"
cannot occur.
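This constraint can be checked by brute force: the sketch below enumerates every interleaving of the two output sequences, assuming each character is written atomically:

```python
# Enumerate all interleavings of two co-operating processes writing
# "ABC" and "CBA" to a shared terminal: each process's characters keep
# their order, but the two streams may mix in any way.

from itertools import combinations

def interleavings(s1, s2):
    n = len(s1) + len(s2)
    results = set()
    # Choose which output positions belong to s1; s2 fills the rest.
    for pos in combinations(range(n), len(s1)):
        it1, it2 = iter(s1), iter(s2)
        out = [next(it1) if i in pos else next(it2) for i in range(n)]
        results.add("".join(out))
    return results

outs = interleavings("ABC", "CBA")
print("ABCCBA" in outs)   # True: a possible interleaving
print("AABBCC" in outs)   # False: cannot occur
```

The second "A" in "AABBCC" would have to come from the second process, but that process must output its "C" and "B" first, so the string is impossible, matching the claim above.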

INTERRUPTS
Processors are faster than the hardware devices that they control and communicate with. It is not ideal for
the kernel (the core of an operating system) to issue a request and wait for a response from slower
hardware. Instead, the kernel must be free to go and handle other functions of the operating system,
dealing with the hardware only after it has actually completed its work.

One way the processor works with hardware without affecting the overall performance of the computer
is the provision of a mechanism for the hardware to signal to the operating system when its attention is
needed. This mechanism is called an interrupt.

Interrupts occur at random times during the execution of a program, in response to signals from
hardware. System hardware uses interrupts to handle events external to the processor, such as requests
to service peripheral devices. Software can also generate interrupts by executing the INT instruction
(program interrupt).
Definition: An interrupt is an event that changes the sequence or order in which the processor executes
instructions. It is usually generated by electrical signals from the computer hardware. It could also be in
response to an asynchronous or exceptional event that automatically saves the current processor status
to allow a later restart and causes an automatic transfer to a specific routine (the interrupt handler).

Two points to note in understanding interrupts are


1. Interrupts enable hardware to signal to the processor. For example, when you type, the
keyboard controller (i.e. the hardware device that manages the keyboard) sends an electrical
signal to the processor to alert the operating system of newly available key presses. These
electrical signals are interrupts. The processor receives the interrupt and signals the operating
system to enable the operating system to respond to the new data.
2. Hardware devices generate interrupts asynchronously (with respect to the processor clock).
Consequently, the operating system can be interrupted at any time to process interrupts.

Processing an Interrupt
An interrupt is produced by electronic signals from hardware devices and directed into input pins on an
interrupt controller (a simple chip that multiplexes multiple interrupt lines into a single line to the
processor):
1. Upon receiving an interrupt, the interrupt controller sends a signal to the processor.
2. The processor detects this signal and interrupts its current execution to handle the interrupt.
3. The processor can then notify the operating system that an interrupt has occurred, and the
operating system can handle the interrupt appropriately.

When an interrupt occurs, the Operating System does the following:


(i) Gains control,
(ii) Saves the state of the interrupted process, usually in the interrupted process's PCB (Process
Control Block),
(iii) Analyses the interrupt, and
(iv) Transfers control to the specific routine that handles the interrupt (the interrupt handler).
Interrupt request
Different devices are associated with different interrupts using a unique value associated with each
interrupt. This enables the operating system to differentiate between interrupts and to know which
hardware device caused which interrupt. In turn, the operating system can service each interrupt with its
corresponding handler. These interrupt values are called interrupt request (IRQ) lines:
 Each IRQ line is assigned a numeric value. For example, on a classic PC, IRQ zero is the timer
interrupt and IRQ one is the keyboard interrupt.
 Some interrupts are dynamically assigned, such as interrupts associated with devices on the PCI
bus. Other non-PC architectures have similar dynamic assignments for interrupt values.
 The kernel knows that a specific interrupt is associated with a specific device. The hardware then
issues interrupts to get the kernel's attention.
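A toy model of this IRQ-to-handler mapping is a dispatch table keyed by the IRQ number. The handler bodies below are placeholders, not real driver code, and the IRQ assignments just mirror the classic-PC examples above:

```python
# Sketch of kernel IRQ dispatch: each numeric IRQ line maps to the
# handler for the device that owns it; unknown lines are flagged.

irq_table = {
    0: lambda: "timer tick handled",       # classic-PC IRQ 0: timer
    1: lambda: "keyboard scancode read",   # classic-PC IRQ 1: keyboard
}

def handle_irq(irq):
    handler = irq_table.get(irq)
    if handler is None:
        return f"spurious interrupt on IRQ {irq}"
    return handler()

print(handle_irq(1))
print(handle_irq(7))
```

Dynamically assigned interrupts (e.g. PCI devices) would be modelled here as entries added to the table at device-discovery time.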
TYPES OF INTERRUPTS
(a) Program Interrupt: A program interrupt is generated by errors occurring within user
programs. It is generated by the processor.
Typical events are:
(i) Address violation: an attempt to address outside a program's memory space. It is referred to as
the protection interrupt. This also applies to an attempt by a user program to execute a
privileged instruction (Operating System service).
(ii) Execution of an invalid instruction
(iii) Arithmetic error such as overflow, or an attempt to divide a number by zero

(b) Supervisor Call Interrupt (SVC): A supervisor call (SVC) interrupt is generated by a running process
that executes an unacceptable SVC instruction. An SVC instruction is a user-generated request for a
particular system service, such as performing an I/O operation, obtaining more storage, or communicating
with another user or the system operator. Typical examples of unacceptable SVC instructions are:
(i) An attempt by a program to access a portion of a job that is not currently in memory.
(ii) An attempt to access a segment that is not presently in memory.

(c) Input/Output (I/O) Interrupt: An I/O interrupt is generated by the controller of an I/O device to
signal the normal completion, start-up, or the occurrence of an error or failure condition of an I/O device.
Typical events are:
(i) An invalid I/O command
(ii) An I/O channel ends its job; that is, an I/O channel-end interrupt occurs when the channel finishes
its job before the device does, which is normal. For example, the output channel transfers data from
memory to the printer's internal buffer; the channel will finish transferring the last batch of data
before the printer finishes printing the job.

(d) External Interrupt: An external interrupt is generated by any machine component. It is not
necessarily a signal of a fault. Typical events are:
(i) A user aborts/terminates his program.
(ii) A signal to the O.S. of the expiry of a user's time slice. This is generated by the internal clock
within the processor.
(iii) The receipt of a signal from another processor on a multiprocessor system.

(e) Machine Check Interrupt: A machine check interrupt is generated by a malfunction of the
hardware. Typical events are:
(i) The screen showing 'fixed disk controller bad'
(ii) The monitor showing 'keyboard bad'
PROGRAM STATUS WORD (PSW)
The Program Status Word is a special portion of the Operating System that gives a complete picture of
the system at any time. It is dynamic in nature and changes very frequently to show the current picture of
the system.

INTERRUPT HANDLERS (I.H)


An interrupt handler (IH), also known as an interrupt service routine (ISR), is the function (routine) that the
Operating System runs in response to a particular interrupt. Each device that generates
interrupts has an associated interrupt handler, and the interrupt handler for a device is part of the
device's driver (the kernel code that manages the device).

The difference between interrupt handlers and other Operating System functions is that the
Operating System invokes interrupt handlers in response to interrupts, and they run in a special context
called interrupt context. This special context is occasionally called atomic context because code
executing in this context is unable to block.

Because an interrupt can occur at any time, an interrupt handler can be executed at any time. It is
important that the handler runs quickly in order to resume execution of the interrupted processes as
soon as possible. It is to the advantage of both the hardware and the rest of the system that the
operating system services the interrupt without delay, so the interrupt handler should execute in as
short a period as possible.

There are five types of interrupt handlers, namely: supervisor call IH, external IH, input/output IH,
program IH and machine check IH. Interrupt handlers are stored in a special portion of memory (RAM).

In summary, the sequence of events that occurs when an interrupt occurs (interrupt processing) is:
(i) The processor stops executing the job or program.
(ii) The Operating System saves the current state of the processor (i.e. the interrupted process) in the
OLD PSW.
(iii) Control is transferred to the interrupt handler.
(iv) The interrupt becomes the current process. The appropriate interrupt handler required is
selected. The address of this interrupt handler is stored in the NEW PSW.
(v) The interrupt handler loads the NEW PSW from its position into the CURRENT PSW (the CURRENT
PSW now contains the address of the appropriate I.H.).
(vi) The interrupt handler analyses and processes the interrupt (the problem is solved).
(vii) The interrupt handler signals the system at the completion of its task.
(viii) The OLD PSW is reloaded into the CURRENT PSW and the next instruction of the interrupted
program is executed.
This implies that there is only one CURRENT PSW, though its content changes periodically. The action of
holding the current state of a process which has been temporarily stopped and starting another
process is called context switching (or context change).
Exceptions
Exceptions occur synchronously with respect to the processor clock; they are often called synchronous
interrupts. Exceptions occur when the processor detects an error condition while executing an
instruction such as division by zero or abnormal conditions like protection violations, page faults, and
internal machine faults that must be handled by the Operating System.

Simply put, an interrupt is an asynchronous event generated by hardware, while an exception is a
synchronous event generated by the processor.

System calls are a type of exception that are implemented by the issuance of a software interrupt, which
traps into the Operating system and causes execution of a special system call handler

Note: Interrupts and exceptions are referred to as vectored events, and they have special IDs called
vectors.

Classifications of Exception
Exceptions are classified as faults, traps, or aborts depending on the way they are reported and whether
the instruction that caused the exception can be restarted without loss of program or task continuity.

Trap: A trap is an exception in a user process. It is caused by division by zero or invalid memory access. It
is also the usual way to invoke a system call because system calls run with a higher priority than user
code. A trap is reported immediately following the execution of the trapping instruction. Traps allow
execution of a program or task to be continued without loss of program continuity. The return address
for the trap handler points to the instruction to be executed after the trapping instruction.

The differences between interrupts and traps are:


(i) Interrupts are hardware interrupts, while traps are software-invoked interrupts.
(ii) The occurrence of hardware interrupts usually disables other hardware interrupts, but this is not
true for traps.
(iii) Traps are active and synchronous while interrupts are passive and asynchronous.

Fault: A fault is an exception that can generally be corrected and that, once corrected, allows the
program to be restarted with no loss of continuity. When a fault is reported, the processor restores the
machine state to the state prior to the beginning of execution of the faulting instruction.

Example: A page fault is often recoverable. A piece of an application's address space may have been
swapped out to disk from Memory. The application will trigger a page fault when it tries to access
memory that was swapped out. The kernel can pull that memory from disk to memory, and hand control
back to the application. The application will continue where it left off (at the faulting instruction that was
accessing swapped out memory), but this time the memory access should succeed without faulting.
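The recoverable page-fault flow can be modelled in a few lines: a missing page triggers a "fault", the kernel swaps the page back in, and the faulting access is retried. This is a toy model, not real paging code:

```python
# Toy model of a recoverable page fault: accessing a swapped-out page
# faults, the "kernel" pulls the page in from disk, and the access is
# retried and succeeds without faulting.

resident = {0: "code page"}    # pages currently in memory
on_disk = {1: "data page"}     # pages swapped out to disk

def access(page):
    if page not in resident:                 # page fault
        resident[page] = on_disk.pop(page)   # kernel swaps the page in
        return access(page)                  # retry the faulting access
    return resident[page]

print(access(1))   # faults once, then succeeds
```

As with the fault definition above, the key property is that the machine state is effectively restored to before the faulting access, so the retry is transparent to the application.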

Abort: An abort is an exception that does not always report the precise location of the instruction
causing the exception and does not allow a restart of the program or task that caused the exception.
Aborts are used to report severe errors, such as hardware errors and inconsistent or illegal values in
system tables.
Edge Cases: Software invoked interrupts (triggered by the INT instruction) behave in a trap-like manner.
The instruction completes before the processor saves its state and jumps to the interrupt handler.
THE KERNEL

A kernel connects the application software to the hardware of a computer. It is the central
component of most computer operating systems (OS). It is responsible for managing the
system's resources and the communication between hardware and software components. It is
the basic component of the operating system that provides the lowest-level abstraction layer for
the resources (especially memory, processors and I/O devices) that applications must control to
perform their function. It makes the Computer resources available for application processes
through inter-process communication mechanisms and system calls.

The kernel connecting the application software to the hardware component of a computer.

Most operating systems depend on the concept of the kernel. The existence of a kernel is a
natural consequence of designing a computer system as a series of abstraction layers, each
relying on the functions of layers beneath itself. The kernel, from this viewpoint, is simply the
name given to the lowest level of abstraction that is implemented in software.

While it is today mostly called the kernel, the same part of the operating system has also in the
past been known as the nucleus or core. (You should note, however, that the term core has also
been used to refer to the primary memory of a computer system, typically because some early
computers used a form of memory called Core memory.)

In most cases, the boot loader starts executing the kernel in supervisor mode, the kernel then
initializes itself and starts the first process. After this, the kernel does not typically execute
directly, only in response to external events (e.g. via system calls used by applications to
request services from the kernel, or via interrupts used by the hardware to notify the kernel of
events). Additionally, the kernel typically provides a loop that is executed whenever no
processes are available to run; this is often called the idle process.

The kernel usually provides features for low-level scheduling of processes (dispatching),
inter-process communication, process synchronization, context switching, manipulation of process
control blocks, interrupt handling, process creation and destruction, and process suspension and
resumption (see process states in the next module).

A typical vision of a computer architecture as a series of abstraction layers: hardware, firmware, assembler, kernel,
operating system and applications.

Difference between Operating System and Kernel


Operating System: it is a system program that provides an interface between the user and the
computer. It is the first program that loads when the computer boots up.

Kernel: The kernel is the core component of an operating system. It is also a system program. It
is the part of the Operating System which converts user commands into machine language.
The differences between the Operating system and the kernel are presented in the table below:
OPERATING SYSTEM | KERNEL
It is a system software. | It is a system software which is part of the operating system.
It provides an interface between the user and the hardware. | It provides an interface between applications and the hardware.
It also provides protection and security. | Its main purposes are memory management, disk management, process management and task management.
All systems need an operating system to run. | All operating systems need a kernel to run.
Types of operating system include single- and multi-user OS, multiprocessor OS, real-time OS, and distributed OS. | Types of kernel include the monolithic kernel, the microkernel, and the hybrid kernel (also the nano and pico kernel).
It is the first program that loads when the computer boots up. | It is the first program to load when the operating system loads.

The Basic Responsibilities of the Kernel


The kernel's primary purpose is to manage the computer's resources and allow other programs
to run and use these resources. Typically, the resources consist of:
 The CPU (usually called the processor): This is the most central part of a computer
system, responsible for running or executing programs. It is the brain of the
computer and consists of the Control Unit and the Arithmetic and Logic Unit (ALU). The
kernel takes responsibility for deciding at any time which of the many running programs
should be allocated to the processor or processors (each of which can usually run only
one program at once).

 The computer's memory. Memory is used to store both program instructions and data.
Typically, both need to be present in memory in order for a program to execute. Often
multiple programs will want access to memory, frequently demanding more memory
than the computer has available. The kernel is responsible for deciding which memory
each process can use, and determining what to do when not enough is available.

 Any Input/output (I/O) devices present in the computer, such as disk drives, printers,
displays, etc. The kernel allocates requests from applications to perform I/O to an
appropriate device (or subsection of a device, in the case of files on a disk or windows
on a display) and provides convenient methods for using the device (typically abstracted
to the point where the application does not need to know implementation details of the
device)
Kernels also usually provide methods for synchronization and communication between
processes (called inter-process communication or IPC).

A kernel may implement these features itself, or rely on some of the processes it runs to provide
the facilities to other processes, although in this case it must provide some means of IPC to
allow processes to access the facilities provided by each other.

Finally, a kernel must provide running programs with a method to make requests to access
these facilities.
1. Process Management
The main task of a kernel is to allow the execution of applications and support them with
features such as hardware abstractions. To run an application, a kernel typically sets up an
address space for the application, loads the file containing the application's code into memory,
and sets up a stack for the program and branches to a given location inside the program, thus
starting its execution.

2. Memory Management
The kernel has full access to the system's memory and must allow processes to access this
memory safely as they require it. Often the first step in doing this is virtual addressing, usually
achieved by paging and/or segmentation. Virtual addressing allows the kernel to make a given
physical address appear to be another address, the virtual address. Virtual address spaces may
be different for different processes; the memory that one process accesses at a particular
(virtual) address may be different memory from what another process accesses at the same
address. This allows every program to behave as if it is the only one (apart from the kernel)
running and thus prevents applications from crashing each other.
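A minimal sketch of paged virtual addressing follows: each process has its own page table, so identical virtual addresses can resolve to different physical locations. The page size and mappings are illustrative:

```python
# Sketch of virtual-to-physical address translation via paging: the
# kernel gives each process its own page table, so the same virtual
# page maps to different physical frames for different processes.

PAGE_SIZE = 4096

def translate(page_table, vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table[page]      # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

proc_a = {0: 5}   # process A: virtual page 0 -> physical frame 5
proc_b = {0: 9}   # process B: the same virtual page -> frame 9

print(translate(proc_a, 100))   # 5*4096 + 100 = 20580
print(translate(proc_b, 100))   # 9*4096 + 100 = 36964
```

Because each process translates through its own table, neither can name (and thus corrupt) the other's physical memory, which is exactly the isolation property described above.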

3. Device Management
To perform useful functions, processes need access to the peripherals connected to the
computer, which are controlled by the kernel through device drivers. For example, to show the
user something on the screen, an application would make a request to the kernel, which would
forward the request to its display driver, which is then responsible for actually plotting the
character/pixel.

A kernel must maintain a list of available devices. This list may be known in advance (e.g. on an
embedded system where the kernel will be rewritten if the available hardware changes),
configured by the user (typical on older PCs and on systems that are not designed for personal
use) or detected by the operating system at run time (normally called Plug and Play).

In a plug and play system, a device manager first performs a scan on different hardware buses,
such as Peripheral Component Interconnect (PCI) or Universal Serial Bus (USB), to detect
installed devices, then searches for the appropriate drivers.

4. System Calls
To actually perform useful work, a process must be able to access the services provided by the
kernel. This is implemented differently by each kernel, but most provide a C library or an API,
which in turn invoke the related kernel functions.
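As a hedged sketch of this path, the "C-library"-style wrappers below trap into a toy kernel that dispatches through a system-call table. Every name here is invented for illustration; real kernels enter via a software interrupt or a dedicated syscall instruction:

```python
# Toy system-call path: user-facing library wrappers "trap" into the
# kernel, which looks the call up in a syscall table and invokes the
# corresponding kernel function.

SYSCALLS = {
    "getpid": lambda: 42,                # pretend kernel: fixed pid
    "write": lambda text: len(text),     # pretend kernel: bytes written
}

def trap(name, *args):
    """Stand-in for the software interrupt that enters the kernel."""
    return SYSCALLS[name](*args)

# Library wrappers, as a C library or API would provide:
def getpid():
    return trap("getpid")

def write(text):
    return trap("write", text)

print(getpid())
print(write("hello"))
```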

Kernel Design Decisions


1. Fault Tolerance
An important consideration in the design of a kernel is fault tolerance; specifically, in cases
where multiple programs are running on a single computer, it is usually important to prevent a
fault in one of the programs from negatively affecting the other. Extended to malicious design
rather than a fault, this also applies to security, and is necessary to prevent processes from
accessing information without being granted permission.

2. Security
An important kernel design decision is the choice of the abstraction levels where the security
mechanisms and policies should be implemented. One approach is to use firmware and kernel
support for fault tolerance, and build the security policy for malicious behaviour on top of that
(adding features such as cryptography mechanisms where necessary), delegating some
responsibility to the compiler. Approaches that delegate enforcement of security policy to the
compiler and/or the application level are often called language-based security.

3. Hardware-Based Protection or Language-Based Protection


In a language-based protection system, the kernel only allows code that has been produced by a
trusted language compiler to execute. The language can be designed such that it is impossible
for the programmer to instruct it to do something that will violate a security requirement of the
operating system.

The Kernel Design Approaches


The functionalities above can be performed by the kernel in different ways depending on its design
approach. These design approaches can also be referred to as categories or types
of kernel. They are:
1. Monolithic kernel: The monolithic kernel manages system resources between
applications and hardware by implementing user services and kernel services
under the same address space. This increases the size of the kernel, and thus increases the size of
the operating system as well.
This kernel provides CPU scheduling, memory management, file management and other
operating system functions through system calls. As both kinds of services are implemented
under the same address space, operating system execution is faster. Below is the
diagrammatic representation of the monolithic kernel.
The diagrammatic representation of the monolithic kernel

One of the major merits of the monolithic kernel is that it provides CPU scheduling, memory
management, file management and other operating system functions through system calls.
Another is that it is a single large process running entirely in a single address space;
it is a single static binary file.
Examples of monolithic-kernel-based OSs are: UNIX, Linux, OpenVMS, XTS-400, z/TPF.
The major drawback of the monolithic kernel is that if any service fails it leads to entire system failure,
and if a user has to add any new service, the user needs to modify the entire operating system.

2. The Microkernel: The microkernel, like the monolithic kernel, manages all system resources, but in a
microkernel the user services and kernel services are implemented in different address spaces.
The user services are kept in user address space, and kernel services are kept under kernel
address space, thereby reducing the size of the kernel and the size of the operating system as well.

The diagrammatic representation of the microkernel

It provides minimal services of process and memory management. The communication


between a client program/application and services running in user address space is
established through message passing, which reduces the execution speed of the microkernel. The
Operating System remains unaffected, as user services and kernel services are isolated, so
if any user service fails it does not affect a kernel service. This is one of the advantages of
the microkernel over the monolithic kernel. It is easily extendable, i.e. if any new services
are to be added they are added to user address space and hence require no
modification in kernel space. It is also portable, secure and reliable.

Since the kernel is the core part of the operating system, it is meant for handling only the most
important services. Thus in this architecture only the most important services are
inside the kernel, and the rest of the OS services are present inside the system application
programs. Users are thus able to interact with those not-so-important services within the system
applications. The microkernel is solely responsible for the most important services of the
operating system, which are: inter-process communication, memory management and
CPU scheduling.

The advantages of the microkernel include better functionality due to its small and
isolated architecture. Expansion of the system is easier, as new services are simply added in the
system application space without disturbing the kernel. The Eclipse IDE is a good example of
microkernel architecture.

Comparison of the Monolithic and Microkernel


3. The Hybrid Kernel: The Hybrid kernel is a kernel architecture based on combining
aspects of microkernel and monolithic kernel architectures used in computer operating
systems. The category is controversial due to the similarity to monolithic kernel; the
term has been dismissed by some as just marketing. The usually accepted categories are
monolithic kernels and microkernels (with nanokernels and exokernels seen as more
extreme versions of microkernels). The hybrid kernel approach tries to combine the
speed and simpler design of a monolithic kernel with the modularity and execution
safety of a microkernel.

The Diagrammatic representation of the Hybrid Kernel Approach

Hybrid kernels are essentially a compromise between the monolithic kernel approach
and the microkernel system. This implies running some services (such as the network
stack or the file system) in kernel space to reduce the performance overhead of a
traditional microkernel, but still running kernel code (such as device drivers) as servers
in user space.
The idea behind this quasi-category is to have a kernel structure similar to a
microkernel, but implemented as a monolithic kernel. In contrast to a microkernel, all
(or nearly all) services are in kernel space. As in a monolithic kernel, there is no
performance overhead associated with microkernel message passing and context
switching between kernel and user mode. Also, as with monolithic kernels, there are
none of the benefits of having services in user space.
