Lecture note on operating system_040632
OPERATING SYSTEM 1
CSC024
Table of Contents
OVERVIEW OF OPERATING SYSTEM
Introduction
Goals and objectives of the operating system
Views of operating system
Functions of the operating system
Services provided by the operating systems
Examples of operating systems
GENERATIONS AND HISTORY OF OPERATING SYSTEM
Generations of Operating System
The evolution of operating systems
TYPES OF OPERATING SYSTEMS
INTER-PROCESS COMMUNICATION
The need for IPC
IPC Methods
Applications of IPC
PROCESS SCHEDULING
Objectives of Scheduling
Criteria for scheduling
Scheduling Queues
Schedulers
Context switching
Co-operating processes
INTERRUPTS
Interrupt request
TYPES OF INTERRUPTS
Exceptions
Classifications of Exception
THE KERNEL
Difference between Operating System and Kernel
The Basic Responsibilities of the Kernel
Kernel Design Decisions
The Kernel Design Approaches
Introduction
An Operating System (OS) can be defined as a set of computer programs that manage
the hardware and software resources of a computer. It is the core piece of system software
and deals closely with the computer architecture. The operating system is a program that
serves as an interface coordinating the different resources of the computer: it processes
raw system and user input and responds by allocating and managing tasks and internal system
resources as a service to the users and programs of the system. In other words, an OS can be
defined as a set of programs, implemented either in software or in firmware (hardwired
instructions on chips, usually in ROM) or both, that makes the hardware usable.
At the foundation of all system software, an operating system performs basic tasks such
as controlling and allocating memory, prioritizing system requests, controlling input and output
devices, facilitating networking and managing file systems. Most operating systems come with
an application that provides an interface to the OS-managed resources. Such applications were
traditionally command line interpreters, but more recently have been implemented as graphical
user interfaces (GUIs) for ease of operation. In this view, the operating system itself has no
user interface; its user is an application, not a person. The operating system forms a platform
for other system software and for application software. Windows, Linux, and Mac OS are some of
the most popular operating systems.
An operating system is a program that acts as an interface between the user and the
computer hardware and controls the execution of all kinds of programs. It is an intermediary
between users and computer hardware, providing an environment in which a user can execute
programs conveniently and efficiently. In technical terms, it is the software that manages
hardware: the operating system controls the allocation of resources and services such as
memory, processors, devices and information.
The Diagram illustrating the Operating System
2. Efficiency
An OS allows computer system resources to be used in an efficient manner. This is particularly
important for large, shared multi-user systems, which are usually expensive. In the past,
efficiency (i.e. optimal use of the computer resources) was often considered more important
than convenience.
3. Evolutionary Capabilities
The ability to evolve is also one of the goals of an OS. An OS should be constructed in
such a way that it permits the effective development, testing and introduction of new system
functions without interfering with its service.
1. It provides an interface between the computer hardware and the programmer that
simplifies and makes feasible the coding, creation and debugging of application programs.
2. It controls the allocation and use of the computing system's resources among the
various users and tasks.
Functions of the operating system
The following are some of the important functions of an operating system.
3. Device Management
The OS manages device communication via the devices' respective drivers. The operating system
does the following activities for device management:
Keeps track of all devices. The program responsible for this task is known as the I/O controller.
Decides which process gets the device, when, and for how long.
Allocates devices in an efficient way.
De-allocates devices.
Compiler: High-level languages (examples are FORTRAN, COBOL, ALGOL and PL/I) are
processed by compilers and interpreters. A compiler is a program that accepts a source program
in a "high-level language" and produces a corresponding object program. An interpreter is a
program that appears to execute a source program as if it were machine language. The same
name (FORTRAN, COBOL, etc.) is often used to designate both a compiler and its associated
language.
Loader: A loader is a routine that loads an object program and prepares it for execution. There
are various loading schemes: absolute, relocating and direct-linking. In general, the loader must
load, relocate and link the object program. The loader places the machine language version of
the user's program into memory and transfers control to it. In a simple loading scheme, the
assembler outputs the machine language translation of a program on a secondary device and the
loader places it in main memory (the core). Since the loader program is much smaller than the
assembler, this makes more memory available to the user's program.
4. File Management
A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories. The operating system does the following
activities for file management:
Keeps track of information: location, uses, status etc. These collective facilities are often
known as the file system.
Decides who gets the resources.
Allocates the resources.
De-allocates the resources.
5. Security: By means of passwords and similar techniques, the OS prevents unauthorized access
to programs and data.
6. Control over system performance: Recording delays between a request for a service and the
response from the system.
7. Job accounting: Keeping track of the time and resources used by various jobs and users.
8. Error detecting aids: Production of dumps, traces, error messages and other debugging and
error detecting aids.
9. Coordination between other software and users: Coordination and assignment of compilers,
interpreters, assemblers and other software to the various users of the computer system.
Services provided by the operating systems
The services provided by the OS can be categorized into two:
ii. Resource Allocation: In an environment where there are multiple users or multiple jobs
running at the same time, resources must be allocated to each of them. Many different types of
resources are managed by the OS. Some (such as CPU cycles, main memory and file storage)
may have general request and release codes. For instance, in determining how best to use the
CPU, the OS has CPU-scheduling routines that take into account the speed of the CPU, the jobs
that must be executed, the number of registers available and other factors. Such routines may
also be used to allocate plotters, modems and other peripheral devices.
iii. Accounting: This helps to keep track of how much of, and what types of, computer resources
are used by each user. Today, this record keeping is done not for billing purposes but simply
for accumulating usage statistics. These statistics may be a valuable tool for researchers who
want to reconfigure the system to improve computing services.
iv. Ease of Evolution of the OS: A major OS will evolve over time for a number of reasons, such
as hardware upgrades and new types of hardware. For example, the use of graphics terminals may
affect OS design, because such a terminal may allow the user to view several applications at the
same time through 'windows' on the screen. This requires more sophisticated support in the
OS.
v. New Services: In response to user demands or the need of system managers, the OS may
expand to offer new services.
vi. Error correction: The OS may have faults which are discovered over the course of time, and
fixes will need to be made.
NOTE
Setting up: This involves putting the machine (computer) in an active state and loading a job
individually, usually from punched cards (introduced around the 1880s). The job had the whole
memory to itself for its entire duration.
Tearing down: When a job runs to completion, or terminates because of some error situation, an
operator loads a program to dump the memory. The operator also removes the cards and printed
output, and takes the machine back to its initial state; no other job can run or be in the
active state while another job is being set up or torn down. Thus, a small program requiring
only a little CPU time could take a long time to complete because of the set-up and tear-down
times.
These systems are often heavily under-utilized. It is far more important for them to be available
when needed and to respond quickly than for them to be busy throughout the time.
These operating systems were batch-oriented and concerned mostly with improving throughput,
i.e. work processed per unit time, on expensive hardware. Multiprogramming was commonly
employed, in which several programs were resident in main memory at the same time and
the processor switched rapidly between them. This increased throughput and compensated for
the imbalance in speed between I/O devices and the processor.
Networking was enhanced, such that users were no longer confined to communicating with a
single computer in a time-shared mode; rather, a user could communicate with geographically
dispersed systems. This gave room for security problems, with data passing over various
types of vulnerable communication lines, and so encryption received attention: it became
necessary to encode data so that it is only useful to its intended receiver. The highly
symbolic, mnemonic, acronym-oriented user environment was replaced with menu-driven systems
that guided users through the available options in plain English. The concept of virtual
machines became widely used. Today's user is not concerned with the internal functioning of
the machine, but with accomplishing work with a computer. Database systems have gained wide
acceptance and importance; thousands of online databases have become available for access via
terminals over communication networks, and the concept of distributed data processing has
become firmly established.
Early Evolution
1945: ENIAC, Moore School of Engineering, University of Pennsylvania.
1949: EDSAC and EDVAC
1949: BINAC - a successor to the ENIAC
1951: UNIVAC by Remington Rand
1952: IBM 701
1956: The interrupt
1954-1957: FORTRAN was developed
As research and development work continues, we are seeing new operating systems being
developed and existing ones being improved and modified to enhance the overall user experience,
making operating systems faster and more efficient than ever before. Also, with the onset of
new devices like wearables, including smart watches, smart glasses, VR gear etc., the demand
for unconventional operating systems is also rising.
TYPES OF OPERATING SYSTEMS
a. Real Time Operating System (RTOS): A real time operating system is one that guarantees
a maximum time for each of the critical operations that it performs, like OS calls and
interrupt handling.
Real time processing is always on-line, whereas an on-line system need not be real time.
The time taken by the system to respond to an input and display the required updated
information is termed the response time; in real-time systems the response time is much
lower than in ordinary on-line processing.
Real-time systems are used when there are rigid (strict) time requirements on the
operation of a processor or the flow of data, and they can be used as a control
device in a dedicated application. A real-time operating system has well-defined,
fixed time constraints, otherwise the system will fail. Examples: scientific
experiments, medical imaging systems, industrial control systems, missile/weapon
systems, robots, home-appliance controllers, air traffic control systems, etc.
Advantages of RTOS
Maximum Consumption: Maximum utilization of devices and the system, and thus more
output from all the resources.
Task Shifting: The time assigned for shifting between tasks in these systems is very
short. For example, in older systems it takes about 10 microseconds to shift from one
task to another, while in the latest systems it takes about 3 microseconds.
Focus on Application: The focus is on running applications, with less importance given
to applications waiting in the queue.
Real time operating systems in embedded systems: Since the size of the programs is
small, an RTOS can also be used in embedded systems, such as in transport and others.
Error Free: These types of systems are designed to be largely error free.
Memory Allocation: Memory allocation is best managed in these types of systems.
Disadvantages of RTOS
Limited Tasks: Very few tasks run at the same time, and concentration is limited to a
few applications in order to avoid errors.
Heavy use of system resources: The system resources required are sometimes not so
good, and they are expensive as well.
Complex Algorithms: The algorithms are very complex and difficult for the designer
to write.
Device drivers and interrupt signals: An RTOS needs specific device drivers and
interrupt signals in order to respond to interrupts as early as possible.
Thread Priority: It is not good to set thread priority, as these systems are very
little prone to switching tasks.
b. Single User Operating System: A single user operating system provides facilities to
only one user at a time; that is, it allows access to the computer system by one user at
a time. This class of operating system is usually found in personal computers (PCs) and
laptops. They are of two types:
1. Single User, Single Task Operating Systems: This type of operating system
allows the user to perform only one task at a time. An example is the Palm OS in
handheld devices.
2. Single User, Multitask Operating Systems: This type of single user operating
system allows a user to perform multiple tasks at the same time. Examples are
Microsoft Windows and Apple Mac OS, which allow a single user to work on
multiple programs at the same time. For example, a user can work in a text
editor, browse the World Wide Web and play music at the same time. Most
modern personal computers and laptops run single user multi-tasking operating
systems.
c. Multi User Operating System: A multi user operating system allows multiple users to
access the computer at the same time. The operating system manages the memory and
resources among the various users according to their requirements; hence, the tasks of
one user will not affect the tasks of the other users. UNIX, Linux, VMS, and mainframe
operating systems such as MVS are examples of multi-user operating systems.
There are two types of multi user operating systems:
1. Time Sharing Operating System: A time sharing operating system allows multiple
users in different locations to use a particular computer system at the same time.
2. Distributed Operating System: In a distributed operating system, the data
processing task is divided among the processors accordingly. These systems
can be located in different geographical locations, but tasks are shared amongst
them for quick delivery.
DIFFERENCE BETWEEN SINGLE USER AND MULTI USER OPERATING SYSTEMS
Definition: A single user operating system is a type of operating system that provides
facilities to only one user at a time. A multi user operating system is a type of operating
system that provides resources and services to multiple users at a time.
Types: Single user single task OS and single user multi-task OS (single user); timesharing
OS and distributed OS (multi user).
Complexity: Simple (single user); complex (multi user).
Examples: Microsoft Windows, Apple Mac OS and Palm OS (single user); UNIX and Linux
(multi user).
NOTE
There are, however, single-user operating systems that support networking. For
example, Windows 2000 and Novell Netware can each support hundreds or thousands
of networked users, but the operating systems themselves are not true multi-user
operating systems. The system administrator is the only user of Windows 2000 or
Netware. The network support, and all the remote user logins the network enables,
are, in the overall plan of the operating system, a program being run by the
administrative user.
In Summary
The Batch Operating System does not interact with the computer directly. There is an
operator who takes similar jobs having the same requirements and groups them into
batches. It is the responsibility of the operator to sort the jobs with similar needs.
Examples of batch-based processing: payroll systems, bank statements etc.
A time-sharing operating system uses CPU scheduling and multiprogramming to provide
each user with a small portion of time. Computer systems that were designed primarily
as batch systems have been modified to time-sharing systems.
Multiprogrammed System
A Network Operating System runs on a server and provides the server with the capability
to manage data, users, groups, security, applications, and other networking functions.
The primary purpose of the network operating system is to allow shared file and printer
access among multiple computers in a network, typically a local area network (LAN) or
private network, or to other networks.
In a network operating system, the users are aware of the existence of multiple
computers, and can log in to remote machines and copy files from one machine to
another; for this reason these systems are known as loosely coupled systems. Each
machine runs its own local operating system and has its own user (or users). Network
operating systems are designed with more complex functional capabilities.
INTER-PROCESS COMMUNICATION
IPC refers to the mechanisms and techniques that operating systems use to facilitate
communication between different processes. In a multitasking environment, numerous
processes run concurrently, and IPC serves as the bridge that allows them to exchange
information and coordinate their actions. IPC and synchronization mechanisms help
coordinate processes so as to maintain data consistency in the operating system.
The need for IPC
1. Data Sharing: Processes often need to share data. For example, a text editor may need
to pass data to a printer process to generate a hard copy.
2. Synchronization: Processes may need to synchronize their activities. For instance, in a
multi-threaded environment, threads must coordinate to ensure data consistency.
3. Communication: Processes might need to communicate for a variety of purposes, such
as exchanging information, signaling, and error handling.
4. Resource Sharing: IPC helps manage and share resources, like file access, memory, or
hardware devices among processes.
IPC Methods
There are several methods of IPC used in modern operating systems. Each method has its
strengths and weaknesses, and the choice of method depends on the specific requirements of
the processes involved: (Pain, 2023)
1. Message Passing: In message passing, processes exchange data by sending and receiving
messages through a messaging system. This method is particularly useful for inter-
process communication in distributed systems.
2. Shared Memory: Processes can communicate by sharing a common memory region.
This method is efficient but requires synchronization mechanisms to avoid data
inconsistencies.
3. Sockets: Sockets are commonly used for IPC in networked systems. Processes can
communicate over a network or on the same machine by reading and writing data
through sockets.
4. Pipes and FIFOs: These provide a byte-stream channel between processes. An
anonymous pipe is unidirectional and is used between related processes, while a FIFO
(named pipe) has a name in the file system and can also be used between unrelated
processes.
5. Signals: Processes can send signals to each other to notify about events or requests.
Signals are lightweight and are often used for process management and error handling.
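As a concrete illustration of the pipe mechanism from the list above, the following sketch (POSIX-only, using Python's os module; the message text and buffer size are arbitrary choices for the example) creates an anonymous pipe and forks a child that writes a message into it:

```python
import os

# Create a unidirectional pipe: r is the read end, w is the write end.
r, w = os.pipe()

pid = os.fork()                      # split into parent and child processes
if pid == 0:
    # Child: close the unused read end, write a message, and exit.
    os.close(r)
    os.write(w, b"hello from child")
    os.close(w)
    os._exit(0)
else:
    # Parent: close the unused write end and read the child's message.
    os.close(w)
    data = os.read(r, 1024)
    os.close(r)
    os.waitpid(pid, 0)               # reap the child process
```

Because the pipe's file descriptors are inherited across fork(), this only works between related processes; a FIFO would be needed for unrelated ones.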
Applications of IPC
IPC is a fundamental component of modern operating systems and finds applications in various
scenarios:
1. Shell Pipelines: In Unix-like systems, the shell uses pipes to connect the output of one
command to the input of another.
2. Graphical User Interfaces (GUIs): GUI applications use IPC for event handling, such as
sending messages between windows or processes.
3. Server-Client Communication: IPC is essential in client-server applications. Clients and
servers communicate over sockets, pipes, or other IPC mechanisms.
4. Multi-threading: In multi-threaded programs, threads within a process must
communicate and synchronize through IPC mechanisms like semaphores and mutexes.
5. Distributed Systems: IPC is crucial in distributed computing, where processes may run
on different machines. Message passing is commonly used in such scenarios.
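The shell-pipeline application above can be sketched explicitly with Python's subprocess module: the stdout of one command becomes the stdin of the next, exactly as the shell does with `|` (the commands echo and tr are just illustrative):

```python
import subprocess

# First stage: produce some text on stdout.
p1 = subprocess.Popen(["echo", "operating systems"], stdout=subprocess.PIPE)

# Second stage: read the first stage's stdout as stdin and uppercase it.
p2 = subprocess.Popen(["tr", "a-z", "A-Z"], stdin=p1.stdout,
                      stdout=subprocess.PIPE)

p1.stdout.close()          # close our copy so p2 sees EOF when p1 finishes
out, _ = p2.communicate()  # out is b"OPERATING SYSTEMS\n"
```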
PROCESS SCHEDULING
Process scheduling is an important part of a multiprogramming operating system. Such operating
systems allow more than one process to be loaded into executable memory at a time, and the
loaded processes share the processor using time multiplexing.
Definition: Process scheduling is the activity of the process manager that handles the removal
of the running process from the processor and the selection of another process on the basis of
a particular strategy.
Objectives of Scheduling
The scheduling of processes affects the overall system performance and behaviour. The
objectives of scheduling are to:
1. Maximize the system throughput.
2. Be ‘fair’ to all users. This does not mean all users must be treated equally, but consistently,
relative to the importance of the work being done.
3. Provide tolerable response (for on-line users) or turn-around time (for batch users).
4. Degrade gracefully. If the system becomes overloaded, it should not 'collapse', but
should avoid further loading (e.g. by refusing new jobs or users) or temporarily reduce
the level of service (e.g. response time).
5. Be consistent and predictable. The response and turn-around time should be relatively stable
from day to day.
2. Class of job; i.e. batch or on-line or real-time. On-line users require a tolerable response time,
while real-time systems often demand instant service.
4. I/O or CPU bound; that is, whether the job uses predominantly I/O time or processor time.
A CPU-bound process requires more CPU time and spends more time in the running state. An
I/O-bound process requires more I/O time and less CPU time, and spends more time in the
waiting state. This criterion is often of consequence because of the need to balance the
use of the processor and the I/O system. If the processor is tied up in CPU-intensive work,
it is unlikely that the I/O devices are being serviced frequently enough to sustain maximum
throughput.
5. Resources used to date; i.e. the amount of processor time already consumed.
6. Waiting time to date; i.e. the amount of time spent waiting for service so far.
Some of these criteria are 'static' characteristics which can be assessed prior to
commencement of the process execution. One of these is the priority. This is a value which
can be assigned to each process and indicates the relative 'importance' of the process, such
that a high priority process will be selected for execution in preference to a lower priority
one. Scheduling on the basis of a single priority value enables rapid decisions to be made by
the scheduler.
An initial priority can be assigned to each process. In some schemes the priority is static
and is used as a basis for scheduling throughout the life of the process, while in other
schemes the priority is dynamic, being modified to reflect the changing importance of the
process. The priority can be supplied by a user, or could be derived from the characteristics
of the job, or both.
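The priority-based selection described above can be sketched with Python's heapq module (a larger number here means higher priority; the process names and values are made up for the example):

```python
import heapq

class PriorityScheduler:
    """Pick the ready process with the highest priority value."""
    def __init__(self):
        self._heap = []
        self._count = 0          # tie-breaker: FIFO among equal priorities

    def add(self, priority, pid):
        # heapq is a min-heap, so store the negated priority to get
        # "highest priority first" behaviour.
        heapq.heappush(self._heap, (-priority, self._count, pid))
        self._count += 1

    def pick(self):
        # Rapid decision: pop the highest-priority ready process.
        return heapq.heappop(self._heap)[2] if self._heap else None
```

For example, adding "logger" with priority 1, "interrupt_handler" with priority 5 and "editor" with priority 3 makes pick() return them in the order interrupt_handler, editor, logger.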
Scheduling Queues
Scheduling queues are queues of processes or devices. When processes enter the system, they
are put into a job queue, which consists of all processes in the system. The processes that
are residing in main memory and are ready and waiting to execute are kept on a list called
the ready queue. This queue is generally stored as a linked list: a ready-queue header
contains pointers to the first and final PCBs in the list, and each PCB includes a pointer
field that points to the next PCB in the ready queue.
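The ready queue just described can be sketched as a singly linked list of PCBs, with the queue header holding pointers to the first and last PCB (the fields shown are a drastic simplification of a real PCB):

```python
class PCB:
    """A toy Process Control Block: just a pid and a link to the next PCB."""
    def __init__(self, pid):
        self.pid = pid
        self.next = None             # pointer to the next PCB in the queue

class ReadyQueue:
    """Queue header with pointers to the first and final PCB in the list."""
    def __init__(self):
        self.head = None
        self.tail = None

    def enqueue(self, pcb):
        # A process that becomes ready is appended at the tail.
        if self.tail is None:
            self.head = self.tail = pcb
        else:
            self.tail.next = pcb
            self.tail = pcb

    def dequeue(self):
        # The scheduler removes the PCB at the head of the queue.
        pcb = self.head
        if pcb is not None:
            self.head = pcb.next
            if self.head is None:
                self.tail = None
            pcb.next = None
        return pcb
```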
The operating system also has other queues. When a process is allocated the CPU, it executes
for a while and eventually quits, is interrupted, or waits for the occurrence of a particular
event, such as the completion of an I/O request. In the case of an I/O request, the request
may be to a dedicated tape drive, or to a shared device such as a disk. Since the system has
many processes, the disk may be busy with the I/O request of some other process, so the
process may have to wait for the disk. The list of processes waiting for a particular I/O
device is called a device queue; each device has its own device queue.
A common way of representing process scheduling is the queuing diagram. The figure below
shows the queuing diagram of process scheduling.
Queues are represented by rectangular boxes.
The circles represent the resources that serve the queues.
The arrows show the flow of processes through the system.
A Queuing Diagram
There are two types of queues: the ready queue and the set of device queues. A newly arrived
process is put in the ready queue. It waits in the ready queue until it is selected for
execution (or dispatched). Once the process is assigned to the CPU and is executing, one of
several events could occur:
1. The process could issue an I/O request and then be placed in an I/O queue.
2. The process could create a new subprocess and wait for its termination.
3. The process could be removed forcibly from the CPU as a result of an interrupt, and be
put back in the ready queue.
In the first two cases, the process eventually switches from the waiting state to the ready
state, and is then put back in the ready queue. A process continues this cycle until it
terminates, at which time it is removed from all queues and has its PCB and resources
deallocated.
b) Not Running: Processes that are not running are kept in a queue, waiting their turn to
execute. Each entry in the queue is a pointer to a particular process, and the queue is
implemented using a linked list. The use of the dispatcher is as follows: when a process is
interrupted, that process is transferred to the waiting queue. If the process has completed
or aborted, the process is discarded. In either case, the dispatcher then selects a process
from the queue to execute.
Schedulers
A process moves between the various scheduling queues throughout its lifetime. The operating system
must select, for scheduling purposes, processes for these queues in a particular manner. The selection
process is carried out by the appropriate scheduler.
In a multiprogramming environment, several processes will be competing for the processor. At
any instant, only one process will be running while the others will be ready and waiting for
the processor, or in some other wait condition. The operating system has to determine the
optimum sequence and timing of assigning processes to the processor. This is referred to as
SCHEDULING.
In batch systems, often more processes are submitted than can be executed immediately. These
processes are spooled to a mass-storage device, where they are kept for later execution. The
long-term scheduler, or job scheduler, selects processes from this pool and loads them into
memory for execution.
The short-term scheduler, or CPU scheduler, selects from among the processes that are ready to
execute, and allocates the processor to one of them.
A running process may become suspended by making an I/O request. Suspended processes cannot
make any progress towards completion. In this condition, there is the need to remove the
process from memory and make space for other processes. The act of moving a suspended process
to secondary storage is called swapping, and the process is said to be swapped out or rolled
out. Swapping may be necessary to improve the process mix.
3. Short Term Scheduler (Low-level scheduling or processor scheduling)
It is also called the CPU scheduler. It decides which ready process is to be assigned to the processor.
This level is often called the dispatcher, although that term more accurately refers to the actual activity
of transferring control to the selected process.
The main objective of this scheduler is to increase system performance in accordance with the chosen
set of criteria. It effects the change of a process from the ready state to the running state: the CPU
scheduler selects from among the processes that are ready to execute and allocates the CPU to one of
them. The short-term scheduler runs far more frequently than the long-term scheduler and must
therefore be fast.
Context switching
A context switch is the mechanism that stores and restores the state, or context, of a CPU in a Process
Control Block so that execution of a process can be resumed from the same point at a later time. That is,
saving the context of one process and loading the context of another process is known as context
switching. In simple terms, it is like moving one process from the running state to the ready state and
bringing another in to run. Context switching happens when:
1. A high-priority process comes to the ready state (i.e. one with a higher priority than the running
process)
2. An interrupt occurs
3. A switch between user and kernel mode takes place (though this does not always force one)
4. Preemptive CPU scheduling is used
Using this technique, a context switcher enables multiple processes to share a single CPU. Context
switching is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to executing another, the context
switcher saves the content of all processor registers for the process being removed from the CPU in its
process descriptor. The context of a process is represented in its process control block. Context-switch
time is pure overhead, and context switching can significantly affect performance, since modern
computers have many general and status registers to be saved.
Context-switch times are highly dependent on hardware support. A context switch requires (n + m) × b × K
time units to save the state of a processor with n general registers and m status registers, assuming b
store operations are required to save each register and each store instruction requires K time units.
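As a quick numeric illustration of the (n + m) × b × K estimate, using made-up values (no real processor is being described):

```python
# Illustrative values only, not taken from any real processor.
n = 16   # general registers
m = 4    # status registers
b = 1    # store operations needed to save one register
K = 2    # time units per store instruction

save_time = (n + m) * b * K   # time to save one processor state: (16 + 4) * 1 * 2
# A full switch must also load the new context; assuming loads cost the
# same as stores, the total is roughly twice the save time.
switch_time = 2 * save_time

print(save_time, switch_time)   # 40 and 80 time units
```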
Some hardware systems employ two or more sets of processor registers to reduce the context-switching
time. When a process is switched, the information stored includes:
1. Program counter
2. Scheduling information
3. Base and limit register values
4. Currently used registers
5. Changed process state
6. I/O state
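A toy sketch of the save/restore step may help: each PCB is modelled as a dictionary holding a few of the fields listed above. The field names and values are illustrative, not a real PCB layout.

```python
# Toy context switch: the "CPU" and each PCB are plain dictionaries.
def save_context(cpu, pcb):
    pcb.update(cpu)     # copy the CPU state into the outgoing process's PCB

def load_context(cpu, pcb):
    cpu.update(pcb)     # restore the CPU state from the incoming process's PCB

cpu = {"program_counter": 100, "registers": [1, 2], "state": "running"}
pcb_a = {}                                                        # process A's PCB
pcb_b = {"program_counter": 500, "registers": [7, 8], "state": "ready"}  # process B

save_context(cpu, pcb_a)   # process A is switched out; its context is saved
load_context(cpu, pcb_b)   # process B is switched in; its context is loaded

print(cpu["program_counter"])   # the CPU now resumes B at its saved point
```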
Mode Switch
A mode switch occurs when the CPU privilege level is changed, for example when a system call is made
or a fault occurs. The kernel works in a more privileged mode than a standard user task. If a user process
wants to access things which are only accessible to the kernel, a mode switch must occur. The currently
executing process need not be changed during a mode switch. A mode switch is typically required for a
process context switch to occur, and only the kernel can cause a context switch.
Co-operating processes
A co-operating process is a process that can affect, or be affected by, other processes while executing.
If a process shares data with other processes, it is a co-operating process.
The Advantages of the co-operating processes are:
1. Sharing of information: Co-operating processes share information such as files, memory and so on.
The system must provide an environment that allows concurrent access to these types of resources.
2. Increased computation speed: Computation speed will increase if the computer has multiple
processing elements that are connected together.
3. Modularity: System is constructed in a modular fashion. System function is divided into number of
modules.
4. Convenience
Co-operating processes are not always reproducible. For example, suppose one process writes “ABC”
and another writes “CBA”: different interleaved outputs can be derived, and we cannot tell which
process produced which character — for instance, which process output the first “C” in “ABCCBA”. This
subtle state sharing occurs through the terminal. Not just anything can happen, though: for example,
“AABBCC” cannot occur, because each process's own characters must still appear in order.
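The claim above can be checked by enumerating every interleaving of the two outputs that preserves each process's own order. This sketch does exactly that, confirming that “ABCCBA” is a possible combined output while “AABBCC” is not:

```python
from itertools import combinations

def interleavings(s, t):
    """All merges of s and t that preserve the internal order of each string."""
    n, results = len(s) + len(t), set()
    for positions in combinations(range(n), len(s)):
        pos = set(positions)          # output slots taken by characters of s
        merged, si, ti = [], iter(s), iter(t)
        for i in range(n):
            merged.append(next(si) if i in pos else next(ti))
        results.add("".join(merged))
    return results

outputs = interleavings("ABC", "CBA")
print("ABCCBA" in outputs)   # True: one process ran to completion, then the other
print("AABBCC" in outputs)   # False: "CBA" cannot emit its A before its C and B
```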
INTERRUPTS
Processors are faster than the hardware devices that they control and communicate with. It is not ideal
for the kernel (the core of an operating system) to issue a request and then wait for a response from
slower hardware. Instead, the kernel must be free to go and handle other functions of the operating
system, dealing with the hardware only after it has actually completed its work.
One way the processor works with hardware without affecting the overall performance of the computer
is to provide a mechanism for the hardware to signal the operating system when its attention is needed.
This mechanism is called an interrupt.
Interrupts occur at random times during the execution of a program, in response to signals from
hardware. System hardware uses interrupts to handle events external to the processor, such as requests
to service peripheral devices. Software can also generate interrupts by executing the INT instruction
(program interrupt).
Definition: An interrupt is an event that changes the sequence or order in which the processor executes
instructions. It is usually generated by electrical signals from the computer hardware. It can also be a
response to an asynchronous or exceptional event that automatically saves the current processor status
to allow a later restart and causes an automatic transfer to a specific routine (the interrupt handler).
Processing an Interrupt
An interrupt is produced by electronic signals from hardware devices and directed into input pins on an
interrupt controller (a simple chip that multiplexes multiple interrupt lines into a single line to the
processor):
1. Upon receiving an interrupt, the interrupt controller sends a signal to the processor.
2. The processor detects this signal and interrupts its current execution to handle the interrupt.
3. The processor can then notify the operating system that an interrupt has occurred, and the
operating system can handle the interrupt appropriately.
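A user-space analogy of this signal-then-handle pattern is a POSIX signal: the signal plays the role of the interrupt and the registered function the role of the interrupt handler. This is only an analogy (signals are delivered by the kernel, not by an interrupt controller), and it assumes a Unix-like system with Python 3.8+:

```python
import signal

events = []   # record of "interrupts" that were serviced

def handler(signum, frame):
    # The "interrupt handler": runs when the signal arrives,
    # then control returns to the interrupted code.
    events.append(signum)

signal.signal(signal.SIGUSR1, handler)   # install the handler
signal.raise_signal(signal.SIGUSR1)      # deliver the signal to this process

print(events == [signal.SIGUSR1])        # the handler ran exactly once
```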
(b) Supervisor Call Interrupt (SVC): A supervisor call (SVC) interrupt is generated by a running process
that executes an SVC instruction. An SVC instruction is a user-generated request for a particular system
service, such as performing an I/O operation, obtaining more storage, or communicating with other
users or the system operator. Typical examples of unacceptable SVC requests are:
(i) An attempt by a program to access a portion of a job that is not currently in memory.
(ii) An attempt to access a segment that is not presently in memory.
(c) Input/Output (I/O) Interrupt: An I/O interrupt is generated by the controller of an I/O device to
signal the normal completion or start-up of an operation, or the occurrence of an error or failure
condition on the device. Typical events are:
(i) An invalid I/O command.
(ii) An I/O channel ends its job — an I/O channel-end interrupt occurs when the channel finishes its job
before the device does, which is normal. For example, an output channel transfers data from memory
to the printer's buffer; the channel will finish transferring the last batch of data before the printer
finishes printing the job.
(d) External Interrupt: An external interrupt is generated by any machine component. It does not
necessarily signal a fault. Typical events are:
(i) A user aborts or terminates a program.
(ii) A signal to the OS that a user's time slice has expired, generated by the internal clock within the
processor.
(iii) The receipt of a signal from another processor on a multiprocessor system.
(e) Machine Check Interrupt: A machine check interrupt is generated by malfunctioning hardware.
Typical events are:
(i) The screen showing ‘fixed disk controller bad’.
(ii) The monitor showing ‘keyboard bad’.
PROGRAM STATUS WORD (PSW)
The Program Status Word (PSW) is a special area maintained by the operating system that gives a
complete picture of the state of the system at any time. It is dynamic in nature and changes very
frequently to reflect the current state of the system.
The difference between interrupt handlers and other operating system functions is that the operating
system invokes interrupt handlers in response to interrupts, and they run in a special context called
interrupt context. This special context is occasionally called atomic context because code executing in
it is unable to block.
Because an interrupt can occur at any time, an interrupt handler can be executed at any time. It is
important that the handler runs quickly so that execution of the interrupted process can resume as
soon as possible. It is to the advantage of both the hardware and the rest of the system that the
operating system services the interrupt without delay and that the interrupt handler executes in as
short a period as possible.
There are 5 types of interrupt handlers, namely: supervisor call IH, external IH, input/output IH,
program check IH and machine check IH. Interrupt handlers are stored in a special portion of memory (RAM).
In summary, the sequence of events that occurs when an interrupt arrives (interrupt processing) is:
(i) The processor stops executing the current job or program.
(ii) The operating system saves the current state of the processor (i.e. of the interrupted process) in
the OLD PSW.
(iii) Control is transferred to the interrupt handler.
(iv) The interrupt handler becomes the current process. The appropriate interrupt handler is selected;
its address is stored in the NEW PSW.
(v) The NEW PSW is loaded into the CURRENT PSW (the CURRENT PSW now contains the address of
the appropriate interrupt handler).
(vi) The interrupt handler analyses and processes the interrupt (the problem is solved).
(vii) The interrupt handler signals the system at the completion of its task.
(viii) The OLD PSW is reloaded into the CURRENT PSW and the next instruction of the interrupted
program is executed.
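The OLD/NEW/CURRENT PSW swap can be simulated in a few lines. The PSW is reduced here to a single "address" field, and the names are illustrative only:

```python
# Toy simulation of PSW swapping during interrupt processing.
NEW_PSW = {"address": "io_interrupt_handler"}   # fixed, one per interrupt type
CURRENT_PSW = {"address": "user_program"}       # the running process

def handle_interrupt():
    global CURRENT_PSW
    OLD_PSW = dict(CURRENT_PSW)     # save the state of the interrupted process
    CURRENT_PSW = dict(NEW_PSW)     # the handler becomes the current process
    # ... the handler analyses and services the interrupt here ...
    CURRENT_PSW = OLD_PSW           # reload OLD PSW: resume the interrupted program
    return OLD_PSW

resumed = handle_interrupt()
print(CURRENT_PSW["address"])       # back to the interrupted user program
```

There is only one CURRENT_PSW variable throughout, matching the note that its content changes while the PSW itself is unique.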
This implies that there is only one CURRENT PSW, though its content changes periodically. The action of
holding the current state of a process which has been temporarily stopped, and starting another
process, is called context switching (or a context change).
Exceptions
Exceptions occur synchronously with respect to the processor clock; they are often called synchronous
interrupts. Exceptions occur when the processor detects an error condition while executing an
instruction, such as division by zero, or abnormal conditions like protection violations, page faults and
internal machine faults that must be handled by the operating system.
Simply put, interrupts are asynchronous events generated by hardware, while exceptions are
synchronous events generated by the processor.
System calls are a type of exception implemented by issuing a software interrupt, which traps into the
operating system and causes execution of a special system-call handler.
Note: Interrupts and exceptions are referred to as vectored events, and they have special IDs called
vectors.
Classifications of Exception
Exceptions are classified as faults, traps, or aborts depending on the way they are reported and whether
the instruction that caused the exception can be restarted without loss of program or task continuity.
Trap: A trap is an exception raised in a user process, caused for example by division by zero or an invalid
memory access. It is also the usual way to invoke a system call, because system calls run at a higher
privilege level than user code. A trap is reported immediately following the execution of the trapping
instruction, and allows execution of the program or task to continue without loss of continuity. The
return address for the trap handler points to the instruction to be executed after the trapping instruction.
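From the programmer's point of view, a division-by-zero trap often surfaces as a catchable error, and the program continues after the handler with no loss of continuity. In Python the interpreter raises the exception itself rather than relying on a hardware trap, so this is an analogy rather than a literal demonstration:

```python
def divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        # The "trap handler": recover and let execution continue
        # at the instruction after the failing operation.
        return None

result = divide(1, 0)
print(result)          # None: the trap was handled, the program kept running
print(divide(6, 3))    # normal execution is unaffected
```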
Fault: A fault is an exception that can generally be corrected and that, once corrected, allows the
program to be restarted with no loss of continuity. When a fault is reported, the processor restores the
machine state to the state prior to the beginning of execution of the faulting instruction.
Example: A page fault is often recoverable. A piece of an application's address space may have been
swapped out to disk from Memory. The application will trigger a page fault when it tries to access
memory that was swapped out. The kernel can pull that memory from disk to memory, and hand control
back to the application. The application will continue where it left off (at the faulting instruction that was
accessing swapped out memory), but this time the memory access should succeed without faulting.
Abort: An abort is an exception that does not always report the precise location of the instruction
causing the exception and does not allow a restart of the program or task that caused the exception.
Aborts are used to report severe errors, such as hardware errors and inconsistent or illegal values in
system tables.
Edge Cases: Software invoked interrupts (triggered by the INT instruction) behave in a trap-like manner.
The instruction completes before the processor saves its state and jumps to the interrupt handler.
THE KERNEL
A kernel connects the application software to the hardware of a computer. It is the central
component of most computer operating systems (OS). It is responsible for managing the
system's resources and the communication between hardware and software components. It is
the basic component of the operating system that provides the lowest-level abstraction layer for
the resources (especially memory, processors and I/O devices) that applications must control to
perform their function. It makes the Computer resources available for application processes
through inter-process communication mechanisms and system calls.
The kernel connecting the application software to the hardware component of a computer.
Most operating systems depend on the concept of the kernel. The existence of a kernel is a
natural consequence of designing a computer system as a series of abstraction layers, each
relying on the functions of layers beneath itself. The kernel, from this viewpoint, is simply the
name given to the lowest level of abstraction that is implemented in software.
While it is today mostly called the kernel, the same part of the operating system has also in the
past been known as the nucleus or core. (You should note, however, that the term core has also
been used to refer to the primary memory of a computer system, typically because some early
computers used a form of memory called Core memory.)
In most cases, the boot loader starts executing the kernel in supervisor mode, the kernel then
initializes itself and starts the first process. After this, the kernel does not typically execute
directly, only in response to external events (e.g. via system calls used by applications to
request services from the kernel, or via interrupts used by the hardware to notify the kernel of
events). Additionally, the kernel typically provides a loop that is executed whenever no
processes are available to run; this is often called the idle process.
The kernel usually provides features for low-level scheduling of processes (dispatching), Inter-
process communication, process synchronization, context switch, manipulation of process
control blocks, interrupt handling, process creation and destruction, process suspension and
resumption (see process states in the next module).
A typical vision of a computer architecture as a series of abstraction layers: hardware, firmware, assembler, kernel,
operating system and applications.
Kernel: The kernel is the core component of an operating system. It is also a system program. It
is the part of the operating system which converts user commands into machine language.
The differences between the operating system and the kernel are summarised below:
1. The operating system is system software; the kernel is the system software which is part of
the operating system.
2. The operating system provides an interface between the user and the hardware; the kernel
provides an interface between applications and the hardware.
3. The operating system also provides protection and security; the kernel's main purpose is
memory management, disk management, process management and task management.
4. All systems need an operating system to run; all operating systems need the kernel to run.
5. Types of operating system include single-user and multi-user OS, multiprocessor OS,
real-time OS and distributed OS; types of kernel include the monolithic kernel, the
microkernel and the hybrid kernel (as well as the nano and pico kernels).
6. The operating system is the first program that loads when the computer boots up; the kernel
is the first program to load when the operating system loads.
The computer's memory. Memory is used to store both program instructions and data.
Typically, both need to be present in memory in order for a program to execute. Often
multiple programs will want access to memory, frequently demanding more memory
than the computer has available. The kernel is responsible for deciding which memory
each process can use, and determining what to do when not enough is available.
Any Input/output (I/O) devices present in the computer, such as disk drives, printers,
displays, etc. The kernel allocates requests from applications to perform I/O to an
appropriate device (or subsection of a device, in the case of files on a disk or windows
on a display) and provides convenient methods for using the device (typically abstracted
to the point where the application does not need to know implementation details of the
device)
Kernels also usually provide methods for synchronization and communication between
processes (called inter-process communication or IPC).
A kernel may implement these features itself, or rely on some of the processes it runs to provide
the facilities to other processes, although in this case it must provide some means of IPC to
allow processes to access the facilities provided by each other.
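A pipe is one of the classic kernel-provided IPC mechanisms mentioned above. The sketch below, which assumes a Unix-like system, creates a pipe and forks a child process that sends a message back to the parent through it:

```python
import os

# Minimal IPC via a kernel-provided pipe (POSIX; Unix-like systems only).
read_fd, write_fd = os.pipe()
pid = os.fork()

if pid == 0:
    # Child: the sending process.
    os.close(read_fd)                       # child only writes
    os.write(write_fd, b"hello from child")
    os.close(write_fd)
    os._exit(0)
else:
    # Parent: the receiving process.
    os.close(write_fd)                      # parent only reads
    message = os.read(read_fd, 1024)
    os.close(read_fd)
    os.waitpid(pid, 0)                      # reap the child
    print(message)
```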
Finally, a kernel must provide running programs with a method to make requests to access
these facilities.
1. Process Management
The main task of a kernel is to allow the execution of applications and support them with
features such as hardware abstractions. To run an application, a kernel typically sets up an
address space for the application, loads the file containing the application's code into memory,
and sets up a stack for the program and branches to a given location inside the program, thus
starting its execution.
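From user space, asking the kernel to perform these steps (create a process, load a program image, run it) looks like the sketch below, which uses Python's subprocess wrapper around the underlying fork/exec-style system calls. The child program shown is invented for the example:

```python
import subprocess
import sys

# Ask the kernel to create a new process running a small program;
# subprocess handles the address-space setup, loading and execution.
result = subprocess.run(
    [sys.executable, "-c", "print('child process ran')"],
    capture_output=True,
    text=True,
)

print(result.stdout.strip())   # output produced by the child process
```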
2. Memory Management
The kernel has full access to the system's memory and must allow processes to access this
memory safely as they require it. Often the first step in doing this is virtual addressing, usually
achieved by paging and/or segmentation. Virtual addressing allows the kernel to make a given
physical address appear to be another address, the virtual address. Virtual address spaces may
be different for different processes; the memory that one process accesses at a particular
(virtual) address may be different memory from what another process accesses at the same
address. This allows every program to behave as if it is the only one (apart from the kernel)
running and thus prevents applications from crashing each other.
3. Device Management
To perform useful functions, processes need access to the peripherals connected to the
computer, which are controlled by the kernel through device drivers. For example, to show the
user something on the screen, an application would make a request to the kernel, which would
forward the request to its display driver, which is then responsible for actually plotting the
character/pixel.
A kernel must maintain a list of available devices. This list may be known in advance (e.g. on an
embedded system where the kernel will be rewritten if the available hardware changes),
configured by the user (typical on older PCs and on systems that are not designed for personal
use) or detected by the operating system at run time (normally called Plug and Play).
In a plug and play system, a device manager first performs a scan on different hardware buses,
such as Peripheral Component Interconnect (PCI) or Universal Serial Bus (USB), to detect
installed devices, then searches for the appropriate drivers.
4. System Calls
To actually perform useful work, a process must be able to access the services provided by the
kernel. This is implemented differently by each kernel, but most provide a C library or an API,
which in turn invoke the related kernel functions.
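The layering described here can be seen in Python's os module, whose low-level I/O functions are thin library wrappers over the kernel's system calls (pipe(2), write(2), read(2) on Unix-like systems):

```python
import os

# os.pipe, os.write and os.read are thin wrappers that invoke the
# kernel's pipe(), write() and read() system calls respectively.
read_fd, write_fd = os.pipe()
written = os.write(write_fd, b"syscall demo")   # write() system call
data = os.read(read_fd, written)                # read() system call
os.close(read_fd)
os.close(write_fd)

print(data)
```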
5. Security
An important kernel design decision is the choice of the abstraction levels where the security
mechanisms and policies should be implemented. One approach is to use firmware and kernel
support for fault tolerance, and build the security policy for malicious behaviour on top of that
(adding features such as cryptography mechanisms where necessary), delegating some
responsibility to the compiler. Approaches that delegate enforcement of security policy to the
compiler and/or the application level are often called language-based security.
1. The Monolithic Kernel: One of the major merits of the monolithic kernel is that it provides
CPU scheduling, memory management, file management and other operating system functions
through system calls. Another is that it is a single large process running entirely in a single
address space; it is a single static binary file.
Examples of monolithic-kernel-based OSs are: UNIX, Linux, OpenVMS, XTS-400 and z/TPF.
The major drawback of the monolithic kernel is that if any service fails, the entire system fails,
and if a user has to add any new service, the user needs to modify the entire operating system.
2. The Microkernel: A microkernel, like the monolithic kernel, manages all system resources, but in a
microkernel the user services and kernel services are implemented in different address spaces.
The user services are kept in user address space and the kernel services in kernel address space,
thereby reducing the size of the kernel, and of the operating system as well.
Since the kernel is the core part of the operating system, it is meant to handle only the most
important services. Thus in this architecture only the most important services run inside the
kernel, and the rest of the OS services reside in system application programs. Users interact
with those less important services through the system applications, while the microkernel is
solely responsible for the most important services of the operating system, which are:
inter-process communication, memory management and CPU scheduling.
The advantages of the microkernel include better functionality due to its small and isolated
architecture. Expansion of the system is also easier, as new services are simply added among
the system applications without disturbing the kernel. The Eclipse IDE is a good example of a
microkernel architecture.
3. The Hybrid Kernel: Hybrid kernels are essentially a compromise between the monolithic kernel
approach and the microkernel system. This implies running some services (such as the network
stack or the file system) in kernel space to reduce the performance overhead of a traditional
microkernel, while still running some kernel code (such as device drivers) as servers in user
space.
The idea behind this quasi-category is to have a kernel structure similar to a
microkernel, but implemented as a monolithic kernel. In contrast to a microkernel, all
(or nearly all) services are in kernel space. As in a monolithic kernel, there is no
performance overhead associated with microkernel message passing and context
switching between kernel and user mode. Also, as with monolithic kernels, there are
none of the benefits of having services in user space.