
A COURSE MATERIAL ON

OPERATING SYSTEM

PREPARED BY
Mrs. V. SAVITHAKUMARI, M.Sc., B.Ed., M.Phil.

ASSISTANT PROFESSOR

PG & RESEARCH DEPARTMENT OF COMPUTER SCIENCE AND APPLICATIONS

SRI VIDYA MANDIR ARTS AND SCIENCE COLLEGE (AUTONOMOUS)

KATTERI, UTHANGARAI, KRISHNAGIRI DT.


OPERATING SYSTEM SYLLABUS
UNIT – I
Introduction: operating system, history (1990s to 2000 and beyond), distributed computing, parallel
computation. Process concepts: definition of process, process states – Life cycle of a process, process
management – process state transitions, process control block (PCB), process operations, suspend and
resume, context switching, Interrupts – Interrupt processing, interrupt classes, Inter process communication
- signals, message passing.
UNIT – II
Asynchronous concurrent processes: mutual exclusion - critical section, mutual exclusion primitives,
implementing mutual exclusion primitives, Peterson's algorithm, software solutions to the mutual exclusion
problem - n-thread mutual exclusion - Lamport's Bakery Algorithm. Semaphores - mutual exclusion with
semaphores, thread synchronization with semaphores, counting semaphores, implementing semaphores.
Concurrent programming: monitors, message passing.
UNIT – III
Deadlock and indefinite postponement: resource concepts, four necessary conditions for deadlock,
deadlock prevention, deadlock avoidance and Dijkstra's Banker's algorithm, deadlock detection, deadlock
recovery.
UNIT – IV
Job and processor scheduling: scheduling levels, scheduling objectives, scheduling criteria, preemptive vs
non-preemptive scheduling, interval timer or interrupting clock, priorities, scheduling algorithms - FIFO
scheduling, RR scheduling, quantum size, SJF scheduling, SRT scheduling, HRN scheduling, multilevel
feedback queues, Fairshare scheduling.
UNIT – V
Real Memory organization and Management: Memory organization, Memory management, Memory
hierarchy, Memory management strategies, contiguous vs non-contiguous memory allocation, single user
contiguous memory allocation, fixed partition multiprogramming, variable partition multiprogramming,
Memory swapping.
Virtual Memory organization: virtual memory basic concepts, multilevel storage organization, block
mapping, paging basic concepts, segmentation, paging /segmentation systems.
Virtual Memory Management: Demand Paging, Page replacement strategies.
Text Book
1. H. M. Deitel, Operating Systems, Third Edition, Pearson Education Asia, 2011.
Reference Books
1. William Stallings, Operating System: Internals and Design Principles, Seventh Edition, Prentice-Hall
of India, 2012.
2. A. Silberschatz and P. B. Galvin, Operating System Concepts, Ninth Edition, John Wiley &
Sons (Asia) Pte. Ltd., 2012.
UNIT - I
INTRODUCTION:
 An operating system acts as an intermediary between the user of a computer and the computer
hardware.
 OS is software that manages the computer hardware.
 Purpose of OS: Provide an environment in which a user can execute programs in a convenient and
efficient manner.
 Mainframe OS: Optimize utilization of hardware.
 Personal computer OS: Support complex games, business applications, etc.
 Mobile computer OS: Let the user easily interface with the computer to execute programs.
 The operating system is the one program running at all times on the computer – usually called the
kernel.
Along with the kernel, there are two types of programs.
 System programs: Associated with the OS but are not necessarily part of the kernel.
 Application programs: All programs not associated with the operation of the system.
 Mobile OS: core kernel & middleware.
 Middleware: Set of software frameworks that provide additional services to application
developers.
 Features of core kernel with middleware: supports database, multimedia, and graphics etc.
WHAT IS AN OS?
 The software that controls the hardware.
 A layer of software between applications and hardware.
 An operating system is software that enables applications to interact with a computer's hardware.
 The operating system is a "black box" between the applications and the hardware they run on that
ensures the proper result, given appropriate inputs.
 Operating systems are primarily resource managers: they manage hardware, including processors,
memory, input/output devices, and communication devices.
 They also manage applications and other software abstractions.
HISTORY OF OPERATING SYSTEM
The First Generation (1945-55) : Vacuum Tubes and Plugboards
 The first electronic computers were developed without any operating system.
 There were no programming languages.
 In these early days, a single group of people designed, built, programmed, operated, and
maintained each machine.
 All programming was done in absolute machine language.
 Programs were often entered by wiring up plugboards to control the machine's basic functions.
 By the early 1950s, punched cards started to be used.
 It was now possible to write programs on cards and read them in, instead of using
plugboards.
The Second Generation (1955-65) : Transistors and Batch Systems
 By the early 1950s, the routine had improved somewhat with the introduction of punched cards.
 The first operating system, known as GMOS, was implemented by the General Motors Research
Laboratories in the early 1950s for their IBM 701 computer.
 Second-generation operating systems were based on single-stream batch processing: similar jobs
were collected in groups or batches and submitted to the operating system on punched cards to be
completed one after another.
 Transistors started to be used in the mid-1950s.
 There was a clear separation between designers, builders, operators, programmers, and
maintenance personnel.
 Assembly language and Fortran started to be used for programming on punched cards.
Batch system
Batch processing is execution of a series of programs ("jobs") on a computer without manual
intervention.

Figure 1.4 An early batch system. (a) Programmers bring cards to 1401. (b) 1401 reads batch of jobs
onto tape. (c) Operator carries input tape to 7094. (d) 7094 does computing. (e) Operator carries
output tape to 1401. (f) 1401 prints output.
After a job finished, its results were written to an output tape and then printed on a separate machine (such as the IBM 1401).
The Third Generation (1965-1980) : ICs and Multiprogramming
 The 7094 was a word-oriented, large-scale scientific computer used for numerical
calculations in science and engineering.
 On the other hand, the 1401 was a character-oriented commercial computer widely
used for business work.
 Both of these machines were very large, and customers wanted smaller ones.
 IBM produced the System/360 to solve these problems.
 Because all the machines had the same architecture and instruction set, programs written for one
machine could run on all the others.
 The 360 was designed to handle both scientific and commercial computing.
 The 360 was the first major computer line to use (small-scale) integrated circuits.
 OS/360 was the operating system used in third-generation computers.
 Multiprogramming was first used in OS/360.

Figure 1.5 A multiprogramming system with three jobs in memory


 Another major feature present in third-generation operating systems was spooling
(Simultaneous Peripheral Operation OnLine).
 Spooling refers to putting jobs in a buffer, a special area in memory or on a disk, where a device
can access them when it is ready.
 Timesharing is a variant of multiprogramming in which each user has an online terminal.
The Fourth Generation (1980-Present): Personal Computers
 The age of the personal computer started with the development of LSI (Large Scale Integration)
circuits, chips containing thousands of transistors on silicon.
 In the early 1980s, IBM designed the IBM PC.
 CP/M(Control Program for Microcomputers), MS-DOS, and other operating systems for early
microcomputers were all based on users typing in commands from the keyboard.
 Microsoft produced a GUI-based system called Windows, which originally ran on top of MS-DOS.
 Starting in 1995, a freestanding version of Windows, Windows 95, was released.
 In 1998, a slightly modified version of this system, called Windows 98, was released.
 Another Microsoft operating system is Windows NT (NT stands for New Technology), which is a
full 32-bit system.
 Windows NT was renamed Windows 2000 in early 1999.
 The other major OS in the personal computer world is UNIX, which is strongest on workstations
and other high-end computers, such as network servers.
 On Pentium-based computers, Linux became a popular alternative to Windows for students
and, increasingly, many corporate users.
The Fifth Generation (1990- Present): Mobile Computers
 In 1997, Ericsson coined the term smartphone for its GS88 "Penelope". At the time of writing,
Google's Android is the dominant mobile operating system, alongside Apple's iOS.
 Symbian was the operating system of choice for popular brands like Samsung, Sony Ericsson,
Motorola, and especially Nokia. Other operating systems followed: RIM's BlackBerry OS,
introduced for smartphones in 2002, and Apple's iOS, released for the first iPhone in 2007.
1990s: The Era of Graphical Interfaces and Competition
Key Developments:
1. Microsoft Windows Evolution:
 Windows 3.0 (1990): Introduced a graphical user interface (GUI) that became popular.
 Windows 95 (1995): Major success; introduced Start menu, taskbar, plug and play, and
better multitasking.
 Windows 98 (1998): Improved hardware support, USB, and Internet Explorer
integration.
2. Mac OS:
 Continued refining its GUI with System 7 and Mac OS 8/9.
 These were still based on older technology and lacked features like protected memory.
3. UNIX and Linux:
Linux was created in 1991 by Linus Torvalds. It was open source and quickly adopted by
developers and academia.
 Various UNIX systems (Solaris, HP-UX, AIX) dominated enterprise servers.
4. OS/2 and Others:
 IBM's OS/2 tried to compete with Windows but eventually faded out.
 BeOS, AmigaOS, and others had niche use or cult followings.
2000s and Beyond: Dominance, Open Source, and Mobile Expansion
Key Developments:
1. Windows XP and Later:
 Windows 2000/XP (2001): Combined home and business features. XP became
extremely popular.
 Later versions included Vista, Windows 7, 8, 10, and 11, with major UI and security
improvements.
2. Mac OS X (Now macOS):
 Released in 2001, based on a UNIX-like foundation (Darwin).
 Introduced the Aqua interface and eventually evolved into today‘s macOS.

3. Linux Growth:
 Linux matured with distributions like Ubuntu, Red Hat, and Debian.
 It became the standard OS for servers, supercomputers, and embedded systems.
4. Mobile OS Emergence:
 iOS (2007) and Android (2008) transformed mobile computing.
 Based on UNIX and Linux respectively, they brought touch interfaces and app
ecosystems.
5. Cloud & Virtualization:
 Operating systems adapted to cloud computing and virtualization (e.g., via VMware,
Docker, AWS).
 Server OSes became modular and container-friendly.
Beyond 2000: Cloud, Mobility, and New Paradigms
Cloud Computing: Operating systems began to evolve to better integrate with cloud services,
with features supporting cloud storage, application streaming, and virtualization becoming more
prominent.
Ubiquitous Mobility: Mobile operating systems like iOS and Android matured into powerful
platforms with vast app ecosystems, fundamentally changing how people interact with
technology.
New Form Factors: The rise of tablets, smartwatches, and other connected devices led to the
development of specialized operating system variants or entirely new OS designed for these
unique form factors.
Focus on Security and Privacy: With increasing cyber threats and growing concerns about data
privacy, modern operating systems have placed a strong emphasis on security features, regular
updates, and user privacy controls.
Integration of AI: More recently, operating systems have started incorporating artificial
intelligence features for tasks like voice assistance, personalized recommendations, and system
optimization.
HISTORY OF OPERATING SYSTEM
The operating system has been evolving through the years. The following table shows the history of
OS.
1st Generation (1940s–1955): Vacuum tube computers (examples: ENIAC, UNIVAC). No operating
system; manual machine-code execution; programs loaded using punched cards or switches.

2nd Generation (1955–1965): Transistor-based mainframes with magnetic tape and punched-card
readers. Batch processing systems (e.g., IBM 1401); early operating systems: GM-NAA I/O, IBSYS.

3rd Generation (1965–1975): Integrated-circuit-based computers. Multiprogramming and
time-sharing operating systems.

4th Generation (1975–1990s): Personal computers (PCs) and workstations built on microprocessors;
early GUI systems with floppy- and HDD-based storage. DOS (MS-DOS, PC-DOS), Mac OS
(System 1–6), Windows 1.0–3.x.

5th Generation (1990–2010): PCs, laptops, and servers; mobile devices (late 1990s);
Internet-connected devices. Windows 95/98/XP, Linux (1991+), Mac OS X (2001); Symbian,
Palm OS, Windows CE; network OS and Unix variants.

6th Generation (2010–Present): Smartphones, cloud systems, and IoT devices; virtual machines,
containers, and embedded systems. iOS, Android, Windows 10/11, Linux (Ubuntu, RHEL); Docker,
Kubernetes, RTOS, embedded Linux, cloud OS.
DISTRIBUTED COMPUTING
What is Distributed Computing?
Distributed computing refers to a system where processing and data storage are distributed across
multiple devices or systems, rather than being handled by a single central device. In a distributed
system, each device or system has its own processing capabilities and may also store and manage its
own data. These devices or systems work together to perform tasks and share resources, with no single
device serving as the central hub.
One example of a distributed computing system is a cloud computing system, where resources
such as computing power, storage, and networking are delivered over the Internet and accessed on
demand. In this type of system, users can access and use shared resources through a web browser or
other client software.

Components
There are several key components of a Distributed Computing System
 Devices or Systems: The devices or systems in a distributed system have their own processing
capabilities and may also store and manage their own data.
 Network: The network connects the devices or systems in the distributed system, allowing them to
communicate and exchange data.
 Resource Management: Distributed systems often have some type of resource management system
in place to allocate and manage shared resources such as computing power, storage, and networking.
The architecture of a Distributed Computing System is typically a Peer-to-Peer Architecture, where
devices or systems can act as both clients and servers and communicate directly with each other.
Characteristics
There are several characteristics that define a Distributed Computing System
 Multiple Devices or Systems: Processing and data storage are distributed across multiple devices or
systems.
 Peer-to-Peer Architecture: Devices or systems in a distributed system can act as both clients and
servers, as they can both request and provide services to other devices or systems in the network.
 Shared Resources: Resources such as computing power, storage, and networking are shared among
the devices or systems in the network.
 Horizontal Scaling: Scaling a distributed computing system typically involves adding more devices
or systems to the network to increase processing and storage capacity. This can be done through
hardware upgrades or by adding additional devices or systems to the network.
PARALLEL COMPUTATION
Parallel computation is a method of performing multiple calculations or processes simultaneously,
with the goal of solving problems more efficiently, especially those that are large or complex. It involves
dividing a task into smaller sub-tasks that can be executed at the same time on multiple processors or
cores.

Key Concepts of Parallel Computation:


1. Concurrency vs. Parallelism:
 Concurrency is when multiple tasks make progress at the same time.
 Parallelism is when multiple tasks actually run at the same time (e.g., on different CPU
cores).
2. Types of Parallelism:
 Data Parallelism: The same operation is performed on different parts of data (e.g.,
matrix multiplication).
 Task Parallelism: Different tasks are executed simultaneously (e.g., web server handling
multiple requests).
3. Architectures for Parallel Computation:
 Shared Memory Systems (e.g., multi-core processors).
 Distributed Systems (e.g., computer clusters, cloud computing).
 GPU-based Systems (massive parallelism for tasks like image processing or machine
learning).
4. Programming Models:
 Multithreading (e.g., POSIX threads, OpenMP).
 Message Passing (e.g., MPI – Message Passing Interface).
 MapReduce (popular in big data frameworks like Hadoop).
 CUDA/OpenCL (for GPU programming).
5. Challenges:
 Synchronization and communication overhead.
 Load balancing among processors.
 Debugging and testing parallel programs.
 Race conditions and deadlocks.
6. Applications:
 Scientific simulations
 Image and video processing
 Cryptography
 Machine learning
 Real-time systems (e.g., autonomous vehicles)
Key Differences between Distributed and Parallel Computing
 Distributed computing processes tasks across multiple computers connected by a network. Parallel
computing is the simultaneous processing of a single task using multiple processors or cores.
 Distributed computing relies heavily on inter-process communication over a network. Parallel
computing requires minimal communication, as tasks are divided and processed independently.
 Distributed computing shares data among connected computers, often leading to higher latency.
Parallel computing shares less data, typically within the same memory space, reducing latency.
 Distributed computing is more resilient to hardware failures, as tasks can be rerouted to other
nodes. Parallel computing is less resilient to hardware failures, but individual tasks are isolated.
 Distributed computing can scale horizontally by adding more machines to the network. Parallel
computing can scale vertically by adding more processors or cores to a single machine.
 Distributed computing has a complex programming model due to the need for handling distributed
resources. Parallel computing often uses simpler programming models, especially in
shared-memory architectures.
 Distributed computing may have dependencies on remote data, affecting execution speed. Parallel
computing minimizes data dependency, allowing tasks to execute independently.
 Distributed computing offers greater flexibility in terms of hardware and geographical distribution.
Parallel computing is more rigid in terms of hardware requirements, often centralized.
 In distributed computing, resource utilization may vary based on the load and distribution of tasks.
Parallel computing optimizes resource utilization by dividing tasks efficiently among processors.
 Distributed examples: Apache Hadoop, distributed databases (e.g., Cassandra). Parallel examples:
parallelized algorithms, MPI (Message Passing Interface).
PROCESS CONCEPTS:

DEFINITION OF PROCESS:
 A program in execution
 An asynchronous activity
 The "animated spirit" of a procedure
 The "locus of control" of a procedure in execution
The data structure that describes a process is called a "process descriptor" or a "process control block".
There are two key concepts:
1. A process is an "entity" - each process has its own address space, which consists of a text region,
a data region, and a stack region.
Text region: stores the code that the processor executes.
Data region: stores variables and dynamically allocated memory
that the process uses during execution.
Stack region: stores instructions and local variables for active
procedure calls.
2. A process is a "program in execution".
PROCESS STATES
 When a process executes, it passes through different states.
 These stages may differ in different operating systems.
 In general, a process can have one of the following five states at a time.
 New: This is the initial state when a process is first started/created.
 Ready: The process is waiting to have the processor allocated to it by the operating system so
that it can run.
o A process may come into this state after leaving the new state, or while running it may be
interrupted by the scheduler so that the CPU can be assigned to some other process.
 Running: After the ready state, the process state is set to running and the processor executes its
instructions.
 Waiting: A process moves into the waiting state if it needs to wait for a resource, such as waiting
for user input, or waiting for a file to become available.
 Terminated: Once the process finishes its execution, or is terminated by the operating system, it
is moved to the terminated state, where it waits to be removed from main memory.

LIFE CYCLE OF A PROCESS


Process Life cycle
When you run a program (which becomes a process), it goes through different phases
before its completion. These phases, or states, can vary depending on the operating system, but
the most common process lifecycles include two, five, or seven states. Here's a simple
explanation of these states:
The Two-State Model
The simplest way to think about a process‘s lifecycle is with just two states:
 Running: This means the process is actively using the CPU to do its work.

 Not Running: This means the process is not currently using the CPU. It could be waiting for
something, like user input or data, or it might just be paused.

Two State Process Model


When a new process is created, it starts in the not-running state. A system program called the
dispatcher then controls when it is given the CPU.
Here’s what happens step by step:
Not Running State: When the process is first created, it is not using the CPU.
Dispatcher Role: The dispatcher checks if the CPU is free (available for use).
Moving to Running State: If the CPU is free, the dispatcher lets the process use the CPU, and it
moves into the running state.
CPU Scheduler Role: When the CPU is available, the CPU scheduler decides which process gets to
run next. It picks the process based on a set of rules called the scheduling scheme, which varies from
one operating system to another.
The Five-State Model
The five-state process lifecycle is an expanded version of the two-state model. The two-
state model works well when all processes in the not running state are ready to run. However, in
some operating systems, a process may not be able to run because it is waiting for something, like
input or data from an external device. To handle this situation better, the not running state is divided
into two separate states:
Here’s a simple explanation of the five-state process model:
New: This state represents a newly created process that hasn‘t started running yet. It has not been
loaded into the main memory, but its process control block (PCB) has been created, which holds
important information about the process.
Ready: A process in this state is ready to run as soon as the CPU becomes available. It is waiting for
the operating system to give it a chance to execute.
Running: This state means the process is currently being executed by the CPU. Since we‘re assuming
there is only one CPU, at any time, only one process can be in this state.
Blocked/Waiting: This state means the process cannot continue executing right now. It is waiting for
some event to happen, like the completion of an input/output operation (for example, reading data
from a disk).
Exit/Terminate: A process in this state has finished its execution or has been stopped by the user for
some reason. At this point, it is released by the operating system and removed from memory.
The Seven-State Model
The states of a process are as follows:
New State: In this step, the process is about to be created but not yet created. It is the program that is
present in secondary memory that will be picked up by the OS to create the process.
Ready State: New -> Ready to run. After the creation of a process, the process enters the ready state
i.e. the process is loaded into the main memory. The process here is ready to run and is waiting to get
the CPU time for its execution. Processes that are ready for execution by the CPU are maintained in a
queue called a ready queue for ready processes.
Run State: The process is chosen from the ready queue by the OS for execution and the instructions
within the process are executed by any one of the available processors.
Blocked or Wait State: Whenever the process requests I/O, needs input from the user, or needs
access to a critical region (whose lock has already been acquired), it enters the blocked or wait
state. The process continues to wait in main memory and does not require the CPU. Once the I/O
operation is completed, the process goes back to the ready state.
Terminated or Completed State: Process is killed as well as PCB is deleted. The resources allocated
to the process will be released or deallocated.
Suspend Ready: A process that was initially in the ready state but was swapped out of main
memory (see the virtual memory topic) and placed in external storage by the scheduler is said to be
in the suspend-ready state. The process transitions back to the ready state whenever it is again
brought into main memory.
Suspend Wait or Suspend Blocked: Similar to suspend ready, but applies to a process that was
performing an I/O operation when a shortage of main memory caused it to be moved to secondary
memory. When its work finishes, it may go to the suspend-ready state.

CPU and I/O Bound Processes: If a process is intensive in terms of CPU operations, it is called a
CPU-bound process. Similarly, if a process is intensive in terms of I/O operations, it is called an
I/O-bound process.
PROCESS MANAGEMENT
Process Management in an operating system (OS) involves overseeing the lifecycle of processes—
from their creation to termination. This ensures efficient CPU utilization, multitasking, and system stability.
PROCESS STATE TRANSITIONS:
In an operating system, process state transition describes how a process moves between different
states of execution. These states represent the various stages a process goes through, from creation to
termination. The main states a process can be in are new, ready, running, waiting (blocked), and terminated.
1. New:
When a new process is created, it starts in the "New" state. It's awaiting admission to the "Ready" state.
2. Ready:
A process in the "Ready" state is waiting to be assigned to a CPU to begin execution.
3. Running:
When a process is assigned a CPU, it transitions to the "Running" state and is actively executing.
4. Waiting (Blocked):
If a process needs to wait for an event to occur (e.g., I/O completion, resource availability), it moves to
the "Waiting" or "Blocked" state.
5. Terminated:
Once a process completes execution, it transitions to the "Terminated" state.
6. Other states:
Some operating systems also include "Suspended" states, which can be "Suspended Ready" (waiting for
the CPU in a suspended state) or "Suspended Wait" (waiting for an event while suspended).

PROCESS CONTROL BLOCK (PCB):


 A Process Control Block is a data structure maintained by the Operating System for every
process.
 The PCB is identified by an integer process ID (PID).
 A PCB keeps all the information needed to keep track of a process, as listed below:
1. Process State: The current state of the process, i.e., whether it is ready, running, waiting, or
something else.
2. Process Privileges: Required to allow or disallow access to system resources.
3. Process ID: Unique identification for each process in the operating system.
4. Pointer: A pointer to the parent process.
5. Program Counter: A pointer to the address of the next instruction to be
executed for this process.
6. CPU Registers: The contents of the CPU registers, which must be saved when the process leaves
the running state so that it can resume execution later.
7. CPU Scheduling Information: Process priority and other scheduling information
required to schedule the process.
8. Memory Management Information: Includes the page table, memory limits, and segment table,
depending on the memory system used by the operating system.
9. Accounting Information: Includes the amount of CPU time used for process execution, time
limits, execution ID, etc.
10. I/O Status Information: Includes the list of I/O devices allocated to the process.
The architecture of a PCB is completely dependent on Operating System and may contain different
information in different operating systems. Here is a simplified diagram of a PCB

Process Control Block (PCB) Diagram


The PCB is maintained for a process throughout its lifetime, and is deleted once the process terminates.
PROCESS OPERATIONS
A process may spawn a new process
– The creating process is called the parent process
– The created process is called the child process
– Exactly one parent process creates a child
– When a parent process is destroyed, operating systems typically respond in one of
two ways:
• Destroy all child processes of that parent
• Allow child processes to proceed independently of their parents
Fig. Process creation hierarchy.

SUSPEND AND RESUME


Suspending a process

 Indefinitely removes it from contention for time on a processor without being destroyed.
 Useful for detecting security threats and for software debugging purposes.
 A suspension may be initiated by the process being suspended or by another process.
 A suspended process must be resumed by another process.
 Two suspended states:
• suspended-ready
• suspended-blocked

Process state transitions with suspend and resume.


CONTEXT SWITCHING

Context Switching in an operating system is a critical function that allows the CPU to
efficiently manage multiple processes. By saving the state of a currently active process and loading
the state of another, the system can handle various tasks simultaneously without losing progress. This
switching mechanism ensures optimal use of the CPU, enhancing the system's ability to perform
multitasking effectively.
Working of Context Switching

State Diagram of Context Switching


When switching between two processes, the scheduler selects the next process to run (often by priority) from the ready queue. The steps are as follows:
 The state of the current process must be saved so that it can be rescheduled later.
 The saved state includes register contents, credentials, and other operating-system-specific information recorded in the PCB.
 The PCB is kept in kernel memory (or, in some systems, in a custom OS file).
 The PCB of the selected process is retrieved to make it ready to run.
 The operating system suspends execution of the current process and selects the next process from the ready queue by loading that process's PCB.
 The program counter stored in the PCB is loaded, and execution continues in the selected process.
 Process/thread priority values can affect which process is selected from the queue, so this step can be important.
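The save/load cycle above can be sketched in miniature. This simplified Python model (invented here for illustration; real context switching happens inside the kernel) saves the running process's register values into its PCB and then loads the next process's PCB into the "CPU":

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Minimal process control block: a PID, a program counter,
    and a register file."""
    pid: int
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

@dataclass
class CPU:
    """The hardware state that must be saved and restored."""
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

def context_switch(cpu, old_pcb, new_pcb):
    # 1. Save the state of the current process into its PCB.
    old_pcb.program_counter = cpu.program_counter
    old_pcb.registers = dict(cpu.registers)
    # 2. Load the selected process's saved state from its PCB.
    cpu.program_counter = new_pcb.program_counter
    cpu.registers = dict(new_pcb.registers)

cpu = CPU(program_counter=42, registers={"r0": 7})
p1 = PCB(pid=1)                                   # currently running
p2 = PCB(pid=2, program_counter=100, registers={"r0": 99})

context_switch(cpu, p1, p2)                       # switch from p1 to p2
print(cpu.program_counter, p1.program_counter)    # p2 now "on the CPU"
```

After the switch, p1's PCB holds exactly the state the CPU had, so a later switch back to p1 resumes it from where it left off.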
INTERRUPT:

An interrupt in an operating system is a mechanism that allows the CPU to respond to an external
or internal event that requires immediate attention. When an interrupt occurs, the CPU temporarily stops
executing the current instructions and begins executing a function (often referred to as an interrupt
handler or interrupt service routine, ISR) to handle the event.

Interrupts can come from various sources, and they can be categorized into two main types:
1. Hardware Interrupts:
 These are triggered by external hardware devices, like keyboards, mice, or network interfaces, to
signal the CPU that they need processing.
 For example:
o Keyboard input: When a key is pressed, a hardware interrupt is generated to inform the
CPU to process the input.
o Timer interrupts: Generated by a timer to ensure the CPU doesn't get stuck in long-
running processes.
o I/O devices: Devices like hard drives, printers, etc., can send interrupts when they are
ready to transfer data.
Hardware interrupts are generally further classified into:
 Maskable interrupts (IRQ): These can be disabled (masked) by the CPU if it is currently
processing something more urgent.
 Non-maskable interrupts (NMI): These cannot be disabled and typically indicate critical
hardware errors, like a system crash or power failure.
2. Software Interrupts:
 These are triggered by software programs to request a service from the operating system or to
handle a system call. For instance, a program may need access to system resources like file
handling, memory allocation, or input/output operations.
 Examples include system calls for reading/writing files, managing memory, or handling
processes.
Interrupt Handling Process:
1. Interrupt Signal: An interrupt is triggered, either by hardware or software.
2. Interrupt Acknowledgment: The CPU acknowledges the interrupt and stops executing the
current instructions.
3. Context Saving: The CPU saves its current state (like register values) so it can return to the
previous state once the interrupt has been serviced.
4. Interrupt Service Routine (ISR): The CPU jumps to the address of the interrupt handler
function to process the interrupt.
5. Restoration: After the ISR finishes, the CPU restores the previous state and resumes the
interrupted task.
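In user space, Unix signals behave much like interrupts: a registered handler (the ISR analog) runs asynchronously, and control then returns to the interrupted code. This Python sketch (illustrative; the names are invented here) uses a timer signal to walk through the five steps above:

```python
import signal
import time

events = []

def isr(signum, frame):
    # Step 4: the "interrupt service routine" runs when SIGALRM arrives.
    events.append(int(signum))

# Register the handler (the per-process analog of an interrupt vector entry).
signal.signal(signal.SIGALRM, isr)

# Program a one-shot timer "interrupt" 0.1 s from now.
signal.setitimer(signal.ITIMER_REAL, 0.1)

# Main "task": keep working until the interrupt has been serviced.
while not events:
    time.sleep(0.01)   # the sleep is interrupted, the handler runs,
                       # then execution resumes here (step 5)

print(f"handled signal {events[0]}")
```

The context saving and restoration (steps 3 and 5) are done for us: the interpreter and the kernel save the interrupted state before the handler runs and restore it afterwards.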
INTERRUPT PROCESSING:
Interrupt processing refers to the mechanism that allows the CPU to handle interrupts efficiently
and ensure that the system can respond to asynchronous events, such as I/O operations, hardware
signals, or software requests. The process ensures that interrupts do not disrupt normal program
execution too much while still allowing for timely handling of important events.
Basic Steps of Interrupt Processing
Here's a detailed breakdown of how interrupt processing works in most systems:
1. Interrupt Occurrence
 An interrupt is triggered by either hardware (e.g., I/O devices like keyboards, disk drives) or
software (e.g., system calls).
 Hardware interrupts come from devices (keyboard, mouse, network card, etc.) requesting the
CPU's attention.
 Software interrupts are generated by programs to request system services.
2. Interrupt Acknowledgment
 The interrupting device sends an interrupt signal to the CPU, notifying it of the need for
attention.
 Depending on the system, the interrupt is either maskable (can be ignored for a while) or non-
maskable (cannot be ignored).
3. Interrupt Masking
 In most systems, interrupts can be masked (disabled) while the CPU is handling critical tasks.
 Maskable interrupts can be temporarily turned off by the operating system or hardware to
prevent them from interfering with critical operations.
 Non-maskable interrupts (NMI) cannot be masked, often used for emergency situations like
hardware failures.
4. Saving Context
 Before processing the interrupt, the CPU saves its context. This includes:
o Register values
o Program Counter (PC) to resume from where the interrupted process left off
o Flags or status registers
 This is done to ensure that once the interrupt is handled, the CPU can return to the task it was
previously performing.
5. Interrupt Vector Table
 The CPU uses an interrupt vector table (IVT) to locate the address of the appropriate Interrupt
Service Routine (ISR) for the given interrupt.
 Each type of interrupt (e.g., keyboard input, network packet arrival) has a unique vector, which
maps to a function or routine that should handle the interrupt.
6. Interrupt Service Routine (ISR) Execution
 The CPU jumps to the ISR, a special function designed to handle the interrupt.
o For example, if the interrupt is caused by a keypress, the ISR would handle the key input
and store it for further processing.
o The ISR is typically very short and efficient to minimize delays.
 After the ISR finishes, it returns control to the previously executing program, restoring the
context that was saved earlier.
7. Restoring Context
 After the ISR has completed, the saved context is restored. This includes:
o Restoring the CPU registers
o Setting the Program Counter (PC) to the address of the next instruction that should be
executed
 The CPU can now continue executing the previously interrupted process.
8. Interrupt Enable
 After the interrupt has been serviced and context restored, the system often re-enables interrupt
processing if it was disabled during the ISR.
INTERRUPT CLASSES:
Interrupt classes in operating systems can be categorized by their source and behavior,
including hardware, software, timer, external, internal (exceptions), and maskable/non-maskable
interrupts. Hardware interrupts originate from external devices, while software interrupts are triggered by
programs or exceptional conditions.
Detailed Breakdown:
Hardware Interrupts:
These interrupts are generated by external hardware devices like keyboards, mice, network cards, and
other peripheral devices.
Software Interrupts:
These interrupts are generated by software or due to exceptional conditions, such as an error or
a system call.
Timer Interrupts:
These are periodic interrupts that occur at regular intervals, often used for scheduling tasks or
time-based operations.
External Interrupts:
Similar to hardware interrupts, these originate from external sources, but may also include
interrupts generated by other modules within the system.
Internal Interrupts (Exceptions):
These are interrupts caused by exceptional conditions within the processor itself, such as a
division by zero or attempting to access an invalid memory address.
Maskable vs. Non-Maskable Interrupts:
Maskable interrupts can be temporarily disabled by the operating system, while non-maskable
interrupts are critical and must be handled immediately.
Interrupt Priority:
Interrupts can be assigned different priorities to determine which ones should be handled first.
Interrupt Handling:
When an interrupt occurs, the processor suspends its current task, saves the state, and executes
an interrupt handler routine to handle the interrupt. After the interrupt is handled, the processor
restores its state and resumes the original task.
INTER PROCESS COMMUNICATION (IPC)
Inter Process Communication (IPC) is a mechanism that allows processes (independent
running programs) to communicate and coordinate with each other. This is essential in modern operating
systems where multiple processes need to share data, resources, or coordinate execution.
Key Goals of IPC
1. Data exchange: Share information between processes.
2. Synchronization: Ensure processes execute in the correct sequence.
3. Resource sharing: Prevent conflicts when accessing shared data or devices.
4. Event signaling: Allow processes to notify others about certain events.
Types of IPC Mechanisms
Mechanism | Description | Use Case
Pipes | One-way or two-way communication channel using a buffer in kernel space. | Parent-child process communication.
FIFOs (Named Pipes) | Like pipes, but with a name; can be used by unrelated processes. | Communication between unrelated processes.
Message Queues | Messages are sent to and retrieved from a queue maintained by the OS. | Queue-based communication.
Shared Memory | A memory segment is shared between processes. Fastest IPC, but needs synchronization. | Large data exchange with synchronization.
Semaphores | Used to control access to shared resources (mostly for synchronization). | Preventing race conditions.
Sockets | Network-based communication using IP addresses and ports (even on the same machine). | Inter-process or inter-machine communication.
Signals | The OS sends a simple signal to a process (like an interrupt). | Process control (e.g., kill, notify).
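Of these mechanisms, the pipe is the simplest to demonstrate. The following Python sketch (added here for illustration) creates a pipe and uses it for parent-to-child communication, the classic use case noted above:

```python
import os

r, w = os.pipe()          # kernel buffer with a read end and a write end
pid = os.fork()

if pid == 0:
    # Child: close the unused write end, then read the parent's message.
    os.close(w)
    data = os.read(r, 1024)
    os.close(r)
    os._exit(0 if data == b"hello from parent" else 1)
else:
    # Parent: close the unused read end, write the message, wait for child.
    os.close(r)
    os.write(w, b"hello from parent")
    os.close(w)
    _, status = os.waitpid(pid, 0)
    print("child verified message:", os.WEXITSTATUS(status) == 0)
```

Because an anonymous pipe is inherited across fork(), it only works between related processes; a FIFO (named pipe) removes that restriction.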

SIGNALS:
What is a Signal?
A signal is a limited form of inter-process communication used in Unix/Linux systems. It is a
software interrupt sent to a process to notify it that an event has occurred.
Common Use Cases:
 Terminate a process (SIGTERM)
 Interrupt a process (SIGINT, e.g., Ctrl+C)
 Notify a process of illegal operations (SIGSEGV, SIGFPE)
 Custom signaling between user processes (SIGUSR1, SIGUSR2)
Common Signals:
Signal | Description
SIGINT | Interrupt (Ctrl+C)
SIGTERM | Termination request
SIGKILL | Force kill a process (cannot be caught)
SIGSTOP | Stop/suspend a process
SIGCONT | Continue a stopped process
SIGUSR1 | User-defined signal 1
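A process can install a handler for SIGUSR1 and then receive the signal from another process or, as in this illustrative Python sketch (added here, not from the text), from itself:

```python
import os
import signal
import time

received = []

def on_usr1(signum, frame):
    # Handler: record that the user-defined signal arrived.
    received.append(int(signum))

signal.signal(signal.SIGUSR1, on_usr1)

# Send SIGUSR1 to our own PID; any process with permission could do this.
os.kill(os.getpid(), signal.SIGUSR1)

# Wait briefly for delivery (signals are asynchronous).
for _ in range(100):
    if received:
        break
    time.sleep(0.01)

print("got SIGUSR1:", received == [int(signal.SIGUSR1)])
```

SIGKILL and SIGSTOP, by contrast, cannot be caught by a handler like this; the kernel always enforces their default actions.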
MESSAGE PASSING:
What is Message Passing?
Message passing is an IPC method where processes send and receive messages using OS-provided
mechanisms like message queues, mailboxes, or sockets.
Characteristics:
 Direct or indirect communication
 Synchronous (blocking) or asynchronous (non-blocking)
 Supports structured communication (e.g., with headers, priorities)
Key Methods:
Mechanism Description
Message Queues Queue in the kernel that stores messages sent between processes.
Sockets Useful for both local and network communication.
Mailboxes Named communication objects, mostly in some OS like Windows or RTOS.
