Os Material Unit 1
OPERATING SYSTEM
PREPARED BY
Mrs. V. SAVITHAKUMARI, M.Sc., B.Ed., M.Phil.
ASSISTANT PROFESSOR
Figure 1.4 An early batch system. (a) Programmers bring cards to 1401. (b) 1401 reads batch of jobs
onto tape. (c) Operator carries input tape to 7094. (d) 7094 does computing. (e) Operator carries
output tape to 1401. (f) 1401 prints output.
After each job finished, its results were written to an output tape, which a small computer (such as the IBM 1401) then printed offline.
The Third Generation (1965-1980) : ICs and Multiprogramming
The 7094 was a word-oriented, large-scale scientific computer used for numerical
calculations in science and engineering.
The 1401, on the other hand, was a character-oriented commercial computer widely
used for business work.
Both of these machines were very large, and many customers needed smaller machines.
IBM produced the System/360 to solve these problems.
All the machines in the family had the same architecture and instruction set, so programs
written for one machine could run on all the others.
The 360 was designed to handle both scientific and commercial computing.
The 360 was also the first major computer line to use (small-scale) integrated circuits.
OS/360 was the operating system used on these third-generation computers.
Multiprogramming was first used in OS/360.
3. Linux Growth:
Linux matured with distributions like Ubuntu, Red Hat, and Debian.
It became the standard OS for servers, supercomputers, and embedded systems.
4. Mobile OS Emergence:
iOS (2007) and Android (2008) transformed mobile computing.
Based on UNIX and Linux respectively, they brought touch interfaces and app
ecosystems.
5. Cloud & Virtualization:
Operating systems adapted to cloud computing and virtualization (e.g., via VMware,
Docker, AWS).
Server OSes became modular and container-friendly.
Beyond 2000: Cloud, Mobility, and New Paradigms
Cloud Computing: Operating systems began to evolve to better integrate with cloud services,
with features supporting cloud storage, application streaming, and virtualization becoming more
prominent.
Ubiquitous Mobility: Mobile operating systems like iOS and Android matured into powerful
platforms with vast app ecosystems, fundamentally changing how people interact with
technology.
New Form Factors: The rise of tablets, smartwatches, and other connected devices led to the
development of specialized operating system variants, or entirely new OSes designed for these
unique form factors.
Focus on Security and Privacy: With increasing cyber threats and growing concerns about data
privacy, modern operating systems have placed a strong emphasis on security features, regular
updates, and user privacy controls.
Integration of AI: More recently, operating systems have started incorporating artificial
intelligence features for tasks like voice assistance, personalized recommendations, and system
optimization.
HISTORY OF OPERATING SYSTEM
The operating system has been evolving through the years. The following table shows the history of
OS.
Generation: 6th Generation
Year: 2010 – Present
Electronic Device Used: Smartphones, cloud systems, IoT devices
Types of OS / Devices: iOS, Android, Windows 10/11, Linux (Ubuntu, RHEL)
Components
There are several key components of a Distributed Computing System
Devices or Systems: The devices or systems in a distributed system have their own processing
capabilities and may also store and manage their own data.
Network: The network connects the devices or systems in the distributed system, allowing them to
communicate and exchange data.
Resource Management: Distributed systems often have some type of resource management system
in place to allocate and manage shared resources such as computing power, storage, and networking.
The architecture of a Distributed Computing System is typically a Peer-to-Peer Architecture, where
devices or systems can act as both clients and servers and communicate directly with each other.
Characteristics
There are several characteristics that define a Distributed Computing System
Multiple Devices or Systems: Processing and data storage are distributed across multiple devices or
systems.
Peer-to-Peer Architecture: Devices or systems in a distributed system can act as both clients and
servers, as they can both request and provide services to other devices or systems in the network.
Shared Resources: Resources such as computing power, storage, and networking are shared among
the devices or systems in the network.
Horizontal Scaling: Scaling a distributed computing system typically involves adding more devices
or systems to the network to increase processing and storage capacity. This can be done through
hardware upgrades or by adding additional devices or systems to the network.
PARALLEL COMPUTATION
Parallel computation is a method of performing multiple calculations or processes simultaneously,
with the goal of solving problems more efficiently, especially those that are large or complex. It involves
dividing a task into smaller sub-tasks that can be executed at the same time on multiple processors or
cores.
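The divide-into-sub-tasks idea can be sketched in Python. This is a hypothetical example using a thread pool for brevity; genuinely CPU-bound work would normally use `ProcessPoolExecutor` so the sub-tasks run on separate cores.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker handles one independent sub-task: summing its own sub-list.
    return sum(chunk)

data = list(range(1, 101))                             # overall task: sum 1..100
chunks = [data[i:i + 25] for i in range((0), 100, 25)]  # divide into 4 sub-tasks

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(partial_sum, chunks))     # run sub-tasks concurrently

total = sum(partials)                                  # combine the partial results
```

The pattern is the essence of parallel computation: split the input, process the pieces at the same time, then merge the partial results into the final answer.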
DEFINITION OF PROCESS:
A program in execution
An asynchronous activity
The "animated spirit" of a procedure
The "locus of control" of a procedure in execution
The data structure of the process is called a "process descriptor" or a "process control block".
There are two key concepts:
1. A process is an "entity" – each process has its own address space, which consists of a text region,
a data region, and a stack region.
Text region: stores the code that the processor executes.
Data region: stores variables and dynamically allocated memory
that the process uses during execution.
Stack region: stores instructions and local variables for active
procedure calls.
2. A process is a "program in execution".
PROCESS STATES
When a process executes, it passes through different states.
These stages may differ in different operating systems.
In general, a process can have one of the following five states at a time.
New: This is the initial state when a process is first started/created.
Ready: The process is waiting for the operating system to allocate the processor to it so
that it can run.
o A process may come into this state after leaving the start state, or while running if it is
interrupted by the scheduler so that the CPU can be assigned to some other process.
Running: After the Ready state, the process state is set to Running and the processor executes its
instructions.
Waiting: A process moves into the waiting state if it needs to wait for a resource, such as waiting for
user input, or waiting for a file to become available.
Terminated: Once the process finishes its execution, or is terminated by the operating system, it
is moved to the terminated state, where it waits to be removed from main memory.
Not Running: This means the process is not currently using the CPU. It could be waiting for
something, like user input or data, or it might just be paused.
CPU and I/O Bound Processes: If a process is intensive in terms of CPU operations, it is
called a CPU-bound process. Similarly, if a process is intensive in terms of I/O operations, it is
called an I/O-bound process.
PROCESS MANAGEMENT
Process Management in an operating system (OS) involves overseeing the lifecycle of processes—
from their creation to termination. This ensures efficient CPU utilization, multitasking, and system stability.
PROCESS STATE TRANSITIONS:
In an operating system, process state transition describes how a process moves between different
states of execution. These states represent the various stages a process goes through, from creation to
termination. The main states a process can be in are new, ready, running, waiting (blocked), and terminated.
1. New:
When a new process is created, it starts in the "New" state. It's awaiting admission to the "Ready" state.
2. Ready:
A process in the "Ready" state is waiting to be assigned to a CPU to begin execution.
3. Running:
When a process is assigned a CPU, it transitions to the "Running" state and is actively executing.
4. Waiting (Blocked):
If a process needs to wait for an event to occur (e.g., I/O completion, resource availability), it moves to
the "Waiting" or "Blocked" state.
5. Terminated:
Once a process completes execution, it transitions to the "Terminated" state.
6. Other states:
Some operating systems also include "Suspended" states, which can be "Suspended Ready" (waiting for
the CPU in a suspended state) or "Suspended Wait" (waiting for an event while suspended).
Suspension indefinitely removes a process from contention for time on a processor without destroying it.
It is useful for detecting security threats and for software debugging purposes.
A suspension may be initiated by the process being suspended or by another process.
A suspended process must be resumed by another process.
Two suspended states:
• suspended ready
• suspended blocked
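The state transitions described above, including the suspended states, can be modeled as a small transition table. This is a hypothetical sketch: the exact set of allowed transitions is a simplifying assumption, since real operating systems differ in naming and detail.

```python
# Hypothetical process-state transition table (a simplified model,
# not the exact rules of any particular operating system).
ALLOWED = {
    "new":               {"ready"},
    "ready":             {"running", "suspended ready"},
    "running":           {"ready", "waiting", "terminated"},
    "waiting":           {"ready", "suspended blocked"},
    "suspended ready":   {"ready"},
    "suspended blocked": {"waiting", "suspended ready"},
    "terminated":        set(),   # final state: no outgoing transitions
}

def can_transition(src, dst):
    """Return True if a process may move directly from src to dst."""
    return dst in ALLOWED.get(src, set())
```

For example, `can_transition("new", "ready")` is allowed, while `can_transition("new", "running")` is not: a newly created process must first be admitted to the ready queue before the scheduler can dispatch it.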
Context Switching in an operating system is a critical function that allows the CPU to
efficiently manage multiple processes. By saving the state of a currently active process and loading
the state of another, the system can handle various tasks simultaneously without losing progress. This
switching mechanism ensures optimal use of the CPU, enhancing the system's ability to perform
multitasking effectively.
Working Process Context Switching
An interrupt in an operating system is a mechanism that allows the CPU to respond to an external
or internal event that requires immediate attention. When an interrupt occurs, the CPU temporarily stops
executing the current instructions and begins executing a function (often referred to as an interrupt
handler or interrupt service routine, ISR) to handle the event.
Interrupts can come from various sources, and they can be categorized into two main types:
1. Hardware Interrupts:
These are triggered by external hardware devices, like keyboards, mice, or network interfaces, to
signal the CPU that they need processing.
For example:
o Keyboard input: When a key is pressed, a hardware interrupt is generated to inform the
CPU to process the input.
o Timer interrupts: Generated by a timer to ensure the CPU doesn't get stuck in
long-running processes.
o I/O devices: Devices like hard drives, printers, etc., can send interrupts when they are
ready to transfer data.
Hardware interrupts are generally further classified into:
Maskable interrupts (IRQ): These can be disabled (masked) by the CPU if it is currently
processing something more urgent.
Non-maskable interrupts (NMI): These cannot be disabled and typically indicate critical
hardware errors, like a system crash or power failure.
2. Software Interrupts:
These are triggered by software programs to request a service from the operating system or to
handle a system call. For instance, a program may need access to system resources like file
handling, memory allocation, or input/output operations.
Examples include system calls for reading/writing files, managing memory, or handling
processes.
Interrupt Handling Process:
1. Interrupt Signal: An interrupt is triggered, either by hardware or software.
2. Interrupt Acknowledgment: The CPU acknowledges the interrupt and stops executing the
current instructions.
3. Context Saving: The CPU saves its current state (like register values) so it can return to the
previous state once the interrupt has been serviced.
4. Interrupt Service Routine (ISR): The CPU jumps to the address of the interrupt handler
function to process the interrupt.
5. Restoration: After the ISR finishes, the CPU restores the previous state and resumes the
interrupted task.
INTERRUPT PROCESSING:
Interrupt processing refers to the mechanism that allows the CPU to handle interrupts efficiently
and ensure that the system can respond to asynchronous events, such as I/O operations, hardware
signals, or software requests. The process ensures that interrupts do not disrupt normal program
execution too much while still allowing for timely handling of important events.
Basic Steps of Interrupt Processing
Here's a detailed breakdown of how interrupt processing works in most systems:
1. Interrupt Occurrence
An interrupt is triggered by either hardware (e.g., I/O devices like keyboards, disk drives) or
software (e.g., system calls).
Hardware interrupts come from devices (keyboard, mouse, network card, etc.) requesting the
CPU's attention.
Software interrupts are generated by programs to request system services.
2. Interrupt Acknowledgment
The interrupting device sends an interrupt signal to the CPU, notifying it of the need for
attention.
Depending on the system, the interrupt is either maskable (can be ignored for a while) or
non-maskable (cannot be ignored).
3. Interrupt Masking
In most systems, interrupts can be masked (disabled) while the CPU is handling critical tasks.
Maskable interrupts can be temporarily turned off by the operating system or hardware to
prevent them from interfering with critical operations.
Non-maskable interrupts (NMI) cannot be masked, often used for emergency situations like
hardware failures.
4. Saving Context
Before processing the interrupt, the CPU saves its context. This includes:
o Register values
o Program Counter (PC) to resume from where the interrupted process left off
o Flags or status registers
This is done to ensure that once the interrupt is handled, the CPU can return to the task it was
previously performing.
5. Interrupt Vector Table
The CPU uses an interrupt vector table (IVT) to locate the address of the appropriate Interrupt
Service Routine (ISR) for the given interrupt.
Each type of interrupt (e.g., keyboard input, network packet arrival) has a unique vector, which
maps to a function or routine that should handle the interrupt.
6. Interrupt Service Routine (ISR) Execution
The CPU jumps to the ISR, a special function designed to handle the interrupt.
o For example, if the interrupt is caused by a keypress, the ISR would handle the key input
and store it for further processing.
o The ISR is typically very short and efficient to minimize delays.
After the ISR finishes, it returns control to the previously executing program, restoring the
context that was saved earlier.
7. Restoring Context
After the ISR has completed, the saved context is restored. This includes:
o Restoring the CPU registers
o Setting the Program Counter (PC) to the address of the next instruction that should be
executed
The CPU can now continue executing the previously interrupted process.
8. Interrupt Enable
After the interrupt has been serviced and context restored, the system often re-enables interrupt
processing if it was disabled during the ISR.
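The dispatch through an interrupt vector table described in the steps above can be sketched as a toy Python simulation. The vector numbers, handler names, and the dictionary-based table here are all hypothetical illustrations, not how real hardware stores the IVT.

```python
# Toy simulation of interrupt dispatch via a vector table.
log = []  # records what each ISR did, for demonstration

def keyboard_isr():
    # ISR for a (hypothetical) keyboard interrupt: handle the keypress quickly.
    log.append("key handled")

def timer_isr():
    # ISR for a (hypothetical) periodic timer tick.
    log.append("tick")

# The interrupt vector table maps each vector number to its ISR.
IVT = {1: keyboard_isr, 2: timer_isr}

def raise_interrupt(vector, saved_context):
    # 1. The caller's context (registers, program counter) has been saved.
    # 2. Look up the ISR for this vector in the table and execute it.
    IVT[vector]()
    # 3. Restore the saved context so the interrupted task can resume.
    return saved_context

ctx = raise_interrupt(1, {"pc": 100})  # keyboard interrupt arrives
raise_interrupt(2, {"pc": 200})        # then a timer tick
```

The key idea the sketch preserves is indirection: the CPU never hardcodes handler addresses; it indexes the table by vector number, so the OS can install or replace handlers at boot time.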
INTERRUPT CLASSES:
Interrupt classes in operating systems can be categorized by their source and behavior,
including hardware, software, timer, external, internal (exceptions), and maskable/non-maskable
interrupts. Hardware interrupts originate from external devices, while software interrupts are triggered by
programs or exceptional conditions.
Detailed Breakdown:
Hardware Interrupts:
These interrupts are generated by external hardware devices like keyboards, mice, network cards, and
other peripheral devices.
Software Interrupts:
These interrupts are generated by software or due to exceptional conditions, such as an error or
a system call.
Timer Interrupts:
These are periodic interrupts that occur at regular intervals, often used for scheduling tasks or
time-based operations.
External Interrupts:
Similar to hardware interrupts, these originate from external sources, but may also include
interrupts generated by other modules within the system.
Internal Interrupts (Exceptions):
These are interrupts caused by exceptional conditions within the processor itself, such as a
division by zero or attempting to access an invalid memory address.
Maskable vs. Non-Maskable Interrupts:
Maskable interrupts can be temporarily disabled by the operating system, while non-maskable
interrupts are critical and must be handled immediately.
Interrupt Priority:
Interrupts can be assigned different priorities to determine which ones should be handled first.
Interrupt Handling:
When an interrupt occurs, the processor suspends its current task, saves the state, and executes
an interrupt handler routine to handle the interrupt. After the interrupt is handled, the processor
restores its state and resumes the original task.
INTER PROCESS COMMUNICATION (IPC)
Inter Process Communication (IPC) is a mechanism that allows processes (independent
running programs) to communicate and coordinate with each other. This is essential in modern operating
systems where multiple processes need to share data, resources, or coordinate execution.
Key Goals of IPC
1. Data exchange: Share information between processes.
2. Synchronization: Ensure processes execute in the correct sequence.
3. Resource sharing: Prevent conflicts when accessing shared data or devices.
4. Event signaling: Allow processes to notify others about certain events.
Types of IPC Mechanisms
Pipes: One-way or two-way communication channel using a buffer in kernel space. Use case: parent-child process communication.
FIFOs (Named Pipes): Like pipes but with a name; can be used by unrelated processes. Use case: communication between unrelated processes.
Message Queues: Messages are sent to and retrieved from a queue maintained by the OS. Use case: queue-based communication.
Shared Memory: A memory segment is shared between processes. Fastest IPC, but needs synchronization. Use case: large data exchange with synchronization.
Semaphores: Used to control access to shared resources (mostly for synchronization). Use case: preventing race conditions.
Sockets: Network-based communication using IP addresses and ports (even on the same machine). Use case: inter-process or inter-machine communication.
Signals: The OS sends a simple signal to a process (like an interrupt). Use case: process control (e.g., kill, notify).
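As a minimal illustration of the pipe mechanism listed above, Python's standard `os.pipe` exposes the same kernel-buffered, one-way channel. This sketch writes and reads within a single process for simplicity; in real use a pipe is typically created before `fork()` so that a parent and child share it.

```python
import os

# Create a one-way channel: r is the read end, w is the write end.
# The data written sits in a kernel buffer until it is read.
r, w = os.pipe()

os.write(w, b"hello from the writer")
os.close(w)  # closing the write end lets the reader see end-of-file

data_read = os.read(r, 1024)  # drain the kernel buffer
os.close(r)
```

Because the buffer lives in kernel space, the two ends need no shared memory of their own; this is why pipes are the classic choice for parent-child communication.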
SIGNALS:
What is a Signal?
A signal is a limited form of inter-process communication used in Unix/Linux systems. It is a
software interrupt sent to a process to notify it that an event has occurred.
Common Use Cases:
Terminate a process (SIGTERM)
Interrupt a process (SIGINT, e.g., Ctrl+C)
Notify a process of illegal operations (SIGSEGV, SIGFPE)
Custom signaling between user processes (SIGUSR1, SIGUSR2)
Common Signals:
Signal Description
SIGINT Interrupt (Ctrl+C)
SIGTERM Termination request
SIGKILL Force kill a process (cannot be caught)
SIGSTOP Stop/suspend process
SIGCONT Continue a stopped process
SIGUSR1 User-defined signal 1
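A minimal sketch of installing and receiving a user-defined signal with Python's standard `signal` module. This assumes a Unix/Linux system, since SIGUSR1 is not available on Windows; the process simply signals itself to show the handler being invoked.

```python
import os
import signal

received = []  # records which signals the handler saw

def handler(signum, frame):
    # Like a real ISR, the handler should do minimal work and return quickly.
    received.append(signum)

# Install the handler for the user-defined signal (Unix/Linux only).
signal.signal(signal.SIGUSR1, handler)

# Deliver SIGUSR1 to this very process; the handler runs before kill returns.
os.kill(os.getpid(), signal.SIGUSR1)
```

In practice another process would send the signal (e.g., via the `kill` command), which is how two cooperating user processes use SIGUSR1/SIGUSR2 for custom notification.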
MESSAGE PASSING:
What is Message Passing?
Message passing is an IPC method where processes send and receive messages using OS-provided
mechanisms like message queues, mailboxes, or sockets.
Characteristics:
Direct or indirect communication
Synchronous (blocking) or asynchronous (non-blocking)
Supports structured communication (e.g., with headers, priorities)
Key Methods:
Message Queues: A queue in the kernel that stores messages sent between processes.
Sockets: Useful for both local and network communication.
Mailboxes: Named communication objects, found mostly in some operating systems such as Windows or real-time OSes.
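The queue-based message-passing pattern can be sketched with Python's standard `queue` module between two threads, as a stand-in for a kernel message queue between processes. The message structure and names here are illustrative assumptions.

```python
import queue
import threading

mailbox = queue.Queue()  # stands in for a kernel-maintained message queue

def producer():
    # Send a structured message (header-like fields plus a body).
    mailbox.put({"type": "greeting", "body": "hello"})

def consumer(out):
    # get() blocks until a message arrives: a synchronous (blocking) receive.
    msg = mailbox.get()
    out.append(msg["body"])

results = []
t_recv = threading.Thread(target=consumer, args=(results,))
t_send = threading.Thread(target=producer)
t_recv.start()
t_send.start()
t_send.join()
t_recv.join()
```

The blocking `get()` illustrates synchronous message passing; an asynchronous (non-blocking) receive would use `get_nowait()` and handle the empty-queue case instead.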