
Cb3402 Unit 1 Notes

The document provides an overview of operating systems, detailing their role as intermediaries between users and hardware, and their components including hardware, operating system, application programs, and users. It discusses computer system organization, including CPU, memory, I/O devices, and the importance of resource management, interrupts, storage structures, and I/O structures. Additionally, it covers computer architecture, distinguishing between single-processor and multiprocessor systems, and their respective functionalities.


LOYOLA INSTITUTE OF TECHNOLOGY

Palanchur, Chennai – 600123.


Approved by AICTE, New Delhi and Affiliated to Anna University, Chennai
(Accredited by NAAC with “A” Grade & An ISO 9001-2015 Certified Institution)

CB3402 - OPERATING SYSTEMS AND SECURITY

UNIT I OPERATING SYSTEM OVERVIEW 9
Computer-System Organization – Architecture – Operating-System Operations – Resource
Management – Security and Protection – Distributed Systems – Kernel Data Structures –
Operating-System Services – System Calls – System Services – Why Applications Are Operating-
System Specific – Operating-System Design and Implementation - Operating-System Structure –
Building and Booting an Operating System
OPERATING SYSTEM: INTRODUCTION
➢ An operating system is a program that manages the computer's hardware.
➢ It acts as an intermediary between the user of a computer and the computer hardware.
➢ The purpose of an operating system is to provide an environment in which a user can
execute programs in a convenient and efficient manner.
➢ It also provides a basis for application programs.
1.1 ELEMENTS / COMPONENTS OF COMPUTER SYSTEM
A computer system can be divided into four components:
1. The hardware,
2. The operating system,
3. The application programs, and
4. The users.
The hardware: provides the basic computing resources for the system. Eg: the central processing unit
(CPU), the memory, and the input/output (I/O) devices.

Fig 1.1: Abstract view of the components of the computer system


The application programs: define the ways in which these resources are used to solve users'
computing problems. Eg: word processors, spreadsheets, compilers, and Web browsers.
The operating system: controls the hardware and coordinates its use among the various application
programs for the various users.

The User
There are two views
i. User View
ii. System View

i. User View
➢ The user’s view of the computer varies according to the interface being used.
➢ Most computer users sit in front of a laptop or PC, consisting of a monitor, keyboard, and mouse. Such
systems are designed for one user.
➢ Here, the operating system is designed for ease of use, with less attention given to
performance, security, and resource utilization.
➢ Users interacting with mobile devices such as smartphones and tablets use a touch screen as the
interface; the user interacts with the system by pressing and swiping fingers across the
screen.
➢ Many mobile devices also allow users to interact through a voice recognition
interface, such as Apple’s Siri.
➢ Some computers, such as embedded computers, are designed to run without user intervention.
They are used in home devices and automobiles, have numeric keypads, and may turn
indicator lights on or off to show status.
ii. System View
➢ From the computer’s point of view, we can view an operating system as a
resource allocator.
➢ A computer system has many resources that may be required to solve a problem: CPU time,
memory space, file-storage space, I/O devices, and so on.
➢ The operating system acts as the manager of these resources.
➢ The operating system must decide how to allocate them to specific programs and operate the
computer system efficiently and fairly.
➢ An operating system is a control program. A control program manages the
execution of user programs to prevent errors and improper use of the computer.
➢ It is especially concerned with the operation and control of I/O devices.
Defining Operating Systems
The operating system is the one program running at all times on the computer, usually called the
kernel. Along with the kernel, there are two other types of programs:
❖ System programs, which are associated with the operating system but are not
necessarily part of the kernel
❖ Application programs, which include all programs not associated with the operation of the
system.
Some mobile operating systems also include
❖ Middleware, a set of software frameworks that provide additional services to applications.
For example, in the mobile operating systems Apple's iOS and Google's Android, the
operating system includes
▪ the kernel,
▪ middleware frameworks that ease application development and provide features, and
▪ system programs that aid in managing the system while it is running.

1.2 COMPUTER SYSTEM- ORGANIZATION


➢ A modern general-purpose computer system consists of one or more CPUs and a number of device
controllers connected through a common bus that provides access between components and shared
memory.
➢ Each device controller is in charge of a specific type of device (for example, a disk
drive, audio device, or graphics display).
➢ Depending on the controller, more than one device may be attached.
➢ A device controller maintains some local buffer storage and a set of special-purpose registers. The
device controller is responsible for moving the data between the peripheral devices that it controls
and its local buffer storage.
➢ Typically, operating systems have a device driver for each device controller. This
device driver understands the device controller and provides the rest of the operating system with a
uniform interface to the device.

Fig 1.2 A typical PC computer system


Three key aspects of the system
1. Interrupts
2. Storage Structure
3. I/O structure
1.2.1 INTERRUPTS
What is an interrupt?
An interrupt is an event, caused by hardware or software, that signals the processor to
complete the ongoing instruction and immediately execute the Interrupt Service Routine (ISR),
which contains the code for dealing with the interrupt.
➢ To start an I/O operation, the device driver loads the appropriate registers in the device
controller. The device controller examines the contents of these registers to determine what action to
take (such as "read a character from the keyboard").
➢ The controller starts the transfer of data from the device to its local buffer.
➢ Once the transfer of data is complete, the device controller informs the device driver that it has
finished its operation.
➢ The device driver then gives control to other parts of the operating system.
The device controller informs the device driver that it has finished its operation with the help of an interrupt.

1.2.1.1 Overview
➢ Hardware may trigger an interrupt at any time by sending a signal to the CPU, usually by
way of the system bus.
➢ When the CPU is interrupted, it stops what it is doing and immediately transfers
execution to a fixed location.
➢ The fixed location usually contains the starting address where the service routine for the
interrupt is located.
➢ The interrupt service routine executes; on completion, the CPU resumes the
interrupted computation.
➢ The interrupt must transfer control to the appropriate interrupt service routine.
Fig 1.3 Interrupt timeline for a single program doing output
1.2.1.2 Implementation
➢ The CPU hardware has a wire called the interrupt-request line that the CPU senses after
executing every instruction.
➢ When the CPU detects that a controller has asserted a signal on the interrupt-request line,
it reads the interrupt number and jumps to the interrupt-handler routine by using
that interrupt number as an index into the interrupt vector.
➢ The Device controller raises an interrupt by asserting a signal on the interrupt
request line,
▪ The CPU catches the interrupt and dispatches it to the interrupt handler,
▪ The handler clears the interrupt by servicing the device.
In a modern operating system, we need more sophisticated interrupt-handling features:
1. The ability to defer interrupt handling during critical processing.
2. An efficient way to dispatch to the proper interrupt handler for a device.
3. Multilevel interrupts, so that the operating system can distinguish between high- and low-priority
interrupts and can respond with the appropriate degree of urgency.
In modern computer hardware, these three features are provided by the CPU and the interrupt-controller
hardware.
CPUs have two interrupt request lines
i. Non Maskable Interrupts
Reserved for events such as unrecoverable memory errors.
Fig 1.4 Interrupt-driven I/O cycle
ii. Maskable Interrupts
▪ The maskable interrupt is used by device controllers to request service.
▪ It can be turned off by the CPU before the execution of critical instruction sequences
that must not be interrupted.
Intel processor event-vector table
▪ The events from 0 to 31, which are non-maskable, are used to signal various error
conditions.
▪ The events from 32 to 255, which are maskable, are used for purposes
such as device- generated interrupts.
▪ The interrupt mechanism also implements a system of interrupt priority levels.
▪ These levels enable the CPU to defer the handling of low-priority interrupts
without masking all interrupts, and make it possible for a high-priority
interrupt to preempt the execution of a low-priority interrupt.
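The vectored dispatch described above can be sketched in C as an array of handler function pointers indexed by interrupt number. This is only an illustration of the idea, not real kernel code; the names `register_isr` and `dispatch_interrupt` are invented for the example.

```c
#define VECTOR_SIZE 256

typedef void (*isr_t)(void);            /* interrupt service routine */

static isr_t interrupt_vector[VECTOR_SIZE];
static int last_serviced = -1;          /* records which vector last ran */

static void keyboard_isr(void) { last_serviced = 33; }
static void default_isr(void)  { last_serviced = -2; }

/* The OS registers a handler for a given interrupt number. */
void register_isr(int num, isr_t handler) {
    interrupt_vector[num] = handler;
}

/* Conceptually what the CPU does when a controller asserts the
 * interrupt-request line: use the interrupt number as an index
 * into the vector and jump to the handler found there. */
void dispatch_interrupt(int num) {
    isr_t h = interrupt_vector[num];
    (h ? h : default_isr)();
}
```

Vector 33 is chosen here because it falls in the maskable, device-generated range (32-255) of the Intel event-vector table discussed above.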
1.2.2 STORAGE STRUCTURE
➢ The CPU can load instructions only from memory, so any programs must first be loaded into
memory to run.
➢ General-purpose computers run most of their programs from rewritable memory,
called main memory (also called random-access memory, or RAM).
➢ Main memory commonly is implemented in a semiconductor technology called Dynamic
Random-Access Memory (DRAM).
Table 1.1 Intel processor event-vector table.
➢ Computers use other forms of memory as well. For example, the first program to run on computer
power-on is a bootstrap program, which then loads the operating system.
➢ Since RAM is volatile, the bootstrap program is not stored in RAM; it is stored in
EEPROM.
➢ The computer uses electrically erasable programmable read-only memory (EEPROM) and other
forms of firmware storage that is infrequently written to and is non-volatile.
➢ All forms of memory provide an array of bytes. Each byte has its own address.
Interaction is achieved through a sequence of load or store instructions to specific memory
addresses.
➢ The load instruction moves a byte or word from main memory to an internal register
within the CPU, whereas the store instruction moves the content of a register to main memory.

Von-Neumann Architecture
➢ The Von Neumann architecture defines a typical instruction-execution cycle.
➢ The cycle first fetches an instruction from memory and stores that instruction in the instruction register.
➢ The instruction is then decoded, and operands are fetched from memory and stored in
some internal register.
➢ After the instruction is executed, the result may be stored back in memory.
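The cycle above can be illustrated with a toy accumulator machine in C. The instruction encoding (opcode × 100 + memory address) is invented purely for this sketch:

```c
/* Opcodes for a toy accumulator machine. */
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };

/* One fetch-decode-execute cycle per loop iteration, as in the Von
 * Neumann description: fetch into the instruction register, decode,
 * fetch operands, execute, possibly store the result back in memory. */
int run(int mem[]) {
    int pc = 0, acc = 0;
    for (;;) {
        int ir = mem[pc++];                  /* fetch */
        int op = ir / 100, addr = ir % 100;  /* decode */
        switch (op) {                        /* execute */
        case OP_LOAD:  acc = mem[addr];  break;
        case OP_ADD:   acc += mem[addr]; break;
        case OP_STORE: mem[addr] = acc;  break;
        default:       return acc;           /* OP_HALT */
        }
    }
}
```

For example, the program {110, 211, 312, 0} loads mem[10], adds mem[11], stores the sum into mem[12], and halts.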
1.2.2.1 Secondary Storage Devices
➢ Most computer systems provide secondary storage as an extension of main memory.
➢ The main requirement for secondary storage is the ability to hold large quantities of data
permanently.
➢ The most common secondary-storage devices are hard-disk drives (HDDs) and
nonvolatile memory (NVM) devices, which provide storage for both programs and data.
➢ Most programs (system and application) are stored in secondary storage until they are
loaded into memory.
➢ Many programs use secondary storage as both the source and the destination of their processing.
➢ Secondary storage is also much slower than main memory. Hence, the proper
management of secondary storage is of central importance to a computer system.
➢ Storage used for the special purpose of holding backup copies of material stored on other devices is called tertiary
storage.
1.2.2.2 Hierarchy of Storage-device
➢ The wide variety of storage systems can be organized in a hierarchy (Figure 1.5) according to
storage capacity and access time.
➢ The five classic levels in a system's memory hierarchy are registers, cache memory, main memory,
magnetic disk, and magnetic tape.
➢ The top four levels of memory in the figure are constructed using semiconductor memory, which
consists of semiconductor-based electronic circuits.
➢ NVM devices, at the fourth level, have several variants but are faster than hard disks.
The most common form of NVM device is flash memory, which is popular in mobile devices such
as smartphones and tablets. Increasingly, flash memory is being used for long-term storage on
laptops, desktops, and servers as well.
➢ External or Secondary Memory
❖ It consists of Magnetic Tape, Optical Disk, Magnetic Disk, i.e., it includes peripheral
storage devices that are accessible by the system’s processor via I/O Module.
➢ Internal Memory or Primary Memory
❖ It consists of CPU registers, Cache Memory, and Main Memory. It is accessible
directly by the processor.

Fig 1.5 Storage-device hierarchy


1.2.3 I/O STRUCTURE
➢ A large portion of operating system code is dedicated to managing I/O, because of its importance
to the reliability and performance of a system and because of the varying nature of the devices.
Fig 1.6 How a modern computer system works.
➢ A general-purpose computer system consists of multiple devices, all of which exchange data via a
common bus.
➢ Interrupt-driven I/O is fine for moving small amounts of data but can
produce high overhead when used for bulk data movement.
➢ To solve this problem, direct memory access (DMA) is used. After setting up buffers, pointers, and
counters for the I/O device, the device controller transfers an entire block of data directly to or
from the device and main memory, with no intervention by the CPU.
➢ Only one interrupt is generated per block, to tell the device driver that the operation
has completed, rather than the one interrupt per byte generated for low-speed devices.
➢ While the device controller is performing these operations, the CPU is available to accomplish
other work.
➢ Some high-end systems use a switch rather than a bus architecture. On these systems,
multiple components can talk to other components concurrently, rather than competing for
cycles on a shared bus.
1.3 COMPUTER SYSTEM ARCHITECTURE
Recall that a computer system can be divided into four components:
1. Hardware – provides basic computing resources: CPU, memory, I/O devices.
2. Operating system – controls and coordinates use of hardware among various applications and users.
3. Application programs – define the ways in which the system resources are used to solve the computing
problems of the users. Eg: word processors, compilers, web browsers, database systems, video games.
4. Users – people, machines, other computers.

Computer architecture means the construction/design of a computer. A computer system may be
organized in different ways. Some computer systems have a single processor and others have
multiple processors. So, based on the processors used in computer systems, they are categorized
into the following types.

1. Single-processor system

2. Multiprocessor system

3. Clustered Systems:

1. Single-Processor Systems:

Some computers use only one processor, such as microcomputers (or personal computers, PCs).
On a single-processor system, there is only one CPU that performs all the activities in the
computer system. However, most of these systems have other special-purpose processors, such
as I/O processors that move data quickly among the different components of the computer. These
processors run only a limited set of system programs and do not run user programs. Sometimes
they are managed by the operating system. Similarly, PCs contain a special-purpose
microprocessor in the keyboard, which converts the keystrokes into computer codes to be
sent to the CPU. The use of special-purpose microprocessors is common in microcomputers,
but it does not make such a system a multiprocessor. A system that has only one general-purpose
CPU is considered a single-processor system.

2. Multiprocessor Systems:

In a multiprocessor system, two or more processors work together. In this system, multiple programs
(more than one program) are executed on different processors at the same time. This type of
processing is known as multiprocessing. Some operating systems have multiprocessing features.
UNIX is an example of a multiprocessing operating system, and some versions of Microsoft Windows
also support multiprocessing.

A multiprocessor system is also known as a parallel system. Usually, the processors of a
multiprocessor system share the common system bus, clock, memory, and peripheral devices.
Such a system is very fast in data processing.

Types of Multiprocessor Systems:

The multiprocessor systems are further divided into two types:
(i) Asymmetric multiprocessing system
(ii) Symmetric multiprocessing system

(i) Asymmetric Multiprocessing System (AMS):

The multiprocessing system, in which each processor is assigned a specific task, is known as
Asymmetric Multiprocessing System. For example, one processor is dedicated for handling
user's requests, one processor is dedicated for running application program, and one processor
is dedicated for running image processing and so on. In this system, one processor works as
master processor, while other processors work as slave processors. The master processor
controls the operations of system. It also schedules and distributes tasks among the slave
processors. The slave processors perform the predefined tasks.

(ii) Symmetric Multiprocessing System (SMP):

The multiprocessing system, in which multiple processors work together on the same task, is
known as Symmetric Multiprocessing System. In this system, each processor can perform all
types of tasks. All processors are treated equally and no master-slave relationship exists
between the processors.
For example, different processors in the system can communicate with each other. Similarly,
an I/O operation can be processed on any processor. However, I/O must be controlled to ensure that
the data reaches the appropriate processor. Because all the processors share the same
memory, the input data given to the processors and their results must be separately
controlled. Today, all modern operating systems, including Windows and Linux, provide
support for SMP.
It must be noted that, in the same computer system, the asymmetric and symmetric
multiprocessing techniques can be used through different operating systems.

A Dual-Core Design

3. Clustered Systems:

Clustered system is another form of multiprocessor system. This system also contains multiple
processors, but it differs from a multiprocessor system. The clustered system consists of two or
more individual systems that are coupled together. In a clustered system, the individual systems (or
clustered computers) share the same storage and are linked together via a Local Area Network
(LAN).

A layer of cluster software runs on the cluster nodes. Each node can monitor one or more of
the other nodes over the LAN. If the monitored machine fails due to some technical fault (or
due to other reason), the monitoring machine can take ownership of its storage. The
monitoring machine can also restart the applications that were running on the failed machine.
The users of the applications see only an interruption of service.

Types of Clustered Systems:

Like multiprocessor systems, clustered systems can also be of two types:
(i) Asymmetric Clustered System
(ii) Symmetric Clustered System
(i). Asymmetric Clustered System:
In asymmetric clustered system, one machine is in hot-standby mode while the other
machine is running the application. The hot-standby host machine does nothing. It only
monitors the active server. If the server fails, the hot-standby machine becomes the active
server.
(ii). Symmetric Clustered System:
In a symmetric clustered system, multiple hosts (machines) run the applications, and they also
monitor each other. This mode is more efficient than the asymmetric system, because it uses all
the available hardware. It is used only if more than one application is available to
run.

1.4 OPERATING SYSTEM OPERATIONS

1.4.1 Multiprogramming and Multitasking

1) Multiprogramming
Multiprogramming keeps several programs in memory at the same time so that the CPU always has a
program to execute. When one program must wait (for example, for I/O to complete), the operating
system switches the CPU to another program, increasing CPU utilization.
2) Multitasking
Multitasking (time sharing) is a logical extension of multiprogramming: the CPU switches among
jobs so frequently that users can interact with each program while it is running.
1.4.2 Dual-Mode Operation

In order to ensure the proper execution of the operating system, we must be able to distinguish
between the execution of operating-system code and user defined code. The approach taken by
most computer systems is to provide hardware support that allows us to differentiate among
various modes of execution.

At the very least, we need two separate modes of operation: user mode and kernel mode.
A bit, called the mode bit, is added to the hardware of the computer to indicate the current mode:
kernel (0) or user (1). With the mode bit, we are able to distinguish between a task that is
executed on behalf of the operating system and one that is executed on behalf of the user. When
the computer system is executing on behalf of a user application, the system is in user
mode. However, when a user application requests a service from the operating system (via
a system call), it must transition from user to kernel mode to fulfill the request.

At system boot time, the hardware starts in kernel mode. The operating system is then loaded
and starts user applications in user mode. Whenever a trap or interrupt occurs, the hardware
switches from user mode to kernel mode (that is, changes the state of the mode bit to 0). Thus,
whenever the operating system gains control of the computer, it is in kernel mode. The system
always switches to user mode (by setting the mode bit to 1) before passing control to a user
program.
The dual mode of operation provides us with the means for protecting the
operating system from errant users, and errant users from one another. We
accomplish this protection by designating some of the machine instructions that
may cause harm as privileged instructions. The hardware allows privileged
instructions to be executed only in kernel mode. If an attempt is made to execute
a privileged instruction in user mode, the hardware does not execute the
instruction but rather treats it as illegal and traps it to the operating system.
The instruction to switch to kernel mode is an example of a privileged
instruction. Other examples include I/O control, timer management, and
interrupt management.
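The mode-bit mechanism can be illustrated with a small C sketch. The names here (`mode_bit`, `exec_privileged`, `syscall_stub`) are hypothetical simplifications of what the hardware and the trap mechanism actually do:

```c
enum mode { KERNEL = 0, USER = 1 };     /* the mode bit: kernel (0), user (1) */
static enum mode mode_bit = USER;

/* A privileged instruction: the hardware refuses to execute it in
 * user mode and instead traps to the operating system. */
int exec_privileged(void) {
    if (mode_bit == USER)
        return -1;                      /* trap: treated as illegal */
    return 0;                           /* executed successfully */
}

/* A system call: the trap switches the mode bit to kernel (0), the
 * kernel does the privileged work, and the mode bit is set back to
 * user (1) before control returns to the user program. */
int syscall_stub(void) {
    mode_bit = KERNEL;
    int result = exec_privileged();
    mode_bit = USER;
    return result;
}
```

Calling `exec_privileged` directly from user mode fails, while requesting the same work through `syscall_stub` succeeds and leaves the system back in user mode.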

1.5 RESOURCE MANAGEMENT

Resource management in an operating system (OS) refers to the efficient and fair allocation,
utilization, and management of resources (such as CPU time, memory, I/O devices, etc.) among
processes or programs running in a system. The goal is to optimize the performance of the system
while ensuring that processes have access to the resources they need to execute.

Process Management

• A process is a program in execution, and the OS must manage processes by allocating CPU
time, memory, and other resources. The OS also ensures that processes do not interfere with
each other and that they complete their execution without errors.
• Creating and deleting both user and system processes
• Scheduling processes and threads on the CPUs
• Suspending and resuming processes
• Providing mechanisms for process synchronization
• Providing mechanisms for process communication
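As a tiny illustration of the scheduling activity listed above, the following C sketch rotates a fixed ready queue round-robin style. It is a deliberate over-simplification (a real scheduler tracks far more state per process); the queue contents and the name `schedule_next` are invented for the example.

```c
/* Ready queue of process IDs; index 0 is the front of the queue. */
#define NPROC 3
static int ready[NPROC] = {1, 2, 3};

/* Round-robin dispatch: pick the process at the head of the ready
 * queue, then rotate it to the tail so every process gets a turn. */
int schedule_next(void) {
    int pid = ready[0];
    for (int i = 0; i + 1 < NPROC; i++)
        ready[i] = ready[i + 1];
    ready[NPROC - 1] = pid;
    return pid;
}
```

Repeated calls dispatch processes 1, 2, 3, 1, 2, 3, … so CPU time is shared fairly among the ready processes.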

Memory Management

• Memory is a limited resource, so the OS must manage memory allocation for processes
efficiently. The OS keeps track of which parts of memory are in use and allocates memory to
processes as needed.
• Techniques:
o Contiguous memory allocation: Processes are assigned a single contiguous block of
memory.
o Paged memory management: The process's memory is divided into fixed-size blocks
(pages), and physical memory is divided into blocks of the same size (frames).
o Segmented memory management: Memory is divided into different segments based
on the logical divisions of a program (code, data, stack, etc.).
o Virtual memory: Uses disk space to extend physical memory, allowing larger
programs to run on a system than would otherwise fit in physical memory.
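The paged memory technique above can be sketched as a simple address translation in C, assuming an illustrative 256-byte page size and a flat page table that maps page numbers to frame numbers:

```c
#define PAGE_SIZE 256   /* illustrative page/frame size in bytes */

/* Translate a logical address into a physical address: split the
 * logical address into (page number, offset), look the page up in
 * the page table to get its frame, then recombine with the offset. */
int translate(const int page_table[], int logical) {
    int page   = logical / PAGE_SIZE;
    int offset = logical % PAGE_SIZE;
    return page_table[page] * PAGE_SIZE + offset;
}
```

With the page table {5, 2, 7, 1}, logical address 300 (page 1, offset 44) maps to physical address 2 × 256 + 44 = 556.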

File System Management

• The file system provides a mechanism to store, organize, and manage data on storage devices
(like hard drives or SSDs). The OS needs to manage how files are created, accessed, and
deleted, and how data is stored.
• Key tasks:
o File allocation: Deciding how files are stored on the disk (contiguous, linked, indexed).
o File protection: Ensuring that only authorized users or processes can access or modify
files.
o File system organization: Organizing data into directories and subdirectories for easier
navigation.

Mass-Storage Management

As we have already seen, the computer system must provide secondary storage to back up main
memory. Most modern computer systems use HDDs and NVM devices as the principal on-line storage
media for both programs and data.
Most programs—including compilers, web browsers, word processors, and games—are stored on
these devices until loaded into memory.
The programs then use the devices as both the source and the destination of their processing. Hence,
the proper management of secondary storage is of central importance to a computer system.
The operating system is responsible for the following activities in connection with secondary storage
management:
• Mounting and unmounting
• Free-space management
• Storage allocation
• Disk scheduling
• Partitioning
• Protection

I/O Management in Operating Systems

Input/Output (I/O) management is a crucial component of any operating system (OS). It encompasses
the techniques and mechanisms an OS uses to manage the input and output operations of the computer
system, ensuring efficient and effective communication between hardware and software components.
Principles of I/O Management

1. I/O Devices and Controllers

I/O devices, such as keyboards, mice, printers, and storage devices, interact with the computer through
device controllers. These controllers act as intermediaries, converting signals from the devices into a
format the computer can understand and vice versa.

2. I/O Operations

I/O operations can be categorized into three main types:
• Character I/O: Involves reading and writing one character at a time (e.g., keyboards).
• Block I/O: Involves reading and writing large blocks of data (e.g., disk drives).
• Network I/O: Involves data transfer over a network.

3. Buffering

Buffering is a technique where data is temporarily stored in a buffer while being transferred between
two locations. This helps in accommodating the speed difference between the I/O device and the CPU.
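A buffer of this kind is commonly implemented as a ring (circular) buffer. The following C sketch, with invented names `buf_put` and `buf_get`, shows a fast producer (the device) and a slower consumer (the CPU) sharing a fixed-size buffer:

```c
#define BUFSZ 4
static int buf[BUFSZ];
static int head, tail, count;   /* producer writes at head, consumer reads at tail */

/* Device (producer) deposits an item; returns 0 if the buffer is full. */
int buf_put(int c) {
    if (count == BUFSZ) return 0;
    buf[head] = c;
    head = (head + 1) % BUFSZ;  /* wrap around: that is the "ring" */
    count++;
    return 1;
}

/* CPU (consumer) drains an item; returns -1 if the buffer is empty. */
int buf_get(void) {
    if (count == 0) return -1;
    int c = buf[tail];
    tail = (tail + 1) % BUFSZ;
    count--;
    return c;
}
```

When the producer outruns the consumer the buffer fills and `buf_put` refuses new data, absorbing the speed difference between the two sides until the consumer catches up.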

4. Caching

Caching involves storing copies of data in a high-speed storage (cache) to reduce access time.
Frequently accessed data is kept in the cache, reducing the need for time-consuming I/O operations.

5. Spooling

Spooling (Simultaneous Peripheral Operations On-Line) is a method where data is temporarily held in a
buffer (spool) until the device is ready to process it. It is commonly used in print spooling.

6. Device Drivers

Device drivers are specialized software modules that enable the OS to communicate with hardware
devices. They provide a standardized interface for I/O operations, abstracting the hardware specifics.
7. Direct Memory Access (DMA)

DMA allows certain hardware subsystems to access main system memory independently of the CPU.
This enhances performance by offloading I/O operations from the CPU.

Cache Management

Cache management in an operating system refers to the process of managing data stored in caches to
improve system performance by reducing access time to frequently used data. The OS uses different
types of caches, such as CPU cache, disk cache, and memory cache, to speed up data access.

Key Concepts in Cache Management:

1. CPU Cache:
o Stores frequently accessed data and instructions.
o Divided into multiple levels (L1, L2, L3).
o Uses cache replacement policies (e.g., LRU, FIFO) to decide which data to evict.
2. Disk Cache:
o Temporarily stores frequently accessed data from disk in faster memory (RAM).
o Reduces disk I/O operations, improving system performance.
o Uses algorithms like write-back (write data to disk later) and write-through (write
data to disk immediately).
3. Memory Cache:
o Stores frequently accessed data pages or file blocks in RAM to speed up access.
o Uses page replacement algorithms (e.g., LRU, Clock) to manage which pages to keep
in memory.
4. Buffering:
o Temporarily holds data before it is sent to or from a device, helping to manage speed
differences between devices and the CPU.
5. Cache Coherence (in multi-core systems):
o Ensures consistency between the caches of different CPU cores, especially when one
core modifies data.
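The LRU replacement policy mentioned above can be sketched in C as a small fixed-capacity cache kept in recency order (index 0 is most recently used, the last slot is the eviction victim). This is an illustrative toy; real implementations use hash tables and linked lists for efficiency.

```c
#include <string.h>

#define CAP 3
static int cache[CAP];
static int used;                /* number of valid entries */

/* Access a page: returns 1 on a hit, 0 on a miss. On a miss the
 * least recently used entry (the last slot) is evicted when full;
 * in both cases the accessed page moves to the front (MRU). */
int lru_access(int page) {
    for (int i = 0; i < used; i++) {
        if (cache[i] == page) {                       /* hit */
            memmove(&cache[1], &cache[0], i * sizeof(int));
            cache[0] = page;
            return 1;
        }
    }
    int n = (used < CAP) ? used : CAP - 1;            /* miss */
    memmove(&cache[1], &cache[0], n * sizeof(int));   /* evict LRU if full */
    cache[0] = page;
    if (used < CAP) used++;
    return 0;
}
```

The access sequence 1, 2, 3, 1, 4 with capacity 3 hits only on the second access to page 1, and the access to page 4 evicts page 2 (the least recently used at that point).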

1.6 SECURITY AND PROTECTION

Security and protection in operating systems (OS) are critical components that ensure the integrity,
confidentiality, and availability of the system and its resources. Here's a breakdown of the key aspects
of security and protection in OS:

1. User Authentication and Authorization

• Authentication: Verifying the identity of users or processes. This can be done using
passwords, biometrics, two-factor authentication (2FA), or certificates.
• Authorization: Once authenticated, the OS checks if the user or process has permission to
perform certain actions. This is typically controlled by access control mechanisms like user
roles and access control lists (ACLs).

2. Access Control Mechanisms

• Discretionary Access Control (DAC): The owner of the resource (e.g., file) decides who can
access it and in what way (read, write, execute).
• Mandatory Access Control (MAC): The OS enforces strict access policies, and access
permissions are based on security labels (e.g., security clearance levels).
• Role-Based Access Control (RBAC): Access is granted based on roles, which are predefined
by administrators. This is used in many enterprise environments.
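Discretionary access control can be illustrated with a UNIX-style permission-bit check in C. The structure and function names here are invented for the example; real systems carry group permissions, ACLs, and more.

```c
/* Classic rwx permission bits, the core DAC mechanism: the owner
 * sets the mode; the OS checks it on every access attempt. */
enum { PERM_R = 4, PERM_W = 2, PERM_X = 1 };

struct file {
    int owner;        /* user ID of the file's owner */
    int owner_mode;   /* PERM_* mask for the owner */
    int other_mode;   /* PERM_* mask for everyone else */
};

/* Returns 1 if user `uid` may perform every operation in the
 * `want` mask on file f, 0 otherwise. */
int allowed(const struct file *f, int uid, int want) {
    int mode = (uid == f->owner) ? f->owner_mode : f->other_mode;
    return (mode & want) == want;
}
```

For a file owned by user 1000 with mode rw- for the owner and r-- for others, the owner may write but any other user may only read.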

3. Encryption

• Data Encryption: Ensures that data stored on disk or transmitted over a network is unreadable
without the proper decryption key. This protects data from unauthorized access.
• File System Encryption: OS-level encryption (e.g., BitLocker, FileVault) ensures that the
entire file system is encrypted, protecting it if the hardware is stolen.

4. Process Isolation

• The OS ensures that processes run in isolated memory spaces to prevent unauthorized access or
interference. This is typically achieved through virtual memory management.
• Sandboxing: A technique used to limit the actions of a program to a controlled environment,
protecting the system from potential malicious behavior.

5. Secure Boot and Kernel Protection

 Secure Boot: Ensures that the OS is only loaded if it hasn't been tampered with. This is
achieved through digital signatures and checks during the boot process.
 Kernel Protection: The kernel (core part of the OS) is protected against unauthorized
modifications or exploits. Techniques like Address Space Layout Randomization (ASLR) and
Data Execution Prevention (DEP) are commonly used.

6. Intrusion Detection and Prevention Systems (IDPS)

 Intrusion Detection Systems (IDS): Monitors network traffic or system behavior for signs of
malicious activity.
 Intrusion Prevention Systems (IPS): Actively blocks any suspicious or malicious activities
identified by the IDS.

7. Audit Logging and Monitoring

 OSs maintain audit logs to record system events, which can be used for detecting and analyzing
security breaches or policy violations.
 Security Information and Event Management (SIEM): Systems that collect and analyze
logs to provide real-time insights into security incidents.

8. Patch Management and Updates

 Keeping the OS and installed software up to date is crucial for patching security vulnerabilities.
Regular updates help mitigate known security threats.

9. Malware Protection

 Antivirus Software: Helps detect and remove malicious software like viruses, worms, and
Trojans.
 Sandboxing for Malware: Some OS features isolate suspicious files in a "sandbox" where
they cannot harm the system.
10. Virtualization and Containerization

 Virtualization: Allows multiple virtual machines (VMs) to run on a single physical machine,
providing isolation and reducing the impact of security breaches.
 Containerization: A lightweight form of virtualization, where applications run in isolated
containers, limiting the potential damage if a container is compromised.

11. Security Policies and Enforcement

 Operating systems define and enforce security policies that control how resources are accessed
and used. Examples include policies related to password strength, file access, and process
execution.

12. Firewalls and Network Security

 Firewall: Controls incoming and outgoing network traffic based on predetermined security
rules. Firewalls can be configured to protect against unauthorized access to system resources.
 Network Isolation: Ensures that different parts of the system or network are isolated from
each other, preventing lateral movement in case of an attack.

13. Backup and Recovery

 Regular backups of critical data are essential for recovering from security incidents like
ransomware attacks, system crashes, or data corruption.

1.7 DISTRIBUTED SYSTEMS

A distributed system in an operating system (OS) is a network of independent computers that work
together to appear as a single unified system to users. Here’s a short overview:

Key Features:

1. Transparency: The system hides its distributed nature, making it appear as a single system
(e.g., access, location, and failure transparency).
2. Scalability: Can grow in size (users or resources) without losing performance.
3. Fault Tolerance: The system continues to work even if some components fail, often using
replication and redundancy.
4. Resource Sharing: Resources (CPU, storage) are shared across multiple nodes in the system.
5. Openness: The system is open and adheres to standards for integration of different
components.

Types:

1. Client-Server: One machine (server) provides resources to many others (clients).


2. Peer-to-Peer: All machines have equal responsibilities, sharing resources directly.
3. Cloud Computing: Scalable computing resources provided over the internet.
4. Cluster/Grid Computing: Multiple machines working together for high-performance tasks.

Communication:

 Message Passing and Remote Procedure Calls (RPC) allow distributed nodes to
communicate.
 Distributed File Systems (e.g., NFS, HDFS) enable sharing of files across machines.
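The message-passing style of communication can be sketched with a pair of connected sockets standing in for two nodes. This is only an illustration of the request/reply pattern, not a distributed protocol; in a real system the two endpoints would live on different machines and exchange serialized messages over the network.

```python
import socket

def demo_message_passing():
    """One endpoint sends a request; the other receives it and replies,
    mimicking message passing between two distributed nodes."""
    client, server = socket.socketpair()
    client.sendall(b"ping")                 # client node sends a message
    request = server.recv(4)                # server node receives it
    server.sendall(b"pong" if request == b"ping" else b"err")
    reply = client.recv(4)                  # client receives the reply
    client.close()
    server.close()
    return reply
```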
Challenges:

 Concurrency: Multiple processes running across nodes need coordination.


 Consistency: Ensuring data consistency across distributed nodes.
 Fault Tolerance: Handling failures without affecting the system.
 Latency and Network Issues: Communication delays over the network can impact
performance.

1.8 KERNEL DATA STRUCTURES

Next, we discuss a subject crucial to operating-system implementation: the organisation of data within the system. In this section, we provide a quick overview of a number of fundamental data structures that are frequently used in operating systems. Readers who want further information on these structures, or on others, should consult the bibliography at the end of the chapter.

Lists, Stacks, and Queues

An array is a simple data structure in which each element can be accessed directly. For instance, main memory is constructed as an array. If the data item being stored requires more than one byte, multiple bytes are allocated to it, and the item is addressed as item number × item size. But what about storing an item whose size may vary? And what if one item must be removed while the relative positions of the other items are preserved? In such situations, arrays give way to other data structures.

After arrays, lists are perhaps the most fundamental data structures in computer science. Whereas each item in an array can be accessed directly, the items in a list must be accessed in a particular order. That is, a list represents a collection of data values as a sequence. The most common way to implement this structure is a linked list, in which items are linked to one another. There are several forms of linked lists:

• In a singly linked list, each item points to its successor, as shown in Figure 1.13.
• In a doubly linked list, an item can refer either to its predecessor or to its successor, as shown in Figure 1.14.
• In a circularly linked list, the last element refers to the first element rather than to null, as shown in Figure 1.15.

Linked lists make it simple to add and remove items and can accommodate items of varying sizes. One potential drawback of using a list is that the time to access a specific item in a list of size n is linear, O(n), since in the worst case all n entries must be traversed. Kernel algorithms will sometimes employ lists directly. More often, however, lists are used to build more powerful data structures, such as stacks and queues.

A stack is a sequentially ordered data structure that adds and removes items according to the last in, first out (LIFO) principle: the last item placed on a stack is the first item removed. The operations for adding and removing items from a stack are known as push and pop, respectively. An operating system frequently uses a stack when invoking function calls. Parameters, local variables, and the return address are pushed onto the stack when a function is called; returning from the function call pops those items off the stack. A queue, in contrast, is a sequentially ordered data structure that uses the first in, first out (FIFO) principle: items are removed from a queue in the order in which they were inserted.
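The push/pop behavior described above can be sketched with a small stack class. The call-frame contents at the end are illustrative only; a real OS keeps call stacks in hardware-supported memory, not Python objects.

```python
class Stack:
    """LIFO stack: items are pushed and popped at the same end,
    so the last item added is the first item removed."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()   # removes the most recently pushed item

    def __len__(self):
        return len(self._items)

# Simulating a function call: push the return address and a local
# variable; returning from the call pops them off again.
frames = Stack()
frames.push(("return_addr", 0x42))
frames.push(("local_var", 7))
```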
Trees

A tree is a data structure that can be used to represent data hierarchically. Data values in a tree
structure are linked through parent–child relationships. In a general tree, a parent may have an
unlimited number of children. In a binary tree, a parent may have at most two children, which we term
the left child and the right child. A binary search tree additionally requires an ordering between the
parent’s two children in which left child <= right child. Figure 1.20 provides an example of a binary
search tree. When we search for an item in a binary search tree, the worst-case performance is O(n)
(consider how this can occur). To remedy this situation, we can use an algorithm to create a balanced
binary search tree. Here, a tree containing n items has at most lg n levels, thus ensuring worst-case
performance of O(lg n). We shall see in Section 5.7.1 that Linux uses a balanced binary
search tree (known as a red-black tree) as part of its CPU-scheduling algorithm.
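A plain (unbalanced) binary search tree with the ordering described above can be sketched as follows. This sketch omits the balancing step, so its worst case remains O(n); a red-black tree adds rebalancing on insert to guarantee O(lg n).

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Insert while keeping the ordering: left child <= parent < right child."""
    if root is None:
        return Node(key)
    if key <= root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    """Descend one branch per step: O(height) comparisons, which is
    O(lg n) for a balanced tree but O(n) if the tree degenerates."""
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None
```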

Hash Functions and Maps

A hash function takes data as its input, performs a numeric operation on the data, and returns a
numeric value. This numeric value can then be used as an index into a table (typically an array) to
quickly retrieve the data. Whereas searching for a data item through a list of size n can require up to
O(n) comparisons, using a hash function for retrieving data from a table can be as good as O(1),
depending on implementation details. Because of this performance, hash functions are used
extensively in operating systems. One potential difficulty with hash functions is that two unique inputs
can result in the same output value— that is, they can link to the same table

location. We can accommodate this hash collision by having a linked list at the table location that
contains all of the items with the same hash value. Of course, the more collisions there are, the less
efficient the hash function is. One use of a hash function is to implement a hash map, which associates
(or maps) [key:value] pairs using a hash function. Once the mapping is established, we can apply the
hash function to the key to obtain the value from the hash map (Figure 1.21). For example, suppose
that a user name is mapped to a password. Password authentication then proceeds as follows: a user
enters her user name and password. The hash function is applied to the user name, which is then used
to retrieve the password. The retrieved password is then compared with the password entered by the
user for authentication.
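The hash-map idea above, including collision handling by chaining, can be sketched in a few lines. The table size and key/value choices here are illustrative; production hash maps also resize the table as it fills.

```python
class HashMap:
    """[key:value] map: a hash function indexes into a fixed table, and
    collisions are accommodated by chaining entries in a per-slot list."""
    def __init__(self, size=8):
        self.table = [[] for _ in range(size)]

    def _index(self, key):
        return hash(key) % len(self.table)   # the hash function maps key -> slot

    def put(self, key, value):
        bucket = self.table[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)     # replace an existing mapping
                return
        bucket.append(value := (key, value)) and None or bucket.__len__()

    def get(self, key):
        for k, v in self.table[self._index(key)]:
            if k == key:
                return v
        return None
```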

Bitmaps

A bitmap is a string of n binary digits that can be used to represent the status of n items. For example, suppose we have several resources, and the availability of each resource is indicated by the value of a binary digit: 0 means that the resource is available, while 1 indicates that it is unavailable (or vice versa). The value of the ith position in the bitmap is associated with the ith resource. As an example, consider the bitmap shown below:

001011101

Resources 2, 4, 5, 6, and 8 are unavailable; resources 0, 1, 3, and 7 are available. The power of bitmaps becomes apparent when we consider their space efficiency. If we
were to use an eight-bit Boolean value instead of a single bit, the resulting data structure would be
eight times larger. Thus, bitmaps are commonly used when there is a need to represent the availability
of a large number of resources. Disk drives provide a nice illustration. A medium-sized disk drive
might be divided into several thousand individual units, called disk blocks. A bitmap can be used to
indicate the availability of each disk block. In summary, data structures are pervasive in operating
system implementations. Thus, we will see the structures discussed here, along with others, throughout
this text as we explore kernel algorithms and their implementations.
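The bitmap described above can be sketched by packing the n availability bits into a single integer. The example reproduces the text's bitmap 001011101, reading the leftmost digit as position 0.

```python
class Bitmap:
    """n status bits packed into one integer: bit i describes resource i
    (0 = available, 1 = unavailable) -- eight times smaller than using
    a full byte-sized Boolean per resource."""
    def __init__(self, n):
        self.n, self.bits = n, 0

    def set_unavailable(self, i):
        self.bits |= (1 << i)

    def set_available(self, i):
        self.bits &= ~(1 << i)

    def is_available(self, i):
        return (self.bits >> i) & 1 == 0

# The text's example 001011101, with the leftmost digit as position 0:
bm = Bitmap(9)
for i in (2, 4, 5, 6, 8):
    bm.set_unavailable(i)
```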

1.9 OPERATING SYSTEM SERVICES

An Operating System provides services to both the users and to the programs.
➢ It provides an environment for the execution of programs.
➢ It provides users the services to execute the programs in a convenient manner.
Following are a few common services provided by an operating system
❖ Program execution
❖ I/O operations
❖ File System manipulation
❖ Communication
❖ Error Detection
❖ Resource Allocation
❖ Protection
❖ User interface

1.9.1 Program execution


➢ The system must be able to load a program into memory and to run that program.
➢ The program must be able to end its execution, either normally or abnormally.
1.9.2 User interface.
1.9.2.1 All operating systems have a user interface (UI).
1.9.2.2 A Graphical User Interface (GUI) is a window system with a mouse that
serves as a pointing device to direct I/O, choose from menus, and make selections,
and a keyboard to enter text.
1.9.2.3 Mobile systems such as phones and tablets provide a Touch-Screen interface,
enabling users to slide their fingers across the screen or press buttons on the screen to select
choices.
1.9.2.4 Command-Line Interface (CLI), which uses text commands and a method for
entering them. Some systems provide two or all three of these variations.
1.9.3 I/O Operation
1.9.3.1 A running program may require I/O, which may involve a file or an I/O device.
1.9.3.2 For specific devices, special functions may be desired (such as
reading from a network interface or writing to a file system).
1.9.3.3 For efficiency and protection, users usually cannot control I/O devices directly.
1.9.3.4 Therefore, the operating system must provide a means to do I/O.
1.9.4 File system manipulation
1.9.4.1 Program needs to read a file or write a file.
1.9.4.2 The operating system gives the permission to the program for operation on file.
1.9.4.3 Permission varies from read-only, read-write, denied and so on.
1.9.4.4 Operating System provides an interface to the user to create/delete files.
1.9.4.5 Operating System provides an interface to the user to create/delete directories.
1.9.4.6 Operating System provides an interface to create the backup of file system.
1.9.5 Communication
1.9.5.1 Two processes often require data to be transferred between them
1.9.5.2 Both the processes can be on one computer or on different
computers but are connected through a computer network.
1.9.5.3 Communication may be implemented by two methods, either by Shared
Memory or by Message Passing.
1.9.6 Error Detection
1.9.6.1 Errors can occur anytime and anywhere. An error may occur in CPU, in I/O
devices or in the memory hardware.
1.9.6.2 The OS constantly checks for possible errors.
1.9.6.3 The OS takes an appropriate action to ensure correct and consistent computing.
1.9.7 Resource Allocation
1.9.7.1 In case of multi-user or multi-tasking environment, resources such as main
memory, CPU cycles and files storage are to be allocated to each user or job.
1.9.7.2 The OS manages all kinds of resources using schedulers.
1.9.7.3 CPU scheduling algorithms are used for better utilization of CPU.
1.9.8 Protection and security
1.9.8.1 The OS ensures that all access to system resources is controlled.
1.9.8.2 Security of the system from outsiders is done by means of a password, to gain
access to system resources.
1.9.8.3 It extends to defending external I/O devices, including network adapters,
from invalid access attempts and recording all such connections for detection of break-ins.
1.9.9 Logging
1.9.9.1 To keep track of which programs use how much and what kinds of computer resources.
1.9.9.2This record keeping may be used for accounting (so that users can be billed) or simply for
accumulating usage statistics.
1.9.9.3 Usage statistics may be a valuable tool for system administrators who wish to
reconfigure
the system to improve computing services.
1.10 SYSTEM CALLS
➢ System calls provide the interface between a process and the operating system.
➢ System call is how a user program requests a service from an operating system's
kernel that it does not normally have permission to run.
1.10.1 Example for System Call
➢ Consider that there is a user program that will read the content of one file and will
copy it into another file. Here the first information that the system requires is the
name of two files, the first file from where the data has to be read (input file) and the
second file where the data has to be copied (output file). Depending on the type of
operating system, a different sequence of system calls will be required.
➢ A sequence of system calls in an interactive system will be:
Fig: 1.10 Examples of System Calls
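The copy example above can be sketched with Python's low-level `os` wrappers, which map closely onto the open/read/write/close system calls. The file names and chunk size are illustrative.

```python
import os
import tempfile   # used only to create scratch files when trying this out

def copy_file(src_path, dst_path):
    """Copy one file to another using system-call wrappers directly."""
    src = os.open(src_path, os.O_RDONLY)                          # open() input file
    dst = os.open(dst_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    while True:
        chunk = os.read(src, 4096)     # read() system call
        if not chunk:                  # zero bytes read signals end of file
            break
        os.write(dst, chunk)           # write() system call
    os.close(src)                      # close() both files
    os.close(dst)
```

Each `os.read`/`os.write` here crosses from user mode into kernel mode and back, which is exactly the sequence of system calls the figure enumerates.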

1.10.2 Working of System Call


➢ Initially, a processor executes a user program in the user mode.
➢ Then if the program needs the services of the operating system the processor is
interrupted by the system call.
➢ A system call is always prioritized over the other executions and the system call is
then executed in the kernel mode.
➢ Once the system call is executed completely the control goes back to the user mode.
➢ And the execution of the program resumes back in the user mode.
➢ Three most common APIs are
❖ Win32 API for Windows,
❖ POSIX API (all versions of UNIX, Linux, and Mac OS X), and
❖ Java API for the Java virtual machine (JVM)

Fig: 1.10 System Call Handled by the Kernel


1.10.3 Three general methods exist for passing parameters to the OS:
1. Parameters can be passed in registers.
2. When there are more parameters than registers, parameters can be stored in a block
and the block address can be passed as a parameter to a register.
3. Parameters can also be pushed on or popped off the stack by the operating system.

Fig:1.10 Passing Parameters to the Operating Systems


1.10.4 TYPES OF SYSTEM CALLS
➢ There are 6 different categories of system calls:
❖ Process control,
❖ File Management
❖ Device management
❖ Information maintenance
❖ Communication
❖ Protection
1.10.4.1 Process Control
1.10.4.1.1 System calls in this category include calls to create process, terminate
process, load, execute, get process attributes, set process attributes, wait event,
signal event, and allocate and free memory.
1.10.4.1.2 A running program needs to be able to halt its execution either normally
(end()) or abnormally (abort()). If a system call is made to terminate the currently running
program abnormally, or if the program runs into a problem and causes an error trap, a
dump of memory is taken and an error message generated.
1.10.4.1.3 The dump is written to a special log file on disk and may be examined by a
debugger.
1.10.4.1.4 A system program designed to aid the programmer in finding and correcting errors, or
bugs to determine the cause of the problem.
1.10.4.2 File Management
1.10.4.2.1 System calls under this category include calls to create file, delete file, open,
close, read, write, reposition, get file attributes, set file attributes.
1.10.4.3 Device Management
1.10.4.3.1 System calls under this category includes calls to request device, release device,
read, write, reposition, get device attributes, set device attributes ,logically attach
or detach devices.
1.10.4.3.2 A process needs several resources to execute: main memory, disk drives,
access to files, and so on. If the resources are available, they can be granted, and
control can be returned to the user process. Otherwise, the process will have to wait
until sufficient resources are available.
1.10.4.4 Information Maintenance
1.10.4.4.1 System calls under this category includes calls to return information about the
system, such as get time or date, set time or date , get system data, set system
data ,get process, file, or device attributes ,set process, file, or device attributes.
1.10.4.4.2 The operating system keeps information about all its processes, and
system calls are used to access this information.
1.10.4.5 Communication
1.10.4.5.1 System calls under this category includes calls to create, delete communication
connection, send, receive messages, transfer status information, attach or detach
remote devices.
1.10.4.5.2 There are two models of interprocess communication, the message-passing
model
and the shared memory model.
1.10.4.5.3 In a shared memory model, a process uses memory for communication. One
process will create a memory partition which other processes can access. A shared
segment can be attached multiple times by the same process.
1.10.4.5.4 In message passing communication is done either direct or indirect
❖ In direct communication process address each other by their PID assigned
to them by OS.
❖ In indirect communication messages are sent and received via mailbox
1.10.4.6 Protection
1.10.4.6.1 System calls under this category includes calls to get file permissions, set file
permissions.
1.10.4.6.2 Protection provides a mechanism for controlling access to the resources
provided by a computer system. Historically, protection was a concern only on
multiprogrammed computer systems with several users.
1.10.4.6.3 However, with the advent of networking and the Internet, all computer
systems, from
servers to mobile handheld devices, must be concerned with protection.
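The get/set file-permission calls in this category can be sketched with the `os.stat` and `os.chmod` wrappers. The helper names below are made up for the example; only the underlying stat/chmod calls are real.

```python
import os
import stat
import tempfile   # used only to create a scratch file when trying this out

def set_read_only(path):
    """Get the current permission bits, then clear all write bits."""
    mode = os.stat(path).st_mode                         # get file permissions
    os.chmod(path, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))

def is_owner_writable(path):
    """Check the owner-write bit in the file's permission bits."""
    return bool(os.stat(path).st_mode & stat.S_IWUSR)
```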

Table: Examples of WINDOWS and UNIX system calls

1.11 SYSTEM SERVICES


➢ In addition to system calls, modern systems also provide variety of system
programs. These programs act as an interface between the operating system and
application programs.
➢ They provide an environment in which application programs can be developed
and executed in a convenient manner.
➢ The system programs can be classified into following categories
1.11.1 File Management
➢ The System programs under this category provide commands, such as cut, copy,
save, print, etc to perform various operations on files and directories.

1.11.2 File modification


➢ The system programs under this category allow creating or modifying the contents
of a file stored on disk or some other storage devices.
➢ Text editor is a system program that belongs to this category. Vi is the text editor
used in Unix operating systems.
1.11.3 Communications
➢ The system programs under this category enable communication among different
users, processes, or systems by establishing virtual connection between them.
➢ With the help of these programs, user can send messages to other users, log on
some remote systems or transfer data from other system to its own systems.
1.11.4 Status information
➢ The system programs under this category are used to present the status of the
computer system, such as system date and time, number of users connected, CPU
utilization, disk and memory usage, Configuration information, etc.
1.11.5 Programming Language Support
➢ Several programming languages support different system programs, such as
compilers, assemblers and interpreters.
➢ These system programs are generally provided to users along with OS.
1.11.6 Program loading and execution
➢ After the user program has been compiled, it needs to be in main memory for
execution.
➢ The task of loading a program into memory is performed by loader, a system
program.
➢ The successful execution of the program also requires debugging, which is
performed by debugger (a system program under this category).

1.12 WHY APPLICATIONS ARE OPERATING SYSTEM SPECIFIC

Applications are operating system (OS) specific for several reasons, primarily due to differences in
how operating systems manage resources, run programs, and provide services. Here's a breakdown of
why this happens:

1. System Calls and APIs

 Each operating system has its own set of system calls and application programming interfaces
(APIs) that applications use to interact with the hardware and other system resources.
 For example, Windows provides APIs like Win32, while macOS has its own Cocoa and
Carbon frameworks, and Linux uses POSIX-based APIs. Applications are often designed to
use these specific APIs, meaning they rely on the OS they were developed for to work
correctly.

2. Hardware Management

 The operating system manages hardware resources such as memory, CPU, storage, and
peripherals (like printers and keyboards). Different operating systems may handle hardware
differently, meaning that an application built for one OS may not be able to interface properly
with hardware on another OS without modification.

3. File Systems

 Each operating system uses a different file system (e.g., NTFS for Windows, HFS+ or APFS
for macOS, and ext4 for Linux). Applications are usually written to interact with the file
system used by the target operating system, which means they may not function on a different
OS without adaptation.
4. User Interface Design

 Different OSes have distinct user interface paradigms and graphical frameworks. For instance,
Windows applications are often built using WinForms or WPF, macOS apps use AppKit, and
Linux applications might use GTK or Qt. These differences mean that the same application
needs to be rewritten or at least adjusted to match the UI conventions of the different OSes.

5. Security Models

 Security models vary across operating systems, and applications need to comply with the
specific rules and models of the OS. For instance, Windows uses different security protocols
than Linux or macOS, and an application must be designed to interact with these security
models to function properly.

6. Software Dependencies

 Many applications rely on libraries or frameworks that are OS-specific. For example, a Linux
application may rely on libraries like GTK, while a Windows application may depend on
Microsoft-specific libraries. These dependencies often require the application to be compiled or
packaged for a specific operating system.

7. Multitasking and Process Management

 Operating systems handle processes and multitasking in unique ways. An application might
depend on the process management mechanisms of the OS (such as memory management,
multitasking, and process synchronization), which can differ significantly between platforms.

8. Application Distribution Systems

 The way software is distributed and installed can vary significantly. For instance, Windows
apps often use .exe or .msi installers, macOS apps use .app bundles, and Linux distributions
use package managers like apt or yum. These differences create a need for platform-specific
applications.

9. Compiler and Build Tools

 Applications are often compiled into machine code specific to the processor architecture and
operating system. This means that an application compiled for Windows will not run on
macOS or Linux unless it is specifically recompiled or ported to those platforms.

10. Performance Optimizations


 Different operating systems have different performance optimization techniques. An application
optimized for one OS might not perform as efficiently on another because of these OS-specific
optimizations.

1.13 OPERATING SYSTEM DESIGN AND IMPLEMENTATION


➢ An operating system is a construct that allows the user application programs to
interact with the system hardware. Operating system by itself does not provide
any function but it provides an atmosphere in which different applications and
programs can do useful work.
➢ There are many problems that can occur while designing and implementing an
operating system. These are covered in operating system design and
implementation.
1.13.1 Operating System Design Goals
➢ It is quite complicated to define all the goals and specifications of the operating
system while designing it. The design changes depending on the type of the operating
system
i.e if it is batch system, time shared system, single user system, multi-user system,
distributed system etc.
➢ There are basically two types of goals while designing an operating system. These are
1.13.1.1 User Goals

The operating system should be convenient, easy to use, reliable, safe and fast according to the users.
However, these specifications are not very useful as there is no set method to achieve these goals.

Fig 1.13 Layered Operating System Design


1.13.1.2 System Goals
1.13.1.2.1 The operating system should be easy to design, implement and maintain. These are
specifications required by those who create, maintain and operate the operating
system. But there is not specific method to achieve these goals as well.
1.13.2 Operating System Mechanisms and Policies
➢ There is no specific way to design an operating system as it is a highly creative task.
However, there are general software principles that are applicable to all operating
systems.
➢ A subtle difference between mechanism and policy is that mechanism shows how to
do something, and policy shows what to do. Policies may change over time and this
would lead to changes in mechanism. So, it is better to have a general mechanism that
would require few changes even when a policy change occurs.
➢ For example - If the mechanism and policy are independent, then few changes are
required in mechanism if policy changes. If a policy favours I/O intensive processes
over CPU intensive processes, then a policy change to preference of CPU intensive
processes will not change the mechanism.
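The separation described above can be sketched as a dispatcher (mechanism) that takes an interchangeable ordering rule (policy). The process records and policy names are illustrative; the point is that swapping the policy requires no change to the mechanism.

```python
def pick_next(ready_queue, policy):
    """Mechanism: HOW selection is done -- scan the queue with a key function."""
    return min(ready_queue, key=policy)

# Policies: WHAT to prefer. Changing one of these leaves pick_next untouched.
def favor_io_bound(proc):
    return proc["cpu_burst"]       # shortest CPU burst first

def favor_cpu_bound(proc):
    return -proc["cpu_burst"]      # longest CPU burst first

procs = [{"pid": 1, "cpu_burst": 50}, {"pid": 2, "cpu_burst": 5}]
```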
1.13.3 Operating System Implementation
➢ The operating system needs to be implemented after it is designed. Earlier they were
written in assembly language, but now higher-level languages are used. The first
system not written in assembly language was the Master Control Program (MCP) for
Burroughs Computers.
1.13.4 Advantages of Higher-Level Language
➢ There are multiple advantages to implementing an operating system using a higher-
level language: the code can be written faster, it is more compact, and it is easier to
debug and understand. Also, the operating system can be easily moved from one
hardware platform to another if it is written in a high-level language.
1.13.5 Disadvantages of Higher-Level Language

Using high level language for implementing an operating system leads to a loss in speed and increase
in storage requirements. However, in modern systems only a small amount of code is needed for high
performance, such as the CPU scheduler and memory manager. Also, the bottleneck routines in the
system can be replaced by assembly language equivalents if required.

1.14 OPERATING SYSTEM STRUCTURES


➢ Every operating system has its own internal structure in terms of file arrangement,
memory management, storage management etc.
➢ The performance of the system depends on its structure
➢ A common approach is to partition the task into small components, or modules, rather than
have one single system. Each of these modules should be a well-defined portion of the system,
with carefully defined interfaces and functions.
Modern operating system structure is complex in nature
1. Monolithic Structure
2. Layered Approach
3. Microkernel
4. Modules
5. Hybrid Systems
1.14.1 Monolithic Structure
In the simplest case, the structure of the operating system is not well defined internally:
all the functionality of the kernel is placed into a single, static binary file that runs in
a single address space. This approach is known as a monolithic structure.
The UNIX operating system is an example of a monolithic structure. It consists of two
separable parts: the kernel and the system programs. The kernel is further separated into
a series of interfaces and device drivers.

The Linux operating system is structured similarly: the Linux kernel is monolithic in that
it runs entirely in kernel mode in a single address space, but it does have a modular
design that allows the kernel to be modified during run time.

The monolithic approach is known as a tightly coupled system because changes to
one part of the system can have wide-ranging effects on other parts.
Alternatively, we could design a loosely coupled system. Such a system is divided
into separate, smaller components that have specific and limited functionality. All
these components together comprise the kernel.
1.14.2 Layered Approach
➢ A system can be made modular in many ways. One method is the layered approach,
in which the OS is broken up into a number of layers (levels).
➢ The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface.
➢ The main advantage of the layered approach is simplicity of construction and debugging.
➢ The layers are selected so that each uses functions (operations) and services of only
lower-level layers. This approach simplifies debugging and system verification.
➢ The first layer can be debugged without any concern for the rest of the system.
➢ Once the first layer is debugged, its correct functioning can be assumed while the second
layer is debugged, and so on.

➢ If an error is found during the debugging of a particular layer, the error must be on that
layer, because the layers below it are already debugged.

Fig:1.14 Linux System Structure


➢ Each layer is implemented only with operations provided by lower-level layers. A layer
does not need to know how these operations are implemented.
➢ It needs to know only what these operations do.

Fig:1.14 A layered Operating system


1.14.3 MicroKernel
-1980s, researchers at Carnegie Mellon University developed an
operating system called Mach that modularized the kernel using the microkernel
approach.

This method structures the operating system by removing all nonessential
components from the kernel and implementing them as user-level programs that
reside in separate address spaces. This results in a smaller kernel, called a microkernel.
The main function of the microkernel is to provide communication between the
client program and the various services that are also running in user space.
The kernel is the central module of an operating system (OS). It is otherwise called
as heart of the Operating System (OS).
It is the part of the operating system that loads first, and it remains in main memory.

Because it stays in memory, it is important for the kernel to be as small as possible
while still providing all the essential services required by other parts of the
operating system and applications.
One benefit of the microkernel approach is that it provides more security and
reliability, since most services are running as user rather than kernel processes.
If a service fails, the rest of the operating system remains untouched.
However, the performance of microkernels can suffer due to increased system-function
overhead.

Fig:1.14 Architecture of typical Microkernel


The Kernel does not interact directly with the user. But it interacts with the
shell, other programs and hardware.
Memory manager allocates space in memory for processes that requires the
services of kernel.
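The microkernel's central job, passing messages between a client program and a service in user space, can be sketched as follows. This is a simplified analogy, not real kernel code; the service name and request format are made up:

```python
from queue import Queue

# Toy sketch of microkernel message passing: the client and the
# service live in separate address spaces, and the small kernel's
# main job is to relay messages between them.

class MicroKernel:
    def __init__(self):
        self._mailboxes = {}            # service name -> message queue

    def register(self, name):           # a user-space service registers
        self._mailboxes[name] = Queue()

    def send(self, dest, message):      # client -> kernel -> service
        self._mailboxes[dest].put(message)

    def receive(self, name):            # service fetches its next request
        return self._mailboxes[name].get_nowait()

kernel = MicroKernel()
kernel.register("file-service")
kernel.send("file-service", ("open", "/tmp/a.txt"))   # hypothetical request
print(kernel.receive("file-service"))   # -> ('open', '/tmp/a.txt')
```

Every request crossing the kernel this way is what produces the system-function overhead mentioned above.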
1.14.4 Modules
➢ Current methodology for operating-system design uses loadable kernel modules
(LKMs).
➢ The kernel has a set of core components and can link additional services via
modules, either at boot time or during run time.
➢ This type of design is common in modern implementations of UNIX, such as
Linux, macOS, and Solaris, Windows.
➢ The idea of the design is for the kernel to provide core services, while other
services are implemented dynamically, as the kernel is running.
➢ Linking services dynamically is preferable to adding new features directly to the
kernel.
➢ Linux uses loadable kernel modules, primarily for supporting device drivers and
file systems.
➢ LKMs can be "inserted" into the kernel as the system is started (or booted) or
during run time, such as when a USB device is plugged into a running machine. If
the Linux kernel does not have the necessary driver, it can be dynamically loaded.
➢ LKMs can be removed from the kernel during run time as well.
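The insert/remove behavior of LKMs can be modeled with a small registry. This is only an analogy to how `insmod`/`rmmod` work, not actual kernel code; the service names are invented:

```python
# Analogy: a "kernel" with a set of core services plus modules that
# can be inserted and removed at run time, like Linux LKMs.

class Kernel:
    def __init__(self):
        # Core services are always linked in.
        self.services = {"scheduler": lambda: "scheduling"}

    def insert_module(self, name, fn):   # like insmod, at boot or run time
        self.services[name] = fn

    def remove_module(self, name):       # like rmmod
        del self.services[name]

    def call(self, name):
        return self.services[name]()

k = Kernel()
# A USB device is plugged in; its (hypothetical) driver is loaded now.
k.insert_module("usb_storage", lambda: "usb driver loaded")
print(k.call("usb_storage"))   # -> usb driver loaded
k.remove_module("usb_storage")
```

The point of the design is visible here: the core (`scheduler`) never changes, while new services are linked in dynamically instead of being compiled into the kernel.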
1.14.5 Hybrid Systems
➢ Few operating systems adopt a single, strictly defined structure.
➢ They combine different structures, resulting in hybrid systems that address
performance, security, and usability issues.
➢ We will consider three hybrid systems:
• the Apple macOS operating system and
• the mobile operating systems iOS and Android
1.14. 5.1 macOS and iOS
➢ Apple’s macOS operating system is designed to run primarily on desktop and
laptop computer systems.
➢ iOS is a mobile operating system designed for the iPhone smartphone and iPad
tablet computer. Architecturally, macOS and iOS have much in common.
➢ The general architecture of these two systems is shown in Figure.
➢ Highlights of the various layers include the following:
• User experience layer. This layer defines the software interface that allows
users to interact with the computing devices. macOS uses the Aqua user
interface, which is designed for a mouse or trackpad, whereas iOS uses the
Springboard user interface, which is designed for touch devices.
• Application frameworks layer. This layer includes the Cocoa and Cocoa
Touch frameworks, which provide an API for the Objective-C and Swift
programming languages. The primary difference between Cocoa and Cocoa
Touch is that the former is used for developing macOS applications, and the
latter by iOS to provide support for hardware features unique to mobile
devices, such as touch screens.
• Core frameworks. This layer defines frameworks that support graphics and
media, including QuickTime and OpenGL.
• Kernel environment. This environment, also known as Darwin, includes the
Mach microkernel and the BSD UNIX kernel.

Fig:1.14 Architecture of Apple’s macOS and iOS operating system


1.14.5.2 Android
➢ The Android operating system was designed by the Open Handset Alliance (led
primarily by Google) and was developed for Android smartphones and tablet
computers.
➢ Android runs on a variety of mobile platforms and is open sourced.
➢ Android is similar to iOS in that it is a layered stack of software that provides a
rich set of frameworks supporting graphics, audio, and hardware features.
➢ These features, in turn, provide a platform for developing mobile applications that run
on a multitude of Android-enabled devices.
➢ Google has designed a separate Android API for Java development. Java
applications are compiled into a form that can execute on the Android RunTime (ART),
a virtual machine designed for Android and optimized for mobile devices with limited
memory and CPU processing capabilities.
➢ Java programs are first compiled to a Java bytecode .class file and then translated
into an executable .dex file.
➢ The set of native libraries available for Android applications includes frameworks for
developing web browsers (webkit), database support (SQLite), and network support,
such as secure sockets (SSLs).
➢ Google has chosen to abstract the physical hardware through the hardware
abstraction layer, or HAL.
➢ The standard C library used by Linux systems is the GNU C library (glibc).
➢ Google instead developed the Bionic standard C library for Android. Not only does
Bionic have a smaller memory footprint than glibc, but it also has been designed for
the slower CPUs that characterize mobile devices.
➢ At the bottom of Android’s software stack is the Linux kernel. Google has
modified the Linux kernel used in Android in a variety of areas to support the special
needs of mobile systems, such as power management.
➢ It has also made changes in memory management and allocation and has added a
new form of IPC known as Binder

Fig:1.14 Architecture of Google’s Android


1.15 BUILDING AND BOOTING AN OPERATING SYSTEM

• Operating systems are generally designed to run on a class of systems with a
variety of peripherals

• Commonly, operating system already installed on purchased computer

– But can build and install some other operating systems

– If generating an operating system from scratch

• Write the operating system source code

• Configure the operating system for the system on which it will run

• Compile the operating system

• Install the operating system

• Boot the computer and its new operating system

Booting

• The operating system is loaded via a bootstrapping procedure, often known as
booting.
• The process of booting involves turning on a computer.

• Main memory is empty when the computer is first turned on.

• After loading the operating system into the main memory and turning on the
computer, it is prepared to accept user commands.
• The initial collection of tasks the computer carries out when turned on is known as
the boot sequence.

• The kernel is located, loaded into the main memory, and executed by the
bootstrap loader. In some systems, the kernel is loaded after a straightforward
bootstrap loader retrieves a more intricate boot program from the disk.
• Restarting a computer is sometimes referred to as rebooting.
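The two-step process, where fixed ROM code loads a small boot block, which in turn loads the full bootstrap loader, which finally locates and starts the kernel, can be modeled as a short chain. All names and disk locations here are illustrative, not real firmware behavior:

```python
# Toy model of a two-step boot. A simulated disk holds the pieces;
# the "fixed location" read by ROM code is sector 0.

DISK = {
    0: "boot-block",          # small program at a fixed location
    1: "bootstrap-loader",    # the more intricate boot program
    2: "kernel-image",        # the kernel itself
}

def rom_code():
    # Step 1: code in ROM/EEPROM reads the boot block at a fixed location.
    return DISK[0]

def boot_block():
    # Step 2: the boot block retrieves the full bootstrap loader from disk.
    return DISK[1]

def bootstrap_loader():
    # Step 3: the loader locates the kernel, loads it, and starts it.
    kernel = DISK[2]
    return f"running {kernel}"

assert rom_code() == "boot-block"
assert boot_block() == "bootstrap-loader"
print(bootstrap_loader())   # -> running kernel-image
```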
Steps of Booting Process in Operating System

Step 1: BIOS is loaded

• When the computer is powered on, the BIOS firmware stored in ROM is loaded
and executed.
Step 2: BIOS: Power on Self-Test

• This test runs a preliminary examination of the computer's main memory, disk
drives, input/output devices, etc. Moreover, the system emits a beep sound in
case of any errors.
Step 3: Loading of OS

• The bootable sequence stored in the CMOS is read by the BIOS following the
successful completion of POST.
Step 4: System Configuration
Step 5: Loading System Utilities
• In this step, system utilities like antivirus and volume control are loaded into the
memory.

Step 6: User Authentication
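The six steps above can be sketched as a simple pipeline. The device list and the checks are made up for illustration; a real POST examines memory, disk drives, and I/O devices in firmware:

```python
# Sketch of the boot sequence: POST first, then the remaining steps.

def post(devices):
    # BIOS Power-On Self-Test: preliminary check of main memory,
    # disk drives, I/O devices, etc. The system beeps on error.
    failures = [name for name, ok in devices.items() if not ok]
    if failures:
        return f"beep! errors in: {failures}"
    return "POST passed"

def boot(devices):
    return [
        post(devices),
        "load OS",
        "system configuration",
        "load system utilities",
        "user authentication",
    ]

print(boot({"memory": True, "disk": True}))
```

With a failing device, e.g. `post({"disk": False})`, the sketch returns the "beep" message instead of proceeding.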

Building and Booting Linux

• Download Linux source code (http://www.kernel.org)


• Configure kernel via "make menuconfig"
• Compile the kernel using "make"
– Produces vmlinuz, the kernel image
– Compile kernel modules via "make modules"
– Install kernel modules into vmlinuz via "make modules_install"
– Install new kernel on the system via "make install"

System Boot
• When power initialized on system, execution starts at a fixed memory location
• Operating system must be made available to hardware so hardware can
start it
– Small piece of code – bootstrap loader, BIOS, stored in ROM or
EEPROM
locates the kernel, loads it into memory, and starts it
– Sometimes two-step process where boot block at fixed location
loaded by ROM code, which loads bootstrap loader from disk
– Modern systems replace BIOS with the Unified Extensible Firmware
Interface (UEFI)
• Common bootstrap loader, GRUB, allows selection of kernel
from multiple disks, versions, kernel options
• Kernel loads and system is then running
• Boot loaders frequently allow various boot states, such as single user mode

Operating-System Debugging

• Debugging is finding and fixing errors, or bugs


• Also performance tuning
• OS generate log files containing error information
• Failure of an application can generate core dump file capturing memory
of the process
• Operating system failure can generate crash dump file containing
kernel memory
• Beyond crashes, performance tuning can optimize system performance
– Sometimes using trace listings of activities, recorded for analysis
– Profiling is periodic sampling of instruction pointer to look for
statistical trends
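The idea behind profiling, sample where the program is executing and look for statistical trends, can be shown with a toy example. The "samples" below are simulated instruction-pointer readings (function names), not real ones:

```python
from collections import Counter

# Toy statistical profiler: count how often each location shows up
# in periodic samples; the most frequent location is the hotspot.

samples = ["read_disk", "read_disk", "compute", "read_disk",
           "compute", "read_disk"]        # hypothetical sampled locations

profile = Counter(samples)
hotspot, count = profile.most_common(1)[0]
print(hotspot, count)   # -> read_disk 4
```

A real profiler such as `perf` does the same thing at scale: the function that dominates the samples is the bottleneck worth tuning.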

Kernighan's Law: "Debugging is twice as hard as writing the code in the first place. Therefore, if
you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."

Performance Tuning

• Improve performance by removing bottlenecks


• OS must provide means of computing and displaying measures of system
behavior
• For example, the "top" program or Windows Task Manager
Tracing

 Collects data for a specific event, such as the steps involved in a
system call invocation
 Tools include
• strace – trace system calls invoked by a process
• gdb – source-level debugger
• perf – collection of Linux performance tools

tcpdump – collects network packets
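What `strace` does for system calls, record each call a process makes, can be mimicked in miniature with a tracing wrapper. This is purely an analogy, not how strace itself works; the traced function is a made-up stand-in for a system call:

```python
# Analogy to strace: record each call (name and arguments) made
# through the wrapper, the way strace logs a process's system calls.

trace_log = []

def traced(fn):
    def wrapper(*args):
        trace_log.append((fn.__name__, args))   # log the "call"
        return fn(*args)
    return wrapper

@traced
def open_file(path):        # stand-in for an open() system call
    return f"fd for {path}"

open_file("/etc/hosts")
print(trace_log)   # -> [('open_file', ('/etc/hosts',))]
```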

BCC

 Debugging interactions between user-level and kernel code is nearly
impossible without a toolset that understands both and can instrument
their actions
 BCC (BPF Compiler Collection) is a rich toolkit providing tracing
features for Linux
• See also the original DTrace
 For example, disksnoop.py traces disk I/O activity

Linux bcc/BPF Tracing Tools
