Cb3402 Unit 1 Notes
UNIT-1
The User
There are two views
i. User View
ii. System View
i. User View
➢ The user’s view of the computer varies according to the interface being used.
➢ Most computer users sit with Laptop/PC, consisting of a monitor, keyboard, and mouse. Such
systems are designed for one user.
➢ Here, the operating system is designed for ease of use, with less emphasis on
performance, security, and resource utilization.
➢ Users interacting with mobile devices such as smartphones and tablets use a touch screen as the
interface; the user interacts with the system by pressing and swiping fingers across the
screen.
➢ Many mobile devices also allow users to interact through a voice recognition
interface, such as Apple’s Siri.
➢ Some computers such as embedded computers are designed to run without user intervention.
They are used in home devices and automobiles and have numeric keypads and may turn
indicator lights on or off to show status.
ii. System View
➢ From the computer’s point of view, we can view an operating system as a
resource allocator.
➢ A computer system has many resources that may be required to solve a problem: CPU time,
memory space, file-storage space, I/O devices, and so on.
➢ The operating system acts as the manager of these resources.
➢ The operating system must decide how to allocate them to specific programs and operate the
computer system efficiently and fairly.
➢ An operating system is a control program. A control program manages the
execution of user programs to prevent errors and improper use of the computer.
➢ It is especially concerned with the operation and control of I/O devices.
Defining Operating Systems
The operating system is the one program running at all times on the computer, usually called the
kernel. Along with the kernel, there are two other types of programs:
❖ System programs, which are associated with the operating system but are not
necessarily part of the kernel
❖ Application programs, which include all programs not associated with the operation of the
system.
Some mobile operating systems also include
❖ Middleware, a set of software frameworks that provide additional services to applications.
For example, the mobile operating systems Apple’s iOS and Google’s Android each include
▪ a kernel,
▪ middleware frameworks that ease application development and provide features, and
▪ system programs that aid in managing the system while it is running.
1.2.1 INTERRUPTS
1.2.1.1 Overview
➢ Hardware may trigger an interrupt at any time by sending a signal to the CPU, usually by
way of the system bus.
➢ When the CPU is interrupted, it stops what it is doing and immediately transfers
execution to a fixed location.
➢ The fixed location usually contains the starting address where the service routine for the
interrupt is located.
➢ The interrupt service routine executes; on completion, the CPU resumes the
interrupted computation.
➢ The interrupt must transfer control to the appropriate interrupt service routine.
Fig 1.3 Interrupt timeline for a single program doing output
1.2.1.2 Implementation
➢ The CPU hardware has a wire called the interrupt-request line that the CPU senses after
executing every instruction.
➢ When the CPU detects that a controller has asserted a signal on the interrupt-
request line, it reads the interrupt number and jumps to the interrupt-handler routine by using
that interrupt number as an index into the interrupt vector.
➢ The device controller raises an interrupt by asserting a signal on the interrupt-request line,
▪ The CPU catches the interrupt and dispatches it to the interrupt handler,
▪ The handler clears the interrupt by servicing the device.
In a modern operating system, we need more sophisticated interrupt handling features.
1. The ability to defer interrupt handling during critical processing.
2. An efficient way to dispatch to the proper interrupt handler for a device.
3. Multilevel interrupts, so that the operating system can distinguish between high- and low-priority
interrupts and can respond with the appropriate degree of urgency.
In modern computer hardware, these three features are provided by the CPU and the interrupt-controller
hardware.
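The dispatch step described above (using the interrupt number as an index into the interrupt vector) can be sketched as a table lookup. This is a minimal Python sketch, not real kernel code; the interrupt numbers (33, 46) and handler names are invented for illustration.

```python
# Hypothetical interrupt vector: index = interrupt number,
# value = interrupt service routine for that device.

def keyboard_handler():
    return "serviced keyboard"

def disk_handler():
    return "serviced disk"

interrupt_vector = {33: keyboard_handler, 46: disk_handler}

def dispatch(interrupt_number):
    """Jump to the handler whose vector slot matches the interrupt number."""
    handler = interrupt_vector.get(interrupt_number)
    if handler is None:
        raise ValueError(f"no handler for interrupt {interrupt_number}")
    return handler()

print(dispatch(33))   # serviced keyboard
```

Real hardware does the same lookup in a fixed memory region rather than a dictionary, but the indexing idea is identical.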
CPUs have two interrupt request lines:
i. Non-Maskable Interrupts
Reserved for events such as unrecoverable memory errors.
Fig 1.4 Interrupt-driven I/O cycle
ii. Maskable Interrupts
▪ The maskable interrupt is used by device controllers to request service.
▪ It can be turned off by the CPU before the execution of critical instruction sequences
that must not be interrupted.
Intel processor event-vector table
▪ The events from 0 to 31, which are non-maskable, are used to signal various error
conditions.
▪ The events from 32 to 255, which are maskable, are used for purposes
such as device- generated interrupts.
▪ The interrupt mechanism also implements a system of interrupt priority levels.
▪ These levels enable the CPU to defer the handling of low-priority
interrupts without masking all interrupts and makes it possible for a high-priority
interrupt to preempt the execution of a low-priority interrupt.
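The priority-level behaviour just described can be sketched as follows; the numeric levels, interrupt numbers, and function names are invented for illustration, not taken from any real CPU.

```python
# Sketch: interrupts at or below the CPU's current priority level are
# deferred (masked); higher-priority interrupts are handled immediately.

pending = []          # interrupts deferred until the priority level drops
cpu_priority = 5      # current interrupt priority level (illustrative)

def raise_interrupt(number, priority):
    if priority > cpu_priority:
        return f"handling interrupt {number} now"   # preempts current work
    pending.append(number)                          # deferred, not lost
    return f"interrupt {number} deferred"

print(raise_interrupt(14, 7))   # handling interrupt 14 now
print(raise_interrupt(33, 3))   # interrupt 33 deferred
```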
1.2.2 STORAGE STRUCTURE
➢ The CPU can load instructions only from memory, so any programs must first be loaded into
memory to run.
➢ General-purpose computers run most of their programs from rewritable memory,
called main memory (also called random-access memory, or RAM).
➢ Main memory commonly is implemented in a semiconductor technology called Dynamic
Random-Access Memory (DRAM).
Table 1.1 Intel processor event-vector table.
➢ Computers use other forms of memory as well. For example, the first program to run on computer
power-on is a bootstrap program, which then loads the operating system.
➢ Since RAM is volatile, the bootstrap program is not stored in RAM; instead, it is stored in
EEPROM.
➢ The computer uses electrically erasable programmable read-only memory (EEPROM) and other
forms of firmware storage that is infrequently written to and is non-volatile.
➢ All forms of memory provide an array of bytes. Each byte has its own address.
Interaction is achieved through a sequence of load or store instructions to specific memory
addresses.
➢ The load instruction moves a byte or word from main memory to an internal register
within the CPU, whereas the store instruction moves the content of a register to main memory.
Von-Neumann Architecture
➢ The Von-Neumann architecture defines a typical instruction–execution cycle.
➢ The CPU first fetches an instruction from memory and stores that instruction in the instruction register.
➢ The instruction is then decoded, and operands are fetched from memory and stored in
some internal register.
➢ After the instruction is executed, the result may be stored back in memory.
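The fetch–decode–execute cycle above can be simulated on a toy memory. The three-opcode instruction set and the memory layout here are invented for illustration; real instruction encodings are binary, not tuples.

```python
# Toy memory: addresses 0-2 hold instructions, 10-12 hold data.
memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12),
          10: 5, 11: 7, 12: 0}

register = 0            # a single internal CPU register
for pc in (0, 1, 2):    # the program counter walks the instructions
    opcode, operand = memory[pc]        # fetch and decode
    if opcode == "LOAD":
        register = memory[operand]      # operand fetched from memory
    elif opcode == "ADD":
        register += memory[operand]
    elif opcode == "STORE":
        memory[operand] = register      # result stored back in memory

print(memory[12])   # 12  (5 + 7 was computed and stored)
```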
1.2.2.1 Secondary Storage Devices
➢ Most computer systems provide secondary storage as an extension of main memory.
➢ The main requirement for secondary storage is the ability to hold large quantities of data
permanently.
➢ The most common secondary-storage devices are hard-disk drives (HDDs) and
nonvolatile memory (NVM) devices, which provide storage for both programs and data.
➢ Most programs (system and application) are stored in secondary storage until they are
loaded into memory.
➢ Many programs use secondary storage as both the source and the destination of their processing.
➢ Secondary storage is also much slower than main memory. Hence, the proper
management of secondary storage is of central importance to a computer system.
➢ Storage used for a special purpose, such as holding backup copies of material stored on other
devices, is called tertiary storage.
1.2.2.2 Hierarchy of Storage-device
➢ The wide variety of storage systems can be organized in a hierarchy (Figure 1.5) according to
storage capacity and access time.
➢ The five hierarchies in a system's memory are registers, cache memory, main memory,
magnetic disk, and magnetic tape.
➢ The top four levels of memory in the figure are constructed using semiconductor memory, which
consists of semiconductor-based electronic circuits.
➢ NVM devices, at the fourth level, have several variants but are faster than hard disks.
The most common form of NVM device is flash memory, which is popular in mobile devices such
as smartphones and tablets. Increasingly, flash memory is being used for long-term storage on
laptops, desktops, and servers as well.
➢ External or Secondary Memory
❖ It consists of Magnetic Tape, Optical Disk, Magnetic Disk, i.e., it includes peripheral
storage devices that are accessible by the system’s processor via I/O Module.
➢ Internal Memory or Primary Memory
❖ It consists of CPU registers, Cache Memory, and Main Memory. It is accessible
directly by the processor.
A computer system can be organized in one of the following ways:
1. Single-processor system
2. Multiprocessor system
3. Clustered system
1. Single-Processor Systems:
Some computers, such as microcomputers (or personal computers, PCs), use only one processor.
On a single-processor system, there is only one CPU that performs all the activities in the
computer system. However, most of these systems have other special-purpose processors, such
as I/O processors that move data quickly among the different components of the computer. These
processors execute only a limited set of system programs and do not run user programs. Sometimes
they are managed by the operating system. Similarly, PCs contain a special-purpose
microprocessor in the keyboard, which converts keystrokes into computer codes to be
sent to the CPU. The use of special-purpose microprocessors is common in microcomputers,
but it does not make such a system a multiprocessor. A system that has only one general-purpose
CPU is considered a single-processor system.
2. Multiprocessor Systems:
In a multiprocessor system, two or more processors work together. In such a system, multiple
programs (more than one program) are executed on different processors at the same time. This type of
processing is known as multiprocessing. Some operating systems have multiprocessing features.
UNIX is an example of a multiprocessing operating system, and some versions of Microsoft Windows
also support multiprocessing.
A multiprocessing system in which each processor is assigned a specific task is known as an
Asymmetric Multiprocessing System. For example, one processor may be dedicated to handling
users' requests, one to running application programs, one to image processing, and so on. In this
system, one processor works as the master processor, while the other processors work as slave
processors. The master processor controls the operations of the system. It also schedules and
distributes tasks among the slave processors, and the slave processors perform the predefined tasks.
A multiprocessing system in which multiple processors work together on the same task is
known as a Symmetric Multiprocessing System (SMP). In this system, each processor can perform all
types of tasks. All processors are treated equally, and no master–slave relationship exists
between the processors.
For example, different processors in the system can communicate with each other, and
an I/O request can be processed on any processor. However, I/O must be controlled to ensure that
the data reaches the appropriate processor. Because all the processors share the same
memory, the input data given to the processors and their results must be separately
controlled. Today, all modern operating systems, including Windows and Linux, provide
support for SMP.
It must be noted that in the same computer system, the asymmetric multiprocessing and
symmetric multiprocessing technique can be used through different operating systems.
A Dual-Core Design
3. Clustered Systems:
A clustered system is another form of multiprocessor system, but it differs from a
multiprocessor system: it consists of two or more individual systems that are coupled together.
In a clustered system, the individual systems (or clustered computers) share the same storage
and are linked together via a Local Area Network (LAN).
A layer of cluster software runs on the cluster nodes. Each node can monitor one or more of
the other nodes over the LAN. If the monitored machine fails due to some technical fault (or
due to other reason), the monitoring machine can take ownership of its storage. The
monitoring machine can also restart the applications that were running on the failed machine.
The users of the applications see only an interruption of service.
1.4 OPERATING SYSTEM OPERATIONS
In order to ensure the proper execution of the operating system, we must be able to distinguish
between the execution of operating-system code and user defined code. The approach taken by
most computer systems is to provide hardware support that allows us to differentiate among
various modes of execution.
At the very least, we need two separate modes of operation: user mode and kernel mode.
A bit, called the mode bit, is added to the hardware of the computer to indicate the current mode:
kernel (0) or user (1). With the mode bit, we are able to distinguish between a task that is
executed on behalf of the operating system and one that is executed on behalf of the user. When
the computer system is executing on behalf of a user application, the system is in user
mode. However, when a user application requests a service from the operating system (via
a system call), it must transition from user to kernel mode to fulfill the request.
At system boot time, the hardware starts in kernel mode. The operating system is then loaded
and starts user applications in user mode. Whenever a trap or interrupt occurs, the hardware
switches from user mode to kernel mode (that is, changes the state of the mode bit to 0). Thus,
whenever the operating system gains control of the computer, it is in kernel mode. The system
always switches to user mode (by setting the mode bit to 1) before passing control to a user
program.
The dual mode of operation provides us with the means for protecting the
operating system from errant users, and errant users from one another. We
accomplish this protection by designating some of the machine instructions that
may cause harm as privileged instructions. The hardware allows privileged
instructions to be executed only in kernel mode. If an attempt is made to execute
a privileged instruction in user mode, the hardware does not execute the
instruction but rather treats it as illegal and traps it to the operating system.
The instruction to switch to kernel mode is an example of a privileged
instruction. Other examples include I/O control, timer management, and
interrupt management.
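The mode bit and the trap on a privileged instruction can be sketched as follows. This is a minimal illustration; the instruction names are invented, not a real instruction set.

```python
# Mode bit: kernel = 0, user = 1, as in the text.
KERNEL, USER = 0, 1
mode = USER

# Hypothetical set of privileged instructions.
PRIVILEGED = {"set_timer", "io_control", "disable_interrupts"}

def execute(instruction):
    global mode
    if instruction in PRIVILEGED and mode == USER:
        mode = KERNEL   # the trap switches the hardware to kernel mode
        return "trap: illegal instruction in user mode"
    return f"executed {instruction}"

print(execute("add"))        # executed add
print(execute("set_timer"))  # trap: illegal instruction in user mode
```

After the trap, the operating system has control in kernel mode, where the same instruction would be allowed to execute.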
1.4.3 Resource Management
Resource management in an operating system (OS) refers to the efficient and fair allocation,
utilization, and management of resources (such as CPU time, memory, I/O devices, etc.) among
processes or programs running in a system. The goal is to optimize the performance of the system
while ensuring that processes have access to the resources they need to execute.
Process Management
A process is a program in execution, and the OS must manage processes by allocating CPU
time, memory, and other resources. The OS also ensures that processes do not interfere with
each other and that they complete their execution without errors.
Creating and deleting both user and system processes
Scheduling processes and threads on the CPUs
Suspending and resuming processes
Providing mechanisms for process synchronization
Providing mechanisms for process communication
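The creation, suspension, and resumption activities listed above can be sketched with a tiny process table. The state names and the pid-to-state layout are assumptions made for illustration, not any real OS's data structures.

```python
# Hypothetical process table: pid -> state.
processes = {}
next_pid = [1]   # mutable counter for handing out fresh pids

def create_process():
    pid = next_pid[0]
    next_pid[0] += 1
    processes[pid] = "ready"      # newly created processes start ready
    return pid

def suspend(pid):
    processes[pid] = "suspended"

def resume(pid):
    processes[pid] = "ready"

def terminate(pid):
    del processes[pid]            # deletion removes the table entry

pid = create_process()
suspend(pid)
print(processes)   # {1: 'suspended'}
```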
Memory Management
Memory is a limited resource, so the OS must manage memory allocation for processes
efficiently. The OS keeps track of which parts of memory are in use and allocates memory to
processes as needed.
Techniques:
o Contiguous memory allocation: Processes are assigned a single contiguous block of
memory.
o Paged memory management: The process's memory is divided into fixed-size blocks
(pages), and physical memory is divided into blocks of the same size (frames).
o Segmented memory management: Memory is divided into different segments based
on the logical divisions of a program (code, data, stack, etc.).
o Virtual memory: Uses disk space to extend physical memory, allowing larger
programs to run on a system than would otherwise fit in physical memory.
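As a small illustration of the paged technique above, logical-to-physical address translation can be sketched as follows. The page size and the page-table contents are made up for the example.

```python
PAGE_SIZE = 4096                     # bytes per page (and per frame)
page_table = {0: 7, 1: 3, 2: 9}      # page number -> frame number (assumed)

def translate(logical_address):
    page = logical_address // PAGE_SIZE      # which page the address is on
    offset = logical_address % PAGE_SIZE     # position within that page
    frame = page_table[page]                 # page-table lookup
    return frame * PAGE_SIZE + offset        # physical address

# Logical address 4100 = page 1, offset 4 -> frame 3, offset 4.
print(translate(4100))   # 12292
```

A reference to a page missing from the table would, in a real system, raise a page fault that virtual memory resolves by bringing the page in from disk.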
File-System Management
The file system provides a mechanism to store, organize, and manage data on storage devices
(like hard drives or SSDs). The OS needs to manage how files are created, accessed, and
deleted, and how data is stored.
Key tasks:
o File allocation: Deciding how files are stored on the disk (contiguous, linked, indexed).
o File protection: Ensuring that only authorized users or processes can access or modify
files.
o File system organization: Organizing data into directories and subdirectories for easier
navigation.
Mass-Storage Management
As we have already seen, the computer system must provide secondary storage to back up main
memory. Most modern computer systems use HDDs and NVM devices as the principal on-line storage
media for both programs and data.
Most programs—including compilers, web browsers, word processors, and games—are stored on
these devices until loaded into memory.
The programs then use the devices as both the source and the destination of their processing. Hence,
the proper management of secondary storage is of central importance to a computer system.
The operating system is responsible for the following activities in connection with secondary storage
management:
• Mounting and unmounting
• Free-space management
• Storage allocation
• Disk scheduling
• Partitioning
• Protection
Input/Output (IO) management is a crucial component of any operating system (OS). It encompasses
the techniques and mechanisms an OS uses to manage the input and output operations of the computer
system, ensuring efficient and effective communication between hardware and software components.
Principles of IO Management
1. Device Controllers
IO devices, such as keyboards, mice, printers, and storage devices, interact with the computer through
device controllers. These controllers act as intermediaries, converting signals from the devices into a
form that the CPU and operating system can process.
2. IO Operations
Character IO: Involves reading and writing one character at a time (e.g., keyboards).
Block IO: Involves reading and writing large blocks of data (e.g., disk drives).
3. Buffering
Buffering is a technique where data is temporarily stored in a buffer while being transferred between
two locations. This helps in accommodating the speed difference between the IO device and the CPU.
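A bounded buffer of the kind described can be sketched like this; the capacity and function names are illustrative. The fast side (the device) fills the buffer, and the slow side (the CPU) drains it at its own pace.

```python
from collections import deque

buffer = deque()
CAPACITY = 4   # assumed buffer size for the example

def device_write(byte):
    if len(buffer) == CAPACITY:
        return False          # buffer full: the fast device must wait
    buffer.append(byte)
    return True

def cpu_read():
    # Consume in arrival order; None signals an empty buffer.
    return buffer.popleft() if buffer else None
```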
4. Caching
Caching involves storing copies of data in a high-speed storage (cache) to reduce access time.
Frequently accessed data is kept in the cache, reducing the need for time-consuming IO operations.
5. Spooling
Spooling (Simultaneous Peripheral Operations On-Line) is a method where data is temporarily held in a
buffer (spool) until the device is ready to process it. It is commonly used in print spooling.
6. Device Drivers
Device drivers are specialized software modules that enable the OS to communicate with hardware
devices. They provide a standardized interface for IO operations, abstracting the hardware specifics.
7. Direct Memory Access (DMA)
DMA allows certain hardware subsystems to access main system memory independently of the CPU.
Cache management
Cache management in an operating system refers to the process of managing data stored in caches to
improve system performance by reducing access time to frequently used data. The OS uses different
types of caches, such as the CPU cache, disk cache, and memory cache, to speed up data access.
1. CPU Cache:
o Stores frequently accessed data and instructions.
o Divided into multiple levels (L1, L2, L3).
o Uses cache replacement policies (e.g., LRU, FIFO) to decide which data to evict.
2. Disk Cache:
o Temporarily stores frequently accessed data from disk in faster memory (RAM).
o Reduces disk I/O operations, improving system performance.
o Uses algorithms like write-back (write data to disk later) and write-through (write
data to disk immediately).
3. Memory Cache:
o Stores frequently accessed data pages or file blocks in RAM to speed up access.
o Uses page replacement algorithms (e.g., LRU, Clock) to manage which pages to keep
in memory.
4. Buffering:
o Temporarily holds data before it is sent to or from a device, helping to manage speed
differences between devices and the CPU.
5. Cache Coherence (in multi-core systems):
o Ensures consistency between the caches of different CPU cores, especially when one
core modifies data.
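The LRU replacement policy mentioned above can be sketched with an ordered dictionary: each access moves an entry to the "most recently used" end, and eviction removes the entry at the other end. The two-entry capacity is just for demonstration.

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()   # insertion order = recency order

    def get(self, key):
        if key not in self.data:
            return None                       # cache miss
        self.data.move_to_end(key)            # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)     # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")              # "a" becomes most recently used
cache.put("c", 3)           # capacity exceeded: evicts "b"
print(cache.get("b"))       # None
```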
Security and protection in operating systems (OS) are critical components that ensure the integrity,
confidentiality, and availability of the system and its resources. Here's a breakdown of the key aspects
of security and protection in OS:
1. Authentication and Authorization
Authentication: Verifying the identity of users or processes. This can be done using
passwords, biometrics, two-factor authentication (2FA), or certificates.
Authorization: Once authenticated, the OS checks if the user or process has permission to
perform certain actions. This is typically controlled by access control mechanisms like user
roles and access control lists (ACLs).
2. Access Control
Discretionary Access Control (DAC): The owner of the resource (e.g., file) decides who can
access it and in what way (read, write, execute).
Mandatory Access Control (MAC): The OS enforces strict access policies, and access
permissions are based on security labels (e.g., security clearance levels).
Role-Based Access Control (RBAC): Access is granted based on roles, which are predefined
by administrators. This is used in many enterprise environments.
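An RBAC check of the kind just described can be sketched as two lookup tables: users map to roles, and roles map to permission sets. The roles, users, and permissions here are invented for illustration.

```python
# Hypothetical role and user tables.
role_permissions = {
    "admin":  {"read", "write", "execute"},
    "viewer": {"read"},
}
user_roles = {"alice": "admin", "bob": "viewer"}

def is_authorized(user, action):
    """Grant access only if the user's role carries the permission."""
    role = user_roles.get(user)
    return role is not None and action in role_permissions[role]

print(is_authorized("bob", "write"))   # False
```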
3. Encryption
Data Encryption: Ensures that data stored on disk or transmitted over a network is unreadable
without the proper decryption key. This protects data from unauthorized access.
File System Encryption: OS-level encryption (e.g., BitLocker, FileVault) ensures that the
entire file system is encrypted, protecting it if the hardware is stolen.
4. Process Isolation
The OS ensures that processes run in isolated memory spaces to prevent unauthorized access or
interference. This is typically achieved through virtual memory management.
Sandboxing: A technique used to limit the actions of a program to a controlled environment,
protecting the system from potential malicious behavior.
Secure Boot: Ensures that the OS is only loaded if it hasn't been tampered with. This is
achieved through digital signatures and checks during the boot process.
Kernel Protection: The kernel (core part of the OS) is protected against unauthorized
modifications or exploits. Techniques like Address Space Layout Randomization (ASLR) and
Data Execution Prevention (DEP) are commonly used.
Intrusion Detection Systems (IDS): Monitors network traffic or system behavior for signs of
malicious activity.
Intrusion Prevention Systems (IPS): Actively blocks any suspicious or malicious activities
identified by the IDS.
OSs maintain audit logs to record system events, which can be used for detecting and analyzing
security breaches or policy violations.
Security Information and Event Management (SIEM): Systems that collect and analyze
logs to provide real-time insights into security incidents.
Keeping the OS and installed software up to date is crucial for patching security vulnerabilities.
Regular updates help mitigate known security threats.
9. Malware Protection
Antivirus Software: Helps detect and remove malicious software like viruses, worms, and
Trojans.
Sandboxing for Malware: Some OS features isolate suspicious files in a "sandbox" where
they cannot harm the system.
10. Virtualization and Containerization
Virtualization: Allows multiple virtual machines (VMs) to run on a single physical machine,
providing isolation and reducing the impact of security breaches.
Containerization: A lightweight form of virtualization, where applications run in isolated
containers, limiting the potential damage if a container is compromised.
Operating systems define and enforce security policies that control how resources are accessed
and used. Examples include policies related to password strength, file access, and process
execution.
Firewall: Controls incoming and outgoing network traffic based on predetermined security
rules. Firewalls can be configured to protect against unauthorized access to system resources.
Network Isolation: Ensures that different parts of the system or network are isolated from
each other, preventing lateral movement in case of an attack.
Regular backups of critical data are essential for recovering from security incidents like
ransomware attacks, system crashes, or data corruption.
A distributed system in an operating system (OS) is a network of independent computers that work
together to appear as a single unified system to users. Here’s a short overview:
Key Features:
1. Transparency: The system hides its distributed nature, making it appear as a single system
(e.g., access, location, and failure transparency).
2. Scalability: Can grow in size (users or resources) without losing performance.
3. Fault Tolerance: The system continues to work even if some components fail, often using
replication and redundancy.
4. Resource Sharing: Resources (CPU, storage) are shared across multiple nodes in the system.
5. Openness: The system is open and adheres to standards for integration of different
components.
Types:
Communication:
Message Passing and Remote Procedure Calls (RPC) allow distributed nodes to
communicate.
Distributed File Systems (e.g., NFS, HDFS) enable sharing of files across machines.
Challenges:
Next, we discuss a subject crucial to operating-system implementation: the organization of data within
the system. In this part, we provide a quick overview of a number of fundamental data structures that
are frequently used in operating systems. The bibliography is located at the conclusion of the chapter
and should be consulted by readers who want further information on these structures or any others.
Lists, Stacks, and Queues
A simple data structure called an array allows for direct access to every element. For instance, main
memory is built using an array. If the data item being saved requires more than one byte, the item is
addressed as item number × item size. But what about storing something whose size might change?
And what happens if one item needs to be removed while the relative locations of the other items need
to be kept? In such cases, different data structures take the place of arrays.
Lists are perhaps the most fundamental data structures in computer science after arrays. The elements
in a list must be accessed in a specific sequence, as opposed to each item in an array, which can be
accessed directly. In other words, a list depicts a grouping of data elements as a series. A linked list, in
which items are connected to one another, is the most popular way to create this structure. There are
various forms of linked lists:
• In a singly linked list, each item points to its successor, as shown in Figure 1.13.
• In a doubly linked list, an item can refer to either its predecessor or its successor, as shown in
Figure 1.14.
• In a circularly linked list, the last entry points to the first element rather than null, as shown in
Figure 1.15.
Linked lists make it simple to add and remove items and can accept items of various sizes. The speed
of accessing a specific item from a list of size n is linear, or O(n), since it may, in the worst case,
require traversing all n entries. This is one potential drawback of using lists. Sometimes, kernel
algorithms will directly employ lists. However, they are frequently employed to build stronger data
structures such as stacks and queues.
A stack is a sequentially ordered data structure that adds and removes items according to the last-in,
first-out (LIFO) principle, where the last item added to a stack is the first item removed. Pushing and
popping are the operations used to add and remove items from a stack, respectively. A stack is
frequently used by an operating system when calling functions. When a function is called, parameters,
local variables, and the return address are pushed onto the stack; returning from the function call
removes them.
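The push/pop behaviour of a call stack can be sketched as follows; the frame contents are simplified to just a name and a return address, which are invented for the example.

```python
stack = []   # the call stack, top of stack = end of list

def call(function_name, return_address):
    stack.append((function_name, return_address))   # push a frame

def ret():
    _, return_address = stack.pop()   # pop LIFO: last frame pushed
    return return_address

call("main", None)
call("helper", 0x40)   # hypothetical return address
print(ret())           # 64  (the address pushed last comes off first)
```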
Trees
A tree is a data structure that can be used to represent data hierarchically. Data values in a tree
structure are linked through parent–child relationships. In a general tree, a parent may have an
unlimited number of children. In a binary tree, a parent may have at most two children, which we term
the left child and the right child. A binary search tree additionally requires an ordering between the
parent’s two children in which left child <= right child. Figure 1.20 provides an example of a binary
search tree. When we search for an item in a binary search tree, the worst-case performance is O(n)
(consider how this can occur). To remedy this situation, we can use an algorithm to create a balanced
binary search tree. Here, a tree containing n items has at most lg n levels, thus ensuring worst-case
performance of O(lg n). We shall see in Section 5.7.1 that Linux uses a balanced binary search tree
(known as a red-black tree) as part of its CPU-scheduling algorithm.
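A plain (unbalanced) binary search tree obeying the left child <= parent ordering above can be sketched as follows; because it is unbalanced, search is O(n) in the worst case, exactly the situation a balanced tree remedies.

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key <= root.key:
        root.left = insert(root.left, key)    # left child <= parent
    else:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    while root is not None:
        if key == root.key:
            return True
        # The ordering lets us discard half the remaining tree each step.
        root = root.left if key < root.key else root.right
    return False

root = None
for k in (8, 3, 10, 1, 6):
    root = insert(root, k)
print(search(root, 6), search(root, 7))   # True False
```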
Hash Functions and Maps
A hash function takes data as its input, performs a numeric operation on the data, and returns a
numeric value. This numeric value can then be used as an index into a table (typically an array) to
quickly retrieve the data. Whereas searching for a data item through a list of size n can require up to
O(n) comparisons, using a hash function for retrieving data from a table can be as good as O(1),
depending on implementation details. Because of this performance, hash functions are used
extensively in operating systems. One potential difficulty with hash functions is that two unique inputs
can result in the same output value— that is, they can link to the same table
location. We can accommodate this hash collision by having a linked list at the table location that
contains all of the items with the same hash value. Of course, the more collisions there are, the less
efficient the hash function is. One use of a hash function is to implement a hash map, which associates
(or maps) [key:value] pairs using a hash function. Once the mapping is established, we can apply the
hash function to the key to obtain the value from the hash map (Figure 1.21). For example, suppose
that a user name is mapped to a password. Password authentication then proceeds as follows: a user
enters her user name and password. The hash function is applied to the user name, which is then used
to retrieve the password. The retrieved password is then compared with the password entered by the
user for authentication.
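The collision handling by chaining described above can be sketched with a fixed-size table of per-slot lists; the table size, keys, and stored values are illustrative.

```python
TABLE_SIZE = 8
table = [[] for _ in range(TABLE_SIZE)]   # one chain per slot

def hash_put(key, value):
    slot = hash(key) % TABLE_SIZE
    for pair in table[slot]:
        if pair[0] == key:                 # key already present: update
            pair[1] = value
            return
    table[slot].append([key, value])       # colliding keys share the chain

def hash_get(key):
    slot = hash(key) % TABLE_SIZE
    for k, v in table[slot]:               # walk the chain at this slot
        if k == key:
            return v
    return None                            # not found

hash_put("alice", "stored-password-1")
hash_put("bob", "stored-password-2")
print(hash_get("alice"))   # stored-password-1
```

The more keys that hash to the same slot, the longer the chains and the closer lookup gets to O(n), which is why a good hash function spreads keys evenly.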
Bitmaps
A bitmap is a string of n binary digits that can be used to represent the status of n items. For example,
suppose we have several resources, and the availability of each resource is indicated by the value of a
binary digit: 0 means that the resource is available, while 1 indicates that it is unavailable (or vice
versa). The value of the ith position in the bitmap is associated with the ith resource. As an example,
consider the bitmap shown below:
001011101
Resources 2, 4, 5, 6, and 8 are unavailable; resources 0, 1, 3, and 7
are available. The power of bitmaps becomes apparent when we consider their space efficiency. If we
were to use an eight-bit Boolean value instead of a single bit, the resulting data structure would be
eight times larger. Thus, bitmaps are commonly used when there is a need to represent the availability
of a large number of resources. Disk drives provide a nice illustration. A medium-sized disk drive
might be divided into several thousand individual units, called disk blocks. A bitmap can be used to
indicate the availability of each disk block. In summary, data structures are pervasive in operating
system implementations. Thus, we will see the structures discussed here, along with others, throughout
this text as we explore kernel algorithms and their implementations.
An Operating System provides services to both the users and to the programs.
➢ It provides an environment for the execution of programs.
➢ It provides users with services to execute programs in a convenient manner.
Following are a few common services provided by an operating system:
❖ Program execution
❖ I/O operations
❖ File System manipulation
❖ Communication
❖ Error Detection
❖ Resource Allocation
❖ Protection
❖ User interface
Applications are operating system (OS) specific for several reasons, primarily due to differences in
how operating systems manage resources, run programs, and provide services. Here's a breakdown of
why this happens:
1. System Calls and APIs
Each operating system has its own set of system calls and application programming interfaces
(APIs) that applications use to interact with the hardware and other system resources.
For example, Windows provides APIs like Win32, while macOS has its own Cocoa and
Carbon frameworks, and Linux uses POSIX-based APIs. Applications are often designed to
use these specific APIs, meaning they rely on the OS they were developed for to work
correctly.
2. Hardware Management
The operating system manages hardware resources such as memory, CPU, storage, and
peripherals (like printers and keyboards). Different operating systems may handle hardware
differently, meaning that an application built for one OS may not be able to interface properly
with hardware on another OS without modification.
3. File Systems
Each operating system uses a different file system (e.g., NTFS for Windows, HFS+ or APFS
for macOS, and ext4 for Linux). Applications are usually written to interact with the file
system used by the target operating system, which means they may not function on a different
OS without adaptation.
4. User Interface Design
Different OSes have distinct user interface paradigms and graphical frameworks. For instance,
Windows applications are often built using WinForms or WPF, macOS apps use AppKit, and
Linux applications might use GTK or Qt. These differences mean that the same application
needs to be rewritten or at least adjusted to match the UI conventions of the different OSes.
5. Security Models
Security models vary across operating systems, and applications need to comply with the
specific rules and models of the OS. For instance, Windows uses different security protocols
than Linux or macOS, and an application must be designed to interact with these security
models to function properly.
6. Software Dependencies
Many applications rely on libraries or frameworks that are OS-specific. For example, a Linux
application may rely on libraries like GTK, while a Windows application may depend on
Microsoft-specific libraries. These dependencies often require the application to be compiled or
packaged for a specific operating system.
7. Process Management and Multitasking
Operating systems handle processes and multitasking in unique ways. An application might
depend on the process management mechanisms of the OS (such as memory management,
multitasking, and process synchronization), which can differ significantly between platforms.
8. Software Distribution and Installation
The way software is distributed and installed can vary significantly. For instance, Windows
apps often use .exe or .msi installers, macOS apps use .app bundles, and Linux distributions
use package managers like apt or yum. These differences create a need for platform-specific
applications.
9. Binary Compatibility
Applications are often compiled into machine code specific to the processor architecture and
operating system. This means that an application compiled for Windows will not run on
macOS or Linux unless it is specifically recompiled or ported to those platforms.
From the user's point of view, the operating system should be convenient, easy to use, reliable, safe, and fast. However, these specifications are not very useful for design, since there is no generally accepted method for achieving such goals.
Implementing an operating system in a high-level language can cost some speed and increase storage requirements. In modern systems, however, only a small amount of code is performance-critical, such as the CPU scheduler and memory manager, and the bottleneck routines in the system can be replaced by assembly-language equivalents if required.
address space, but does have a modular design that allows the kernel to be modified
during run time.
The monolithic approach is known as a tightly coupled system because changes to
one part of the system can have wide-ranging effects on other parts.
Alternatively, we could design a loosely coupled system. Such a system is divided
into separate, smaller components that have specific and limited functionality.
Together, these components comprise the kernel.
1.14.2 Layered Approach
➢ A system can be made modular in many ways. One method is the layered approach,
in which the OS is broken up into a number of layers (levels).
➢ The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface.
➢ The main advantage of the layered approach is simplicity of construction and debugging.
➢ The layers are selected so that each uses functions (operations) and services of only
lower-level layers. This approach simplifies debugging and system verification.
➢ The first layer can be debugged without any concern for the rest of the system.
➢ Once the first layer is debugged, its correct functioning can be assumed while the second
layer is debugged, and so on.
➢ If an error is found during the debugging of a particular layer, the error must be on that
layer, because the layers below it are already debugged.
The microkernel approach structures the operating system by removing all nonessential
components from the kernel and implementing them as user-level programs that
reside in separate address spaces.
The main function of the microkernel is to provide communication, typically through
message passing, between the client program and the various services that are running in user space.
The kernel is the central module of an operating system (OS) and is often called
the heart of the OS.
It is the part of the operating system that loads first, and it remains in main memory.
➢ These features, in turn, provide a platform for developing mobile applications that run
on a multitude of Android-enabled devices.
➢ Google has designed a separate Android API for Java development. Java
applications are compiled into a form that can execute on the Android RunTime (ART),
a virtual machine designed for Android and optimized for mobile devices with limited
memory and CPU processing capabilities.
➢ Java programs are first compiled to a Java bytecode .class file and then translated
into an executable .dex file.
➢ The set of native libraries available for Android applications includes frameworks for
developing web browsers (webkit), database support (SQLite), and network support,
such as secure sockets (SSL).
➢ Google has chosen to abstract the physical hardware through the hardware
abstraction layer, or HAL.
➢ The standard C library used by Linux systems is the GNU C library (glibc).
➢ Google instead developed the Bionic standard C library for Android. Bionic not only
has a smaller memory footprint than glibc but is also designed for the slower CPUs
that characterize mobile devices.
➢ At the bottom of Android’s software stack is the Linux kernel. Google has
modified the Linux kernel used in Android in a variety of areas to support the special
needs of mobile systems, such as power management.
➢ It has also made changes in memory management and allocation and has added a
new form of IPC known as Binder.
• Configure the operating system for the system on which it will run
Booting
• After the computer is turned on and the operating system is loaded into main
memory, the system is ready to accept user commands.
• The initial collection of tasks the computer carries out when turned on is known as
the boot sequence.
• The kernel is located, loaded into the main memory, and executed by the
bootstrap loader. In some systems, the kernel is loaded after a straightforward
bootstrap loader retrieves a more intricate boot program from the disk.
• Restarting a computer is sometimes referred to as rebooting.
LOYOLA INSTITUTE OF TECHNOLOGY
Palanchur, Chennai – 600123.
Approved by AICTE, New Delhi and Affiliated to Anna University, Chennai
(Accredited by NAAC with “A” Grade & An ISO 9001-2015 Certified Institution)
Step 2: Power-On Self-Test (POST)
• This test runs a preliminary examination of the computer's main memory, disk
drives, input/output devices, etc. Moreover, the system emits a beep sound in
case of any errors.
Step 3: Loading of OS
• After POST completes successfully, the BIOS reads the boot sequence stored
in CMOS.
Step 4: System Configuration
Step 5: Loading System Utilities
• In this step, system utilities like antivirus and volume control are loaded into the
memory.
System Boot
• When power is initialized on a system, execution starts at a fixed memory location
• The operating system must be made available to the hardware so the hardware
can start it
– A small piece of code – the bootstrap loader (BIOS), stored in ROM or
EEPROM – locates the kernel, loads it into memory, and starts it
– Sometimes this is a two-step process: a boot block at a fixed location is
loaded by ROM code, which then loads the bootstrap loader from disk
Operating-System Debugging
Kernighan's Law: "Debugging is twice as hard as writing the code in the first place. Therefore, if
you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."
Performance Tuning
Tracing
BCC