
Department of Artificial Intelligence and Data Science

U23CB401 – OPERATING SYSTEMS

Academic year 2024 – 2025 (Even)


Question Bank
Unit-1

PART A
S.No Questions
1 What are the objectives of operating system?
1. Convenience
2. Efficiency
3. Ability to evolve

2 Write about the basic elements of a computer system



Input Unit

Output Unit

Storage Unit

Central Processing Unit (CPU)

3 Define operating system.


An operating system (OS) is a collection of software that manages computer hardware resources and provides common services for computer programs. The operating system is a crucial component of the system software in a computer system.
4 Write the functions of OS.

Security

Job accounting

Memory Management

Processor Management

Device Management

File Management
5 List the Components of OS.

Kernel.

Process Execution

Interrupt

Memory Management

Multitasking
6 List out the data fields associated with Process Control Block.

Process-id
Process State
Process Priority
Accounting Information
Program Counter
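
As a rough illustration (a hypothetical sketch, not taken from any real kernel), the PCB fields above can be pictured as a C structure:

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int            pid;             /* process-id */
    proc_state_t   state;           /* process state */
    int            priority;       /* process priority */
    unsigned long  cpu_time_used;  /* accounting information */
    void          *program_counter; /* saved program counter */
    struct pcb    *next;            /* link for scheduling queues */
} pcb_t;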
7 List the different operating system services.

Program Execution

I/O Operations

File System Manipulation

Communication

8 What is booting?
Booting is the process of loading the operating system into the computer's main memory (RAM) and preparing the system to run applications when the computer is switched on. On modern computers the boot process may take only a few seconds.

9 Differentiate GUI and CLI

The main difference between GUI and CLI is that a Graphical User Interface (GUI) lets the user interact with the system through graphical elements such as windows, icons, and menus, while a Command Line Interface (CLI) lets the user interact with the system by typing commands.
10 Write an example of standard API.

Weather snippets are a common example: Google uses APIs to display relevant data drawn from user search queries. One API usage example we come across daily is weather data; rich weather snippets are commonplace on all platforms, such as Google Search, Apple's Weather app, or even your smart home device.
11 Define the importance of system call.


When a file system requires the creation or deletion of files; reading from and writing to files also require a system call.

Creation and management of new processes.

Network connections also require system calls. This includes sending and
receiving packets.

Access to a hardware device such as a printer, scanner etc. requires a system
call.

12 Compare preemptive and non preemptive scheduling



Preemptive scheduling is a technique in which the CPU is assigned to processes according to their priorities. In preemptive scheduling, a higher-priority process is executed even if a lower-priority process is still in the running stage; the lower-priority process is preempted.

Non-preemptive scheduling is a CPU scheduling method in which a process takes the resource and holds it until it terminates or changes to the waiting state.

13 Identify the three methods of parameter passing.



In mode

Out mode

Inout mode

14 Define dispatcher.

A dispatcher is a special program which comes into play after the scheduler. When the scheduler completes its job of selecting a process, it is the dispatcher that takes the process to the desired state/queue. The dispatcher is the module that gives a process control of the CPU after it has been selected by the short-term scheduler.
15 Define scheduler.

A scheduler is a software product that allows an enterprise to schedule and track computer batch tasks. These units of work include running a security program or updating software. Job schedulers may also manage the job queue for a computer cluster.
PART B
1 Explain the evolution of operating system.

An operating system (OS) is a software program that serves as an intermediary between computer hardware and the user. It coordinates the execution of application programs and manages software and hardware resources such as file management, memory management, input/output, and a variety of peripheral devices such as disk drives and printers. Every computer system must have at least one operating system to run other applications; browsers, MS Office, Notepad, games, and other applications all require an environment in which to execute and fulfill their functions. This section explains how operating systems have evolved over the years.

What is Evolution of Operating Systems?


Operating systems have progressed from slow and expensive systems to today's
technology, which has exponentially increased computing power at comparatively
modest costs. So let's have a detailed look at the evolution of operating systems.

The operating system can be classified into four generations, as follows:

First Generation
Second Generation
Third Generation
Fourth Generation
First Generation
Serial Processing
The evolution of operating systems began with serial processing, which marks the start of electronic computing systems as alternatives to mechanical computers, whose calculation speed was limited and which were prone to human error. Because there is no operating system in this generation, instructions are given to the computer system directly and carried out immediately.
Through the 1940s and 1950s, programmers worked directly against the hardware without an operating system. The challenges here were scheduling and setup time: users signed up for blocks of machine time, wasting computational time, and considerable setup time was required for loading the compiler, saving the compiled program, handling the source program, linking, and buffering. If an intermediate error occurred, the whole process was restarted.

Example: the earliest vacuum-tube computers of the 1940s and 1950s, which ran one program at a time with no operating system at all.

Second Generation

Batch System
The batched systems marked the second generation in the evolution of operating
systems. In the second generation, the batch processing system was implemented,
which allows a job or task to be done in a series and then completed sequentially.
The computer systems of this generation did not have a full operating system, although various operating system functions were available, such as FMS and IBSYS. Batch processing was used to improve computer utilization. Jobs were punched on cards and tapes, scheduled, and then executed one after another on the monitor under Job Control Language. The first computers used in batch operation ran a batch of jobs without pausing or stopping. Programs were written on punch cards and then transferred to tape for processing. When the computer finished one job, it immediately moved on to the next item on the tape. Although this was inconvenient for users, it was designed to keep the expensive computer as busy as possible by running a steady stream of jobs. Memory protection prevented the memory occupied by the monitor from being changed, and a timer prevented any one job from monopolizing the system. The processor remained idle while the input and output devices were in use, which meant poor CPU utilization.

Example: the MVS operating system from IBM is an example of a batch processing operating system.

Third Generation
Multi-Programmed Batched System
The third generation in the evolution of operating systems brought multi-programmed batched systems. In this generation, the operating system was designed to serve numerous users simultaneously. Interactive users could communicate with the computer via online terminals, making the operating system multi-user and multiprogramming. Several jobs are kept in main memory at once, and the processor decides which program to run next using job scheduling algorithms.

Example: Windows and iOS are examples of multi-programmed batched operating systems.

Fourth Generation
In this generation, operating systems are employed for computer networks, where users are aware of the existence of multiple computers connected to one another.

The era of networked computing had begun, and users were given the comfort of the Graphical User Interface (GUI), a far more convenient way of interacting with a computer. In the fourth generation, the time-sharing operating system and the Macintosh operating system came into existence.

Time-Sharing Operating System


Time-sharing had a great impact on the evolution of operating systems. Multiple users can access the system via terminals at the same time, and the processor's time is divided among them. Programs with a command-line interface required printing ports for written responses to prompts and written commands, with the interaction scrolling past like a roll of paper. Time-sharing was initially developed as a replacement for batch systems; the user interacts directly with the computer through a printing port, much like an electric teletype. Several users share the computer at once, and each activity is given a fraction of a second before the system moves on to the next, so a fast server can act on a large number of users' processes at once by cycling through them while each appears to receive full attention. Multiple programs use time-sharing to share the computer system interactively.

Example: Unix Operating System is an example of a time-sharing OS.

Macintosh Operating System


It was based on decades of research into graphical operating systems and applications for personal computers. Ivan Sutherland's pioneering Sketchpad program, developed in the early 1960s, employed many of the characteristics of today's graphical user interface, but the hardware components cost millions of dollars and took up a room. After many research gaps, work on large computers and improvements in hardware made the Macintosh commercially and economically viable. Many research laboratories are still working on research prototypes like Sketchpad, which served as the foundation for anticipated products.

Example: Mac OS X 10.6.8 Snow Leopard and OS X 10.7.5 Lion are examples of Macintosh operating systems.
2 Explain the purpose and importance of system calls with example.
The interface between a process and an operating system is provided by system calls. In general, system calls are available as assembly language instructions and are listed in the manuals used by assembly-level programmers. A system call is usually made when a process in user mode requires access to a resource; the process then requests the kernel to provide the resource via the system call.

In general, system calls are required in the following situations:

When a file system requires the creation or deletion of files; reading from and writing to files also require a system call.
Creation and management of new processes.
Network connections also require system calls. This includes sending and receiving
packets.
Access to a hardware device such as a printer, scanner etc. requires a system call.
Types of System Calls
There are mainly five types of system calls. These are explained in detail as follows:

Process Control
These system calls deal with processes such as process creation, process
termination etc.

File Management
These system calls are responsible for file manipulation such as creating a file,
reading a file, writing into a file etc.
Device Management
These system calls are responsible for device manipulation such as reading from
device buffers, writing into device buffers etc.

Information Maintenance
These system calls handle information and its transfer between the operating system
and the user program.

Communication
These system calls are useful for interprocess communication. They also deal with
creating and deleting a communication connection.

Some examples of all the above types of system calls in Windows and Unix are given as follows:

Types of System Calls | Windows | Linux

Process Control — Windows: CreateProcess(), ExitProcess(), WaitForSingleObject(); Linux: fork(), exit(), wait()

File Management — Windows: CreateFile(), ReadFile(), WriteFile(), CloseHandle(); Linux: open(), read(), write(), close()

Device Management — Windows: SetConsoleMode(), ReadConsole(), WriteConsole(); Linux: ioctl(), read(), write()

Information Maintenance — Windows: GetCurrentProcessID(), SetTimer(), Sleep(); Linux: getpid(), alarm(), sleep()

Communication — Windows: CreatePipe(), CreateFileMapping(), MapViewOfFile(); Linux: pipe(), shmget(), mmap()
Purpose of System Calls
There are many different system calls as shown above. The purpose of some of
those system calls is as follows -

open()
The open() system call provides access to a file in a file system. It allocates resources to the file and provides a handle that the process uses to refer to the file. A file can be opened by multiple processes at the same time or be restricted to one process, depending on the file organization and file system.

read()
The read() system call is used to access data from a file stored in the file system. The file to read is identified by its file descriptor and must be opened using open() before it can be read. In general, the read() system call takes three arguments: the file descriptor, the buffer that stores the read data, and the number of bytes to be read from the file.

write()
The write() system call writes data from a user buffer to a device such as a file. It is one of the ways a program outputs data. In general, the write() system call takes three arguments: the file descriptor, a pointer to the buffer where the data is stored, and the number of bytes to write from the buffer.

close()
The close() system call terminates access to a file. Calling it means that the file is no longer required by the program, so the buffers are flushed, the file metadata is updated, and the file's resources are de-allocated.
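
As a minimal sketch tying these four calls together (POSIX C; the file name example.txt is just a placeholder):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[256];

    int fd = open("example.txt", O_RDONLY);  /* open() returns a file descriptor */
    if (fd < 0) { perror("open"); return 1; }

    ssize_t n = read(fd, buf, sizeof(buf));  /* read(): fd, buffer, byte count */
    if (n > 0)
        write(STDOUT_FILENO, buf, n);        /* write(): fd, buffer, byte count */

    close(fd);                               /* close() releases the descriptor */
    return 0;
}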

wait()
In some systems, a process may wait for another process to complete its execution.
This happens when a parent process creates a child process and the execution of the
parent process is suspended until the child process executes. The suspending of the
parent process occurs with a wait() system call. When the child process completes
execution, the control is returned back to the parent process.

exec()
This system call runs an executable file in the context of an already running process, replacing the previous executable; this is known as an overlay. The original process identifier remains, since a new process is not created, but the data, heap, stack, etc. of the process are replaced by those of the new program.

fork()
Processes use the fork() system call to create processes that are copies of themselves. This is one of the major methods of process creation in operating systems. A parent process that creates a child can suspend itself with wait() until the child finishes executing, after which control returns to the parent process.
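
A minimal POSIX sketch showing fork() and wait() together:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();               /* create a copy of the calling process */
    if (pid == 0) {
        printf("child: pid %d\n", getpid());
        _exit(0);                     /* child terminates */
    } else if (pid > 0) {
        wait(NULL);                   /* parent suspends until the child exits */
        printf("parent: child finished\n");
    }
    return 0;
}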
exit()
The exit() system call is used by a program to terminate its execution. In a
multithreaded environment, this means that the thread execution is complete. The
operating system reclaims resources that were used by the process after the exit()
system call.

kill()
The kill() system call is used by the operating system to send a signal to a process, typically urging it to terminate. However, kill() does not necessarily kill the process; depending on the signal sent, it can have various meanings.
3 Consider the following set of processes, with the length of the CPU burst given in milliseconds:

Process  Arrival Time  Burst Time  Priority
P0       0             5           1
P1       1             3           2
P2       2             8           1
P3       3             6           3

a) Draw Gantt charts that illustrate the execution of these processes using the FCFS, SJF, Priority, SRT, and RR algorithms. b) What is the turnaround time of each process for each scheduling algorithm? c) What is the waiting time of each process for each of these scheduling algorithms? d) Which of the algorithms results in the minimum average waiting time?

First Come First Serve (FCFS)


Jobs are executed on a first come, first served basis.
It is a non-preemptive scheduling algorithm.
Easy to understand and implement.
Its implementation is based on a FIFO queue.
Poor in performance, as the average wait time is high.

Gantt chart (FCFS):

| P0 | P1 | P2 | P3 |
0    5    8    16   22

Wait time of each process (Service Time - Arrival Time):

P0: 0 - 0 = 0
P1: 5 - 1 = 4
P2: 8 - 2 = 6
P3: 16 - 3 = 13

Turnaround time of each process (Completion Time - Arrival Time): P0 = 5, P1 = 7, P2 = 14, P3 = 19.

Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75
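
As a small illustrative sketch (our own, not part of the standard answer), the FCFS waiting times above can be computed directly in C:

#include <stdio.h>

/* FCFS waiting times for the example above
   (arrival times 0,1,2,3; burst times 5,3,8,6, already sorted by arrival). */
int main(void)
{
    int arrival[] = {0, 1, 2, 3};
    int burst[]   = {5, 3, 8, 6};
    int n = 4, start = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        int wait = start - arrival[i];   /* service time - arrival time */
        printf("P%d wait = %d\n", i, wait);
        total_wait += wait;
        start += burst[i];               /* next job starts when this one ends */
    }
    printf("average wait = %.2f\n", (double)total_wait / n);
    return 0;
}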

Shortest Job Next (SJN)


This is also known as shortest job first, or SJF.

This is a non-preemptive scheduling algorithm.

Best approach to minimize waiting time.

Easy to implement in batch systems where the required CPU time is known in advance.

Impossible to implement in interactive systems where the required CPU time is not known.

The processor should know in advance how much time the process will take.

Given: table of processes with their arrival and execution times.

Process  Arrival Time  Execution Time  Service Time
P0       0             5               0
P1       1             3               5
P2       2             8               14
P3       3             6               8

Gantt chart (SJF):

| P0 | P1 | P3 | P2 |
0    5    8    14   22

Process waiting times:

P0: 0 - 0 = 0
P1: 5 - 1 = 4
P2: 14 - 2 = 12
P3: 8 - 3 = 5

Turnaround time of each process: P0 = 5, P1 = 7, P2 = 20, P3 = 11.

Average Wait Time: (0 + 4 + 12 + 5) / 4 = 21 / 4 = 5.25

Priority Based Scheduling


Priority scheduling is a non-preemptive algorithm and one of the most common scheduling algorithms in batch systems.

Each process is assigned a priority; the process with the highest priority is executed first, and so on.

Processes with the same priority are executed on a first come, first served basis.

Priority can be decided based on memory requirements, time requirements, or any other resource requirement.

Given: table of processes with their arrival time, execution time, and priority. Here we consider 1 to be the lowest priority.

Process  Arrival Time  Execution Time  Priority  Service Time
P0       0             5               1         0
P1       1             3               2         11
P2       2             8               1         14
P3       3             6               3         5

Gantt chart (Priority):

| P0 | P3 | P1 | P2 |
0    5    11   14   22

Process waiting times:

P0: 0 - 0 = 0
P1: 11 - 1 = 10
P2: 14 - 2 = 12
P3: 5 - 3 = 2

Turnaround time of each process: P0 = 5, P1 = 13, P2 = 20, P3 = 8.

Average Wait Time: (0 + 10 + 12 + 2) / 4 = 24 / 4 = 6
Shortest Remaining Time
Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.

The processor is allocated to the job closest to completion, but it can be preempted by a newly ready job with a shorter time to completion.

Impossible to implement in interactive systems where the required CPU time is not known.

It is often used in batch environments where short jobs need to be given preference.

Round Robin Scheduling


Round Robin is a preemptive process scheduling algorithm.

Each process is given a fixed time to execute, called a quantum (here, 3 ms).

Once a process has executed for the given time period, it is preempted and another process executes for its time period.

Context switching is used to save the states of preempted processes.

Gantt chart (Round Robin, quantum = 3):

| P0 | P1 | P2 | P3 | P0 | P2 | P3 | P2 |
0    3    6    9    12   14   17   20   22

Wait time of each process (Service Time - Arrival Time):

P0: (0 - 0) + (12 - 3) = 9
P1: (3 - 1) = 2
P2: (6 - 2) + (14 - 9) + (20 - 17) = 12
P3: (9 - 3) + (17 - 12) = 11

Turnaround time of each process: P0 = 14, P1 = 5, P2 = 20, P3 = 17.

Average Wait Time: (9 + 2 + 12 + 11) / 4 = 8.5

Comparing the averages (FCFS 5.75, SJF 5.25, Priority 6, RR 8.5), SJF gives the minimum average waiting time.
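
A small simulation sketch (our own illustration) reproduces the Round Robin figures above for quantum = 3:

#include <stdio.h>

int main(void)
{
    int burst[]   = {5, 3, 8, 6};        /* remaining burst for P0..P3 */
    int orig[]    = {5, 3, 8, 6};
    int arrival[] = {0, 1, 2, 3};
    int finish[4], n = 4, quantum = 3, t = 0;
    int queue[64], head = 0, tail = 0, arrived = 1;

    queue[tail++] = 0;                   /* P0 arrives at t = 0 */
    while (head < tail) {
        int p = queue[head++];
        int run = burst[p] < quantum ? burst[p] : quantum;
        t += run;
        burst[p] -= run;
        while (arrived < n && arrival[arrived] <= t)
            queue[tail++] = arrived++;   /* enqueue processes that arrived meanwhile */
        if (burst[p] > 0)
            queue[tail++] = p;           /* preempted: back of the queue */
        else
            finish[p] = t;
    }

    int total = 0;
    for (int i = 0; i < n; i++) {
        int w = finish[i] - arrival[i] - orig[i];  /* waiting = turnaround - burst */
        printf("P%d wait = %d\n", i, w);
        total += w;
    }
    printf("average wait = %.2f\n", (double)total / n);
    return 0;
}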
4 What is a process and explain its states
Process States in OS
In this answer we'll look at process states: the states a process passes through during its lifetime. The exact set of states differs from one operating system to the next, but a few common states are depicted in the diagram.

Process State Diagram

Let's first understand what a process is.


A process is defined as any program that is under execution. One example of creating a process is the fork() system call: the parent process uses fork() to create a child process. Multiple processes may be created and stored in memory at the same time.

While process creation is taking place, the process is in the new state, and when the process terminates it is in the terminated (completed) state.

The states of the process are stored in Process Control Block(PCB). PCB is a
special data structure that stores information about the process.

Let’s learn about the various states a process can go through in detail in the next
section along with the process state diagram.

List of States of the Process

New State
This is the first state of the process life cycle. While process creation is taking place, the process is in the new state.

Ready State
When the process creation gets completed, the process comes into a ready state.
During this state, the process is loaded into the main memory and will be placed in
the queue of processes which are waiting for the CPU allocation.

While the process is being created it is in the new state, and once creation completes it moves to the ready state.

Running State
Whenever the CPU is allocated to the process from the ready queue, the process
state changes to Running.

Block or Wait State


While executing instructions, a process may need to carry out tasks that do not require the CPU, such as input-output, or it may need resources currently held by other processes. In such conditions the process gives up the CPU and its state is changed to blocked (wait). The process is placed in the queue of waiting/blocked processes in main memory.

Terminated or Completed
When the entire set of instructions has been executed, the process is complete and its state is changed to terminated (completed). During this state the PCB of the process is also deleted.

It is possible that there are multiple processes present in main memory at the same time.

Suspend Ready
Whenever main memory is full, a process in the ready state is swapped out from main memory to secondary memory. While a ready-state process undergoes this transition from main memory to secondary memory, its state is changed to suspend ready. Once main memory has enough space for the process, it is brought back into main memory and returns to the ready state.

A process in the waiting or blocked state can also be swapped out to secondary memory. Let's see which state such a process moves to.

Suspend Wait or Suspend Blocked


Whenever a process in the waiting or blocked state in main memory gets swapped out to secondary memory because main memory is completely full, its state is changed to suspend wait (suspend blocked).

5 Outline the concept of process scheduling.

Definition
The process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another process
on the basis of a particular strategy.

Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

Categories of Scheduling
There are two categories of scheduling:

Non-preemptive: Here a resource can't be taken from a process until the process completes execution. The switching of resources occurs when the running process terminates or moves to a waiting state.
Preemptive: Here the OS allocates the resources to a process for a fixed amount of time. During resource allocation, the process switches from the running state to the ready state or from the waiting state to the ready state. This switching occurs because the CPU may give priority to other processes and replace the currently running process with a higher-priority one.
Process Scheduling Queues
The OS maintains all Process Control Blocks (PCBs) in Process Scheduling
Queues. The OS maintains a separate queue for each of the process states and PCBs
of all processes in the same execution state are placed in the same queue. When the
state of a process is changed, its PCB is unlinked from its current queue and moved
to its new state queue.

Process Scheduling Queuing


The OS can use different policies to manage each queue (FIFO, Round Robin,
Priority, etc.). The OS scheduler determines how to move processes between the
ready and run queues which can only have one entry per processor core on the
system; in the above diagram, it has been merged with the CPU.

Two-State Process Model

1. Running
When a new process is created, it enters the system in the running state.

2. Not Running
Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the queue is a pointer to a particular process, and the queue is implemented using a linked list. The dispatcher is used as follows: when a process is interrupted, it is transferred to the waiting queue; if the process has completed or aborted, it is discarded. In either case, the dispatcher then selects a process from the queue to execute.

Schedulers

Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler

Long Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which programs
are admitted to the system for processing. It selects processes from the queue and
loads them into memory for execution. Process loads into the memory for CPU
scheduling.

The primary objective of the job scheduler is to provide a balanced mix of jobs,
such as I/O bound and processor bound. It also controls the degree of
multiprogramming. If the degree of multiprogramming is stable, then the average
rate of process creation must be equal to the average departure rate of processes
leaving the system.

On some systems, the long-term scheduler may be absent or minimal; time-sharing operating systems have no long-term scheduler. The long-term scheduler comes into play when a process changes state from new to ready.

Short Term Scheduler


It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with a chosen set of criteria. It carries out the change from the ready state to the running state: the CPU scheduler selects one process from among those ready to execute and allocates the CPU to it.

Short-term schedulers, also known as dispatchers, make the decision of which


process to execute next. Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler


Medium-term scheduling is a part of swapping. It removes processes from memory and thereby reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out processes.

A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.

Comparison among Schedulers

S.N.  Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1  It is a job scheduler | It is a CPU scheduler | It is a process-swapping scheduler
2  Speed is less than the short-term scheduler | Speed is the fastest of the three | Speed is between the short- and long-term schedulers
3  It controls the degree of multiprogramming | It provides less control over the degree of multiprogramming | It reduces the degree of multiprogramming
4  It is almost absent or minimal in time-sharing systems | It is also minimal in time-sharing systems | It is a part of time-sharing systems
5  It selects processes from the pool and loads them into memory for execution | It selects those processes that are ready to execute | It can re-introduce a process into memory, and execution can be continued
Context Switching
Context switching is the mechanism of storing and restoring the state (context) of a CPU in the process control block so that process execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system.

When the scheduler switches the CPU from executing one process to executing another, the state of the currently running process is stored into its process control block. The state of the process to run next is then loaded from its own PCB and used to set the PC, registers, etc., at which point the second process can start executing.

Process Context Switch


Context switches are computationally intensive, since register and memory state must be saved and restored. To reduce context-switching time, some hardware systems employ two or more sets of processor registers. When a process is switched, the following information is stored for later use.

Program counter
Scheduling information
Base and limit register values
Currently used registers
Changed state
I/O state information
Accounting information
UNIT-2
PART-A
S.No Questions
1 Define threads.
A thread is a path of execution within a process. A process can contain multiple
threads. A thread is also known as lightweight process.
Types of Threads
There are two types of threads.
* User Level Thread
* Kernel Level Thread
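
A minimal POSIX threads sketch of a process containing multiple threads (worker and id are our own illustrative names):

#include <pthread.h>
#include <stdio.h>

void *worker(void *arg)                  /* each thread runs this function */
{
    printf("thread %d running\n", *(int *)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid[2];
    int id[2] = {1, 2};

    for (int i = 0; i < 2; i++)          /* threads share the process's code and data */
        pthread_create(&tid[i], NULL, worker, &id[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);      /* wait for both threads to finish */
    return 0;
}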
2 List the various states of threads
(1) Ready
(2) Running
(3) Waiting
(4) Delayed
(5) Blocked
(excluding the CREATION and FINISHED states)
3 Differentiate threads and processes
A process is called a heavyweight process, while a thread is lightweight, since all threads within a process share its code, data, and resources. Process switching requires the operating system's intervention, whereas thread switching does not need to call the operating system or interrupt the kernel.
4 Define process synchronization

Process synchronization is the task of coordinating the execution of processes in such a way that no two processes can access the same shared data and resources at the same time. It is a procedure used to preserve the appropriate order of execution of cooperating processes.
5 What is meant by critical section?

A critical section is the part of a program that accesses shared resources. The resource may be any resource in the computer, such as a memory location, a data structure, the CPU, or any I/O device.
6 List the sections of a program.

When a program is loaded into memory and becomes a process, it can be divided into four sections:
stack
heap
text
data
7 Indicate the definition of mutex locks.
A mutex lock has a boolean variable, available, whose value indicates whether the lock is available. If the lock is available, a call to acquire() succeeds, and the lock is then considered unavailable. A process that attempts to acquire an unavailable lock is blocked until the lock is released.
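
A minimal sketch of this acquire/release behaviour using a POSIX mutex (shared_counter and increment are illustrative names):

#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int shared_counter = 0;                  /* resource protected by the lock */

void *increment(void *arg)
{
    pthread_mutex_lock(&lock);           /* blocks if the lock is unavailable */
    shared_counter++;                    /* critical section */
    pthread_mutex_unlock(&lock);         /* lock becomes available again */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}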

8 Define deadlock
A deadlock is a situation in which two computer programs sharing the same resource are effectively preventing each other from accessing the resource, resulting in both programs ceasing to function.

9 List the issues in thread



The fork() and exec() system calls

Signal handling

Thread cancellation

Thread-local storage

Scheduler activations
10 Write the importance of semaphores

With the help of semaphores, resources can be managed flexibly.

Semaphores are machine-independent, and they are implemented in the machine-independent code of the microkernel.

Binary semaphores do not allow more than one process into the critical section at a time, while counting semaphores can allow a bounded number of threads to access a resource.

11 Give the disadvantages of Semaphores.



Semaphores are complicated, so the wait and signal operations must be implemented in the correct order to prevent deadlocks.

Semaphores are impractical for large-scale use, as their use leads to a loss of modularity. This happens because the wait and signal operations prevent the creation of a structured layout for the system.

Semaphores may lead to priority inversion, where low-priority processes access the critical section first and high-priority processes access it later.

12 Indicate the necessary conditions of deadlock.

Mutual exclusion
Hold and wait
No pre-emption
Circular wait

13 What is meant by hold and wait.

The hold and wait condition states that a process is holding one or more resources that may (or may not) be required by other processes. The key point is that the process holds onto those resources and will not release them until it gets access to the requested resources (which are being held by other processes).
14 What are the two methods in semaphore to overcome the wastage of CPU
cycles.

Wait(S) or P: If the semaphore value is greater than 0, decrement the value; otherwise, wait until the value is greater than 0 and then decrement it.

Signal(S) or V: Increment the value of the semaphore.
15 Write the importance of Peterson's solution.

Peterson's solution provides a good algorithmic description of solving the critical-section problem and illustrates some of the complexities involved in designing software that addresses the requirements of mutual exclusion, progress, and bounded waiting.

PART - B
1 Explain about different Multithreading models with a neat diagram
Multithreading Model:
Multithreading allows an application to divide its task among individual threads: the same process or task can be carried out by a number of threads, i.e., more than one thread performs the work. With the use of multithreading, multitasking can be achieved.



The main drawback of single-threaded systems is that only one task can be performed at a time; multithreading overcomes this by allowing multiple tasks to be performed concurrently.

For example:

(Figure: multiple clients accessing a multithreaded web server)


In the above example, client1, client2, and client3 are accessing the web server
without any waiting. In multithreading, several tasks can run at the same time.

In an operating system, threads are divided into user-level threads and kernel-level threads. User-level threads are managed above the kernel, without kernel support, whereas kernel-level threads are managed directly by the operating system. Nevertheless, there must be some form of relationship between user-level and kernel-level threads.

There are three established multithreading models classifying these relationships:

Many-to-one multithreading model
One-to-one multithreading model
Many-to-many multithreading model
Many to one multithreading model:
The many-to-one model maps many user-level threads to one kernel thread. This type of relationship provides an effective context-switching environment and is easily implemented even on a simple kernel with no thread support.
The disadvantage of this model is that, since only one kernel-level thread is scheduled at any given time, it cannot take advantage of the hardware acceleration offered by multiprocessor systems. All thread management is done in user space, and if one thread blocks, the whole process blocks.

(Figure: the many-to-one model associates all user-level threads with a single kernel-level thread.)

One to one multithreading model


The one-to-one model maps a single user-level thread to a single kernel-level thread. This type of relationship allows multiple threads to run in parallel. However, this benefit comes with a drawback: every new user thread requires creating a corresponding kernel thread, and this overhead can hinder the performance of the parent process. Windows and Linux operating systems try to tackle this problem by limiting the growth of the thread count.

(Figure: the one-to-one model associates each user-level thread with its own kernel-level thread.)

Many to many multithreading model


In this model there are several user-level threads and several kernel-level threads; the number of kernel threads created depends on the particular application, and the developer can create threads at both levels (the counts need not be the same). The many-to-many model is a compromise between the other two models. In this model, if a thread makes a blocking system call, the kernel can schedule another thread for execution. It also avoids the complexity present in the previous models. Though this model allows the creation of multiple kernel threads, true concurrency cannot be achieved by this model, because the kernel can schedule only one process at a time.

(Figure: the many-to-many model associates several user-level threads with an equal or smaller number of kernel-level threads.)
2 What is a critical section? Specify the requirements for a solution to the critical
section problem
Process Synchronization is the task of coordinating the execution of processes in a
way that no two processes can have access to the same shared data and resources.

It is especially needed in a multi-process system, when multiple processes run together and more than one process tries to gain access to the same shared resource or data at the same time.
This can lead to inconsistency of shared data: a change made by one process is not necessarily reflected when other processes access the same shared data. To avoid this kind of data inconsistency, the processes need to be synchronized with each other.

How Process Synchronization Works?
For example, suppose process A changes the data in a memory location while another process B is trying to read the data from the same memory location. There is a high probability that the data read by the second process will be erroneous.

Sections of a Program
Here are the four essential sections of a program with a critical section:

Entry Section: It is part of the process which decides the entry of a particular
process.
Critical Section: This part allows one process to enter and modify the shared variable.
Exit Section: The exit section allows the other processes that are waiting in the entry section to enter the critical section. It also ensures that a process that has finished its execution is removed through this section.
Remainder Section: All other parts of the Code, which is not in Critical, Entry, and
Exit Section, are known as the Remainder Section.
What is Critical Section Problem?
A critical section is a segment of code that can be accessed by only a single process at a specific point in time. The section contains shared data or resources that other processes also need to access.

The entry to the critical section is handled by the wait() function, represented as P().
The exit from a critical section is controlled by the signal() function, represented as V().
Other processes waiting to execute their critical sections must wait until the current process completes its execution.

Rules for Critical Section


A solution to the critical section problem must enforce all three rules:

Mutual Exclusion: not more than one process can execute in its critical section at any one time.
Progress: when no process is in the critical section and some process wants to enter, the processes not in their remainder sections must decide, in finite time, which one will go in.
Bounded Waiting: when a process makes a request to enter its critical section, there is a bound on the number of times other processes may enter their critical sections before the request is granted; once that limit is reached, the system must allow the requesting process into its critical section.
Solutions To The Critical Section
In process synchronization, the critical section plays the main role, so the critical section problem must be solved.

Here are some widely used methods to solve the critical section problem.

Peterson Solution
Peterson's solution is a widely used solution to the critical section problem. The algorithm was developed by the computer scientist Gary L. Peterson, which is why it is named Peterson's solution.

In this solution, when one process is executing in its critical section, the other process executes only the rest of its code, and vice versa. This ensures that only a single process runs in the critical section at a specific time.

Example

PROCESS Pi:
FLAG[i] = true
while (turn != i AND CS is not free) { wait; }
/* CRITICAL SECTION */
FLAG[i] = false
turn = j   // choose another process to go to the CS
Assume there are N processes (P1, P2, … PN), and at some point every process requires entry to the critical section.
A FLAG[] array of size N is maintained, false by default. Whenever a process requires entry to the critical section, it sets its flag to true; for example, if Pi wants to enter, it sets FLAG[i] = TRUE.
Another variable, TURN, indicates the number of the process whose turn it is to enter the CS.
The process that enters the critical section, while exiting, changes TURN to another number from the list of ready processes.
Example: if TURN is 2, then P2 enters the critical section, and while exiting sets TURN = 3, so P3 breaks out of its wait loop.
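
For the classic two-process form of Peterson's algorithm, a minimal C11 sketch looks like this (our own illustration, not the N-process FLAG/TURN variant above; it uses sequentially consistent atomics so the flag/turn operations are not reordered, and enter_region/leave_region are hypothetical names):

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool flag[2];                     /* flag[i]: process i wants to enter */
atomic_int  turn;

void enter_region(int i)                 /* i is 0 or 1 */
{
    int j = 1 - i;                       /* the other process */
    atomic_store(&flag[i], true);        /* I want to enter */
    atomic_store(&turn, j);              /* but you go first */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                                /* busy wait */
}

void leave_region(int i)
{
    atomic_store(&flag[i], false);       /* leave the critical section */
}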
Synchronization Hardware
Sometimes the critical section problem is also resolved by hardware. Some operating systems offer lock functionality where a process acquires a lock when entering the critical section and releases the lock after leaving it.

So when another process is trying to enter the critical section, it will not be able to
enter as it is locked. It can only do so if it is free by acquiring the lock itself.
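
A minimal sketch of such a hardware-backed lock using the C11 atomic_flag, whose test-and-set maps onto the hardware instruction (acquire/release are our own names):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;

void acquire(void)
{
    /* atomic_flag_test_and_set atomically sets the flag and returns
       its previous value: the classic test-and-set instruction. */
    while (atomic_flag_test_and_set(&lock))
        ;                                /* spin until the lock was free */
}

void release(void)
{
    atomic_flag_clear(&lock);            /* unlock */
}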

Mutex Locks
Synchronization hardware is not a simple method for everyone to implement, so a strict software method known as mutex locks was also introduced.

In this approach, in the entry section of code, a LOCK is obtained over the critical
resources used inside the critical section. In the exit section that lock is released.

Semaphore Solution
A semaphore is simply a non-negative variable shared between threads. It is another algorithm or solution to the critical section problem: a signaling mechanism whereby a thread waiting on a semaphore can be signaled by another thread.

It uses two atomic operations, 1) wait and 2) signal, for process synchronization.

Example

WAIT(S):
    while (S <= 0)
        ;            // busy wait
    S = S - 1;

SIGNAL(S):
    S = S + 1;
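
For comparison, a minimal runnable sketch using POSIX semaphores, whose sem_wait()/sem_post() play the roles of wait()/signal():

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t s;

void *task(void *arg)
{
    sem_wait(&s);                        /* wait(): decrements, blocks at 0 */
    printf("in critical section\n");
    sem_post(&s);                        /* signal(): increments, wakes a waiter */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&s, 0, 1);                  /* binary semaphore, initial value 1 */
    pthread_create(&t1, NULL, task, NULL);
    pthread_create(&t2, NULL, task, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&s);
    return 0;
}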
3 Discuss the importance of Readers-Writers Problem with its algorithm.
The Problem Statement
There is a shared resource that can be accessed by multiple processes. There are two types of processes in this context: readers and writers. Any number of readers can read from the shared resource simultaneously, but only one writer can write to it at a time. While a writer is writing data to the resource, no other process can access the resource. A writer cannot write to the resource if a nonzero number of readers are accessing it at that time.

The Solution
From the above problem statement, it is evident that readers have higher priority than writers: if a writer wants to write to the resource, it must wait until there are no readers currently accessing the resource.

Here we use one mutex m and a semaphore w. An integer variable read_count is used to maintain the number of readers currently accessing the resource; it is initialized to 0. Both m and w are initially given the value 1.

Rather than having each reader lock the shared resource itself, we use the mutex m to make each process acquire and release a lock whenever it updates the read_count variable.

The code for the writer process looks like this:

while(TRUE)
{
wait(w);

/* perform the write operation */

signal(w);
}
And, the code for the reader process looks like this:

while(TRUE)
{
//acquire lock
wait(m);
read_count++;
if(read_count == 1)
wait(w);

//release lock
signal(m);
/* perform the reading operation */

// acquire lock
wait(m);
read_count--;
if (read_count == 0)
    signal(w);

// release lock
signal(m);
}
Here is the code explained:
As seen above in the code for the writer, the writer just waits on the w semaphore
until it gets a chance to write to the resource.
After performing the write operation, it increments w so that the next writer can
access the resource.
On the other hand, in the code for the reader, the lock is acquired whenever the
read_count is updated by a process.
When a reader wants to access the resource, first it increments the read_count value,
then accesses the resource and then decrements the read_count value.
The semaphore w is used by the first reader which enters the critical section and the
last reader which exits the critical section.
The reason for this is that when the first reader enters the critical section, the writer is blocked from the resource; only new readers can access the resource now. Similarly, when the last reader exits the critical section, it signals the writer using the w semaphore, because there are zero readers now and a writer can have the chance to access the resource.
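
A pthread-based sketch of the same solution (assuming POSIX semaphores and mutexes; the thread bodies are trimmed to comments):

#include <pthread.h>
#include <semaphore.h>

sem_t w;                                 /* writer / first-reader semaphore */
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
int read_count = 0;

void *reader(void *arg)
{
    pthread_mutex_lock(&m);
    if (++read_count == 1)
        sem_wait(&w);                    /* first reader locks out writers */
    pthread_mutex_unlock(&m);

    /* ... read the shared resource ... */

    pthread_mutex_lock(&m);
    if (--read_count == 0)
        sem_post(&w);                    /* last reader lets writers in */
    pthread_mutex_unlock(&m);
    return NULL;
}

void *writer(void *arg)
{
    sem_wait(&w);
    /* ... write to the shared resource ... */
    sem_post(&w);
    return NULL;
}

int main(void)
{
    sem_init(&w, 0, 1);                  /* w starts at 1, as in the pseudocode */
    /* spawn reader and writer threads as needed */
    return 0;
}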
4 Discuss the importance of Dining philosophers Problem with its algorithm.
The dining philosophers problem is another classic synchronization problem, used to evaluate situations where multiple resources must be allocated to multiple processes.

What is the Problem Statement?


Consider there are five philosophers sitting around a circular dining table. The
dining table has five chopsticks and a bowl of rice in the middle as shown in the
below figure.

Dining Philosophers Problem

At any instant, a philosopher is either eating or thinking. When a philosopher wants to eat, he uses two chopsticks, one from his left and one from his right. When a philosopher wants to think, he puts both chopsticks down at their original places.

Here's the Solution


From the problem statement, it is clear that a philosopher can think for an indefinite
amount of time. But when a philosopher starts eating, he has to stop at some point
of time. The philosopher is in an endless cycle of thinking and eating.
An array of five semaphores, stick[5], is used, one for each of the five chopsticks.

The code for each philosopher looks like:

while (TRUE)
{
    wait(stick[i]);
    /* mod is used because if i = 4, the next
       chopstick is 0 (the dining table is circular) */
    wait(stick[(i + 1) % 5]);

    /* eat */

    signal(stick[i]);
    signal(stick[(i + 1) % 5]);

    /* think */
}
When a philosopher wants to eat the rice, he will wait for the chopstick at his left
and picks up that chopstick. Then he waits for the right chopstick to be available,
and then picks it too. After eating, he puts both the chopsticks down.

But if all five philosophers are hungry simultaneously and each of them picks up one chopstick, a deadlock occurs, because each will wait forever for another chopstick. The possible solutions for this are:

A philosopher must be allowed to pick up the chopsticks only if both the left and
right chopsticks are available.
Allow only four philosophers to sit at the table at a time. That way, if all four philosophers pick up four chopsticks, there will still be one chopstick left on the table, so one philosopher can start eating; eventually two chopsticks become available, and deadlock is avoided. (A code sketch of a related fix follows below.)
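
A minimal sketch of an asymmetric-ordering fix (our own illustration, one well-known variant, not the only one): even-numbered philosophers pick up the left chopstick first and odd-numbered ones the right, so a circular wait can never form.

#include <pthread.h>
#include <semaphore.h>

#define N 5
sem_t stick[N];

void *philosopher(void *arg)
{
    int i = *(int *)arg;
    int first  = (i % 2 == 0) ? i : (i + 1) % N;   /* even: left first */
    int second = (i % 2 == 0) ? (i + 1) % N : i;   /* odd: right first */

    sem_wait(&stick[first]);
    sem_wait(&stick[second]);
    /* eat */
    sem_post(&stick[second]);
    sem_post(&stick[first]);
    /* think */
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    int id[N];
    for (int i = 0; i < N; i++)
        sem_init(&stick[i], 0, 1);       /* each chopstick starts free */
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}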
5 Explain the bankers algorithm with an example.
The Banker's algorithm is used to avoid deadlock and allocate resources safely to each process in the computer system. It tests for a safe state ('S-state') by examining all possible allocation outcomes before deciding whether an allocation should be allowed, and it helps the operating system share resources successfully among all processes. The algorithm is named after the way a bank decides whether a loan can be safely sanctioned: it checks whether granting the request still leaves the bank able to satisfy its other customers. In this section we will learn the Banker's algorithm in detail and solve problems based on it. To understand the Banker's algorithm, first consider a real-world example of it.

Suppose the number of account holders in a particular bank is 'n' and the total money in the bank is 'T'. If an account holder applies for a loan, the bank first subtracts the loan amount from the total cash and approves the loan only if the remaining cash is still enough to satisfy the demands of its other customers. This precaution is taken so that if another person applies for a loan or withdraws money later, the bank can still manage and operate everything without any disruption to the functionality of the banking system.

It works similarly in an operating system. When a new process is created, it must provide all relevant information to the operating system: upcoming resource requests, maximum resource demands, and possible delays. Based on these criteria, the operating system decides which process sequence should be executed or should wait so that no deadlock occurs in the system. Therefore, the Banker's algorithm is also known as a deadlock avoidance algorithm in the operating system.

Advantages
Following are the essential characteristics of the Banker's algorithm:

It contains various resources that meet the requirements of each process.


Each process should provide information to the operating system for upcoming
resource requests, the number of resources, and how long the resources will be held.
It helps the operating system manage and control process requests for each type of
resource in the computer system.
The algorithm has a Max resource attribute that indicates the maximum number of resources each process can hold in the system.
Disadvantages
It requires a fixed number of processes, and no additional processes can be started
in the system while executing the process.
The algorithm does not allow processes to change their maximum needs while their tasks are being processed.
Each process has to know and state its maximum resource requirement in advance.
Resource requests are granted within finite time, but that finite bound can be long (for example, up to a year).
The Banker's algorithm requires knowing three things in advance:

How many instances of each resource each process can request. This is denoted by the [MAX] request.
How many instances of each resource each process is currently holding. This is denoted by the [ALLOCATED] resource.
How many instances of each resource are currently available in the system. This is denoted by the [AVAILABLE] resource.
Following are the important data structures terms applied in the banker's algorithm
as follows:

Suppose n is the number of processes, and m is the number of each type of resource
used in a computer system.

Available: It is an array of length 'm' that defines the number of available resources of each type. Available[j] = K means that 'K' instances of resource type R[j] are available in the system.
Max: It is an n x m matrix that defines the maximum demand of each process. Max[i][j] = K means that process P[i] may request at most K instances of resource type R[j].
Allocation: It is an n x m matrix that indicates the resources currently allocated to each process. Allocation[i][j] = K means that process P[i] is currently allocated K instances of resource type R[j].
Need: It is an n x m matrix representing the remaining resource need of each process. Need[i][j] = K means that process P[i] may require K more instances of resource type R[j] to complete its assigned work. Need[i][j] = Max[i][j] - Allocation[i][j].
Finish: It is a vector of length n. It holds a Boolean value (true/false) indicating whether each process has been allocated its requested resources and has released them all after finishing its task.
The Banker's Algorithm is the combination of the safety algorithm and the resource
request algorithm to control the processes and avoid deadlock in a system:

Safety Algorithm
The safety algorithm checks whether the system is in a safe state, i.e., whether a safe sequence exists, within the Banker's algorithm:

1. There are two vectors, Work and Finish, of length m and n respectively.

Initialize: Work = Available

Finish[i] = false for i = 0, 1, 2, …, n - 1.

2. Find an i for which both conditions hold:

Finish[i] == false
Need[i] <= Work

If no such i exists, go to step 4.

3. Work = Work + Allocation[i]   // reclaim the resources of the finished process

Finish[i] = true

Go to step 2 to check the resource availability status for the next process.

4. If Finish[i] == true for all i, the system is in a safe state.

Resource Request Algorithm


A resource request algorithm checks how a system will behave when a process
makes each type of resource request in a system as a request matrix.

Let us create a resource request array Request[i] for each process P[i]. Request[i][j] = K means that process P[i] requires 'K' instances of resource type R[j].

1. If the number of requested resources of each type is less than or equal to the Need resources, go to step 2; if the condition fails, process P[i] has exceeded its maximum claim for the resource. As the expression suggests:

If Request(i) <= Need


Go to step 2;

2. If the number of requested resources of each type is less than or equal to the available resources, go to step 3. As the expression suggests:

If Request(i) <= Available


Else Process P[i] must wait for the resource since it is not available for use.

3. The requested resources are provisionally allocated to the process by changing the state:

Available = Available - Request[i]
Allocation[i] = Allocation[i] + Request[i]
Need[i] = Need[i] - Request[i]

If the resulting resource-allocation state is safe, the resources are allocated to process P[i]. If the new state is unsafe, P[i] must wait for its request Request[i], and the old resource-allocation state is restored.

Example: Consider a system that contains five processes P1, P2, P3, P4, P5 and three resource types A, B, and C. Resource type A has 10 instances, B has 5, and C has 7.

Process   Allocation (A B C)   Max (A B C)   Available (A B C)
P1        0 1 0                7 5 3         3 3 2
P2        2 0 0                3 2 2
P3        3 0 2                9 0 2
P4        2 1 1                2 2 2
P5        0 0 2                4 3 3
Answer the following questions using the banker's algorithm:

What is the content of the need matrix?


Determine if the system is safe or not.
What will happen if the resource request (1, 0, 0) for process P1 can the system
accept this request immediately?
Ans. 2: Context of the need matrix is as follows:

Need [i] = Max [i] - Allocation [i]


Need for P1: (7, 5, 3) - (0, 1, 0) = 7, 4, 3
Need for P2: (3, 2, 2) - (2, 0, 0) = 1, 2, 2
Need for P4: (2, 2, 2) - (2, 1, 1) = 0, 1, 1
Need for P5: (4, 3, 3) - (0, 0, 2) = 4, 3, 1

Process Need
A B C
P1 7 4 3
P2 1 2 2
P3 6 0 0
P4 0 1 1
P5 4 3 1
Hence, we created the context of need matrix.

Ans. 2: Apply the banker's safety algorithm:

Available resources of A, B and C are 3, 3, and 2.

Now we check whether each process's remaining need can be met by the available
resources.

Step 1: For process P1:
Need <= Available
7, 4, 3 <= 3, 3, 2: condition is false.
So, we examine another process, P2.

Step 2: For process P2:
Need <= Available
1, 2, 2 <= 3, 3, 2: condition is true.
New Available = Available + Allocation(P2)
(3, 3, 2) + (2, 0, 0) => 5, 3, 2
Similarly, we examine another process, P3.

Step 3: For process P3:
P3 Need <= Available
6, 0, 0 <= 5, 3, 2: condition is false.
Similarly, we examine another process, P4.

Step 4: For process P4:
Need <= Available
0, 1, 1 <= 5, 3, 2: condition is true.
New Available = Available + Allocation(P4)
(5, 3, 2) + (2, 1, 1) => 7, 4, 3
Similarly, we examine another process, P5.

Step 5: For process P5:
P5 Need <= Available
4, 3, 1 <= 7, 4, 3: condition is true.
New Available = Available + Allocation(P5)
(7, 4, 3) + (0, 0, 2) => 7, 4, 5

Now, we again examine the needs of the remaining processes, P1 and P3.

Step 6: For process P1:
P1 Need <= Available
7, 4, 3 <= 7, 4, 5: condition is true.
New Available = Available + Allocation(P1)
(7, 4, 5) + (0, 1, 0) => 7, 5, 5
So, we examine the remaining process, P3.

Step 7: For process P3:
P3 Need <= Available
6, 0, 0 <= 7, 5, 5: condition is true.
New Available = Available + Allocation(P3)
(7, 5, 5) + (3, 0, 2) => 10, 5, 7

Hence, executing the banker's algorithm shows that the system is in a safe state,
with the safe sequence P2, P4, P5, P1, P3.

Ans. 3: For the request (1, 0, 0) by process P1: Request <= Need, since (1, 0, 0) <=
(7, 4, 3), and Request <= Available, since (1, 0, 0) <= (3, 3, 2). Allocating the
request provisionally gives Available = (2, 3, 2), and re-running the safety
algorithm on the new state still yields a safe sequence (P2, P4, P5, P1, P3). Hence
the system can accept the request, and P1 gets the request immediately.
UNIT-3
PART-A
S.No Questions
1 Define Swapping.
A process needs to be in memory to be executed. However, a process can be
swapped temporarily out of memory to a backing store and then brought back into
memory for continued execution. This process is called swapping.

2 Define Dynamic Linking.


Dynamic linking is similar to dynamic loading, but rather than loading being
postponed until execution time, linking is postponed. This feature is usually used
with system libraries, such as language subroutine libraries. A stub is included in
the image for each library-routine reference. The stub is a small piece of code that
indicates how to locate the appropriate memory-resident library routine, or how to
load the library if the routine is not already present.

3 What do you mean by Best Fit and First Fit?


Best fit allocates the smallest hole that is big enough. The entire list has to be
searched, unless it is sorted by size. This strategy produces the smallest leftover
hole.
First fit allocates the first hole that is big enough. Searching can either start at
the beginning of the set of holes or where the previous first-fit search ended.
Searching can be stopped as soon as a free hole that is big enough is found.
4 How is memory protected in a paged environment?
Protection bits that are associated with each frame accomplish memory protection
in a paged environment. The protection bits can be checked to verify that no writes
are being made to a read-only page.
5 Write about External and Internal Fragmentation?
External fragmentation exists when enough total memory space exists to satisfy a
request, but it is not contiguous; storage is fragmented into a large number of small
holes.
When the allocated memory may be slightly larger than the requested memory, the
difference between these two numbers is internal fragmentation.
6 What are Pages and Frames?
Paging is a memory management scheme that permits the physical-address space
of a process to be non-contiguous. In the case of paging, physical memory is
broken into fixed-sized blocks called frames and logical memory is broken into
blocks of
the same size called pages.
7 What is the basic method of Segmentation?
Segmentation is a memory management scheme that supports the user view of
memory. A logical address space is a collection of segments. The logical address
consists of segment number and offset. If the offset is legal, it is added to the
segment base to produce the address in physical memory of the desired byte.

8 What is Virtual Memory?


Virtual memory is a technique that allows the execution of processes that may not
be completely in memory. It is the separation of user logical memory from
physical memory. This separation provides an extremely large virtual memory,
when only a smaller physical memory is available.

9 What is Demand Paging?


Virtual memory is commonly implemented by demand paging. In demand paging,
the pager brings only those necessary pages into memory instead of swapping in a
whole process. Thus it avoids reading into memory pages that will not be used
anyway, decreasing the swap time and the amount of physical memory needed.
10 What is the basic approach of Page Replacement?
If no frame is free, find one that is not currently being used and free it.
A frame can be freed by writing its contents to swap space, and changing the page
table to indicate that the page is no longer in memory. Now the freed frame can be
used to hold the page for which the process faulted.
11 What is the various Page Replacement Algorithms used for Page Replacement?

FIFO page replacement

Optimal page replacement

LRU page replacement

LRU approximation page replacement

Counting based page replacement

Page buffering algorithm.

12 What are the major problems to implement Demand Paging?


The two major problems in implementing demand paging are developing a frame-
allocation algorithm and a page-replacement algorithm.

13 What is a Reference String?


An algorithm is evaluated by running it on a particular string of memory references
and computing the number of page faults. The string of memory reference is called
a reference string.

14 What do you mean by Compaction?

Compaction is a solution to external fragmentation. The memory contents are
shuffled to place all free memory together in one large block. It is possible only
if relocation is dynamic and is done at execution time.

15 Define Dynamic Loading.

To obtain better memory-space utilization, dynamic loading is used. With dynamic
loading, a routine is not loaded until it is called. All routines are kept on disk
in a relocatable load format. The main program is loaded into memory and executed.
If a routine needs another routine, the calling routine checks whether the routine
has been loaded. If not, the relocatable linking loader is called to load the
desired routine into memory.

PART-B
S.No Questions
1 Define paging and describe the structure of the page
table with necessary diagrams
The data structure that is used by the virtual memory system in the operating system
of a computer in order to store the mapping between physical and logical addresses
is commonly known as Page Table.

The logical address generated by the CPU is translated into the physical address
with the help of the page table.

Thus page table mainly provides the corresponding frame number (base address of
the frame) where that page is stored in the main memory.

The above diagram shows the paging model of Physical and logical memory.

Characteristics of the Page Table


Some of the characteristics of the Page Table are as follows:

It is stored in the main memory.

Generally, the number of entries in the page table = the number of pages into which
the process is divided.

PTBR means page table base register and it is basically used to hold the base
address for the page table of the current process.

Each process has its own independent page table.

Techniques used for Structuring the Page Table


Some of the common techniques that are used for structuring the Page table are as
follows:

Hierarchical Paging

Hashed Page Tables


Inverted Page Tables

Let us cover these techniques one by one;

Hierarchical Paging
Another name for Hierarchical Paging is multilevel paging.

There might be a case where the page table is too big to fit in a contiguous space,
so we may have a hierarchy with several levels.

In this type of paging, the logical address space is broken up into multiple page
tables.

Hierarchical Paging is one of the simplest techniques and for this purpose, a two-
level page table and three-level page table can be used.

Two Level Page Table


Consider a system having 32-bit logical address space and a page size of 1 KB and
it is further divided into:

Page Number consisting of 22 bits.

Page Offset consisting of 10 bits.

Since the page table itself is paged, the page number is further divided into:

Page Number consisting of 12 bits.

Page Offset consisting of 10 bits.

Thus the Logical address is as follows:

In the above diagram,

P1 is an index into the Outer Page table.


P2 indicates the displacement within the page of the Inner page Table.

As address translation works from the outer page table inward, this scheme is known
as a forward-mapped page table.

The figure below shows the address translation scheme for a two-level page table.
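The translation itself is just bit slicing. Below is a small Python illustration for
the 32-bit / 1 KB configuration above; the dictionary-based page tables are an
assumed stand-in for the real hardware-walked structures:

# 10-bit offset; the 22-bit page number splits into p1 (12 bits) and p2 (10 bits)
OFFSET_BITS, P2_BITS = 10, 10

def translate(addr, outer_table):
    offset = addr & ((1 << OFFSET_BITS) - 1)
    p2 = (addr >> OFFSET_BITS) & ((1 << P2_BITS) - 1)   # index into inner table
    p1 = addr >> (OFFSET_BITS + P2_BITS)                # index into outer table
    frame = outer_table[p1][p2]          # two lookups: outer table, then inner
    return frame * (1 << OFFSET_BITS) + offset

outer = {0: {3: 42}}                     # toy mapping: p1=0, p2=3 -> frame 42
addr = (0 << 22) | (3 << 10) | 17        # p1=0, p2=3, offset=17
print(translate(addr, outer))            # 42*1024 + 17 = 43025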

Three Level Page Table


For a system with a 64-bit logical address space, a two-level paging scheme is not
appropriate. Let us suppose that the page size in this case is 4 KB. If we use the
two-level scheme, then the addresses will look like this:

Thus in order to avoid such a large table, there is a solution and that is to divide the
outer page table, and then it will result in a Three-level page table:

Hashed Page Tables


This approach is used to handle address spaces that are larger than 32 bits.

In this scheme, the virtual page number is hashed into a page table.

This Page table mainly contains a chain of elements hashing to the same elements.

Each element mainly consists of :

The virtual page number

The value of the mapped page frame.

A pointer to the next element in the linked list.


Given below figure shows the address translation scheme of the Hashed Page Table:

The above Figure shows Hashed Page Table

The Virtual Page numbers are compared in this chain searching for a match; if the
match is found then the corresponding physical frame is extracted.
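That chained lookup can be sketched in a few lines of Python (the bucket layout and
table size are assumptions made for illustration):

TABLE_SIZE = 1024                            # number of hash buckets (assumed)

def lookup(hashed_table, vpn):
    bucket = hashed_table[vpn % TABLE_SIZE]  # hash the virtual page number
    for entry_vpn, frame in bucket:          # walk the chain in that bucket
        if entry_vpn == vpn:                 # compare VPNs looking for a match
            return frame                     # matching physical frame extracted
    raise KeyError("page fault: unmapped virtual page %d" % vpn)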

In this scheme, a variation for 64-bit address space commonly uses clustered page
tables.

Clustered Page Tables


These are similar to hashed page tables, but here each entry refers to several pages
(for example, 16) rather than one.

They are mainly used for sparse address spaces, where memory references are
non-contiguous and scattered.

Inverted Page Tables


The inverted page table basically combines a page table and a frame table into a
single data structure.

There is one entry for each real page (frame) of memory, rather than one entry per
virtual page.

And the entry mainly consists of the virtual address of the page stored in that real
memory location along with the information about the process that owns the page.

Though this technique decreases the memory that is needed to store each page table;
but it also increases the time that is needed to search the table whenever a page
reference occurs.

Given below figure shows the address translation scheme of the Inverted Page
Table:

In this scheme, we need to keep track of the process id in each entry, because many
processes may have the same logical addresses.
Also, many entries can map to the same index in the page table after going through
the hash function, so chaining is used to handle this.
2 Describe the concept of swapping in memory management with a neat diagram
Swapping is a memory management technique and is used to temporarily remove
the inactive programs from the main memory of the computer system. Any process
must be in the memory for its execution, but can be swapped temporarily out of
memory to a backing store and then again brought back into the memory to
complete its execution. Swapping is done so that other processes get memory for
their execution.

Due to the swapping technique, performance usually gets affected, but it also helps
in running multiple big processes in parallel. The swapping process is also known
as a technique for memory compaction. Basically, low-priority processes may be
swapped out so that processes with a higher priority may be loaded and executed.

Let us understand this technique with the help of a figure given below:

The above diagram shows swapping of two processes where the disk is used as a
Backing store.

In the above diagram, suppose there is a multiprogramming environment with a
round-robin scheduling algorithm; whenever the time quantum expires, the memory
manager starts to swap out the processes that have just finished and swaps another
process into the memory that has been freed. In the meantime, the CPU scheduler
allocates the time slice to some other process in memory.

The swapping of processes by the memory manager is fast enough that some processes
will be in memory, ready to execute, when the CPU scheduler wants to reschedule the
CPU.

A variant of this swapping technique is used with priority-based scheduling
algorithms. If a higher-priority process arrives and wants service, the memory
manager swaps out lower-priority processes, then loads and executes the
higher-priority process. When the process with higher priority finishes, the process
with lower priority is swapped back in and continues its execution. This variant is
sometimes known as roll out, roll in.

There are two more concepts that come in the swapping technique and these are:
swap in and swap out.

Swap In and Swap Out in OS


The procedure by which a process is moved from the hard disk into main memory (RAM)
is commonly known as swap in.
On the other hand, swap out is the method of removing a process from main memory
and placing it on the hard disk.

Advantages of Swapping
The advantages/benefits of the Swapping technique are as follows:

The swapping technique mainly helps the CPU to manage multiple processes within
a single main memory.

This technique helps to create and use virtual memory.

With the help of this technique, the CPU can perform several tasks simultaneously.
Thus, processes need not wait too long before their execution.

This technique is economical.

This technique can be easily applied to priority-based scheduling in order to


improve its performance.

Disadvantages of Swapping
The drawbacks of the swapping technique are as follows:

Inefficiency may arise if a resource or a variable is commonly used by the
processes that are participating in the swapping process.

If the algorithm used for swapping is not good then the overall method can increase
the number of page faults and thus decline the overall performance of processing.

If the computer system loses power at the time of high swapping activity then the
user might lose all the information related to the program.
3 Explain Contiguous Memory Allocation with its memory protection and memory
allocation phases.
In the Contiguous Memory Allocation, each process is contained in a single
contiguous section of memory. In this memory allocation, all the available memory
space remains together in one place which implies that the freely available memory
partitions are not spread over here and there across the whole memory space.

In Contiguous memory allocation which is a memory management technique,


whenever there is a request by the user process for the memory then a single section
of the contiguous memory block is given to that process according to its
requirement. Contiguous Memory allocation is achieved just by dividing the
memory into the fixed-sized partition.
The memory can be divided either in the fixed-sized partition or in the variable-
sized partition in order to allocate contiguous space to user processes.

We will cover the concept of different Contiguous Memory allocation techniques


one by one.

Fixed-size Partition Scheme


This technique is also known as Static partitioning. In this scheme, the system
divides the memory into fixed-size partitions. The partitions may or may not be the
same size. The size of each partition is fixed as indicated by the name of the
technique and it cannot be changed.

In this partition scheme, each partition may contain exactly one process. There is a
problem that this technique will limit the degree of multiprogramming because the
number of partitions will basically decide the number of processes.

Whenever any process terminates then the partition becomes available for another
process.

Example
Let's take an example of fixed size partitioning scheme, we will divide a memory
size of 15 KB into fixed-size partitions:

It is important to note that these partitions are allocated to the processes as they
arrive and the partition that is allocated to the arrived process basically depends on
the algorithm followed.

If there is some wastage inside the partition then it is termed Internal


Fragmentation.

Advantages of Fixed-size Partition Scheme


This scheme is simple and is easy to implement.
It supports multiprogramming, as multiple processes can be stored inside the main
memory.
Management is easy using this scheme.

Disadvantages of Fixed-size Partition Scheme


Some disadvantages of using this scheme are as follows:

1. Internal Fragmentation

Suppose the size of the process is less than the size of the partition; in that
case, some of the partition's space is wasted and remains unused. This wastage
inside the memory is generally termed internal fragmentation.

As shown in the above diagram, a 70 KB partition is used to load a process of
50 KB, so the remaining 20 KB is wasted.

2. Limitation on the size of the process

If the size of a process is more than that of the maximum-sized partition, then that
process cannot be loaded into memory. Due to this, a condition is imposed on the
size of the process: the size of the process cannot be larger than the size of the
largest partition.

3. External Fragmentation

It is another drawback of the fixed-size partition scheme: the total unused space of
the various partitions cannot be used to load a process, even though space is
available, because it is not available in a contiguous fashion.

4. Degree of multiprogramming is less

In this partition scheme, the size of a partition cannot change according to the
size of the process. Thus the degree of multiprogramming is small and fixed.

Variable-size Partition Scheme


This scheme is also known as Dynamic partitioning and came into existence to
overcome the drawback, i.e., internal fragmentation, caused by static partitioning.
In this partitioning scheme, allocation is done dynamically.
The size of the partition is not declared initially. Whenever any process arrives, a
partition of size equal to the size of the process is created and then allocated to the
process. Thus the size of each partition is equal to the size of the process.

As partition size varies according to the need of the process so in this partition
scheme there is no internal fragmentation.

Advantages of Variable-size Partition Scheme


Some Advantages of using this partition scheme are as follows:

No Internal Fragmentation: As space in the main memory is allocated strictly
according to the requirement of the process, there is no chance of internal
fragmentation. Also, there will be no unused space left in the partition.

Degree of Multiprogramming is Dynamic: As there is no internal fragmentation in
this partition scheme, there is no unused space in the memory. Thus more processes
can be loaded into the memory at the same time.

No Limitation on the Size of Process: In this partition scheme, as the partition is
allocated to the process dynamically, the size of the process is not restricted,
because the partition size is decided according to the process size.

Disadvantages of Variable-size Partition Scheme


Some Disadvantages of using this partition scheme are as follows:

External Fragmentation: The absence of internal fragmentation, which is an
advantage of this partition scheme, does not mean there will be no external
fragmentation. Let us understand this with the help of an example: in the above
diagram, process P1 (3 MB) and process P3 (8 MB) completed their execution, so
there are two holes left, of 3 MB and 8 MB. Suppose a process P4 of size 15 MB
arrives. The empty space in memory cannot be allocated to it, as no spanning is
allowed in contiguous allocation: the rule says that a process must be contiguously
present in main memory in order to get executed. Thus it results in external
fragmentation.

Difficult Implementation: The implementation of this partition scheme is difficult
compared to the fixed partitioning scheme, as it involves the allocation of memory
at run time rather than during system configuration. The OS keeps track of all the
partitions, but here allocation and deallocation are done very frequently and the
partition sizes change each time, so it is difficult for the operating system to
manage everything.
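Dynamic partitioning also forces the allocator to choose which hole to give each
arriving process; the classic hole-selection policies are first fit and best fit
(defined in Part A above). A minimal Python sketch, with holes assumed to be kept as
(start, size) pairs:

def first_fit(holes, size):
    # first hole that is big enough
    for start, length in holes:
        if length >= size:
            return (start, length)
    return None

def best_fit(holes, size):
    # smallest hole that is still big enough (smallest leftover hole)
    fits = [h for h in holes if h[1] >= size]
    return min(fits, key=lambda h: h[1]) if fits else None

holes = [(0, 3), (10, 8), (25, 2)]   # e.g. the 3 MB and 8 MB holes above, plus a 2 MB one
print(first_fit(holes, 2))           # (0, 3): the first adequate hole
print(best_fit(holes, 2))            # (25, 2): the smallest adequate hole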
4 When do page faults occur? Consider the reference string: 6, 7, 8, 9, 6, 7, 1, 6,
7, 8, 9, 1. How many page faults occur, and what is the page-fault rate, for the
FIFO and LRU replacement algorithms, assuming three and four page frames?
A page fault occurs when a running process references a page that is not currently
present in main memory, so the page must be brought in from the backing store,
replacing an existing page if no frame is free.
First In First Out (FIFO) -This algorithm is similar to the operations of the queue.
All the pages are stored in the queue in the order they are allocated frames in the
main memory. The one which is allocated first stays in the front of the queue. The
one which is allocated the memory first is replaced first. The one which is at the
front of the queue is removed at the time of replacement.

Example: Consider the Pages referenced by the CPU in the order are 6, 7, 8, 9, 6, 7,
1, 6, 7, 8, 9, 1

As shown in the above figure, let there be 3 frames in memory.
6, 7, 8 are allocated to the vacant slots as they are not in memory.
When 9 comes page fault occurs, it replaces 6 which is the oldest in memory or
front element of the queue.
Then 6 comes (Page Fault), it replaces 7 which is the oldest page in memory now.
Similarly, 7 replaces 8, 1 replaces 9.
Then 6 comes which is already in memory (Page Hit).
Then 7 comes (Page Hit).
Then 8 replaces 6, 9 replaces 7. Then 1 comes (Page Hit).
Number of Page Faults = 9 (page-fault rate = 9/12 = 75%).

While using the First In First Out algorithm, the number of page faults can increase
when the number of frames is increased. This phenomenon is called Belady's Anomaly.
Let's take the same order of pages as above, with 4 frames.

In the picture shown above, it can be seen that the number of page faults is 10
(page-fault rate = 10/12 ≈ 83%). There were 9 page faults with 3 frames and 10 page
faults with 4 frames: the number of page faults increased on increasing the number
of frames.
Optimal Page Replacement - In this algorithm, the page which would be used after
the longest interval is replaced. In other words, the page which is farthest to come in
the upcoming sequence is replaced.
Example: Consider the Pages referenced by the CPU in the order are 6, 7, 8, 9, 6, 7,
1, 6, 7, 8, 9, 1, 7, 9, 6

First, all the frames are empty. 6, 7, 8 are allocated to the frames (Page Fault).
Now, 9 comes and replaces 8 as it is the farthest in the upcoming sequence. 6 and 7
would come earlier than that so not replaced.
Then, 6 comes which is already present (Page Hit).
Then 7 comes (Page Hit).
Then 1 replaces 9 similarly (Page Fault).
Then 6 comes (Page Hit), 7 comes (Page Hit).
Then 8 replaces 6 (Page Fault) and 9 replaces 8 (Page Fault).
Then 1, 7, 9 come respectively which are already present in the memory.
Then 6 replaces 9 (Page Fault), it can also replace 7 and 1 as no other page is
present in the upcoming sequence.
The number of Page Faults = 8

This is the most optimal algorithm, but it is impractical because it is impossible
to predict the upcoming page references.
Least Recently Used - This algorithm works on past reference history. The page
whose most recent use lies furthest in the past, i.e., the least recently used page,
is replaced.

Example: Consider the Pages referenced by the CPU in the order are 6, 7, 8, 9, 6, 7,
1, 6, 7, 8, 9, 1, 7, 9, 6

First, all the frames are empty. 6, 7, 8 are allocated to the frames (Page Fault).
Now, 9 comes and replaces 6 which is used the earliest (Page Fault).
Then, 6 replaces 7, 7 replaces 8, 1 replaces 9 (Page Fault).
Then 6 comes which is already present (Page Hit).

Then 7 comes (Page Hit).


Then 8 replaces 1, 9 replaces 6, 1 replaces 7, and 7 replaces 8 (Page Fault).
Then 9 comes (Page Hit).
Then 6 replaces 1 (Page Fault).
The number of Page Faults = 12
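Both fault counts can be verified mechanically. Below is a short Python simulator for
FIFO and LRU (a home-grown helper written for this sketch, not a library API):

from collections import OrderedDict, deque

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:              # page fault
            faults += 1
            if len(frames) == nframes:      # full: evict the oldest arrival
                frames.remove(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

def lru_faults(refs, nframes):
    frames, faults = OrderedDict(), 0       # dict order tracks recency of use
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # hit: now the most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)  # evict the least recently used
            frames[page] = True
    return faults

refs = [6, 7, 8, 9, 6, 7, 1, 6, 7, 8, 9, 1]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10 (Belady's anomaly)
print(lru_faults(refs + [7, 9, 6], 3))             # 12, as traced above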
5 Briefly explain and compare fixed and dynamic memory partitioning schemes.
Fixed (or static) Partitioning in Operating System:-
Memory management in operating systems is the task responsible for allocating and
managing the main memory of the computer. The memory management function keeps
track of the status of each memory location, either allocated or free, to ensure
effective and efficient use of primary memory.

There are two techniques of memory management: contiguous and non-contiguous. In
the contiguous technique, the executing process must be completely loaded into
memory.

The contiguous technique is divided into two types:

1. Fixed (or static) partitioning

2. Variable (or dynamic) partitioning.

Contiguous memory management techniques
Fixed Partitioning:
This is the oldest and simplest technique for keeping more than one process in main
memory. The number of partitions (non-overlapping) in RAM is fixed, but each
partition may or may not have the same size. As the allocation is contiguous,
spanning is not allowed. Here partitioning is done before execution, during system
configuration.

Advantages of Fixed Partition –


1. Easy to implement:

The algorithms required to implement fixed partitioning are easy to implement. A
process is simply placed into a partition, without regard to the emergence of
internal and external fragmentation.

2. Little OS overhead:

Fixed partitioning requires little computing power from the OS.

Disadvantages of fixed partition –


1. Internal Fragmentation:

Main memory usage is inefficient. Any program, even the smallest, occupies an
entire partition. This can lead to internal fragmentation.
2. External Fragmentation:

The total unused space of the different partitions cannot be used to load a pending
process, even though the space is available, because spanning is not allowed.

3. Limit Process Size:

Processes larger than the partition size cannot be accommodated in main memory.
The partition cannot be resized according to the size of the incoming process.

4. Limitations on the degree of multiprogramming:

Partitioning of main memory is done before execution or during system
configuration. Main memory is divided into a fixed number of partitions. Suppose
RAM has N1 partitions and N2 is the number of processes; then the condition
N2 <= N1 must be met, i.e., the number of resident processes cannot exceed the
number of partitions in RAM.

Dynamic Partitioning:-
Dynamic Partitioning is another technique to overcome the problems faced in fixed
partitioning techniques for memory management. In dynamic partitioning, we do
not initially declare the size of the partition. Instead, we declare the size of the
partition at load time.

In this scheme, the operating system retains the first partition. The rest of the
space is divided into partitions as processes arrive, so the partition size and the
process size remain the same. Dynamic partitioning thus avoids the internal
fragmentation problem by sizing the partition according to the needs of the process.

Advantages of Dynamic Partitioning over Static Partitioning:-


Advantages of dynamic partition over static partition such as follow:-

1. There is no limit on the size of the process:-

In fixed partitioning, if the size of the process is greater than the size of the
partition, we cannot load the process into memory. In dynamic partitioning, by
contrast, the partition is resized according to the process size, so the process
size is not restricted.

2. The degree of multiprogramming is dynamic:

In a dynamic partition, there is no internal fragmentation, so no unused space is


available in the partition. At the same time, we can load a lot of processes into
memory.

3. No internal fragmentation:

In dynamic partitioning, partitions are created dynamically as per the requirements


of the process. So, internal fragmentation is not present in a dynamic partition. Also
reason behind this is that in a dynamic partition, there is no space in the partition
which is not used.

Disadvantages of dynamic partitioning:


The disadvantages of dynamic partitioning are:

1. Complex Memory Allocation:-

In the case of fixed partitioning, once we have created the list of partitions, we
cannot change it again. But in the case of dynamic partitioning, the tasks of
allocation and deallocation are difficult, because the size of a partition changes
every time a partition is allocated to a new process, and the operating system has
to keep track of each partition.

So, because allocation happens at run time and the partitions must be resized every
time, it is difficult for the operating system to handle everything.

2. External Fragmentation:-

The main drawback of dynamic division is external fragmentation.

Let's say we have three processes P1 (2 MB), P2 (5 MB), and P3 (2 MB), and we load
the processes into different partitions of main memory.

Now processes P1 and P3 complete, and the space allocated to them becomes free. We
now have two unused 2 MB partitions (4 MB in total) in main memory, but we cannot
use this space to load a 4 MB process, because the space is not contiguous.

The rule says that a process can be loaded only if it fits contiguously in main
memory. So, if we want to avoid external fragmentation, we have to change this
rule.

What is swapping?
Swapping is a technique by which a process is moved from main memory to secondary
(backing) storage for some time, and that memory is freed for some other process;
after some time, the system brings the process back from secondary memory into main
memory.
Although swapping has an impact on performance, it allows multiple large processes
to run simultaneously, which is why swapping is also known as a technique for
memory compaction.
The swap time of a process is the time taken to transfer the process from main
memory to secondary memory, plus the time taken to transfer it back from secondary
memory and regain space for itself in main memory.
UNIT-4
PART-A
S.No Questions
1 List the various file attributes.
A file has certain other attributes, which vary from one operating system to
another, but typically consist of these: Name, identifier, type, location, size,
protection, time, and date and user identification
2 What are the various file operations?
The six basic file operations are

Creating a file

Writing a file

Reading a file

Repositioning within a file

Deleting a file

Truncating a file

3 How free-space is managed using bit vector implementation? List its


advantages.
The free-space list is implemented as a bit map or bit vector. Each block is
represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit
is 0.
Advantages: It is relatively simple and its efficiency in finding the first free block
or n consecutive free blocks on the disk.
4 What are the types of Path Names?
Path names can be of two types.
Absolute path name: Begins at the root and follows a path down to the specified
file, giving the directory names on the path.
Relative path name: Defines a path from the current directory.
5 How does DMA increase system concurrency?
DMA increases system concurrency by allowing the CPU to perform tasks while
the DMA system transfers data via the system and memory buses.
6 Why rotational latency is not considered in disk scheduling?
Most disks do not export their rotational position information to the host. Even if
they did, the time for this information to reach the scheduler would be subject to
imprecision and the time consumed by the scheduler is variable, so the rotational
position information would become incorrect. Further, the disk requests are usually
given in terms of logical block numbers, and the mapping between logical blocks
and physical locations is very complex
7 Why it is important to scale up system bus and device speed as CPU speed
increases?

Consider a system which performs 50% I/O and 50% computes. Doubling the CPU
performance on this system would increase total system performance by only 50%.
Doubling both system aspects would increase performance by 100%. Generally, it
is important to remove the current system bottleneck in order to increase overall
system performance, rather than blindly increasing the performance of individual
system components.

8 Define C-SCAN scheduling.


C-SCAN (circular SCAN) is a variant of the elevator (SCAN) disk scheduling
algorithm, which determines the motion of the disk's arm and head in servicing read
and write requests. In SCAN, the arm travels in its current direction servicing
requests, like a building elevator that continues up or down until empty, and then
reverses. In C-SCAN, when the head reaches the end of the disk it immediately
returns to the beginning without servicing any requests on the return trip, then
resumes servicing in the same direction, treating the cylinders as a circular list.
9 What are the functions of Virtual File System (VFS) layer in file system
implementation?
A virtual file system (VFS) or virtual file system switch is an abstraction layer on
top of a more concrete file system. The purpose of a VFS is to allow client
applications to access different types of concrete file systems in a uniform way. A
VFS can, for example, be used to access local and network storage devices
transparently without the client application noticing the difference.
10 Examine how an index file is used to speed up the access in direct-access files?
Keep an index in memory; the index gives the key and the disk location of its
corresponding record. Scan the index to find the record you want, and then access
it directly.
11 List various layers of a file system
The file system is composed of many different levels. Each level in the design uses
the feature of the lower levels to Creating new features for use by higher levels.

Application programs

Logical file system

File-organization module

Basic file system

I/O control

Devices

12 What problems could occur if a system allowed a file system to be mounted


simultaneously at more than one location?
There would be multiple paths to the same file, which could confuse users or
encourage mistakes (deleting the file via one path deletes it for all the others).
13 Explain what ISAM is.
ISAM is Indexed sequential access method in which the file is stored in sorted
order. ISAM has a master index file, indicating in what part of another index file the
key you want is; the secondary index points to the file records. In both cases, a
binary search is used to locate a record.

14 Define UFD and MFD.


In the two-level directory structure, each user has her own user file directory (UFD).

Each UFD has a similar structure, but lists only the files of a single user. When a
job starts, the system's master file directory (MFD) is searched. The MFD is indexed
by
the user name or account number, and each entry points to the UFD for that user.
15 Determine the most common schemes for defining the logical structure of a
directory.
The most common schemes for defining the logical structure of a directory are:
Single-Level Directory, Two-Level Directory, Tree-Structured Directories,
Acyclic-Graph Directories, and General Graph Directory.

PART - B
1 Explain the different disk scheduling algorithms with examples.
A Process makes the I/O requests to the operating system to access the disk. Disk
Scheduling Algorithm manages those requests and decides the order of the disk
access given to the requests.

Why Disk Scheduling Algorithm is needed?


Disk Scheduling Algorithms are needed because a process can make multiple I/O
requests and multiple processes run at the same time. The requests made by a
process may be located at different sectors on different tracks. Due to this, the seek
time may increase more. These algorithms help in minimizing the seek time by
ordering the requests made by the processes.

Important Terms related to Disk Scheduling Algorithms


Seek Time - It is the time taken by the disk arm to locate the desired track.
Rotational Latency - The time taken by a desired sector of the disk to rotate itself to
the position where it can access the Read/Write heads is called Rotational Latency.
Transfer Time - It is the time taken to transfer the data requested by the processes.
Disk Access Time - Disk Access time is the sum of the Seek Time, Rotational
Latency, and Transfer Time.
Disk Scheduling Algorithms
First Come First Serve (FCFS)
In this algorithm, the requests are served in the order they come. Those who come
first are served first. This is the simplest algorithm.

Eg. Suppose the order of requests are 70, 140, 50, 125, 30, 25, 160 and the initial
position of the Read-Write head is 60.

Seek Time = Distance Moved by the disk arm =
(70-60)+(140-70)+(140-50)+(125-50)+(125-30)+(30-25)+(160-25) = 480

Shortest Seek Time First (SSTF)


In this algorithm, the shortest seek time is checked from the current position and
those requests which have the shortest seek time is served first. In simple words,
theclosest request from the disk arm is served first.
Eg. Suppose the order of requests are 70, 140, 50, 125, 30, 25, 160 and the initial
position of the Read-Write head is 60.

Seek Time = Distance Moved by the disk arm =
(60-50)+(50-30)+(30-25)+(70-25)+(125-70)+(140-125)+(160-140) = 170

SCAN
In this algorithm, the disk arm moves in a particular direction till the end and serves
all the requests in its path, then it returns to the opposite direction and moves till the
last request is found in that direction and serves all of them.

Eg. Suppose the order of requests are 70, 140, 50, 125, 30, 25, 160 and the initial
position of the Read-Write head is 60. And it is given that the disk arm should move
towards the larger value.

Seek Time = Distance Moved by the disk arm = (170-60)+(170-25) = 255 (here the arm
is assumed to sweep to the disk's last track, 170, before reversing).

LOOK
In this algorithm, the disk arm moves in a particular direction till the last request is
found in that direction and serves all of them found in the path, and then reverses its
direction and serves the requests found in the path again up to the last request
found. The only difference between SCAN and LOOK is, it doesn't go to the end it
only moves up to which the request is found.

Eg. Suppose the order of requests are 70, 140, 50, 125, 30, 25, 160 and the initial
position of the Read-Write head is 60. And it is given that the disk arm should move
towards the larger value.

Seek Time = Distance Moved by the disk arm = (160-60)+(160-25) = 235 (the arm
reverses at 160, the last requested track).

C-SCAN
This algorithm is the same as the SCAN algorithm. The only difference between
SCAN and C-SCAN is, it moves in a particular direction till the last and serves the
requests in its path. Then, it returns in the opposite direction till the end and doesn't
serve the request while returning. Then, again reverses the direction and serves the
requests found in the path. It moves circularly.

Eg. Suppose the order of requests are 70, 140, 50, 125, 30, 25, 160 and the initial
position of the Read-Write head is 60. And it is given that the disk arm should move
towards the larger value.

Seek Time = Distance Moved by the disk arm = (170-60)+(170-0)+(50-0) = 330 (again
assuming the last track is 170; the return sweep from 170 to 0 services no
requests).


C-LOOK
This algorithm is also the same as the LOOK algorithm. The only difference
between LOOK and C-LOOK is, it moves in a particular direction till the last
request is found and serves the requests in its path. Then, it returns in the opposite
direction till the last request is found in that direction and doesn't serve the request
while returning. Then, again reverses the direction and serves the requests found in
the path. It also moves circularly.

Eg. Suppose the order of requests are 70, 140, 50, 125, 30, 25, 160 and the initial
position of the Read-Write head is 60. And it is given that the disk arm should move
towards the larger value.

Seek Time = Distance Moved by the disk arm = (160-60)+(160-25)+(50-25)=260
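These totals are plain sums of head movements, so they are easy to check in code. A
small Python sketch for FCFS and SSTF (illustrative helpers; the SSTF tie between
equally distant tracks is broken toward the smaller track, matching the example
above):

def fcfs_seek(head, requests):
    total = 0
    for track in requests:                  # serve strictly in arrival order
        total += abs(track - head)
        head = track
    return total

def sstf_seek(head, requests):
    pending, total = list(requests), 0
    while pending:                          # always serve the closest pending track
        track = min(pending, key=lambda t: (abs(t - head), t))
        total += abs(track - head)
        pending.remove(track)
        head = track
    return total

requests = [70, 140, 50, 125, 30, 25, 160]
print(fcfs_seek(60, requests))              # 480
print(sstf_seek(60, requests))              # 170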


2 Explain different file allocation methods and free space management techniques in
detail.
File Allocation Methods

There are different kinds of methods that are used to allocate disk space. We must
select the best method for the file allocation because it will directly affect the
system performance and system efficiency. With the help of the allocation method,
we can utilize the disk, and also files can be accessed.

There are various types of file allocations method:

Contiguous allocation
Extents
Linked allocation
Clustering
FAT
Indexed allocation
Linked Indexed allocation
Multilevel Indexed allocation
Inode
There are different types of file allocation methods, but we mainly use three types
of file allocation methods:

Contiguous allocation
Linked list allocation
Indexed allocation
These methods provide quick access to the file blocks and also the utilization of

disk space in an efficient manner.

Contiguous Allocation: - Contiguous allocation is one of the most used methods for
allocation. Contiguous allocation means we allocate the block in such a manner, so
that in the hard disk, all the blocks get the contiguous physical block.

We can see in the below figure that in the directory, we have three files. In the table,
we have mentioned the starting block and the length of all the files. We can see in
the table that for each file, we allocate a contiguous block.
Example of contiguous allocation

We can see in the given diagram, that there is a file. The name of the file is ‘mail.’
The file starts from the 19th block and the length of the file is 6. So, the file
occupies 6 blocks in a contiguous manner. Thus, it will hold blocks 19, 20, 21, 22,
23, 24.

Advantages of Contiguous Allocation
The advantages of contiguous allocation are:

The contiguous allocation method gives excellent read performance.


Contiguous allocation is easy to implement.
The contiguous allocation method supports both types of file access methods that
are sequential access and direct access.
The Contiguous allocation method is fast because, in this method number of seeks
is less due to the contiguous allocation of file blocks.
Disadvantages of Contiguous allocation
The disadvantages of contiguous allocation method are:

In the contiguous allocation method, sometimes disk can be fragmented.


In this method, it is difficult to increase the size of the file due to the availability of
the contiguous memory block.
Linked List Allocation
The linked list allocation method overcomes the drawbacks of the contiguous
allocation method. In this file allocation method, each file is treated as a linked
list of disk blocks. In the linked list allocation method, it is not required that
the disk blocks assigned to a specific file are in contiguous order on the disk. The
directory entry comprises a pointer to the starting file block and also to the
ending file block. Each disk block that is allocated or assigned to a file contains
a pointer, and that pointer points to the next disk block allocated to the same
file.

Example of linked list allocation

We can see in the below figure that we have a file named ‘jeep.’ The value of the
start is 9. So, we have to start the allocation from the 9th block, and blocks are
allocated in a random manner. The value of the end is 25. It means the allocation is
finished on the 25th block. We can see in the below figure that the block (25)
comprised of -1, which means a null pointer, and it will not point to another block.

Advantages of Linked List Allocation
There are various advantages of linked list allocation:

In linked list allocation, there is no external fragmentation. Due to this, we can utilize
the memory better.
In linked list allocation, a directory entry only comprises of the starting block
address.
The linked allocation method is flexible because we can quickly increase the size of
the file because, in this to allocate a file, we do not require a chunk of memory in a
contiguous form.
Disadvantages of Linked list Allocation
There are various disadvantages of linked list allocation:

Linked list allocation does not support direct access or random access.
In linked list allocation, we need to traverse each block.
If a pointer in the linked list breaks, then the file gets corrupted.
Each disk block needs some extra space to store the pointer.
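Reading a linked file is a pointer chase from the starting block. The toy Python
chain below mirrors the 'jeep' example (start 9, end 25), with the intermediate
blocks assumed for illustration:

END = -1                                    # null pointer marking the last block
next_block = {9: 16, 16: 1, 1: 10, 10: 25, 25: END}   # assumed block chain

def file_blocks(start):
    block = start
    while block != END:
        yield block                         # one disk read per hop
        block = next_block[block]           # follow the pointer: no random access

print(list(file_blocks(9)))                 # [9, 16, 1, 10, 25]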
Indexed Allocation
The Indexed allocation method is another method that is used for file allocation. In
the index allocation method, we have an additional block, and that block is known
as the index block. For each file, there is an individual index block. In the index
block, the ith entry holds the disk address of the ith file block. We can see in the
below figure that the directory entry comprises of the address of the index block.

Advantages of Index Allocation
The advantages of index allocation are:

The index allocation method solves the problem of external fragmentation.


Index allocation provides direct access.
Disadvantages of Index Allocation
The disadvantages of index allocation are:

In index allocation, pointer overhead is more.


We can lose the entire file if an index block is not correct.
It is totally a wastage to create an index for a small file.
A single index block cannot hold all the pointer for files with large sizes.

To resolve this problem, there are various mechanism which we can use:

Linked scheme
Multilevel Index
Combined Scheme
Linked Scheme: - In the linked scheme, to hold the pointer, two or more than two
index blocks are linked together. Each block contains the address of the next index
block or a pointer.
Multilevel Index: - In the multilevel index, to point the second-level index block,
we use a first-level index block that in turn points to the blocks of the disk, occupied
by the file. We can extend this up to 3 or more than 3 levels depending on the
maximum size of the file.
Combined Scheme: - In a combined scheme, there is a special block which is called
an information node (Inode). The inode comprises of all the information related to
the file like authority, name, size, etc. To store the disk block addresses that contain
the actual file, the remaining space of inode is used. In inode, the starting pointer is
used to point the direct blocks. This means the pointer comprises of the addresses of
the disk blocks, which consist of the file data. To indicate the indirect blocks, the
next few pointers are used. The indirect blocks are of three types, which are single
indirect, double indirect, and triple indirect.
Inode

In the UNIX operating system, every file is indexed with the help of Inode. An
Inode is a block that is created at the time when the file system is designed.

There are various types of information included in Inode:

Attributes of the file, such as timestamps, permissions, details, ownership, etc.
A number of direct blocks, which hold pointers to the first data blocks of the file.
A single indirect pointer, which points to an index block of pointers. If the entire
file cannot be indexed using the direct blocks alone, the single indirect pointer is
used.
A double indirect pointer, which points to a disk block of single indirect pointers.
A triple indirect pointer, which adds one further level of indirection beyond the
double indirect pointer.
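The reach of these pointers is simple arithmetic. A hedged Python example, assuming
4 KB blocks, 4-byte block addresses and 12 direct pointers (ext2-style figures
chosen for illustration, not taken from the text above):

BLOCK = 4 * 1024                 # block size in bytes (assumed)
PTR = 4                          # bytes per block address (assumed)
PER_BLOCK = BLOCK // PTR         # pointers per index block: 1024

direct = 12 * BLOCK              # 12 direct pointers: 48 KB
single = PER_BLOCK * BLOCK       # one single indirect block: 4 MB
double = PER_BLOCK ** 2 * BLOCK  # one double indirect block: 4 GB
triple = PER_BLOCK ** 3 * BLOCK  # one triple indirect block: 4 TB

print((direct + single + double + triple) / 2 ** 40)   # max file size, about 4.0 TiB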
3 Write an elaborate note on RAID and RAID Levels
RAID is a technology that is used to increase the performance and/or reliability of
data storage. The abbreviation stands for either Redundant Array of Independent
Drives or Redundant Array of Inexpensive Disks, which is older and less used. A
RAID system consists of two or more drives working in parallel. These can be hard
discs, but there is a trend to also use the technology for SSD (Solid State Drives).
There are different RAID levels, each optimized for a specific situation. These are
not standardized by an industry group or standardization committee. This explains
why companies sometimes come up with their own unique numbers and
implementations. This article covers the following RAID levels:

RAID 0 – striping
RAID 1 – mirroring
RAID 5 – striping with parity
RAID 6 – striping with double parity
RAID 10 – combining mirroring and striping
The software to perform the RAID-functionality and control the drives can either be
located on a separate controller card (a hardware RAID controller) or it can simply
be a driver. Some versions of Windows, such as Windows Server 2012 as well as
Mac OS X, include software RAID functionality. Hardware RAID controllers cost
more than pure software, but they also offer better performance, especially with
RAID 5 and 6.

RAID-systems can be used with a number of interfaces, including SATA, SCSI,


IDE, or FC (fiber channel.) There are systems that use SATA disks internally, but
that have a FireWire or SCSI-interface for the host system.
Sometimes disks in a storage system are defined as JBOD, which stands for Just a
Bunch Of Disks. This means that those disks do not use a specific RAID level and
acts as stand-alone disks. This is often done for drives that contain swap files or
spooling data.

Below is an overview of the most popular RAID levels:

RAID level 0 – Striping


In a RAID 0 system data are split up into blocks that get written across all the drives
in the array. By using multiple disks (at least 2) at the same time, this offers
superior I/O performance. This performance can be enhanced further by using
multiple controllers, ideally one controller per disk.

Disk storage using RAID 0 striping


RAID 0 – Striping
Advantages of RAID 0
RAID 0 offers great performance, both in read and write operations. There is no
overhead caused by parity controls.
All storage capacity is used, there is no overhead.
The technology is easy to implement.
Disadvantages of RAID 0
RAID 0 is not fault-tolerant. If one drive fails, all data in the RAID 0 array are lost.
It should not be used for mission-critical systems.
Ideal use
RAID 0 is ideal for non-critical storage of data that have to be read/written at a high
speed, such as on an image retouching or video editing station.

If you want to use RAID 0 purely to combine the storage capacity of two drives in
a single volume, consider mounting one drive in the folder path of the other drive.
This is supported in Linux, OS X as well as Windows and has the advantage that a
single drive failure has no impact on the data of the second disk or SSD drive.

RAID level 1 – Mirroring


Data are stored twice by writing them to both the data drive (or set of data drives)
and a mirror drive (or set of drives). If a drive fails, the controller uses either the data
drive or the mirror drive for data recovery and continuous operation. You need at
least 2 drives for a RAID 1 array.

Disk storage using RAID 1 mirroring


RAID 1 – Mirroring
Advantages of RAID 1
RAID 1 offers excellent read speed and a write-speed that is comparable to that of a
single drive.
In case a drive fails, data do not have to be rebuilt; they just have to be copied
to the replacement drive.
RAID 1 is a very simple technology.
Disadvantages of RAID 1
The main disadvantage is that the effective storage capacity is only half of the total
drive capacity because all data get written twice.
Software RAID 1 solutions do not always allow a hot swap of a failed drive. That
means the failed drive can only be replaced after powering down the computer it is
attached to. For servers that are used simultaneously by many people, this may not
be acceptable. Such systems typically use hardware controllers that do support hot
swapping.
Ideal use
RAID-1 is ideal for mission critical storage, for instance for accounting systems. It
is also suitable for small servers in which only two data drives will be used.

RAID level 5 – Striping with parity


RAID 5 is the most common secure RAID level. It requires at least 3 drives but can
work with up to 16. Data blocks are striped across the drives and on one drive a
parity checksum of all the block data is written. The parity data are not written to a
fixed drive, they are spread across all drives, as the drawing below shows. Using
the parity data, the computer can recalculate the data of one of the other data blocks,
should those data no longer be available. That means a RAID 5 array can withstand
a single drive failure without losing data or access to data. Although RAID 5 can be
achieved in software, a hardware controller is recommended. Often extra cache
memory is used on these controllers to improve the write performance.
Disk storage using RAID 5 striping with parity across drives
RAID 5 – Striping with parity
Advantages of RAID 5
Read data transactions are very fast while write data transactions are somewhat
slower (due to the parity that has to be calculated).
If a drive fails, you still have access to all data, even while the failed drive is being
replaced and the storage controller rebuilds the data on the new drive.
Disadvantages of RAID 5
Drive failures have an effect on throughput, although this is still acceptable.
This is complex technology. If one of the disks in an array using 4TB disks fails and
is replaced, restoring the data (the rebuild time) may take a day or longer,
depending on the load on the array and the speed of the controller. If another disk
goes bad during that time, data are lost forever.
Ideal use
RAID 5 is a good all-round system that combines efficient storage with excellent
security and decent performance. It is ideal for file and application servers that have
a limited number of data drives.
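The "recalculate the data" step works because the parity block is the XOR of the
data blocks in a stripe, so any one missing block equals the XOR of the survivors. A
minimal Python sketch:

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))         # blocks must be of equal length
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte                  # byte-wise XOR accumulation
    return bytes(out)

d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"      # data blocks of one stripe
parity = xor_blocks([d1, d2, d3])           # written to the parity drive

rebuilt = xor_blocks([d1, d3, parity])      # drive holding d2 fails: rebuild it
assert rebuilt == d2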

RAID level 6 – Striping with double parity


RAID 6 is like RAID 5, but the parity data are written to two drives. That means it
requires at least 4 drives and can withstand 2 drives dying simultaneously. The
chances that two drives break down at exactly the same moment are of course very
small. However, if a drive in a RAID 5 systems dies and is replaced by a new drive,
it takes hours or even more than a day to rebuild the swapped drive. If another drive
dies during that time, you still lose all of your data. With RAID 6, the RAID array
will even survive that second failure.

Disk storage using RAID 6 striping with double parity across drives


RAID 6 – Striping with double parity
Advantages of RAID 6
Like with RAID 5, read data transactions are very fast.
If two drives fail, you still have access to all data, even while the failed drives are
being replaced. So RAID 6 is more secure than RAID 5.
Disadvantages of RAID 6
Write data transactions are slower than RAID 5 due to the additional parity data that
have to be calculated. In one report I read the write performance was 20% lower.
Drive failures have an effect on throughput, although this is still acceptable.
This is complex technology. Rebuilding an array in which one drive failed can take
a long time.
Ideal use
RAID 6 is a good all-round system that combines efficient storage with excellent
security and decent performance. It is preferable over RAID 5 in file and
application servers that use many large drives for data storage.

RAID level 10 – combining RAID 1 & RAID 0


It is possible to combine the advantages (and disadvantages) of RAID 0 and RAID
1 in one single system. This is a nested or hybrid RAID configuration. It provides
security by mirroring all data on secondary drives while using striping across each
set of drives to speed up data transfers.

Disk storage using RAID 1 + 0, combining striping with mirroring


RAID 10 – Striping and mirroring

Advantages of RAID 10
If something goes wrong with one of the disks in a RAID 10 configuration, the
rebuild time is very fast since all that is needed is copying all the data from the
surviving mirror to a new drive. This can take as little as 30 minutes for drives of
1 TB.
Disadvantages of RAID 10
Half of the storage capacity goes to mirroring, so compared to large RAID 5 or
RAID 6 arrays, this is an expensive way to have redundancy.
What about RAID levels 2, 3, 4 and 7?
These levels do exist but are not that common (RAID 3 is essentially like RAID 5
but with the parity data always written to the same drive). This is just a simple
introduction to RAID-systems. You can find more in-depth information on the
pages of Wikipedia or ACNC.

RAID is no substitute for back-ups!


All RAID levels except RAID 0 offer protection from a single drive failure. A
RAID 6 system even survives 2 disks dying simultaneously. For complete security,
you do still need to back-up the data stored on a RAID system.

That back-up will come in handy if all drives fail simultaneously because of a
power spike.
It is a safeguard when the storage system gets stolen.
Back-ups can be kept off-site at a different location. This can come in handy if a
natural disaster or fire destroys your workplace.
The most important reason to back up multiple generations of data is user error. If
someone accidentally deletes some important data and this goes unnoticed for
several hours, days, or weeks, a good set of back-ups ensures you can still retrieve
those files.
4 (i) Explain in detail about Application-I/O interface.
I/O Interface:
An interface is needed whenever the CPU wants to communicate with I/O
devices. The interface is used to interpret the address generated by the CPU.
Thus, an interface is used to communicate with I/O devices, i.e., to share
information between the CPU and I/O devices; this is called the I/O Interface.

Various applications of I/O Interface:


One application of the I/O interface is that a program can open any file without
any information about the file, i.e., even when the basic details of the file are
unknown. It also allows new devices to be added to the computer system without
disturbing the operating system. It is also used to abstract the differences among
I/O devices by identifying a few general kinds; each general kind is accessed
through a standardized set of functions, which is called an interface.
Each operating system has its own category of interfaces for device drivers. A
given device may ship with multiple device drivers, for instance drivers for
Windows, Linux, AIX and Mac OS. Devices vary along the dimensions
illustrated in the following table:

S.No.  Basis                     Alteration                          Example
1.     Mode of data transfer     character or block                  terminal, disk
2.     Method of accessing data  sequential or random                modem, CD-ROM
3.     Transfer schedule         synchronous or asynchronous         tape, keyboard
4.     Sharing method            dedicated or sharable               tape, keyboard
5.     Speed of device           latency, seek time, transfer rate,
                                 delay between operations
6.     I/O direction             read only, write only, read-write   CD-ROM, graphics
                                                                     controller, disk
Character-stream or Block:
Both character-stream and block devices transfer data in bytes. The difference is
that a character-stream device transfers bytes one after another in a linear
fashion, whereas a block device transfers a whole block of bytes as a single unit.
Sequential or Random Access:
A sequential device transfers data in a fixed order determined by the device,
whereas a random-access device lets the user instruct it to seek to any of its data
storage locations.
Synchronous or Asynchronous:
A synchronous device performs data transfers with predictable response times,
in coordination with other aspects of the system. An asynchronous device
exhibits irregular or unpredictable response times that are not coordinated with
other computer events.
Sharable or Dedicated:
A sharable device can be used concurrently by several processes or threads,
whereas a dedicated device cannot.
Speed of Operation:
Device speeds range from a few bytes per second to a few gigabytes per second.
Read-write, read only, write-only:
Different devices support different operations: some support both input and
output, while others support only one data-transfer direction, either input or
output.
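As a concrete illustration of these categories, the following POSIX C sketch
reads from a character device and a block device through the same standardized
open/read interface (the device paths are typical Linux names and are assumed
to exist; reading /dev/sda usually requires root privileges):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    unsigned char buf[512];

    /* /dev/urandom is a character device: bytes arrive as a stream. */
    int cfd = open("/dev/urandom", O_RDONLY);
    if (cfd >= 0) {
        ssize_t n = read(cfd, buf, sizeof buf);  /* same call as below */
        printf("character device: read %zd bytes\n", n);
        close(cfd);
    }

    /* /dev/sda is a block device: data is transferred in fixed-size
       blocks and supports random access via lseek(). */
    int bfd = open("/dev/sda", O_RDONLY);
    if (bfd >= 0) {
        lseek(bfd, 4096, SEEK_SET);              /* seek: random access */
        ssize_t n = read(bfd, buf, sizeof buf);
        printf("block device: read %zd bytes at offset 4096\n", n);
        close(bfd);
    }
    return 0;
}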
(ii) Explain the services provided by kernel I/O subsystems
The kernel allows hardware and software components to communicate with one
another. A kernel uses device drivers to handle all hardware resources (such as
I/O, memory, and cryptography), arbitrates resource disputes between processes,
and optimizes the use of shared resources such as CPU and cache usage, file
systems, and network sockets.

When most computers turn on, the kernel is one of the first programs to load. It
takes care of the rest of the startup process and handles requests for memory,
peripherals, and software input/output, transforming them into data-processing
instructions for the CPU.

Kernel I/O Subsystem


With the computer system, we can communicate via input and output (I/O)
devices. The transport of data between RAM and various I/O peripherals is
referred to as I/O. We can enter data via input devices such as keyboards,
mouse, card readers, scanners, voice recognition systems, and touch screens. We
can acquire information from the computer by employing output devices like
monitors, printers, plotters, and speakers.

The processor is not directly connected to these devices. However, the data
exchanges between them are managed through an interface. This interface
converts system bus signals to and from a format appropriate to the provided
device. I/O registers are used to communicate between these external devices
and the processor.

The kernel provides many I/O services, several of which rely on the hardware and
device-driver infrastructure: caching, scheduling, spooling, device reservation, and
error handling.

1. Scheduling
The term "schedule" refers to determining an excellent sequence to perform a
series of I/O requests.
Scheduling can increase the system's overall performance, distribute
deviceaccess permissions evenly among all processes, and reduce average wait
times, response times, and turnaround times for I/O to complete.
When an application makes a blocking I/O system call, the request is placed in
the wait queue for that device, maintained by OS engineers.
2. Buffering
A buffer is a section of main memory used to temporarily store data sent
between two devices or between a device and an application. Buffering:
Assists in dealing with device speed discrepancies.
Assists in dealing with device transfer-size mismatches.
Preserves "copy semantics": data is first copied from user application memory
into kernel memory, and the kernel copy is then sent to the device. This
prevents an application from altering the contents of a buffer while it is being
written, as the sketch below illustrates.
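A minimal sketch of copy semantics in practice (POSIX C; the file name is
illustrative): because write() copies the user buffer into kernel memory before
returning, the application can safely reuse its buffer immediately afterwards.

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    char buf[64];
    int fd = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return 1;

    strcpy(buf, "first record\n");
    /* write() copies buf into a kernel buffer; the data queued for the
       device is the version that existed at the time of the call. */
    write(fd, buf, strlen(buf));

    /* Overwriting buf now does NOT change what was written above:
       that is copy semantics. */
    strcpy(buf, "second record\n");
    write(fd, buf, strlen(buf));

    close(fd);
    return 0;
}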
3. Caching
It involves storing a replica of data in a location that is easier to reach than the
original.
When you request a file from a Web page, for example, it is stored on your hard
disc in a cache subdirectory under your browser's directory. When you return to
a page you've recently visited, the browser can retrieve files from the cache
rather than the actual server, saving you time and reducing network traffic.
The distinction between a cache and a buffer is that a cache holds a copy of a
data item that also exists elsewhere, whereas a buffer may hold the only existing
copy of a data item.
4. Spooling
A spool is a buffer that holds jobs for a device until the device is ready to accept
them.
Spooling treats the disk as a massive buffer that can hold as many jobs as
needed until the output devices are ready to accept them.
If a device, such as a printer, cannot handle interleaved data streams and can
serve only one request at a time, spooling retains the output for it.
Spooling also allows a user to view specific data streams and, if wanted, delete
them (for example, jobs waiting in a printer queue).
5. Error Handling
Protected memory operating systems can safeguard against a wide range of
hardware and application faults, ensuring that each tiny mechanical glitch does
not result in a complete system failure.
Devices and I/O transfers can fail for various reasons, including transitory
causes, such as when a network gets overcrowded, and permanent reasons, such
as when a disc controller fails.
6. I/O Protection
User programs may try to disrupt normal operation, either accidentally or on
purpose, by issuing illegal I/O instructions. To prevent this, all I/O instructions
are defined to be privileged, and user programs must perform I/O through
system calls. Memory-mapped I/O and I/O port memory locations must both be
protected.
5 Explain about various levels of directory structure?
A Directory is the collection of the correlated files on the disk. In simple words, a
directory is like a container which contains file and folder. In a directory, we can
store the complete file attributes or some attributes of the file. A directory can be
comprised of various files. With the help of the directory, we can maintain the
information related to the files.

Directory Structure in Operating System


To take the advantages of various file systems on the different operating systems,
we can divide the hard disk into multiple partitions, which are of different sizes.
Partitions are known as minidisks or volumes.

There should be at least one directory that must be present in each partition.
Through it, we can list all the files of the partition. In the directory for each file,
there is a directory entry, which is maintained, and in that directory entry, all the
information related to the file is stored.

There are various types of information which are stored in a directory:

Name
Type
Location
Size
Position
Protection
Usage
Mounting
Name: - Name is the name of the directory, which is visible to the user.
Type: - Type of a directory means what type of directory is present such as single-
level directory, two-level directory, tree-structured directory, and Acyclic graph
directory.
Location: - Location is the location of the device where the header of a file is
located.
Size: - Size means the number of words/blocks/bytes in the file.
Position: - Position means the position of the next-read pointer and the next-write
pointer.
Protection: - Protection means access control on the read/write/delete/execute.
Usage: - Usage means the time of creation, modification, and access, etc.
Mounting: - Mounting means if the root of a file system is grafted into the existing
tree of other file systems.
Operations on Directory
The various types of operations on the directory are:

Creating
Deleting
Searching
List a directory
Renaming
Link
Unlink
Creating: - In this operation, a directory is created. The name of the directory should
be unique.
Deleting: - If there is a file that we don’t need, then we can delete that file from the
directory. We can also remove the whole directory if the directory is not required.
An empty directory can also be deleted. An empty directory is a directory that only
consists of dot and dot-dot.
Searching: - Searching operation means, for a specific file or another directory, we
can search a directory.
List a directory: - In this operation, we can retrieve all the files list in the directory.
And we can also retrieve the content of the directory entry for every file present in
the list.
If we want to read the list of all files in a directory, the directory must first be
opened; after reading, it must be closed so that the internal table space can be
freed up.

Types of Directory Structure


There are various types of directory structure:

Single-Level Directory
Two-Level Directory
Tree-Structured Directory
Acyclic Graph Directory
General-Graph Directory
Single-Level Directory: - Single-Level Directory is the easiest directory structure.
There is only one directory in a single-level directory, and that directory is called a
root directory. In a single-level directory, all the files are present in one directory
that makes it easy to understand. In this, under the root directory, the user cannot
create the subdirectories.
Advantages of Single-Level Directory
The advantages of the single-level directory are:

The implementation of a single-level directory is so easy.


In a single-level directory, if all the files have a small size, then due to this, the
searching of the files will be easy.
In a single-Level directory, the operations such as searching, creation, deletion, and
updating can be performed.
Disadvantages of Single-Level Directory
The disadvantages of Single-Level Directory are:

If the size of the directory is large in a single-level directory, then searching
will be tough.
In a single-level directory, we cannot group similar types of files.
Another disadvantage of a single-level directory is the possibility of name
collisions, because two files cannot have the same name.
The task of choosing a unique file name is a little bit complex.
Two-Level Directory
Two-Level Directory is another type of directory structure. In this, it is possible to
create an individual directory for each of the users. There is one master node in the
two-level directory that includes an individual directory for every user. At the second
level of the directory, there is a different directory present for each of the users.
Without permission, no user can enter into the other user’s directory.



Characteristics of Two-Level Directory
The characteristics of the two-level directory are:

In a two-level directory, different users may have files with the same name.

There is a pathname for each file, such as /User-name/directory-name/.
In a two-level directory, a specific user cannot group related files together into
further subdirectories.
In a two-level directory, searching is more effective because only one user's
list needs to be traversed.
Advantages of Two-Level Directory
The advantages of the two-level directory are:

In the two-level directory, different users can have the same file name and also
directory name.
Because of using the user-grouping and pathname, searching of files are quite easy.
Disadvantages of Two-Level Directory
The disadvantages of the two-level directory are:

In a two-level directory, one user cannot share the file with another user.
Another disadvantage with the two-level directory is it is not scalable.
Tree-Structured Directory
A Tree-structured directory is another type of directory structure in which the
directory entry may be a sub-directory or a file. The tree-structured directory
reduces the limitations of the two-level directory. We can group the same type of
files into one directory.

In a tree-structured directory, each user has their own directory, and any user is
not allowed to enter into the directory of another user. Although the user can read
the data of root, the user cannot modify or write it. The system administrator only
has full access to the root directory. In this, searching is quite effective and we use
the current working concept. We can access the file by using two kinds of paths,
either absolute or relative.



Advantages of tree-structured directory

The advantages of the tree-structured directory are:

The tree-structured directory is very scalable.


In the tree-structured directory, the chances of collision are less.
In the tree-structure directory, the searching is quite easy because, in this, we can
use both types of paths, which are the absolute path and relative path.
Disadvantages of Tree-Structure Directory

The disadvantages of tree-structure directory are:

In the tree-structure directory, the files cannot be shared.


The tree-structure directory is not very efficient because, to access a file, we may
have to traverse multiple directories.
Another disadvantage of the tree-structure directory is that each file does not fit into
the hierarchal model. We have to save the files into various directories.
Acyclic-Graph Directory
In the tree-structure directory, the same files cannot exist in the multiple directories,
so sharing the files is the main problem in the tree-structure directory. With the help
of the acyclic-graph directory, we can provide the sharing of files. In the acyclic-
graph directory, more than one directory can point to a similar file or subdirectory.
We can share those files among the two directory entries.

With the help of aliases, and links, we can create this type of directory graph. We
may also have a different path for the same file. Links may be of two kinds, which
are hard link (physical) and symbolic (logical).

If we delete the files in acyclic graph structures, then


In the hard link (physical) case, we can remove the actual files only if all the
references to the file are deleted.
In the symbolic link (logical) case, we just delete the file, and there is only a
dangling point that is left.
Advantages of Acyclic-Graph Directory
The advantages of the acyclic-graph directory are:

In the acyclic-graph directory, the sharing of files is possible.


In the acyclic-graph directory, because of different-different paths, searching is
easy.
Disadvantages of Acyclic-Graph Directory
The disadvantages of acyclic-graph directory are:

If the files are shared through linking, there may be a problem in the case of
deleting.
If we are using softlink, then in this case, if the file is deleted then there is only a
dangling pointer which is left.
If we are using hardlink, in this case, when we delete a file, then we also have to
remove all the reference connected with it.
General-Graph Directory
The General-Graph directory is another vital type of directory structure. In this type
of directory, cycles are allowed: a directory can be reached through more than one
parent directory.

The main issue in the general-graph directory is to calculate the total space or size,
taken by the directories and the files.



Advantages of General-Graph directory
The advantages of general-graph directory are:

The General-Graph directory is more flexible than the other directory structure.
Cycles are allowed in the general-graph directory.
Disadvantages of General-Graph Directory
The disadvantages of general-graph directory are:

In general-graph directory, garbage collection is required.


General-graph directory is more costly, among other directory structures.
Directory Implementation
There are various types of algorithm which we use for directory implementation.
The selection of a suitable algorithm for directory implementation is an essential
task because it directly affects system performance.

We can classify the directory implementation algorithms based on the data
structure used. Mostly, we use two types of algorithms:

Linear List
Hash Table
Linear List: - The linear list is the most straightforward algorithm used for
directory implementation. In this algorithm, we keep all the files in a directory in
a singly linked list. Every entry comprises a pointer to the data blocks allocated
to that file and a pointer to the next file in the directory.
Characteristics of Linear List

The linear list is simple to program but time-consuming to use, because finding a
file may require searching through the entries one by one.

2. Hash Table: - The singly linked (linear list) implementation of directories has
the disadvantage of slow searching. To remove this drawback, we use another
method called a hash table. In this method, the hash table is used together with
the linked list.

In a directory, for every file, there is a key-value pair that is generated, and when
the key-value pair is generated, then we store it into the hash table. With the help of
the hash function on the file name, we can determine the key and key points to the
respective file that are stored in a directory.

In a linear list, the task of searching is difficult because, in a linear list, we have to
search the entire list, but in hash table approach, there is no requirement of
searching the entire list. So, in hash table searching is quite efficient. With the help
of the key, we only have to check the entries of the hash table, and when we get the
entry, then by using the value, we will fetch the corresponding file.
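A minimal C sketch of a hash-table directory, assuming a fixed table size and
linear probing for collisions (all names, sizes, and fields here are illustrative):

#include <stdio.h>
#include <string.h>

#define TABLE_SIZE 64

struct dir_entry {
    char name[32];        /* file name: the hash key                */
    int  first_block;     /* e.g., pointer to the first data block  */
    int  in_use;
};

static struct dir_entry table[TABLE_SIZE];

/* Simple string hash over the file name. */
static unsigned hash(const char *name) {
    unsigned h = 5381;
    while (*name)
        h = h * 33 + (unsigned char)*name++;
    return h % TABLE_SIZE;
}

/* Insert with linear probing on collision. */
static int dir_add(const char *name, int first_block) {
    for (unsigned i = 0, h = hash(name); i < TABLE_SIZE; i++) {
        struct dir_entry *e = &table[(h + i) % TABLE_SIZE];
        if (!e->in_use) {
            strncpy(e->name, name, sizeof e->name - 1);
            e->first_block = first_block;
            e->in_use = 1;
            return 0;
        }
    }
    return -1;  /* directory full */
}

/* Lookup touches only a few slots instead of scanning a whole list. */
static int dir_lookup(const char *name) {
    for (unsigned i = 0, h = hash(name); i < TABLE_SIZE; i++) {
        struct dir_entry *e = &table[(h + i) % TABLE_SIZE];
        if (e->in_use && strcmp(e->name, name) == 0)
            return e->first_block;
    }
    return -1;  /* not found */
}

int main(void) {
    dir_add("notes.txt", 17);
    dir_add("report.doc", 42);
    printf("notes.txt starts at block %d\n", dir_lookup("notes.txt"));
    return 0;
}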



UNIT-5
PART-A
S.No Questions
1. What is meant by Denial of Service?
Prevents or inhibits the normal use or management of communication facilities.
Another form of service denial is the disruption of an entire network, either by
disabling the network or overloading it with messages so as to degrade
performance
2. Define Security service.
A processing or communication service that enhances the security of the data
processing systems and the information transfers of an organization. The services
are intended to counter security attacks, and they make use of one or more security
mechanisms to provide the service.
3. Define Threat and attack.
Threat is a possible danger that might exploit a vulnerability to breach security and
thus cause possible harm.
Attack is any attempt to destroy, expose, alter, disable, steal or gain unauthorized

access to or make unauthorized use of an asset


4. Compare Substitution and Transposition techniques.
Substitution techniques: A substitution technique is one in which the letters of
plaintext are replaced by other letters, by numbers, or by symbols. Ex: Caesar
cipher.
Transposition techniques: A transposition technique is one in which a different
kind of mapping is achieved by performing some sort of permutation on the
plaintext letters. Ex: Rail Fence cipher.
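As a quick illustration of the Caesar cipher named above, here is a minimal C
sketch (shift of 3, letters only; the message is made up):

#include <stdio.h>
#include <ctype.h>

/* Caesar cipher: substitute each letter with the letter 'shift'
   positions later in the alphabet. */
void caesar(char *text, int shift) {
    for (; *text; text++) {
        if (isupper((unsigned char)*text))
            *text = 'A' + (*text - 'A' + shift) % 26;
        else if (islower((unsigned char)*text))
            *text = 'a' + (*text - 'a' + shift) % 26;
    }
}

int main(void) {
    char msg[] = "attack at dawn";
    caesar(msg, 3);               /* encrypt */
    printf("%s\n", msg);          /* prints: dwwdfn dw gdzq */
    caesar(msg, 23);              /* decrypt: shift by 26 - 3 */
    printf("%s\n", msg);
    return 0;
}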
5. What is meant by Message Authentication?
Message Authentication is a mechanism or service used to verify the integrity of
a message. Message authentication assures that data received are exactly as sent
(i.e., they contain no modification, insertion, deletion, or replay) and that the
purported identity of the sender is valid.

6. Define Digital signature.


A digital signature is an authentication mechanism that enables the creator of a
message to attach a code that acts as a signature. Typically the signature is formed
by taking the hash of the message and encrypting the message with the creator’s
private key. The signature guarantees the source and integrity of the message
7. What are the properties of Digital Signature?
The digital signature must have the following properties:
It must verify the author and the date and time of the signature.
It must authenticate the contents at the time of the signature.
It must be verifiable by third parties, to resolve disputes

8. State three requirements for authentication?


Disclosure: release of message content.
Traffic analysis: discovery of the pattern of traffic between parties.
Content modification: changes to the contents of a message.
9. What are the methods of Revocation of Access Rights ?
Capability List – Scheme required to locate capability in the system before
capability can be revoked

Reacquisition – capabilities are periodically deleted from each domain; a
process must reacquire them, and reacquisition is denied if access has been revoked

Back-pointers – set of pointers from each object to all capabilities of
that object (Multics)

Indirection – capability points to global table entry which points
to object – delete entry from global table, not selective (CAL)

Keys – unique bits associated with capability, generated when
capability created

10. What is Language-Based Protection?


Language implementation can provide software for protection enforcement when
automatic hardware-supported checking is unavailable
Interpret protection specifications to generate calls on whatever protection system is
provided by the hardware and the operating system
PART - B
1. Explain in detail the design principles, kernel modules,
process management, scheduling in LINUX system.
The main purpose of a computer is to run a predefined sequence of instructions,
known as a program. A program under execution is often referred to as a process.
Most special-purpose computers are meant to run a single process, but a
sophisticated system such as a general-purpose computer is intended to run many
processes simultaneously. Any kind of process requires hardware resources such as
memory, processor time, storage space, etc. In a general-purpose computer
running many processes simultaneously, we need a middle layer to manage the
distribution of the hardware resources of the computer efficiently and fairly among
all the various processes running on the computer. This middle layer is referred to
as the kernel. Basically, the kernel virtualizes the common hardware resources of
the computer to provide each process with its own virtual resources. This makes
each process seem as if it is the sole process running on the machine. The kernel is
also responsible for preventing and mitigating conflicts between different
processes. This is represented schematically below:

Figure: Virtual Resources for each Process

The Core Subsystems of the Linux Kernel are as follows:

The Process Scheduler


The Memory Management Unit (MMU)
The Virtual File System (VFS)
The Networking Unit
Inter-Process Communication Unit

Figure: The Linux Kernel

Here we focus only on the first three important subsystems of the Linux kernel.
The basic functioning of each of these three subsystems is elaborated below:
The Process Scheduler: This kernel subsystem is responsible for fairly distributing
the CPU time among all the processes running on the system simultaneously.
The Memory Management Unit: This kernel sub-unit is responsible for proper
distribution of the memory resources among the various processes running on the
system. The MMU does more than just simply provide separate virtual address
spaces for each of the processes.
The Virtual File System: This subsystem is responsible for providing a unified
interface to access stored data across different filesystems and physical storage
media.
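As a small illustration of the process scheduler's role, the following POSIX C
sketch (illustrative only; error handling omitted) lowers its own priority hint
with nice() so the scheduler favors other processes, then forks a child that runs
under the same scheduler:

#include <stdio.h>
#include <unistd.h>
#include <sys/resource.h>

int main(void) {
    /* The scheduler decides how CPU time is interleaved among processes.
       A process can only hint at its priority, e.g. with nice(). */
    printf("nice value before: %d\n", getpriority(PRIO_PROCESS, 0));
    int newnice = nice(5);   /* lower our priority so others are favored */
    printf("nice value after:  %d\n", newnice);

    if (fork() == 0) {
        /* The child is a separate process, scheduled independently
           by the same kernel scheduler. */
        printf("child pid %d running\n", getpid());
        _exit(0);
    }
    return 0;
}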
2. Explain different methods used to solve the problem of security at the operating
system level
The term operating system (OS) security refers to practices and measures that can
ensure the confidentiality, integrity, and availability (CIA) of operating systems.

The goal of OS security is to protect the OS from various threats, including


malicious software such as worms, trojans and other viruses, misconfigurations,
and remote intrusions.

OS security typically involves the implementation of control techniques that can


protect your assets from unauthorized modification and deletion or theft.

The most common techniques used to protect operating systems include the use of
antivirus software and other endpoint protection measures, regular OS patch
updates, a firewall for monitoring network traffic, and enforcement of secure access
through least privileges and user controls.


What are Common OS Security Threats?


Here are a few of the most common threat vectors that can affect an operating
system.

Malware
Malware is short for malicious software, which encompasses a range of attack
vectors such as viruses, worms, trojans, and rootkits. Malware is injected into a
system without the owner’s consent, or by masquerading as legitimate software, with
the objective of stealing, destroying or corrupting data, or compromising the device.

Malware can also replicate, allowing it to spread further in a corporate network and
beyond. Malware attacks often go undetected by the target user, allowing for the
quiet extraction of sensitive data. In other cases attackers silently
“herd” compromised devices into botnets and use them for criminal activities
such as distributed denial of service (DDoS) attacks.
Denial of Service Attacks
A Denial of Service (DoS) attack is intended to clog a system with fake requests so
it becomes overloaded, and eventually stops serving legitimate requests. Some DoS
attacks, in addition to overwhelming a system’s resources, can cause damage to the
underlying infrastructure.

Modern DoS attacks are waged by a distributed network of thousands or millions of


bots (automated agents)—this is known as distributed denial of service (DDoS), and
can be extremely difficult to mitigate due to its huge scale.

An example of a DoS attack is the repeated use of system requests in a tight loop, or
a “syn flood” in which the attacker sends a large number of network requests,
requiring the server to acknowledge each one, and exhausting its resources.

Network Intrusion
Network intrusion occurs when an individual gains access to a system for improper
use. There are several types of network intrusion depending on the type of intruder:

Careless insiders—authorized users who neglect to follow security policies or best


practices, causing exposure of sensitive assets.
Malicious insiders—authorized users who misuse their privileges with malicious
intent.
Masqueraders—external individuals who pose as legitimate users, exploiting the
account or credentials of an authorized user to gain access to the system.
Clandestine users—attackers who penetrate the system by gaining supervisory
control and going around access controls.
Buffer Overflow
The main function of a buffer is to temporarily store data. Each buffer has a capacity
of data it can hold. During a buffer overflow attack, the buffer or other temporary
data stores are overflowing with data. When the buffer overflows, the program
attempting to write the data may overwrite other memory locations containing
important information.

Threat actors look for buffer overflow vulnerabilities, which they can exploit to
inject scripts that help them hijack the system or crash it.

How Can You Ensure Operating System Security?


Here are a few ways you can improve operating system security in your

organization.

Authentication Measures


Authentication involves matching an identified user with the programs or data they
are allowed to access. All operating systems have controls that can be used to verify
that users who run a particular program are authorized to do so.

You can use the following techniques to authenticate users at the operating system
level:

Security keys: keys are provided by a key generator, usually in the form of a
physical dongle. The user must insert the key into a slot in the machine to log in.
Username-password combinations: The user enters a username that is registered
with the OS, along with a matching password.
Biometric signatures: The user scans a physical attribute, such as a fingerprint or
retina, to identify themselves.
Multi-factor authentication: Modern authentication systems use multiple methods to
identify a user, combining something the user knows (credentials), something they
own (such as a mobile device), and/or a physical characteristic (biometrics).
Using One-Time Passwords
One-time passwords offer an additional layer of security when combined with
standard authentication measures. Users must enter a unique password generated
each time they log in to the system. A one-time password cannot be reused.

Examples of one-time passwords include:

Network passwords: An application sends a one-time password to the users via a


registered email address or mobile phone number. The user must enter this
password to log in to the computer.
Random numbers: The user receives a card listing numbers that correspond to
matching letters. The OS requires the user to enter the numbers that match a set of
randomly generated letters.
Secret keys: The user receives a device that generates secret keys. The user then
enters the secret key into the OS system, which identifies the user
credentialsassociated with the key.
Virtualization
Virtualization enables you to abstract software from hardware, effectively
separating the two. The main advantage of virtualization is that it introduces a high
level of efficiency and flexibility, while providing greater security coverage. There
are many types of virtualization, including desktop, application, network, server,
network, storage, and OS virtualization.

Operating system virtualization is a form of sandboxing.

What is OS virtualization?
OS virtualization enables you to run multiple isolated user environments using the same
OS kernel. The technology that creates and enables this type of isolation is called a
“hypervisor”, which serves as a layer located between the device and the virtualized
resources.

The hypervisor manages the virtual machines (VMs) running on the device
(typically 2-3 VMs). Each VM is used for each user or each security zone. There are several
types of VMs that can run alongside each other. Here are the three main categories:

Fully locked-down VM

Should be used to provide access to sensitive data and corporate systems, such as IT
environments, payment systems, and sensitive customer data.

Unlocked, open VM

Should be used to provide unrestricted access to non-corporate resources. For


example, full web browsing sessions, installation of applications, and use of
external devices.

Semi-locked-down VM

Should be used to provide access to standard corporate applications and resources,


such as office documents, company email, and internal services.

Advantages of OS virtualization

Each type of VM is limited to the actions allowed by design. Any further action is
restricted. This keeps the environment secure. The hypervisor runs below the OS of
the device and splits the device into multiple VMs running locally with their own
OS—effectively isolating users.

Because the users are isolated, the devices remain secure. This ensures that
employees and third parties can gain access to company resources without
endangering company resources.

Another major advantage of OS virtualization is that none of the virtualized


environments can directly access the network. Instead, connectivity is enabled via
an invisible, virtualized network layer that implements network segmentation
directly on the endpoint device.

Testing and Validating Operating System Security


Securing an operating system or any software is an ongoing process that requires
constant testing. Depending on the risk and priority of a system, security posture
tests may take place on a monthly, weekly or daily basis. Here are a few testing
methods you can use.

Vulnerability Assessment
Vulnerability assessment involves testing for weaknesses that may be lying
undetected in an operating system. Identifying vulnerabilities allows you to identify
possible vectors for an attack so you can better understand the risk to your system.

As part of a continuous process, vulnerability assessment attempts to stay on top of


newly exposed vulnerabilities by locating, classifying and prioritizing them
according to severity and impact. This process usually combines manual tasks with
automated tools.

The following are some of the typical methods used for OS vulnerability
assessment:

Scanning for known vulnerabilities


Scanning the software and applications on an operating system
Scanning for malware
Scanning for missing patches and updates
Patch testing
Port scanning
Penetration Testing
Penetration testing, or pentesting, is a security assessment strategy that uses
vulnerability assessment to identify how an attacker may successfully exploit
vulnerabilities in the system. The penetration testing method involves simulating an
exploit to evaluate system security.

Penetration testing helps discover vulnerabilities beyond the obvious, and seeks to
identify the methods an attacker may use to exploit them. Security teams
canleverage the insights provided by pentesting to put in place effective security
measures.

There are three types of penetration testing, each of which provides different types
of insights into operating system security and potential for exploitation:

White Box: The penetration tester has full technical knowledge of the system being
tested.
Grey Box: The pentester has limited technical knowledge of the system being
tested.
Black Box: The pentester doesn’t have any prior technical knowledge of the system
being tested.
Improving Operating System Security with Hysolate
Hysolate is a full OS isolation solution for Windows10 or Windows 11, splitting
your endpoint into a more secure corporate zone and a less secure zone for daily
tasks. This means that one OS can be reserved for corporate access, with strict
networking and security policies, and the other can be a more open zone for
accessing untrusted websites and applications.

Hysolate sits on the user endpoint so provides a good UX, but is managed by a
granular management console via the cloud. This means that admins can monitor
and control exactly what their team is using the isolated OS environment for, and it
can easily be wiped if threats are detected. Hysolate is easy to deploy, and can be
scaled to your entire team, not just the technical members. Hysolate isolates
applications, websites, documents and peripherals, giving you improved security
and manageability.

3. What is protection? Explain principles and goals of protection.
Goals of Protection
Obviously to prevent malicious misuse of the system by users or programs. See
chapter 15 for a more thorough coverage of this goal.
To ensure that each shared resource is used only in accordance with system
policies, which may be set either by system designers or by system administrators.
To ensure that errant programs cause the minimal amount of damage possible.
Note that protection systems only provide the mechanisms for enforcing policies
and ensuring reliable systems. It is up to administrators and users to implement
those mechanisms effectively.
14.2 Principles of Protection
The principle of least privilege dictates that programs, users, and systems be given
just enough privileges to perform their tasks.
This ensures that failures do the least amount of harm and allow the least amount
of harm to be done.
For example, if a program needs special privileges to perform a task, it is better to
make it a SGID program with group ownership of "network" or "backup" or some
other pseudo group, rather than SUID with root ownership. This limits the amount
of damage that can occur if something goes wrong.
Typically each user is given their own account, and has only enough privilege to
modify their own files.
The root account should not be used for normal day to day activities - The System
Administrator should also have an ordinary account, and reserve use of the root
account for only those tasks which need the root privileges
14.3 Domain of Protection
A computer can be viewed as a collection of processes and objects ( both HW &
SW ).
The need to know principle states that a process should only have access to those
objects it needs to accomplish its task, and furthermore only in the modes for which
it needs access and only during the time frame when it needs access.
The modes available for a particular object may depend upon its type.
14.3.1 Domain Structure
A protection domain specifies the resources that a process may access.
Each domain defines a set of objects and the types of operations that may be
invoked on each object.
An access right is the ability to execute an operation on an object.
A domain is defined as a set of < object, { access right set } > pairs, as shown
below. Note that some domains may be disjoint while others overlap.

Figure 14.1 - System with three protection domains.

The association between a process and a domain may be static or dynamic.


If the association is static, then the need-to-know principle requires a way of
changing the contents of the domain dynamically.
If the association is dynamic, then there needs to be a mechanism for domain
switching.
Domains may be realized in different fashions - as users, or as processes, or as
procedures. E.g. if each user corresponds to a domain, then that domain defines the
access of that user, and changing domains involves changing user ID.
4. Explain in detail about iOS and Android – Architecture and SDK Framework in
Detail
The architecture of iOS is a layered architecture. At the uppermost level, iOS
works as an intermediary between the underlying hardware and the apps you
make. Apps do not communicate with the underlying hardware directly.
Apps talk with the hardware through a collection of well defined system interfaces.
These interfaces make it simple to write apps that work constantly on devices
having various hardware abilities.
Lower layers gives the basic services which all application relies on and higher
level layer gives sophisticated graphics and interface related services.
Apple provides most of its system interfaces in special packages called frameworks.
A framework is a directory that holds a dynamic shared library (.a files) and
related resources such as header files, images, and helper apps required to
support that library. Every layer has a set of frameworks which the developer
uses to construct applications.

iOS Architecture

1. Core OS Layer:


The Core OS layer holds the low level features that most other technologies are
built upon.

Core Bluetooth Framework.


Accelerate Framework.
External Accessory Framework.
Security Services framework.
Local Authentication framework.
64-bit support from iOS 7 enables 64-bit app development and allows
applications to run faster.

2. Core Services Layer
Some of the Important Frameworks available in the core services layers are
detailed:

Address book framework – Gives programmatic access to a contacts database of


user.
Cloud Kit framework – Gives a medium for moving data between your app and
iCloud.
Core data Framework – Technology for managing the data model of a Model View
Controller app.
Core Foundation framework – Interfaces that gives fundamental data management
and service features for iOS apps.
Core Location framework – Gives location and heading information to apps.
Core Motion Framework – Access all motion based data available on a device.
Using this core motion framework Accelerometer based information can be
accessed.
Foundation Framework – Objective-C wrappers for many of the features found
in the Core Foundation framework.
Healthkit framework – New framework for handling health-related information of
user
Homekit framework – New framework for talking with and controlling connected
devices in a user’s home.
Social framework – Simple interface for accessing the user’s social media accounts.
StoreKit framework – Gives support for the buying of content and services from
inside your iOS apps, a feature known as In-App Purchase.

3. Media Layer: Graphics, Audio and Video technology is enabled using the Media
Layer.
Graphics Framework:

UIKit Graphics – It describes high level support for designing images and also used
for animating the content of your views.
Core Graphics framework – It is the native drawing engine for iOS apps and gives
support for custom 2D vector and image based rendering.
Core Animation – It is an initial technology that optimizes the animation experience
of your apps.
Core Image – gives advanced support for manipulating video and still images in
a nondestructive way.
OpenGl ES and GLKit – manages advanced 2D and 3D rendering by hardware
accelerated interfaces
Metal – It permits very high performance for your sophisticated graphics rendering
and computation works. It offers very low overhead access to the A7 GPU.

Audio Framework:

Media Player Framework – It is a high-level framework which gives easy access
to a user’s iTunes library and support for playing playlists.
AV Foundation – It is an Objective C interface for handling the recording and
playback of audio and video.
OpenAL – is an industry standard technology for providing audio.

Video Framework

AV Kit – framework gives a collection of easy to use interfaces for presenting


video.
AV Foundation – gives advanced video playback and recording capability.
Core Media – framework describes the low level interfaces and data types for
operating media.
Cocoa Touch Layer

EventKit framework – gives view controllers for showing the standard system
interfaces for seeing and altering calendar related events
GameKit Framework – implements support for Game Center which allows users
share their game related information online
iAd Framework – allows you deliver banner-based advertisements from your app.
MapKit Framework – gives a scrollable map that you can include into your user
interface of app.
PushKit Framework – provides registration support for VoIP apps.
Twitter Framework – supports a UI for generating tweets and support for creating
URLs to access the Twitter service.
UIKit Framework – gives vital infrastructure for applying graphical, event-driven
apps in iOS. Some of the important functions of the UIKit framework:
– Multitasking support
– Basic app management and infrastructure
– User interface management
– Support for touch and motion events
– Cut, copy and paste support, and many more
5. How Digital signature differs from authentication protocols?
Authentication and digital signatures are typically considered two different things:
authentication is about ‘logging in’, while digital signatures are used for expressing
your consent with documents, approving transactions…

From a functional point of view there is indeed a clear distinction. Authentication is


about verifying that the user is who he claims to be. A digital signature is about
protecting the integrity of certain data and asserting that the data originated from a
certain user.

There is a partial overlap in security properties: both a digital signature and


authentication are about assuring the user is who (s)he claims to be. Authentication
also proves liveness: the user (i.e., the credential) is present during the
authentication process. Digital signatures, apart from providing integrity of the
signed data, provide non-repudiation: the user cannot claim – after the facts – that
(s)he never put a digital signature.

Property           Description                                            Authentication   Digital Signature
User authenticity  The user is who (s)he claims to be.                    Yes              Yes
Liveness           The user is present during the interaction             Yes              No
                   with the verifier.
Data integrity     The signed data, protected with a digital signature,   No               Yes
                   cannot be altered without invalidating the signature.
Non-repudiation    The user cannot deny afterwards that (s)he put         No               Yes
                   the digital signature.
When looking at the underlying technology there is a sweet spot where both
authentication and digital signatures provide the same properties: liveness, asserting
the link with the user, linking with data, and non-repudiation.
Digital Signature Technology
Digital signature technology is based on public key cryptography. A private key is
used to sign data, while the corresponding public key can be used to verify a
signature. It should be infeasible to derive the private key from the public key or
from signatures if the signature scheme is to be considered secure.

This contrasts with symmetric key cryptography, where both operations (‘signing’
and verification) are performed using the same key. A MAC (message
authentication code) is considered the symmetric counterpart as it also guarantees
the integrity of the data and the link with the possession of the key. Since a
symmetric key is used, this has grave implications for the security properties: the
verifier is also able to generate the MAC, so there is no way of knowing who
generated the MAC. This implies that there is no non-repudiation. An alleged author
of a MAC can always blame the verifier for generating the MAC. To make matters
more confusing many incorrectly label MACs as digital signatures, despite
fundamentally different security properties.

Authentication Technology
There is a broad range of technology used for authentication: passwords, OTPs,
zero-knowledge protocols, MAC-based protocols and … protocols using digital
signatures.

Authentication happens through an interaction between the user (device) and the
verifier (server). This is contrary to digital signatures, where the verifier can
perform the verification at any time in the future.

To build an authentication mechanism from digital signatures, interaction is needed
to ensure liveness. By using digital signatures in a carefully designed cryptographic
protocol, many additional security properties can be achieved, such as liveness, but
also confidentiality of the communication. Another advantage of using digital
signatures as building block for authentication, is that there is good support on
modern smartphones for hardware-backed signatures.
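A minimal sketch of the interaction needed for liveness, as a toy
challenge-response exchange in C. The "signature" function here is a stand-in,
not real cryptography; in practice the client would sign the nonce with its
private key and the verifier would check it with the matching public key:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Stand-in for a real digital signature. NOT secure: illustration only. */
static unsigned long toy_sign(unsigned long challenge, unsigned long key) {
    return challenge * 2654435761UL ^ key;
}

int main(void) {
    unsigned long client_key  = 0xC0FFEE;  /* hypothetical credential      */
    unsigned long server_copy = 0xC0FFEE;  /* what the verifier expects    */

    /* 1. The verifier sends a fresh random challenge (a nonce). Because
          the nonce is new each time, a valid response proves the
          credential is present NOW: that is the liveness property. */
    srand((unsigned)time(NULL));
    unsigned long nonce = (unsigned long)rand();

    /* 2. The client responds by "signing" the nonce with its key. */
    unsigned long response = toy_sign(nonce, client_key);

    /* 3. The verifier checks the response against its expectation. */
    if (response == toy_sign(nonce, server_copy))
        printf("authenticated: credential present during this session\n");
    else
        printf("authentication failed\n");
    return 0;
}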

A Notable Exception
In some rare cases, using digital signatures for authentication, and hence obtaining
non-repudiation, is an unwanted property. An excellent example can be found in
ePassports. The purpose of a passport is to prove your identity. As part of this
process the authenticity of the chip inside the ePassport is validated through an
authentication protocol. The chip essentially proves knowledge of the private key,
corresponding to the public key that is linked by the government to your identity.
One of the design criteria for ePassports was however that the execution of the
protocol would not lead to some kind of proof. Repudiation was crucial. For this
reason, no digital signatures are used in the authentication protocol for the
ePassport chip.
6. Explain goals and principles of system protection in detail.
Protection is especially important in a multiuser environment when multiple users
use computer resources such as CPU, memory, etc. It is the operating system's
responsibility to offer a mechanism that protects each process from other processes.
In a multiuser environment, all assets that require protection are classified as
objects, and those that wish to access these objects are referred to as subjects. The
operating system grants different 'access rights' to different subjects.

Protection in Operating System


In this section, you will learn about protection in the operating system, with its
needs, goals, and authentication.

What is Protection in Operating System?


A mechanism that controls the access of programs, processes, or users to the
resources defined by a computer system is referred to as protection. You may
utilize protection as a tool for multi-programming operating systems, allowing
multiple users to safely share a common logical namespace, including a directory or
files.

It needs the protection of computer resources like the software, memory, processor,
etc. Users should take protective measures as a helper to multiprogramming OS so
that multiple users may safely use a common logical namespace like a directory or
data. Protection may be achieved by maintaining confidentiality, integrity and
availability in the OS. It is critical to secure the device from unauthorized access,
viruses, worms, and other malware.

Need of Protection in Operating System


Various needs of protection in the operating system are as follows:

There may be security risks like unauthorized reading, writing, modification, or


preventing the system from working effectively for authorized users.
It helps to ensure data security, process security, and program security against
unauthorized user access or program access.
It is important to ensure no access rights' breaches, no viruses, no unauthorized
access to the existing data.
Its purpose is to ensure that only the systems' policies access programs, resources,
and data.
Goals of Protection in Operating System
Various goals of protection in the operating system are as follows:

The policies define how processes access the computer system's resources, such as
the CPU, memory, software, and even the operating system. It is the responsibility
of both the operating system designer and the app programmer, although these
policies may be modified at any time.
Protection is a technique for protecting data and processes from harmful or
intentional infiltration. It contains protection policies either established by itself, set
by management or imposed individually by programmers to ensure that their
programs are protected to the greatest extent possible.
It also provides a multiprogramming OS with the security that its users expect when
sharing common space such as files or directories.
Role of Protection in Operating System
Its main role is to provide a mechanism for implementing policies that define the
use of resources in a computer system. Some rules are set during the system's
design, while others are defined by system administrators to secure their files and
programs.

Every program has distinct policies for using resources, and these policies may
change over time. Therefore, system security is not the responsibility of the
system's designer alone; the programmer must also design protection techniques to
protect their system against infiltration.

Domain of Protection
Various domains of protection in operating system are as follows:

The protection policies restrict each process's access to its resource handling. A
process is obligated to use only the resources necessary to fulfil its task within the
time constraints and in the mode in which it is required. It is a process's protected
domain.
Processes and objects are abstract data types in a computer system, and these objects
have operations that are unique to them. A domain component is defined as
<object, {set of operations on object}>.
Each domain comprises a collection of objects and the operations that may be
implemented on them. A domain could be made up of only one process, procedure,
or user. If a domain is linked with a procedure, changing the domain would mean
changing the procedure ID. Objects may share one or more common operations.
Association between Process and Domain
When processes have the necessary access rights, they can switch from one domain
to another. It could be of two types, as shown below.

1. Fixed or Static

In a fixed association, all access rights could be given to processes at the start.
However, this results in a large number of access rights being granted up front. As
a result, a technique for changing the domain's contents dynamically is needed.

2. Changing or dynamic

A process may switch domains dynamically, creating a new domain in the process.

Security measures of Operating System


There are various security measures of the operating system that the users may take.
Some of them are as follows:

The network used for file transfers must be secure at all times. During the transfer,
no alien software should be able to harvest information from the network. It is
referred to as network sniffing, and it could be avoided by implementing encrypted
data transfer routes. Moreover, the OS should be capable of resisting forceful or
even accidental violations.
Passwords are a good authentication method, but they are the most common and
vulnerable. It is very easy to crack passwords.
Security measures at various levels are put in place to prevent malpractices, like no
one being allowed on the premises or access to the systems.
The best authentication techniques include a username-password combination, eye
retina scan, fingerprint, or even user cards to access the system.
System Authentication
One-time passwords, encrypted passwords, and cryptography are used to create a
strong password and a formidable authentication source.

1. One-time Password

It is a password that is unique at every login by the user. The system creates a
random number, and the user supplies a matching one: an algorithm generates a
random number for both the system and the user, and the outputs are matched
using a common function, as the sketch below illustrates.
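A toy sketch of a counter-based one-time password in C (illustration only; a
real scheme such as HOTP derives the code from an HMAC, and all values here
are made up):

#include <stdio.h>

/* Toy one-time password: both sides share a secret and a counter that
   advances on every login, so each password is valid only once. */
static unsigned otp(unsigned secret, unsigned counter) {
    unsigned x = secret ^ (counter * 2654435761u);
    return x % 1000000;            /* 6-digit code */
}

int main(void) {
    unsigned secret = 123456789, counter = 0;

    unsigned user_code   = otp(secret, counter);   /* shown on token  */
    unsigned server_code = otp(secret, counter);   /* computed by OS  */

    printf("login %s\n", user_code == server_code ? "accepted" : "rejected");

    counter++;  /* the same code is not accepted twice */
    printf("replayed code now %s\n",
           user_code == otp(secret, counter) ? "accepted" : "rejected");
    return 0;
}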

2. Encrypted Passwords

It is also a very effective technique of authenticating access. Encrypted data is


passed via the network, which transfers and checks passwords, allowing data to
pass without interruption or interception.

3. Cryptography

It's another way to ensure that unauthorized users can't access data transferred over
a network. It aids in secure data transmission. It introduces the concept of a key
to protecting the data. The key is crucial in this situation. When a user sends data,
he encodes it using a computer that has the key, and the receiver must decode the
data with the same key. As a result, even if the data is stolen in the middle of the
process, there's a good possibility the unauthorized user won't be able to access it.
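A toy C sketch of the key idea: the sender encodes data with a key, and only a
receiver holding the same key can decode it. XOR is used purely for
illustration; real systems use ciphers such as AES:

#include <stdio.h>
#include <string.h>

/* XOR each byte with a repeating key. Applying the same function twice
   with the same key restores the original data. Illustration only. */
void xor_crypt(char *data, size_t len, const char *key, size_t keylen) {
    for (size_t i = 0; i < len; i++)
        data[i] ^= key[i % keylen];
}

int main(void) {
    char msg[] = "transfer $500 to account 42";
    const char *key = "sharedkey";
    size_t len = strlen(msg);      /* compute once: ciphertext may hold 0 bytes */

    xor_crypt(msg, len, key, strlen(key));   /* sender encrypts  */
    /* ...the scrambled bytes travel over the network; without the key
       an eavesdropper cannot read them... */
    xor_crypt(msg, len, key, strlen(key));   /* receiver decrypts */
    printf("%s\n", msg);
    return 0;
}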

7. What is Access Matrix? Explain implementation of Access Matrix in detail.


The Access Matrix is a security model of the protection state in a computer
system. It is represented as a matrix. The access matrix is used to define the rights
of each process executing in a domain with respect to each object. The rows of the
matrix represent domains and the columns represent objects. Each cell of the
matrix represents a set of access rights: each entry (i, j) defines the set of
operations that a process executing in domain Di can invoke on object Oj.

            F1           F2     F3           Printer
D1          read         -      read         -
D2          -            -      -            print
D3          -            read   execute      -
D4          read, write  -      read, write  -
According to the above matrix, there are four domains and four objects: three
files (F1, F2, F3) and one printer. A process executing in D1 can read files F1 and
F3. A process executing in domain D4 has the same rights as D1, but it can also
write to those files. The printer can be accessed by only one process, executing in
domain D2. The mechanism of the access matrix consists of many policies and
semantic properties.


Specifically, we must ensure that a process executing in domain Di can access only
those objects that are specified in row i. Policies of the access matrix concerning
protection involve which rights should be included in the (i, j)th entry. We must
also decide the domain in which each process executes; this policy is usually
decided by the operating system. The users decide the contents of the access-matrix
entries. The association between domains and processes can be either static or
dynamic. The access matrix provides a mechanism for defining the control for this
association. When we switch a process from one domain to another, we execute a
switch operation on an object (the domain). We can control domain switching by
including domains among the objects of the access matrix: a process can switch
from one domain (Di) to another domain (Dj) if and only if the switch right is in
access(i, j).

            F1           F2     F3           Printer   D1       D2       D3       D4
D1          read         -      read         -         -        switch   -        -
D2          -            -      -            print     -        -        switch   switch
D3          -            read   execute      -         -        -        -        -
D4          read, write  -      read, write  -         switch   -        -        -
According to the matrix, a process executing in domain D2 can switch to domains
D3 and D4, a process executing in domain D4 can switch to domain D1, and a
process executing in domain D1 can switch to domain D2.
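A minimal C sketch of how the first matrix above could be represented and
checked (the bit-flag encoding and all names are illustrative, not a standard API):

#include <stdio.h>

/* Access rights as bit flags. */
#define R_READ    1
#define R_WRITE   2
#define R_EXECUTE 4
#define R_PRINT   8

enum { F1, F2, F3, PRINTER, NOBJECTS };
enum { D1, D2, D3, D4, NDOMAINS };

/* The access matrix from the text: rows = domains, columns = objects. */
static const int access_matrix[NDOMAINS][NOBJECTS] = {
    /*        F1                F2      F3                Printer */
    /*D1*/ { R_READ,            0,      R_READ,           0       },
    /*D2*/ { 0,                 0,      0,                R_PRINT },
    /*D3*/ { 0,                 R_READ, R_EXECUTE,        0       },
    /*D4*/ { R_READ | R_WRITE,  0,      R_READ | R_WRITE, 0       },
};

/* A process in 'domain' may perform 'right' on 'object' only if the
   corresponding matrix entry contains that right. */
int allowed(int domain, int object, int right) {
    return (access_matrix[domain][object] & right) != 0;
}

int main(void) {
    printf("D1 read F1:  %s\n", allowed(D1, F1, R_READ)       ? "yes" : "no");
    printf("D1 write F1: %s\n", allowed(D1, F1, R_WRITE)      ? "yes" : "no");
    printf("D2 print:    %s\n", allowed(D2, PRINTER, R_PRINT) ? "yes" : "no");
    return 0;
}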
