OS Important Questions For Externals

Uploaded by pradeepgowda3766

OS Important questions for externals

Chapter 1: Introduction to operating systems

1. What is an operating system? Explain multiprogramming and time-sharing systems. (6m[2020]), (8m[2019])
Ans: An operating system is system software that acts as an intermediary between a user of a
computer and the computer hardware. It is software that manages the computer hardware and allows
the user to execute programs in a convenient and efficient manner.

Multi-programming increases CPU utilization by organizing jobs, so that the CPU always has one to
execute.

The operating system keeps several jobs in memory simultaneously as shown in figure. This set of jobs is a
subset of the jobs kept in the job pool.
The operating system picks and begins to execute one of the jobs in memory. Eventually, the job may have
to wait for some task, such as an I/O operation, to complete. In a non-multiprogrammed system, the CPU
would sit idle.
In a multi-programmed system, the operating system simply switches to, and executes, another job.
Eventually, the first job finishes waiting and gets the CPU back. Thus, the CPU is never idle. Multi-
programmed systems provide an environment in which the various system resources (for example, CPU,
memory, and peripheral devices) are utilized effectively, but they do not provide for user interaction
with the computer system.

Time sharing systems


In time-sharing (or multitasking) systems, a single CPU executes multiple jobs by switching among them, but
the switches occur so frequently that the users can interact with each program while it is running. The
user feels that all the programs are being executed at the same time. Time sharing requires an
interactive (or hands-on) computer system, which provides direct communication between the user and the
system. The user gives instructions to the operating system or to a program directly, using an input
device such as a keyboard or a mouse, and waits for immediate results on an output device. Accordingly,
the response time should be short, typically less than one second.
A time-shared operating system allows many users to share the computer simultaneously. As the system
switches rapidly from one user to the next, each user is given the impression that the entire computer
system is dedicated to his use only, even though it is being shared among many users.
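The rapid switching described above can be sketched as a tiny round-robin simulation (an illustrative sketch only; the job names, burst times, and time quantum are invented for the example):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate time sharing: each job runs for at most `quantum` time
    units, then the CPU switches to the next job in the ready queue."""
    ready = deque(bursts.items())        # (name, remaining time) pairs
    timeline = []                        # order in which jobs get the CPU
    while ready:
        name, remaining = ready.popleft()
        timeline.append(name)            # this job runs for one quantum
        remaining -= quantum
        if remaining > 0:                # unfinished: back of the ready queue
            ready.append((name, remaining))
    return timeline

# Three jobs share one CPU; each user sees progress on every pass.
print(round_robin({"P1": 3, "P2": 2, "P3": 1}, quantum=1))
```

Because the switches interleave all runnable jobs before any one of them finishes, every user observes steady progress, which is exactly the "entire computer is mine" illusion described above.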

Mutturaj Goudra 1
A multiprocessor system is a computer system having two or more CPUs within a single computer,
each sharing main memory and peripherals. Multiple programs are executed by the processors in
parallel.

2. Explain dual mode operation in operating system with a neat block diagram.
(5m[2020]),(7m[MQ 2019])
Ans: Dual-Mode Operation. Since the operating system and the user programs share the hardware
and software resources of the computer system, it must be ensured that an error in a user
program cannot cause problems to other programs and to the operating system running in the
system. The approach taken is to use hardware support that allows us to differentiate among
various modes of execution. The system works in two separate modes of operation:
1. User mode
2. Kernel mode (supervisor mode, system mode, or privileged mode).
 A hardware bit of the computer, called the mode bit, is used to indicate the current mode:
kernel (0) or user (1). With the mode bit, we are able to distinguish between a task that is
executed by the operating system and one that is executed by the user.
 When the computer system is executing a user application, the system is in user mode. When
a user application requests a service from the operating system (via a system call), the
transition from user to kernel mode takes place.

The dual mode of operation provides us with the means for protecting the operating system from errant
users—and errant users from one another.
 The hardware allows privileged instructions to be executed only in kernel mode. If an attempt is
made to execute a privileged instruction in user mode, the hardware does not execute the
instruction but rather treats it as illegal and traps it to the operating system. The instruction to
switch to user mode is an example of a privileged instruction.

 Initial control is within the operating system, where instructions are executed in kernel mode. When
control is given to a user application, the mode is set to user mode. Eventually, control is switched
back to the operating system via an interrupt, a trap, or a system call.
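The mode-bit behaviour above can be sketched as a toy model (the class and method names here are invented for illustration; real mode switching is done in hardware, not Python):

```python
KERNEL, USER = 0, 1   # mode-bit values as given above: kernel (0), user (1)

class MachineSketch:
    """Toy model of dual-mode operation."""
    def __init__(self):
        self.mode = KERNEL            # boot: initial control is within the OS

    def dispatch_user_program(self):
        self.mode = USER              # control handed to a user application

    def execute(self, instruction, privileged=False):
        if privileged and self.mode == USER:
            self.mode = KERNEL        # hardware traps to the OS instead of executing
            return "trap"
        return "ok"                   # instruction executes normally

m = MachineSketch()
m.dispatch_user_program()
print(m.execute("set_mode_bit", privileged=True))   # prints "trap"
print(m.mode)                                       # back in kernel mode: 0
```

The trap path is the key point: the hardware refuses the privileged instruction in user mode and transfers control back to the kernel, mirroring the text above.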

3. Explain the types of multiprocessing and types of clustering. (5m[MQ 2019])


Ans:Multi -Processor Systems (parallel systems or tightly coupled systems) Systems that have two or
more processors in close communication, sharing the computer bus, the clock, memory, and peripheral
devices are the multiprocessor systems.
Multiprocessor systems have three main advantages:
Increased throughput - With multiple processors, different programs execute simultaneously, so more
work is done in less time. However, the speed-up with N processors is not N; it is less than N,
because of the overhead of keeping all the processors working correctly together.
Economy of scale - Multiprocessor systems can cost less than equivalent number of many single-
processor systems. As the multiprocessor systems share peripherals, mass storage, and power supplies,
the cost of implementing this system is economical.
Increased reliability - In multiprocessor systems, functions are shared among several
processors. If one processor fails, the system is not halted; it only slows down.
1. Graceful degradation - As there are multiple processors, when one processor fails another
processor takes up its work, and the system slows down rather than failing.
2. Fault tolerant - When one processor fails, its operations are stopped, and the system failure is then
detected, diagnosed, and corrected.
Different types of multiprocessor systems
1. Asymmetric multiprocessing
2. Symmetric multiprocessing

Clustered Systems Clustered systems are two or more individual systems connected together via a
network and sharing software resources. Clustering provides high availability of resources and services.
The service will continue even if one or more systems in the cluster fail. High availability is generally
obtained by storing a copy of files (s/w resources) in the system.
There are two types of clustered systems - asymmetric and symmetric:
1. Asymmetric clustering - one system is in hot-standby mode while the others are running the
applications. The hot-standby host machine does nothing but monitor the active server. If
that server fails, the hot-standby host becomes the active server.
2. Symmetric clustering - two or more systems are running applications and are monitoring each other.
This mode is more efficient, as it uses all of the available hardware. If any system fails, its job is taken up
by the monitoring system.

4. List the three main advantages of multiprocessor systems. Also bring out the difference between
graceful degradation and fault tolerance in this context. (5m[2017])

Ans:Multi -Processor Systems (parallel systems or tightly coupled systems) Systems that have two or
more processors in close communication, sharing the computer bus, the clock, memory, and peripheral
devices are the multiprocessor systems.
Multiprocessor systems have three main advantages:
Increased throughput - With multiple processors, different programs execute simultaneously, so more
work is done in less time. However, the speed-up with N processors is not N; it is less than N,
because of the overhead of keeping all the processors working correctly together.
Economy of scale - Multiprocessor systems can cost less than equivalent number of many single-
processor systems. As the multiprocessor systems share peripherals, mass storage, and power supplies,
the cost of implementing this system is economical.
Increased reliability - In multiprocessor systems, functions are shared among several
processors. If one processor fails, the system is not halted; it only slows down.
1. Graceful degradation - As there are multiple processors, when one processor fails another
processor takes up its work, and the system slows down rather than failing.
2. Fault tolerant - When one processor fails, its operations are stopped, and the system failure is then
detected, diagnosed, and corrected.
Different types of multiprocessor systems:
1. Asymmetric multiprocessing
2. Symmetric multiprocessing

5. Distinguish between the following pairs of terms:

i) Symmetric and asymmetric multiprocessor systems.
ii) CPU burst and I/O burst jobs.
iii) User's view and system's view of OS.
iv) Batch systems and time-sharing systems.
v) User mode and kernel mode operations. (10m[2017])
Ans:
i) Symmetric and asymmetric multiprocessor systems:

Symmetric multiprocessor systems (SMP) have multiple processors that share memory and peripheral
devices. Each processor performs the same functions and has equal access to resources. Tasks can be
assigned to any processor, and the load is typically balanced among them.
Asymmetric multiprocessor systems have multiple processors but with different roles. One processor,
called the master processor, controls the system and schedules tasks, while the other processors, known
as slave processors, execute the tasks assigned to them by the master processor. Asymmetric systems
may have separate memory banks for each processor, and they are typically used in specific applications
where different processing capabilities are required.

ii) CPU burst and I/O burst jobs:

CPU burst refers to the amount of time a process actively uses the CPU for computation without any I/O
operations. It is the period during which a process executes instructions on the CPU.
I/O burst refers to the period during which a process is waiting for input/output operations to complete,
such as reading from or writing to a disk or waiting for user input. During this time, the CPU is typically
idle.

iii) User’s view and systems view of OS:

The user's view of an operating system (OS) refers to how the OS appears and functions from the
perspective of a user or application program. This includes interfaces such as the graphical user interface

(GUI) or command-line interface (CLI) that users interact with to run programs, manage files, and
perform other tasks.
The systems view of an OS refers to how the OS manages hardware resources, schedules tasks, handles
memory management, and provides services to applications. It involves the low-level operations and
functions that enable the OS to control the computer's hardware efficiently.

iv) Batch systems and time-sharing systems:

Batch systems allow users to submit jobs to the computer system in batches, where each job runs to
completion before the next job begins. Jobs are typically processed sequentially without user interaction.
Batch systems are suitable for running large-scale, non-interactive tasks such as batch processing of data
or running predefined tasks overnight.
Time-sharing systems allow multiple users to interact with the computer system simultaneously by
sharing the CPU's time. The CPU rapidly switches between executing processes, giving the illusion of
concurrent execution to users. Time-sharing systems enable interactive computing, where users can run
programs, access files, and perform other tasks in real-time.

v) User mode and kernel mode operations:

User mode is a restricted mode of operation where user applications and processes execute. In user
mode, processes have limited access to system resources and cannot directly manipulate hardware or
privileged system functions. User mode is designed to ensure the stability and security of the system by
preventing user processes from interfering with critical system operations.
Kernel mode, also known as supervisor mode or privileged mode, is a privileged mode of operation
where the operating system's kernel executes. In kernel mode, the OS has unrestricted access to system
resources and can execute privileged instructions, such as manipulating hardware and managing system
resources. Kernel mode is used to perform critical system tasks and provide services to user processes.

6. Explain the role of the operating system from different viewpoints. Explain the dual mode of
operation of an operating system. (7m[2019])

Ans: The operating system can be viewed from two viewpoints - user views and system views.
User Views: The user's view of the operating system depends on the type of user.
 If the user is using a standalone system, then the OS is designed for ease of use and high
performance. Here resource utilization is not given importance.
 If the users are at different terminals connected to a mainframe or minicomputers, by sharing
information and resources, then the OS is designed to maximize resource utilization. OS is designed
such that the CPU time, memory and i/o are used efficiently and no single user takes more than the
resource allotted to them.
 If the users are in workstations, connected to networks and servers, then the user have a system unit
of their own and shares resources and files with other systems. Here the OS is designed for both ease
of use and resource availability (files).
 Other systems like embedded systems used in home devices (like washing machines) and automobiles do
not have any user interaction. There are some LEDs to show the status of the work.
 Users of hand-held systems expect the OS to be designed for ease of use and for performance per
amount of battery life.

System Views: - Operating system can be viewed as a resource allocator and control program.
• Resource allocator – The OS acts as a manager of hardware and software resources. CPU time, memory
space, file-storage space, I/O devices, shared files etc. are the different resources required during

execution of a program. There can be conflicting requests for these resources by different programs
running in the same system. The OS assigns the resources to the requesting programs depending on
priority.
• Control Program - The OS is a control program that manages the execution of user programs to prevent
errors and improper use of the computer.

Dual-Mode Operation: Since the operating system and the user programs share the hardware and
software resources of the computer system, it must be ensured that an error in a user program
cannot cause problems to other programs and to the operating system running in the system. The
approach taken is to use hardware support that allows us to differentiate among various modes
of execution. The system works in two separate modes of operation:
1. User mode
2. Kernel mode (supervisor mode, system mode, or privileged mode).
 A hardware bit of the computer, called the mode bit, is used to indicate the current mode:
kernel (0) or user (1). With the mode bit, we are able to distinguish between a task that is
executed by the operating system and one that is executed by the user.
 When the computer system is executing a user application, the system is in user mode. When
a user application requests a service from the operating system (via a system call), the
transition from user to kernel mode takes place.

Chapter 2: Operating-System Structures

1. What are system calls? Briefly point out their types. (5m[2020])


System calls provide an interface to the services of the operating system. They are generally written in
C or C++, although some are written in assembly for optimal performance.

Consider, for example, a program that copies data from one input file to an output file. A number of
system calls are used to finish this task. The first system call is to write a prompt message on the
screen (monitor), and another to accept the input file name. Then another system call writes a message
on the screen, and another accepts the output file name.
 When the program tries to open the input file, it may find that there is no file of that name or that
the file is protected against access. In these cases, the program should print a message on the
console (another system call) and then terminate abnormally (another system call). If the output file
already exists, the program may abort (a system call) or delete the existing file and create a new
one (another system call).
 Now that both files are opened, we enter a loop that reads from the input file (another system
call) and writes to the output file (another system call).
 Finally, after the entire file is copied, the program may close both files (another system call), write a
message to the console or window (another system call), and finally terminate normally (the final system call).
System calls can be grouped roughly into six major types: process control, file manipulation, device
management, information maintenance, communications, and protection.
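The open/read/write/close sequence above can be sketched with the POSIX-style wrappers in Python's os module, where each call maps onto the corresponding system call (the file names here are temporary ones created just for the example):

```python
import os, tempfile

# Set up a small input file for the example.
tmpdir = tempfile.mkdtemp()
in_path = os.path.join(tmpdir, "in.txt")
out_path = os.path.join(tmpdir, "out.txt")
with open(in_path, "wb") as f:
    f.write(b"hello, world")

fd_in = os.open(in_path, os.O_RDONLY)                        # open the input file
fd_out = os.open(out_path, os.O_WRONLY | os.O_CREAT, 0o644)  # create the output file
while True:
    chunk = os.read(fd_in, 4096)                             # read system call
    if not chunk:                                            # end of input file
        break
    os.write(fd_out, chunk)                                  # write system call
os.close(fd_in)                                              # close both files
os.close(fd_out)

with open(out_path, "rb") as f:
    print(f.read())                                          # the copied contents
```

Even this short copy loop issues one system call per read and per write, which is why the text above counts so many calls for such a simple task.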

2. What are virtual machines? Explain with block diagrams. Point out its benefits. (6m)
3. What are virtual machines? Explain with block diagrams. Point out its benefits. (6m)
4. With a neat diagram, explain the concept of virtual machines. (5m[2020]), (6m[MQ 2019])

5. What are virtual machines? How are they implemented?

Ans: The fundamental idea behind a virtual machine is to abstract the hardware of a single computer (the
CPU, memory, disk drives, network interface cards, and so forth) into several different execution
environments, thereby creating the illusion that each separate execution environment is running its own
private computer.

• Creates an illusion that a process has its own processor with its own memory.
• The host OS is the main OS installed in the system; the other OSes installed in the system are called
guest OSes.

Implementation
 Although the virtual-machine concept is useful, it is difficult to implement.
 Work is required to provide an exact duplicate of the underlying machine. Remember that the
underlying machine has two modes: user mode and kernel mode.
 The virtual-machine software can run in kernel mode, since it is the operating system. The virtual
machine itself can execute only in user mode.
Benefits
 Able to share the same hardware and run several different execution environments (OS).
Even though the virtual machines are separated from one another, software resources can be shared
among them. Two ways of sharing s/w resource for communication are:
 To share a file system volume (part of memory).
 To develop a virtual communication network to communicate between the virtual machines.
 The operating system runs on and controls the entire machine. Therefore, the current system must
be stopped and taken out of use while changes are made and tested. This period is commonly called
system-development time. With virtual machines, this problem is eliminated: user programs are
executed in one virtual machine and system development is done in another environment.
 Multiple OS can be running on the developer’s system concurrently. This helps in rapid porting and
testing of programmer’s code in different environments.
 System consolidation – two or more systems are made to run in a single system.

Module - 2

1. Explain process states with state transition diagram. Also explain PCB with a neat diagram. (6m[2020])
6. Describe the implementation of interprocess communication using shared memory and message
passing approaches. (8m[2019-20])
7. Explain the process states with a neat diagram. (6m[2019-20])
8. What is a process? What are the states a process can be in? Give the process state diagram,
clearly indicating the conditions for a process to shift from one state to another state. (8m[2017])
Ans: Process State
A process has 5 states. Each process may be in one of the following states -
• New - The process is in the stage of being created.
• Ready - The process has all the resources it needs to run. It is waiting to be assigned to the
processor.
• Running - Instructions are being executed.
• Waiting - The process is waiting for some event to occur. For example, the process may be
waiting for keyboard input, a disk access request, inter-process messages, a timer to go off, or a
child process to finish.
• Terminated - The process has completed its execution.
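The legal moves between these five states can be written down as a small transition table (a sketch of the standard diagram; the transition labels in the comments follow the descriptions above):

```python
# Legal moves in the five-state process model described above.
TRANSITIONS = {
    "new":        {"ready"},                            # admitted by the OS
    "ready":      {"running"},                          # scheduler dispatch
    "running":    {"ready", "waiting", "terminated"},   # interrupt / I/O wait / exit
    "waiting":    {"ready"},                            # I/O or event completes
    "terminated": set(),                                # no transitions out
}

def can_transition(src, dst):
    """True if the state diagram allows moving from src to dst."""
    return dst in TRANSITIONS[src]

print(can_transition("running", "waiting"))   # True: process starts waiting for I/O
print(can_transition("waiting", "running"))   # False: it must pass through ready first
```

The second check captures the one rule students often miss: a waiting process never goes straight back to running; its I/O completion only makes it ready.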

Process Control Block


For each process there is a Process Control Block (PCB), which stores the process-specific
information as shown below –
1. Process state - The state of the process may be new, ready, running, waiting, and so on.
2. Program counter - The counter indicates the address of the next instruction to be executed for
this process.
3. CPU registers - The registers vary in number and type, depending on the computer architecture.
They include accumulators, index registers, stack pointers, and general-purpose registers. Along with
the program counter, this state information must be saved when an interrupt occurs, to allow the
process to be continued correctly afterward.
4. CPU scheduling information - This information includes a process priority, pointers to scheduling
queues, and any other scheduling parameters.
5. Memory-management information - This includes information such as the value of the base and limit
registers, the page tables, or the segment tables.
6. Accounting information - This information includes the amount of CPU and real time used, time
limits, account numbers, job or process numbers, and so on.
7. I/O status information - This information includes the list of I/O devices allocated to the
process, a list of open files, and so on.
The PCB simply serves as the repository for any information that may vary from process to
process.
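The seven PCB fields listed above can be sketched as a data structure (field names here are invented for illustration; a real PCB is a kernel structure, not a Python object):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Sketch of a Process Control Block holding the fields listed above."""
    pid: int                                        # process number
    state: str = "new"                              # 1. process state
    program_counter: int = 0                        # 2. address of next instruction
    registers: dict = field(default_factory=dict)   # 3. saved CPU registers
    priority: int = 0                               # 4. CPU-scheduling information
    base: int = 0                                   # 5. memory management: base register
    limit: int = 0                                  # 5. memory management: limit register
    cpu_time_used: int = 0                          # 6. accounting information
    open_files: list = field(default_factory=list)  # 7. I/O status information

pcb = PCB(pid=7, state="ready", priority=3)
print(pcb.pid, pcb.state)
```

On a context switch, the kernel saves the running process's registers and program counter into its PCB and restores them from the PCB of the process being dispatched, which is exactly why fields 2 and 3 exist.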

3. What is interprocess communication? Explain its types. (5m[2020], 6m[2017])

12. What are the merits of interprocess communication? Name the two major models of
interprocess communication. (6m[2017])

Ans: Interprocess Communication - Processes executing concurrently may be either co-operative or
independent processes.
• Independent processes - processes that cannot affect other processes or be affected by other
processes executing in the system.
• Cooperating processes - processes that can affect other processes or be affected by other
processes executing in the system.

Co-operation among processes is allowed for the following reasons -

• Information sharing - There may be several processes which need to access the same file, so the
information must be accessible at the same time to all users.
• Computation speedup - Often a solution to a problem can be found faster if the problem can be
broken down into sub-tasks, which are solved simultaneously (particularly when multiple processors
are involved).
• Modularity - A system can be divided into cooperating modules which execute by sending
information among one another.
• Convenience - Even a single user can work on multiple tasks by information sharing.

Cooperating processes require some type of inter-process communication. This is provided by
two models:
• Shared memory systems
• Message passing systems.
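The two models can be contrasted with a small sketch (threads stand in for processes here, and Python's queue.Queue plays the role of the kernel's message-passing mailbox; the variable names are invented):

```python
import threading
import queue

# Message passing: the co-operating task sends data through a mailbox;
# nothing is shared directly between sender and receiver.
mailbox = queue.Queue()
def sender():
    mailbox.put("hello")            # send(message)
t = threading.Thread(target=sender)
t.start(); t.join()
received = mailbox.get()            # receive(message)

# Shared memory: both tasks read and write one common region directly.
region = {"value": 0}
def writer():
    region["value"] = 42            # plain write into the shared region
t = threading.Thread(target=writer)
t.start(); t.join()

print(received, region["value"])
```

The trade-off mirrors the real models: message passing routes every exchange through a managed channel (in real systems, the kernel), while shared memory needs no per-access mediation but leaves synchronization entirely to the cooperating processes.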

4. Is CPU scheduling necessary? Discuss the five different scheduling criteria used in CPU
scheduling mechanisms. (5m[2020])

Ans: Scheduling Queues
As processes enter the system, they are put into a job queue, which consists of all processes in
the system.
The processes that are residing in main memory and are ready and waiting to execute are kept on
a list called the ready queue. This queue is generally stored as a linked list.
A ready-queue header contains pointers to the first and final PCBs in the list. Each PCB includes
a pointer field that points to the next PCB in the ready queue.

A new process is initially put in the ready queue. It waits in the ready queue until it is selected for
execution and is given the CPU. Once the process is allocated the CPU and is executing, one of
several events could occur:

• The process could issue an I/O request, and then be placed in an I/O queue.
• The process could create a new subprocess and wait for its termination.
• The process could be removed forcibly from the CPU, as a result of an interrupt, and be
put back in the ready queue.
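The linked-list ready queue described above, with header pointers to the first and final PCBs, can be sketched as follows (class names invented for illustration):

```python
class PCBNode:
    """Minimal PCB with the pointer field described above."""
    def __init__(self, pid):
        self.pid = pid
        self.next = None                 # points to the next PCB in the queue

class ReadyQueue:
    """Linked list whose header keeps pointers to the first and final PCBs."""
    def __init__(self):
        self.first = self.final = None

    def enqueue(self, pcb):              # a ready process joins at the tail
        if self.final is None:
            self.first = pcb
        else:
            self.final.next = pcb
        self.final = pcb

    def dequeue(self):                   # scheduler selects the first PCB
        pcb = self.first
        self.first = pcb.next
        if self.first is None:
            self.final = None
        pcb.next = None
        return pcb

rq = ReadyQueue()
for pid in (10, 11, 12):
    rq.enqueue(PCBNode(pid))
print(rq.dequeue().pid, rq.first.pid)    # 10 11
```

Keeping a pointer to the final PCB makes enqueue O(1), so adding a process to the ready queue costs the same no matter how long the queue is.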

Schedulers
Schedulers are software which select an available process to be assigned to the CPU.
The long-term scheduler, or job scheduler - selects jobs from the job pool (on secondary memory,
disk) and loads them into memory.
If more processes are submitted than can be executed immediately, such processes are kept in
secondary memory. It runs infrequently, and can take time to select the next process.
The short-term scheduler, or CPU scheduler - selects a job from memory and assigns the CPU to it.
It must select a new process for the CPU frequently.
The medium-term scheduler - swaps processes out of memory and later reintroduces them into
memory and the ready queue.
Processes can be described as either:
I/O-bound process - spends more time doing I/O than computations.
CPU-bound process - spends more time doing computations and few I/O operations.
An efficient scheduling system will select a good mix of CPU-bound processes and I/O-bound
processes.
• If the scheduler selects more I/O-bound processes, then the I/O queue will be full and the ready
queue will be empty.
• If the scheduler selects more CPU-bound processes, then the ready queue will be full and the I/O
queue will be empty.
The five scheduling criteria commonly used to compare CPU-scheduling algorithms are: CPU
utilization, throughput, turnaround time, waiting time, and response time.

5. Explain multithreading models.
6. What is a thread? What is the need for multi-threaded processes? Indicate the
four major categories of benefits derived from multi-threaded programming.
Ans: A thread is the basic unit of CPU utilization; it comprises a thread ID, a program counter, a
register set, and a stack, and shares its code section, data section, and open files with other
threads of the same process. The three multithreading models map user threads to kernel threads:
many-to-one, one-to-one, and many-to-many. The four major categories of benefits of multi-threaded
programming are: responsiveness, resource sharing, economy, and scalability.
Module - 3

Chapter 1: Process Synchronization

1. Define semaphores. Explain their usage and implementation. (6m[2020])
2. Explain the Reader-Writer problem with semaphores in detail. (5m[2020]), (6m[2019])
3. Illustrate how the Readers-Writers problem can be solved using semaphores. (8m[2019-2020])
4. Define semaphores. Explain their usage and implementation. (6m[2017])
Ans:
SEMAPHORE
• A semaphore is a synchronization tool used to solve various synchronization problems, and it
can be implemented efficiently.
• Semaphores can be implemented so as not to require busy waiting.
• A semaphore S is an integer variable that is accessed only through two standard
atomic operations: wait() and signal(). The wait() operation was originally
termed P and signal() was called V.

• All modifications to the integer value of the semaphore in the wait() and signal()
operations must be executed indivisibly. That is, when one process modifies the
semaphore value, no other process can simultaneously modify that same semaphore
value.
Implementation
• The main disadvantage of this basic semaphore definition is that it requires busy waiting.
• While a process is in its critical section, any other process that tries to enter its
critical section must loop continuously in the entry code.
• This continual looping is clearly a problem in a real multiprogramming system,
where a single CPU is shared among many processes.

do {
    wait(mutex);
    // critical section
    signal(mutex);
    // remainder section
} while (TRUE);

• Busy waiting wastes CPU cycles that some other process might be able to use
productively. This type of semaphore is also called a spinlock because the process
"spins" while waiting for the lock.
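The busy-waiting semaphore above can be sketched as follows (a single-threaded illustration only; in a real system the spin loop would be executed by a different process from the one holding the lock, and the atomicity of wait() needs hardware support):

```python
class SpinSemaphore:
    """Semaphore with the busy-waiting wait()/signal() definition above."""
    def __init__(self, value=1):
        self.value = value
        self.spins = 0                 # count of wasted loop iterations

    def wait(self):                    # P operation
        while self.value <= 0:         # loop continuously in the entry code
            self.spins += 1            # these are the wasted CPU cycles
        self.value -= 1

    def signal(self):                  # V operation
        self.value += 1

mutex = SpinSemaphore(1)
mutex.wait()
# ... critical section ...
mutex.signal()
print(mutex.value, mutex.spins)        # 1 0: the lock was free, so no spinning
```

When the semaphore value is already 0 on entry, the caller burns CPU in the while loop until some other process calls signal(), which is exactly the spinlock cost described above.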

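Questions 2 and 3 above ask for the Readers-Writers solution. A standard first-readers-preference sketch with semaphores, written here with Python's threading.Semaphore (the variable names follow the usual textbook presentation; the thread counts are arbitrary):

```python
import threading

mutex = threading.Semaphore(1)   # protects read_count
wrt = threading.Semaphore(1)     # held by a writer, or by the group of readers
read_count = 0
shared_data = 0
events = []                      # record of reads and writes (list.append is thread-safe)

def reader(i):
    global read_count
    mutex.acquire()              # wait(mutex)
    read_count += 1
    if read_count == 1:
        wrt.acquire()            # first reader locks writers out
    mutex.release()              # signal(mutex)
    events.append(("read", i, shared_data))   # many readers may be here at once
    mutex.acquire()
    read_count -= 1
    if read_count == 0:
        wrt.release()            # last reader lets writers back in
    mutex.release()

def writer(value):
    global shared_data
    wrt.acquire()                # wait(wrt): exclusive access to the data
    shared_data = value
    events.append(("write", value))
    wrt.release()                # signal(wrt)

threads = [threading.Thread(target=writer, args=(1,))]
threads += [threading.Thread(target=reader, args=(i,)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(len(events), shared_data)  # 4 events recorded; the write happened exactly once
```

The key idea: only the first reader competes with writers for wrt; later readers slip in under the same lock, so readers never exclude each other, while any writer has the data entirely to itself.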
4. Illustrate Peterson's solution for the critical section problem. (6m[2019-2020])
5. What is the critical section problem? What requirements should a solution to the critical
section problem satisfy? State Peterson's solution and indicate how it satisfies the above
requirements. (10m[2017])
Ans:
PETERSON'S SOLUTION
• This is a classic software-based solution to the critical-section problem. There are
no guarantees that Peterson's solution will work correctly on modern computer
architectures.
• Peterson's solution provides a good algorithmic description of solving the critical-
section problem and illustrates some of the complexities involved in designing
software that addresses the requirements of mutual exclusion, progress, and
bounded waiting.
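A runnable sketch of Peterson's algorithm for two processes (threads stand in for processes; CPython's execution order makes this sketch behave as if sequentially consistent, which, as noted above, real modern hardware does not guarantee):

```python
import threading

flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # which process yields if both want in
count = 0               # shared variable updated inside the critical section

def process(i, iterations=200):
    global turn, count
    j = 1 - i
    for _ in range(iterations):
        flag[i] = True              # entry section: announce intent
        turn = j                    # politely give the other process its turn
        while flag[j] and turn == j:
            pass                    # busy wait until it is safe to enter
        count += 1                  # critical section (a non-atomic update)
        flag[i] = False             # exit section

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(count)
```

Mutual exclusion holds because both flag[j] and turn == j would have to be false for both processes at once, which is impossible since turn has a single value; progress and bounded waiting follow because a waiting process is admitted as soon as the other either leaves or sets turn in its favour.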

Chapter 2: Deadlocks
1. What is deadlock? What are the necessary conditions an operating system must satisfy for a
deadlock to occur? (5m[2020], 5m[2019-20])

6. What is a deadlock? What are the necessary conditions an operating system must satisfy for a
deadlock to occur? Indicate how many of these should occur for deadlock to happen.
(10m[2017])
Ans:
A process requests resources; if the resources are not available at that time, the process enters
a waiting state. Sometimes, a waiting process is never again able to change state, because the
resources it has requested are held by other waiting processes. This situation is called a
Deadlock.
A deadlock can arise only if the following four necessary conditions hold simultaneously in the system:
1. Mutual exclusion - at least one resource must be held in a non-sharable mode.
2. Hold and wait - a process holding at least one resource is waiting to acquire additional
resources held by other processes.
3. No preemption - a resource can be released only voluntarily by the process holding it.
4. Circular wait - a set of waiting processes {P0, P1, ..., Pn} exists such that P0 is waiting for a
resource held by P1, P1 for one held by P2, and so on, with Pn waiting for a resource held by P0.
All four conditions must occur together for a deadlock to happen.

2. What is the Resource Allocation Graph (RAG)? Explain how the RAG is very useful in describing
deadly embrace (deadlock) by considering your own example. (5m[2020], 5m[2019-20])
Ans:
Resource-Allocation Graph
Deadlocks can be described in terms of a directed graph called a system resource-allocation graph.
The graph consists of a set of vertices V and a set of edges E. The set of vertices V is partitioned
into two different types of nodes:
• P = {P1, P2, ...,Pn}, the set consisting of all the active processes in the system.
• R = {R1, R2, ..., Rm} the set consisting of all resource types in the system.
A directed edge from process Pi to resource type Rj is denoted Pi → Rj; it signifies that process
Pi has requested an instance of resource type Rj and is currently waiting for that resource.
A directed edge from resource type Rj to process Pi is denoted Rj → Pi; it signifies that an
instance of resource type Rj has been allocated to process Pi.
• A directed edge Pi → Rj is called a Request Edge.
• A directed edge Rj → Pi is called an Assignment Edge.
Pictorially, each process Pi is represented as a circle and each resource type Rj as a rectangle.
Since resource type Rj may have more than one instance, each instance is represented as a dot
within the rectangle.
A request edge points only to the rectangle Rj, whereas an assignment edge must also designate
one of the dots in the rectangle.

If the graph contains no cycles, then no process in the system is deadlocked. If the graph does contain a cycle, then a deadlock may exist.


• If each resource type has exactly one instance, then a cycle implies that a deadlock has
occurred. If the cycle involves only a set of resource types, each of which has only a
single instance, then a deadlock has occurred. Each process involved in the cycle is
deadlocked.
• If each resource type has several instances, then a cycle does not necessarily imply that
a deadlock has occurred. In this case, a cycle in the graph is a necessary but not a
sufficient condition for the existence of deadlock.
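With single-instance resource types, a cycle in the graph therefore implies deadlock, and a simple cycle check over the edge list detects it. A sketch (the graph-building helper and the example processes/resources are invented):

```python
def has_cycle(edges):
    """DFS cycle detection on a resource-allocation graph given as a list of
    directed edges: request edges like ("P1", "R2") and assignment edges
    like ("R1", "P1")."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
    visiting, finished = set(), set()

    def dfs(node):
        if node in visiting:
            return True                      # back edge found: a cycle exists
        if node in finished:
            return False
        visiting.add(node)
        if any(dfs(nxt) for nxt in graph.get(node, [])):
            return True
        visiting.remove(node)
        finished.add(node)
        return False

    return any(dfs(n) for n in list(graph))

# P1 holds R1 and requests R2; P2 holds R2 and requests R1 (a deadly embrace).
cyclic = [("R1", "P1"), ("P1", "R2"), ("R2", "P2"), ("P2", "R1")]
# P1 holds R1 and requests R2, which is free: no cycle, no deadlock.
acyclic = [("R1", "P1"), ("P1", "R2")]
print(has_cycle(cyclic), has_cycle(acyclic))   # True False
```

For multi-instance resource types this check only flags a *possible* deadlock, matching the necessary-but-not-sufficient remark above; a full detection algorithm must also count available instances.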
3. Discuss the various approaches used for deadlock recovery. (6m[2019-20])
5. Explain the process of recovery from deadlock. (5m[2017])
Ans:
RECOVERY FROM DEADLOCK
The system recovers from the deadlock automatically. There are two options for breaking a
deadlock: one is simply to abort one or more processes to break the circular wait; the other is to
preempt some resources from one or more of the deadlocked processes.
Process Termination

To eliminate deadlocks by aborting a process, we use one of two methods. In both methods, the
system reclaims all resources allocated to the terminated processes.
1. Abort all deadlocked processes: This method clearly will break the deadlock cycle, but
at great expense; the deadlocked processes may have computed for a long time, and the
results of these partial computations must be discarded and probably will have to be
recomputed later.
2. Abort one process at a time until the deadlock cycle is eliminated: This method
incurs considerable overhead, since after each process is aborted, a deadlock-detection
algorithm must be invoked to determine whether any processes are still deadlocked.

If the partial termination method is used, then we must determine which deadlocked process (or
processes) should be terminated. Many factors may affect which process is chosen, including:
1. What the priority of the process is
2. How long the process has computed and how much longer the process will compute
before completing its designated task
3. How many and what types of resources the process has used.
4. How many more resources the process needs in order to complete
5. How many processes will need to be terminated
6. Whether the process is interactive or batch
Resource Preemption
To eliminate deadlocks using resource preemption, we successively preempt some resources
from processes and give these resources to other processes until the deadlock cycle is broken.
If preemption is required to deal with deadlocks, then three issues need to be addressed:
1. Selecting a victim. Which resources and which processes are to be preempted? As in process
termination, we must determine the order of preemption to minimize cost. Cost factors may
include such parameters as the number of resources a deadlocked process is holding and the
amount of time the process has thus far consumed during its execution.
2. Rollback. If we preempt a resource from a process, what should be done with that process?
Clearly, it cannot continue with its normal execution; it is missing some needed resource. We must
roll back the process to some safe state and restart it from that state. Since it is difficult to
determine what a safe state is, the simplest solution is a total rollback:
abort the process and then restart it.

3. Starvation. How do we ensure that starvation will not occur? That is, how can we guarantee
that resources will not always be preempted from the same process?

7. State and explain the Banker's algorithm for deadlock avoidance. (10m[2017])


Ans:
Banker's Algorithm
The Banker’s algorithm is applicable to a resource allocation system with multiple instances of
each resource type.
• When a new process enters the system, it must declare the maximum number of instances
of each resource type that it may need. This number may not exceed the total number of
resources in the system.
• When a user requests a set of resources, the system must determine whether the allocation of
these resources will leave the system in a safe state. If it will, the resources are
allocated; otherwise, the process must wait until some other process releases enough resources.
To implement the banker's algorithm the following data structures are used.

Let n = number of processes, and m = number of resource types.
Available: A vector of length m indicates the number of available resources of each type.
If Available[j] = k, there are k instances of resource type Rj available.
Max: An n x m matrix defines the maximum demand of each process. If Max[i,j] = k, then
process Pi may request at most k instances of resource type Rj.
Allocation: An n x m matrix defines the number of resources of each type currently allocated
to each process. If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj.
Need: An n x m matrix indicates the remaining resource need of each process. If Need[i,j] = k,
then Pi may need k more instances of Rj to complete its task.

Need[i,j] = Max[i,j] – Allocation[i,j]
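The safety check at the heart of the Banker's algorithm can be sketched in Python as follows. This is a minimal sketch, not from the notes; the `available`, `max_need`, and `allocation` values are a commonly used textbook instance, assumed here purely for illustration.

```python
def is_safe(available, max_need, allocation):
    # Banker's safety check: look for an order in which every process can
    # finish with the resources currently free plus those later released.
    n, m = len(max_need), len(available)
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work, finish = list(available), [False] * n
    safe_sequence = []
    while len(safe_sequence) < n:
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pi can run to completion and release everything it holds
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                safe_sequence.append(i)
                break
        else:
            return None  # no remaining process can proceed -> unsafe state
    return safe_sequence

# Assumed textbook instance: 5 processes, 3 resource types
available = [3, 3, 2]
max_need = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_need, allocation))  # [1, 3, 0, 2, 4]
```

When a process requests resources, the system pretends to grant them and runs this check; the request is granted only if the resulting state is still safe.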

MODULE 4

Chapter 1:Memory Management

1. What is a Translation Look-aside Buffer (TLB)? Explain the TLB in detail for a simple paging
system with a neat diagram. (6m[2020])
Translation Look-aside Buffer
• A special, small, fast lookup hardware cache, called a translation look-aside buffer
(TLB).
• Each entry in the TLB consists of two parts: a key (or tag) and a value.
• When the associative memory is presented with an item, the item is compared with all
keys simultaneously. If the item is found, the corresponding value field is returned. The
search is fast; the hardware, however, is expensive. Typically, the number of entries in a
TLB is small, often numbering between 64 and 1,024.
• The TLB contains only a few of the page-table entries.
Working:
• When a logical address is generated by the CPU, its page number is presented to the TLB.
• If the page number is found (TLB hit), its frame number is immediately available and is used to
access memory.
• If the page number is not in the TLB (TLB miss), a memory reference to the page table must be
made. The obtained frame number can then be used to access memory (Figure 1).

• Some TLBs have wired down entries that can't be removed.
• Some TLBs store ASID (address-space identifier) in each entry of the TLB that uniquely
identify each process and provide address space protection for that process.
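The hit/miss behaviour described above can be sketched with a toy dictionary standing in for the TLB. This is a hypothetical software sketch; a real TLB is a hardware cache and evicts entries when it fills up.

```python
def lookup(tlb, page_table, page_number):
    # TLB hit: the frame number comes straight from the fast cache.
    if page_number in tlb:
        return tlb[page_number], "hit"
    # TLB miss: walk the page table, then cache the entry for next time.
    frame = page_table[page_number]
    tlb[page_number] = frame
    return frame, "miss"

tlb = {}
page_table = {0: 5, 1: 9, 2: 3}   # toy page -> frame mapping (assumed)
print(lookup(tlb, page_table, 1))  # (9, 'miss')  first reference misses
print(lookup(tlb, page_table, 1))  # (9, 'hit')   repeated reference hits
```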

2. Describe both the internal and external fragmentation problems encountered in a contiguous
memory allocation scheme. (5m[2020])
3. Illustrate with an example the internal and external fragmentation problems encountered
in a contiguous memory allocation. (6m[2019-20])
5. Distinguish between internal and external fragmentation. (2m[2017])
Ans:
Two types of memory fragmentation:
1. Internal fragmentation
2. External fragmentation
1. Internal Fragmentation
• The general approach is to break the physical-memory into fixed-sized blocks and
allocate memory in units based on block size.
• The allocated-memory to a process may be slightly larger than the requested-memory.
• The difference between requested-memory and allocated-memory is called internal
fragmentation i.e. Unused memory that is internal to a partition.
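A quick arithmetic sketch of internal fragmentation, assuming a hypothetical 4 KB block size and a 9000-byte request:

```python
block = 4096                           # fixed block size in bytes (assumed)
request = 9000                         # bytes the process actually asked for
blocks_needed = -(-request // block)   # ceiling division -> 3 blocks
allocated = blocks_needed * block      # 12288 bytes handed to the process
internal_frag = allocated - request    # 3288 bytes wasted inside the partition
print(blocks_needed, allocated, internal_frag)  # 3 12288 3288
```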
2. External Fragmentation
• External fragmentation occurs when there is enough total memory space to satisfy a request
but the available spaces are not contiguous (i.e. storage is fragmented into a large number of
small holes).
• Both the first-fit and best-fit strategies for memory allocation suffer from external fragmentation.
• Statistical analysis of first-fit reveals that, given N allocated blocks, another 0.5N blocks will be
lost to fragmentation. This property is known as the 50-percent rule.

Two solutions to external fragmentation:


• Compaction: The goal is to shuffle the memory contents to place all free memory
together in one large hole. Compaction is possible only if relocation is dynamic and done at
execution time.
• Permit the logical-address space of the processes to be non-contiguous. This allows a process to
be allocated physical memory wherever such memory is available.
Two techniques achieve this solution:
1) Paging and

2) Segmentation.

7. What is the principle behind paging? Explain its operation, clearly indicating how logical
addresses are converted to physical addresses. (10m[2017])

Ans:

Paging
• Paging is a memory-management scheme.
• This permits the physical-address space of a process to be non-contiguous.
• This also solves the considerable problem of fitting memory-chunks of varying sizes
onto the backing-store.
• Traditionally: Support for paging has been handled by hardware.
• Recent designs: The hardware & OS are closely integrated.
Basic Method of Paging
• The basic method for implementing paging involves breaking physical memory into fixed-sized
blocks called frames and breaking logical memory into blocks of the same size called pages.
• When a process is to be executed, its pages are loaded into any available memory frames from
the backing store.
• The backing store is divided into fixed-sized blocks that are of the same size as the
memory frames.
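The logical-to-physical address conversion can be sketched in a few lines; the page size and the page-table contents below are assumed toy values.

```python
PAGE_SIZE = 4096                 # assumed page size (2**12 bytes)
page_table = {0: 7, 1: 2, 2: 9}  # page number -> frame number (toy values)

def logical_to_physical(logical):
    page, offset = divmod(logical, PAGE_SIZE)   # split into page number + offset
    frame = page_table[page]                    # page table maps page -> frame
    return frame * PAGE_SIZE + offset           # recombine frame with same offset

print(logical_to_physical(5000))  # page 1, offset 904 -> 2*4096 + 904 = 9096
```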

4. Explain segmentation with an example. (6m[2017])
Ans:
Basic Method of Segmentation
• This is a memory-management scheme that supports user-view of memory (Figure 1).
• A logical-address space is a collection of segments.
• Each segment has a name and a length.
• The addresses specify both segment-name and offset within the segment.
• Normally, the user-program is compiled, and the compiler automatically constructs segments
reflecting the input program.
• For ex: The code, Global variables, The heap, from which memory is allocated, The
stacks used by each thread, The standard C library

5. Explain the structure of page table. (8m[2019-20])
The most common techniques for structuring the page table:
1. Hierarchical Paging
2. Hashed Page-tables
3. Inverted Page-tables
1. Hierarchical Paging
• Problem: Most computers support a large logical-address space (2^32 to 2^64). In these
systems, the page table itself becomes excessively large.
• Solution: Divide the page-table into smaller pieces.
Two Level Paging Algorithm:
• The page-table itself is also paged.
• This is also known as a forward-mapped page-table because address translation works
from the outer page-table inwards.

2. Hashed Page Tables


• This approach is used for handling address spaces larger than 32 bits.
• The hash-value is the virtual page-number.
• Each entry in the hash-table contains a linked-list of elements that hash to the same
location (to handle collisions).
• Each element consists of 3 fields:
1. Virtual page-number
2. Value of the mapped page-frame and

3. Pointer to the next element in the linked-list.

3. Inverted Page Tables


• Has one entry for each real page of memory.
• Each entry consists of virtual-address of the page stored in that real memory-location
and information about the process that owns the page.

Chapter 2:Virtual Memory Management

3. Explain the demand paging system. (5m[2020])


4. Illustrate how demand paging affects system performance. (8m[2019-20])
DEMAND PAGING

• Demand paging is similar to a paging system with swapping: when we want to execute a process,
we swap it into memory; otherwise, its pages are not loaded into memory.
• A swapper manipulates entire processes, whereas a pager manipulates individual pages of
a process.
▪ Bring a page into memory only when it is needed
▪ Less I/O needed
▪ Less memory needed
▪ Faster response
▪ More users
▪ Page is needed ⇒ reference to it
▪ invalid reference ⇒abort
▪ not-in-memory ⇒ bring to memory
▪ Lazy swapper – never swaps a page into memory unless that page will be needed
▪ A swapper that deals with pages is a pager.
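The performance impact of demand paging is usually illustrated with the effective access time (EAT). The sketch below uses assumed timings (200 ns memory access, 8 ms page-fault service time), chosen only to show how strongly a small fault rate dominates the average.

```python
def effective_access_time(p, mem_ns=200, fault_ns=8_000_000):
    # EAT = (1 - p) * memory access time + p * page-fault service time,
    # where p is the probability of a page fault on any access.
    return (1 - p) * mem_ns + p * fault_ns

# Even one fault per thousand accesses dominates the average:
print(effective_access_time(0.0))     # 200.0 ns with no faults
print(effective_access_time(0.001))   # ~8199.8 ns, roughly a 40x slowdown
```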

4. What is thrashing? How can it be controlled? (5m[2020], 4m[2019-20])


Ans:
THRASHING
• If the number of frames allocated to a low-priority process falls below the minimum
number required by the computer architecture, then we suspend that process's execution.
• A process is thrashing if it is spending more time paging than executing.

• If a process does not have enough frames, it will quickly page fault. To handle the fault,
it must replace some page that is not currently in use; consequently, it quickly faults
again and again.
• The process continues to fault, replacing pages that it then immediately needs and brings
back. This high paging activity is called thrashing: the phenomenon of excessively
moving pages back and forth between memory and secondary storage.

• CPU utilization increases with the degree of multiprogramming until a maximum is
reached. If the degree of multiprogramming is increased further, thrashing sets in and
CPU utilization drops sharply.
• At this point, to increase CPU utilization and stop thrashing, we must decrease the degree
of multiprogramming.
• We can limit the effects of thrashing by using a local replacement algorithm. To prevent
thrashing, we must provide a process with as many frames as it needs.
Working set model
• The working-set model uses the current memory requirements to determine the number
of page frames to allocate to a process. An informal definition is "the collection of pages that a
process is working with, and which must be resident if the process is to avoid thrashing".
The idea is to use the recent needs of a process to predict its future needs.
• The working set is an approximation of the program's locality. Ex: given a sequence of memory
references, if the working-set window covers the most recent references, then the working
set at time t1 might be {1,2,5,6,7} and at t2 change to {3,4}.
• At any given time, all pages referenced by a process in its last 4 seconds of execution
are considered to comprise its working set.
• A process will never execute until its working set is resident in main memory.
• Pages outside the working set can be discarded at any moment.
• Working sets are not enough and we must also introduce balance set.
▪ If the sum of the working sets of all the runnable processes is greater than the size
of memory, then refuse some processes for a while.
▪ Divide the runnable processes into two groups, active and inactive. The collection
of active sets is called the balance set. When a process is made active, its working
set is loaded.
▪ Some algorithm must be provided for moving processes into and out of the balance set. As a
working set changes, a corresponding change is made to the balance set.

▪ The working set prevents thrashing by keeping the degree of multiprogramming as
high as possible; thus it optimizes CPU utilization. The main disadvantage
of this is keeping track of the working set.
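The working set at time t can be sketched as a sliding window over the reference string. The reference string and the window size of 10 below are assumed values, chosen so the two working sets match the {1,2,5,6,7} and {3,4} example above.

```python
def working_set(reference_string, t, window):
    # the set of distinct pages touched in the last `window` references up to time t
    return set(reference_string[max(0, t - window + 1): t + 1])

# Assumed reference string of page numbers:
refs = [1, 2, 5, 6, 7, 7, 7, 7, 5, 1, 3, 4, 3, 4, 3, 4, 4, 4, 3, 4]
print(working_set(refs, t=9, window=10))   # {1, 2, 5, 6, 7}
print(working_set(refs, t=19, window=10))  # {3, 4}
```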
Page-Fault Frequency
• When the page-fault rate is too high, the process needs more frames; when it is
too low, the process may have too many frames.
• Upper and lower bounds can be established on the page-fault rate. If the
actual page-fault rate exceeds the upper limit, allocate the process another frame
or suspend the process.

5. Describe the steps in handling a page fault. (8m[2019-20])


Ans:Page Fault
If a page is needed that was not originally loaded up, then a page fault trap is generated.

Steps in Handling a Page Fault

1. The memory address requested is first checked, to make sure it was a valid memory
request.
2. If the reference is to an invalid page, the process is terminated. Otherwise, if the page is
not present in memory, it must be paged in.
3. A free frame is located, possibly from a free-frame list.
4. A disk operation is scheduled to bring in the necessary page from disk.
5. After the page is loaded to memory, the process's page table is updated with the new frame
number, and the invalid bit is changed to indicate that this is now a valid page reference.
6. The instruction that caused the page fault must now be restarted from the beginning.
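The six steps above can be sketched as a toy handler. The data structures (`page_table`, `free_frames`, `disk`) are hypothetical stand-ins for the real kernel structures, and step 6 is only a comment because restarting the instruction is done by the hardware.

```python
def handle_page_fault(page, page_table, free_frames, disk):
    # 1-2. validate the reference; an invalid address terminates the process
    if page not in disk:
        raise MemoryError("invalid memory reference")
    # 3. locate a free frame (a replacement algorithm would run if none is free)
    frame = free_frames.pop()
    # 4. a disk operation is scheduled; disk[page] is copied into the frame
    _ = disk[page]
    # 5. update the page table and mark the entry valid
    page_table[page] = {"frame": frame, "valid": True}
    # 6. the faulting instruction is then restarted by the hardware
    return frame

page_table = {}
print(handle_page_fault(2, page_table, free_frames=[4, 7], disk={2: "data"}))  # 7
```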

Module-5

Chapter 1:File System

1. List any five typical file attributes and any five file operations indicating their
purpose in one line each. (10m[2017])
Ans:
File Attributes
• A file is named, for the convenience of its human users, and is referred to by its
name. A name is usually a string of characters, such as example.c
• When a file is named, it becomes independent of the process, the user, and even the
system that created it.
A file's attributes vary from one operating system to another but typically consist of these:

• Name: The symbolic file name is the only information kept in human readable form.
• Identifier: This unique tag, usually a number, identifies the file within the file
system; it is the non-human-readable name for the file.
• Type: This information is needed for systems that support different types of files.
• Location: This information is a pointer to a device and to the location of the file on that device.
• Size: The current size of the file (in bytes, words, or blocks) and possibly the
maximum allowed size are included in this attribute.
• Protection: Access-control information determines who can do reading, writing,
executing, and so on.
• Time, date, and user identification: This information may be kept for creation, last modification,
and last use. These data can be useful for protection, security, and usage monitoring.

The information about all files is kept in the directory structure, which also resides on secondary
storage. Typically, a directory entry consists of the file's name and its unique identifier. The
identifier in turn locates the other file attributes.

2. Explain briefly the various operations performed on files. (6m[2020]), (8m[MQ 2019], 6m[2017])
Ans:
File Operations
A file is an abstract data type. To define a file properly, we need to consider the operations
that can be performed on files.
1. Creating a file:Two steps are necessary to create a file,
a) Space in the file system must be found for the file.
b) An entry for the new file must be made in the directory.

2. Writing a file:To write a file, we make a system call specifying both the name of the
file and the information to be written to the file. Given the name of the file, the
system searches the directory to find the file's location. The system must keep a write
pointer to the location in the file where the next write is to take place. The write
pointer must be updated whenever a write occurs.
3. Reading a file:To read from a file, we use a system call that specifies the name of the
file and where the next block of the file should be put. Again, the directory is
searched for the associated entry, and the system needs to keep a read pointer to the
location in the file where the next read is to take place. Once the read has taken place,
the read pointer is updated. Because a process is usually either reading from or
writing to a file, the current operation location can be kept as a per-process current
file-position pointer.
4. Repositioning within a file:The directory is searched for the appropriate entry, and

the current file-position pointer is repositioned to a given value. Repositioning within
a file need not involve any actual I/O. This file operation is also known as a file seek.
5. Deleting a file:To delete a file, search the directory for the named file. Having found
the associated directory entry, then release all file space, so that it can be reused by
other files, and erase the directory entry.
6. Truncating a file:The user may want to erase the contents of a file but keep its
attributes. Rather than forcing the user to delete the file and then recreate it, this
function allows all attributes to remain unchanged but lets the file be reset to length
zero and its file space released.

3. Explain the various access methods of files. (5m[2020]),(6m[MQ 2019], 6m[2017])


Ans:
ACCESS METHODS
• Files store information. When it is used, this information must be accessed and read
into computer memory. The information in the file can be accessed in several ways.
• Some of the common methods are sequential access and direct access.

1. Sequential Access
• Information in the file is processed in order, one record after the other. This is the
most common access method; editors and compilers usually access files in this fashion.

2. Direct Access
• A file is made up of fixed length logical records that allow programs to read and write records
rapidly in no particular order.
• The direct-access method is based on a disk model of a file, since disks allow
random access to any file block. For direct access, the file is viewed as a numbered
sequence of blocks or records.
• Example: we may read block 14, then read block 53, and then write block 7. There
are no restrictions on the order of reading or writing for a direct-access file.
• Direct-access files are of great use for immediate access to large amounts of
information such as Databases, where searching becomes easy and fast.

4. Explain the various types of directory structures. (8m[2019])


Ans:
1. Single-level Directory
• The simplest directory structure is the single-level directory. All files are contained in
the same directory, which is easy to support and understand.

A single-level directory has significant limitations, when the number of files increases or
when the system has more than one user.
• As directory structure is single, uniqueness of file name has to be maintained, which is
difficult when there are multiple users.

2. Two-Level Directory
• In the two-level directory structure, each user has its own user file directory (UFD).

The UFDs have similar structures, but each lists only the files of a single user.
• When a user refers to a particular file, only his own UFD is searched. Different users
may have files with the same name, as long as all the file names within each UFD are
unique.

3. Tree Structured Directories


• A tree is the most common directory structure.
• The tree has a root directory, and every file in the system has a unique path name.
• A directory contains a set of files or subdirectories. A directory is simply another file, but
it is treated in a special way.

4. Acyclic Graph Directories


• The common subdirectory should be shared. A shared directory or file will exist in the
file system in two or more places at once. A tree structure prohibits the sharing of files
or directories.
• An acyclic graph is a graph with no cycles. It allows directories to share subdirectories
and files.

5. General Graph Directory
• Problem: If there are cycles, we want to avoid searching components twice.
• Solution: Limit the no. of directories accessed in a search.

Chapter 2:Implementation of File Systems

1. Explain the various allocation methods in implementing file systems. (5m[2020]), (6m[MQ 2019], 8m[2017])
2. Describe various file allocation methods. (8m[2019])

4. Write short notes on linked and indexed allocation methods with a neat diagram. (8m[2018-19])
Ans:
ALLOCATION METHODS
Allocation methods address the problem of allocating space to files so that disk space is utilized
effectively and files can be accessed quickly.
Three methods exist for allocating disk space
• Contiguous allocation
• Linked allocation
• Indexed allocation
Contiguous allocation:
• Requires that each file occupy a set of contiguous blocks on the disk
• Accessing a file is easy – only need the starting location (block #) and length (number of
blocks)
• Contiguous allocation of a file is defined by the disk address and length (in block units) of
the first block. If the file is n blocks long and starts at location b, then it occupies blocks b,
b + 1, b + 2, ... ,b + n - 1. The directory entry for each file indicates the address of the
starting block and the length of the area allocated for this file.

Disadvantages:
1. Finding space for a new file is difficult. The system chosen to manage free space
determines how this task is accomplished. Any management system can be used, but
some are slower than others.
2. Satisfying a request of size n from a list of free holes is a problem. First fit and best
fit are the most common strategies used to select a free hole from the set of available
holes.
3. The above algorithms suffer from the problem of external fragmentation.
▪ As files are allocated and deleted, the free disk space is broken into pieces.
▪ External fragmentation exists whenever free space is broken into chunks.
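The block range b, b + 1, ..., b + n - 1 occupied by a contiguously allocated file can be sketched as:

```python
def contiguous_blocks(start, length):
    # a file starting at block b with length n occupies b, b+1, ..., b+n-1
    return list(range(start, start + length))

print(contiguous_blocks(14, 3))  # [14, 15, 16]
```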

Linked Allocation:
• Solves the problems of contiguous allocation
• Each file is a linked list of disk blocks: blocks may be scattered anywhere on the disk
• The directory contains a pointer to the first and last blocks of a file
• Creating a new file requires only creation of a new entry in the directory
• Writing to a file causes the free-space management system to find a free block

Disadvantages:
1. The major problem is that it can be used effectively only for sequential-access
files. To find the ith block of a file, we must start at the beginning of that file
and follow the pointers until we get to the ith block.
2. Space is required for the pointers. The solution is clusters: collect blocks into multiples
and allocate clusters rather than blocks.

Indexed allocation:

• Brings all the pointers together into one location called index block.
• Each file has its own index block, which is an array of disk-block addresses.
• The ith entry in the index block points to the ith block of the file. The directory contains
the address of the index block. To find and read the ith block, we use the pointer in the ith
index-block entry.

• Disadvantages :

▪ Suffers from some of the same performance problems as linked allocation


▪ Index blocks can be cached in memory; however, data blocks may be spread all
over the disk volume.
▪ Indexed allocation does suffer from wasted space.
▪ The pointer overhead of the index block is generally greater than the pointer
overhead of linked allocation.

Implementation:
File system implementation in an operating system refers to how the file system manages the
storage and retrieval of data on a physical storage device such as a hard drive, solid-state drive, or
flash drive. The file system implementation includes several components, including:

File System Structure: The file system structure refers to how files and directories are
organized and stored on the physical storage device. This includes the layout of file-system data
structures such as the directory structure, file allocation table, and inodes.
File Allocation: The file allocation mechanism determines how files are allocated on the storage
device. This can include allocation techniques such as contiguous allocation, linked allocation,
indexed allocation, or a combination of these techniques.
Data Retrieval: The file system implementation determines how the data is read from and written
to the physical storage device. This includes strategies such as buffering and caching to optimize
file I/O performance.
Security and Permissions: The file system implementation includes features for managing file
security and permissions. This includes access control lists (ACLs), file permissions, and ownership
management.
Recovery and Fault Tolerance: The file system implementation includes features for recovering
from system failures and maintaining data integrity. This includes techniques such as journaling
and file system snapshots.

4. Briefly explain the methods of keeping track of free space on disks. (10m[2017])
5. What do you mean by a free-space list? With a suitable example, explain any 3 methods
of free-space list implementation. (8m[2018-19])
Ans:
Free Space Management
The space created after deleting the files can be reused. Another important aspect of disk
management is keeping track of free space in memory. The list which keeps track of free space in
memory is called the free-space list.

a) Bit Vector
• Fast algorithms exist for quickly finding contiguous blocks of a given size
• One simple approach is to use a bit vector, in which each bit represents a disk block,
set to 1 if free or 0 if allocated.
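A sketch of a bit-vector free list, assuming a toy 16-block disk where bit i (counted from the most significant end) represents block i:

```python
bitmap = 0b1100_0110_1000_0000  # 16 blocks; 1 = free, 0 = allocated (assumed)

def free_blocks(bitmap, n_blocks):
    # bit i (from the most significant end) represents block i
    return [i for i in range(n_blocks) if bitmap >> (n_blocks - 1 - i) & 1]

print(free_blocks(bitmap, 16))  # [0, 1, 5, 6, 8] -- these blocks are free
```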
b) Linked List
a. A linked list can also be used to keep track of all free blocks.
b. Traversing the list and/or finding a contiguous block of a given size are not
easy, but fortunately are not frequently needed operations. Generally the
system just adds and removes single blocks from the beginning of the list.
c) Grouping
a. A variation on linked list free lists. It stores the addresses of n free blocks in
the first free block. The first n-1 blocks are actually free. The last block
contains the addresses of another n free blocks, and so on.
b. The address of a large number of free blocks can be found quickly.
d) Counting
a. When there are multiple contiguous blocks of free space, the system can
keep track of the starting address of the group and the number of contiguous
free blocks.
b. Rather than keeping a list of n free disk addresses, we can keep the address
of the first free block and the number of free contiguous blocks that follow the first
block.
e) Space Maps
a. Sun's ZFS file system was designed for huge numbers and sizes of files,
directories, and even file systems.
b. The resulting data structures could be inefficient if not implemented carefully.
For example, freeing up a 1 GB file on a 1 TB file system could involve
updating thousands of blocks of free list bit maps if the file was spread across
the disk.
Chapter 3:Secondary Storage Structures

1. Explain the various Disk Scheduling algorithms with example. (8m[2020]),(8m[MQ 2019],
10m[2017])
2. What is disk scheduling? Discuss different disk scheduling techniques. (12m[2017])
3. Explain the following disk scheduling algorithm in brief with examples:
i) FCFS scheduling
ii) SSTF scheduling
iii) SCAN scheduling
iv) LOOK scheduling (9m[2019])
Ans:DISK SCHEDULING
Different types of disk scheduling algorithms are as follows:
1. FCFS (First Come First Serve)
2. SSTF (Shortest Seek Time First)
3. SCAN (Elevator)
4. C-SCAN
5. LOOK
6. C-LOOK

2. SSTF (Shortest Seek Time First) algorithm:
This selects the request with minimum seek time from the current head position. SSTF
chooses the pending request closest to the current head position.

5. Look Scheduling algorithm:
LOOK and C-LOOK scheduling are different versions of SCAN and C-SCAN respectively.
Here the arm goes only as far as the final request in each direction; then it reverses, without
going all the way to the end of the disk. LOOK and C-LOOK look for a request
before continuing to move in a given direction.

Eg: consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 37, 122, 14,
124, 65, 67.

If the disk head is initially at 53 and the head is moving towards the outer tracks, it
services 65, 67, 98, 122, 124 and 183. At the final request, 183, the arm reverses and
moves back towards the remaining requests, serving 37 and then 14.
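The LOOK example above can be checked with a short sketch (a hypothetical helper, not from the notes):

```python
def look(requests, head, direction="up"):
    # LOOK: service every request in the current direction of travel,
    # then reverse at the last request instead of at the disk edge.
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    order = up + down if direction == "up" else down + up
    # total head movement = sum of distances between successive positions
    movement = sum(abs(b - a) for a, b in zip([head] + order, order))
    return order, movement

order, moved = look([98, 183, 37, 122, 14, 124, 65, 67], head=53)
print(order)  # [65, 67, 98, 122, 124, 183, 37, 14]
print(moved)  # 299 cylinders of total head movement
```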
