
TULSIRAMJI GAIKWAD- PATIL College of Engineering & Technology

Department of Information Technology


Fourth Semester
B.Tech. Examination Solution Set
Subject: Operating System

Section – A
Q 1. a) Define Operating System? Explain Batch, Time Sharing & Real Time Operating
System. (7)
Ans:
An Operating system is a program that controls the execution of application programs and acts as
an interface between the user of a computer and the computer hardware.
An Operating system is concerned with the allocation of resources and services, such as
memory, processors, devices and information. The Operating System correspondingly includes
programs to manage these resources, such as a traffic controller, a scheduler, memory
management module, I/O programs, and a file system.

Batch System
 Batch operating system is one where programs and data are collected together in a batch
before processing starts. A job is a predefined sequence of commands, programs and data
combined into a single unit.
 The figure below shows the memory layout for a simple batch system. Memory management
in a batch system is very simple: memory is usually divided into two areas, the operating
system area and the user program area.

Fig: Memory Layout for a Simple Batch System


 Scheduling is also simple in a batch system: jobs are processed in the order of submission,
i.e., in first-come, first-served fashion.
 When a job completes execution, its memory is released and the output for the job is
copied into an output spool for later printing.
 Batch systems often provide simple forms of file management. Access to files is serial.
Batch systems do not require any time-critical device management.
 Batch systems are inconvenient for users because users cannot interact with their jobs to
fix problems. There may also be long turn-around times.

Time Sharing Systems


 Time sharing, or multitasking, is a logical extension of multiprogramming. Multiple
jobs are executed by the CPU switching between them, but the switches occur so
frequently that the users may interact with each program while it is running.
 In an interactive, or hands-on, computer system the user gives instructions to the
operating system or to a program directly, and receives an immediate response. Usually, a
keyboard is used to provide input, and a display screen (such as a cathode-ray tube (CRT)
or monitor) is used to provide output.
 Time-sharing systems were developed to provide interactive use of a computer
system at a reasonable cost. A time-shared operating system uses CPU scheduling and
multiprogramming to provide each user with a small portion of a time-shared
computer. Each user has at least one separate program in memory. A program that is
loaded into memory and is executing is commonly referred to as a process. When a
process executes, it typically executes for only a short time before it either finishes
or needs to perform I/O. I/O may be interactive; that is, output is to a display for
the user and input is from a user keyboard. Since interactive I/O typically runs at people
speeds, it may take a long time to complete.
 A time-shared operating system allows many users to share the computer
simultaneously. Since each action or command in a time-shared system tends to be
short, only a little CPU time is needed for each user. As the system switches
rapidly from one user to the next, each user is given the impression that she has her
own computer, whereas actually one computer is being shared among many users.
 Time-sharing operating systems are even more complex than are multi-programmed
operating systems. As in multiprogramming, several jobs must be kept simultaneously in
memory, which requires some form of memory management and protection.

Real Time Operating System


 A real-time operating system (RTOS) is an operating system (OS) intended to serve real
time application requests. It must be able to process data as it comes in, typically without
buffering delays. Processing time requirements (including any OS delay) are measured in
tenths of seconds or shorter.
 A real-time operating system (RTOS) is an operating system that guarantees a certain
capability within a specified time constraint. For example, an operating system might be
designed to ensure that a certain object was available for a robot on an assembly line.
 RTOS is categorized into hard RTOS & soft RTOS.
 In "hard" real-time operating system, if the calculation could not be performed for making
the object available at the designated time, the operating system would terminate with a
failure.
 The "soft" real-time operating system is the less restricted type of operating system. If the
data is not processed within the specified time interval then output may loss its utility.
Some real-time operating systems are created for a special application and others are more
general purpose.

Q 1. b) Explain various file access methods. (3)


Ans:

i) Sequential Access:
The simplest access method is sequential access. Information in the file is processed in
order, one record after the other. Editors and compilers usually access files in this fashion.
Reads and writes make up the bulk of the operations on a file. A read operation—read
next—reads the next portion of the file and automatically advances a file pointer, which tracks the
I/O location. Similarly, the write operation—write next—appends to the end of the file and
advances to the end of the newly written material (the new end of file). Such a file can be reset to
the beginning; and on some systems, a program may be able to skip forward or backward n
records for some integer n. Sequential access is based on a tape model of a file.

Fig: Sequential Access File

ii) Direct Access:


Another method is direct access (or relative access). A file is made up of fixed-length
logical records that allow programs to read and write records rapidly in no particular order. The
direct-access method is based on a disk model of a file, since disks allow random access to any
file block. For direct access, the file is viewed as a numbered sequence of blocks or records. Thus,
we may read block 14, then read block 53, and then write block 7. There are no restrictions on the
order of reading or writing for a direct-access file. Direct-access files are of great use for
immediate access to large amounts of information. Databases are often of this type. When a query
concerning a particular subject arrives, we compute which block contains the answer and
then read that block directly to provide the desired information.
For the direct-access method, the file operations must be modified to include the block
number as a parameter. Thus, we have read n, where n is the block number, rather than read next,
and write n rather than write next. An alternative approach is to retain read next and write next, as
with sequential access, and to add an operation position file to n, where n is the block number.
Then, to effect a read n, we would position to n and then read next.
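As an illustration, the C standard library exposes this pair of primitives directly: fseek() plays the role of "position file to n" and fread() the role of "read next". The sketch below is a minimal example, assuming a hypothetical file data.bin made up of fixed-length 512-byte blocks.

#include <stdio.h>

#define BLOCK_SIZE 512   /* assumed size of one logical block */

/* Direct-access "read n": position the file to block n, then read it. */
int read_block(FILE *fp, long n, unsigned char buf[])
{
    /* "position file to n": seek to byte offset n * BLOCK_SIZE */
    if (fseek(fp, n * BLOCK_SIZE, SEEK_SET) != 0)
        return -1;
    /* "read next": read one full block at the current position */
    return fread(buf, 1, BLOCK_SIZE, fp) == BLOCK_SIZE ? 0 : -1;
}

int main(void)
{
    unsigned char buf[BLOCK_SIZE];
    FILE *fp = fopen("data.bin", "rb");     /* hypothetical block file */
    if (fp != NULL) {
        if (read_block(fp, 14, buf) == 0)   /* "read block 14", as in the text */
            printf("block 14 read\n");
        fclose(fp);
    }
    return 0;
}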

Q 1. c) Differentiate between Multitasking & multiprogramming. (3)


Ans:

Multiprogramming:
- Multiprogramming is the capability of a system to keep two or more programs in memory at the same time, with the CPU switching among them.
- The user cannot interact with the system while a job runs.
- The CPU is switched to another job only when the running job must wait, for example for I/O completion, so that the CPU is kept busy.

Multitasking:
- Multitasking is the ability of a system to perform more than one task at a time.
- The user can interact with the system.
- The concurrent or interleaved execution of two or more jobs is carried out by a single CPU, which switches among them on time slices.
- For example, while your computer is printing a 100-page document you can still do other jobs, such as typing a new document, so more than one task is performed.

Q 2. a) Explain various file allocation methods. (6)


Ans:
Contiguous Allocation
 The contiguous allocation method requires each file to occupy a set of contiguous
blocks on the disk. Contiguous allocation of a file is defined by the disk address and length
(in block units) of the first block. If the file is n blocks long, and starts at location
b, then it occupies blocks b, b + 1, b + 2, ..., b + n – 1. The directory entry for each file
indicates the address of the starting block and the length of the area allocated for this file.
 Accessing a file that has been allocated contiguously is easy. For sequential access, the
file system remembers the disk address of the last block referenced and, when
necessary, reads the next block. For direct access to block i of a file that starts at
block b, we can immediately access block b + i.
 The contiguous disk-space-allocation problem can be seen to be a particular application
of the general dynamic storage-allocation problem. First Fit and Best Fit are the most
common strategies used to select a free hole from the set of available holes.
 These algorithms suffer from the problem of external fragmentation. To prevent loss of
significant amounts of disk space to external fragmentation, the user had to run a
repacking routine that copied the entire file system onto another floppy disk or onto a
tape. The original floppy disk was then freed completely, creating one large
contiguous free space. The routine then copied the files back onto the floppy disk
by allocating contiguous space from this one large hole. This scheme effectively
compacts all free space into one contiguous space, solving the fragmentation problem.
 The time cost is particularly severe for large hard disks that use contiguous allocation,
where compacting all the space may take hours and may be necessary on a weekly
basis.
 A major problem is determining how much space is needed for a file. When the file is
created, the total amount of space it will need must be found and allocated. The user will
normally overestimate the amount of space needed, resulting in considerable wasted
space.

Fig: Contiguous Allocation of Disk Space

Linked Allocation
 With linked allocation, each file is a linked list of disk blocks; the disk blocks may be
scattered anywhere on the disk.
 Each directory entry contains a pointer to the first disk block of the file; this pointer is
initialized to nil to signify an empty file.
 There is no external fragmentation with linked allocation, and any free block on the free-
space list can be used to satisfy a request. There is no need to declare the size of a file
when that file is created. A file can continue to grow as long as there are free
blocks.
 The major problem is that linked allocation can be used effectively only for sequential-access
files. To find the ith block of a file, we must start at the beginning of that file and follow
the pointers until we get to the ith block. Each access to a pointer requires a disk read and
sometimes a disk seek.
 Another drawback of linked allocation is the space required for the pointers. If a pointer
requires 4 bytes out of a 512-byte block, then 0.78 percent of the disk is being used for
pointers rather than for information. The usual solution to this problem is to collect blocks
into multiples, called clusters, and to allocate clusters rather than blocks.
 An important variation on the linked allocation method is the use of a file allocation
table (FAT). The table has one entry for each disk block, and is indexed by block number.
The directory entry contains the block number of the first block of the file. The table
entry indexed by that block number then contains the block number of the next
block in the file. This chain continues until the last block, which has a special end-of-file
value as the table entry. Unused blocks are indicated by a 0 table value. Allocating a new
block to a file is a simple matter of finding the first 0-valued table entry, and replacing the
previous end-of-file value with the address of the new block. The 0 is then
replaced with the end-of file value.

Fig: Linked Allocation; Fig: File Allocation Table
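Finding the ith logical block of a file under the FAT scheme then reduces to following a chain of table entries. The C sketch below is a minimal illustration; the table size and block numbers are invented for the example, and logical blocks are numbered from 0.

#include <stdio.h>

#define FAT_EOF  -1   /* assumed end-of-file marker in the table */
#define UNUSED    0   /* unused blocks are indicated by a 0 entry */

/* Follow the FAT chain to find the disk block holding logical block i
 * of a file whose first block is 'start'. Returns -1 if i is past EOF. */
int fat_lookup(const int fat[], int start, int i)
{
    int block = start;
    while (i-- > 0) {
        if (fat[block] == FAT_EOF)
            return -1;              /* chain ended before block i */
        block = fat[block];         /* one table access per hop */
    }
    return block;
}

int main(void)
{
    /* Toy table: a file occupying blocks 217 -> 618 -> 339 -> EOF. */
    static int fat[1000];           /* all entries start as UNUSED (0) */
    fat[217] = 618; fat[618] = 339; fat[339] = FAT_EOF;
    printf("logical block 2 is on disk block %d\n",
           fat_lookup(fat, 217, 2));    /* prints 339 */
    return 0;
}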

Indexed Allocation
 Linked allocation cannot support efficient direct access, since the pointers to the
blocks are scattered with the blocks themselves all over the disk and need to be
retrieved in order. Indexed allocation solves this problem by bringing all the pointers
together into one location: the index block.
 Each file has its own index block, which is an array of disk-block addresses. The ith entry
in the index block points to the ith block of the file. The directory contains the
address of the index block.
 When the file is created, all pointers in the index block are set to nil. When the ith block is
first written, a block is obtained from the free-space manager, and its address is put in the
ith index-block entry.
 Indexed allocation supports direct access without suffering from external fragmentation,
because any free block on the disk may satisfy a request for more space.
 Indexed allocation does suffer from wasted space: the pointer overhead of the index block
is generally greater than the pointer overhead of linked allocation. The question of how
large the index block should be leads to the following mechanisms:
1. Linked scheme. An index block is normally one disk block. Thus, it can be read and
written directly by itself. To allow for large files, several index blocks may be linked together.
2. Multilevel index. A variant of the linked representation uses a first-level
index block to point to a set of second-level index blocks, which in turn point to
the file blocks. To access a block, the operating system uses the first-level index
to find a second-level index block, and that block to find the desired data block
(a small sketch follows the figure).
Fig: Index Allocation
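For the multilevel index, locating logical block i is simple arithmetic on the number of pointers an index block can hold. A minimal C sketch, assuming 512-byte index blocks holding 128 four-byte pointers (invented but typical figures):

#include <stdio.h>

#define PTRS_PER_BLOCK 128   /* assumed: 512-byte index block / 4-byte pointers */

/* For a two-level index, logical block i is reached through entry
 * 'outer' of the first-level index block and entry 'inner' of the
 * selected second-level index block. */
void locate(int i, int *outer, int *inner)
{
    *outer = i / PTRS_PER_BLOCK;
    *inner = i % PTRS_PER_BLOCK;
}

int main(void)
{
    int outer, inner;
    locate(5000, &outer, &inner);
    printf("first-level entry %d, second-level entry %d\n", outer, inner);
    /* prints: first-level entry 39, second-level entry 8 */
    return 0;
}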

Q 2. b) Compare spooling & buffering. (4)


Ans:

Buffering: It is a method of overlapping the input, output and processing of a single job. After
data has been read and the CPU is about to start operating on it, the input device is instructed to begin
the next input immediately. The CPU and the input device are then both busy. By the time the
CPU is ready for the next data item, the input device will have finished reading it. The CPU can
then begin processing the newly read data while the input device starts to read the following data.
The same can be done for output; in this case the CPU creates data that is put into a buffer until an
output device can accept it. If the CPU is faster than the devices, it always finds an empty buffer on
input and a full buffer on output, and in both cases it must wait for the device.
Buffering thus overlaps the input, output and processing of a single job.

Spooling: It stands for Simultaneous Peripheral Operation On-Line. With disk technology, rather than
the cards being read from the card reader directly into memory and the job then being processed,
cards are read directly from the card reader onto the disk. The location of the card images is
recorded in a table kept by the OS. When the job is executed, the OS satisfies its requests for
card-reader input by reading from the disk. Similarly, when the job requests the printer to output
a line, that line is copied into a system buffer and written to the disk. When the job is
completed, the output is actually printed. This form of processing is called spooling. Spooling
allows the CPU to overlap the input of one job with the computation and output of other jobs.

Q 2. c) Differentiate tightly coupled & loosely coupled multiprocessing. (3)


Ans:

Tightly Coupled System:
- Contains multiple CPUs that are connected at the bus level. These CPUs may have access to a central shared memory, or may participate in a memory hierarchy with both local and shared memory.
- Performs better and is physically smaller than a loosely coupled system.
- More expensive.
- Delay is low; data rate is high.
- Uses a dynamic interconnection network.
- E.g., two CPU chips on the same PCB connected by wires.

Loosely Coupled System:
- Based on multiple standalone single- or dual-processor computers interconnected via a high-speed communication system.
- Physically larger than a tightly coupled system.
- Less expensive.
- Delay is high; data rate is low.
- Uses a static interconnection network.
- E.g., two computers connected via modem over a telephone system.

Q 3. a) For each process listed in the table, draw a Gantt chart illustrating its execution using:-
i) Round Robin (Time quantum= 3)
ii) Priority Scheduling
iii) First Come First Serve
iv) Shortest Job First
Process Burst Time Priority
A 10 2
B 6 5
C 2 3
D 4 1
E 8 4
Also compare each algorithm on the basis of average waiting time and turnaround time
calculated. (10)
Ans:
Gantt Charts:

i) Round Robin (Time quantum= 3)

A B C D E A B D E A E A

0 3 6 8 11 14 17 20 21 24 27 29 30

ii) Priority Scheduling

D A C E B
0 4 14 16 24 30

iii) First Come First Serve

A B C D E
0 10 16 18 22 30

iv) Shortest Job First

C D B E A
0 2 6 12 20 30

Comparison on the basis of average waiting time and turnaround time

Algorithm Avg. Turnaround Time Avg. Waiting Time


Round Robin Scheduling 21.6 ms 15.6 ms
Priority Scheduling 17.6 ms 11.6 ms
First Come First Serve 19.2 ms 13.2 ms
Shortest Job First 14.0 ms 8.0 ms
From the above table it can be concluded that the shortest-job-first algorithm has the minimum
average waiting and turnaround times; the times increase for priority, first-come-first-served, and
round-robin scheduling, in that order.
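The figures in the table can be checked mechanically. The C sketch below recomputes the FCFS row, assuming (as in the question) that all five processes arrive at time 0; sorting the burst array into ascending order before the loop yields the Shortest Job First row instead.

#include <stdio.h>

/* With all arrivals at time 0, the waiting time of each job under FCFS
 * is simply the sum of the bursts that run before it. */
int main(void)
{
    int burst[] = {10, 6, 2, 4, 8};   /* A..E in order of submission */
    int n = 5, t = 0;
    double wait_sum = 0, tat_sum = 0;
    for (int i = 0; i < n; i++) {
        wait_sum += t;    /* job i waits until all earlier jobs finish */
        t += burst[i];    /* t is now the completion time of job i */
        tat_sum += t;     /* turnaround = completion - arrival(0) */
    }
    printf("FCFS: avg waiting = %.1f ms, avg turnaround = %.1f ms\n",
           wait_sum / n, tat_sum / n);   /* prints 13.2 and 19.2 */
    return 0;
}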
Q 3.b) Draw and explain process state transition diagram. (3)
Ans:

Process State Transition Diagram: When a process executes, it changes state. The process state is
defined as the current activity of the process. The figure below shows the general form of the
process state transition diagram. There are five states, and each process is in exactly one of
them at any instant. The states are listed below.
1. New
2. Ready
3. Running
4. Waiting
5. Terminated (exit)

Fig: Diagram for Process State

1. New: A process that has just been created.


2. Ready: Ready processes are waiting to have the processor allocated to them by the operating
system so that they can run.
3. Running: The process that is currently being executed. A running process possesses all
the resources needed for its execution, including the processor.
4. Waiting: A process that cannot execute until some event occurs such as the completion of an
I/O operation. The running process may become suspended by invoking an I/O module.
5. Terminated: A process that has been released from the pool of executable processes by
the operating system.
Whenever a process changes state, the operating system reacts by placing the
process's PCB in the list that corresponds to its new state. Only one process can be running
on any processor at any instant, while many processes may be in the ready and waiting states.

Q 4. a) Suppose the head of a moving-head disk with 200 tracks, numbered 0 to 199, is currently
serving a request at track 140. The arriving requests are kept in FIFO order. The requested tracks are
84, 147, 91, 177, 94, 150, 102, 175, 130
Assuming the earlier direction of head movement was towards track 0, calculate the total head
movement for the following disk scheduling algorithms.
i) SSTF ii) SCAN iii) C-SCAN iv) FCFS (8)
Ans:
i) SSTF
Service order: 140 → 147 → 150 → 130 → 102 → 94 → 91 → 84 → 175 → 177

Total Head Movement = 7 + 3 + 20 + 28 + 8 + 3 + 7 + 91 + 2

= 169 Cylinders

ii) SCAN

Service order: 140 → 130 → 102 → 94 → 91 → 84 → 0 → 147 → 150 → 175 → 177

Total Head Movement = (140 - 0) + (177 - 0)

= 317 Cylinders

iii) C-SCAN

Service order: 140 → 130 → 102 → 94 → 91 → 84 → 0 → 199 → 177 → 175 → 150 → 147

Total Head Movement = (140 - 0) + (199 - 147)

= 192 Cylinders

iv) FCFS
Service order: 140 → 84 → 147 → 91 → 177 → 94 → 150 → 102 → 175 → 130

Total Head Movement = (140-84) + (147-84) + (147-91) + (177-91) + (177-94) + (150-94)
+ (150-102) + (175-102) + (175-130)

= 566 Cylinders
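The FCFS total is easy to verify mechanically, since it is just the sum of the distances between successive head positions. A minimal C sketch, starting from cylinder 140 with the request list from the question:

#include <stdio.h>
#include <stdlib.h>

/* Total head movement for FCFS: sum of the absolute differences
 * between consecutive head positions. */
int fcfs_movement(int head, const int req[], int n)
{
    int total = 0;
    for (int i = 0; i < n; i++) {
        total += abs(req[i] - head);
        head = req[i];
    }
    return total;
}

int main(void)
{
    int req[] = {84, 147, 91, 177, 94, 150, 102, 175, 130};
    printf("FCFS total head movement = %d cylinders\n",
           fcfs_movement(140, req, 9));   /* prints 566 */
    return 0;
}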
Q. 4 b) Explain sector queuing briefly. (5)
Ans:

Sector queuing is an algorithm for scheduling fixed-head devices. It is based on the division of
each track into a fixed number of blocks called sectors. The disk address in each request specifies
the track and sector. Since seek time is zero for fixed-head devices, the main component of service
time is rotational latency. Sector queuing is primarily used with fixed-head devices: if there is more
than one request for service within a particular track or cylinder, sector queuing can be used to
order the multiple requests within that cylinder.
Example: Assume the head is currently over sector 2 and the first request in the queue is for sector
12. To service this request, we wait until sector 12 revolves under the read/write heads. If there is a
request in the queue for sector 5, it can be serviced on the way, before the request for sector 12,
without causing the request for sector 12 to be delayed.
Sector queuing defines a separate queue for each sector of the drum. When a request arrives
for sector i, it is placed in the queue for sector i.

Fig: Sector Queuing
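A minimal sketch of the idea follows, assuming a drum with 16 sectors per track (an invented number) and a single pending-request flag per sector queue. It reproduces the example above: as the drum rotates from sector 2, the request for sector 5 is serviced on the way to sector 12.

#include <stdio.h>

#define SECTORS 16   /* assumed number of sectors per track */

int main(void)
{
    int pending[SECTORS] = {0};    /* one queue (here, one flag) per sector */
    pending[12] = 1;               /* the requests from the example above */
    pending[5]  = 1;
    int head = 2;                  /* heads currently over sector 2 */
    for (int step = 0; step < SECTORS; step++) {
        int s = (head + step) % SECTORS;   /* sector rotating under the heads */
        if (pending[s]) {
            printf("servicing sector %d\n", s);  /* prints 5, then 12 */
            pending[s] = 0;
        }
    }
    return 0;
}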

Q 5) Explain the following. (3+3+3+3+2=14)


i)Demand Paging
ii) Thrashing
iii) Overlays
iv) Segmented Paging
v) Spatial Locality
Ans:
i) Demand Paging
A demand-paging system is similar to a paging system with swapping, where processes
reside in secondary memory (usually a disk). When we want to execute a process, we swap it into
memory; rather than swapping the entire process in, however, only the pages that are needed are
brought into memory. The term pager is used with demand paging rather than swapper, since a
swapper manipulates entire processes, whereas a pager is concerned with the individual pages of a
process.

Fig: Transfer of Paged Memory to contiguous disk space


The hardware to support demand paging is the same as the hardware for paging and
swapping:
• Page table: This table has the ability to mark an entry invalid through a valid-invalid bit or
special value of protection bits.
• Secondary memory: This memory holds those pages that are not present in main memory. The
secondary memory is usually a high-speed disk. It is known as the swap device, and the section of
disk used for this purpose is known as swap space.
When a process is to be swapped in, the pager guesses which pages will be used before the
process is swapped out again. Instead of swapping in a whole process, the pager brings only those
necessary pages into memory. With this scheme, we need some form of hardware support to
distinguish between the pages that are in memory and the pages that are on the disk. The valid-
invalid bit scheme can be used for this purpose. When this bit is set to "valid" the associated page
is both legal and in memory. If the bit is set to "invalid," the page either is not valid or is valid but
is currently on the disk.

Fig: Page table when some pages are not in memory

ii) Thrashing
Consider a process that does not have "enough" frames. If the process does not have the
number of frames it needs to support pages in active use, it will quickly page-fault. At this point, it
must replace some page. As all its pages are in active use, it must replace a page that will be
needed again right away. Consequently, it quickly faults again, and again, and again, replacing
pages that it must bring back in immediately. This high paging activity is called "thrashing". A
process is thrashing if it is spending more time paging than executing.

Fig: Thrashing
Thrashing results in severe performance problems. The operating system monitors CPU
utilization. If CPU utilization is too low, we increase the degree of multiprogramming by
introducing a new process to the system. A global page-replacement algorithm is used; it replaces
pages without regard to the process to which they belong. Now suppose that a process enters a
new phase in its execution and needs more frames. It starts faulting and taking frames away from
other processes. These processes in turn also fault for pages, taking frames from other processes.
As processes wait for the paging device, CPU utilization decreases. The CPU scheduler sees the
decreasing CPU utilization and increases the degree of multiprogramming. Thrashing has
occurred, and system throughput plunges. The page-fault rate increases tremendously as a result,
the effective memory-access time increases. No work is getting done, because the processes are
spending all their time paging.

iii) Overlays
The entire program and data of a process must be in the physical memory for the process
to execute. The size of a process is limited to the size of physical memory. If a process is larger
than the amount of memory, a technique called overlays can be used.
The idea of overlays is to keep in memory only those instructions and data that are needed at any
given time. When other instructions are needed, they are loaded into the space that was previously
occupied by instructions that are no longer needed. Overlays are implemented by the user; no special
support is needed from the operating system, but the programming design of an overlay structure is complex.
Example: Consider a two-pass assembler.
o Pass1 constructs a symbol table.
o Pass2 generates machine-language code.
Assume the following sizes: Pass 1 requires 70K, Pass 2 requires 80K, and the symbol table and
common routines together require 50K (say, 20K and 30K respectively).
To load everything at once, we need 200K of memory. If only 150K is available, we cannot
run our process. Notice that Pass 1 and Pass 2 do not need to be in memory at the same time. So, we
define two overlays:
– Overlay A: symbol table, common routines, and Pass1.
– Overlay B: symbol table, common routines, and Pass2.
We add a 10K overlay driver and start with overlay A in memory. When Pass 1 finishes, we
jump to the overlay driver, which reads overlay B into memory, overwriting overlay A, and transfers
control to Pass 2. Overlay A needs 130K and overlay B needs 140K.

iv) Segmented Paging


The pure segmentation scheme has several problems:
1. If segments are very large, it is inconvenient to keep them whole in main memory.
2. If segments are very large and there is no paging, external fragmentation is possible.
3. The search time to allocate a segment, using best fit or first fit, is also high.

This leads to the idea of paging the segments and bringing into main memory only those pages
that are necessary. The paged segmentation scheme works as follows:
1. A virtual address becomes a segment number, a page within that segment, and an offset
within the page.
2. The segment number indexes into the segment table, which yields the base address of the
page table for that segment.
3. The remainder of the address (page number and offset) is checked against the limit of the
segment.
4. The page number is used to index the page table; the entry found there is the frame number.
5. The frame number and the offset are combined to form the physical address, which is used to
refer to the data of interest in main memory (see the sketch below the figure).

Fig: Paged Segmentation
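The translation steps above can be illustrated with a short C sketch. The page size, segment-table layout and entries here are invented purely for illustration; real hardware would also apply protection checks.

#include <stdio.h>

#define PAGE_SIZE 1024

typedef struct {
    int page_table[4];   /* frame number for each page of the segment */
    int limit;           /* segment length in bytes */
} Segment;

/* Steps 2-5 above: index the segment table, check the limit,
 * index the page table, and combine frame number with offset. */
int translate(const Segment segs[], int seg, int page, int offset)
{
    if (page * PAGE_SIZE + offset >= segs[seg].limit)
        return -1;                            /* step 3: limit violation */
    int frame = segs[seg].page_table[page];   /* step 4: page-table lookup */
    return frame * PAGE_SIZE + offset;        /* step 5: physical address */
}

int main(void)
{
    Segment segs[1] = {{{7, 3, 9, 2}, 4 * PAGE_SIZE}};
    /* virtual address (segment 0, page 1, offset 100) -> frame 3 */
    printf("physical address = %d\n", translate(segs, 0, 1, 100));  /* 3172 */
    return 0;
}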

v) Spatial Locality
Locality of reference, also known as the principle of locality, is the phenomenon of the
same value, or related storage locations, being frequently accessed. There are two basic types
of reference locality: temporal locality and spatial locality.
Spatial locality refers to the use of data elements within relatively close storage
locations. If a particular memory location is referenced at a particular time, then it is likely that
nearby memory locations will be referenced in the near future. In this case it is common to attempt
to guess the size and shape of the area around the current reference for which it is worthwhile to
prepare faster access.

Section – B

Q 6. a)What are different types of memory fragmentation? Under what circumstances does
each occur? (4)
Ans:
Following are the different types of memory fragmentation:
i) Internal Fragmentation:
When partitioning is static, memory is wasted in each partition into which an object of smaller
size than the partition itself is loaded. This wasting of memory within a partition, due to the
difference between the size of the partition and the size of the object resident within it, is called
internal fragmentation.
Internal fragmentation occurs when memory is internal to a region but is not being used.

ii) External fragmentation:


When an object is removed from memory, the space it occupied is returned to the pool of
free space, from which new allocations are made. After some time in operation, dynamic
partitioning has a tendency to fragment main memory into interspersed areas of allocated and
unused memory. As a result, an allocation may fail to find a free region large enough to fulfil the
request. This wasting of memory between partitions, due to the scattering of free space into a
number of discontiguous areas, is called external fragmentation.
External fragmentation occurs when a region is unused and available, but too small for
any waiting job.

Q 6. b) Consider the following page reference string:


4, 1, 2, 1, 5, 4, 1, 2, 1, 5
Assume 3page frames and pure demand paging.
How many page faults will occur for: i) FIFO ii) LRU iii) Optimal Algorithm. (6)
Ans:

i) FIFO Replacement

Reference String   4   1   2   1   5   4   1   2   1   5
Frame 1            4   4   4   4   5*  5   5   2*  2   2
Frame 2                1   1   1   1   4*  4   4   4   5*
Frame 3                    2   2   2   2   1*  1   1   1
Page Fault         #   #   #       #   #   #   #       #

No. of Page Faults (#) = 8

ii) LRU Replacement

Reference String   4   1   2   1   5   4   1   2   1   5
Frame 1            4   4   4   4   5*  5   5   2*  2   2
Frame 2                1   1   1   1   1   1   1   1   1
Frame 3                    2   2   2   4*  4   4   4   5*
Page Fault         #   #   #       #   #       #       #

No. of Page Faults (#) = 7

iii) Optimal Algorithm

Reference String   4   1   2   1   5   4   1   2   1   5
Frame 1            4   4   4   4   4   4   4   2*  2   2
Frame 2                1   1   1   1   1   1   1   1   1
Frame 3                    2   2   5*  5   5   5   5   5
Page Fault         #   #   #       #           #

No. of Page Faults (#) = 5
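These counts can be reproduced with a small simulator. The C sketch below implements FIFO replacement (a circular index always points at the oldest resident page) and reports 8 faults for the string above; LRU and the optimal algorithm differ only in how the victim frame is chosen.

#include <stdio.h>

/* Count page faults for FIFO replacement with 'nframes' frames. */
int fifo_faults(const int ref[], int n, int nframes)
{
    int frames[8], victim = 0, faults = 0;
    for (int j = 0; j < nframes; j++)
        frames[j] = -1;                /* all frames initially empty */
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < nframes; j++)
            if (frames[j] == ref[i]) { hit = 1; break; }
        if (!hit) {                    /* fault: replace the oldest page */
            frames[victim] = ref[i];
            victim = (victim + 1) % nframes;
            faults++;
        }
    }
    return faults;
}

int main(void)
{
    int ref[] = {4, 1, 2, 1, 5, 4, 1, 2, 1, 5};
    printf("FIFO faults = %d\n", fifo_faults(ref, 10, 3));  /* prints 8 */
    return 0;
}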


Q6. c) What is Belady’s Anomaly? (3)
Ans:
The general principle is that as the number of frames increases, the page-fault rate should
decrease.
E.g., for the reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1, applying the FIFO
algorithm with 3 frames gives 15 page faults, while increasing the number of frames to 4
reduces this to 10 page faults, as expected.
Now consider the reference string 1 2 3 4 1 2 5 1 2 3 4 5.
In this case the number of page faults with 3 frames is 9 and with 4 frames is 10, i.e., the
number of page faults increases with the number of frames.
This result is unexpected and is known as Belady's Anomaly. It can occur with the FIFO
algorithm.

Reference String   1   2   3   4   1   2   5   1   2   3   4   5
Frame 1            1   1   1   4*  4   4   5*  5   5   5   5   5
Frame 2                2   2   2   1*  1   1   1   1   3*  3   3
Frame 3                    3   3   3   2*  2   2   2   2   4*  4
Page Fault         #   #   #   #   #   #   #           #   #
No. of page faults with 3 page frames: 9

Reference String   1   2   3   4   1   2   5   1   2   3   4   5
Frame 1            1   1   1   1   1   1   5*  5   5   5   4*  4
Frame 2                2   2   2   2   2   2   1*  1   1   1   5*
Frame 3                    3   3   3   3   3   3   2*  2   2   2
Frame 4                        4   4   4   4   4   4   3*  3   3
Page Fault         #   #   #   #           #   #   #   #   #   #
No. of page faults with 4 page frames: 10

Q 7. a) What is deadlock? Explain deadlock avoidance techniques? (5)


Ans:
In a multiprogramming environment, several processes may compete for a finite
number of resources. A process requests resources; if the resources are not available at
that time, the process enters a wait state. It may happen that waiting processes will never
again change state, because the resources they have requested are held by other waiting processes.
This situation is called deadlock.
Deadlock avoidance techniques
One method for avoiding deadlocks is to require additional information about how resources
are to be requested. The various algorithms that use this approach differ in the amount and type of
information required. The simplest and most useful model requires that each process declare the
maximum number of resources of each type that it may need. Given this a priori, information, it is
possible to construct an algorithm that ensures that the system will never enter a deadlocked state.
Such an algorithm defines the deadlock-avoidance approach. A deadlock-avoidance algorithm
dynamically examines the resource-allocation state to ensure that a circular-wait condition can
never exist. The resource-allocation state is defined by the number of available and allocated
resources and the maximum demands of the processes.

Safe State
A state is safe if the system can allocate resources to each process (up to its maximum) in
some order and still avoid a deadlock. A system is in a safe state only if there exists a safe
sequence. A sequence of processes <P1, P2, ..., Pn> is a safe sequence for the current allocation
state if, for each Pi, the resource requests that Pi can still make can be satisfied by the currently
available resources plus the resources held by all Pj, with j < i. In this situation, if the resources
that Pi needs are not immediately available, then Pi can wait until all Pj have finished. When they
have finished, Pi can obtain all of its needed resources, complete its designated task, return its
allocated resources, and terminate. When Pi terminates, Pi+1 can obtain its needed resources, and
so on. If no such sequence exists, then the system state is said to be unsafe.

Fig: Safe, Unsafe & Deadlock State Spaces

Resource-Allocation-Graph Algorithm
In addition to the request and assignment edges of the resource-allocation graph, a new type of
edge, called a claim edge, is introduced in this algorithm. A claim edge Pi → Rj indicates that
process Pi may request resource Rj at some time in the future. This edge resembles a request edge
in direction but is represented in the graph by a dashed line. When process Pi requests resource Rj,
the claim edge Pi → Rj is converted to a request edge. Similarly, when a resource Rj is released
by Pi, the assignment edge Rj → Pi is reconverted to a claim edge Pi → Rj. Before process Pi
starts executing, all its claim edges must already appear in the resource-allocation graph. This
condition can be relaxed by allowing a claim edge Pi → Rj to be added to the graph only if all
the edges associated with process Pi are claim edges.
Suppose that process Pi requests resource Rj. The request can be granted only if converting the
request edge to an assignment edge does not result in the formation of a cycle in the resource-
allocation graph. If no cycle exists, then the allocation of the resource will leave the system in a
safe state. If a cycle is found, then the allocation would put the system in an unsafe state, and
process Pi will have to wait for its request to be satisfied.

Fig: Resource-allocation graph for deadlock avoidance; Fig: An unsafe state in a resource-allocation graph

Banker's Algorithm
The Banker's algorithm is applicable to a resource-allocation system with multiple
instances of each resource type, though it is less efficient than the resource-allocation-graph scheme. When
a new process enters the system, it must declare the maximum number of instances of each
resource type that it may need. This number may not exceed the total number of resources in the
system. When a user requests a set of resources, the system must determine whether the allocation
of these resources will leave the system in a safe state. If it will, the resources are allocated;
otherwise, the process must wait until some other process releases enough resources.
Several data structures must be maintained to implement the banker's algorithm. Let n be
the number of processes in the system and m be the number of resource types. We need the
following data structures:
• Available: A vector of length m indicates the number of available resources of each type. If
Available[j] equals k, there are k instances of resource type Rj available.
• Max: An n x m matrix defines the maximum demand of each process. If Max[i][j] equals k, then
process Pi may request at most k instances of resource type Rj.
• Allocation: An n x m matrix defines the number of resources of each type currently allocated to
each process. If Allocation[i][j] equals k, then process Pi is currently allocated k instances of
resource type Rj.
• Need: An n x m matrix indicates the remaining resource need of each process. If Need[i][j]
equals k, then process Pi may need k more instances of resource type Rj to complete its task. Note
that Need[i][j] equals Max[i][j]- Allocation[i][j].

Q 7.b) Explain briefly:- (8)


i)Bankers Algorithm
ii)Safety Algorithm
Ans:

The Banker's algorithm is applicable to a resource-allocation system with multiple
instances of each resource type, though it is less efficient than the resource-allocation-graph scheme. When
a new process enters the system, it must declare the maximum number of instances of each
resource type that it may need. This number may not exceed the total number of resources in the
system. When a user requests a set of resources, the system must determine whether the allocation
of these resources will leave the system in a safe state. If it will, the resources are allocated;
otherwise, the process must wait until some other process releases enough resources.
Several data structures must be maintained to implement the banker's algorithm. Let n be
the number of processes in the system and m be the number of resource types. We need the
following data structures:
• Available: A vector of length m indicates the number of available resources of each type. If
Available[j] equals k, there are k instances of resource type Rj available.
• Max: An n x m matrix defines the maximum demand of each process. If Max[i][j] equals k, then
process Pi may request at most k instances of resource type Rj.
• Allocation: An n x m matrix defines the number of resources of each type currently allocated to
each process. If Allocation[i][j] equals k, then process Pi is currently allocated k instances of
resource type Rj.
• Need: An n x m matrix indicates the remaining resource need of each process. If Need[i][j]
equals k, then process Pi may need k more instances of resource type Rj to complete its task. Note
that Need[i][j] equals Max[i][j]- Allocation[i][j].

i) Safety Algorithm:
This algorithm determines whether or not a system is in a safe state. It can be described
as follows:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize
Work = Available and Finish[i] = false for i = 0, 1, ..., n-1.
2. Find an i such that both
a. Finish[i] == false
b. Needi <= Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
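A minimal C transcription of the safety algorithm follows, using the snapshot from the example below. It reports one safe sequence if the state is safe; note that a given state may admit more than one safe sequence.

#include <stdio.h>

#define N 5   /* number of processes */
#define M 3   /* number of resource types */

/* Returns 1 and fills seq[] with a safe sequence if the state is safe. */
int is_safe(const int avail[M], int max[N][M], int alloc[N][M], int seq[N])
{
    int work[M], finish[N] = {0}, count = 0;
    for (int j = 0; j < M; j++)
        work[j] = avail[j];                       /* step 1 */
    for (int pass = 0; pass < N; pass++) {
        for (int i = 0; i < N; i++) {             /* step 2 */
            if (finish[i]) continue;
            int ok = 1;
            for (int j = 0; j < M; j++)
                if (max[i][j] - alloc[i][j] > work[j]) { ok = 0; break; }
            if (ok) {                             /* step 3 */
                for (int j = 0; j < M; j++) work[j] += alloc[i][j];
                finish[i] = 1;
                seq[count++] = i;
            }
        }
    }
    return count == N;                            /* step 4 */
}

int main(void)
{
    int avail[M]    = {3, 3, 2};
    int max[N][M]   = {{7,5,3}, {3,2,2}, {9,0,2}, {2,2,2}, {4,3,3}};
    int alloc[N][M] = {{0,1,0}, {2,0,0}, {3,0,2}, {2,1,1}, {0,0,2}};
    int seq[N];
    if (is_safe(avail, max, alloc, seq)) {
        printf("safe sequence:");
        for (int i = 0; i < N; i++) printf(" P%d", seq[i]);
        printf("\n");                 /* prints: P1 P3 P4 P0 P2 */
    }
    return 0;
}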

ii) Resource-Request Algorithm


This algorithm determines whether requests can be safely granted. Let Requesti be the request
vector for process Pi. If Requesti[j] == k, then process Pi wants k instances of resource type Rj.
When a request for resources is made by process Pi, the following actions are taken:
1. If Requesti <= Needi, go to step 2. Otherwise, raise an error condition, since the process has
exceeded its maximum claim.
2. If Requesti <= Available, go to step 3. Otherwise, Pi must wait, since the resources are not
available.
3. Have the system pretend to have allocated the requested resources to process Pi by modifying
the state as follows:
Available = Available - Requesti
Allocationi = Allocationi + Requesti
Needi = Needi - Requesti
If the resulting resource-allocation state is safe, the transaction is completed, and process Pi is
allocated its resources. However, if the new state is unsafe, then Pi must wait for Requesti and the
old resource-allocation state is restored.
Example:
Consider the following snapshot of a system with five processes P0 through P4 and three
resource types A, B and C:

            Allocation    Max        Available
            A B C         A B C      A B C
     P0     0 1 0         7 5 3      3 3 2
     P1     2 0 0         3 2 2
     P2     3 0 2         9 0 2
     P3     2 1 1         2 2 2
     P4     0 0 2         4 3 3

The content of the matrix Need is defined to be Max - Allocation and is as follows:

            Need
            A B C
     P0     7 4 3
     P1     1 2 2
     P2     6 0 0
     P3     0 1 1
     P4     4 3 1

By using the safety algorithm we can conclude that the system is currently in a safe state, with
the safe sequence <P1, P3, P4, P2, P0>. Suppose now that process P1 requests one additional instance
of resource type A and two instances of resource type C, so Request1 = (1,0,2). To decide whether
this request can be immediately granted, we first check that Request1 <= Available, that is,
(1,0,2) <= (3,3,2), which is true. By using the resource-request algorithm this request is fulfilled,
and we arrive at the following new state:

            Allocation    Need       Available
            A B C         A B C      A B C
     P0     0 1 0         7 4 3      2 3 0
     P1     3 0 2         0 2 0
     P2     3 0 2         6 0 0
     P3     2 1 1         0 1 1
     P4     0 0 2         4 3 1

Again applying the safety algorithm to check whether the new system state is safe, we
get the safe sequence <P1, P3, P4, P0, P2>. Thus the request can be granted immediately.
Q 8. a) Explain the concept of semaphore? Give solution for reader’s/ writer’s problem
using semaphores. (8)
Ans:

Semaphore
A semaphore S is an integer variable that, apart from initialization, is accessed only
through two standard atomic operations: wait() and signal(). The wait() operation was originally
termed P; signal() was originally called V.
The definition of wait() is as follows:
wait(S) {
    while (S <= 0)
        ;   // no-op: busy-wait until S becomes positive
    S--;
}

The definition of signal() is as follows:


signal(S) {
S++;
}
All the modifications to the integer value of the semaphore in the wait() and signal()
operations must be executed indivisibly. That is, when one process modifies the semaphore value,
no other process can simultaneously modify that same semaphore value. In addition, in the case of
wait(S), the testing of the integer value of S (S <= 0), and its possible modification (S--), must also
be executed without interruption.
Operating systems often distinguish between counting and binary semaphores. The value
of a counting semaphore can range over an unrestricted domain. The value of a binary semaphore
can range only between 0 and 1. On some systems, binary semaphores are known as mutex locks,
as they are locks that provide mutual exclusion. We can use binary semaphores to deal with the
critical-section problem for multiple processes. Counting semaphores can be used to control
access to a given resource consisting of a finite number of instances.

Reader’s/ Writer’s Problem:


Given a universe of readers that read a common data structure and a universe of writers
that modify the same common data structure, devise a synchronization protocol among the readers
and writers that ensures consistency of the common data while maintaining as high a degree of
concurrency as possible.
Solution for Reader’s/ Writer’s Problem:
The reader processes share the following data structures:
semaphore mutex, wrt;
int readcount;
The semaphores mutex and wrt are initialized to 1; readcount is initialized to 0. The
semaphore wrt is common to both reader and writer processes. The mutex semaphore is used to
ensure mutual exclusion when the variable readcount is updated. The readcount variable keeps
track of how many processes are currently reading the object. The semaphore wrt functions as a
mutual-exclusion semaphore for the writers. It is also used by the first or last reader that enters or
exits the critical section. It is not used by readers who enter or exit while other readers are in their
critical sections.
Reader’s Process:
do
{
    wait(mutex);            // protect the update of readcount
    readcount++;
    if (readcount == 1)     // the first reader locks out writers
        wait(wrt);
    signal(mutex);

    /* reading is performed */

    wait(mutex);
    readcount--;
    if (readcount == 0)     // the last reader lets writers in
        signal(wrt);
    signal(mutex);
} while (1);

Writer’s Process:
do
{
    wait(wrt);      // exclusive access for the writer

    /* writing is performed */

    signal(wrt);
} while (1);

Q 8. b) Explain the solution of producer- consumer problem with bounded buffer using
semaphore. (6)
Ans:

The producer-consumer problem can be stated as: given a set of cooperating processes, some
of which produce data items to be consumed by others, with possible disparity between
consumption & production rates. Devise a synchronization protocol that allows both producers
and consumers to operate concurrently at their respective service rates in such a way that produced
items are consumed in the exact order of production.
To allow producer and consumer to operate concurrently, a pool of buffer is created that is
filled by the producer and emptied by consumer. Producer produces in one buffer and consumer
consumes from another buffer. The process should be synchronized in such a way that consumer
should not consume the item that the producer has not produced.
At any particular time, the shared global buffer may be empty, partially filled or full of
produced items ready for consumption. A producer may run in either of the two former cases, but
when buffer is full the producer must be kept waiting. On the other hand when buffer is empty,
consumer must wait.
The solution for the producer is to either go to sleep or discard data if the buffer is full. The
next time the consumer removes an item from the buffer, it notifies the producer, who starts to fill
the buffer again. In the same way, the consumer can go to sleep if it finds the buffer to be empty.
The next time the producer puts data into the buffer, it wakes up the sleeping consumer. The
solution can be reached by means of inter-process communication, typically using semaphores.
The example below shows a general solution to the producer consumer problem using
semaphores. We assume that the pool consists of n buffers, each capable of holding one item. The
mutex semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to the
value 1. The empty and full semaphores count the number of empty and full buffers. The
semaphore empty is initialized to the value n; the semaphore full is initialized to the value 0.The
code for the producer and consumer process is shown below. We can interpret this code as the
producer producing full buffers for the consumer or as the consumer producing empty buffers for
the producer.
Shared data
semaphore full, empty, mutex;
Initially: full = 0, empty = n, mutex = 1

Producer’s Process:
do {

    /* produce an item in nextp */

    wait(empty);    // wait for an empty buffer
    wait(mutex);    // enter the critical section

    /* add nextp to buffer */

    signal(mutex);
    signal(full);   // one more full buffer
} while (1);

Consumer’s Process:
do {
    wait(full);     // wait for a full buffer
    wait(mutex);    // enter the critical section

    /* remove an item from buffer */

    signal(mutex);
    signal(empty);  // one more empty buffer

    /* consume the item */

} while (1);

Q9. a) Explain access matrix with copy, owner and control type of operation. (7)
Ans:
The model of protection can be viewed abstractly as a matrix, called an access matrix. The
rows of the access matrix represent domains, and the columns represent objects. Each entry in the
matrix consists of a set of access rights. Because the column defines objects explicitly, we can
omit the object name from the access right. The entry access(i,j) defines the set of operations that a
process executing in domain Di can invoke on object Oj.
To illustrate these concepts, we consider the access matrix shown in Figure below. There
are four domains and four objects—three files (F1, F2, F3) and one laser printer. A process
executing in domain D1 can read files F1 and F3. A process executing in domain D4 has the same
privileges as one executing in domain D1, but in addition it can also write onto files F1 and F3.
Note that the laser printer can be accessed only by a process executing in domain D2.

Fig: Access Matrix

Allowing controlled change in the contents of the access-matrix entries requires three
additional operations: copy, owner, and control .
The ability to copy an access right from one domain (or row) of the access matrix to
another is denoted by an asterisk (*) appended to the access right. The copy right allows the
copying of the access right only within the column for which the right is defined. For example, in
figure (a) below a process executing in domain D2 can copy the read operation into any entry
associated with file F2. Hence, the access matrix of figure (a) can be modified to the access matrix
shown in figure (b).

Fig: Access Matrix with copy rights.


This scheme has two variants:
1. A right is copied from access(i, j) to access(k,j); it is then removed from access(i,j). This action
is a transfer of a right, rather than a copy.
2. Propagation of the copy right may be limited. That is, when the right R* is copied from
access(i,j) to access(k,j), only the right R (not R*) is created. A process executing in domain Dk
cannot further copy the right R.
A system may select only one of these three copy rights, or it may provide all three by
identifying them as separate rights: copy, transfer, and limited copy.
A mechanism to allow addition of new rights and removal of some rights is also needed.
The owner right controls these operations. If access(i,j) includes the owner right, then a process
executing in domain Di, can add and remove any right in any entry in column j. For example, in
fig (a) below, domain D1 is the owner of F1 and thus can add and delete any valid right in column
F1. Thus, the access matrix of figure (a) can be modified to the access matrix shown in figure (b).
Fig: Access Matrix with owner rights.
The copy and owner rights allow a process to change the entries in a column. A
mechanism is also needed to change the entries in a row. The control right is applicable only to
domain objects. If access(i,j) includes the control right, then a process executing in domain Di can
remove any access right from row j. For example, suppose that, in figure (a) below, we include the
control right in access(D2, D4). Then, a process executing in domain D2 could modify domain
D4, as shown in figure (b).

Fig (a) Access Matrix with domain as object Fig: Modified Access Matrix

Q 9. b) Differentiate between protection & security. (6)


Ans:
Protection:
- Protection is one means to security; it involves the active agency of someone who provides that security.
- Protection refers to the external measures being taken.
- General example: X-ray machines and metal detectors at important public places are means of ensuring the safety of the establishment and preventing loss of property and of valuable human lives.
- Protection is not security itself.
- Protection is a tactic.
- The word protection often refers to predictable or stable behaviour in a situation.
- Protection consists of exercising rules of safety so as to stay safe and not be harmed.
- Data protection is suitably defined as the appropriate use of data: when companies and merchants use data or information that is provided or entrusted to them, the data should be used according to the agreed purposes.

Security:
- Security is safety: freedom from unwanted external interference.
- Security lies within the boundary (it is internal).
- General example: it is common these days for offices and other governmental buildings to have foolproof security measures to counter the threat of terrorism.
- Security is not protection; it is subjective.
- Security is a strategy.
- Security tends to be used in the sense of either longevity or resistance to change by outside agents.
- Security is having a feeling of safety, and taking precautions to keep something safe.
- Data security is commonly referred to as the confidentiality, availability and integrity of data; in other words, all of the practices and processes in place to ensure that data is not used or accessed by unauthorized individuals or parties.

Q 10. Write short notes on (any three):- (13)


i)Language based protection
ii)Viruses and worms
iii) Threats to computer security
iv)Cryptography
Ans:

i) Language-Based Protection


Protection in existing computer systems is usually achieved through an operating-system
kernel, which acts as a security agent to inspect and validate each attempt to access a protected
resource.
As operating systems have become more complex, and particularly as they have attempted
to provide higher-level user interfaces, the goals of protection have become much more refined.
Protection systems are now concerned not only with the identity of a resource to which access is
attempted but also with the functional nature of that access.
Policies for resource use may also vary, depending on the application, and they may be
subject to change over time.

a) Compiler-Based Enforcement
When protection is declared along with data typing, the designer of each subsystem can
specify its requirements for protection, as well as its need for use of other resources in a system.
Such a specification should be given directly as a program is composed, and in the language in
which the program itself is stated.
This approach has several significant advantages:
1. Protection needs are simply declared, rather than programmed as a sequence of calls on
procedures of an operating system.
2. Protection requirements can be stated independently of the facilities provided by a
particular operating system.
3. The means for enforcement need not be provided by the designer of a subsystem.
4. A declarative notation is natural because access privileges are closely related to the
linguistic concept of data type.

A variety of techniques can be provided by a programming-language implementation to


enforce protection, but any of these must depend on some degree of support from an underlying
machine and its operating system. A language implementation might provide standard protected
procedures to interpret software capabilities that would realize the protection policies that could be
specified in the language. The security provided by this form of protection rests on the assumption
that the code generated by the compiler will not be modified prior to or during its execution.

ii) Viruses and worms

Virus:
- A virus is program code that attaches itself to an application program; when the application program runs, the virus runs along with it.
- It inserts itself into a file or executable program.
- It has to rely on users transferring infected files or programs to other computer systems.
- It deletes or modifies files; sometimes a virus also changes the location of files.
- A virus spreads more slowly than a worm.

Worm:
- A worm is code that replicates itself in order to consume resources and bring the system down.
- It exploits a weakness in an application or operating system and replicates itself.
- It can use a network to replicate itself to other computer systems without user intervention.
- Worms usually only monopolize CPU time and memory.
- A worm spreads faster than a virus.

iii) Threats to computer security

A threat is the potential for a security violation, such as the discovery of a vulnerability.
Following are the various threats to computer security.
• Breach of confidentiality: This type of violation involves unauthorized reading of data (or theft
of information). Typically, a breach of confidentiality is the goal of an intruder. Capturing secret
data from a system or a data stream, such as credit-card information or identity information for
identity theft, can result directly in money for the intruder.
• Breach of integrity: This violation involves unauthorized modification of data. Such attacks
can, for example, result in passing of liability to an innocent party or modification of the source
code of an important commercial application.
• Breach of availability: This violation involves unauthorized destruction of data. Some crackers
would rather wreak havoc and gain status or bragging rights than gain financially. Web-site
defacement is a common example of this type of security breach.
• Theft of service: This violation involves unauthorized use of resources. For example, an intruder
(or intrusion program) may install a daemon on a system that acts as a file server.
• Denial of service: This violation involves preventing legitimate use of the system. Denial-of-
service, or DOS, attacks are sometimes accidental.

iv) Cryptography
There are many defenses against computer attacks, running the gamut from methodology
to technology. The broadest tool available to system designers and users is cryptography.
Cryptography is the art of protecting information by transforming it (encrypting it) into an
unreadable format, called cipher text. Only those who possess a secret key can decipher
(or decrypt) the message into plain text. Encrypted messages can sometimes be broken by
cryptanalysis, also called codebreaking, although modern cryptography techniques are virtually
unbreakable.
Following figure shows the basic model of cryptography:

Fig: Basic model of cryptography


Plaintext is the raw information to be protected during transmission from sender to
receiver; it is also called the message. The sender knows the plaintext, and at the end of the
transmission process it should also be known to the receiver; it should be hidden only from an
interceptor.
Cipher text is the scrambled version that results after applying the encryption algorithm; it is
also referred to as a cryptogram.
An encryption algorithm is the set of rules that determines, for any given plaintext and any
valid encryption key, a unique cipher text.
A decryption algorithm is the set of rules that determines, for any given cipher text and any
valid decryption key, a unique plaintext.
Encryption key is used by the sender for converting plaintext to cipher text and decryption key is
used by receiver to convert cipher text into plain text. These keys are to be kept secret by sender
and receiver respectively.
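As a toy illustration of these roles (not a secure algorithm), the C sketch below uses XOR with a repeating key; encryption and decryption are then the same operation, with the same secret key playing both parts.

#include <stdio.h>
#include <string.h>

/* XOR each message byte with the key, repeating the key as needed.
 * Applying the function twice with the same key restores the plaintext. */
void xor_cipher(char *msg, size_t len, const char *key, size_t klen)
{
    for (size_t i = 0; i < len; i++)
        msg[i] ^= key[i % klen];
}

int main(void)
{
    char msg[] = "attack at dawn";     /* plaintext */
    size_t len = sizeof msg - 1;       /* length excluding the terminator */
    const char *key = "secret";        /* shared secret key */

    xor_cipher(msg, len, key, strlen(key));   /* msg is now cipher text */
    xor_cipher(msg, len, key, strlen(key));   /* msg is plaintext again */
    printf("%s\n", msg);                      /* prints: attack at dawn */
    return 0;
}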
TULSIRAMJI GAIKWAD- PATIL College of Engineering & Technology
Department of Information Technology
Fourth Semester
B.E. Examination Solution Set
Subject: Operating System

Section - A
1. (a) What is an interrupt? Explain different types of interrupts with their significance to
operating system. (06)
Ans:
An interrupt is an exception: a change in, or interruption of, the normal flow of program
execution. An interrupt is essentially a hardware-generated function call, and interrupts are caused
by both internal and external sources. An interrupt causes normal program execution to halt and an
interrupt service routine (ISR) to be executed; at the conclusion of the ISR, normal program
execution resumes at the point where it was interrupted. More precisely, an interrupt is an event
external to the currently executing process that causes a change in the normal flow of instruction
execution and transfers control to an interrupt service routine. Interrupts provide an efficient way
to handle unanticipated events.
Following are the different types of interrupts:
Hardware Interrupt:
A hardware interrupt is an electronic alerting signal sent to the processor from an external
device, either a part of the computer itself such as a disk controller or an external peripheral. For
example, pressing a key on the keyboard or moving the mouse triggers hardware interrupts that
cause the processor to read the keystroke or mouse position. Hardware interrupts are
asynchronous and can occur in the middle of instruction execution, requiring additional care in
programming. The act of initiating a hardware interrupt is referred to as an interrupt request (IRQ).
Software Interrupt:
A software interrupt is caused either by an exceptional condition in the processor itself, or
a special instruction in the instruction set which causes an interrupt when it is executed. The
former is often called a trap or exception and is used for errors or events occurring during program
execution that are exceptional enough that they cannot be handled within the program itself. For
example, if the processor's arithmetic logic unit is commanded to divide a number by zero, this
impossible demand will cause a divide-by-zero exception, perhaps causing the computer to
abandon the calculation or display an error message. Software interrupt instructions function
similarly to subroutine calls and are used for a variety of purposes, such as to request services
from low level system software such as device drivers. For example, computers often use software
interrupt instructions to communicate with the disk controller to request data be read or written to
the disk.
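As a user-level analogue of this mechanism, the following hedged C sketch uses a POSIX signal handler: normal flow is suspended when the signal arrives, the handler (playing the role of an ISR) runs, and execution then resumes where it left off. The details are illustrative, not how kernel ISRs are written.

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

volatile sig_atomic_t got_interrupt = 0;

/* The handler plays the role of an ISR: it runs when the "interrupt"
   (here, SIGINT) arrives, then normal execution resumes. */
void handler(int signo) {
    (void)signo;
    got_interrupt = 1;
}

int main(void) {
    signal(SIGINT, handler);        /* register the service routine */
    while (!got_interrupt)
        pause();                    /* wait; Ctrl-C raises SIGINT   */
    printf("interrupt serviced, normal flow resumed\n");
    return 0;
}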

1. (b) List & explain various services provided by operating system. (4)
Ans: Following are the various services provided by operating system:
i) Program Execution
ii) I/O Operation
iii) File system manipulation
iv) Communication
v) Error handling
vi) Resource Management
vii) Protection

1. Program execution
Operating system handles many kinds of activities from user programs to system programs
like printer spooler, name servers, file server etc. Each of these activities is encapsulated as a
process.
A process includes the complete execution context (code to execute, data to manipulate,
registers, OS resources in use). Following are the major activities of an operating system with
respect to program management.
 Loads a program into memory.
 Executes the program.
 Handles program's execution.
 Provides a mechanism for process synchronization.
 Provides a mechanism for process communication.
 Provides a mechanism for deadlock handling.

2. I/O Operation
Operating System manages the communication between user and device drivers.
Following are the major activities of an operating system with respect to I/O Operation.
 I/O operation means read or write operation with any file or any specific I/O device.
 Program may require any I/O device while running.
 Operating system provides the access to the required I/O device when required.

3. File system manipulation


A file represents a collection of related information. Computer can store files on the disk
(secondary storage), for long term storage purpose.
A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories. Following are the major activities of an
operating system with respect to file management.
 Program needs to read a file or write a file.
 The operating system gives the permission to the program for operation on file.
 Permission varies from read-only, read-write, denied and so on.
 Operating System provides an interface to the user to create/delete files.
 Operating System provides an interface to the user to create/delete directories.
 Operating System provides an interface to create the backup of file system.

4. Communication
In case of distributed systems which are a collection of processors that do not share
memory, peripheral devices, or a clock, operating system manages communications between
processes. Multiple processes communicate with one another through communication lines in the network.
OS handles routing and connection strategies, and the problems of contention and security.
Following are the major activities of an operating system with respect to communication.
 Two processes often require data to be transferred between them.
 Both processes can be on the same computer or on different computers connected
through a computer network.
 Communication may be implemented by two methods either by Shared Memory or by
Message Passing.
5. Error handling
Error can occur anytime and anywhere. Error may occur in CPU, in I/O devices or in the
memory hardware. Following are the major activities of an operating system with respect to error
handling.
 OS constantly remains aware of possible errors.
 OS takes the appropriate action to ensure correct and consistent computing.

6. Resource Management
In case of multi-user or multi-tasking environment, resources such as main memory, CPU
cycles and files storage are to be allocated to each user or job. Following are the major activities of
an operating system with respect to resource management.
 OS manages all kind of resources using schedulers.
 CPU scheduling algorithms are used for better utilization of CPU.

7. Protection
Protection refers to mechanism or a way to control the access of programs, processes, or
users to the resources defined by a computer system. Following are the major activities of an
operating system with respect to protection.
 OS ensures that all access to system resources is controlled.
 OS ensures that external I/O devices are protected from invalid access attempts.
 OS provides authentication feature for each user by means of a password.

1. (c) Different methods available for free space management. (4)


Ans:
An important function of file is to manage space on the secondary storage. This includes
keeping track of both the disk blocks allocated to files and free blocks available for allocation.
Following are various approaches to manage disk space:
Bit Vector
Free-space list is implemented as a bit map or bit vector. Each block is represented by 1
bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0.
For example consider a disk where blocks 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, and 27
are free, and the rest of the blocks are allocated. The free-space bit map would be
001111001111110001100000011100000 …..
The main advantage of this approach is that it is relatively simple and efficient to
find the first free block or n consecutive free blocks on the disk.
The calculation of the block number is
(number of bits per word) x (number of 0-value words) + offset of first 1 bit
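A small C sketch of this computation follows, assuming the convention above (bit = 1 means the block is free); the word size and names are illustrative.

#include <limits.h>

#define WORD_BITS (sizeof(unsigned) * CHAR_BIT)

/* Return the number of the first free block, or -1 if none is free. */
int first_free_block(const unsigned *bitmap, int nwords) {
    for (int w = 0; w < nwords; w++) {
        if (bitmap[w] == 0)                 /* all-0 word: fully allocated */
            continue;
        for (unsigned b = 0; b < WORD_BITS; b++)
            if (bitmap[w] & (1u << b))      /* offset of first 1 bit */
                return (int)(w * WORD_BITS + b);
    }
    return -1;
}

This mirrors the block-number formula above: the count of leading 0-value words times the bits per word, plus the offset of the first 1 bit.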

Linked List
Another approach is to link together all the free disk blocks, keeping a pointer to
the first free block in a special location on the disk and caching it in memory. This first block
contains a pointer to the next free disk block, and so on. Block 2 would contain a pointer to
block 3, which would point to block 4, which would point to block 5, which would point to block
8, and so on. Usually, the operating system simply needs a free block so that it can
allocate that block to a file, so the first block in the free list is used.
Fig: Linked Free Space on a Disk
Grouping
A modification of the free-list approach is to store the addresses of n free blocks in
the first free block. The first n-1 of these blocks are actually free; the last block contains
the addresses of another n free blocks. The importance of this implementation is that the
addresses of a large number of free blocks can be found quickly, unlike in the standard
linked-list approach.

Counting
Several contiguous blocks may be allocated or freed simultaneously, particularly
when space is allocated with the contiguous allocation algorithm or through clustering. A list
of n free disk addresses, we can keep the address of the first free block and the number
n of free contiguous blocks that follow the first block. Each entry in the free-space list then
consists of a disk address and a count. Although each entry requires more space than would a
simple disk address, the overall list will be shorter, as long as count is generally greater than 1.

2. (a) What do you mean by PCB? Also explain process state & process state transition
diagram in detail. (6)
Ans:
Process Control Block: Each process is represented by a process control block (PCB). The PCB is a
data structure used by the operating system to group together all the information it needs
about a particular process. The figure below shows the process control block.

Process State

Process Number

Program Counter

CPU Registers

Memory Allocation

Event Information

List of Open Files

.....
Fig: Process Control Block
1. Process State : Process state may be new, ready, running, waiting and so on.
2. Program Counter : It indicates the address of the next instruction to be executed for
this process.
3. Event information : For a process in the blocked state this field contains
information concerning the event for which the process is waiting.
4. CPU registers : These include general-purpose registers, stack pointers, index registers,
accumulators, and so on. The number and type of registers depend entirely on the
computer architecture.
5. Memory Management Information : This information may include the value of base and
limit register. This information is useful for de-allocating the memory when the
process terminates.
6. Accounting Information : This information includes the amount of CPU and real time
used, time limits, job or process numbers, account numbers etc.
7. I/O Status Information: This information includes the list of I/O devices allocated to the
process, a list of open files and so on.
Process control block also includes the information about CPU scheduling, I/O
resource management, file management information, priority and so on. The PCB simply serves as
the repository for any information that may vary from process to process.
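The following C sketch shows how such a PCB might be declared; the field names and sizes are illustrative assumptions, not any particular kernel's layout.

/* A minimal sketch of a PCB as a C structure. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

struct pcb {
    int            pid;              /* process number                 */
    proc_state_t   state;            /* current process state          */
    unsigned long  program_counter;  /* next instruction to execute    */
    unsigned long  registers[16];    /* saved CPU registers            */
    unsigned long  base, limit;      /* memory-management information  */
    int            open_files[32];   /* I/O status: open file handles  */
    unsigned long  cpu_time_used;    /* accounting information         */
    struct pcb    *next;             /* link for the scheduler queues  */
};

The next pointer is what lets the operating system move a PCB between the ready, waiting, and other state lists described below.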

Process state: Process state is defined as the current activity of the process.

Process State Transition Diagram: When process executes, it changes state. Process state is
defined as the current activity of the process. Fig. 3.1 shows the general form of the
process state transition diagram. Process state contains five states. Each process is in one of
the states. The states are listed below.
1. New
2. Ready
3. Running
4. Waiting
5. Terminated (exit)

Fig: Diagram for Process State

1. New: A process that has just been created.


2. Ready: Ready processes are waiting to have the processor allocated to them by the operating
system so that they can run.
3. Running: The process that is currently being executed. A running process possesses all
the resources needed for its execution, including the processor.
4. Waiting: A process that cannot execute until some event occurs such as the completion of an
I/O operation. The running process may become suspended by invoking an I/O module.
5. Terminated: A process that has been released from the pool of executable processes by
the operating system.
Whenever a process changes state, the operating system reacts by placing the
process's PCB in the list that corresponds to its new state. Only one process can be running
on any processor at any instant, while many processes may be in the ready and waiting states.

2. (b) Compare SCAN and C- LOOK disk scheduling algorithms with example. (6)
Ans:
SCAN scheduling algorithm:
The SCAN algorithm has the head start at one end of the disk (say track 0) and move towards the
highest-numbered track, servicing all requests for a track as it passes that track. The service
direction is then reversed and the scan proceeds in the opposite direction, again picking up
all requests in order.
The SCAN algorithm is guaranteed to service every request in one complete pass through the
disk. Because the head sweeps back and forth like an elevator in a building, SCAN is also
called the elevator algorithm.
Example:
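For illustration, assume a disk with 200 tracks (0-199), a request queue of 98, 183, 37, 122, 14, 124, 65, 67, and the head initially at track 53 moving toward track 0 (the queue and head position are assumed values). SCAN services 37 and 14, continues to track 0, reverses direction, and then services 65, 67, 98, 122, 124, and 183. The total head movement is (53 - 0) + (183 - 0) = 236 tracks.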

C LOOK Scheduling Algorithm


The C-LOOK policy restricts scanning to one direction only and moves the arm only as far as
the final request in that direction. When the last requested track in that direction has been
visited, the arm returns immediately to the first requested track at the opposite end of the disk,
without servicing requests on the return trip, and the scan begins again.
Example:
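For illustration, take the same assumed queue (98, 183, 37, 122, 14, 124, 65, 67) with the head at track 53 moving toward higher-numbered tracks. C-LOOK services 65, 67, 98, 122, 124, and 183, then jumps back to the lowest pending request and services 14 and 37. The total head movement is (183 - 53) + (183 - 14) + (37 - 14) = 130 + 169 + 23 = 322 tracks.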
3. (a) Explain paged segmentation system with neat diagram. (6)
Ans:
In Pure segmentation scheme there are various problems as follows:
1. If segments are very large it will be very inconvenient to keep in main memory.
2. If segments are very large and there is no paging then there can be possibilities of external
fragmentation.
3. Also search time to allocate a segment using best fit or first fit will be more.

This leads to the idea of paging them and bringing only that pages in main memory which
are necessary. The paged segmentation scheme is as follows:
1. A virtual address becomes a segment number, a page within that segment, and an offset
within the page.
2. The segment number indexes into the segment table which yields the base address of the
page table for that segment.
3. The remainder of the address (page number and offset) is checked against the limit of the
segment.
4. The page number is used to index the page table. The entry in the page table is the frame
number.
5. The frame number combined with the offset gives the physical address, which is used to
reference the data of interest in main memory.

Fig: Paged Segmentation
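The numbered steps above can be condensed into a short C sketch of the translation path; the structures and example values are assumptions for illustration only.

#include <stdio.h>
#include <stdlib.h>

struct segment {
    unsigned limit;        /* segment length, in pages  */
    unsigned *page_table;  /* maps page number -> frame */
};

/* Translate (segment, page, offset) to a physical address. */
unsigned long translate(const struct segment *seg_table, unsigned nsegs,
                        unsigned seg, unsigned page, unsigned offset,
                        unsigned page_size) {
    if (seg >= nsegs || page >= seg_table[seg].limit) {
        fprintf(stderr, "addressing error: trap to OS\n");
        exit(1);
    }
    unsigned frame = seg_table[seg].page_table[page];   /* index the page table   */
    return (unsigned long)frame * page_size + offset;   /* frame base plus offset */
}

int main(void) {
    unsigned pt0[] = { 7, 3, 9 };                /* frames for segment 0's pages */
    struct segment segs[] = { { 3, pt0 } };
    /* page 1 of segment 0 lives in frame 3: address = 3*4096 + 44 = 12332 */
    printf("%lu\n", translate(segs, 1, 0, 1, 44, 4096));
    return 0;
}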

3. (b) Differentiate between preemptive and non- preemptive scheduling. (4)


Ans:

Preemptive Scheduling:
1. A running process can be interrupted and the CPU handed over to another process with a
higher priority.
2. The operating system retains control over the states of processes.
3. Scheduling is prioritized: the highest-priority ready process is always the one currently
using the CPU.
4. A process is forcibly moved out of the running state when a higher-priority process arrives.
5. Examples: Round Robin scheduling, preemptive SJF.

Non-Preemptive Scheduling:
1. A process is not interrupted; it runs until it voluntarily gives up the CPU.
2. Once scheduled, the running process has full control of the CPU.
3. When a process enters the running state, it is not removed from the CPU until it finishes
its service time.
4. The process in the running state cannot be forced to leave the CPU until it completes.
5. Examples: FCFS, SJF.

3. (c) Define the following allocation algorithm. (4)


i) First Fit
ii) Best Fit
iii) Worst Fit
Ans:
First fit: Allocate the first hole that is big enough. Searching can start either at the beginning of
the set of holes or where the previous first-fit search ended. We can stop searching as soon as we
find a free hole that is large enough.

Best fit: Allocate the smallest hole that is big enough. We must search the entire list, unless the
list is ordered by size. This strategy produces the smallest leftover hole.

Worst fit: Allocate the largest hole. Again, we must search the entire list, unless it is sorted by
size. This strategy produces the largest leftover hole, which may be more useful than the smaller
leftover hole from a best-fit approach.
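A minimal C sketch of first fit over a linked list of free holes follows; the hole structure is an assumed simplification, and best fit or worst fit would differ only in scanning the entire list for the smallest or largest adequate hole.

#include <stddef.h>

struct hole {
    size_t start, size;          /* a free region of memory */
    struct hole *next;
};

/* First fit: take the first hole that is big enough.
   Returns the allocated start address, or (size_t)-1 on failure. */
size_t first_fit(struct hole *free_list, size_t request) {
    for (struct hole *h = free_list; h != NULL; h = h->next) {
        if (h->size >= request) {
            size_t addr = h->start;
            h->start += request;   /* shrink the hole in place */
            h->size  -= request;
            return addr;
        }
    }
    return (size_t)-1;             /* no hole large enough     */
}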

4. (a)Explain the multilevel feedback queue CPU scheduling algorithm in detail. (6)
Ans:
When the multilevel queue scheduling algorithm is used, processes are permanently
assigned to a queue when they enter the system. If there are separate queues for foreground and
background processes, processes do not move from one queue to the other, since processes do not
change their foreground or background nature. This setup has the advantage of low scheduling
overhead, but it is inflexible.
The multilevel feedback-queue scheduling algorithm, in contrast, allows a process to move
between queues. The idea is to separate processes according to the characteristics of their CPU
bursts. If a process uses too much CPU time, it will be moved to a lower-priority queue. This
scheme leaves I/O-bound and interactive processes in the higher-priority queues. In addition, a
process that waits too long in a lower-priority queue may be moved to a higher-priority queue.
This form of aging prevents starvation.
A process entering the ready queue is put in queue 0. A process in queue 0 is given a time
quantum of 8 milliseconds. If it does not finish within this time, it is moved to the tail of queue 1.
If queue 0 is empty, the process at the head of queue 1 is given a quantum of 16 milliseconds. If it
does not complete, it is pre-empted and is put into queue 2. Processes in queue 2 are run on an
FCFS basis but are run only when queues 0 and 1 are empty.
In general, a multilevel feedback-queue scheduler is defined by the following parameters:
o The number of queues.
o The scheduling algorithm for each queue.
o The method used to determine when to upgrade a process to a higher-priority
queue.
o The method used to determine when to demote a process to a lower-priority queue.
o The method used to determine which queue a process will enter when that process
needs service.

Fig: Multilevel feedback Queue
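The three-queue example above (quanta of 8 ms and 16 ms, then FCFS) can be traced with a small C sketch; the 30 ms burst is an assumed value for illustration.

#include <stdio.h>

#define NQUEUES 3
static const int quantum[NQUEUES] = { 8, 16, -1 };  /* -1: FCFS, run to completion */

int main(void) {
    int burst = 30;     /* assumed CPU burst of one process, in ms */
    int level = 0;      /* every process enters at queue 0         */

    while (burst > 0) {
        int q = quantum[level];
        int run = (q < 0 || burst < q) ? burst : q;
        printf("queue %d: run %d ms\n", level, run);
        burst -= run;
        if (burst > 0 && level < NQUEUES - 1)
            level++;    /* used its full quantum: demote one level */
    }
    return 0;           /* prints 8 ms in queue 0, 16 ms in queue 1, 6 ms in queue 2 */
}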

4. (b) What is virtual memory? Explain demand paging in detail. (7)


Ans:
Virtual Memory:
Virtual memory is a technique that allows the execution of processes that are not
completely in memory. Virtual memory abstracts main memory into an extremely large, uniform
array of storage, separating logical memory as viewed by the user from physical memory. This
technique frees programmers from the concerns of memory-storage limitations. Virtual memory
also allows processes to share files easily and to implement shared memory.

Demand Paging:
A demand-paging system is similar to a paging system with swapping where processes
reside in secondary memory. When we want to execute a process, we swap it into memory. When
a process is to be swapped in, the pager guesses which pages will be used before the process is
swapped out again. Instead of swapping in a whole process, the pager brings only those necessary
pages into memory. Thus, it avoids reading into memory pages that will not be used anyway,
decreasing the swap time and the amount of physical memory needed.
With the demand paging we need some form of hardware support to distinguish between
the pages that are in memory and the pages that are on the disk. The valid-invalid bit scheme can
be used for this purpose. When this bit is set to "valid" the associated page is both legal and in
memory. If the bit is set to "invalid," the page either is not valid or is valid but is currently on the
disk. The page-table entry for a page that is brought into memory is set as usual, but the page-table
entry for a page that is not currently in memory is either simply marked invalid or contains the
address of the page on disk. This situation is depicted in Figure below:

Fig: Page Table when some Pages are not in Memory


The hardware to support demand paging is the same as the hardware for paging and
swapping:
• Page table: This table has the ability to mark an entry invalid through a valid-invalid bit or
special value of protection bits.
• Secondary memory: This memory holds those pages that are not present in main memory.
A crucial requirement for demand paging is the need to be able to restart any instruction
after a page fault. If the page fault occurs on the instruction fetch, we can restart by-fetching the
instruction again. If a page fault occurs while we are fetching an operand, we must fetch and
decode the instruction again and then fetch the operand.
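The fault path just described can be summarized in a C sketch; the frame allocator and disk-read routine are stubs, and all names are assumptions for illustration.

#include <stdbool.h>
#include <stdio.h>

struct pte { unsigned frame; bool valid; };   /* page-table entry */

static unsigned next_free = 0;
static unsigned allocate_frame(void) { return next_free++; }   /* stub: no eviction */
static void read_page_from_disk(unsigned page, unsigned frame) {
    printf("disk read: page %u -> frame %u\n", page, frame);    /* stub for the I/O  */
}

/* Return the frame holding `page`, servicing a page fault if needed. */
unsigned access_page(struct pte *page_table, unsigned page) {
    if (!page_table[page].valid) {            /* invalid bit set: page fault (trap) */
        unsigned frame = allocate_frame();
        read_page_from_disk(page, frame);     /* bring the page in from disk        */
        page_table[page].frame = frame;
        page_table[page].valid = true;        /* now legal and in memory            */
        /* the faulting instruction is restarted after this point */
    }
    return page_table[page].frame;
}

int main(void) {
    struct pte table[4] = { {0} };
    access_page(table, 2);    /* faults once and loads the page */
    access_page(table, 2);    /* now a hit                      */
    return 0;
}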

5. (a) Write short note on sector queuing. (4)


Ans:
Sector queuing is an algorithm for scheduling fixed head devices. It is based on the
division of each track into a fixed number of blocks called sectors. The disk address in each
request specifies the track and sector. Since seek time is zero for fixed-head devices, the main
service time is rotational latency. Sector queuing is primarily used with fixed-head devices; if
there is more than one request for service within a particular track or cylinder, sector queuing
can be used to order the multiple requests within that cylinder.
Example: Assume the head is currently over sector 2 and the first request in the queue is for sector
12. To service this request, we wait until sector 12 revolves under the read/write heads. If there is
a request in the queue for sector 5, it could be serviced before the request for sector 12 without
causing the request for sector 12 to be delayed.
Sector queue defines a separate queue for each sector of the drum. When a request arrives
for sector i, it is placed in the queue for sector i.

Fig: Sector Queuing

5. (b) Explain working of Long- Term scheduler with the help of suitable diagram. (4)
Ans:
Long Term Scheduling
It is also called job scheduler. Long term scheduler determines which programs are
admitted to the system for processing. Job scheduler selects processes from the queue and
loads them into memory for execution, where they become available to the CPU scheduler. The
primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and
processor bound. It also controls the degree of multiprogramming. If the degree of
multiprogramming is stable, then the average rate of process creation must be equal to the
average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal; time-sharing
operating systems often have no long-term scheduler. It is the long-term scheduler that acts
when a process changes state from new to ready.

Fig: Queuing Diagram with Medium Term Scheduling

5. (c) Differentiate contiguous and index file allocation methods. (5)


Ans:
Contiguous Allocation:
1. Each file occupies a set of contiguous blocks on the disk.
2. The directory entry contains the file name, start address, and length of the file.
3. No index table is required.
4. File size cannot easily grow.
5. It is subject to the dynamic storage-allocation problem.
6. External fragmentation can occur.

Index Allocation:
1. Blocks belonging to a file may be scattered over the disk in a non-contiguous manner.
2. The directory entry contains the file name and the address of the index block.
3. An index table is required.
4. File size can grow.
5. Storage can be allocated dynamically.
6. External fragmentation does not occur.
Section – B
6. (a) What is the cause of thrashing? How does the system detect thrashing and what are the
methods available to minimize thrashing? (7)
Ans:
Thrashing:
Consider a process that does not have ''enough" frames. If the process does not have the
number of frames it needs to support pages in active use, it will quickly page-fault. At this point, it
must replace some page. As all its pages are in active use, it must replace a page that will be
needed again right away. Consequently, it quickly faults again, and again, and again, replacing
pages that it must bring back in immediately. This high paging activity is called "thrashing". A
process is thrashing if it is spending more time in paging than executing.

Fig: Thrashing
Different methods to minimize thrashing are as follows:
Working Set Model:
The working-set model is based on the assumption of locality. This model uses a
parameter, Δ, to define the working-set window. The idea is to examine the most recent Δ page
references. The set of pages in the most recent Δ page references is the working set, as in the figure
below. If a page is in active use, it will be in the working set. If it is no longer being used, it will
drop from the working set Δ time units after its last reference. Thus, the working set is an
approximation of the program's locality.
For example, given the sequence of memory references shown in Figure, if Δ = 10 memory
references, then the working set at time t1 is {1, 2, 5, 6, 7}. By time t2, the working set has
changed to {3, 4}.

Fig: Working Set Model

The accuracy of the working set depends on the selection of Δ. If Δ is too small, it will not
encompass the entire locality; if Δ is too large, it may overlap several localities. In the extreme, if
Δ is infinite, the working set is the set of pages touched during the process execution. Once Δ has
been selected, use of the working-set model is simple. The operating system monitors the working
set of each process and allocates to that working set enough frames to provide it with its working-
set size. If there are enough extra frames, another process can be initiated. If the sum of the
working-set sizes increases, exceeding the total number of available frames, the operating system
selects a process to suspend. The process's pages are written out (swapped), and its frames are
reallocated to other processes. The suspended process can be restarted later.
This working-set strategy prevents thrashing while keeping the degree of
multiprogramming as high as possible. Thus, it optimizes CPU utilization. The difficulty with the
working-set model is keeping track of the working set. The working-set window is a moving
window. At each memory reference, a new reference appears at one end and the oldest reference
drops off the other end. A page is in the working set if it is referenced anywhere in the working-set
window.
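Computing the working set from a reference string is straightforward; the C sketch below uses an assumed reference string chosen to reproduce the two working sets in the example above.

#include <stdio.h>
#include <stdbool.h>

#define MAX_PAGE 8

/* Print the working set: the distinct pages among the last `delta`
   references, ending at position t in the reference string. */
void working_set(const int *refs, int t, int delta) {
    bool present[MAX_PAGE] = { false };
    int start = (t - delta + 1 < 0) ? 0 : t - delta + 1;
    for (int i = start; i <= t; i++)
        present[refs[i]] = true;
    printf("W(t=%d) = {", t);
    for (int p = 0; p < MAX_PAGE; p++)
        if (present[p]) printf(" %d", p);
    printf(" }\n");
}

int main(void) {
    int refs[] = { 2,6,1,5,7,7,7,7,5,1,6,2,3,4,1,2,3,4,4,4,3,4,3,4,4,4 };
    working_set(refs, 9, 10);    /* prints W(t=9)  = { 1 2 5 6 7 } */
    working_set(refs, 25, 10);   /* prints W(t=25) = { 3 4 }       */
    return 0;
}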

Page-Fault Frequency:
The working-set model is successful, and knowledge of the working set can be useful for
pre-paging, but it seems a clumsy way to control thrashing. A strategy that uses the page-fault
frequency (PFF) takes a more direct approach.
The specific problem is how to prevent thrashing. Thrashing has a high page-fault rate.
Thus, we want to control the page-fault rate. When it is too high, we know that the process needs
more frames. Conversely, if the page-fault rate is too low, then the process may have too many
frames. We can establish upper and lower bounds on the desired page-fault rate. If the actual page-
fault rate exceeds the upper limit, we allocate the process another frame; if the page-fault rate falls
below the lower limit, we remove a frame from the process. Thus, we can directly measure and
control the page-fault rate to prevent thrashing. As with the working-set strategy, we may have to
suspend a process. If the page-fault rate increases and no free frames are available, we must select
some process and suspend it. The freed frames are then distributed to processes with high page-
fault rates.

Fig: Page Fault Frequency

6. (b) Describe the deadlock prevention method. (7)


Ans:
Deadlock can be prevented by ensuring that at least one of the following conditions cannot
hold.
Mutual Exclusion:
The mutual-exclusion condition must hold for non-sharable resources. For example, a
printer cannot be simultaneously shared by several processes. Sharable resources, in contrast, do
not require mutually exclusive access and thus cannot be involved in a deadlock. Read-only files
are a good example of a sharable resource. If several processes attempt to open a read-only file at
the same time, they can be granted simultaneous access to the file. A process never needs to wait
for a sharable resource. In general, however, we cannot prevent deadlocks by denying the mutual-
exclusion condition, because some resources are intrinsically non-sharable.

Hold and Wait:


To ensure that the hold-and-wait condition never occurs in the system, we must guarantee
that, whenever a process requests a resource, it does not hold any other resources. One protocol
that can be used requires each process to request and be allocated all its resources before it begins
execution. We can implement this provision by requiring that system calls requesting resources for
a process precede all other system calls.
An alternative protocol allows a process to request resources only when it has none. A
process may request some resources and use them. Before it can request any additional resources,
however, it must release all the resources that it is currently allocated. Both these protocols have
two main disadvantages. First, resource utilization may be low, since resources may be allocated
but unused for a long period. Second, starvation is possible. A process that needs several popular
resources may have to wait indefinitely, because at least one of the resources that it needs is
always allocated to some other process.

No Preemption:
The third necessary condition for deadlocks is that there be no pre-emption of resources
that have already been allocated. To ensure that this condition does not hold, we can use the
following protocol. If a process is holding some resources and requests another resource that
cannot be immediately allocated to it (that is, the process must wait), then all resources currently
being held are preempted. In other words, these resources are implicitly released. The preempted
resources are added to the list of resources for which the process is waiting. The process will be
restarted only when it can regain its old resources, as well as the new ones that it is requesting.
Alternatively, if a process requests some resources, we first check whether they are available. If
they are, we allocate them. If they are not, we check whether they are allocated to some other
process that is waiting for additional resources. If so, we preempt the desired resources from the
waiting process and allocate them to the requesting process. If the resources are neither available
nor held by a waiting process, the requesting process must wait. While it is waiting, some of its
resources may be preempted, but only if another process requests them. A process can be restarted
only when it is allocated the new resources it is requesting and recovers any resources that were
pre-empted while it was waiting. This protocol is often applied to resources whose state can be
easily saved and restored later, such as CPU registers and memory space. It cannot generally be
applied to such resources as printers and tape drives.

Circular Wait:
The fourth and final condition for deadlocks is the circular-wait condition. One way to
ensure that this condition never holds is to impose a total ordering of all resource types and to
require that each process requests resources in an increasing order of enumeration. To illustrate,
we let R = {R1, R2, ..., Rm} be the set of resource types. We assign to each resource type a unique
integer number, which, allows us to compare two resources and to determine whether one
precedes another in the ordering.
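This ordering discipline is easy to illustrate with locks; the C sketch below assumes pthread mutexes standing in for numbered resource types, and always acquires the lower-numbered resource first so no waiting cycle can form.

#include <pthread.h>

pthread_mutex_t resource[3] = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_MUTEX_INITIALIZER
};

/* Acquire two resources in the globally fixed (increasing) order. */
void acquire_pair(int i, int j) {
    int lo = i < j ? i : j, hi = i < j ? j : i;
    pthread_mutex_lock(&resource[lo]);
    pthread_mutex_lock(&resource[hi]);
}

/* Release in the reverse order of acquisition. */
void release_pair(int i, int j) {
    int lo = i < j ? i : j, hi = i < j ? j : i;
    pthread_mutex_unlock(&resource[hi]);
    pthread_mutex_unlock(&resource[lo]);
}

Because every process requests resources in increasing order of enumeration, no process can ever hold a higher-numbered resource while waiting for a lower-numbered one, which breaks the circular-wait condition.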
7. (a)What is the critical section problem? How are semaphores used to avoid it? (8)
Ans:
Consider a system consisting of n processes {P0, P1 , ..., Pn-1}. Each process has a segment
of code, called a critical section, in which the process may be changing common variables,
updating a table, writing a file, and so on. The important feature of the system is that, when one
process is executing in its critical section, no other process is to be allowed to execute in its
critical section. That is, no two processes are executing in their critical sections at the same time.
The critical-section problem is to design a protocol that the processes can use to cooperate. Each
process must request permission to enter its critical section. The section of code implementing this
request is the entry section. The critical section may be followed by an exit section. The remaining
code is the remainder section. The general structure of a typical process Pi is shown in the figure
below. The entry section and exit section are enclosed in boxes to highlight these important
segments of code.

Fig: General Structure of Typical Process Pi

The problem of critical section can be avoided using synchronization tool called
semaphores. A semaphore S is an integer variable that, apart from initialization, is accessed only
through two standard atomic operations: wait () and signal (). The wait() operation was originally
termed P; signal () was originally called V.
The definition of wait() is as follows:
wait(S) {
    while (S <= 0)
        ;   // busy wait (no-op)
    S--;
}

The definition of signal () is as follows:


signal(S) {
    S++;
}
All the modifications to the integer value of the semaphore in the wait () and signal()
operations must be executed indivisibly. That is, when one process modifies the semaphore value,
no other process can simultaneously modify that same semaphore value. In addition, in the case of
wait(S), the testing of the integer value of S (S <= 0), and its possible modification (S--), must also
be executed without interruption.
Operating systems often distinguish between counting and binary semaphores. The value
of a counting semaphore can range over an unrestricted domain. The value of a binary semaphore
can range only between 0 and 1. On some systems, binary semaphores are known as mutex locks,
as they are locks that provide mutual exclusion. We can use binary semaphores to deal with the
critical-section problem for multiple processes. Counting semaphores can be used to control
access to a given resource consisting of a finite number of instances.
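For example, mutual exclusion for n processes can be achieved with a binary semaphore mutex initialized to 1; each process Pi can then be structured as follows, using the wait() and signal() definitions above:

do {
    wait(mutex);       // entry section
        // critical section
    signal(mutex);     // exit section
        // remainder section
} while (true);

Since mutex can only be 0 or 1, at most one process can pass wait(mutex) at a time, so no two processes are ever in their critical sections simultaneously.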

7. (b) What are the necessary conditions for deadlock to occur? Explain in brief. (5)
Ans:
In a multiprogramming environment, several processes may compete for a finite
number of resources. A process requests resources; if the resources are not available at
that time, the process enters a wait state. It may happen that waiting processes will never
again change state, because the resources they have requested are held by other waiting processes.
This situation is called deadlock.
Necessary Conditions
A deadlock situation can arise if the following four conditions hold simultaneously in a
system:

1. Mutual exclusion: At least one resource must be held in a non-sharable mode, that is, only one
process at a time can use the resource. If another process requests that resource, the requesting
process must be delayed until the resource has been released.

2. Hold and Wait: There must exist a process that is holding at least one resource and is waiting
to acquire additional resources that are currently being held by other processes.

3. No Preemption: Resources cannot be preempted; that is, a resource can be released


only voluntarily by the process holding it, after that process, has completed its task.

4. Circular wait: There must exist a set {P0, P1, ..., Pn} of waiting processes such that P0 is
waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, ...,
Pn-1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held
by P0.

8. (a) Consider the following page reference string:


1, 2, 3, 4, 5, 3, 1, 4, 1, 6, 7, 8, 7, 8, 9, 7, 8, 9, 5, 4, 5, 4, 2
How many page fault would occur for the following page replacement algorithms assuming
for four page frames? All frames are initially empty:
i) LRU Replacement
ii) FIFO Replacement
iii) Optimal Replacement (7)
Ans:
i) LRU Replacement
Reference string: 1, 2, 3, 4, 5, 3, 1, 4, 1, 6, 7, 8, 7, 8, 9, 7, 8, 9, 5, 4, 5, 4, 2
The columns below show only the references that cause a page fault (* marks the frame
whose page was just replaced); all other references are hits.

Faulting ref:  1   2   3   4   5   1   6   7   8   9   5   4   2
Frame 1        1   1   1   1   5*  5   6*  6   6   6   5*  5   5
Frame 2            2   2   2   2   1*  1   1   1   9*  9   9   9
Frame 3                3   3   3   3   3   7*  7   7   7   4*  4
Frame 4                    4   4   4   4   4   8*  8   8   8   2*

No. of page faults (#) = 13


ii) FIFO Replacement
Reference string: 1, 2, 3, 4, 5, 3, 1, 4, 1, 6, 7, 8, 7, 8, 9, 7, 8, 9, 5, 4, 5, 4, 2

Faulting ref:  1   2   3   4   5   1   6   7   8   9   5   4   2
Frame 1        1   1   1   1   5*  5   5   5   8*  8   8   8   2*
Frame 2            2   2   2   2   1*  1   1   1   9*  9   9   9
Frame 3                3   3   3   3   6*  6   6   6   5*  5   5
Frame 4                    4   4   4   4   4   7*  7   7   7   4*

No. of page faults (#) = 13

iii) Optimal Replacement
Reference string: 1, 2, 3, 4, 5, 3, 1, 4, 1, 6, 7, 8, 7, 8, 9, 7, 8, 9, 5, 4, 5, 4, 2

Faulting ref:  1   2   3   4   5   6   7   8   9   5   2
Frame 1        1   1   1   1   1   6*  6   8*  8   8   2*
Frame 2            2   2   2   5*  5   5   5   9*  5*  5
Frame 3                3   3   3   3   7*  7   7   7   7
Frame 4                    4   4   4   4   4   4   4   4

No. of page faults (#) = 11
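A quick way to check such counts is to simulate the policy; the C sketch below counts FIFO page faults for this reference string with four frames.

#include <stdio.h>

int main(void) {
    int refs[] = {1,2,3,4,5,3,1,4,1,6,7,8,7,8,9,7,8,9,5,4,5,4,2};
    int n = sizeof refs / sizeof refs[0];
    int frames[4] = {-1,-1,-1,-1};
    int next = 0, faults = 0;          /* next: index of the oldest page */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < 4; f++)
            if (frames[f] == refs[i]) hit = 1;
        if (!hit) {
            frames[next] = refs[i];    /* evict the oldest page */
            next = (next + 1) % 4;
            faults++;
        }
    }
    printf("FIFO page faults = %d\n", faults);   /* prints 13 */
    return 0;
}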

8. (b) Explain access list and capability list for implementation of access matrix. (6)
Ans:
Access Lists for Objects
1. Each column in the access matrix can be implemented as an access list for one object.
Obviously, the empty entries can be discarded.
2. The resulting list for each object consists of ordered pairs <domain, rights-set>, which
define all domains with a nonempty set of access rights for that object.
3. An access list is a list that specifies the user name and the types of access allowed for each
user.
4. Access Lists with each file, indicate which users are allowed to perform which operations.
5. Access List is one way of recording access rights in a computer system. They are
frequently used in file systems.
6. In principle, access list is an exhaustive enumeration of the specific access rights of all
entities that are authorized access to a given object.
7. In systems that employ access lists, a separate list is maintained for each object.
8. Usually owner has the exclusive right to define and modify the related access list. The
owner of the object can revoke the access rights granted to a particular subject or a domain
by simply modifying or deleting the related entry in the access list.

Capability Lists for Domains


1. Capability list is obtained by decomposition of access matrix by row.
2. In capability based system, a subject can name only object for which it has capabilities.
3. A capability list for a domain is a list of objects together with the operations allowed on
those objects. An object is often represented by its physical, name or address, called a
capability.
4. A capability list is a list of objects coupled with the operations allowed on those objects.
Capabilities with each user, indicate which files may be accessed, and in what ways.
5. Capabilities provide a single unified mechanism to:
a) Address both primary and secondary memory
b) Access both hardware and software resources
c) Protect objects in both primary and secondary memory.
6. A capability is a token or a ticket that gives the subject possessing it permission to access
a specific object in the specified manner. A capability may be represented as a data
structure consisting of two items of information viz. a unique object identifier and access
rights to that object.
7. Capability based systems combine the addressing and protection functions in a single
unified mechanism that is used to access all system objects. In capability based systems, a
list of capabilities is associated with each subject.
8. Capability-based protection relies on the fact that the capabilities are never allowed to
migrate into any address space directly accessible by a user process. If all capabilities are
secure, the object they protect is also secure against unauthorized access.
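The two decompositions of the access matrix can be sketched as C structures; all type and field names here are illustrative assumptions.

/* Column slice: a per-object access list of <domain, rights> pairs. */
struct acl_entry {
    int domain;                /* who is granted access            */
    unsigned rights;           /* bit mask, e.g. READ | WRITE      */
    struct acl_entry *next;
};

/* Row slice: a per-domain capability list of <object, rights> pairs. */
struct capability {
    int object;                /* unforgeable name of the object   */
    unsigned rights;           /* operations permitted on it       */
    struct capability *next;
};

An access list answers "who may touch this object?", while a capability list answers "what may this domain touch?"; the same matrix underlies both.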

9. (a) Consider the following snapshot of a system. (7)


Allocation Max Available
Process A B C D A B C D A B C D
P0 0 0 1 2 0 0 1 2 1 5 2 0
P1 1 0 0 0 1 7 5 0
P2 1 3 5 4 2 3 5 6
P3 0 6 3 2 0 6 5 2
P4 0 0 1 4 0 6 5 6
Answer the following questions using banker’s algorithm.
a. What is the content of matrix Need?
b. Is the system in safe state?
c. If a request from process P1 arrives for (0, 4, 2, 0), can the request be granted
immediately?
Ans:

a) Needi= Maxi – Allocationi


Thus the contents of need matrix are:
A B C D
P0 0 0 0 0
P1 0 7 5 0
P2 1 0 0 2
P3 0 0 2 0
P4 0 6 4 2
b) Apply safety algorithm
initialize work= 1 5 2 0
finish[i]= false for i=0 to 4

consider P0
Need0 <= work
0 0 0 0 <= 1 5 2 0
Condition true
work= work+allocation0
work=1 5 2 0 + 0 0 1 2
work= 1 5 3 2
finish[0]= true

consider P1
Need1 <= work
0 7 5 0 <= 1 5 3 2
Condition false

consider P2
Need2 <= work
1 0 0 2 <= 1 5 3 2
Condition true
work= work+allocation2
work=1 5 3 2 + 1 3 5 4
work= 2 8 8 6
finish[2]= true

consider P3
Need3 <= work
0 0 2 0 <= 2 8 8 6
Condition true
work= work+allocation3
work=2 8 8 6 + 0 6 3 2
work= 2 14 11 8
finish[3]= true

consider P4
Need4 <= work
0 6 4 2<= 2 14 11 8
Condition true
work= work+allocation4
work=2 14 11 8 + 0 0 1 4
work= 2 14 12 12
finish[4]= true

consider P1
Need1 <= work
0 7 5 0 <= 2 14 12 12
Condition true
work= work+allocation1
work=2 14 12 12 + 1 0 0 0
work= 3 14 12 12
finish[1]= true

Yes, the system is in safe state and the safe sequence is <P0, P2, P3, P4, P1>.

c) Now applying resource request algorithm to find whether 0 4 2 0 can be granted to P1


immediately

request1= 0 4 2 0

Request1 <= Need1
0 4 2 0 <= 0 7 5 0
Condition true

Request1 <= Available
0 4 2 0 <= 1 5 2 0
Condition true

available= available - request1
available= 1 5 2 0 - 0 4 2 0
available= 1 1 0 0
Allocation1= Allocation1+request1
Allocation1=1 0 0 0 + 0 4 2 0
Allocation1= 1 4 2 0

Need1= Need1 - request1
Need1=0 7 5 0 - 0 4 2 0
Need1=0 3 3 0

Thus after granting request the new state is:


Allocation Need Available
Process A B C D A B C D A B C D
P0 0 0 1 2 0 0 0 0 1 1 0 0
P1 1 4 2 0 0 3 3 0
P2 1 3 5 4 1 0 0 2
P3 0 6 3 2 0 0 2 0
P4 0 0 1 4 0 6 4 2

Now again applying safety algorithm

initialize work= 1 1 0 0
finish[i]= false for i=0 to 4

consider P0
Need0 <= work
0 0 0 0 <= 1 1 0 0
Condition true
work= work+allocation0
work=1 1 0 0 + 0 0 1 2
work= 1 1 1 2
finish[0]= true

consider P1
Need1 <= work
0 3 3 0 <= 1 1 1 2
Condition false

consider P2
Need2 <= work
1 0 0 2 <= 1 1 1 2
Condition true
work= work+allocation2
work=1 1 1 2 + 1 3 5 4
work= 2 4 4 6
finish[2]= true

consider P3
Need3 <= work
0 0 2 0 <= 2 4 4 6
Condition true
work= work+allocation3
work=2 4 4 6 + 0 6 3 2
work= 2 10 9 8
finish[3]= true

consider P4
Need4 <= work
0 6 4 2<= 2 10 9 8
Condition true
work= work+allocation4
work=2 10 9 8 + 0 0 1 4
work= 2 10 10 12
finish[4]= true

consider P1
Need1 <= work
0 3 3 0 <= 2 10 10 12
Condition true
work= work+allocation1
work= 2 10 10 12 + 1 4 2 0
work= 3 14 12 12
finish[1]= true
Yes, the request can be granted immediately, as the system remains in a safe state with safe
sequence <P0, P2, P3, P4, P1>.
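The safety algorithm used above is compact enough to sketch in C; the matrices below are hard-coded from this snapshot, purely for illustration.

#include <stdio.h>
#include <stdbool.h>

#define P 5   /* processes */
#define R 4   /* resource types A, B, C, D */

int main(void) {
    int need[P][R]  = {{0,0,0,0},{0,7,5,0},{1,0,0,2},{0,0,2,0},{0,6,4,2}};
    int alloc[P][R] = {{0,0,1,2},{1,0,0,0},{1,3,5,4},{0,6,3,2},{0,0,1,4}};
    int work[R]     = {1,5,2,0};   /* initially, work = available */
    bool finish[P]  = {false};

    printf("safe sequence:");
    for (int done = 0; done < P; ) {
        bool progressed = false;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            bool ok = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) ok = false;   /* Need_i <= Work?   */
            if (ok) {
                for (int j = 0; j < R; j++)
                    work[j] += alloc[i][j];             /* P_i releases all  */
                finish[i] = true;
                printf(" P%d", i);
                done++; progressed = true;
            }
        }
        if (!progressed) { printf("\nunsafe state\n"); return 1; }
    }
    printf("\n");   /* prints: safe sequence: P0 P2 P3 P4 P1 */
    return 0;
}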

9. (b) What are the goals of protection. (4)


Ans:
Protection refers to a mechanism for controlling the access of programs, processes, or
users to the resources defined by a computer system. As computer systems have become more
sophisticated and pervasive in their applications, the need to protect their integrity has also grown.
Protection was originally conceived as an adjunct to multiprogramming operating systems, so that
untrustworthy users might safely share a common logical name space, such as a directory of files,
or share a common physical name space, such as memory. Modern protection concepts have
evolved to increase the reliability of any complex system that makes use of shared resources. We
need to provide protection for several reasons.
1. To prevent mischievous, intentional violation of an access restriction by a user.
2. To ensure that each program component active in a system uses system resources only in
ways consistent with stated policies.
3. To improve reliability by detecting latent errors at the interfaces between component
subsystems.
4. Early detection of interface errors can often prevent contamination of a healthy subsystem
by a malfunctioning subsystem.
5. To provide means to distinguish between authorized and unauthorized usage.
6. To provide a mechanism for the enforcement of the policies governing resource use. These
policies can be established in a variety of ways. Some are fixed in the design of the system,
while others are formulated by the management of a system. Still others are defined by the
individual users to protect their own files and programs.

10. (a) What are the different schemes for implementing revocation rights. (6)
Ans:
In a dynamic protection system, we may sometimes need to revoke access rights to
objects shared by different users. With an access-list scheme, revocation is easy. The access list is
searched for any access rights to be revoked, and they are deleted from the list. Revocation is
immediate and can be general or selective, total or partial, and permanent or temporary.
Capabilities, however, present a much more difficult revocation problem. Since the capabilities are
distributed throughout the system, we must find them before we can revoke them.
Various schemes for implementing revocation rights are as follows:
 Reacquisition: Periodically, capabilities are deleted from each domain. If a process wants
to use a capability, it may find that the capability has been deleted. The process may
then try to reacquire the capability. If access has been revoked, the process will not be
able to reacquire the capability.
 Back pointers: A list of pointers is maintained with each object, pointing to all
capabilities associated with that object. When revocation is required, we can follow this
pointers, changing the capabilities as necessary. This scheme was adopted in the
MULTICS system.
 Indirection: The capabilities point indirectly, not directly, to the objects. Each
capability points to a unique entry in a global table, which in turn points to the
object. We implement revocation by searching the global table for the desired entry and
deleting it. It does not allow selective revocation.
 Keys: A key is a unique bit pattern that can be associated with a capability. This key is
defined when the capability is created, and it can be neither modified nor inspected
by the process that owns the capability. A master key is associated with each object; it can
be defined or replaced with the set-key operation.

10. (b) Write short notes on: (7)


i) Language-Based Protection
ii) Interprocess communication
Ans:

i) Language-Based Protection


Protection in existing computer systems is usually achieved through an operating-system
kernel, which acts as a security agent to inspect and validate each attempt to access a protected
resource.
As operating systems have become more complex, and particularly as they have attempted
to provide higher-level user interfaces, the goals of protection have become much more refined.
Protection systems are now concerned not only with the identity of a resource to which access is
attempted but also with the functional nature of that access.
Policies for resource use may also vary, depending on the application, and they may be
subject to change over time.

a) Compiler-Based Enforcement
When protection is declared along with data typing, the designer of each subsystem can
specify its requirements for protection, as well as its need for use of other resources in a system.
Such a specification should be given directly as a program is composed, and in the language in
which the program itself is stated.
This approach has several significant advantages:
1. Protection needs are simply declared, rather than programmed as a sequence of calls on
procedures of an, operating system.
2. Protection requirements can be stated independently of the facilities provided by a
particular operating system.
3. The means for enforcement need not be provided by the designer of a subsystem.
4. A declarative notation is natural because access privileges are closely related to the
linguistic concept of data type.

A variety of techniques can be provided by a programming-language implementation to


enforce protection, but any of these must depend on some degree of support from an underlying
machine and its operating system. A language implementation might provide standard protected
procedures to interpret software capabilities that would realize the protection policies that could be
specified in the language. The security provided by this form of protection rests on the assumption
that the code generated by the compiler will not be modified prior to or during its execution.

ii) Interprocess communication


In computing, inter-process communication (IPC) is a set of methods for the exchange of
data among multiple threads in one or more processes. Processes may be running on one or more
computers connected by a network. IPC methods are divided into methods for message
passing, synchronization, shared memory, and remote procedure calls (RPC). The method of IPC
used may vary based on the bandwidth and latency of communication between the threads, and the
type of data being communicated.

Message Passing:
The most popular form of inter-process communication involves message passing.
Processes communicate with each other by exchanging messages. A process may send
information to a port, from which another process may receive information. The sending and
receiving processes can be on the same or different computers connected via a communication
medium. One reason for the popularity of message passing is its ability to support client-server
interaction. A server is a process that offers a set of services to client processes. These services are
invoked in response to messages from the clients and results are returned in messages to the client.
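As a small, hedged illustration of message passing on one machine, the following C sketch sends a message from a child process (the client) to its parent (the server) through a POSIX pipe:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char buf[32];

    if (pipe(fd) == -1) return 1;       /* fd[0]: read end, fd[1]: write end */
    if (fork() == 0) {                  /* child: the "client" sends a message */
        close(fd[0]);
        write(fd[1], "hello", 6);
        close(fd[1]);
    } else {                            /* parent: the "server" receives it */
        close(fd[1]);
        read(fd[0], buf, sizeof buf);
        printf("received: %s\n", buf);
        close(fd[0]);
    }
    return 0;
}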

Synchronization:
Synchronization refers to one of two distinct but related concepts: synchronization
of processes, and synchronization of data. Process synchronization refers to the idea that multiple
processes are to join up or handshake at a certain point, in order to reach an agreement or commit
to a certain sequence of action. Data synchronization refers to the idea of keeping multiple copies
of a dataset in coherence with one another, or to maintain data integrity. Process synchronization
primitives are commonly used to implement data synchronization.

Shared Memory:
Shared memory is memory that may be simultaneously accessed by multiple programs
with an intent to provide communication among them or avoid redundant copies. Shared memory
is an efficient means of passing data between programs. Depending on context, programs may run
on a single processor or on multiple separate processors. Using memory for communication inside
a single program, for example among its multiple threads, is also referred to as shared memory.

Remote Procedure call:


Remote procedure call (RPC) is an inter-process communication that allows a computer
program to cause a subroutine or procedure to execute in another address space(commonly on
another computer on a shared network) without the programmer explicitly coding the details for
this remote interaction. That is, the programmer writes essentially the same code whether the
subroutine is local to the executing program, or remote. When the software in question
uses object-oriented principles, RPC is called remote invocation or remote method invocation.
TULSIRAMJI GAIKWAD- PATIL College of Engineering & Technology
Department of Information Technology
Fourth Semester
B.E. Examination
Solution Set
Subject: Operating System

Section - A
1. (a) Define Operating System. Explain Batch, Time Sharing & Real Time Operating
System. (10)
Ans:
An Operating system is a program that controls the execution of application programs and
acts as an interface between the user of a computer and the computer hardware.
An Operating system is concerned with the allocation of resources and services, such as
memory, processors, devices and information. The Operating System correspondingly includes
programs to manage these resources, such as a traffic controller, a scheduler, memory
management module, I/O programs, and a file system.

Batch System
 Batch operating system is one where programs and data are collected together in a batch
before processing starts. A job is a predefined sequence of commands, programs, and data
that are combined into a single unit called a job.
 Figure below shows the memory layout for a simple batch system. Memory management
in batch system is very simple. Memory is usually divided into two areas : Operating
system and user program area.

Fig: Memory Layout for a Simple Batch System


 Scheduling is also simple in batch system. Jobs are processed in the order of submission i.e
first come first served fashion.
 When a job completes execution, its memory is released and the output for the job is
copied into an output spool for later printing.
 Batch system often provides simple forms of file management. Access to file is serial.
Batch systems do not require any time critical device management.
 Batch systems are inconvenient for users because users cannot interact with their jobs to
fix problems. There may also be long turn-around times.

Time Sharing Systems


 Time sharing, or multitasking, is a logical extension of multiprogramming. Multiple
jobs are executed by the CPU switching between them, but the switches occur so
frequently that the users may interact with each program while it is running.
 In an interactive, or hands-on, computer system the user gives instructions to the
operating system or to a program directly, and receives an immediate response. Usually, a
keyboard is used to provide input, and a display screen (such as a cathode-ray tube (CRT)
or monitor) is used to provide output.
 Time-sharing systems were developed to provide interactive use of a computer
system at a reasonable cost. A time-shared operating system uses CPU scheduling and
multiprogramming to provide each user with a small portion of a time-shared
computer. Each user has at least one separate program in memory. A program that is
loaded into memory and is executing is commonly referred to as a process. When a
process executes, it typically executes for only a short time before it either finishes
or needs to perform I/O. I/O may be interactive; that is, output is to a display for
the user and input is from a user keyboard. Since interactive I/O typically runs at people
speeds, it may take a long time to complete.
 A time-shared operating system allows many users to share the computer
simultaneously. Since each action or command in a time-shared system tends to be
short, only a little CPU time is needed for each user. As the system switches
rapidly from one user to the next, each user is given the impression that she has her
own computer, whereas actually one computer is being shared among many users.
 Time-sharing operating systems are even more complex than are multi-programmed
operating systems. As in multiprogramming, several jobs must be kept simultaneously in
memory, which requires some form of memory management and protection.

Real Time Operating System


 A real-time operating system (RTOS) is an operating system (OS) intended to serve real
time application requests. It must be able to process data as it comes in, typically without
buffering delays. Processing time requirements (including any OS delay) are measured in
tenths of seconds or shorter.
 A real-time operating system (RTOS) is an operating system that guarantees a certain
capability within a specified time constraint. For example, an operating system might be
designed to ensure that a certain object was available for a robot on an assembly line.
 RTOS is categorized into hard RTOS & soft RTOS.
 In "hard" real-time operating system, if the calculation could not be performed for making
the object available at the designated time, the operating system would terminate with a
failure.
 The "soft" real-time operating system is the less restricted type of operating system. If the
data is not processed within the specified time interval, then the output may lose its utility.
Some real-time operating systems are created for a special application and others are more
general purpose.

1. (b) Define & Compare Spooling & Buffering. (3)


Ans:
Buffering: It is a method of overlapping input, output and processing of a single job. After the
data has been read and CPU is about to start operating on it, the input device is instructed to begin
the next input immediately. The CPU and the input device are then both busy. By the time the
CPU is ready for the next data item, the input device will have finished reading it. The CPU can
then begin processing the newly read data, while the input device starts to read the following data.
The same can be done for output: the CPU creates data that is put into a buffer until an
output device can accept it. If the CPU is much faster than the devices, it finds the input buffer
empty and the output buffer full; in both cases the CPU has to wait for the device.

Spooling: It stands for Simultaneous Peripheral Operation On-Line. With disk technology, rather
than the cards being read from the card reader directly into memory and the job then being
processed, cards are read directly from the card reader onto the disk, and the location of the card
images is recorded in a table kept by the OS. When the job is executed, the OS satisfies its
requests for card-reader input by reading from the disk. Similarly, when the job requests the
printer to output a line, that line is copied into a system buffer and written to the disk. When the
job is completed, the output is actually printed. This form of processing is called spooling.

Comparison of Spooling & Buffering: Buffering overlaps input, output and processing of a
single job whereas spooling allows CPU to overlap the input of one job with the computation and
output of other jobs.

2. (a) List & explain various services provided by operating system. (6)
Ans: Following are the various services provided by operating system:
i) Program Execution
ii) I/O Operation
iii) File system manipulation
iv) Communication
v) Error handling
vi) Resource Management
vii) Protection

1. Program execution
Operating system handles many kinds of activities from user programs to system programs
like printer spooler, name servers, file server etc. Each of these activities is encapsulated as a
process.
A process includes the complete execution context (code to execute, data to manipulate,
registers, OS resources in use). Following are the major activities of an operating system with
respect to program management.
 Loads a program into memory.
 Executes the program.
 Handles program's execution.
 Provides a mechanism for process synchronization.
 Provides a mechanism for process communication.
 Provides a mechanism for deadlock handling.

2. I/O Operation
Operating System manages the communication between user and device drivers.
Following are the major activities of an operating system with respect to I/O Operation.
 I/O operation means read or write operation with any file or any specific I/O device.
 Program may require any I/O device while running.
 Operating system provides the access to the required I/O device when required.

3. File system manipulation
A file represents a collection of related information. Computers store files on the disk
(secondary storage) for long-term storage purposes.
A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories. Following are the major activities of an
operating system with respect to file management.
 Program needs to read a file or write a file.
 The operating system gives the permission to the program for operation on file.
 Permission varies from read-only, read-write, denied and so on.
 Operating System provides an interface to the user to create/delete files.
 Operating System provides an interface to the user to create/delete directories.
 Operating System provides an interface to create the backup of file system.

4. Communication
In the case of distributed systems, which are collections of processors that do not share
memory, peripheral devices, or a clock, the operating system manages communications between
processes. Multiple processes communicate with one another through communication lines in the network.
OS handles routing and connection strategies, and the problems of contention and security.
Following are the major activities of an operating system with respect to communication.
 Two processes often require data to be transferred between them.
 Both processes can be on the same computer or on different computers connected
through a computer network.
 Communication may be implemented by two methods either by Shared Memory or by
Message Passing.

5. Error handling
Error can occur anytime and anywhere. Error may occur in CPU, in I/O devices or in the
memory hardware. Following are the major activities of an operating system with respect to error
handling.
 OS constantly remains aware of possible errors.
 OS takes the appropriate action to ensure correct and consistent computing.

6. Resource Management
In case of multi-user or multi-tasking environment, resources such as main memory, CPU
cycles and files storage are to be allocated to each user or job. Following are the major activities of
an operating system with respect to resource management.
 OS manages all kind of resources using schedulers.
 CPU scheduling algorithms are used for better utilization of CPU.

7. Protection
Protection refers to mechanism or a way to control the access of programs, processes, or
users to the resources defined by a computer systems. Following are the major activities of an
operating system with respect to protection.
 OS ensures that all access to system resources is controlled.
 OS ensures that external I/O devices are protected from invalid access attempts.
 OS provides authentication feature for each user by means of a password.
2. (b) Discuss the various file allocation methods. (8)
Ans:
Contiguous Allocation
 The contiguous allocation method requires each file to occupy a set of contiguous
blocks on the disk. Contiguous allocation of a file is defined by the disk address and length
(in block units) of the first block. If the file is n blocks long, and starts at location
b, then it occupies blocks b, b + 1, b + 2, ..., b + n – 1. The directory entry for each file
indicates the address of the starting block and the length of the area allocated for this file.
 Accessing a file that has been allocated contiguously is easy. For sequential access, the
file system remembers the disk address of the last block referenced and, when
necessary, reads the next block. For direct access to block i of a file that starts at
block b, we can immediately access block b + i.
 The contiguous disk-space-allocation problem can be seen to be a particular application
of the general dynamic storage-allocation problem. First Fit and Best Fit are the most
common strategies used to select a free hole from the set of available holes.
 These algorithms suffer from the problem of external fragmentation. To prevent loss of
significant amounts of disk space to external fragmentation, the user had to run a
repacking routine that copied the entire file system onto another floppy disk or onto a
tape. The original floppy disk was then freed completely, creating one large
contiguous free space. The routine then copied the files back onto the floppy disk
by allocating contiguous space from this one large hole. This scheme effectively
compacts all free space into one contiguous space, solving the fragmentation problem.
 The time cost is particularly severe for large hard disks that use contiguous allocation,
where compacting all the space may take hours and may be necessary on a weekly
basis.
 A major problem is determining how much space is needed for a file. When the file is
created, the total amount of space it will need must be found and allocated. The user will
normally overestimate the amount of space needed, resulting in considerable wasted
space.

Fig: Contiguous Allocation of Disk Space

Linked Allocation
 With linked allocation, each file is a linked list of disk blocks; the disk blocks may be
scattered anywhere on the disk.
 Each directory entry has a pointer to the first disk block of the file; the pointer is
initialized to nil to signify an empty file.
 There is no external fragmentation with linked allocation, and any free block on the
free-space list can be used to satisfy a request. There is no need to declare the size of a file
when that file is created. A file can continue to grow as long as there are free
blocks.
 The major problem is that it can be used effectively for only sequential access
files. To find the ith block of a file we must start at the beginning of that file, and follow
the pointers until we get to the ith block. Each access to a pointer requires a disk read and
sometimes a disk seek.
 One drawback of linked allocation is the space required for the pointers. If a pointer
requires 4 bytes out of a 512-byte block, then 0.78 percent of the disk is being used for
pointers rather than for information. The usual solution to this problem is to collect blocks
into multiples, called clusters, and to allocate the clusters rather than blocks.
 An important variation, on the linked allocation method is the use of a file allocation
table (FAT). The table has one entry for each disk block, and is indexed by block number.
The directory entry contains the block number of the first block of the file. The table
entry indexed by that block number then contains the block number of the next
block in the file. This chain continues until the last block, which has a special end-of-file
value as the table entry. Unused blocks are indicated by a 0 table value. Allocating a new
block to a file is a simple matter of finding the first 0-valued table entry, and replacing the
previous end-of-file value with the address of the new block. The 0 is then
replaced with the end-of file value.

Fig: Linked Allocation
Fig: File Allocation Table
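
To make the FAT idea concrete, the following is a minimal C sketch (not part of the original
answer) of following a FAT chain to locate the i-th block of a file; the chain contents and the
end-of-chain marker are illustrative assumptions.

#include <stdio.h>

#define CHAIN_END -1              /* assumed end-of-file marker */

int fat[16];                      /* fat[b] = block that follows block b */

/* Return the block number of the i-th block of a file whose first
   block is 'first', following the chain one table lookup per hop. */
int nth_block(int first, int i) {
    int b = first;
    while (i-- > 0 && b != CHAIN_END)
        b = fat[b];
    return b;
}

int main(void) {
    /* hypothetical file occupying blocks 9 -> 12 -> 3 */
    fat[9] = 12; fat[12] = 3; fat[3] = CHAIN_END;
    printf("block 2 of the file is disk block %d\n", nth_block(9, 2)); /* 3 */
    return 0;
}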

Indexed Allocation
 Linked allocation cannot support efficient direct access, since the pointers to the
blocks are scattered with the blocks themselves all over the disk and need to be
retrieved in order. Indexed allocation solves this problem by bringing all the pointers
together into one location: the index block.
 Each file has its own index block, which is an array of disk-block addresses. The ith entry
in the index block points to the ith block of the file. The directory contains the
address of the index block.
 When the file is created, all pointers in the index block are set to nil. When the ith block is
first written, a block is obtained from the free-space manager, and its address is put in the
ith index-block entry.
 Allocation supports direct access, without suffering from external fragmentation
because any free block on the disk may satisfy a request for more space.
 Indexed allocation suffers from wasted space. The pointer overhead of the index block
is generally greater than the pointer overhead of linked allocation. The question of how
large the index block should be is handled by mechanisms such as the following:
1. Linked scheme. An index block is normally one disk block. Thus, it can be read and
written directly by itself. To allow for large files, several index blocks may be linked together.
2. Multilevel index. A variant of the linked representation is to use a first-level
index block to point to a set of second-level index blocks, which in turn point to
the file blocks. To access a block, the operating system uses the first-level index
to find a second-level index block, and that block to find the desired data block.

Fig: Index Allocation
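
For comparison with the FAT sketch above, a minimal sketch of an indexed-allocation lookup;
the index-block contents here are illustrative assumptions. The i-th entry is read directly,
with no chain to follow.

#include <stdio.h>

#define NIL -1                    /* assumed marker for an unset pointer */

/* index block of one file: an array of disk-block addresses */
int index_block[8] = {9, 16, 1, 10, 25, NIL, NIL, NIL};

int main(void) {
    int i = 3;
    printf("block %d of the file is disk block %d\n", i, index_block[i]); /* 10 */
    return 0;
}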

3. (a) Explain different scheduling levels like short- term, mid- term & long- term
scheduling. (6)
Ans:
Long Term Scheduling
It is also called job scheduler. Long term scheduler determines which programs are
admitted to the system for processing. Job scheduler selects processes from the queue and
loads them into memory for execution. Process loads into the memory for CPU scheduler. The
primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and
processor bound. It also controls the degree of multiprogramming. If the degree of
multiprogramming is stable, then the average rate of process creation must be equal to the
average departure rate of processes leaving the system.
On some systems, the long term scheduler may be absent or minimal. Time-sharing
operating systems often have no long term scheduler. The long term scheduler runs when a
process changes state from new to ready.

Short Term Scheduling
It is also called CPU scheduler. Main objective is increasing system performance in
accordance with the chosen set of criteria. It is the change of ready state to running state of the
process. CPU scheduler selects from among the processes that are ready to execute and
allocates the CPU to one of them.
The short term scheduler, also known as the dispatcher, executes most frequently and makes
the fine-grained decision of which process to execute next. The short term scheduler is faster
than the long term scheduler.

Medium Term Scheduling
Medium term scheduling is part of the swapping function. It removes the processes
from the memory. It reduces the degree of multiprogramming. The medium term scheduler is in
charge of handling the swapped out-processes.
A running process may become suspended by making an I/O request. Suspended
processes cannot make any progress towards completion, so in this condition it is useful to
remove the process from memory and make space for other processes. Moving a suspended
process to secondary storage is called swapping, and the process is said to be swapped out or
rolled out. Swapping may be necessary to improve the process mix.

Fig: Queuing Diagram with Medium Term Scheduling

3. (b) Draw & Explain process state transition diagram in detail. Also explain PCB. (7)
Ans:
Process State Transition Diagram: As a process executes, it changes state. Process state is
defined as the current activity of the process. The figure below shows the general form of the
process state transition diagram. The diagram contains five states, and each process is in
exactly one of these states at any instant. The states are listed below.
1. New
2. Ready
3. Running
4. Waiting
5. Terminated (exit)

Fig: Diagram for Process State

1. New: A process that has just been created.
2. Ready: Ready processes are waiting to have the processor allocated to them by the operating
system so that they can run.
3. Running: The process that is currently being executed. A running process possesses all
the resources needed for its execution, including the processor.
4. Waiting: A process that cannot execute until some event occurs such as the completion of an
I/O operation. The running process may become suspended by invoking an I/O module.
5. Terminated: A process that has been released from the pool of executable processes by
the operating system.
Whenever a process changes state, the operating system reacts by placing the
process's PCB in the list that corresponds to its new state. Only one process can be running
on any processor at any instant, while many processes may be in the ready and waiting states.

Process Control Block: Each process is represented by a process control block (PCB). The PCB
is the data structure the operating system uses to group together all the information it needs
about a particular process. The figure below shows the process control block.

Process State
Process Number
Program Counter
CPU Registers
Memory Allocation
Event Information
List of Open Files
...
Fig: Process Control Block

1. Process State : Process state may be new, ready, running, waiting and so on.
2. Program Counter : It indicates the address of the next instruction to be executed for
this process.
3. Event information : For a process in the blocked state this field contains
information concerning the event for which the process is waiting.
4. CPU Registers : These include general purpose registers, stack pointers, index registers,
accumulators, etc. The number and type of registers depend entirely upon the computer
architecture.
5. Memory Management Information : This information may include the value of base and
limit register. This information is useful for de-allocating the memory when the
process terminates.
6. Accounting Information : This information includes the amount of CPU and real time
used, time limits, job or process numbers, account numbers etc.
7. I/O Status Information: This information includes the list of I/O devices allocated to the
process, a list of open files and so on.
Process control block also includes the information about CPU scheduling, I/O
resource management, file management information, priority and so on. The PCB simply serves as
the repository for any information that may vary from process to process.
4. (a) Explain paging & segmentation scheme of memory management with example. (7)
Ans:
Paging:
Paging is a memory-management scheme that permits the physical address space of a
process to be non-contiguous. The basic method for implementing paging involves breaking
physical memory into fixed-sized blocks called frames and breaking logical memory into blocks
of the same size called pages. When a process is to be executed, its pages are loaded into any
available memory frames from the backing store. The backing store is divided into fixed-sized
blocks that are of the same size as the memory frames. The hardware support for paging is
illustrated in figure below.

Fig: Paging Hardware

Every address generated by the CPU is divided into two parts: a page number (p) and a
page offset (d). The page number is used as an index into a page table. The page table contains the
base address of each page in physical memory. This base address is combined with the page offset
to define the physical memory address that is sent to the memory unit.
As an example, consider the memory in the figure below. Using a page size of 4 bytes and a
physical memory of 32 bytes (8 pages), we show how the user's view of memory can be mapped
into physical memory. Logical address 0 is page 0, offset 0. Indexing into the page table, we find
that page 0 is in frame 5. Thus, logical address 0 maps to physical address 20 (= (5 x 4) + 0).
Logical address 3 (page 0, offset 3) maps to physical address 23 (= (5 x 4) + 3). Logical address 4
is page 1, offset 0; according to the page table, page 1 is mapped to frame 6. Thus, logical address
4 maps to physical address 24 (= (6 x 4) + 0). Logical address 13 maps to physical address 9.

Fig: Paging example with 32- byte Memory and 4- byte Pages
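
To make the translation concrete, a minimal C sketch (not part of the original answer) using the
page-table values of this example; the frames for pages 0, 1 and 3 follow from the mappings
quoted above, while the frame for page 2 is an assumption.

#include <stdio.h>

#define PAGE_SIZE 4               /* 4-byte pages, as in the example */

int page_table[4] = {5, 6, 1, 2}; /* page number -> frame number */

/* Split a logical address into (p, d) and rebuild the physical address. */
int translate(int logical) {
    int p = logical / PAGE_SIZE;  /* page number */
    int d = logical % PAGE_SIZE;  /* page offset */
    return page_table[p] * PAGE_SIZE + d;
}

int main(void) {
    printf("%d\n", translate(0));  /* page 0 -> frame 5: prints 20 */
    printf("%d\n", translate(3));  /* page 0, offset 3:  prints 23 */
    printf("%d\n", translate(4));  /* page 1 -> frame 6: prints 24 */
    printf("%d\n", translate(13)); /* page 3 -> frame 2: prints 9  */
    return 0;
}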
Segmentation:
Segmentation is a memory-management scheme that supports the user's view of memory. A
logical address space is a collection of segments. Each segment has a name and a length. The
addresses specify both the segment name and the offset within the segment. The user therefore
specifies each address by two quantities: a segment name and an offset. For simplicity of
implementation, segments are numbered and are referred to by a segment number, rather than by a
segment name. Thus, a logical address consists of a two tuple:
< segment-number, offset >.
Each entry in the segment table has a segment base and a segment limit. The segment base
contains the starting physical address where the segment resides in memory, whereas the segment
limit specifies the length of the segment. The use of a segment table is illustrated in Figure below.
A logical address consists of two parts: a segment number, s, and an offset into that segment, d.
The segment number is used as an index to the segment table. The offset d of the logical address
must be between 0 and the segment limit. If it is not, we trap to the operating system. When an
offset is legal, it is added to the segment base to produce the address in physical memory of the
desired byte. The segment table is thus essentially an array of base-limit register pairs.

Fig: Segmentation Hardware


As an example, consider the situation shown in the figure below. We have five segments
numbered from 0 through 4. The segments are stored in physical memory as shown. The segment
table has a separate entry for each segment, giving the beginning address of the segment in
physical memory (or base) and the length of that segment (or limit). For example, segment 2 is
400 bytes long and begins at location 4300. Thus, a reference to byte 53 of segment 2 is mapped
onto location 4300 + 53 = 4353. A reference to segment 3, byte 852, is mapped to 3200 (the base
of segment 3) + 852 = 4052. A reference to byte 1222 of segment 0 would result in a trap to the
operating system, as this segment is only 1,000 bytes long.

Fig: Example of Segmentation
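
A minimal C sketch of the base/limit check described above. The entries for segments 0, 2 and 3
use the figures quoted in the example (segment 2: base 4300, limit 400; segment 3: base 3200;
segment 0: limit 1,000); the remaining values are assumptions for illustration.

#include <stdio.h>

struct segment { int base; int limit; };

struct segment seg_table[5] = {
    {1400, 1000},  /* segment 0 */
    {6300,  400},  /* segment 1 (assumed) */
    {4300,  400},  /* segment 2 */
    {3200, 1100},  /* segment 3 (limit assumed) */
    {4700, 1000}   /* segment 4 (assumed) */
};

/* Translate <segment s, offset d>, trapping on an out-of-range offset. */
int translate(int s, int d) {
    if (d < 0 || d >= seg_table[s].limit) {
        printf("trap: offset %d out of range for segment %d\n", d, s);
        return -1;
    }
    return seg_table[s].base + d;
}

int main(void) {
    printf("%d\n", translate(2, 53));    /* 4300 + 53  = 4353 */
    printf("%d\n", translate(3, 852));   /* 3200 + 852 = 4052 */
    translate(0, 1222);                  /* limit is 1000: traps */
    return 0;
}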


4. (b) What is thrashing? Explain different methods to minimize thrashing. (6)
Ans:
Thrashing:
Consider a process that does not have "enough" frames. If the process does not have the
number of frames it needs to support pages in active use, it will quickly page-fault. At this point, it
must replace some page. As all its pages are in active use, it must replace a page that will be
needed again right away. Consequently, it quickly faults again, and again, and again, replacing
pages that it must bring back in immediately. This high paging activity is called "thrashing". A
process is thrashing if it is spending more time paging than executing.

Fig: Thrashing
Different methods to minimize thrashing are as follows:
Working Set Model:
The working-set model is based on the assumption of locality. This model uses a
parameter, Δ, to define the working-set window. The idea is to examine the most recent Δ page
references. The set of pages in the most recent Δ page references is the working set, as in the figure
below. If a page is in active use, it will be in the working set. If it is no longer being used, it will
drop from the working set Δ time units after its last reference. Thus, the working set is an
approximation of the program's locality.
For example, given the sequence of memory references shown in the figure, if Δ = 10 memory
references, then the working set at time t1 is {1, 2, 5, 6, 7}. By time t2, the working set has
changed to {3, 4}.

Fig: Working Set Model

The accuracy of the working set depends on the selection of Δ. If Δ is too small, it will not
encompass the entire locality; if Δ is too large, it may overlap several localities. In the extreme, if
Δ is infinite, the working set is the set of pages touched during the process execution. Once Δ has
been selected, use of the working-set model is simple. The operating system monitors the working
set of each process and allocates to that working set enough frames to provide it with its working-
set size. If there are enough extra frames, another process can be initiated. If the sum of the
working-set sizes increases, exceeding the total number of available frames, the operating system
selects a process to suspend. The process's pages are written out (swapped), and its frames are
reallocated to other processes. The suspended process can be restarted later.
This working-set strategy prevents thrashing while keeping the degree of
multiprogramming as high as possible. Thus, it optimizes CPU utilization. The difficulty with the
working-set model is keeping track of the working set. The working-set window is a moving
window. At each memory reference, a new reference appears at one end and the oldest reference
drops off the other end. A page is in the working set if it is referenced anywhere in the working-set
window.
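
A minimal C sketch (not part of the original answer) of computing the working set as the set of
distinct pages among the last Δ references; the reference string in main() is an illustrative
assumption modeled on the figure's two localities.

#include <stdio.h>

#define DELTA     10              /* working-set window */
#define MAX_PAGE  32              /* assumed upper bound on page numbers */

/* Print the working set at time t: distinct pages in refs[t-DELTA+1 .. t]. */
void working_set(const int refs[], int t) {
    int seen[MAX_PAGE] = {0};
    int start = (t - DELTA + 1 > 0) ? (t - DELTA + 1) : 0;
    printf("WS(t=%d) = {", t);
    for (int i = start; i <= t; i++) {
        if (!seen[refs[i]]) {
            seen[refs[i]] = 1;
            printf(" %d", refs[i]);
        }
    }
    printf(" }\n");
}

int main(void) {
    int refs[] = {2, 6, 1, 5, 7, 7, 7, 7, 5, 1,   /* locality {1,2,5,6,7} */
                  3, 4, 4, 4, 3, 4, 3, 4, 4, 4};  /* locality {3,4}       */
    working_set(refs, 9);   /* prints WS(t=9)  = { 2 6 1 5 7 } */
    working_set(refs, 19);  /* prints WS(t=19) = { 3 4 } */
    return 0;
}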

Page-Fault Frequency:
The working-set model is successful, and knowledge of the working set can be useful for
pre-paging, but it seems a clumsy way to control thrashing. A strategy that uses the page-fault
frequency (PFF) takes a more direct approach.
The specific problem is how to prevent thrashing. Thrashing has a high page-fault rate.
Thus, we want to control the page-fault rate. When it is too high, we know that the process needs
more frames. Conversely, if the page-fault rate is too low, then the process may have too many
frames. We can establish upper and lower bounds on the desired page-fault rate. If the actual page-
fault rate exceeds the upper limit, we allocate the process another frame; if the page-fault rate falls
below the lower limit, we remove a frame from the process. Thus, we can directly measure and
control the page-fault rate to prevent thrashing. As with the working-set strategy, we may have to
suspend a process. If the page-fault rate increases and no free frames are available, we must select
some process and suspend it. The freed frames are then distributed to processes with high page-
fault rates.

Fig: Page Fault Frequency
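
A minimal sketch of this control policy; the thresholds and the sampled fault rates below are
illustrative assumptions only.

#include <stdio.h>

#define UPPER 0.10                /* upper bound on acceptable fault rate */
#define LOWER 0.01                /* lower bound on acceptable fault rate */

int main(void) {
    double rate[] = {0.002, 0.150, 0.300, 0.050};  /* sampled fault rates */
    int frames = 4;               /* frames currently allocated */

    for (int i = 0; i < 4; i++) {
        if (rate[i] > UPPER)
            frames++;             /* too many faults: allocate another frame */
        else if (rate[i] < LOWER && frames > 1)
            frames--;             /* too few faults: reclaim a frame */
        printf("rate=%.3f -> frames=%d\n", rate[i], frames);
    }
    return 0;
}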

5. (a) What is virtual memory? Explain demand paging in detail. (6)


Ans:
Virtual Memory:
Virtual memory is a technique that allows the execution of processes that are not
completely in memory. Virtual memory abstracts main memory into an extremely large, uniform
array of storage, separating logical memory as viewed by the user from physical memory. This
technique frees programmers from the concerns of memory-storage limitations. Virtual memory
also allows processes to share files easily and to implement shared memory.

Demand Paging:
A demand-paging system is similar to a paging system with swapping where processes
reside in secondary memory. When we want to execute a process, we swap it into memory. When
a process is to be swapped in, the pager guesses which pages will be used before the process is
swapped out again. Instead of swapping in a whole process, the pager brings only those necessary
pages into memory. Thus, it avoids reading into memory pages that will not be used anyway,
decreasing the swap time and the amount of physical memory needed.
With demand paging we need some form of hardware support to distinguish between
the pages that are in memory and the pages that are on the disk. The valid-invalid bit scheme can
be used for this purpose. When this bit is set to "valid" the associated page is both legal and in
memory. If the bit is set to "invalid," the page either is not valid or is valid but is currently on the
disk. The page-table entry for a page that is brought into memory is set as usual, but the page-table
entry for a page that is not currently in memory is either simply marked invalid or contains the
address of the page on disk. This situation is depicted in Figure below:

Fig: Page Table when some Pages are not in Memory

The hardware to support demand paging is the same as the hardware for paging and
swapping:
• Page table: This table has the ability to mark an entry invalid through a valid-invalid bit or
special value of protection bits.
• Secondary memory: This memory holds those pages that are not present in main memory.
A crucial requirement for demand paging is the need to be able to restart any instruction
after a page fault. If the page fault occurs on the instruction fetch, we can restart by-fetching the
instruction again. If a page fault occurs while we are fetching an operand, we must fetch and
decode the instruction again and then fetch the operand.
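
A minimal C sketch of the valid-invalid bit check performed on every reference; the structure
layout, names, and the chosen frame number are assumptions for illustration.

#include <stdio.h>

struct pte { int frame; int valid; };   /* one page-table entry */

struct pte page_table[8];               /* all entries start invalid */

/* Return the frame of page p, servicing a page fault on first touch. */
int access_page(int p) {
    if (!page_table[p].valid) {
        printf("page fault on page %d\n", p);
        /* the OS would locate the page on disk, load it into a free
           frame, update the entry, and restart the instruction */
        page_table[p].frame = 2;        /* frame chosen by the OS (assumed) */
        page_table[p].valid = 1;
    }
    return page_table[p].frame;
}

int main(void) {
    access_page(3);    /* first reference: page fault */
    access_page(3);    /* page now resident: no fault */
    return 0;
}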

5. (b) Consider a system with 3 page frame for user level application. Consider the following
reference string:
5,6,4,3,5,6,3,6,9,4,3,9,6,4,9
How many page faults will be there when one considers FIFO, LRU & Optimal Page
Replacement algorithm? (7)
Ans:
i) First In First Out (FIFO)

Reference string:  5  6  4  3  5  6  3  6  9  4  3  9  6  4  9
Frame 1:           5  5  5  3  3  3  3  3  9  9  9  9  6  6  6
Frame 2:           -  6  6  6  5  5  5  5  5  4  4  4  4  4  9
Frame 3:           -  -  4  4  4  6  6  6  6  6  3  3  3  3  3
Page fault?        F  F  F  F  F  F  .  .  F  F  F  .  F  .  F

No. of page faults when one considers the FIFO algorithm: 11

ii) Least Recently Used (LRU)

Reference string:  5  6  4  3  5  6  3  6  9  4  3  9  6  4  9
Frame 1:           5  5  5  3  3  3  3  3  3  4  4  4  6  6  6
Frame 2:           -  6  6  6  5  5  5  5  9  9  9  9  9  9  9
Frame 3:           -  -  4  4  4  6  6  6  6  6  3  3  3  4  4
Page fault?        F  F  F  F  F  F  .  .  F  F  F  .  F  F  .

No. of page faults when one considers the LRU algorithm: 11

iii) Optimal Page Replacement (OPR)

Reference string:  5  6  4  3  5  6  3  6  9  4  3  9  6  4  9
Frame 1:           5  5  5  5  5  5  5  5  9  9  9  9  9  9  9
Frame 2:           -  6  6  6  6  6  6  6  6  4  4  4  4  4  4
Frame 3:           -  -  4  3  3  3  3  3  3  3  3  3  6  6  6
Page fault?        F  F  F  F  .  .  .  .  F  F  .  .  F  .  .

No. of page faults when one considers the OPR algorithm: 07

(F marks a page fault; a dot marks a hit.)
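
These counts can be checked mechanically. A minimal C sketch (not part of the original answer)
that simulates FIFO replacement over the reference string above:

#include <stdio.h>

#define FRAMES 3
#define REFS   15

int main(void) {
    int ref[REFS] = {5,6,4,3,5,6,3,6,9,4,3,9,6,4,9};
    int frame[FRAMES] = {-1, -1, -1};   /* -1 marks an empty frame */
    int next = 0;                       /* oldest frame: next FIFO victim */
    int faults = 0;

    for (int i = 0; i < REFS; i++) {
        int hit = 0;
        for (int j = 0; j < FRAMES; j++)
            if (frame[j] == ref[i]) { hit = 1; break; }
        if (!hit) {                     /* page fault: replace oldest page */
            frame[next] = ref[i];
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    printf("FIFO page faults: %d\n", faults);   /* prints 11 */
    return 0;
}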

Section – B
6. (a) List & Explain necessary conditions that must hold simultaneously for deadlock. (6)
Ans: In a multiprogramming environment, several processes may compete for a finite
number of resources. A process requests resources; if the resources are not available at
that time, the process enters a wait state. It may happen that waiting processes will never
again change state, because the resources they have requested are held by other waiting processes.
This situation is called deadlock.
Following are the necessary conditions that must hold simultaneously for deadlock:
i) Mutual exclusion
ii) Hold & wait
iii) No Pre-emption
iv) Circular wait

i. Mutual exclusion: At least one resource must be held in a non-sharable mode, that is, only
one process at a time can use the resource. If another process requests that resource, the
requesting process must be delayed until the resource has been released.

ii. Hold and Wait: There must exist a process that is holding at least one resource and is
waiting to acquire additional resources that are currently being held by other processes.

iii. No Preemption: Resources cannot be preempted; that is, a resource can be released
only voluntarily by the process holding it, after that process, has completed its task.

iv. Circular wait: There must exist a set {P0, P1, ..., Pn } of waiting processes such that P0 is
waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2,
…., Pn-1 is waiting for a resource that is held by Pn, and Pn is waiting for a
resource that is held by P0.
6. (b) Explain Banker’s algorithm for deadlock avoidance with suitable example (8)
Ans:
The Banker's algorithm is applicable to a resource-allocation system with multiple
instances of each resource type, though it is less efficient than the resource-allocation graph scheme. When
a new process enters the system, it must declare the maximum number of instances of each
resource type that it may need. This number may not exceed the total number of resources in the
system. When a user requests a set of resources, the system must determine whether the allocation
of these resources will leave the system in a safe state. If it will, the resources are allocated;
otherwise, the process must wait until some other process releases enough resources.
Several data structures must be maintained to implement the banker's algorithm. Let n be
the number of processes in the system and m be the number of resource types. We need the
following data structures:
• Available: A vector of length m indicates the number of available resources of each type. If
Available[j] equals k, there are k instances of resource type Rj available.
• Max: An n x m matrix defines the maximum demand of each process. If Max[i][j] equals k, then
process Pi may request at most k instances of resource type Rj.
• Allocation: An n x m matrix defines the number of resources of each type currently allocated to
each process. If Allocation[i][j] equals k, then process Pi is currently allocated k instances of
resource type Rj.

• Need: An n x m matrix indicates the remaining resource need of each process. If Need[i][j]
equals k, then process Pi may need k more instances of resource type Rj to complete its task. Note
that Need[i][j] equals Max[i][j]- Allocation[i][j].

i) Safety Algorithm:
This algorithm for finding out whether or not a system is in a safe state. This algorithm
can be described, as follows:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize
Work = Available and Finish[i] = false for i = 0, 1, ..., n-1.
2. Find an i such that both
a. Finish[i] == false
b. Needi <= Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.

ii) Resource-Request Algorithm
This determines if requests can be safely granted. Let Requesti be the request vector for
process Pi. If Requesti[ j] == k, then process Pi wants k instances of resource type Rj. When a
request for resources is made by process Pi the following actions are taken:
1. If Requesti <= Needi, go to step 2. Otherwise, raise an error condition, since the process has
exceeded its maximum claim.
2. If Requesti <= Available, go to step 3. Otherwise, Pi must wait, since the resources are not
available.
3. Have the system pretend to have allocated the requested resources to process Pi by modifying
the state as follows:
Available = Available - Requesti
Allocationi = Allocationi + Requesti
Needi = Needi - Requesti
If the resulting resource-allocation state is safe, the transaction is completed, and process Pi is
allocated its resources. However, if the new state is unsafe, then Pi must wait for Requesti and the
old resource allocation state is restored.

Example:
Consider the following snapshot of a system with five processes P0 through P4 and three
resource types A (10 instances), B (5 instances), and C (7 instances):

          Allocation    Max        Available
          A  B  C       A  B  C    A  B  C
    P0    0  1  0       7  5  3    3  3  2
    P1    2  0  0       3  2  2
    P2    3  0  2       9  0  2
    P3    2  1  1       2  2  2
    P4    0  0  2       4  3  3

The content of the matrix Need is defined to be Max - Allocation and is as follows:

          Need
          A  B  C
    P0    7  4  3
    P1    1  2  2
    P2    6  0  0
    P3    0  1  1
    P4    1  3  1

By using the safety algorithm we can conclude that the system is currently in a safe state with
the sequence <P1, P3, P4, P2, P0>. Suppose now that process P1 requests one additional instance
of resource type A and two instances of resource type C, so Request1 = (1,0,2). To decide whether
this request can be immediately granted, we first check that Request1 <= Available, that is, (1,0,2) <=
(3,3,2), which is true. By using the resource-request algorithm this request can be fulfilled, and we
arrive at the following new state:
          Allocation    Need       Available
          A  B  C       A  B  C    A  B  C
    P0    0  1  0       7  4  3    2  3  0
    P1    3  0  2       0  2  0
    P2    3  0  2       6  0  0
    P3    2  1  1       0  1  1
    P4    0  0  2       1  3  1
Applying the safety algorithm again to check whether the new system state is safe, we get the
safe sequence <P1, P3, P4, P0, P2>. Thus the request can be granted immediately.
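
A minimal C sketch of the safety algorithm applied to this snapshot (not part of the original
answer); the matrices are hard-coded from the example above.

#include <stdio.h>

#define N 5   /* processes P0..P4 */
#define M 3   /* resource types A, B, C */

int main(void) {
    int alloc[N][M] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    int need[N][M]  = {{7,4,3},{1,2,2},{6,0,0},{0,1,1},{1,3,1}};
    int work[M]     = {3,3,2};          /* Work = Available */
    int finish[N]   = {0};
    int seq[N], count = 0;

    while (count < N) {
        int progress = 0;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            int ok = 1;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { ok = 0; break; }
            if (ok) {                   /* Needi <= Work: Pi can finish */
                for (int j = 0; j < M; j++)
                    work[j] += alloc[i][j];   /* Pi releases its resources */
                finish[i] = 1;
                seq[count++] = i;
                progress = 1;
            }
        }
        if (!progress) {
            printf("the system is NOT in a safe state\n");
            return 1;
        }
    }
    printf("safe sequence:");
    for (int i = 0; i < N; i++)
        printf(" P%d", seq[i]);
    printf("\n");                       /* prints: P1 P3 P4 P0 P2 */
    return 0;
}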

7. Explain the following: (3+3+3+4)
i) Critical section
ii) Mutual Exclusion
iii) Busy Waiting
iv) Semaphore
Ans:
i) Critical section
Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a
segment of code, called a critical section, in which the process may be changing common
variables, updating a table, writing a file, and so on. The important feature of the system is that,
when one process is executing in its critical section, no other process is to be allowed to execute in
its critical section. That is, no two processes are executing in their critical sections at the same
time. The critical-section problem is to design a protocol that the processes can use to cooperate.
Each process must request permission to enter its critical section. The section of code
implementing this request is the entry section. The critical section may be followed by an exit
section. The remaining code is the remainder section. The general structure of a typical process Pi
is shown in the figure below. The entry section and exit section are enclosed in boxes to highlight
these important segments of code.

Fig: General Structure of Typical Process Pi
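
Since the figure is not reproduced here, a minimal C-style sketch of this structure, with
entry_section() and exit_section() as placeholder names:

do {
    entry_section();      /* request permission to enter */

    /* critical section: update shared variables, tables, files */

    exit_section();       /* announce that the process has left */

    /* remainder section */
} while (1);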

ii) Mutual Exclusion
Mutual exclusion refers to the requirement of ensuring that no two concurrent processes
are in their critical section at the same time; it is a basic problem in concurrency control, to
prevent race conditions. Here, a critical section refers to a period of time when the process
accesses a shared resource, such as shared memory.
If the shared resource is a variable, mutual exclusion ensures that at most one process at a
time has access to it during the critical updates that lead to temporarily inconsistent values. With
shared devices, the need of mutual exclusion is even more obvious when one considers the
problems that may be caused by their uncontrolled use. With these operations performed in a
mutually exclusive way, only one program at a time is allowed to control a serially reusable
device.
To be acceptable as a general tool, a solution to the mutual exclusion problem should:
1. Ensure mutual exclusion between processes accessing the protected shared resources.
2. Make no assumption about relative speeds and priorities of contending processes.
3. Guarantee that crashing or terminating of any process outside of its critical section does
not affect the ability of other contending processes to access the shared resources.
4. When more than one process wishes to enter the critical section, grant entrance to one of
them in finite time.
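
As an illustration, a minimal sketch of mutual exclusion on a shared counter using a POSIX
mutex (compile with -pthread); without the lock, the two threads could interleave their
updates and lose increments.

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
long counter = 0;                      /* the shared resource */

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* entry section */
        counter++;                     /* critical section */
        pthread_mutex_unlock(&lock);   /* exit section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 200000 */
    return 0;
}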

iii) Busy Waiting
The main disadvantage of the semaphore definition given here is that it requires busy
waiting. While a process is in its critical section, any other process that tries to enter its critical
section must loop continuously in the entry code. This continual looping is clearly a problem in a
real multiprogramming system, where a single CPU is shared among many processes. Busy
waiting wastes CPU cycles that some other process might be able to use productively. This type of
semaphore is also called a spinlock because the process "spins" while waiting for the lock.
(Spinlocks do have an advantage in that no context switch is required when a process must wait on
a lock, and a context switch may take considerable time. Thus, when locks are expected to be held
for short times, spinlocks are useful)
To overcome the need for busy waiting, we can modify the definition of the wait () and
signal () semaphore operations. When a process executes the wait () operation and finds that the
semaphore value is not positive, it must wait. However, rather than engaging in busy waiting, the
process can block itself. The block operation places a process into a waiting queue associated with
the semaphore, and the state of the process is switched to the waiting state. Then control is
transferred to the CPU scheduler, which selects another process to execute.
A process that is blocked, waiting on a semaphore S, should be restarted when some other
process executes a signal( ) operation. The process is restarted by a wakeup () operation, which
changes the process from the waiting state to the ready state. The process is then placed in the
ready queue.
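
POSIX semaphores behave this way: sem_wait() blocks the caller instead of spinning, and
sem_post() performs the wakeup. A minimal sketch (compile with -pthread):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t s;

void *waiter(void *arg) {
    sem_wait(&s);          /* value is 0: the thread blocks, consuming no CPU */
    printf("woken up\n");
    return NULL;
}

int main(void) {
    pthread_t t;
    sem_init(&s, 0, 0);    /* initial value 0, so the waiter must block */
    pthread_create(&t, NULL, waiter, NULL);
    sem_post(&s);          /* wakeup: moves the waiter to the ready state */
    pthread_join(t, NULL);
    sem_destroy(&s);
    return 0;
}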

iv) Semaphore
A semaphore S is an integer variable that, apart from initialization, is accessed only
through two standard atomic operations: wait() and signal(). The wait() operation was originally
termed P; signal() was originally called V.
The definition of wait() is as follows:

wait(S) {
    while (S <= 0)
        ;   // no-op (busy wait)
    S--;
}
The definition of signal() is as follows:

signal(S) {
    S++;
}
All the modifications to the integer value of the semaphore in the wait () and signal()
operations must be executed indivisibly. That is, when one process modifies the semaphore value,
no other process can simultaneously modify that same semaphore value. In addition, in the case of
wait(S), the testing of the integer value of S (S <= 0) and its possible modification (S--) must also
be executed without interruption.
Operating systems often distinguish between counting and binary semaphores. The value
of a counting semaphore can range over an unrestricted domain. The value of a binary semaphore
can range only between 0 and 1. On some systems, binary semaphores are known as mutex locks,
as they are locks that provide mutual exclusion. We can use binary semaphores to deal with the
critical-section problem for multiple processes. Counting semaphores can be used to control
access to a given resource consisting of a finite number of instances.

8. (a) Explain Producer Consumer Problem with solution using semaphore. (7)
Ans:
The producer-consumer problem can be stated as follows: given a set of cooperating
processes, some of which produce data items to be consumed by others, with possible disparity
between production and consumption rates, devise a synchronization protocol that allows both
producers and consumers to operate concurrently at their respective service rates, in such a way
that produced items are consumed in the exact order of production.
To allow producer and consumer to operate concurrently, a pool of buffer is created that is
filled by the producer and emptied by consumer. Producer produces in one buffer and consumer
consumes from another buffer. The process should be synchronized in such a way that consumer
should not consume the item that the producer has not produced.
At any particular time, the shared global buffer may be empty, partially filled, or full of
produced items ready for consumption. A producer may run in either of the two former cases, but
when the buffer is full the producer must be kept waiting. On the other hand, when the buffer is
empty, the consumer must wait.
The solution for the producer is to either go to sleep or discard data if the buffer is full. The
next time the consumer removes an item from the buffer, it notifies the producer, who starts to fill
the buffer again. In the same way, the consumer can go to sleep if it finds the buffer to be empty.
The next time the producer puts data into the buffer, it wakes up the sleeping consumer. The
solution can be reached by means of inter-process communication, typically using semaphores.
The example below shows a general solution to the producer consumer problem using
semaphores. We assume that the pool consists of n buffers, each capable of holding one item. The
mutex semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to the
value 1. The empty and full semaphores count the number of empty and full buffers. The
semaphore empty is initialized to the value n; the semaphore full is initialized to the value 0.The
code for the producer and consumer process is shown below. We can interpret this code as the
producer producing full buffers for the consumer or as the consumer producing empty buffers for
the producer.

Structure of producer process:
Producer()
{
    while (1)
    {
        <<< produce item >>>

        P(empty);   /* get an empty buffer (decrement count); block if unavailable */
        P(mutex);   /* acquire critical section: shared buffer */

        <<< critical section: put item into shared buffer >>>

        V(mutex);   /* release critical section */
        V(full);    /* increment the number of full buffers */
    }
}

Structure of consumer process:
Consumer()
{
    while (1)
    {
        P(full);    /* wait for a full buffer */
        P(mutex);   /* acquire critical section: shared buffer */

        <<< critical section: remove item from shared buffer >>>

        V(mutex);   /* release critical section */
        V(empty);   /* increment the number of empty buffers */
    }
}
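
A minimal compilable version of the same scheme using POSIX threads and semaphores (compile
with -pthread); the buffer size and the number of items produced are illustrative assumptions.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N     5                   /* number of buffer slots */
#define ITEMS 10                  /* items to produce/consume */

int buffer[N];
int in = 0, out = 0;              /* next slot to fill / to empty */
sem_t empty_slots, full_slots, mutex;

void *producer(void *arg) {
    for (int item = 1; item <= ITEMS; item++) {
        sem_wait(&empty_slots);   /* P(empty) */
        sem_wait(&mutex);         /* P(mutex) */
        buffer[in] = item;        /* critical section */
        in = (in + 1) % N;
        sem_post(&mutex);         /* V(mutex) */
        sem_post(&full_slots);    /* V(full)  */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full_slots);    /* P(full)  */
        sem_wait(&mutex);         /* P(mutex) */
        int item = buffer[out];   /* critical section */
        out = (out + 1) % N;
        sem_post(&mutex);         /* V(mutex) */
        sem_post(&empty_slots);   /* V(empty) */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, N); /* empty = n */
    sem_init(&full_slots, 0, 0);  /* full  = 0 */
    sem_init(&mutex, 0, 1);       /* mutex = 1 */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}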
8. (b) Suppose the head of a moving-head disk with 200 tracks is currently at track 143 and has just
finished a request at track 125. The queue of requests is in the following order:
86, 147, 91, 177, 95, 155, 106, 177, 133
What is the total head movement to satisfy these requests for the following disk scheduling
algorithms:
i) FCFS
ii) SSTF
iii) SCAN
iv) CSCAN
Ans:
i) FCFS
Service order: 143 → 86 → 147 → 91 → 177 → 95 → 155 → 106 → 177 → 133

Total Head Movement = (143-86) + (147-86) + (147-91) + (177-91) + (177-95) + (155-95)
+ (155-106) + (177-106) + (177-133)
= 57 + 61 + 56 + 86 + 82 + 60 + 49 + 71 + 44
= 566 cylinders

ii) SSTF
Service order: 143 → 147 → 155 → 133 → 106 → 95 → 91 → 86 → 177 → 177
(From track 155, tracks 133 and 177 are equidistant; the tie is broken by continuing toward 133.)

Total Head Movement = (155-143) + (155-86) + (177-86)
= 12 + 69 + 91
= 172 cylinders

iii) SCAN (the head is moving toward higher-numbered tracks, since it came from track 125)
Service order: 143 → 147 → 155 → 177 → 177 → 199 → 133 → 106 → 95 → 91 → 86

Total Head Movement = (199-143) + (199-86)
= 56 + 113
= 169 cylinders
iv) C-SCAN
Service order: 143 → 147 → 155 → 177 → 177 → 199 → 0 → 86 → 91 → 95 → 106 → 133

Total Head Movement = (199-143) + (133-0)
= 56 + 133
= 189 cylinders
(the return seek from track 199 to track 0 is not counted)
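
The FCFS total can be verified with a short C sketch (not part of the original answer):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int queue[] = {86, 147, 91, 177, 95, 155, 106, 177, 133};
    int n = sizeof(queue) / sizeof(queue[0]);
    int head = 143, total = 0;

    for (int i = 0; i < n; i++) {
        total += abs(queue[i] - head);   /* seek distance for this request */
        head = queue[i];
    }
    printf("FCFS total head movement = %d cylinders\n", total);   /* 566 */
    return 0;
}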
9. (a) Discuss the security problem in computer system. (8)
Ans:
The system is secure if its resources are used and accessed as intended under all
circumstances. Unfortunately, total security cannot be achieved. Security violations (or misuse) of
the system can be categorized as intentional (malicious) or accidental. It is easier to protect against
accidental misuse than against malicious misuse. For the most part, protection mechanisms are the
core of protection from accidents. The following list includes forms of accidental and malicious
security violations.
• Breach of confidentiality. This type of violation involves unauthorized reading of data (or theft
of information). Typically, a breach of confidentiality is the goal of an intruder. Capturing secret
data from a system or a data stream, such as credit-card information or identity information for
identity theft, can result directly in money for the intruder.
• Breach of integrity. This violation involves unauthorized modification of data. Such attacks can,
for example, result in passing of liability to an innocent party or modification of the source code of
an important commercial application.
• Breach of availability. This violation involves unauthorized destruction of data. Some crackers
would rather wreak havoc and gain status or bragging rights than gain financially. Web-site
defacement is a common example of this type of security breach.
• Theft of service. This violation involves unauthorized use of resources. For example, an intruder
(or intrusion program) may install a daemon on a system that acts as a file server.
• Denial of service. This violation involves preventing legitimate use of the system. Denial-of-
service, or DOS, attacks are sometimes accidental.
Attackers use several standard methods in their attempts to breach security. The most
common is masquerading, in which one participant in a communication pretends to be someone
else (another host or another person). By masquerading, attackers breach authentication, the
correctness of identification; they can then gain access that they would not normally be
allowed, or escalate their privileges, obtaining privileges to which they would not normally be
entitled.
Another common attack is to replay a captured exchange of data. A replay attack consists
of the malicious or fraudulent repeat of a valid data transmission. Sometimes the replay comprises
the entire attack— for example, in a repeat of a request to transfer money. But frequently it is done
along with message modification, again to escalate privileges. Consider the damage that could be
done if a request for authentication had a legitimate user's information replaced with an
unauthorized user's.
Yet another kind of attack is the man-in-the-middle attack, in which an attacker sits in the
data flow of a communication, masquerading as the sender to the receiver, and vice versa. In a
network communication, a man-in-the-middle attack may be preceded by a session hijacking, in
which an active communication session is intercepted.

9. (b) What are the main difference between Capability list & access List. (5)
Ans:
Access List vs. Capability List:

1. An access list is a list, kept with each object, that specifies the user names and the types of
access allowed for each user. A capability list is a list of objects coupled with the operations
allowed on those objects.

2. Access lists are kept with each file and indicate which users are allowed to perform which
operations. Capability lists are kept with each user and indicate which files may be accessed,
and in what ways.

3. Access lists are one way of recording access rights in a computer system and are frequently
used in file systems. Capabilities provide a single unified mechanism to (a) address both primary
and secondary memory, (b) access both hardware and software resources, and (c) protect objects
in both primary and secondary memory.

4. In principle, an access list is an exhaustive enumeration of the specific access rights of all
entities that are authorized to access a given object. A capability is a token or ticket that gives
the subject possessing it permission to access a specific object in the specified manner; a
capability may be represented as a data structure consisting of two items of information: a
unique object identifier and the access rights to that object.

5. In systems that employ access lists, a separate list is maintained for each object. Usually the
owner has the exclusive right to define and modify the related access list, and can revoke the
access rights granted to a particular subject or domain simply by modifying or deleting the
related entry. Capability-based systems combine the addressing and protection functions in a
single unified mechanism that is used to access all system objects; a list of capabilities is
associated with each subject.

6. In an access-list system, a subject can name any object. In a capability-based system, a
subject can name only those objects for which it holds capabilities.

7. An access list is obtained by decomposing the access matrix by columns; a capability list is
obtained by decomposing the access matrix by rows.

10. (a) Why it is difficult to protect a system in which users are allowed to do their own I/O. (7)
Ans:
1. Data protection attempts to ensure the security of computer-processed data from
unauthorized access, from destructive user actions, and from computer failure. With
increasing use of computer-based information systems, there has been increasing concern
for the protection of computer-processed data.
2. In many applications, however, questions of data protection require explicit consideration
in their own right. Data protection must deal with two general problems. First, data must
be protected from unauthorized access and tampering. This is the problem of data security.
3. If users are allowed to do their own I/O then they may disrupt the normal operation of the
system by issuing illegal I/O instructions, by accessing memory locations within the
operating system itself, or by refusing to relinquish the CPU.
4. Second, data must be protected from errors by authorized system users, in effect to protect
users from their own mistakes. This is the problem of error prevention.
5. Concern for data security will take different forms in different system applications.
Individual users may be concerned with personal privacy, and wish to limit access to
private data files. Corporate organizations may seek to protect data related to proprietary
interests. Military agencies may be responsible for safeguarding data critical to national
security.
6. The mechanisms for achieving security will vary accordingly. Special passwords might be
required to access private files. Special log-on procedures might be required to assure
positive identification of authorized users, with records kept of file access and data
changes. Special confirmation codes might be required to validate critical commands.
7. At the extreme, measures instituted to protect data security may be so stringent that they
handicap normal system operations. Imagine a system in which security measures are
designed so that every command must be accompanied by a continuously changing
validation code which a user has to remember. Imagine further that when the user makes a
code error, which can easily happen under stress, the command sequence is interrupted to
re-initiate a user identification procedure. In such a system, there seems little doubt that
security measures will reduce operational effectiveness.
8. It seems probable, however, that absolute data security can never be attained in any
operational information system. There will always be some reliance on human judgment,
as for example in the review and release of data transmissions, which will leave systems in
some degree vulnerable to human error. Thus a continuing concern in user interface design
must be to reduce the likelihood of errors, and to mitigate the consequences of those errors
that do occur.
9. Consider the following example. In one computer center, an operator must enter a
command "$U" to update an archive tape by writing a new file at the end of the current
record, while the command "$O" will overwrite the new file at the beginning of the tape so
that all previous records are lost. A difference of one keystroke could obliterate the records
of years of previous work.
10. In systems where information handling requires the coordinated action of multiple users, it
may be appropriate that one user can change data that will be used by others. But when
multiple users will act independently, then care should be taken to ensure that they will not
interfere with one another. Extensive system testing under conditions of multiple uses may
be needed to determine that unwanted interactions do not occur.
11. When one user's actions can be interrupted by another user, as in defined emergency
situations, that interruption should be temporary and non-destructive. The interrupted user
should subsequently be able to resume operation at the point of interruption without data
loss.

10. (b) What are advantages of encrypting data in computer system? (6)
Ans:
1. Data encryption refers to the process of transforming electronic information into a scrambled form
that can only be read by someone who knows how to translate the code.
2. Encryption is important in the business world because it is the easiest and most practical method of
protecting data that is stored, processed, or transmitted electronically.
3. It is vital to electronic commerce, for example, because it allows merchants to protect customers'
credit card numbers and personal information from computer hackers or competitors.
4. It is also commonly used to protect legal contracts, sensitive documents, and personal messages
that are sent over the Internet. Without encryption, this information could be intercepted and
altered or misused by outsiders.
5. In addition, encryption is used to scramble sensitive information that is stored on business
computer networks, and to create digital signatures to authenticate e-mail and other types of
messages sent between businesses.
6. The main benefit of data encryption is that even if your computer is lost, infected by
malware, or hacked, the data stored on it remains safe.
7. A file encrypted by one user cannot be opened by another user who does not possess the
appropriate permissions.