OS_Model_Solutions
Section – A
Q 1. a) Define Operating System. Explain Batch, Time Sharing & Real Time Operating
System. (7)
Ans:
An operating system is a program that controls the execution of application programs and acts as
an interface between the user of a computer and the computer hardware.
An operating system is concerned with the allocation of resources and services, such as
memory, processors, devices and information. The operating system correspondingly includes
programs to manage these resources, such as a traffic controller, a scheduler, a memory
management module, I/O programs, and a file system.
Batch System
A batch operating system is one where programs and data are collected together in a batch
before processing starts. A job is a predefined sequence of commands, programs and data
combined into a single unit.
The figure below shows the memory layout for a simple batch system. Memory management
in a batch system is very simple: memory is usually divided into two areas, one for the
operating system and one for the user program.
i) Sequential Access:
The simplest access method is sequential access. Information in the file is processed in
order, one record after the other. Editors and compilers usually access files in this fashion.
Reads and writes make up the bulk of the operations on a file. A read operation—read
next—reads the next portion of the file and automatically advances a file pointer, which tracks the
I/O location. Similarly, the write operation—write next—appends to the end of the file and
advances to the end of the newly written material (the new end of file). Such a file can be reset to
the beginning, and on some systems a program may be able to skip forward or backward n
records for some integer n. Sequential access is based on a tape model of a file.
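A minimal illustration in C (the file name and record size are hypothetical): each fread below is a "read next" that returns the next record and advances the file position automatically.

#include <stdio.h>

int main(void) {
    /* Open a hypothetical data file for sequential access. */
    FILE *fp = fopen("records.dat", "rb");
    if (fp == NULL)
        return 1;

    char record[128];   /* one fixed-size record (illustrative size) */

    /* "Read next": each fread returns the next record in order
       and advances the file pointer automatically. */
    while (fread(record, sizeof record, 1, fp) == 1) {
        /* ... process the record ... */
    }
    fclose(fp);
    return 0;
}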
Multiprogramming:
- The capability of a system to simultaneously run two or more programs at a time.
- The user cannot interact with the system.
- The simultaneous execution of two or more programs or instruction sequences by separate
  CPUs under integrated control.
Multitasking:
- The ability of a system to perform more than one task at a time.
- The user can interact with the system.
- The concurrent or interleaved execution of two or more jobs by a single CPU.
Example (applies to both): if you are printing a document of 100 pages, then while the computer
is performing that, you can still do other jobs such as typing a new document. So more than one
task is performed.
Linked Allocation
With linked allocation, each file is a linked list of disk blocks; the disk blocks may be
scattered anywhere on the disk.
Each directory entry has a pointer to the first disk block of the file; this pointer is
initialized to nil to signify an empty file.
There is no external fragmentation with linked allocation, and any free block on the free-
space list can be used to satisfy a request. There is no need to declare the size of a file
when that file is created. A file can continue to grow as long as there are free
blocks.
The major problem is that linked allocation can be used effectively only for sequential-access
files. To find the ith block of a file we must start at the beginning of that file and follow
the pointers until we get to the ith block. Each access to a pointer requires a disk read and
sometimes a disk seek.
Another drawback of linked allocation is the space required for the pointers. If a pointer
requires 4 bytes out of a 512-byte block, then 0.78 percent of the disk is being used for
pointers rather than for information. The usual solution to this problem is to collect blocks
into multiples, called clusters, and to allocate clusters rather than blocks.
An important variation on the linked allocation method is the use of a file allocation
table (FAT). The table has one entry for each disk block and is indexed by block number.
The directory entry contains the block number of the first block of the file. The table
entry indexed by that block number then contains the block number of the next
block in the file. This chain continues until the last block, which has a special end-of-file
value as its table entry. Unused blocks are indicated by a 0 table value. Allocating a new
block to a file is a simple matter of finding the first 0-valued table entry and replacing the
previous end-of-file value with the address of the new block. The 0 is then
replaced with the end-of-file value.
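The chain-following logic can be sketched in C as below; the in-memory FAT array and the marker values are assumptions for illustration:

#include <stdint.h>

#define FAT_EOF  0xFFFFFFFFu  /* assumed end-of-file marker      */
#define FAT_FREE 0u           /* 0 marks an unused disk block    */

/* Follow the FAT chain to find the block number of the i-th
   block of a file, given the file's first block number.
   Returns FAT_EOF if the file has fewer than i+1 blocks. */
uint32_t fat_nth_block(const uint32_t *fat, uint32_t first, unsigned i) {
    uint32_t block = first;
    while (i-- > 0 && block != FAT_EOF)
        block = fat[block];   /* each table entry holds the next block */
    return block;
}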
Indexed Allocation
Linked allocation cannot support efficient direct access, since the pointers to the
blocks are scattered with the blocks themselves all over the disk and need to be
retrieved in order. Indexed allocation solves this problem by bringing all the pointers
together into one location: the index block.
Each file has its own index block, which is an array of disk-block addresses. The ith entry
in the index block points to the ith block of the file. The directory contains the
address of the index block.
When the file is created, all pointers in the index block are set to nil. When the ith block is
first written, a block is obtained from the free-space manager, and its address is put in the
ith index-block entry.
Indexed allocation supports direct access without suffering from external fragmentation,
because any free block on the disk may satisfy a request for more space.
Indexed allocation does suffer from wasted space: the pointer overhead of the index block
is generally greater than the pointer overhead of linked allocation.
1. Linked scheme. An index block is normally one disk block. Thus, it can be read and
written directly by itself. To allow for large files, several index blocks may be linked together.
2. Multilevel index. A variant of the linked representation is to use a first-level
index block to point to a set of second-level index blocks, which in turn point to
the file blocks. To access a block, the operating system uses the first-level index
to find a second-level index block, and that block to find the desired data block.
Fig: Index Allocation
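In C-like terms, locating the i-th data block of a file then reduces to one array lookup in the file's index block; the block size and nil marker below are illustrative assumptions:

#include <stdint.h>

#define NIL_BLOCK 0u          /* assumed "nil" pointer value          */
#define PTRS_PER_BLOCK 128    /* e.g. 512-byte block holding 4-byte   */
                              /* disk-block addresses                 */

/* An index block is just an array of disk-block addresses. */
typedef struct {
    uint32_t ptr[PTRS_PER_BLOCK];
} index_block;

/* Direct access: the i-th entry points to the i-th file block. */
uint32_t nth_block(const index_block *ib, unsigned i) {
    if (i >= PTRS_PER_BLOCK)
        return NIL_BLOCK;     /* beyond single-level index capacity */
    return ib->ptr[i];        /* NIL_BLOCK if not yet written       */
}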
Buffering: It is a method of overlapping the input, output and processing of a single job. After
data has been read and the CPU is about to start operating on it, the input device is instructed to begin
the next input immediately. The CPU and the input device are then both busy. By the time the
CPU is ready for the next data item, the input device will have finished reading it. The CPU can
then begin processing the newly read data, while the input device starts to read the following data.
The same can be done for output: the CPU creates data that is put into a buffer until an
output device can accept it. If the CPU is fast, then for input it always finds an empty buffer and for
output it always finds a full buffer; in both cases the CPU has to wait for the input or output device.
Buffering thus overlaps the input, output and processing of a single job.
Spooling: It stands for Simultaneous Peripheral Operation On-Line. With disk technology, rather than
cards being read from the card reader directly into memory and the job then being processed,
cards are read from the card reader onto the disk. The location of the card images is
recorded in a table kept by the OS. When the job is executed, the OS satisfies its requests for
card-reader input by reading from the disk. Similarly, when the job requests the printer to output
a line, that line is copied into a system buffer and written to the disk. When the job is
completed, the output is actually printed. This form of processing is called spooling. Spooling
allows the CPU to overlap the input of one job with the computation and output of other jobs.
Q 3. a) For each process listed in the table below, draw a Gantt chart illustrating its execution using:
i) Round Robin (Time quantum= 3)
ii) Priority Scheduling
iii) First Come First Serve
iv) Shortest Job First
Process Burst Time Priority
A 10 2
B 6 5
C 2 3
D 4 1
E 8 4
Also compare each algorithm on the basis of average waiting time and turnaround time
calculated. (10)
Ans:
Gantt Charts:
i) Round Robin (Time quantum = 3):
A B C D E A B D E A E A
0 3 6 8 11 14 17 20 21 24 27 29 30
ii) Priority Scheduling:
D A C E B
0 4 14 16 24 30
iii) First Come First Serve:
A B C D E
0 10 16 18 22 30
iv) Shortest Job First:
C D B E A
0 2 6 12 20 30
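Since all processes arrive at time 0, turnaround time (TAT) equals completion time, and waiting time (WT) = TAT - burst time. Reading completion times off the charts above:

i) Round Robin: TAT A=30, B=20, C=8, D=21, E=29; average TAT = 108/5 = 21.6
   WT A=20, B=14, C=6, D=17, E=21; average WT = 78/5 = 15.6
ii) Priority: average TAT = (4+14+16+24+30)/5 = 17.6; average WT = (0+4+14+16+24)/5 = 11.6
iii) FCFS: average TAT = (10+16+18+22+30)/5 = 19.2; average WT = (0+10+16+18+22)/5 = 13.2
iv) SJF: average TAT = (2+6+12+20+30)/5 = 14.0; average WT = (0+2+6+12+20)/5 = 8.0

For this workload, SJF gives the lowest average waiting and turnaround times, while Round Robin gives the highest.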
Q 4. a) Suppose the head of a moving-head disk with 200 tracks, numbered 0 to 199, is currently
serving a request at track 140. The arriving requests are kept in FIFO order. The requests are for tracks
84, 147, 91, 177, 94, 150, 102, 175, 130
Assuming the earlier direction of head movement was towards track zero, calculate the total head
movement for the following disk scheduling algorithms.
i) SSTF ii) SCAN iii) C-SCAN iv) FCFS (8)
Ans:
i) SSTF
[Figure: SSTF head movement, tracks 0-199]
ii) SCAN
[Figure: SCAN head movement, tracks 0-199]
iii) C-SCAN
[Figure: C-SCAN head movement, tracks 0-199]
iv) FCFS
[Figure: FCFS head movement, tracks 0-199]
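Starting at track 140 with the previous movement towards track 0 (and, by assumption, counting the full travel to the disk ends for SCAN and the wrap-around jump for C-SCAN), the head movements work out as follows:

i) SSTF: 140 -> 147 -> 150 -> 130 -> 102 -> 94 -> 91 -> 84 -> 175 -> 177
   Total = 7 + 3 + 20 + 28 + 8 + 3 + 7 + 91 + 2 = 169 tracks
ii) SCAN: 140 -> 130 -> 102 -> 94 -> 91 -> 84 -> 0, then reverse: 147 -> 150 -> 175 -> 177
   Total = 140 + 177 = 317 tracks
iii) C-SCAN: 140 -> 130 -> 102 -> 94 -> 91 -> 84 -> 0, jump to 199, then 177 -> 175 -> 150 -> 147
   Total = 140 + 199 + 52 = 391 tracks
iv) FCFS: 140 -> 84 -> 147 -> 91 -> 177 -> 94 -> 150 -> 102 -> 175 -> 130
   Total = 56 + 63 + 56 + 86 + 83 + 56 + 48 + 73 + 45 = 566 tracks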
Sector queuing is an algorithm for scheduling fixed-head devices. It is based on the division of
each track into a fixed number of blocks called sectors. The disk address in each request specifies
the track and sector. Since seek time is zero for fixed-head devices, the main component of service
time is latency. Sector queuing is primarily used with fixed-head devices: if there is more than one
request for service within a particular track or cylinder, sector queuing can be used to order the
multiple requests within the same cylinder.
Example: Assume the head is currently over sector 2 and the first request in the queue is for sector
12. To service this request, we wait until sector 12 revolves under the read/write heads. If there is a
request in the queue for sector 5, it can be serviced before the request for sector 12 without causing
the request for sector 12 to be delayed.
Sector queuing defines a separate queue for each sector of the drum. When a request arrives
for sector i, it is placed in the queue for sector i.
ii) Thrashing
Consider a process that does not have ''enough" frames. If the process does not have the
number of frames it needs to support pages in active use, it will quickly page-fault. At this point, it
must replace some page. As all its pages are in active use, it must replace a page that will be
needed again right away. Consequently, it quickly faults again, and again, and again, replacing
pages that it must bring back in immediately. This high paging activity is called "thrashing". A
process is thrashing if it is spending more time paging than executing.
Fig: Thrashing
Thrashing results in severe performance problems. The operating system monitors CPU
utilization. If CPU utilization is too low, we increase the degree of multiprogramming by
introducing a new process to the system. A global page-replacement algorithm is used; it replaces
pages without regard to the process to which they belong. Now suppose that a process enters a
new phase in its execution and needs more frames. It starts faulting and taking frames away from
other processes. These processes in turn also fault for pages, taking frames from other processes.
As processes wait for the paging device, CPU utilization decreases. The CPU scheduler sees the
decreasing CPU utilization and increases the degree of multiprogramming. Thrashing has
occurred, and system throughput plunges. The page-fault rate increases tremendously as a result,
the effective memory-access time increases. No work is getting done, because the processes are
spending all their time paging.
iii) Overlays
The entire program and data of a process must be in the physical memory for the process
to execute. The size of a process is limited to the size of physical memory. If a process is larger
than the amount of memory, a technique called overlays can be used.
The idea of overlays is to keep in memory only those instructions and data that are needed at any
given time. When other instructions are needed, they are loaded into space that was previously
occupied by instructions that are no longer needed. Overlays are implemented by the user; no special
support is needed from the operating system, but the programming design of an overlay structure is
complex.
Example: Consider a two-pass assembler.
o Pass 1 constructs a symbol table.
o Pass 2 generates machine-language code.
Assume the following sizes: Pass 1 needs 70 KB, Pass 2 needs 80 KB, the symbol table 20 KB, and
the common routines 30 KB.
To load everything at once, we need 200 KB of memory. If only 150 KB is available, we cannot
run our process. Notice that Pass 1 and Pass 2 do not need to be in memory at the same time. So we
define two overlays:
– Overlay A: symbol table, common routines, and Pass 1.
– Overlay B: symbol table, common routines, and Pass 2.
We add a 10 KB overlay driver and start with overlay A in memory. When Pass 1 finishes, we
jump to the overlay driver, which reads overlay B into memory, overwriting overlay A, and transfers
control to Pass 2. Overlay A needs 130 KB and overlay B needs 140 KB.
iv) Paged Segmentation
Large segments lead to the idea of paging them and bringing into main memory only those pages
which are necessary. The paged segmentation scheme is as follows:
1. A virtual address becomes a segment number, a page within that segment, and an offset
within the page.
2. The segment number indexes into the segment table which yields the base address of the
page table for that segment.
3. The remainder of the address (page number and offset) is checked against the limit of the
segment.
4. The page number is used to index the page table. The entry in the page number is the frame
number.
5. The frame and the offset is added to get the physical address which is used to refer the data
of interest in the main memory.
v) Spatial Locality
Locality of reference, also known as the principle of locality, is the phenomenon of
the same value, or related storage locations, being frequently accessed. There are two basic types
of reference locality: temporal locality and spatial locality.
Spatial locality refers to the use of data elements within relatively close storage
locations. If a particular memory location is referenced at a particular time, then it is likely that
nearby memory locations will be referenced in the near future. In this case it is common to attempt
to guess the size and shape of the area around the current reference for which it is worthwhile to
prepare faster access.
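A classic illustration in C: traversing a two-dimensional array row by row touches adjacent memory locations and so exploits spatial locality, whereas traversing the same array column by column does not.

#define N 1024

/* Row-major traversal: consecutive iterations touch adjacent
   addresses, so each cache line fetched is fully used. */
long sum_rows(const int a[N][N]) {
    long s = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

/* Column-major traversal of the same array: successive accesses
   are N*sizeof(int) bytes apart, defeating spatial locality. */
long sum_cols(const int a[N][N]) {
    long s = 0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += a[i][j];
    return s;
}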
Section – B
Q 6. a)What are different types of memory fragmentation? Under what circumstances does
each occur? (4)
Ans:
Following are the different types of memory fragmentation:
i) Internal Fragmentation:
When partitioning is static, memory is wasted in each partition where an object of smaller
size than the partition itself is loaded. This wasting of memory within a partition, due to the
difference between the size of the partition and of the object resident within it, is called internal
fragmentation.
Internal fragmentation occurs when memory is internal to a region but is not being
used.
ii) External Fragmentation:
When partitioning is dynamic, free memory is broken into small, non-contiguous holes as
processes are loaded and removed. External fragmentation occurs when enough total free memory
exists to satisfy a request, but it is not contiguous, so no single hole is large enough.
i) FIFO Replacement
Reference String 4 1 2 1 5 4 1 2 1 5
Frame 1 4 4 4 5* 5 5 2* 2
Frame 2 1 1 1 4* 4 4 5*
Frame 3 2 2 2 1* 1 1
Page Fault # # # # # # # #
ii) LRU Replacement
Reference String 4 1 2 1 5 4 1 2 1 5
Frame 1 4 4 4 5* 5 2* 2
Frame 2 1 1 1 1 1 1
Frame 3 2 2 4* 4 5*
Page Fault # # # # # # #
iii) Optimal Replacement
Reference String 4 1 2 1 5 4 1 2 1 5
Frame 1 4 4 4 4 2*
Frame 2 1 1 1 1
Frame 3 2 5* 5
Page Fault # # # # #
FIFO Replacement (reference string 1 2 3 4 1 2 5 1 2 3 4 5, illustrating Belady's anomaly):
Reference String 1 2 3 4 1 2 5 1 2 3 4 5
Frame 1 1 1 1 4* 4 4 5* 5 5
Frame 2 2 2 2 1* 1 1 3* 3
Frame 3 3 3 3 2* 2 2 4*
Page Fault # # # # # # # # #
No. of page fault with 3 page frame: 9
Reference String 1 2 3 4 1 2 5 1 2 3 4 5
Frame 1 1 1 1 1 5* 5 5 5 4* 4
Frame 2 2 2 2 2 1* 1 1 1 5*
Frame 3 3 3 3 3 2* 2 2 2
Frame 4 4 4 4 4 3* 3 3
Page Fault # # # # # # # # # #
No. of page fault with 4 page frame: 10
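The fault counts above can be verified with a small simulator; a minimal FIFO sketch in C (the frame limit of 16 is an assumption of the sketch):

#include <stdio.h>

/* Count page faults for FIFO replacement. */
int fifo_faults(const int *refs, int n, int frames) {
    int mem[16];                   /* resident pages (frames <= 16) */
    int used = 0, next = 0, faults = 0;

    for (int r = 0; r < n; r++) {
        int hit = 0;
        for (int f = 0; f < used; f++)
            if (mem[f] == refs[r]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < frames)
            mem[used++] = refs[r]; /* free frame still available */
        else {
            mem[next] = refs[r];   /* evict the oldest page      */
            next = (next + 1) % frames;
        }
    }
    return faults;
}

int main(void) {
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    printf("3 frames: %d faults\n", fifo_faults(refs, 12, 3)); /* 9  */
    printf("4 frames: %d faults\n", fifo_faults(refs, 12, 4)); /* 10 */
    return 0;
}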
Safe State
A state is safe if the system can allocate resources to each process (up to its maximum) in
some order and still avoid a deadlock. A system is in a safe state only if there exists a safe
sequence. A sequence of processes <P1, P2, ..., Pn> is a safe sequence for the current allocation
state if, for each Pi, the resource requests that Pi can still make can be satisfied by the currently
available resources plus the resources held by all Pj, with j < i. In this situation, if the resources
that Pi needs are not immediately available, then Pi can wait until all Pj have finished. When they
have finished, Pi can obtain all of its needed resources, complete its designated task, return its
allocated resources, and terminate. When Pi terminates, Pi+1 can obtain its needed resources, and
so on. If no such sequence exists, then the system state is said to be unsafe.
Resource-Allocation-Graph Algorithm
In addition to the request and assignment edges in resource allocation graph, a new type of
edge, called a claim edge, is introduced in this algorithm. A claim edge Pi → Rj indicates that
process Pi may request resource Rj at some time in the future. This edge resembles a request edge
in direction but is represented in the graph by a dashed line. When process Pi requests resource Rj,
the claim edge Pi → Rj is converted to a request edge. Similarly, when a resource Rj is released
by Pi, the assignment edge Rj → Pi is reconverted to a claim edge Pi → Rj. Before process Pi
starts executing, all its claim edges must already appear in the resource-allocation graph. This
condition can be relaxed by allowing a claim edge Pi → Rj to be added to the graph only if all
the edges associated with process Pi are claim edges.
Suppose that process Pi requests resource Rj; the request can be granted only if converting the request
edge to an assignment edge does not result in the formation of a cycle in the resource-allocation
graph. If no cycle exists, then the allocation of the resource will leave the system in a safe state. If
a cycle is found, then the allocation will put the system in an unsafe state, and process Pi will
have to wait for its request to be satisfied.
Banker's Algorithm
The Banker's algorithm is applicable to a resource-allocation system with multiple
instances of each resource type, but it is less efficient than the resource-allocation-graph scheme. When
a new process enters the system, it must declare the maximum number of instances of each
resource type that it may need. This number may not exceed the total number of resources in the
system. When a user requests a set of resources, the system must determine whether the allocation
of these resources will leave the system in a safe state. If it will, the resources are allocated;
otherwise, the process must wait until some other process releases enough resources.
Several data structures must be maintained to implement the banker's algorithm. Let n be
the number of processes in the system and m be the number of resource types. We need the
following data structures:
• Available: A vector of length m indicates the number of available resources of each type. If
Available[j] equals k, there are k instances of resource type Rj available.
• Max: An n x m matrix defines the maximum demand of each process. If Max[i][j] equals k, then
process Pi may request at most k instances of resource type Rj.
• Allocation: An n x m matrix defines the number of resources of each type currently allocated to
each process. If Allocation[i][j] equals k, then process Pi is currently allocated k instances of
resource type Rj.
• Need: An n x m matrix indicates the remaining resource need of each process. If Need[i][j]
equals k, then process Pi may need k more instances of resource type Rj to complete its task. Note
that Need[i][j] equals Max[i][j] - Allocation[i][j].
i) Safety Algorithm:
This algorithm for finding out whether or not a system is in a safe state. This algorithm
can be described, as follows:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize
Work = Available and Finish[i] = false for i = 0, 1, ..., n-1.
2. Find an i such that both
a. Finish[i] == false
b. Needi <= Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
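A direct transcription of these steps into C might look as follows (the fixed array bounds are an assumption of the sketch):

#include <stdbool.h>

#define MAXP 10   /* assumed upper bounds for the sketch */
#define MAXR 10

/* Returns true if the state described by available[] and the
   need and allocation matrices is safe. */
bool is_safe(int n, int m, const int available[MAXR],
             int need[MAXP][MAXR], int alloc[MAXP][MAXR]) {
    int work[MAXR];
    bool finish[MAXP] = { false };

    for (int j = 0; j < m; j++)          /* step 1 */
        work[j] = available[j];

    for (;;) {                           /* steps 2-3 */
        bool progressed = false;
        for (int i = 0; i < n; i++) {
            if (finish[i]) continue;
            bool fits = true;
            for (int j = 0; j < m; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (!fits) continue;
            for (int j = 0; j < m; j++)  /* simulate Pi finishing */
                work[j] += alloc[i][j];
            finish[i] = true;
            progressed = true;
        }
        if (!progressed) break;
    }

    for (int i = 0; i < n; i++)          /* step 4 */
        if (!finish[i]) return false;
    return true;
}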
The content of the matrix Need is defined to be Max - Allocation and is as follows:
By using the safety algorithm we can conclude that the system is currently in a safe state with
the sequence <P1, P3, P4, P2, P0>. Suppose now that process P1 requests one additional instance
of resource type A and two instances of resource type C, so Request1 = (1,0,2). To decide whether
this request can be immediately granted, we first check that Request1 < Available- that is, (1,0,2) <
(3,3,2), which is true. By using resource request algorithm this request has been fulfilled, and we
arrive at the following new state:
Applying the safety algorithm again to check whether the new system state is safe, we
get the safe sequence <P1, P3, P4, P0, P2>. Thus the request can be granted immediately.
Q 8. a) Explain the concept of semaphore? Give solution for reader’s/ writer’s problem
using semaphores. (8)
Ans:
Semaphore
A semaphore S is an integer variable that, apart from initialization, is accessed only
through two standard atomic operations: wait() and signal(). The wait() operation was originally
termed P; signal() was originally called V.
The definition of wait() is as follows:
wait(S) {
while S <= 0
; // no-op
S--;
}
The definition of signal() is as follows:
signal(S) {
S++;
}
Writer’s Process:
do {
wait(wrt);
…
writing is performed
…
signal(wrt);
} while (true);
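The writer above uses the shared data of the classical first readers-writers solution: semaphores wrt and mutex (both initialized to 1) and an integer readcount (initialized to 0). The matching reader process, sketched in the same style, is:

Reader’s Process:
do {
wait(mutex);
readcount++;
if (readcount == 1)
wait(wrt); // first reader locks out writers
signal(mutex);
…
reading is performed
…
wait(mutex);
readcount--;
if (readcount == 0)
signal(wrt); // last reader readmits writers
signal(mutex);
} while (true);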
Q 8. b) Explain the solution of producer- consumer problem with bounded buffer using
semaphore. (6)
Ans:
The producer-consumer problem can be stated as: given a set of cooperating processes, some
of which produce data items to be consumed by others, with a possible disparity between
consumption and production rates, devise a synchronization protocol that allows both producers
and consumers to operate concurrently at their respective service rates in such a way that produced
items are consumed in the exact order of production.
To allow the producer and consumer to operate concurrently, a pool of buffers is created that is
filled by the producer and emptied by the consumer. The producer produces into one buffer while
the consumer consumes from another. The processes should be synchronized in such a way that the
consumer does not consume an item that the producer has not yet produced.
At any particular time, the shared global buffer may be empty, partially filled or full of
produced items ready for consumption. A producer may run in either of the two former cases, but
when the buffer is full the producer must be kept waiting. On the other hand, when the buffer is
empty, the consumer must wait.
The solution for the producer is to either go to sleep or discard data if the buffer is full. The
next time the consumer removes an item from the buffer, it notifies the producer, who starts to fill
the buffer again. In the same way, the consumer can go to sleep if it finds the buffer to be empty.
The next time the producer puts data into the buffer, it wakes up the sleeping consumer. The
solution can be reached by means of inter-process communication, typically using semaphores.
The example below shows a general solution to the producer consumer problem using
semaphores. We assume that the pool consists of n buffers, each capable of holding one item. The
mutex semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to the
value 1. The empty and full semaphores count the number of empty and full buffers. The
semaphore empty is initialized to the value n; the semaphore full is initialized to the value 0.The
code for the producer and consumer process is shown below. We can interpret this code as the
producer producing full buffers for the consumer or as the consumer producing empty buffers for
the producer.
Shared data
semaphore full, empty, mutex;
Initially: full = 0, empty = n, mutex = 1
Producer’s Process:
do {
…
produce an item in nextp
…
wait(empty);
wait(mutex);
…
add nextp to buffer
…
signal(mutex);
signal(full);
} while (1);
Consumer’s Process:
do {
wait(full);
wait(mutex);
…
remove an item from buffer
…
signal(mutex);
signal(empty);
…
consume the item
…
} while (1);
Q9. a) Explain access matrix with copy, owner and control type of operation. (7)
Ans:
Model of protection can be viewed abstractly as a matrix, called an access matrix. The
rows of the access matrix represent domains, and the columns represent objects. Each entry in the
matrix consists of a set of access rights. Because the column defines objects explicitly, we can
omit the object name from the access right. The entry access(i,j) defines the set of operations that a
process executing in domain Di can invoke on object Oj.
To illustrate these concepts, we consider the access matrix shown in Figure below. There
are four domains and four objects—three files (F1, F2, F3) and one laser printer. A process
executing in domain D1 can read files F1 and F3. A process executing in domain D4 has the same
privileges as one executing in domain D1; but in addition, it can also write onto files F1 and F2.
Note that the laser printer can be accessed only by a process executing in domain D2.
Allowing controlled change in the contents of the access-matrix entries requires three
additional operations: copy, owner, and control.
The ability to copy an access right from one domain (or row) of the access matrix to
another is denoted by an asterisk (*) appended to the access right. The copy right allows the
copying of the access right only within the column for which the right is defined. For example, in
figure (a) below a process executing in domain D2 can copy the read operation into any entry
associated with file F2. Hence, the access matrix of figure (a) can be modified to the access matrix
shown in figure (b).
The owner right controls the addition and removal of rights: if access(i, j) includes the
owner right, then a process executing in domain Di can add and remove any right in any entry in
column j.
The control right applies only to domain objects: if access(i, j) includes the control right,
then a process executing in domain Di can remove any access right from row j, that is, it can
restrict what processes in domain Dj may do.
Fig (a): Access matrix with domains as objects. Fig (b): Modified access matrix.
a) Compiler-Based Enforcement
When protection is declared along with data typing, the designer of each subsystem can
specify its requirements for protection, as well as its need for use of other resources in a system.
Such a specification should be given directly as a program is composed, and in the language in
which the program itself is stated.
This approach has several significant advantages:
1. Protection needs are simply declared, rather than programmed as a sequence of calls on
procedures of an operating system.
2. Protection requirements can be stated independently of the facilities provided by a
particular operating system.
3. The means for enforcement need not be provided by the designer of a subsystem.
4. A declarative notation is natural because access privileges are closely related to the
linguistic concept of data type.
Virus:
- Program code that attaches itself to an application program; when the application program
  runs, the virus runs along with it.
- It inserts itself into a file or executable program.
- It has to rely on users transferring infected files or programs to other computer systems.
- It deletes or modifies files, and sometimes also changes the location of files.
- A virus is slower than a worm.
Worm:
- Code that replicates itself in order to consume resources and bring the system down.
- It exploits a weakness in an application or operating system by replicating itself.
- It can use a network to replicate itself to other computer systems without user intervention.
- Worms usually only monopolize the CPU and memory.
- A worm is faster than a virus.
A threat is the potential for a security violation, such as the discovery of a vulnerability.
Following are the various threats to computer security.
• Breach of confidentiality: This type of violation involves unauthorized reading of data (or theft
of information). Typically, a breach of confidentiality is the goal of an intruder. Capturing secret
data from a system or a data stream, such as credit-card information or identity information for
identity theft, can result directly in money for the intruder.
• Breach of integrity: This violation involves unauthorized modification of data. Such attacks
can, for example, result in passing of liability to an innocent party or modification of the source
code of an important commercial application.
• Breach of availability: This violation involves unauthorized destruction of data. Some crackers
would rather wreak havoc and gain status or bragging rights than gain financially. Web-site
defacement is a common example of this type of security breach.
• Theft of service: This violation involves unauthorized use of resources. For example, an intruder
(or intrusion program) may install a daemon on a system that acts as a file server.
• Denial of service: This violation involves preventing legitimate use of the system. Denial-of-
service, or DOS, attacks are sometimes accidental.
iv) Cryptography
There are many defenses against computer attacks, running the gamut from methodology
to technology. The broadest tool available to system designers and users is cryptography.
Cryptography is the art of protecting information by transforming it (encrypting it) into an
unreadable format, called cipher text. Only those who possess a secret key can decipher
(or decrypt) the message into plain text. Encrypted messages can sometimes be broken by
cryptanalysis, also called codebreaking, although modern cryptography techniques are virtually
unbreakable.
Following figure shows the basic model of cryptography:
Section - A
1. (a) What is an interrupt? Explain different types of interrupts with their significance to
operating system. (06)
Ans:
An interrupt is an exception, a change of the normal progression, or interruption in the
normal flow of program execution. An interrupt is essentially a hardware generated function call.
Interrupts are caused by both internal and external sources. An interrupt causes the normal
program execution to halt and for the interrupt service routine (ISR) to be executed. At the
conclusion of the ISR, normal program execution resumes at the point where it was interrupted.
An interrupt is an event external to the currently executing process that causes a change in the
normal flow of instruction execution. It causes a transfer of control to an interrupt service routine
(ISR); when the ISR is completed, the original program resumes execution. Interrupts provide an
efficient way to handle unanticipated events.
Following are the different types of interrupts:
Hardware Interrupt:
A hardware interrupt is an electronic alerting signal sent to the processor from an external
device, either a part of the computer itself such as a disk controller or an external peripheral. For
example, pressing a key on the keyboard or moving the mouse triggers hardware interrupts that
cause the processor to read the keystroke or mouse position. Hardware interrupts are
asynchronous and can occur in the middle of instruction execution, requiring additional care in
programming. The act of initiating a hardware interrupt is referred to as an interrupt request (IRQ).
Software Interrupt:
A software interrupt is caused either by an exceptional condition in the processor itself, or
a special instruction in the instruction set which causes an interrupt when it is executed. The
former is often called a trap or exception and is used for errors or events occurring during program
execution that are exceptional enough that they cannot be handled within the program itself. For
example, if the processor's arithmetic logic unit is commanded to divide a number by zero, this
impossible demand will cause a divide-by-zero exception, perhaps causing the computer to
abandon the calculation or display an error message. Software interrupt instructions function
similarly to subroutine calls and are used for a variety of purposes, such as to request services
from low level system software such as device drivers. For example, computers often use software
interrupt instructions to communicate with the disk controller to request data be read or written to
the disk.
1. (b) List & explain various services provided by operating system. (4)
Ans: Following are the various services provided by operating system:
i) Program Execution
ii) I/O Operation
iii) File system manipulation
iv) Communication
v) Error handling
vi) Resource Management
vii) Protection
1. Program execution
Operating system handles many kinds of activities from user programs to system programs
like printer spooler, name servers, file server etc. Each of these activities is encapsulated as a
process.
A process includes the complete execution context (code to execute, data to manipulate,
registers, OS resources in use). Following are the major activities of an operating system with
respect to program management.
Loads a program into memory.
Executes the program.
Handles program's execution.
Provides a mechanism for process synchronization.
Provides a mechanism for process communication.
Provides a mechanism for deadlock handling.
2. I/O Operation
Operating System manages the communication between user and device drivers.
Following are the major activities of an operating system with respect to I/O Operation.
I/O operation means read or write operation with any file or any specific I/O device.
Program may require any I/O device while running.
Operating system provides the access to the required I/O device when required.
3. File system manipulation
Programs need to read and write files, and to create and delete them by name. The
operating system provides these file operations and grants or denies access to files as appropriate.
4. Communication
In the case of distributed systems, which are collections of processors that do not share
memory, peripheral devices, or a clock, the operating system manages communication between
processes. Multiple processes communicate with one another through communication lines in the
network. The OS handles routing and connection strategies, and the problems of contention and
security. Following are the major activities of an operating system with respect to communication.
Two processes often require data to be transferred between them.
Both processes can be on one computer or on different computers connected
through a computer network.
Communication may be implemented by two methods: either by shared memory or by
message passing.
5. Error handling
Error can occur anytime and anywhere. Error may occur in CPU, in I/O devices or in the
memory hardware. Following are the major activities of an operating system with respect to error
handling.
OS constantly remains aware of possible errors.
OS takes the appropriate action to ensure correct and consistent computing.
6. Resource Management
In the case of a multi-user or multi-tasking environment, resources such as main memory, CPU
cycles and file storage are to be allocated to each user or job. Following are the major activities of
an operating system with respect to resource management.
OS manages all kind of resources using schedulers.
CPU scheduling algorithms are used for better utilization of CPU.
7. Protection
Protection refers to mechanism or a way to control the access of programs, processes, or
users to the resources defined by a computer systems. Following are the major activities of an
operating system with respect to protection.
OS ensures that all access to system resources is controlled.
OS ensures that external I/O devices are protected from invalid access attempts.
OS provides authentication feature for each user by means of a password.
Linked List
Another approach is to link together all the free disk blocks, keeping a pointer to
the first free block in a special location on the disk and caching it in memory. This first block
contains a pointer to the next free disk block, and so on. Block 2 would contain a pointer to
block 3, which would point to block 4, which would point to block 5, which would point to block
8, and so on. Usually, the operating system simply needs a free block so that it can
allocate that block to a file, so the first block in the free list is used.
Fig: Linked Free Space on a Disk
Grouping
A modification of the free-list approach is to store the addresses of n free blocks in
the first free block. The first n-1 of these blocks are actually free; the last block contains the
addresses of another n free blocks, and so on. The importance of this
implementation is that the addresses of a large number of free blocks can be found
quickly, unlike in the standard linked-list approach.
Counting
Several contiguous blocks may be allocated or freed simultaneously, particularly
when space is allocated with the contiguous-allocation algorithm or through clustering. Rather
than keeping a list of n free disk addresses, we can keep the address of the first free block and the
number n of free contiguous blocks that follow the first block. Each entry in the free-space list then
consists of a disk address and a count. Although each entry requires more space than would a
simple disk address, the overall list will be shorter, as long as count is generally greater than 1.
2. (a) What do you mean by PCB? Also explain process state & process state transition
diagram in detail. (6)
Ans:
Process Control Block: Each process is represented by a process control block (PCB). The PCB is
the data structure used by the operating system to group together all the information it
needs about a particular process. The figure below shows the process control block.
Process State
Process Number
Program Counter
CPU Registers
Memory Allocation
Event Information
.....
Fig: Process Control Block
1. Process State : Process state may be new, ready, running, waiting and so on.
2. Program Counter : It indicates the address of the next instruction to be executed for
this process.
3. Event information : For a process in the blocked state this field contains
information concerning the event for which the process is waiting.
4. CPU registers : These include general-purpose registers, stack pointers, index registers,
accumulators, etc. The number and type of registers depend entirely on the
computer architecture.
5. Memory Management Information : This information may include the value of base and
limit register. This information is useful for de-allocating the memory when the
process terminates.
6. Accounting Information : This information includes the amount of CPU and real time
used, time limits, job or process numbers, account numbers etc.
7. I/O Status Information: This information includes the list of I/O devices allocated to the
process, a list of open files and so on.
Process control block also includes the information about CPU scheduling, I/O
resource management, file management information, priority and so on. The PCB simply serves as
the repository for any information that may vary from process to process.
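These fields map naturally onto a structure; a simplified sketch in C (the field names, sizes and types are illustrative, not taken from any particular kernel):

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

/* Simplified process control block. */
typedef struct pcb {
    proc_state state;            /* process state                  */
    int           pid;           /* process number                 */
    unsigned long pc;            /* program counter                */
    unsigned long regs[16];      /* saved CPU registers            */
    unsigned long mem_base;      /* memory-management information: */
    unsigned long mem_limit;     /*   base and limit registers     */
    int           event_id;      /* event the process is awaiting  */
    long          cpu_time_used; /* accounting information         */
    int           open_files[16];/* I/O status: open-file table    */
    struct pcb   *next;          /* link for scheduler queues      */
} pcb;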
Process state: Process state is defined as the current activity of the process.
Process State Transition Diagram: When process executes, it changes state. Process state is
defined as the current activity of the process. Fig. 3.1 shows the general form of the
process state transition diagram. Process state contains five states. Each process is in one of
the states. The states are listed below.
1. New
2. Ready
3. Running
4. Waiting
5. Terminated (exit)
2. (b) Compare SCAN and C- LOOK disk scheduling algorithms with example. (6)
Ans:
SCAN scheduling algorithm:
The SCAN algorithm has the head start at track 0 and move towards the highest-
numbered track, servicing all requests for a track as it passes that track. The service
direction is then reversed and the scan proceeds in the opposite direction, again picking up
all requests in order. SCAN is guaranteed to service every request in one complete pass through
the disk, servicing requests in both directions of head movement.
C-LOOK scheduling algorithm:
In C-LOOK, the head goes only as far as the final request in its current direction, then
returns immediately to the first pending request at the other end, without travelling all the way to
the end of the disk; requests are serviced in one direction only.
Example:
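As an illustration (the request values are chosen for the example), consider a 200-track disk with the head at track 53, moving towards higher-numbered tracks, and the request queue 98, 183, 37, 122, 14, 124, 65, 67:

SCAN: 53 -> 65 -> 67 -> 98 -> 122 -> 124 -> 183 -> 199, then reverse: 37 -> 14
Head movement = (199 - 53) + (199 - 14) = 146 + 185 = 331 tracks
C-LOOK: 53 -> 65 -> 67 -> 98 -> 122 -> 124 -> 183, then jump back to 14 -> 37
Head movement = (183 - 53) + (183 - 14) + (37 - 14) = 130 + 169 + 23 = 322 tracks

C-LOOK thus avoids the idle travel to the ends of the disk and services requests in one direction only, while SCAN sweeps the full disk and services requests in both directions.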
Best fit: Allocate the smallest hole that is big enough. We must search the entire list, unless the
list is ordered by size. This strategy produces the smallest leftover hole.
Worst fit: Allocate the largest hole. Again, we must search the entire list, unless it is sorted by
size. This strategy produces the largest leftover hole, which may be more useful than the smaller
leftover hole from a best-fit approach.
4. (a)Explain the multilevel feedback queue CPU scheduling algorithm in detail. (6)
Ans:
When the multilevel queue scheduling algorithm is used, processes are permanently
assigned to a queue when they enter the system. If there are separate queues for foreground and
background processes, processes do not move from one queue to the other, since processes do not
change their foreground or background nature. This setup has the advantage of low scheduling
overhead, but it is inflexible.
The multilevel feedback-queue scheduling algorithm, in contrast, allows a process to move
between queues. The idea is to separate processes according to the characteristics of their CPU
bursts. If a process uses too much CPU time, it will be moved to a lower-priority queue. This
scheme leaves I/O-bound and interactive processes in the higher-priority queues. In addition, a
process that waits too long in a lower-priority queue may be moved to a higher-priority queue.
This form of aging prevents starvation.
A process entering the ready queue is put in queue 0. A process in queue 0 is given a time
quantum of 8 milliseconds. If it does not finish within this time, it is moved to the tail of queue 1.
If queue 0 is empty, the process at the head of queue 1 is given a quantum of 16 milliseconds. If it
does not complete, it is pre-empted and is put into queue 2. Processes in queue 2 are run on an
FCFS basis but are run only when queues 0 and 1 are empty.
In general, a multilevel feedback-queue scheduler is defined by the following parameters:
o The number of queues.
o The scheduling algorithm for each queue.
o The method used to determine when to upgrade a process to a higher-priority
queue.
o The method used to determine when to demote a process to a lower-priority queue.
o The method used to determine which queue a process will enter when that process
needs service.
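A minimal sketch in C of the demotion and aging rules for the three-queue example above (the queue structures and dispatcher are omitted; all names are illustrative):

#define NQUEUES 3

/* Time quantum per queue; queue 2 is effectively FCFS. */
static const int quantum[NQUEUES] = { 8, 16, -1 };

typedef struct {
    int pid;    /* process id                                       */
    int level;  /* current queue: 0 (highest) .. NQUEUES-1 (lowest) */
} task;

/* A task that exhausts its quantum without finishing is demoted
   one level: CPU-bound behaviour pushes it down. */
void on_quantum_expired(task *t) {
    if (t->level < NQUEUES - 1)
        t->level++;
}

/* A task that has waited too long at a low level is promoted;
   this aging rule is what prevents starvation. */
void on_aged(task *t) {
    if (t->level > 0)
        t->level--;
}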
Demand Paging:
A demand-paging system is similar to a paging system with swapping where processes
reside in secondary memory. When we want to execute a process, we swap it into memory. When
a process is to be swapped in, the pager guesses which pages will be used before the process is
swapped out again. Instead of swapping in a whole process, the pager brings only those necessary
pages into memory. Thus, it avoids reading into memory pages that will not be used anyway,
decreasing the swap time and the amount of physical memory needed.
With the demand paging we need some form of hardware support to distinguish between
the pages that are in memory and the pages that are on the disk. The valid-invalid bit scheme can
be used for this purpose. When this bit is set to "valid" the associated page is both legal and in
memory. If the bit is set to "invalid," the page either is not valid or is valid but is currently on the
disk. The page-table entry for a page that is brought into memory is set as usual, but the page-table
entry for a page that is not currently in memory is either simply marked invalid or contains the
address of the page on disk. This situation is depicted in Figure below:
5. (b) Explain working of Long- Term scheduler with the help of suitable diagram. (4)
Ans:
Long Term Scheduling
It is also called job scheduler. Long term scheduler determines which programs are
admitted to the system for processing. The job scheduler selects processes from the job queue and
loads them into memory for execution, making them available to the CPU scheduler. The
primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and
processor bound. It also controls the degree of multiprogramming. If the degree of
multiprogramming is stable, then the average rate of process creation must be equal to the
average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal; time-sharing
operating systems often have no long-term scheduler. The transition of a process from the new
state to the ready state is carried out by the long-term scheduler.
Fig: Thrashing
Different methods to minimize thrashing are as follows:
Working Set Model:
The working-set model is based on the assumption of locality. This model uses a
parameter, Δ, to define the working-set window. The idea is to examine the most recent Δ page
references. The set of pages in the most recent Δ page references is the working set, as in the figure
below. If a page is in active use, it will be in the working set. If it is no longer being used, it will
drop from the working set Δ time units after its last reference. Thus, the working set is an
approximation of the program's locality.
For example, given the sequence of memory references shown in the figure, if Δ = 10 memory
references, then the working set at time t1 is {1, 2, 5, 6, 7}. By time t2, the working set has
changed to {3, 4}.
The accuracy of the working set depends on the selection of Δ. If Δ is too small, it will not
encompass the entire locality; if Δ is too large, it may overlap several localities. In the extreme, if
Δ is infinite, the working set is the set of pages touched during the process execution. Once Δ has
been selected, use of the working-set model is simple. The operating system monitors the working
set of each process and allocates to that working set enough frames to provide it with its working-
set size. If there are enough extra frames, another process can be initiated. If the sum of the
working-set sizes increases, exceeding the total number of available frames, the operating system
selects a process to suspend. The process's pages are written out (swapped), and its frames are
reallocated to other processes. The suspended process can be restarted later.
This working-set strategy prevents thrashing while keeping the degree of
multiprogramming as high as possible. Thus, it optimizes CPU utilization. The difficulty with the
working-set model is keeping track of the working set. The working-set window is a moving
window. At each memory reference, a new reference appears at one end and the oldest reference
drops off the other end. A page is in the working set if it is referenced anywhere in the working-set
window.
Page-Fault Frequency:
The working-set model is successful, and knowledge of the working set can be useful for
pre-paging, but it seems a clumsy way to control thrashing. A strategy that uses the page-fault
frequency (PFF) takes a more direct approach.
The specific problem is how to prevent thrashing. Thrashing has a high page-fault rate.
Thus, we want to control the page-fault rate. When it is too high, we know that the process needs
more frames. Conversely, if the page-fault rate is too low, then the process may have too many
frames. We can establish upper and lower bounds on the desired page-fault rate. If the actual page-
fault rate exceeds the upper limit, we allocate the process another frame; if the page-fault rate falls
below the lower limit, we remove a frame from the process. Thus, we can directly measure and
control the page-fault rate to prevent thrashing. As with the working-set strategy, we may have to
suspend a process. If the page-fault rate increases and no free frames are available, we must select
some process and suspend it. The freed frames are then distributed to processes with high page-
fault rates.
No Preemption:
The third necessary condition for deadlocks is that there be no pre-emption of resources
that have already been allocated. To ensure that this condition does not hold, we can use the
following protocol. If a process is holding some resources and requests another resource that
cannot be immediately allocated to it (that is, the process must wait), then all resources currently
being held are preempted. In other words, these resources are implicitly released. The preempted
resources are added to the list of resources for which the process is waiting. The process will be
restarted only when it can regain its old resources, as well as the new ones that it is requesting.
Alternatively, if a process requests some resources, we first check whether they are available. If
they are, we allocate them. If they are not, we check whether they are allocated to some other
process that is waiting for additional resources. If so, we preempt the desired resources from the
waiting process and allocate them to the requesting process. If the resources are neither available
nor held by a waiting process, the requesting process must wait. While it is waiting, some of its
resources may be preempted, but only if another process requests them. A process can be restarted
only when it is allocated the new resources it is requesting and recovers any resources that were
pre-empted while it was waiting. This protocol is often applied to resources whose state can be
easily saved and restored later, such as CPU registers and memory space. It cannot generally be
applied to such resources as printers and tape drives.
Circular Wait:
The fourth and final condition for deadlocks is the circular-wait condition. One way to
ensure that this condition never holds is to impose a total ordering of all resource types and to
require that each process requests resources in an increasing order of enumeration. To illustrate,
we let R = {R1, R2, ..., Rm} be the set of resource types. We assign to each resource type a unique
integer number, which allows us to compare two resources and to determine whether one
precedes another in the ordering.
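For instance, with two resources protected by locks, the ordering rule means every process must acquire the lower-numbered lock first; a minimal POSIX-threads sketch (the lock names are illustrative):

#include <pthread.h>

/* Resources ordered by number: lock_a is R1, lock_b is R2. */
static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Every thread requests resources in increasing order (R1, then
   R2), so a cycle in the wait-for graph can never form. */
void use_both_resources(void) {
    pthread_mutex_lock(&lock_a);   /* R1 first, always */
    pthread_mutex_lock(&lock_b);   /* then R2          */
    /* ... use both resources ... */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
}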
7. (a)What is the critical section problem? How are semaphores used to avoid it? (8)
Ans:
Consider a system consisting of n processes {P0, P1 , ..., Pn-1}. Each process has a segment
of code, called a critical section, in which the process may be changing common variables,
updating a table, writing a file, and so on. The important feature of the system is that, when one
process is executing in its critical section, no other process is to be allowed to execute in its
critical section. That is, no two processes are executing in their critical sections at the same time.
The critical-section problem is to design a protocol that the processes can use to cooperate. Each
process must request permission to enter its critical section. The section of code implementing this
request is the entry section. The critical section may be followed by an exit section. The remaining
code is the remainder section. The general structure of a typical process Pi is shown in the figure
below. The entry section and exit section are enclosed in boxes to highlight these important
segments of code.
The problem of critical section can be avoided using synchronization tool called
semaphores. A semaphore S is an integer variable that, apart from initialization, is accessed only
through two standard atomic operations: wait () and signal (). The wait() operation was originally
termed P; signal () was originally called V.
The definition of wait() is as follows:
wait(S) {
while S <= 0
; // no-op
S--;
}
The definition of signal() is as follows:
signal(S) {
S++;
}
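Mutual exclusion then follows by sharing a semaphore mutex, initialized to 1, among the processes and bracketing each critical section with wait() and signal():

do {
wait(mutex); // entry section
critical section
signal(mutex); // exit section
remainder section
} while (true);

Because wait() and signal() are atomic, at most one process can be past wait(mutex) at any time, so no two processes execute in their critical sections simultaneously.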
7. (b) What are the necessary conditions for deadlock to occur? Explain in brief. (5)
Ans:
In a multiprogramming environment, several processes may compete for a finite
number of resources. A process requests resources; if the resources are not available at
that time, the process enters a wait state. It may happen that waiting processes will never
again change state, because the resources they have requested are held by other waiting processes.
This situation is called deadlock.
Necessary Conditions
A deadlock situation can arise if the following four conditions hold simultaneously in a
system:
1. Mutual exclusion: At least one resource must be held in a non-sharable mode, that is, only one
process at a time can use the resource. If another process requests that resource, the requesting
process must be delayed until the resource has been released.
2. Hold and Wait: There must exist a process that is holding at least one resource and is waiting
to acquire additional resources that are currently being held by other processes.
3. No preemption: Resources cannot be preempted; that is, a resource can be released only
voluntarily by the process holding it, after that process has completed its task.
4. Circular wait: There must exist a set {P0, P1, ..., Pn} of waiting processes such that P0 is
waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, ...,
Pn-1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held
by P0.
8. (b) Explain access list and capability list for implementation of access matrix. (6)
Ans:
Access Lists for Objects
1. Each column in the access matrix can be implemented as an access list for one object.
Obviously, the empty entries can be discarded.
2. The resulting list for each object consists of ordered pairs <domain, rights-set>, which
define all domains with a nonempty set of access rights for that object.
3. An access list is a list that specifies the user name and the types of access allowed for each
user.
4. Access Lists with each file, indicate which users are allowed to perform which operations.
5. Access List is one way of recording access rights in a computer system. They are
frequently used in file systems.
6. In principle, access list is an exhaustive enumeration of the specific access rights of all
entities that are authorized access to a given object.
7. In systems that employ access lists, a separate list is maintained for each object.
8. Usually the owner has the exclusive right to define and modify the related access list. The
owner of the object can revoke the access rights granted to a particular subject or domain
by simply modifying or deleting the related entry in the access list.
Capability Lists for Domains
1. Each row of the access matrix can be implemented as a capability list for one domain.
2. A capability list for a domain is a list of objects together with the operations allowed on
those objects.
3. An object is often represented by its physical name or address, called a capability.
4. To execute operation M on object Oj, the process simply executes the operation,
specifying the capability for Oj as a parameter; possession of the capability means that
access is allowed.
5. The capability list is associated with a domain, but it is never directly accessible to a
process executing in that domain; it is a protected object, maintained by the operating
system and accessed by the user only indirectly.
Initialize work = available = 1 5 2 0 and finish[i] = false for i = 0 to 4.
consider P0
Need0 <= work
0 0 0 0 <= 1 5 2 0
Condition true
work= work+allocation0
work=1 5 2 0 + 0 0 1 2
work= 1 5 3 2
finish[0]= true
consider P1
Need1 <= work
0 7 5 0 <= 1 5 3 2
Condition false
consider P2
Need2 <= work
1 0 0 2 <= 1 5 3 2
Condition true
work= work+allocation2
work=1 5 3 2 + 1 3 5 4
work= 2 8 8 6
finish[2]= true
consider P3
Need3 <= work
0 0 2 0 <= 2 8 8 6
Condition true
work= work+allocation3
work=2 8 8 6 + 0 6 3 2
work= 2 14 11 8
finish[3]= true
consider P4
Need4 <= work
0 6 4 2<= 2 14 11 8
Condition true
work= work+allocation4
work=2 14 11 8 + 0 0 1 4
work= 2 14 12 12
finish[4]= true
consider P1
Need1 <= work
0 7 5 0 <= 2 14 12 12
Condition true
work= work+allocation1
work= 2 14 12 12 + 1 0 0 0
work= 3 14 12 12
finish[1]= true
Yes, the system is in safe state and the safe sequence is <P0, P2, P3, P4, P1>.
request1= 0 4 2 0
Request1 <= Need1
0 4 2 0 <= 0 7 5 0
Condition true
available= available - request1
available= 1 5 2 0 - 0 4 2 0
available= 1 1 0 0
Allocation1= Allocation1+request1
Allocation1= 1 0 0 0 + 0 4 2 0
Allocation1= 1 4 2 0
Need1= Need1 - request1
Need1= 0 7 5 0 - 0 4 2 0
Need1= 0 3 3 0
initialize work= 1 1 0 0
finish[i]= false for i=0 to 4
consider P0
Need0 <= work
0 0 0 0 <= 1 1 0 0
Condition true
work= work+allocation0
work=1 1 0 0 + 0 0 1 2
work= 1 1 1 2
finish[0]= true
consider P1
Need1 <= work
0 3 3 0 <= 1 1 1 2
Condition false
consider P2
Need2 <= work
1 0 0 2 <= 1 1 1 2
Condition true
work= work+allocation2
work=1 1 1 2 + 1 3 5 4
work= 2 4 4 6
finish[2]= true
consider P3
Need3 <= work
0 0 2 0 <= 2 4 4 6
Condition true
work= work+allocation3
work=2 4 4 6 + 0 6 3 2
work= 2 10 9 8
finish[3]= true
consider P4
Need4 <= work
0 6 4 2<= 2 10 9 8
Condition true
work= work+allocation4
work=2 10 9 8 + 0 0 1 4
work= 2 10 10 12
finish[4]= true
consider P1
Need1 <= work
0 3 3 0 <= 2 10 10 12
Condition true
work= work+allocation1
work= 2 10 10 12 + 1 4 2 0
work= 3 14 12 12
finish[1]= true
Yes, the request can be granted immediately, as the system is in a safe state with the safe sequence
<P0, P2, P3, P4, P1>.
10. (a) What are the different schemes for implementing revocation rights. (6)
Ans:
In a dynamic protection system, we may sometimes need to revoke access rights to
objects shared by different users. With an access-list scheme, revocation is easy. The access list is
searched for any access rights to be revoked, and they are deleted from the list. Revocation is
immediate and can be general or selective, total or partial, and permanent or temporary.
Capabilities, however, present a much more difficult revocation problem. Since the capabilities are
distributed throughout the system, we must find them before we can revoke them.
Various schemes for implementing revocation rights are as follows:
Reacquisition: Periodically, capabilities are deleted from each domain. If a process wants
to use a capability, it may find that the capability has been deleted. The process may
then try to reacquire the capability. If access has been revoked, the process will not be
able to reacquire the capability.
Back pointers: A list of pointers is maintained with each object, pointing to all
capabilities associated with that object. When revocation is required, we can follow these
pointers, changing the capabilities as necessary. This scheme was adopted in the
MULTICS system.
Indirection: The capabilities point indirectly, not directly, to the objects. Each
capability points to a unique entry in a global table, which in turn points to the
object. We implement revocation by searching the global table for the desired entry and
deleting it. This scheme does not allow selective revocation.
Keys: A key is a unique bit pattern that can be associated with a capability. This key is
defined when the capability is created, and it can be neither modified nor inspected
by the process that owns the capability. A master key is associated with each object; it can
be defined or replaced with the set-key operation.
a) Compiler-Based Enforcement
When protection is declared along with data typing, the designer of each subsystem can
specify its requirements for protection, as well as its need for use of other resources in a system.
Such a specification should be given directly as a program is composed, and in the language in
which the program itself is stated.
This approach has several significant advantages:
1. Protection needs are simply declared, rather than programmed as a sequence of calls on
procedures of an operating system.
2. Protection requirements can be stated independently of the facilities provided by a
particular operating system.
3. The means for enforcement need not be provided by the designer of a subsystem.
4. A declarative notation is natural because access privileges are closely related to the
linguistic concept of data type.
Message Passing:
The most popular form of inter-process communication involves message passing.
Processes communicate with each other by exchanging messages. A process may send
information to a port, from which another process may receive information. The sending and
receiving processes can be on the same or different computers connected via a communication
medium. One reason for the popularity of message passing is its ability to support client-server
interaction. A server is a process that offers a set of services to client processes. These services are
invoked in response to messages from the clients and results are returned in messages to the client.
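A minimal sketch of this idea, assuming a POSIX system: a pipe plays the role of the port, the child process acts as the client sending a request, and the parent acts as the server receiving it (the names and message text here are invented for illustration):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                 /* child: the "client" sends a message */
        close(fd[0]);
        const char *msg = "request: read file";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        return 0;
    }
    /* parent: the "server" receives the message */
    close(fd[1]);
    char buf[64];
    read(fd[0], buf, sizeof buf);
    printf("server received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}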
Synchronization:
Synchronization refers to one of two distinct but related concepts: synchronization
of processes, and synchronization of data. Process synchronization refers to the idea that multiple
processes are to join up or handshake at a certain point, in order to reach an agreement or commit
to a certain sequence of action. Data synchronization refers to the idea of keeping multiple copies
of a dataset in coherence with one another, or to maintain data integrity. Process synchronization
primitives are commonly used to implement data synchronization.
Shared Memory:
Shared memory is memory that may be simultaneously accessed by multiple programs
with an intent to provide communication among them or avoid redundant copies. Shared memory
is an efficient means of passing data between programs. Depending on context, programs may run
on a single processor or on multiple separate processors. Using memory for communication inside
a single program, for example among its multiple threads, is also referred to as shared memory.
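A minimal sketch, assuming a POSIX-like system (MAP_ANONYMOUS is a common extension): a page of shared memory mapped before fork() is visible to both parent and child, so a value is communicated without any copying (the value 42 is invented for illustration):

#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) { perror("mmap"); return 1; }

    *shared = 0;
    if (fork() == 0) {        /* child writes into the shared page */
        *shared = 42;
        return 0;
    }
    wait(NULL);               /* parent reads after the child exits */
    printf("value written by child: %d\n", *shared);
    munmap(shared, sizeof(int));
    return 0;
}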
Section - A
1. (a) Define Operating System. Explain Batch, Time Sharing & Real Time Operating
System. (10)
Ans:
An Operating system is a program that controls the execution of application programs and
acts as an interface between the user of a computer and the computer hardware.
An Operating system is concerned with the allocation of resources and services, such as
memory, processors, devices and information. The Operating System correspondingly includes
programs to manage these resources, such as a traffic controller, a scheduler, memory
management module, I/O programs, and a file system.
Batch System
Batch operating system is one where programs and data are collected together in a batch
before processing starts. A job is predefined sequence of commands, programs and data
that are combined in to a single unit called job.
Figure below shows the memory layout for a simple batch system. Memory management
in batch system is very simple. Memory is usually divided into two areas : Operating
system and user program area.
Spooling: It stands for Simultaneous Peripheral Operations On-Line. With disk technology, rather than
the cards being read from the card reader directly into memory and the job then being processed,
cards are read directly from the card reader onto the disk. The location of the card images is
recorded in a table kept by the OS. When the job is executed, the OS satisfies its requests for
card-reader input by reading from the disk. Similarly, when the job requests the printer to output
a line, that line is copied into a system buffer and written to the disk. When the job is
completed, the output is actually printed. This form of processing is called spooling.
Comparison of Spooling & Buffering: Buffering overlaps input, output and processing of a
single job whereas spooling allows CPU to overlap the input of one job with the computation and
output of other jobs.
2. (a) List & explain various services provided by operating system. (6)
Ans: Following are the various services provided by operating system:
i) Program Execution
ii) I/O Operation
iii) File system manipulation
iv) Communication
v) Error handling
vi) Resource Management
vii) Protection
1. Program execution
Operating system handles many kinds of activities from user programs to system programs
like printer spooler, name servers, file server etc. Each of these activities is encapsulated as a
process.
A process includes the complete execution context (code to execute, data to manipulate,
registers, OS resources in use). Following are the major activities of an operating system with
respect to program management.
Loads a program into memory.
Executes the program.
Handles program's execution.
Provides a mechanism for process synchronization.
Provides a mechanism for process communication.
Provides a mechanism for deadlock handling.
2. I/O Operation
Operating System manages the communication between user and device drivers.
Following are the major activities of an operating system with respect to I/O Operation.
I/O operation means read or write operation with any file or any specific I/O device.
Program may require any I/O device while running.
Operating system provides the access to the required I/O device when required.
3. File system manipulation
A file represents a collection of related information, stored on secondary storage for
long-term use. Following are the major activities of an operating system with respect to
file management: a program needs to read or write a file; the operating system grants the
program permission to operate on the file; the operating system also provides an interface
to create, delete, and back up files.
4. Communication
In case of distributed systems, which are collections of processors that do not share
memory, peripheral devices, or a clock, the operating system manages communication between
processes. Multiple processes communicate with one another through communication lines in the network.
OS handles routing and connection strategies, and the problems of contention and security.
Following are the major activities of an operating system with respect to communication.
Two processes often require data to be transferred between them.
Both processes can be on the same computer or on different computers connected
through a computer network.
Communication may be implemented by two methods either by Shared Memory or by
Message Passing.
5. Error handling
Error can occur anytime and anywhere. Error may occur in CPU, in I/O devices or in the
memory hardware. Following are the major activities of an operating system with respect to error
handling.
OS constantly remains aware of possible errors.
OS takes the appropriate action to ensure correct and consistent computing.
6. Resource Management
In case of multi-user or multi-tasking environment, resources such as main memory, CPU
cycles and files storage are to be allocated to each user or job. Following are the major activities of
an operating system with respect to resource management.
OS manages all kind of resources using schedulers.
CPU scheduling algorithms are used for better utilization of CPU.
7. Protection
Protection refers to mechanism or a way to control the access of programs, processes, or
users to the resources defined by a computer systems. Following are the major activities of an
operating system with respect to protection.
OS ensures that all access to system resources is controlled.
OS ensures that external I/O devices are protected from invalid access attempts.
OS provides authentication feature for each user by means of a password.
2. (b) Discuss the various file allocation methods. (8)
Ans:
Contiguous Allocation
The contiguous allocation method requires each file to occupy a set of contiguous
blocks on the disk. Contiguous allocation of a file is defined by the disk address and length
(in block units) of the first block. If the file is n blocks long, and starts at location
b, then it occupies blocks b, b + 1, b + 2, ..., b + n – 1. The directory entry for each file
indicates the address of the starting block and the length of the area allocated for this file.
Accessing a file that has been allocated contiguously is easy. For sequential access, the
file system remembers the disk address of the last block referenced and, when
necessary, reads the next block. For direct access to block i of a file that starts at
block b, we can immediately access block b + i.
The contiguous disk-space-allocation problem can be seen to be a particular application
of the general dynamic storage-allocation problem. First Fit and Best Fit are the most
common strategies used to select a free hole from the set of available holes.
These algorithms suffer from the problem of external fragmentation. To prevent loss of
significant amounts of disk space to external fragmentation, the user had to run a
repacking routine that copied the entire file system onto another floppy disk or onto a
tape. The original floppy disk was then freed completely, creating one large
contiguous free space. The routine then copied the files back onto the floppy disk
by allocating contiguous space from this one large hole. This scheme effectively
compacts all free space into one contiguous space, solving the fragmentation problem.
The time cost is particularly severe for large hard disks that use contiguous allocation,
where compacting all the space may take hours and may be necessary on a weekly
basis.
A major problem is determining how much space is needed for a file. When the file is
created, the total amount of space it will need must be found and allocated. The user will
normally overestimate the amount of space needed, resulting in considerable wasted
space.
Linked Allocation
With linked allocation, each file is a linked list of disk blocks; the disk blocks may be
scattered anywhere on the disk.
Each directory entry has a pointer to the first disk block of the file; the pointer is
initialized to nil to signify an empty file.
There is no external fragmentation with linked allocation, and any free block on the free-
space list can be used to satisfy a request. There is no need to declare the size of a file
when that file is created. A file can continue to grow as long as there are free
blocks.
The major problem is that it can be used effectively for only sequential access
files. To find the ith block of a file we must start at the beginning of that file, and follow
the pointers until we get to the ith block. Each access to a pointer requires a disk read and
sometimes a disk seek.
One drawback of linked allocation is the space required for the pointers. If a pointer
requires 4 bytes out of a 512-byte block, then 0.78 percent of the disk is being used for
pointers rather than for information. The usual solution to this problem is to collect blocks
into multiples, called clusters, and to allocate the clusters rather than blocks.
An important variation, on the linked allocation method is the use of a file allocation
table (FAT). The table has one entry for each disk block, and is indexed by block number.
The directory entry contains the block number of the first block of the file. The table
entry indexed by that block number then contains the block number of the next
block in the file. This chain continues until the last block, which has a special end-of-file
value as the table entry. Unused blocks are indicated by a 0 table value. Allocating a new
block to a file is a simple matter of finding the first 0-valued table entry, and replacing the
previous end-of-file value with the address of the new block. The 0 is then
replaced with the end-of file value.
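The chain-following and block-allocation logic just described can be sketched in C; the table size, block numbers, and the EOF/FREE markers below are invented for illustration, not a real FAT layout:

#include <stdio.h>

#define FAT_SIZE 16
#define FAT_EOF  -1   /* special end-of-file table value      */
#define FAT_FREE  0   /* unused blocks hold a 0 table value   */

int fat[FAT_SIZE];    /* fat[b] = number of the block after b */

/* Follow the chain from the file's first block, printing each block. */
void walk_chain(int first_block) {
    for (int b = first_block; b != FAT_EOF; b = fat[b])
        printf("block %d\n", b);
}

/* Append one block: find the first 0-valued entry, link it after the
 * current last block, and mark it as the new end of file.
 * Block 0 is treated as reserved, so the search starts at 1. */
int append_block(int first_block) {
    int newb = -1;
    for (int b = 1; b < FAT_SIZE; b++)
        if (fat[b] == FAT_FREE) { newb = b; break; }
    if (newb == -1) return -1;              /* disk full */
    int last = first_block;
    while (fat[last] != FAT_EOF) last = fat[last];
    fat[last] = newb;
    fat[newb] = FAT_EOF;
    return newb;
}

int main(void) {
    /* a file occupying blocks 3 -> 7 -> 5 */
    fat[3] = 7; fat[7] = 5; fat[5] = FAT_EOF;
    append_block(3);        /* links block 1 (first free) after block 5 */
    walk_chain(3);          /* prints blocks 3, 7, 5, 1 */
    return 0;
}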
Indexed Allocation
Linked allocation cannot support efficient direct access, since the pointers to the
blocks are scattered with the blocks themselves all over the disk and need to be
retrieved in order. Indexed allocation solves this problem by bringing all the pointers
together into one location: the index block.
Each file has its own index block, which is an array of disk-block addresses. The ith entry
in the index block points to the ith block of the file. The directory contains the
address of the index block.
When the file is created, all pointers in the index block are set to nil. When the ith block is
first written, a block is obtained from the free-space manager, and its address is put in the
ith index-block entry.
Indexed allocation supports direct access without suffering from external fragmentation,
because any free block on the disk may satisfy a request for more space.
However, indexed allocation does suffer from wasted space: the pointer overhead of the index
block is generally greater than the pointer overhead of linked allocation.
The question of how large the index block should be leads to the following mechanisms:
1. Linked scheme. An index block is normally one disk block. Thus, it can be read and
written directly by itself.
2. Multilevel index. A variant of the linked representation is to use a first-level
index block to point to a set of second-level index blocks, which in turn point to
the file blocks. To access a block, the operating system uses the first-level index
to find a second-level index block, and that block to find the desired data block.
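A sketch of why direct access is cheap under indexed allocation: fetching the ith file block is a single lookup in the index block (the disk addresses below are invented for illustration):

#include <stdio.h>

#define NIL -1

/* Index block of one file: the ith entry holds the disk address of the
 * ith file block (values invented). */
int index_block[8] = {9, 16, 1, 10, 25, NIL, NIL, NIL};

int block_of(int i) {
    if (i < 0 || i >= 8 || index_block[i] == NIL)
        return NIL;                    /* block not yet written */
    return index_block[i];
}

int main(void) {
    printf("file block 3 is disk block %d\n", block_of(3));  /* prints 10 */
    return 0;
}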
3. (a) Explain different scheduling levels like short- term, mid- term & long- term
scheduling. (6)
Ans:
Long Term Scheduling
It is also called job scheduler. Long term scheduler determines which programs are
admitted to the system for processing. The job scheduler selects processes from the queue and
loads them into memory for execution; these processes are then handled by the CPU scheduler. The
primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and
processor bound. It also controls the degree of multiprogramming. If the degree of
multiprogramming is stable, then the average rate of process creation must be equal to the
average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal. Time-sharing
operating systems often have no long-term scheduler; when a process changes state from new to
ready, it is simply admitted for the short-term scheduler to handle.
Short Term Scheduling
It is also called CPU scheduler. The short-term scheduler selects a process from among the
processes that are ready to execute and allocates the CPU to it. Because it runs every time the
CPU switches between processes, it executes very frequently and must therefore be fast.
Medium Term Scheduling
Medium-term scheduling is part of swapping. The medium-term scheduler removes (swaps out)
processes from main memory to reduce the degree of multiprogramming, and later swaps them back
in so that their execution can be continued from where it left off.
3. (b) Draw & Explain process state transition diagram in detail. Also explain PCB. (7)
Ans:
Process State Transition Diagram: When process executes, it changes state. Process state is
defined as the current activity of the process. Fig. 3.1 shows the general form of the
process state transition diagram. Process state contains five states. Each process is in one of
the states. The states are listed below.
1. New
2. Ready
3. Running
4. Waiting
5. Terminated (exit)
Process Control Block: Each process contains the process control block (PCB). PCB is the
data structure used by the operating system. The operating system groups together all the
information it needs about a particular process. Fig. below shows the process control block.
Process State
Process Number
Program Counter
CPU Registers
Memory Allocation
Event Information
.....
Fig: Process Control Block
1. Process State : Process state may be new, ready, running, waiting and so on.
2. Program Counter : It indicates the address of the next instruction to be executed for
this process.
3. Event information : For a process in the blocked state this field contains
information concerning the event for which the process is waiting.
4. CPU register : It indicates general purpose register, stack pointers, index registers
and accumulators etc. number of register and type of register totally depends upon the
computer architecture.
5. Memory Management Information : This information may include the value of base and
limit register. This information is useful for de-allocating the memory when the
process terminates.
6. Accounting Information : This information includes the amount of CPU and real time
used, time limits, job or process numbers, account numbers etc.
7. I/O Status Information: This information includes the list of I/O devices allocated to the
process, a list of open files and so on.
Process control block also includes the information about CPU scheduling, I/O
resource management, file management information, priority and so on. The PCB simply serves as
the repository for any information that may vary from process to process.
4. (a) Explain paging & segmentation scheme of memory management with example. (7)
Ans:
Paging:
Paging is a memory-management scheme that permits the physical address space of a
process to be non-contiguous. The basic method for implementing paging involves breaking
physical memory into fixed-sized blocks called frames and breaking logical memory into blocks
of the same size called pages. When a process is to be executed, its pages are loaded into any
available memory frames from the backing store. The backing store is divided into fixed-sized
blocks that are of the same size as the memory frames. The hardware support for paging is
illustrated in figure below.
Every address generated by the CPU is divided into two parts: a page number (p) and a
page offset (d). The page number is used as an index into a page table. The page table contains the
base address of each page in physical memory. This base address is combined with the page offset
to define the physical memory address that is sent to the memory unit.
As an example, consider the memory in the figure below. Using a page size of 4 bytes and a
physical memory of 32 bytes (8 pages), we show how the user's view of memory can be mapped
into physical memory. Logical address 0 is page 0, offset 0. Indexing into the page table, we find
that page 0 is in frame 5. Thus, logical address 0 maps to physical address 20 (= (5 x 4) + 0).
Logical address 3 (page 0, offset 3) maps to physical address 23 (= (5 x 4) + 3). Logical address 4
is page 1, offset 0; according to the page table, page 1 is mapped to frame 6. Thus, logical address
4 maps to physical address 24 (= (6 x 4) + 0). Logical address 13 maps to physical address 9.
Fig: Paging example with 32-byte memory and 4-byte pages
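The arithmetic above can be checked with a short sketch. The page table {5, 6, 1, 2} is inferred from the mappings worked out in the text (page 0 -> frame 5, page 1 -> frame 6, logical 13 -> physical 9):

#include <stdio.h>

#define PAGE_SIZE 4
int page_table[4] = {5, 6, 1, 2};   /* page number -> frame number */

int translate(int logical) {
    int p = logical / PAGE_SIZE;          /* page number */
    int d = logical % PAGE_SIZE;          /* page offset */
    return page_table[p] * PAGE_SIZE + d; /* frame * page size + offset */
}

int main(void) {
    int addrs[] = {0, 3, 4, 13};
    for (int i = 0; i < 4; i++)
        printf("logical %2d -> physical %2d\n", addrs[i], translate(addrs[i]));
    return 0;   /* prints 20, 23, 24 and 9, matching the text */
}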
Segmentation:
Segmentation is a memory-management scheme that supports this user view of memory. A
logical address space is a collection of segments. Each segment has a name and a length. The
addresses specify both the segment name and the offset within the segment. The user therefore
specifies each address by two quantities: a segment name and an offset. For simplicity of
implementation, segments are numbered and are referred to by a segment number, rather than by a
segment name. Thus, a logical address consists of a two tuple:
< segment-number, offset >.
Each entry in the segment table has a segment base and a segment limit. The segment base
contains the starting physical address where the segment resides in memory, whereas the segment
limit specifies the length of the segment. The use of a segment table is illustrated in Figure below.
A logical address consists of two parts: a segment number, s, and an offset into that segment, d.
The segment number is used as an index to the segment table. The offset d of the logical address
must be between 0 and the segment limit. If it is not, we trap to the operating system. When an
offset is legal, it is added to the segment base to produce the address in physical memory of the
desired byte. The segment table is thus essentially an array of base-limit register pairs.
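A minimal sketch of the limit check described above; the segment-table contents here are invented for illustration:

#include <stdio.h>

struct segment { int base; int limit; };

struct segment seg_table[] = {
    {1400, 1000},   /* segment 0 */
    {6300,  400},   /* segment 1 */
    {4300,  400},   /* segment 2 */
};

/* Translate logical address (s, d); a logical address is legal only
 * if the offset d is below the segment's limit. */
int translate(int s, int d) {
    if (d >= seg_table[s].limit) {
        fprintf(stderr, "trap: offset %d beyond segment %d\n", d, s);
        return -1;                /* would trap to the operating system */
    }
    return seg_table[s].base + d;
}

int main(void) {
    printf("(2, 53)   -> %d\n", translate(2, 53));    /* 4353 */
    printf("(0, 1222) -> %d\n", translate(0, 1222));  /* trap */
    return 0;
}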
Fig: Thrashing
Different methods to minimize thrashing are as follows:
Working Set Model:
The working-set model is based on the assumption of locality. This model uses a
parameter, Δ, to define the working-set window. The idea is to examine the most recent Δ page
references. The set of pages in the most recent Δ page references is the working set, as in figure
below. If a page is in active use, it will be in the working set. If it is no longer being used, it will
drop from the working set Δ time units after its last reference. Thus, the working set is an
approximation of the program's locality.
For example, given the sequence of memory references shown in Figure, if Δ = 10 memory
references, then the working set at time t1 is {1, 2, 5, 6, 7}. By time t2, the working set has
changed to {3, 4}.
The accuracy of the working set depends on the selection of Δ. If Δ is too small, it will not
encompass the entire locality; if Δ is too large, it may overlap several localities. In the extreme, if
Δ is infinite, the working set is the set of pages touched during the process execution. Once Δ has
been selected, use of the working-set model is simple. The operating system monitors the working
set of each process and allocates to that working set enough frames to provide it with its working-
set size. If there are enough extra frames, another process can be initiated. If the sum of the
working-set sizes increases, exceeding the total number of available frames, the operating system
selects a process to suspend. The process's pages are written out (swapped), and its frames are
reallocated to other processes. The suspended process can be restarted later.
This working-set strategy prevents thrashing while keeping the degree of
multiprogramming as high as possible. Thus, it optimizes CPU utilization. The difficulty with the
working-set model is keeping track of the working set. The working-set window is a moving
window. At each memory reference, a new reference appears at one end and the oldest reference
drops off the other end. A page is in the working set if it is referenced anywhere in the working-set
window.
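A sketch of computing the working set at time t as the set of distinct pages among the last Δ references; the reference string below is invented so that the two working sets match the example above:

#include <stdio.h>
#include <stdbool.h>

#define MAX_PAGE 10

/* Working set at time t: distinct pages among the last delta references. */
void working_set(const int *refs, int t, int delta) {
    bool in_set[MAX_PAGE] = {false};
    int start = (t - delta + 1 > 0) ? t - delta + 1 : 0;
    for (int i = start; i <= t; i++)
        in_set[refs[i]] = true;
    printf("WS(t=%d) = {", t);
    for (int p = 0; p < MAX_PAGE; p++)
        if (in_set[p]) printf(" %d", p);
    printf(" }\n");
}

int main(void) {
    int refs[] = {1,2,5,6,7,7,7,5,1,6, 3,4,4,4,3,4,3,4,4,4};
    working_set(refs, 9, 10);    /* like t1 above: {1 2 5 6 7} */
    working_set(refs, 19, 10);   /* like t2 above: {3 4}       */
    return 0;
}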
Page-Fault Frequency:
The working-set model is successful, and knowledge of the working set can be useful for
pre-paging, but it seems a clumsy way to control thrashing. A strategy that uses the page-fault
frequency (PFF) takes a more direct approach.
The specific problem is how to prevent thrashing. Thrashing has a high page-fault rate.
Thus, we want to control the page-fault rate. When it is too high, we know that the process needs
more frames. Conversely, if the page-fault rate is too low, then the process may have too many
frames. We can establish upper and lower bounds on the desired page-fault rate. If the actual page-
fault rate exceeds the upper limit, we allocate the process another frame; if the page-fault rate falls
below the lower limit, we remove a frame from the process. Thus, we can directly measure and
control the page-fault rate to prevent thrashing. As with the working-set strategy, we may have to
suspend a process. If the page-fault rate increases and no free frames are available, we must select
some process and suspend it. The freed frames are then distributed to processes with high page-
fault rates.
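The PFF policy reduces to a simple control rule. In this sketch the thresholds and the helper routines (allocate_frame, take_frame, suspend) are hypothetical stand-ins for real memory-manager calls:

#include <stdio.h>
#include <stdbool.h>

/* hypothetical helpers, printing instead of touching real frame tables */
static bool allocate_frame(int pid) { printf("P%d: +1 frame\n", pid); return true; }
static void take_frame(int pid)     { printf("P%d: -1 frame\n", pid); }
static void suspend(int pid)        { printf("P%d: suspended\n", pid); }

#define PFF_UPPER 0.10   /* invented bounds, in faults per reference */
#define PFF_LOWER 0.01

void pff_adjust(int pid, double fault_rate) {
    if (fault_rate > PFF_UPPER) {
        if (!allocate_frame(pid))  /* process needs more frames       */
            suspend(pid);          /* no free frame: swap process out */
    } else if (fault_rate < PFF_LOWER) {
        take_frame(pid);           /* process has frames to spare     */
    }
}

int main(void) {
    pff_adjust(1, 0.25);   /* too many faults -> grant a frame   */
    pff_adjust(2, 0.001);  /* too few faults  -> reclaim a frame */
    return 0;
}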
Demand Paging:
A demand-paging system is similar to a paging system with swapping where processes
reside in secondary memory. When we want to execute a process, we swap it into memory. When
a process is to be swapped in, the pager guesses which pages will be used before the process is
swapped out again. Instead of swapping in a whole process, the pager brings only those necessary
pages into memory. Thus, it avoids reading into memory pages that will not be used anyway,
decreasing the swap time and the amount of physical memory needed.
With the demand paging we need some form of hardware support to distinguish between
the pages that are in memory and the pages that are on the disk. The valid-invalid bit scheme can
be used for this purpose. When this bit is set to "valid" the associated page is both legal and in
memory. If the bit is set to "invalid," the page either is not valid or is valid but is currently on the
disk. The page-table entry for a page that is brought into memory is set as usual, but the page-table
entry for a page that is not currently in memory is either simply marked invalid or contains the
address of the page on disk. This situation is depicted in Figure below:
The hardware to support demand paging is the same as the hardware for paging and
swapping:
• Page table: This table has the ability to mark an entry invalid through a valid-invalid bit or
special value of protection bits.
• Secondary memory: This memory holds those pages that are not present in main memory.
A crucial requirement for demand paging is the need to be able to restart any instruction
after a page fault. If the page fault occurs on the instruction fetch, we can restart by fetching the
instruction again. If a page fault occurs while we are fetching an operand, we must fetch and
decode the instruction again and then fetch the operand.
5. (b) Consider a system with 3 page frame for user level application. Consider the following
reference string:
5,6,4,3,5,6,3,6,9,4,3,9,6,4,9
How many page faults will there be when one considers the FIFO, LRU & Optimal Page
Replacement algorithms? (7)
Ans:
i) First In First Out (FIFO)
Reference String:  5  6  4  3  5  6  3  6  9  4  3  9  6  4  9
Frame 1:           5  5  5  3  3  3  3  3  9  9  9  9  6  6  6
Frame 2:           -  6  6  6  5  5  5  5  5  4  4  4  4  4  9
Frame 3:           -  -  4  4  4  6  6  6  6  6  3  3  3  3  3
Page Fault:        F  F  F  F  F  F  -  -  F  F  F  -  F  -  F
No. of Page Faults when one considers the FIFO Algorithm: 11
ii) Least Recently Use (LRU)
Reference String:  5  6  4  3  5  6  3  6  9  4  3  9  6  4  9
Frame 1:           5  5  5  3  3  3  3  3  3  4  4  4  6  6  6
Frame 2:           -  6  6  6  5  5  5  5  9  9  9  9  9  9  9
Frame 3:           -  -  4  4  4  6  6  6  6  6  3  3  3  4  4
Page Fault:        F  F  F  F  F  F  -  -  F  F  F  -  F  F  -
No. of Page Faults when one considers the LRU Algorithm: 11
iii) Optimal Page Replacement (OPR)
Reference String:  5  6  4  3  5  6  3  6  9  4  3  9  6  4  9
Frame 1:           5  5  5  5  5  5  5  5  9  9  9  9  9  9  9
Frame 2:           -  6  6  6  6  6  6  6  6  4  4  4  4  4  4
Frame 3:           -  -  4  3  3  3  3  3  3  3  3  3  6  6  6
Page Fault:        F  F  F  F  -  -  -  -  F  F  -  -  F  -  -
No. of Page Faults when one considers the OPR Algorithm: 07
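The FIFO count can be verified with a short simulation; a minimal sketch in C (3 frames, cyclic victim pointer):

#include <stdio.h>

int main(void) {
    int refs[] = {5,6,4,3,5,6,3,6,9,4,3,9,6,4,9};
    int n = sizeof refs / sizeof refs[0];
    int frames[3] = {-1, -1, -1};
    int victim = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < 3; f++)
            if (frames[f] == refs[i]) { hit = 1; break; }
        if (!hit) {
            frames[victim] = refs[i];     /* replace the oldest page */
            victim = (victim + 1) % 3;
            faults++;
        }
    }
    printf("FIFO page faults: %d\n", faults);   /* prints 11 */
    return 0;
}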
Section – B
6. (a) List & Explain necessary conditions that must hold simultaneously for deadlock. (6)
Ans: In a multiprogramming environment, several processes may compete for a finite
number of resources. A process requests resources; if the resources are not available at
that time, the process enters a wait state. It may happen that waiting processes will never
again change state, because the resources they have requested are held by other waiting processes.
This situation is called deadlock.
Following are the necessary conditions that must hold simultaneously for deadlock:
i) Mutual exclusion
ii) Hold & wait
iii) No Pre-emption
iv) Circular wait
i. Mutual exclusion: At least one resource must be held in a non-sharable mode, that is, only
one process at a time can use the resource. If another process requests that resource, the
requesting process must be delayed until the resource has been released.
ii. Hold and Wait: There must exist a process that is holding at least one resource and is
waiting to acquire additional resources that are currently being held by other processes.
iii. No Preemption: Resources cannot be preempted; that is, a resource can be released
only voluntarily by the process holding it, after that process has completed its task.
iv. Circular wait: There must exist a set {P0, P1, ..., Pn } of waiting processes such that P0 is
waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2,
…., Pn-1 is waiting for a resource that is held by Pn, and Pn is waiting for a
resource that is held by P0.
6. (b) Explain Banker’s algorithm for deadlock avoidance with suitable example (8)
Ans:
The Banker's algorithm is applicable to a resource-allocation system with multiple
instances of each resource type; it is less efficient than the resource-allocation graph scheme. When
a new process enters the system, it must declare the maximum number of instances of each
resource type that it may need. This number may not exceed the total number of resources in the
system. When a user requests a set of resources, the system must determine whether the allocation
of these resources will leave the system in a safe state. If it will, the resources are allocated;
otherwise, the process must wait until some other process releases enough resources.
Several data structures must be maintained to implement the banker's algorithm. Let n be
the number of processes in the system and m be the number of resource types. We need the
following data structures:
• Available: A vector of length m indicates the number of available resources of each type. If
Available[j] equals k, there are k instances of resource type Rj available.
• Max: An n x m matrix defines the maximum demand of each process. If Max[i][j] equals k, then
process Pi may request at most k instances of resource type Rj.
• Allocation: An n x m matrix defines the number of resources of each type currently allocated to
each process. If Allocation[i][j] equals k, then process Pi is currently allocated k instances of
resource type Rj.
• Need: An n x m matrix indicates the remaining resource need of each process. If Need[i][j]
equals k, then process Pi may need k more instances of resource type Rj to complete its task. Note
that Need[i][j] equals Max[i][j]- Allocation[i][j].
i) Safety Algorithm:
This algorithm finds out whether or not a system is in a safe state. It
can be described as follows:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize
Work = Available and Finish[i] = false for i = 0, 1, ..., n-1.
2. Find an i such that both
a. Finish[i] == false
b. Needi <= Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
Example:
Consider the following snapshot of a system with five processes P0 through P4 and three
resource types A, B and C:

          Allocation   Max      Available
          A B C        A B C    A B C
    P0    0 1 0        7 5 3    3 3 2
    P1    2 0 0        3 2 2
    P2    3 0 2        9 0 2
    P3    2 1 1        2 2 2
    P4    0 0 2        4 3 3

The content of the matrix Need is defined to be Max - Allocation and is as follows:

          Need
          A B C
    P0    7 4 3
    P1    1 2 2
    P2    6 0 0
    P3    0 1 1
    P4    4 3 1
By using the safety algorithm we can conclude that the system is currently in a safe state with
the sequence <P1, P3, P4, P2, P0>. Suppose now that process P1 requests one additional instance
of resource type A and two instances of resource type C, so Request1 = (1,0,2). To decide whether
this request can be immediately granted, we first check that Request1 <= Available, that is, (1,0,2) <=
(3,3,2), which is true. By using the resource-request algorithm this request can be fulfilled, and we
arrive at the following new state:

          Allocation   Need     Available
          A B C        A B C    A B C
    P0    0 1 0        7 4 3    2 3 0
    P1    3 0 2        0 2 0
    P2    3 0 2        6 0 0
    P3    2 1 1        0 1 1
    P4    0 0 2        4 3 1
Applying the safety algorithm again to check whether the new system state is safe, we get the
safe sequence <P1, P3, P4, P0, P2>. Thus the request can be granted immediately.
iv) Semaphore
A semaphore S is an integer variable that, apart from initialization, is accessed only
through two standard atomic operations: wait() and signal(). The wait() operation was originally
termed P; signal() was originally called V.
The definition of wait() is as follows:
wait(S) {
    while (S <= 0)
        ;   // no-op (busy waiting)
    S--;
}
The definition of signal() is as follows:
signal(S) {
    S++;
}
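For comparison, POSIX exposes the same two operations as sem_wait() and sem_post(); a semaphore initialized to 1 behaves as a mutex around a critical section (a minimal single-process sketch):

#include <stdio.h>
#include <semaphore.h>

sem_t mutex;

int main(void) {
    sem_init(&mutex, 0, 1);   /* initial value 1 */
    sem_wait(&mutex);         /* wait(): may block until value > 0 */
    printf("inside critical section\n");
    sem_post(&mutex);         /* signal(): release */
    sem_destroy(&mutex);
    return 0;
}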
8. (a) Explain Producer Consumer Problem with solution using semaphore. (7)
Ans:
The producer consumer problem can be stated as: given a set of cooperating processes, some
of which produce data items to be consumed by others, with possible disparity between
consumption & production rates. Devise a synchronization protocol that allows both producers
and consumers to operate concurrently at their respective service rates in such a way that produced
items are consumed in the exact order of production.
To allow the producer and consumer to operate concurrently, a pool of buffers is created that is
filled by the producer and emptied by the consumer. The producer produces into one buffer while
the consumer consumes from another. The processes should be synchronized so that the consumer
should not consume the item that the producer has not produced.
At any particular time, the shared global buffer may be empty, partially filled, or full of
produced items ready for consumption. A producer may run in either of the two former cases, but
when buffer is full the producer must be kept waiting. On the other hand when buffer is empty,
consumer must wait.
The solution for the producer is to either go to sleep or discard data if the buffer is full. The
next time the consumer removes an item from the buffer, it notifies the producer, who starts to fill
the buffer again. In the same way, the consumer can go to sleep if it finds the buffer to be empty.
The next time the producer puts data into the buffer, it wakes up the sleeping consumer. The
solution can be reached by means of inter-process communication, typically using semaphores.
The example below shows a general solution to the producer consumer problem using
semaphores. We assume that the pool consists of n buffers, each capable of holding one item. The
mutex semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to the
value 1. The empty and full semaphores count the number of empty and full buffers. The
semaphore empty is initialized to the value n; the semaphore full is initialized to the value 0.The
code for the producer and consumer process is shown below. We can interpret this code as the
producer producing full buffers for the consumer or as the consumer producing empty buffers for
the producer.
The structure of the producer process:
do {
    // produce an item in nextp
    wait(empty);
    wait(mutex);
    // add nextp to the buffer
    signal(mutex);
    signal(full);
} while (true);
The structure of the consumer process:
do {
    wait(full);
    wait(mutex);
    // remove an item from the buffer to nextc
    signal(mutex);
    signal(empty);
    // consume the item in nextc
} while (true);
8. (b) Suppose the head of a moving-head disk with 200 tracks (0-199) is currently at track 143 and has just
finished a request at track 125. If the queue of requests is in the following order:
86, 147, 91, 177, 95, 155, 106, 177, 133
What is the total head movement required to satisfy these requests for each of the following
disk scheduling algorithms:
i) FCFS
ii) SSTF
iii) SCAN
iv) CSCAN
Ans:
i) FCFS
Under FCFS the head serves the requests in arrival order: 143 -> 86 -> 147 -> 91 -> 177 -> 95 -> 155 -> 106 -> 177 -> 133.
Total head movement = 57 + 61 + 56 + 86 + 82 + 60 + 49 + 71 + 44 = 566 tracks.
Fig: FCFS head movement over tracks 0-199 (figure omitted)
ii) SSTF
iii) SCAN
Fig: SCAN head movement over tracks 0-199 (figure omitted)
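The totals can be recomputed with a short sketch: FCFS services the queue in arrival order, while SSTF always serves the pending request nearest the current head position. (SCAN and C-SCAN would additionally need the head direction, here increasing since the head came from 125; they are omitted.)

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define NREQ 9

int total_fcfs(int head, const int *q) {
    int moves = 0;
    for (int i = 0; i < NREQ; i++) {
        moves += abs(q[i] - head);   /* seek to the next request in order */
        head = q[i];
    }
    return moves;
}

int total_sstf(int head, const int *q) {
    bool done[NREQ] = {false};
    int moves = 0;
    for (int served = 0; served < NREQ; served++) {
        int best = -1;
        for (int i = 0; i < NREQ; i++)      /* pick the nearest pending request */
            if (!done[i] && (best == -1 ||
                abs(q[i] - head) < abs(q[best] - head)))
                best = i;
        moves += abs(q[best] - head);
        head = q[best];
        done[best] = true;
    }
    return moves;
}

int main(void) {
    int queue[NREQ] = {86, 147, 91, 177, 95, 155, 106, 177, 133};
    printf("FCFS total head movement: %d\n", total_fcfs(143, queue)); /* 566 */
    printf("SSTF total head movement: %d\n", total_sstf(143, queue)); /* 125; ties go to the first request found */
    return 0;
}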
9. (b) What are the main difference between Capability list & access List. (5)
Ans:
1. Access List: An access list is a list that specifies the user name and the types of access
allowed for each user.
Capability List: A capability list is a list of objects coupled with the operations allowed
on those objects.
2. Access List: Access lists are kept with each file and indicate which users are allowed to
perform which operations.
Capability List: Capabilities are kept with each user and indicate which files may be
accessed, and in what ways.
3. Access List: The access list is one way of recording access rights in a computer system;
access lists are frequently used in file systems.
Capability List: Capabilities provide a single unified mechanism to (a) address both primary
and secondary memory, (b) access both hardware and software resources, and (c) protect
objects in both primary and secondary memory.
4. Access List: In principle, an access list is an exhaustive enumeration of the specific
access rights of all entities that are authorized to access a given object.
Capability List: A capability is a token or ticket that gives the subject possessing it
permission to access a specific object in the specified manner. A capability may be
represented as a data structure consisting of two items of information: a unique object
identifier and the access rights to that object.
5. Access List: In systems that employ access lists, a separate list is maintained for each
object. Usually the owner has the exclusive right to define and modify the related access
list; the owner can revoke the access rights granted to a particular subject or domain by
simply modifying or deleting the related entry.
Capability List: Capability-based systems combine the addressing and protection functions
in a single unified mechanism used to access all system objects. In capability-based
systems, a list of capabilities is associated with each subject.
6. Access List: In an access-list system, a subject can name any object.
Capability List: In a capability-based system, a subject can name only the objects for
which it has capabilities.
7. Access List: An access list is obtained by decomposing the access matrix by columns.
Capability List: A capability list is obtained by decomposing the access matrix by rows.
10. (a) Why is it difficult to protect a system in which users are allowed to do their own I/O? (7)
Ans:
1. Data protection attempts to ensure the security of computer-processed data from
unauthorized access, from destructive user actions, and from computer failure. With
increasing use of computer-based information systems, there has been increasing concern
for the protection of computer-processed data.
2. In many applications, however, questions of data protection require explicit consideration
in their own right. Data protection must deal with two general problems. First, data must
be protected from unauthorized access and tampering. This is the problem of data security.
3. If users are allowed to do their own I/O then they may disrupt the normal operation of the
system by issuing illegal I/O instructions, by accessing memory locations within the
operating system itself, or by refusing to relinquish the CPU.
4. Second, data must be protected from errors by authorized system users, in effect to protect
users from their own mistakes. This is the problem of error prevention.
5. Concern for data security will take different forms in different system applications.
Individual users may be concerned with personal privacy, and wish to limit access to
private data files. Corporate organizations may seek to protect data related to proprietary
interests. Military agencies may be responsible for safeguarding data critical to national
security.
6. The mechanisms for achieving security will vary accordingly. Special passwords might be
required to access private files. Special log-on procedures might be required to assure
positive identification of authorized users, with records kept of file access and data
changes. Special confirmation codes might be required to validate critical commands.
7. At the extreme, measures instituted to protect data security may be so stringent that they
handicap normal system operations. Imagine a system in which security measures are
designed so that every command must be accompanied by a continuously changing
validation code which a user has to remember. Imagine further that when the user makes a
code error, which can easily happen under stress, the command sequence is interrupted to
re-initiate a user identification procedure. In such a system, there seems little doubt that
security measures will reduce operational effectiveness.
8. It seems probable, however, that absolute data security can never be attained in any
operational information system. There will always be some reliance on human judgment,
as for example in the review and release of data transmissions, which will leave systems in
some degree vulnerable to human error. Thus a continuing concern in user interface design
must be to reduce the likelihood of errors, and to mitigate the consequences of those errors
that do occur.
9. Consider the following example. In one computer center, an operator must enter a
command "$U" to update an archive tape by writing a new file at the end of the current
record, while the command "$O" will overwrite the new file at the beginning of the tape so
that all previous records are lost. A difference of one keystroke could obliterate the records
of years of previous work.
10. In systems where information handling requires the coordinated action of multiple users, it
may be appropriate that one user can change data that will be used by others. But when
multiple users will act independently, then care should be taken to ensure that they will not
interfere with one another. Extensive system testing under conditions of multiple uses may
be needed to determine that unwanted interactions do not occur.
11. When one user's actions can be interrupted by another user, as in defined emergency
situations, that interruption should be temporary and non-destructive. The interrupted user
should subsequently be able to resume operation at the point of interruption without data
loss.
10. (b) What are advantages of encrypting data in computer system? (6)
Ans:
1. Data encryption refers to the process of transforming electronic information into a scrambled form
that can only be read by someone who knows how to translate the code.
2. Encryption is important in the business world because it is the easiest and most practical method of
protecting data that is stored, processed, or transmitted electronically.
3. It is vital to electronic commerce, for example, because it allows merchants to protect customers'
credit card numbers and personal information from computer hackers or competitors.
4. It is also commonly used to protect legal contracts, sensitive documents, and personal messages
that are sent over the Internet. Without encryption, this information could be intercepted and
altered or misused by outsiders.
5. In addition, encryption is used to scramble sensitive information that is stored on business
computer networks, and to create digital signatures to authenticate e-mail and other types of
messages sent between businesses.
6. The main benefit of data encryption is that even if your computer is lost, infected by
malware, or hacked, the data inside it remains safe.
7. A file encrypted by one user cannot be opened by another user if the latter does not possess
the appropriate permissions.