Operating System Solved Question Paper
Contiguous allocation
Linked List allocation
Linked List allocation using a table in memory.
Indexed allocation
I-nodes
1) Contiguous Allocation: Each file occupies a set of contiguous blocks on the disk. Disk
addresses define a linear ordering on the disk. If each disk block is 2 KB, then a 30 KB
file is allocated 15 consecutive blocks. The directory entry for each file specifies the
address of the starting block and the total number of blocks allocated to the file. For
example, if file A starts at block 0 and is 3 blocks long, it occupies blocks 0, 1, and 2.
Advantage:
• Contiguous allocation is easy to implement.
Disadvantage:
• When a file is deleted, its contiguously allocated blocks become free, leaving a hole on
the disk; over time this leads to external fragmentation, and a growing file may no longer
fit in its original place.
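To make the directory entry idea concrete, here is a minimal C sketch (the struct name
dir_entry, its field names, and the 2 KB BLOCK_SIZE are illustrative assumptions, not taken
from the answer above) showing how contiguous allocation maps a byte offset in a file to a
disk block number.

```c
#include <stdio.h>

#define BLOCK_SIZE 2048  /* 2 KB blocks, as in the example above */

/* Hypothetical directory entry for contiguous allocation:
 * only the start block and the length need to be stored. */
struct dir_entry {
    char name[16];
    unsigned start_block;   /* first disk block of the file */
    unsigned block_count;   /* number of consecutive blocks */
};

/* Map a byte offset inside the file to the disk block holding it. */
unsigned block_of_offset(const struct dir_entry *e, unsigned offset)
{
    return e->start_block + offset / BLOCK_SIZE;
}

int main(void)
{
    /* File A: starts at block 0 and is 3 blocks long (blocks 0, 1, 2). */
    struct dir_entry a = { "A", 0, 3 };
    printf("offset 5000 lives in disk block %u\n", block_of_offset(&a, 5000));
    return 0;
}
```

Because only a start block and a length are stored, locating any byte of the file is a single
arithmetic step, which is why contiguous allocation is easy to implement.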
2) Linked List Allocation: Each file is kept as a linked list of disk blocks; every block
stores a pointer to the next block of the file, and the directory entry holds the address of
the first block.
Advantage:
• Reading a file sequentially is simple.
Disadvantage:
• The pointer in each block takes some space, so each file requires slightly more disk
space than its actual size.
3) Linked List allocation using a table in memory: Each block needs to store pointer
information, so the entire block is not available for file content. This limitation can be
overcome by keeping the pointer information in a table (the file allocation table) that
always remains in memory. Refer to the linked-list figure for the table.
Disadvantage:
• Whole table must be in memory all the time to make it work.
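As a hedged sketch of this table-in-memory (FAT-style) scheme, the C fragment below keeps
the next-block pointers in an in-memory array; the array name fat and the sentinel values are
assumptions made for illustration, not part of any real file system.

```c
#include <stdio.h>

#define NUM_BLOCKS 16
#define FAT_EOF    -1   /* marks the last block of a file */
#define FAT_FREE   -2   /* marks an unused block          */

/* The whole table lives in memory; fat[b] gives the block that
 * follows block b in the same file. */
int fat[NUM_BLOCKS];

/* Print the chain of blocks of a file that starts at 'first'. */
void print_chain(int first)
{
    for (int b = first; b != FAT_EOF; b = fat[b])
        printf("%d ", b);
    printf("\n");
}

int main(void)
{
    for (int i = 0; i < NUM_BLOCKS; i++)
        fat[i] = FAT_FREE;

    /* A file occupying blocks 4 -> 7 -> 2 -> 10. */
    fat[4] = 7;  fat[7] = 2;  fat[2] = 10;  fat[10] = FAT_EOF;

    print_chain(4);   /* prints: 4 7 2 10 */
    return 0;
}
```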
4) Indexed allocation: With the file allocation table in memory, linked-list allocation
supports random access, but the entire table must be in memory all the time. In indexed
allocation, all the pointers are kept in one location called the index block. An index block
is assigned to each file, and this index block holds the disk block addresses of that
particular file.
Advantage:
• Indexed allocation supports random access.
Disadvantage:
• The pointer overhead of the index block is greater than the pointer overhead of linked
allocation.
5) I-nodes: An i-node (index node) is a data structure that records the attributes and
disk addresses of a file's blocks. An i-node is associated with each file, and it keeps track
of which blocks belong to that file. If a particular file is open, only its i-node needs to be
in memory. This is more beneficial than linked-list allocation with a table, which requires
the entire file allocation table in memory; the size of that table is proportional to the
number of blocks the disk contains.
(Figure: i-node layout showing the file attributes, the addresses of disk blocks 0 to 3, and
the address of a block of pointers, i.e. a disk block containing additional disk addresses.)
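The i-node layout in the figure can be sketched as a C structure. This is only an illustration
under the assumptions above (four direct addresses and one single-indirect block); real
i-nodes, for example in UNIX file systems, carry more attributes and more levels of
indirection.

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define NUM_DIRECT 4   /* direct disk block addresses kept in the i-node */

/* Simplified i-node: file attributes plus the disk addresses of the
 * file's blocks.  Blocks beyond the direct ones are reached through a
 * single indirect block that contains additional disk addresses. */
struct inode {
    /* file attributes */
    uint32_t owner;
    uint32_t size_bytes;
    time_t   mtime;                 /* time of last modification */

    /* addresses of disk blocks 0 .. NUM_DIRECT-1 */
    uint32_t direct[NUM_DIRECT];

    /* address of a disk block containing additional disk addresses */
    uint32_t single_indirect;
};

int main(void)
{
    printf("size of this simplified i-node: %zu bytes\n", sizeof(struct inode));
    return 0;
}
```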
Operating System
(Nov 2018)
Q.P. Code 55382
Q.1 Attempt any FOUR
a) Explain the difference between monolithic and micro kernel 5M
Monolithic Kernel vs. Micro Kernel
1. Monolithic kernel: virtually the entire operating system code is executed in kernel
mode as a single program that runs in a single address space. Microkernel: only the
code that must execute in kernel mode is kept in a small microkernel; the set of modules
for managing the hardware can equally well be executed in user mode and forms the
second part of the operating system.
2. Monolithic kernel: the same address space is used for user mode and kernel mode.
Microkernel: user mode and kernel mode use different address spaces.
3. Monolithic kernel: large compared to a microkernel. Microkernel: small compared to a
monolithic kernel.
4. Monolithic kernel: execution speed is faster than a microkernel. Microkernel:
execution speed is slower than a monolithic kernel.
5. Monolithic kernel: if one service crashes, the whole operating system fails.
Microkernel: if one service crashes, the operating system does not fail and the other
parts of the microkernel keep working.
6. Monolithic kernel: the kernel calls functions directly for communication. Microkernel:
communication is done through message passing.
Significance:
1) It avoids race conditions.
2) It prevents multiple threads from entering the critical section at the same time (see the
sketch below).
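As an illustration of the second point, the following POSIX-threads sketch (the shared
counter, thread count, and loop bound are made up for the example; compile with -pthread)
uses a mutex so that only one thread at a time executes the critical section.

```c
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;          /* shared data protected by 'lock' */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* enter critical section    */
        counter++;                     /* only one thread runs this */
        pthread_mutex_unlock(&lock);   /* leave critical section    */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[NUM_THREADS];

    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(t[i], NULL);

    /* Without the mutex the final value could be lower than expected
     * because of a race condition on 'counter'. */
    printf("counter = %ld\n", counter);
    return 0;
}
```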
d) Explain various process states with diagram 5M
Q.5. a) What is system call? Explain any five system call in details. 10M
1) The interface between the OS and user programs is defined by the set of system
calls that the operating system offers. A system call is a request for the operating
system to perform some task on behalf of the user's program. Therefore, system
calls make up the interface between processes and the operating system.
2) The system calls are implemented as functions in the kernel itself. From the
programmer's point of view, a system call looks like a normal C function call.
3) Because a system call causes code to be executed in the kernel, there must be a
mechanism to switch the process from user mode to kernel mode.
4) File manipulation system calls include: create file, delete file, open, close, read, write,
reposition, get file attributes, and set file attributes (a short example follows).
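To tie these points together, here is a small C program (illustrative; the file name demo.txt is
made up) that issues several of the file-manipulation system calls listed in point 4 through the
POSIX interface: open, write, lseek, read, and close.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[32];

    /* create/open: the open() system call switches to kernel mode,
     * creates the file if needed, and returns a file descriptor. */
    int fd = open("demo.txt", O_CREAT | O_RDWR | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    write(fd, "hello", 5);                  /* write system call          */
    lseek(fd, 0, SEEK_SET);                 /* reposition within the file */
    ssize_t n = read(fd, buf, sizeof buf);  /* read system call           */

    if (n > 0)
        printf("read back: %.*s\n", (int)n, buf);

    close(fd);                              /* close system call          */
    return 0;
}
```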
Q.6. Write short notes on: (any two): 20M
a) Linux Virtual file system
The object-oriented principle is used in the Virtual File System (VFS).
- It has two components: a set of definitions that specifies what file-system objects are
allowed to look like, and a software layer that manipulates these objects.
- Following four major object types are defined by VFS:
1) I-node Object – An individual file is represented by I-node Object.
2) File Object – An open file is represented by file object.
3) Superblock Object – An entire file system is represented by a Superblock
Object.
4) Dentry Object – An individual directory entry is represented by Dentry
object.
- A set of operations is defined for each type of object. Each object of one of these types
points to a function table.
- The function table keeps the addresses of the actual functions that implement the
defined set of operations for that object.
- The VFS software layer does not need to know in advance what kind of object it is
dealing with; it can carry out an operation on a file-system object by invoking the right
function from the object's function table (a sketch of this dispatch is given at the end of
this note).
- The VFS remains unaware of whether an i-node represents a networked file, a
disk file, a network socket, or a directory file; the function table contains a suitable
function to read the file.
- The VFS software layer will invoke that function without worrying about how the data is
read. The file can be accessed with the help of the i-node and file objects.
- An i-node object is a data structure that holds pointers to the disk blocks. The actual
file data resides in those disk blocks.
- The file object denotes a point of access to the data in an open file. In order to
access the i-node contents, the process first has to access a file object pointing to
the i-node.
- I-node objects do not belong to a single process, whereas file objects do belong to a
single process.
- The i-node object is cached by the VFS to get better performance on accesses to the
file in the near future. Therefore, even if no process is currently using the file, its i-node
may remain cached by the VFS.
- All cached file data are linked onto a list in the file's i-node object. The i-node also
keeps standard information about each file, for example the owner, size, and time of
most recent modification.
- Directory files are treated in a different way from other files.
- The programming interface of UNIX defines several operations on directories, for
example creation, deletion, and renaming of a file in a directory.
- The system calls for these directory operations do not require that the user open the
files concerned, unlike the case for reading or writing data.
- Therefore, these directory operations are defined by VFS in the i-node object,
instead of in the file object.
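The function-table dispatch described in this note can be sketched in a few lines of C. This is
not the actual Linux VFS code; struct file_ops, the toy i-node type, and disk_file_read are
invented for illustration, and a real VFS defines many more operations per object.

```c
#include <stdio.h>

/* Table of function pointers: the "operations" defined for an object. */
struct file_ops {
    long (*read)(void *obj, char *buf, long len);
};

/* A toy i-node object that points at its function table. */
struct my_inode {
    const struct file_ops *ops;
    const char *data;            /* stands in for the file contents */
};

/* One concrete implementation, e.g. for an ordinary disk file. */
static long disk_file_read(void *obj, char *buf, long len)
{
    struct my_inode *ino = obj;
    long i = 0;
    while (i < len - 1 && ino->data[i]) { buf[i] = ino->data[i]; i++; }
    buf[i] = '\0';
    return i;
}

static const struct file_ops disk_ops = { disk_file_read };

/* The "VFS layer": it only knows about the function table, not about
 * what kind of file the i-node represents. */
static long vfs_read(struct my_inode *ino, char *buf, long len)
{
    return ino->ops->read(ino, buf, len);
}

int main(void)
{
    struct my_inode ino = { &disk_ops, "file contents" };
    char buf[64];
    vfs_read(&ino, buf, sizeof buf);
    printf("%s\n", buf);
    return 0;
}
```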
d. Explain the effect of page frame size on performance of page
replacement algorithms. 5
The number of frames is equal to the size of memory divided by the page size, so an
increase in page size means a decrease in the number of available frames.
Having fewer frames increases the number of page faults because of the lower freedom
in replacement choice.
Large pages also waste space through internal fragmentation.
On the other hand, a larger page size brings in more memory per fault, so the number of
faults may decrease if there is limited contention.
Larger pages also reduce the number of TLB misses.
An important hardware design decision is the size of page to be used. There are
several factors to consider.
Internal fragmentation:
- Clearly, the smaller the page size, the smaller the amount of internal fragmentation. To
optimize the use of main memory, we would like to reduce internal fragmentation.
- On the other hand, the smaller the page, the greater the number of pages required per
process, which could mean that some portion of the page tables of active processes
must be kept in virtual memory rather than in main memory. This can lead to a double
page fault for a single memory reference.
- If the page size is very small, then ordinarily a large number of pages will be available
in main memory for each process, and after some time these pages will contain portions
of the process near recent references, leading to a low page-fault rate.
- As the page size is increased, each individual page contains locations further and
further from any particular recent reference. Thus the effect of the principle of locality is
weakened and the page-fault rate begins to rise. Eventually, however, the page-fault
rate will begin to fall as the page size approaches the size of the entire process. (A small
numeric sketch of this trade-off follows.)
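A small numeric sketch of the trade-off (the 72 KB process size and the candidate page sizes
are made-up values): for each page size it prints how many pages the process needs and the
expected internal fragmentation of roughly half a page in the last page.

```c
#include <stdio.h>

int main(void)
{
    const long process_size = 72 * 1024;            /* 72 KB, made up  */
    const long page_sizes[] = { 1024, 2048, 4096, 8192 };

    for (int i = 0; i < 4; i++) {
        long p = page_sizes[i];
        long pages = (process_size + p - 1) / p;    /* ceiling division */
        /* On average the last page is only about half full. */
        printf("page size %5ld B: %3ld pages, ~%4ld B internal fragmentation\n",
               p, pages, p / 2);
    }
    return 0;
}
```

Smaller pages waste less space in the last page but need more page-table entries; larger
pages do the opposite, which is exactly the tension described above.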
e. Explain Thrashing. 5
- A process should have some minimum number of frames to support the active pages
that are in memory.
- This helps to reduce the number of page faults. If this number of frames is not
available to the process, it will quickly page fault.
- To handle a page fault, an existing page must be replaced in memory. Since all the
pages are in active use, the replaced page is needed again almost immediately, causing
another page fault.
- Because pages are transferred between main memory and disk, this paging has an
enormous overhead.
- Because of this frequent movement of pages between main memory and disk,
system throughput falls.
- This high paging activity that causes the reduction in system throughput is called
thrashing.
- Although many processes are in memory, CPU utilization goes low due to thrashing.
- When the operating system observes this low CPU utilization, it introduces a new
process into memory to increase the degree of multiprogramming.
- Now existing pages must be replaced to make room for the new process.
- If a global page replacement algorithm is used, it replaces the pages of other
processes and allocates frames to the newly introduced process, so the other
processes whose pages are replaced also start to page fault.
- All these faulting processes go into the wait state and wait for the paging device; CPU
utilization falls even further.
- No actual work gets done; processes spend their time only in paging.
- Thrashing can be limited by using a local page replacement algorithm instead of a
global page replacement algorithm.
Q-2
a. Differentiate between monolithic, layered and microkernel structure of
OS. 10
Q-3
a. Discuss how the following pairs of scheduling criteria conflict in certain
settings 10
a) CPU utilization and response time
b) Average Turnaround time and maximum waiting time
A process can be in one of the following five states at a time.
1. New state: A process that has just been created but has not yet been admitted to the
pool of executable processes by the operating system. Such a newly created process is
also known as a new-born process.
2. Ready state: When the process is ready to execute but is waiting for the CPU, it is in
the ready state. After completion of its input and output, a process returns to the ready
state, where it waits for the processor.
3. Running state: The process that is currently being executed by the CPU is in the
running state; while running, it may produce output on the screen.
4. Waiting or blocked state: A process that cannot execute until some event occurs,
such as the completion of an I/O operation, is in the waiting (blocked) state. While
waiting, the process is not executing; once the awaited event occurs (for example, the
input becomes available), the process moves back to the ready state.
5. Terminated state: After the process completes its execution, it is terminated, and the
processor deallocates the memory that was allocated to the process. (A small sketch of
these states and the usual transitions between them follows.)
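As a compact illustration of these five states, the C sketch below encodes them in an enum
together with the usual legal transitions; the enum names and the can_move helper are
assumptions made for this example, not part of the original answer.

```c
#include <stdbool.h>
#include <stdio.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* Typical legal transitions between process states. */
static bool can_move(enum proc_state from, enum proc_state to)
{
    switch (from) {
    case NEW:     return to == READY;                        /* admitted      */
    case READY:   return to == RUNNING;                      /* dispatched    */
    case RUNNING: return to == READY        /* preempted   */
                      || to == WAITING      /* I/O request */
                      || to == TERMINATED;  /* exit        */
    case WAITING: return to == READY;                        /* I/O completed */
    default:      return false;                              /* no way out    */
    }
}

int main(void)
{
    printf("RUNNING -> WAITING allowed? %d\n", can_move(RUNNING, WAITING)); /* 1 */
    printf("WAITING -> RUNNING allowed? %d\n", can_move(WAITING, RUNNING)); /* 0 */
    return 0;
}
```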
c) Describe microkernel with a diagram. 5M
1. In a microkernel design, the set of modules for managing the hardware is kept
outside the kernel and can equally well be executed in user mode. The small
microkernel contains only the code that must execute in kernel mode; the remaining
modules form the second part of the operating system.
2. User mode and kernel mode use different address spaces.
3. It has a small space as compared to monolithic kernel.
4. Execution speed is slower than monolithic kernel.
5. If one service crashes, the whole operating system does not fail; the crash does not
affect the working of the other parts of the microkernel.
6. Communication is done through message passing.
7. More code is required to write a microkernel.
8. It is easily extendible.
9. Example: QNX, Symbian, L4Linux, etc.
10. It is more flexible.
11. In microkernel, communication is through message passing.
d) Discuss the importance of “Multithreading”. Differentiate between kernel and user
thread. 5M
Importance of Multithreading:
- In operating system that supports multithreading, process can consist of many threads,
Theses threads run in parallel improving the application performance.
- Each such thread has it’s own CPU state and stack, but they share the address space of
the process and the environment.
- Considering the advantages of user level and kernel level threads, a hybrid threading
model using both types of threads, a hybrid threading model using both types of threads
can be implemented.
- The Solaris operating system supports this hybrid model.
- In this implementation, all the thread management functions are carried out by a
user-level thread package in user space, so operations on threads do not require kernel
intervention.
- The Java language supports the development of multithreaded processes and offers
scheduling and memory management for Java applications.
- Java applications that can benefit directly from multicore resources include application
servers.
- A single application can also benefit from a multicore architecture by running multiple
instances of the application in parallel.
- If multiple application instances require some degree of isolation, virtualization
technology can be used to provide each of them with its own separate and secure
environment. (A short threading sketch follows.)
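The point that threads share the process's address space while keeping their own stacks can
be shown with a short POSIX-threads sketch (the variable names and values are illustrative;
compile with -pthread).

```c
#include <pthread.h>
#include <stdio.h>

static int shared = 0;            /* one copy, visible to both threads */

static void *work(void *arg)
{
    int local = *(int *)arg;      /* each thread gets its own stack copy   */
    shared += local;              /* both threads update the same variable */
    printf("thread adds %d, shared is now %d\n", local, shared);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int a = 1, b = 2;

    pthread_create(&t1, NULL, work, &a);
    pthread_join(t1, NULL);       /* joined one at a time to keep output simple */
    pthread_create(&t2, NULL, work, &b);
    pthread_join(t2, NULL);

    printf("final shared value: %d\n", shared);   /* 3 */
    return 0;
}
```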
Q2
a) Differentiate between short term, medium term and long term scheduler with a
diagram. 10M
2. Speed: The long-term scheduler is slower than the short-term scheduler. The
short-term scheduler is very fast and is invoked more frequently than the long-term
scheduler. The medium-term scheduler's speed lies in between the short-term and
long-term schedulers.
b) Calculate AWT, ATAT, Response Time and Throughput of the following processes
using Shortest job first (Non-Pre-emptive).
10M
Process   Arrival Time (ms)   Burst Time (ms)
P1        1                   7
P2        2                   5
P3        3                   1
P4        4                   2
P5        5                   8
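The worked solution for this question is not reproduced here, so the following is a hedged C
sketch that simulates non-preemptive SJF on the given five processes, assuming the CPU
stays idle until P1 arrives at t = 1 ms, and prints the waiting, turnaround, and response times
plus the throughput.

```c
#include <stdio.h>

struct proc { const char *name; int arrival, burst, start, finish; };

int main(void)
{
    struct proc p[] = {
        {"P1", 1, 7, 0, 0}, {"P2", 2, 5, 0, 0}, {"P3", 3, 1, 0, 0},
        {"P4", 4, 2, 0, 0}, {"P5", 5, 8, 0, 0},
    };
    const int n = 5;
    int done = 0, t = 0;
    double wt_sum = 0, tat_sum = 0;

    while (done < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)             /* shortest job among arrived */
            if (p[i].finish == 0 && p[i].arrival <= t &&
                (pick < 0 || p[i].burst < p[pick].burst))
                pick = i;
        if (pick < 0) { t++; continue; }        /* CPU idle: nothing arrived  */

        p[pick].start  = t;                     /* non-preemptive: run to end */
        p[pick].finish = t + p[pick].burst;
        t = p[pick].finish;
        done++;
    }

    for (int i = 0; i < n; i++) {
        int tat = p[i].finish - p[i].arrival;   /* turnaround time            */
        int wt  = tat - p[i].burst;             /* waiting time               */
        int rt  = p[i].start - p[i].arrival;    /* response time (= wt here)  */
        wt_sum += wt; tat_sum += tat;
        printf("%s: waits %d ms, turnaround %d ms, response %d ms\n",
               p[i].name, wt, tat, rt);
    }
    printf("AWT = %.2f ms, ATAT = %.2f ms, throughput = %.3f proc/ms\n",
           wt_sum / n, tat_sum / n, (double)n / t);
    return 0;
}
```

Under that assumption the execution order is P1, P3, P4, P2, P5 with completion times 8, 9,
11, 16, and 24 ms, giving an average waiting time of 6 ms, an average turnaround time of
10.6 ms, and a throughput of 5 processes in 24 ms.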
All four of the above-mentioned conditions must hold simultaneously for a deadlock to occur.
Q4
a) Explain different types of memory fragmentation. 8M
- As processes are loaded and removed from memory, the free memory space is broken into
little pieces.
- After some time, it happens that processes cannot be allocated to memory blocks because
the blocks are too small, and those memory blocks remain unused. This problem is known as
fragmentation.
Fragmentation is of two types –
External Fragmentation:
- It exists when there is enough total memory space available to satisfy a request, but the
available memory space is not contiguous.
- Storage space is fragmented into a large number of small holes.
- Both the first-fit and best-fit strategies suffer from this.
- First fit is better in some systems, whereas best fit is better in others.
- Depending on the total amount of memory and the average request size, external
fragmentation may be a minor or a major problem.
- Statistically, given N allocated blocks, another 0.5 N blocks will be lost to fragmentation;
that is, one third of memory may be unusable.
Internal Fragmentation:
- Consider a multiple-partition allocation scheme with a hole of 18,464 bytes. The next process
requests 18,462 bytes. If we allocate exactly the requested amount, we are left with a hole of
2 bytes.
- The general approach to avoid this problem is to:
- Break physical memory into fixed-sized blocks and allocate memory in units based on block
size.
- Memory allocated to a process may then be slightly larger than the requested memory; the
unused part of the last block is internal fragmentation (a small sketch follows).
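A tiny C sketch of the internal-fragmentation arithmetic (the 4 KB block size is an assumed
value; the 18,462-byte request comes from the example above):

```c
#include <stdio.h>

int main(void)
{
    const long block = 4096;        /* fixed block size in bytes (assumed)  */
    const long request = 18462;     /* memory requested by a process        */

    long blocks_needed = (request + block - 1) / block;   /* round up       */
    long allocated = blocks_needed * block;
    long internal_frag = allocated - request;  /* unused bytes inside the
                                                  last allocated block      */

    printf("allocated %ld bytes in %ld blocks, internal fragmentation %ld bytes\n",
           allocated, blocks_needed, internal_frag);
    return 0;
}
```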
b) Compare the performance of FIFO, LRU and Optimal based on number of pages hit for
the following string. Frame size =3; String(pages): 1 2 3 4 5 2 1 3 3 2 4 5 12M
1. FIFO
Reference: 1  2  3  4  5  2  1  3  3  2  4  5
Frame 0:   1  1  1  4  4  4  1  1  1  1  1  5
Frame 1:   -  2  2  2  5  5  5  3  3  3  3  3
Frame 2:   -  -  3  3  3  2  2  2  2  2  4  4
PF:        Y  Y  Y  Y  Y  Y  Y  Y  -  -  Y  Y
2. LRU
Reference: 1  2  3  4  5  2  1  3  3  2  4  5
Frame 0:   1  1  1  4  4  4  1  1  1  1  4  4
Frame 1:   -  2  2  2  5  5  5  3  3  3  3  5
Frame 2:   -  -  3  3  3  2  2  2  2  2  2  2
PF:        Y  Y  Y  Y  Y  Y  Y  Y  -  -  Y  Y
3. Optimal
Reference: 1  2  3  4  5  2  1  3  3  2  4  5
Frame 0:   1  1  1  1  1  1  1  3  3  3  3  3
Frame 1:   -  2  2  2  2  2  2  2  2  2  4  4
Frame 2:   -  -  3  4  5  5  5  5  5  5  5  5
PF:        Y  Y  Y  Y  Y  -  -  Y  -  -  Y  -
Performance:
Algorithm    FIFO   LRU   Optimal
Page Hit     2      2     5
Page Miss    10     10    7
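The hit and miss counts above can be cross-checked with a small simulation. The C sketch
below implements FIFO and LRU for 3 frames on the given reference string; it is illustrative
helper code, not part of the original answer, and the optimal policy is omitted because it needs
knowledge of future references.

```c
#include <stdio.h>

#define FRAMES 3

static const int refs[] = {1, 2, 3, 4, 5, 2, 1, 3, 3, 2, 4, 5};
static const int N = 12;

/* Simulate FIFO (use_lru == 0) or LRU (use_lru == 1); return page faults. */
static int simulate(int use_lru)
{
    int frame[FRAMES], stamp[FRAMES];  /* stamp: load time (FIFO) or last use (LRU) */
    int faults = 0;

    for (int f = 0; f < FRAMES; f++) { frame[f] = -1; stamp[f] = -1; }

    for (int t = 0; t < N; t++) {
        int hit = -1, victim = 0;
        for (int f = 0; f < FRAMES; f++)
            if (frame[f] == refs[t]) hit = f;

        if (hit >= 0) {                          /* page already in memory  */
            if (use_lru) stamp[hit] = t;         /* refresh recency for LRU */
            continue;
        }
        faults++;
        for (int f = 1; f < FRAMES; f++)         /* oldest stamp = victim   */
            if (stamp[f] < stamp[victim]) victim = f;
        frame[victim] = refs[t];
        stamp[victim] = t;
    }
    return faults;
}

int main(void)
{
    int fifo = simulate(0), lru = simulate(1);
    printf("FIFO: %d faults, %d hits\n", fifo, N - fifo);
    printf("LRU : %d faults, %d hits\n", lru, N - lru);
    return 0;
}
```

For this reference string both FIFO and LRU produce 10 faults and 2 hits, matching the
corrected tables above.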
Q5
a) Explain Interrupt driven IO and discuss the advantages of Interrupt driven IO over
programmed IO. 10M
- When the CPU issues an I/O command on behalf of a process, two possibilities exist: the
CPU can repeatedly poll the device until the operation completes (programmed I/O), or it can
continue with other work and let the device interrupt it when it is ready.
- Interrupt-driven I/O is the second, alternative scheme for dealing with I/O. Interrupt I/O is a
way of controlling input/output activity whereby a peripheral or terminal that needs to make or
receive a data transfer sends a signal, which causes a program interrupt to be set.
- At a time appropriate to the priority level of the I/O interrupt relative to the total interrupt
system, the processor enters an interrupt service routine.
- The function of the routine will depend upon the system of interrupt levels and priorities that is
implemented in the processor.
- Basic operation of Interrupt
1. CPU issues read command.
2. I/O module gets data from peripheral whilst CPU does other work.
3. I/O module interrupts CPU.
4. CPU requests data.
5. I/O module transfers data.
- Design Issues
There are 2 main problems for interrupt I/O, which are:
- When there are multiple I/O modules, how should the processor determine which device
issued the interrupt signal?
- How does the processor decide which module to process when multiple interrupts have
occurred?
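To illustrate the advantage of interrupt-driven I/O over programmed I/O, here is a user-space
C sketch in which SIGALRM stands in for the device interrupt and a flag stands in for the
device status register (this is only a simulation; real drivers read hardware registers and run
their handlers in the kernel). The first loop burns CPU cycles polling; the second lets the CPU
sleep until the "interrupt" arrives, leaving it free for other work.

```c
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t data_ready = 0;

/* "Interrupt service routine": the device signals completion. */
static void device_interrupt(int signo)
{
    (void)signo;
    data_ready = 1;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = device_interrupt;
    sigaction(SIGALRM, &sa, NULL);

    /* Programmed I/O: busy-wait, polling the status "register". */
    alarm(1);                       /* device will be ready in about 1 s */
    while (!data_ready)
        ;                           /* CPU cycles wasted on polling      */
    printf("programmed I/O: transfer finished after busy-waiting\n");

    /* Interrupt-driven I/O: block until the interrupt arrives. */
    data_ready = 0;
    alarm(1);
    while (!data_ready)
        pause();                    /* CPU is free to do other work      */
    printf("interrupt-driven I/O: transfer finished, no polling needed\n");
    return 0;
}
```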