OS 2 marks

The document provides a comprehensive overview of operating systems, defining their functions, goals, and various types such as batch processing and real-time systems. It discusses key concepts like multiprogramming, system calls, and the differences between various system architectures, including monolithic and microkernel designs. Additionally, it covers the advantages and disadvantages of different operating system designs and the role of interrupts and memory management in system performance.

UNIT 1

1. Define operating systems.


Ans. : An operating system is a program that manages the computer hardware. It acts as an
intermediary between the user of a computer and the computer hardware. It controls and
coordinates the use of the hardware among the various application programs for the various
users.
2. What are the goals of OS?
Ans. :Operating system goals :
a. Execute user programs and make solving user problems easier.
b. Make the computer system convenient to use.
3. List down the functions of operating systems.
Ans. : Functions of operating systems are convenience, efficiency and ability to evolve.
4. What are the objectives of operating systems? AU: [CSE]: May-17, Dec.-17
Ans. : The objectives of operating systems are efficient use, user convenience, and the ability
to evolve.
5. Compare and contrast DMA and Cache memory.
Ans. : • DMA technology provides special channels for the CPU and I/O devices to exchange
I/O data, and the memory is used for buffering the I/O data. When the CPU wants to handle
I/O data, it triggers the DMA write operations that transfer the I/O data from I/O devices to
the memory.
• Caches are "automatically" managed in that the hardware, when the requested memory
contents are not in the cache, fetches that data from main memory.
6. How does an interrupt differ from trap? AU: [EIE]: May-18
Ans. : Trap is a software-generated interrupt caused either by an error or by a specific request
from a user program that an operating-system service be performed. A trap usually results in
a switch to kernel mode. Interrupt signals can cause a program to suspend itself temporarily
to service the interrupt. An interrupt is a hardware-generated signal that changes the flow
within the system.
7. What are the three main purposes of an operating system ?
Ans. :
 To provide an environment for a computer user to execute programs on computer
hardware in a convenient and efficient manner.
• To allocate the separate resources of the computer as needed to solve the problem given.
The allocation process should be as fair and efficient as possible.
• As a control program it serves two major functions: 1) Supervision of the execution of
user programs to prevent errors and improper use of the computer and 2) Management of
the operation and control of I/O devices.
8. Explain what is batch processing.
Ans. : Here jobs with similar requirements are batched together and run through the
computer as a group. Thus a batch operating system reads a stream of separate jobs, each
with its own control cards that predefine what the job does, feeds the batches one after another,
and sends the output of each job to the appropriate destination.
9. What is spooling ?
Ans. : The use of secondary memory as buffer storage to reduce processing delays when
transferring data between peripheral equipment and the processors of a computer.
10. Define the degree of multiprogramming.
Ans. : Degree of multiprogramming is the number of processes in the memory.
11. What is the main advantage of multiprogramming ?
Ans. : Multiprogramming makes efficient use of the CPU by overlapping the demands for the
CPU and its I/O devices from various users. It attempts to increase CPU utilization by always
having something for the CPU to execute.
12. What do you mean by multiprogramming?
Ans. : Multiprogramming increases CPU utilization by organizing jobs so that the CPU
always has one to execute.
13. List the disadvantages of multiprogrammed batched systems.
Ans. :Disadvantages
 Users cannot interact with their jobs while they are executing.
 A programmer cannot modify a program as it executes to study its behavior.
14. Does time sharing differ from multiprogramming? If so, how?
Ans. : Time sharing is the sharing of resources among several processes at the same time.
Multiprogramming is the allocation of more than one process on a computer system and its
resources. Time sharing minimizes the response time and multiprogramming maximizes the
processor use. Time sharing systems use the concept of multiprogramming to share the CPU
time between multiple users at the same time.
15. Write the difference between batch system and time sharing systems.
Ans. : A batch system executes jobs, whereas a time-shared system has user programs, or
tasks. Batch systems are inconvenient for users because users cannot interact with their jobs
to fix problems. The user interacts with the system in a time-sharing system.
16. Describe the differences between symmetric and asymmetric multiprocessing.
Ans. :Symmetric multiprocessing treats all processors as equals and I/O can be processed on
any CPU. Asymmetric multiprocessing has one master CPU and the remainder CPUs are
slaves. The master distributes tasks among the slaves and I/O is usually done by the master
only.
17. What is tightly coupled system ?
Ans. : Processors share memory and a clock; communication usually takes place through the
shared memory.
18. What are the disadvantages of multiprocessor systems ?
Ans. : Multiprocessor systems are more complex in both hardware and software. Additional
CPU cycles are required to manage the cooperation, so per-CPU efficiency goes down.
19. Distinguish between tightly coupled system and loosely coupled system.

20. What is difference between networked O.S. and distributed O.S. ?


Ans. : The difference between a networked O.S. and a distributed O.S. is as follows:
21. Mention the advantages in the design of distributed operating systems.
Ans. :Advantages are as follows:
 Resource sharing
 Higher reliability
 Better price performance ratio
 Shorter response times and higher throughput.
22. What are the advantages of peer-to-peer systems over client server systems?
Ans. : Peer-to-peer system is more reliable as central dependency is eliminated. All the
resources and contents are shared by all the peers, unlike server-client architecture where
server shares all the contents and resources.
23. Define real time system.
Ans. : A real-time system is one that must react to inputs and respond to them quickly. A real-
time system has well-defined, fixed time constraints.
24. What is meant by "hard real-time systems and soft real-time systems"?
Hard real-time systems guarantee that critical tasks complete on time. In a soft real-time system,
a critical task gets priority over other tasks and retains that priority until it completes.
25. What is the main difficulty that a programmer must overcome in writing an operating
system for a real-time environment ?
Ans. : The main difficulty is keeping the operating system within the fixed time constraints
of a real-time system. If the system does not complete a task in a certain time frame, it may
cause a breakdown of the entire system it is running. Therefore when writing an operating
system for a real-time system, the writer must be sure that his scheduling schemes don't allow
response time to exceed the time constraint.
26. What are the main advantages of the layered approach?
o Each layer is implemented using only those operations provided by lower-level layers.
o A layer does not need to know how the lower-level operations are implemented; it needs
to know only what these operations do.
o Each layer hides the existence of data structures, operations and hardware from higher-
level layers.

27. Mention the purpose of system calls. AU: May-18


Ans. : System calls allow user-level processes to request services of the operating system.
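For illustration, a minimal sketch (assuming a POSIX system and C) of a user-level process requesting an OS service through the write() system-call wrapper:

#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *msg = "hello from user space\n";
    /* Control transfers to kernel mode, the OS performs the I/O on behalf of
       the process, and the result (bytes written, or -1) is returned. */
    write(STDOUT_FILENO, msg, strlen(msg));
    return 0;
}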
28. What is the purpose of system programs ? AU: May-16
Ans. : System programs provide an application programming environment on top of the
hardware.
29. What are the five major categories of system calls ? AU CSE/IT: May-11
Five main categories of system calls are File management, IPC, process management, I/O
devices management, information management.
30. What is the function of system programs? Write the name of the categories in which
the system programs can be divided.
Ans. : System programs provide a convenient environment for program development and execution.
System programs are divided into these categories: File management, status management,
file modification, programming language support, program loading and execution,
communications.
31. What does the CPU do when there are no user programs to run?
The CPU will always do processing. Even though there are no application programs running,
the OS is still running and the CPU will still have to process many system processes during the
operation of computer.
32. List the general methods used to pass parameters in a system call.
Three general methods are used to pass parameters between a running program and the operating
system:
o Pass the parameters in registers.
o Store the parameters in a table in memory, and pass the table's address as a
parameter in a register (see the sketch below).
o Push (store) the parameters onto the stack by the program and pop them off the stack by
the operating system.
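As a hedged illustration of the second method only, the sketch below builds a parameter table in memory and passes just its address across the user/kernel boundary. The function trap_to_kernel() is a hypothetical stand-in for the architecture-specific trap instruction, not a real API:

#include <stdio.h>
#include <stddef.h>

struct syscall_params {          /* parameter table built by the user program */
    int         fd;
    const void *buf;
    size_t      count;
};

/* Hypothetical stand-in for the kernel side: it receives only the table's address. */
static long trap_to_kernel(int syscall_number, struct syscall_params *table)
{
    printf("syscall %d: fd=%d, count=%zu\n", syscall_number, table->fd, table->count);
    return (long)table->count;   /* pretend all bytes were handled */
}

int main(void)
{
    struct syscall_params p = { 1, "hi\n", 3 };
    trap_to_kernel(4, &p);       /* only the table's address crosses the boundary */
    return 0;
}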
33. Why do APIs need to be used rather than system calls?
Ans. : An application programmer designing a program using an API can expect his program
to compile and run on any system that supports the same API. Actual system calls are often
more detailed and more difficult to work with than the API available to an application programmer.
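A small sketch of this point (assuming C on a POSIX system): the program below uses only the portable C library API; on POSIX systems these calls are implemented on top of the open(), write() and close() system calls, but the program itself does not depend on them directly.

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("log.txt", "w");          /* API call; wraps open() on POSIX */
    if (f == NULL)
        return 1;
    fprintf(f, "written through the API\n");  /* buffered; eventually uses write() */
    fclose(f);                                 /* wraps close() */
    return 0;
}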
34. List the advantages and disadvantages of writing an operating system in high level
languages such as C. AU: May-18
Ans. : Advantages:
• The code can be written faster and is more compact.
• It is easier to understand and debug.
• Recompilation is simple.
Disadvantages :
• There could be a performance overhead introduced by the compiler and
runtime system used for the high-level language.
• Certain operations and instructions that are available at the machine-level
might not be accessible from the language level, thereby limiting some of
the functionality available to the programmer.
35. What is dual mode operation and what is the need of it? AU [CSE] May-19
• Two modes are provided by the hardware - user and kernel mode. Mode
bit is added to computer hardware to indicate the current mode. The OS
is loaded and then user process is started in user mode. When an interrupt
occurs, hardware switches to kernel mode.
• In order to ensure the proper execution of the OS, the system must be able to
distinguish between the execution of OS code and user-defined code.
36. List out some system calls required to control the communication system. AU [CSE]:
May-19
System calls for the communication system: open connection and close connection, read
message and write message, get host ID and get process ID.
37. What is an Interrupt? [R]
a. Interrupts are provided primarily as a way to improve processor utilization.
b. An interrupt is a mechanism by which other modules (I/O, memory) may interrupt the
normal sequencing of the processor.
Classes of interrupts:
c. Program
d. Timer
e. I/O
f. Hardware failure
38. What characteristics distinguish the various elements of a memory hierarchy?
Characteristics are cost per bit, capacity, access time, and frequency of access to the memory
by the processor.

39. What is Cache Memory? [R]


Cache memory is invisible to the OS.
It interacts with other memory management hardware.
The cache contains a copy of a portion of main memory.

What are the three objectives of an OS design? [R]


a. Convenience – An OS makes a computer more convenient to use
b. Efficiency -- An OS allows the system resources to be used in efficient
manner
c. Ability to Evolve – An OS should be constructed in such a way as to permit the effective
development, testing and introduction of new system functions.
40. List the Services of operating system function. (NOV/DEC 2013) [R]
 Program development
 Program execution
 User Interface
 I/O Operations
 File system Manipulation
 Communication
 Error Detection
 Resource allocation
 Accounting
 Security
41. Define Kernel[R]
The kernel is the software code that resides in the central core of an operating
system. It has complete control over the system.
42. Define system call. (MAY/JUNE 2009, APRIL/MAY 2019)
i. A system call provides the interface between a running program and the
OS. A user can request services from the OS through system calls.
ii. Categories of system calls:
 File management
 Process management
 Inter-process communication
 I/O device management
 Information processing & maintenance
43. What is System Programs? (APRIL/MAY 2011) [R]
i. System programs provide a convenient environment to the user for
developing and executing programs.
ii. Categories:

 File management
 Status Information
 File Modification
 Programming language support
 Program loading & execution
 Communication
44. What is Boot strapping? [R]
i. The boot program is stored on disk at a predetermined address called the boot
sector. The boot program then loads the operating system into memory to
start up the computer. This arrangement is known as bootstrapping.
45. Difference b/w Monolithic & Microkernel. [An]

Monolithic kernel:
 Kernel size is large.
 OS is complex to design.
 Requests may be serviced faster.
 All OS services are included in the kernel.

Microkernel:
 Kernel size is small.
 OS is easy to design, implement & install.
 Requests may be serviced slower.
 Kernel provides only IPC and low-level device management services.

59. What is Multiprogramming? (MAY/JUNE 2013) [R]
i. Multiprogramming increases CPU utilization by organizing jobs so that the
CPU always has one to execute.
ii. Advantages:
 It increases CPU utilization.
 It makes efficient use of the CPU by overlapping the demands for the CPU & I/O
devices.
 Increased throughput.
 Lower response time.
60. Define Real Time System [R]
A real-time system is one that must react to inputs & respond to them quickly. A real-time
system has well-defined, fixed time constraints.
61. What does the CPU do when there are no user programs to run?
The CPU will always do processing. Even though there are no application programs running,
the operating system is still running and the CPU will still have to process many system processes.
62. Describe the actions taken by a kernel to context-switch between processes.
In general, the operating system must save the state of the currently running process and
restore the state of the process scheduled to be run next. Saving the state of a process
typically includes the values of all the CPU registers in addition to memory allocation.
Context switches must also perform many architecture-specific operations, including
flushing data and instruction caches.

63. What is multicore processor? [R]


A recent hardware trend has been to place multiple processor cores on the same physical chip,
resulting in a multicore processor. Each core maintains its architectural state and thus appears to the
operating system to be a separate physical processor.
64. What is memory stall? [R]
Researchers have discovered that when a processor accesses memory, it spends a
significant amount of time waiting for the data to become available. This situation, known
as a memory stall, may occur for various reasons, such as a cache miss.
65. What is Boot strapping? [R]
a. The boot program is stored on disk at a predetermined address called the boot sector.
b. The boot program then loads the operating system into memory to start up the
computer. This arrangement is known as bootstrapping.
66. Can multiple user level threads achieve better performance on a multiprocessor
system than a single processor system? Justify your answer. (MAY/JUNE 2014 )
We assume that user-level threads are not known to the kernel. In that case, the
answer is no, because the scheduling is done at the process level. On the other hand, some
OSs allow user-level threads to be assigned to different kernel-level processes for the
purposes of scheduling. In this case the multithreaded solution could be faster.

67. Mention the circumstances that would a user be better off using a time-sharing
system rather than a PC or a single user workstation? (MAY/JUNE 2014) [An]
A user is better off under three situations: when it is cheaper, faster, or
easier.
For example: When the user is paying for management costs and the costs are
cheaper for a time-sharing system than for a single-user computer.
When running a simulation or calculation that takes too long to run on a single PC or
workstation. When a user is travelling and doesn't have a laptop to carry around, they can
connect remotely to a time-shared system and do their work.
68. Does timesharing differ from multiprogramming? If so, how? (APR/MAY 2015)
i. Time sharing: here, the OS assigns time slots to each job, and each job is
executed according to the allotted time slots.
Example - Job1: 0 to 5, Job2: 5 to 10, Job3: 10 to 15.
ii. Multi-tasking: in this operating system, jobs are executed in parallel by the
operating system. But we can achieve this multi-tasking only through multiple
processors (or) a multicore CPU.
Example - CPU1: Job1, CPU2: Job2, CPU3: Job3.
69. Why do APIs need to be used rather than system calls? (APR/MAY 2015) [An]
System calls are much slower than APIs (library calls) since for each system call, a context
switch has to occur to load the OS (which then serves the system call). Most details of the OS
interface are hidden from the programmer by the API and managed by the run-time support
library (a set of functions built into libraries included with the compiler).
70. Describe the actions taken by a thread library to context switch between user- level
threads. [An]
The user-level threads are known only within a given process. To context switch, we only
need to save the thread-specific context: the program counter, CPU registers, and the
thread-specific stack pointer
71. What is the purpose of system programs? (May/Jun 2016) [R]
System programs can be thought of as bundles of useful system calls. They provide basic
functionality to users so that users do not need to write their own programs to solve
common problems.
72. What are the advantages of peer-to-peer systems over client-server systems?
(May/Jun 2016) [R]
a. It is easy to install and so is the configuration of computers on this network, all the
resources and contents are shared by all the peers, unlike server-client architecture
where Server shares all the contents and resources.
b. P2P is more reliable as central dependency is eliminated. Failure of one peer doesn't
affect the functioning of other peers. In case of a client-server network, if the server
goes down the whole network gets affected.
c. There is no need for full-time System Administrator. Every user is the administrator
of his machine. User can control their shared resources.
d. The over-all cost of building and maintaining this type of network is comparatively
very less.
73. Compare and contrast DMA and cache memory. (Nov/Dec 2015) [An]
a. DMA is a hardware device that can move data to/from memory without using CPU
instructions.
b. For instance, a hardware device (let's say, your PCI sound device) wants audio to
play back. You can either:
c. Write a word at a time via CPU mov instructions, or
d. Configure the DMA device: you give it a start address, a destination, and the
number of bytes to copy. The transfer then occurs while the CPU does something
else instead of spoon-feeding the audio device.
e. DMA can be very complex (scatter-gather, etc.), and varies by bus type and system.

74. Write the difference between Batch systems and Time sharing systems. (Nov/Dec
2015) [An]
a. A batch is a sequence of jobs. This batch is submitted to a batch processing operating
system, and output appears at some later time in the form of a program result or a
program error. To speed up processing, similar jobs are batched together. The major
task of batch operating systems is to transfer control automatically from one job to the
next. Here the operating system is always in memory.
b. Time sharing or multi-tasking is a logical extension of multiprogramming. Multiple
jobs are executed by the CPU switching between them. Here the computer system
provides on-line communication between the user and the system.
c. Here the CPU is never idle. A time-shared operating system allows many users to
share the computer simultaneously.
d. Time sharing systems require some sort of memory management and protection.
75. How does an interrupt differ from a trap or what is the difference between trap and
interrupt? (NOV/DEC 2016) [An] (APR/MAY 2018)
An interrupt handler is called to deal with the cause of the interrupt; control is then returned
to the interrupted context and instruction. A trap is a software-generated interrupt. An
interrupt can be used to signal the completion of an I/O operation to obviate the need for device
polling.
76. What are the disadvantages of multiprocessor systems? (NOV/DEC 2016) [R]
a. If one processor fails, it will affect the speed of the system.
b. Multiprocessor systems are expensive.
c. A complex OS is required.
d. It is more complex.
e. It requires context switching, which slightly impacts performance.
77. Consider memory systems with a cache access time of 10 ns and a memory access
time of 110 ns - assume the memory access time includes the time to check the cache.
If the effective access time is 10 % greater than the cache access time, what is the hit
ratio H? (APR/MAY 2017) [E]
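A worked solution, assuming the usual effective-access-time formula
EAT = H × (cache access time) + (1 − H) × (memory access time),
where the memory access time already includes the cache check:
EAT = 10 ns + 10 % = 11 ns
11 = H × 10 + (1 − H) × 110 = 110 − 100H
100H = 99, so H = 0.99
Hence the hit ratio is 0.99 (99 %).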

78. What are the objectives of operating systems? (APR/MAY 2017) [R] (NOV/DEC
2017) [R]
i. An OS is a program that controls the execution of application programs and
acts as an interface between applications and the computer hardware.
Objectives of OS: Convenience: An OS makes a computer more convenient
to use. Efficiency: An OS allows the computer system resources to be used
in an efficient manner.
79. What is SYS GEN and system boot ?[R] (NOV/DEC 2017) (NOV/DEC 2021)
 SYS GEN: Short for SYStem GENeration, SYSGEN is a utility that enables an
operating system to configure hardware and software setups properly.
 System Boot:
 Booting the system is done by loading the kernel into main memory and starting its
execution.
 The CPU is given a reset event, and the instruction register is loaded with a
predefined memory location, where execution starts.
80. Mention the purpose of system calls.[R] (APR/MAY 2018)
a. System calls allow user-level processes to request services of the operating system.
b. Creating or deleting files in a file system requires a system call.
c. Reading from and writing to files also require system calls.
d. Creation and management of new processes.
e. Network connections also require system calls; this includes sending and
receiving packets.
f. Access to hardware devices such as a printer, scanner, etc. requires a system call.
81. What is dual mode operation and what is the need of it? (APRIL/MAY 2019)
a. In monitor mode, the CPU can use all instructions and access all areas of memory.
b. In user mode, the CPU is restricted to unprivileged instructions and a specified area
of memory. User code should always be executed in user mode, and the OS design
ensures that it is.
82. Distinguish between Multicore and Multiprocessor. (NOV/DEC 2021)
i. Multicore systems have a single processor with multiple processing units.
These processing units are termed cores. On the other hand, multiprocessor
systems have two or more processors. A multiprocessor system is much
more reliable than a multicore system, but a multiprocessor system has a
complex configuration compared to a multicore system. Both multicore
processors and multiprocessors are used to speed up the computing
process of the system.

UNIT II
83. Define process. What is the information maintained in a PCB ?
Ans. A process is simply a program in execution, i.e., an instance of a program.
The PCB maintains pointer, state, process number, CPU registers, PC, memory allocation etc.
84. Define task control block.
TCB is also called PCB.
85. What is PCB? Specify the information maintained in it. AU CSE/IT: Dec.-12
Each process is represented in the operating system by a process control block. PCB contains
information like process state, program counter, CPU register, accounting information etc.
86. What is independent process ?
Independent process cannot affect or be affected by the execution of another process.
87. Name and draw five different process states with proper definition. AU: Dec.-17
Process states are new, ready, running, waiting and terminated.
88. Define context switching.
Ans. Switching the CPU to another process requires saving the state of the old process and
loading the saved state for the new process. This task is known as context switch.
89. What are the reasons for terminating execution of child process ?
Ans. Parent may terminate execution of children processes via abort system call for a variety of
reasons, such as:
 Child has exceeded allocated resources.
 Task assigned to child is no longer required.
 Parent is exiting and the operating system does not allow a child to continue if its
parent terminates.
90. What is ready queue ?
Ans. The processes that are residing in main memory and are ready and waiting to execute are
kept on a list called the ready queue.
91. List out the data fields associated with process control blocks.
Data fields associated with process control block is CPU registers, PC, process state, memory
management information, input-output status information etc.
92. What are the properties of communication link ?
Properties of communication link
1. Links are established automatically.
2. A link is associated with exactly one pair of communicating processes.
3. Between each pair there exists exactly one link.
4. The link may be unidirectional, but is usually bidirectional.
93. What is socket ?
A socket is defined as an endpoint for communication.
94. What is non-preemptive scheduling ?
Ans.Non-preemptive scheduling ensures that a process relinquishes control of the CPU only
when it finishes with its current CPU burst.
95. Differentiate preemptive and non-preemptive scheduling.
Ans. :

97. What do you mean by short term scheduler ?
Short term scheduler, also known as a dispatcher executes most frequently, and makes the
finest-grained decision of which process should execute next. This scheduler is invoked
whenever an event occurs.
98. Which are the criteria used for CPU scheduling ?
Criteria used for CPU scheduling are CPU utilization, throughput, turnaround time, waiting
time, response time.
99. Explain why two level scheduling is commonly used.
Ans. It provides the hybrid solution to the problem of providing good system utilization and
good user service simultaneously.
100. Why is it important for the scheduler to distinguish I/O-bound programs from CPU-
bound programs?
I/O-bound programs have the property of performing only a small amount of computation
before performing IO. Such programs typically do not use up their entire CPU quantum.
CPU-bound programs, on the other hand, use their entire quantum without performing any
blocking IO operations. Consequently, one could make better use of the computer's resources
by giving higher priority to I/O-bound programs and allow them to execute ahead of the
CPU-bound programs.
101. What is response time?
Response time is the amount of time it takes from when a request was submitted until the
first response is produced, not the final output.
102. Define waiting time.
Amount of time a process has been waiting in the ready queue.
103. Define scheduling algorithm ?
In multiprogramming systems, whenever two or more processes are simultaneously in the ready
state, a choice has to be made which process to run next. The part of the OS that makes the
choice is called the scheduler and the algorithm it uses is called the scheduling algorithm.
104. Define the term dispatch latency.
Time it takes for the dispatcher to stop one process and start another running.
105. What is preemptive priority method?
Ans. A preemptive priority scheduler will preempt the CPU if the priority of the newly arrived
process is higher than the priority of the currently running process.
106. What is medium term scheduling ?
Medium-term scheduling used especially with time-sharing systems as an intermediate
scheduling level. A swapping scheme is implemented to remove partially run programs from
memory and reinstate them later to continue where they left off.
107. What is preemptive scheduling ?
Preemptive scheduling can preempt a process which is utilizing the CPU in between its
execution and give the CPU to another process.
108. What is the difference between long-term scheduling and short-term scheduling ?
Long term scheduling adds jobs to the ready queue from the job queue. Short term scheduling
dispatches jobs from the ready queue to the running state.
109. List out any four scheduling criteria.
Response time, throughput, waiting time and turn around time.
110. Define the term 'Dispatch latency'.
Dispatch latency: Time it takes for the dispatcher to stop one process and start another running.
It is the amount of time required for the scheduler to stop one process and start another.
111. Distinguish between CPU-bounded and I/O bounded processes.
An I/O-bound process spends more of its time doing I/O than computation and typically has
many short CPU bursts. A CPU-bound process spends more of its time doing computation and
typically has a few very long CPU bursts.
112. Define priority inversion problem.
The higher priority process would be waiting for the low priority one to finish. This situation
is known as priority inversion problem.
113. What advantage is there in having different time-quantum sizes on different levels of a
multilevel queuing system?
Processes that need more frequent servicing, for instance, interactive processes such as
editors, can be in a queue with a small time quantum. Processes with no need for frequent
servicing can be in a queue with a larger quantum, requiring fewer context switches to complete
the processing, and thus making more efficient use of the computer.
114. How does real-time scheduling differs from normal scheduling ?
Normal scheduling provides no guarantee on when a critical process will be scheduled; it
guarantees only that the process will be given preference over non-critical processes. Real-time
systems have stricter requirements. A task must be serviced by its deadline; service after the
deadline has expired is the same as no service at all.
115. What is Shortest-Remaining-Time-First (SRTF) ?
If a new process arrives with a CPU burst length less than the remaining time of the currently
executing process, the currently executing process is preempted. This scheme is known as
Shortest-Remaining-Time-First.
116. What is round robin CPU scheduling ?
Each process gets a small unit of CPU time (time quantum). After this time has elapsed, the
process is preempted and added to the end of the ready queue.
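A small hypothetical example (quantum = 2 ms; P1, P2 and P3 arrive at time 0 needing 5 ms, 3 ms and 2 ms of CPU time): the CPU is allocated as P1(0-2), P2(2-4), P3(4-6), P1(6-8), P2(8-9), P1(9-10). The waiting times are P1 = 5 ms, P2 = 6 ms, P3 = 4 ms, giving an average waiting time of 5 ms.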
117. What is meant by starvation in operating system?
Starvation is a resource management problem where a process does not get the resources
(CPU) it needs for a long time because the resources are being allocated to other processes.
118. What is aging?
Aging is a technique to avoid starvation in a scheduling system. It works by adding an aging
factor to the priority of each request. The aging factor must increase the request's priority as
time passes and must ensure that a request will eventually become the highest-priority request.
119. How to solve starvation problem in priority CPU scheduling ?
Aging - as time progresses, increase the priority of the process, so eventually the process will
become the highest priority and will gain the CPU, i.e., the more time a process spends
waiting in the ready queue, the higher its priority becomes.
120. What is convoy effect?
A convoy effect happens when a set of processes need to use a resource for a short time, and
one process holds the resource for a long time, blocking all of the other processes. Essentially
it causes poor utilization of the other resources in the system.
121. How can starvation / indefinite blocking of processes be avoided in priority scheduling?
A solution to the problem of indefinite blockage of processes is aging. Aging is a technique
of gradually increasing the priority of processes that wait in the system for a long time.
122. "Priority inversion is a condition that occurs in real time systems where a low priority
process is starved because higher priority processes have gained hold of the CPU" -
Comment on this statement. AU CSE: May-17
A low-priority thread always starts on a shadow version of the shared resource, and the original
resource remains unchanged. When a high-priority thread needs a resource engaged by a low-
priority thread, the low-priority thread is preempted, the original resource is restored and the
high-priority thread is allowed to use the original resource.
123. Provide two programming examples in which multithreading provides better
performance than a single-threaded solution.
 A Web server that services each request in a separate thread.
 A parallelized application such as matrix multiplication where different parts of the matrix
may be worked on in parallel.
 An interactive GUI program such as a debugger where a thread is used to monitor user input,
another thread represents the running application, and a third thread monitors
performance.
124. State what does a thread share with peer threads.
Threads share the memory and resources of the process.
125. Define a thread. State the major advantage of threads.
A thread is a flow of execution through the process's code with its own program counter,
system registers and stack.
 Efficient communication.
 Minimize context switching time.
126. Can a multithreaded solution using multiple user-level threads achieve better
performance on a multiprocessor system than on a single processor system ?
A multithreaded system comprising multiple user-level threads cannot make use of the
different processors in a multiprocessor system simultaneously. The operating system sees
only a single process and will not schedule the different threads of the process on separate
processors. Consequently, there is no performance benefit associated with executing multiple
user-level threads on a multiprocessor system.
127. What are the differences between user-level threads and kernel-level threads ?

128. Why a thread is called as light weight process ?


A thread is lightweight, taking fewer resources than a process. It is called a lightweight process
to emphasize the fact that a thread is like a process but is more efficient, uses fewer
resources, and shares the address space with the other threads of its process.
129. What are the benefits of multithreads?
The benefits of multithreading are responsiveness, resource sharing, economy and utilization of
multiprocessor architectures.
130. Name one situation where threaded programming is normally used ?
Threaded programming would be used when a program should satisfy multiple tasks at the same
time. A good example for this would be a program running with GUI.
131. Describe the actions taken by a thread library to context switch between user-level
threads.
Context switching between user threads is quite similar to switching between kernel threads,
although it is dependent on the threads library and how it maps user threads to kernel threads.
In general, context switching between user threads involves taking a user thread off its LWP
and replacing it with another thread. This act typically involves saving and restoring the state
of the registers.
132. What is a thread pool ?
A thread pool is a collection of worker threads that efficiently execute asynchronous callbacks
on behalf of the application. The thread pool is primarily used to reduce the number of application
threads and provide management of the worker threads.
133. What is deferred cancellation in thread ?
The target thread periodically checks whether it should terminate, allowing it an opportunity
to terminate itself in an orderly fashion. With deferred cancellation, one thread indicates that
a target thread is to be cancelled, but cancellation occurs only after the target thread has
checked a flag to determine if it should be cancelled or not. This allows a thread to check
whether it should be cancelled at a point when it can be cancelled safely.
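A minimal sketch of deferred cancellation using Pthreads (POSIX assumed; compile with -pthread). The worker honours a cancellation request only at pthread_testcancel(), i.e. at a point where it is safe to stop:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg)
{
    int oldtype;
    (void)arg;
    /* Deferred cancellation is the default, but it is set explicitly here. */
    pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, &oldtype);
    for (;;) {
        /* ... do one unit of work ... */
        pthread_testcancel();    /* safe cancellation point */
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    sleep(1);
    pthread_cancel(t);           /* request cancellation of the target thread */
    pthread_join(t, NULL);       /* wait for it to terminate */
    printf("worker cancelled\n");
    return 0;
}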
134. What is Pthread ?
Pthreads refers to the POSIX standard defining an API for thread creation and
synchronization. This is a specification for thread behavior, not an implementation.
Operating system designers may implement the specification in any way they wish.
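A minimal Pthreads sketch (POSIX assumed; compile with -pthread) creating one thread and waiting for it to finish:

#include <pthread.h>
#include <stdio.h>

static void *say_hello(void *arg)
{
    printf("hello from thread %s\n", (const char *)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, say_hello, "one");  /* create the thread */
    pthread_join(tid, NULL);                       /* wait for it to terminate */
    return 0;
}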
135. What is thread cancellation ?
Under normal circumstances, a thread terminates when it exits normally, either by returning
from its thread function or by calling pthread_exit. However, it is possible for a thread to
request that another thread terminate. This is called cancelling a thread.
136. List the benefit of multithreading.
Benefit of multithreading:
 It takes less time to create thread than a new process.
 It takes less time to terminate thread than process.
137. Under what circumstances is user level threads is better than the kernel level threads?
User-level threads are much faster to switch between, as there is no context switch; further,
a problem-domain-dependent algorithm can be used to schedule among them. CPU-bound tasks
with interdependent computations, or a task that will switch among threads often, might best be
handled by user-level threads
138. What resources are required to create threads?
Thread is smaller than a process, so thread creation typically uses fewer resources than
process creation.
Creating either a user or kernel thread involves allocating a small data structure to hold a
register set, stack, and priority.
139. Differentiate single threaded and multi-threaded processes.
Single-threading is the processing of one command at a time. When one thread is paused, the
system waits until this thread is resumed. In Multithreaded processes, threads can be distributed
over a series of processors to scale. When one thread is paused due to some reason, other threads
run as normal.
140. Give a programming example in which multithreading does not provide better
performance than a single-threaded solution.
Any kind of sequential program is not a good candidate to be threaded. An example of this
is a program that calculates an individual tax return.
Another example is a "shell" program such as the C-shell or Korn shell. Such a program must
closely monitor its own working space such as open files, environment variables, and current
working directory.
141. Define mutual exclusion.
If a collection of processes share a resource or collection of resources, then often mutual
exclusion is required to prevent interference and ensure consistency when accessing the resources.
142. How the lock variable can be used to introduce mutual exclusion?
We consider a single, shared (lock) variable, initially 0. When a process wants to enter its
critical section, it first tests the lock. If the lock is 0, the process first sets it to 1 and then enters
the critical section. If the lock is already 1, the process just waits until the (lock) variable becomes
0. Thus, a 0 means that no process is in its critical section, and 1 means hold your horses - some
process is in its critical section. (A sketch of this scheme, and of why it is not sufficient on its
own, follows below.)
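A C sketch of the lock-variable scheme (illustrative only; the function names are made up). Note that the test of the lock and the assignment lock = 1 are two separate steps, so two processes can both see lock == 0 and enter together - which is why a correct solution needs atomic hardware support such as TestAndSet:

int lock = 0;                    /* 0 = free, 1 = some process is in its critical section */

void enter_critical_section(void)
{
    while (lock != 0)
        ;                        /* busy wait until the lock becomes free */
    lock = 1;                    /* claim the lock (NOT atomic with the test above) */
}

void leave_critical_section(void)
{
    lock = 0;                    /* release the lock */
}

int main(void)
{
    enter_critical_section();
    /* ... critical section ... */
    leave_critical_section();
    return 0;
}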
143. What is the hardware feature provided in order to perform mutual exclusion
operation indivisibly ?
Hardware features can make any programming task easier and improve system efficiency. The
hardware provides special instructions that allow us to test and modify the content of a word atomically.
144. Discuss why implementing synchronization primitives by disabling interrupts is
not appropriate in a single-processor system if the synchronization primitives are to
be used in user-level programs.
If a user-level program is given the ability to disable interrupts, then it can disable the timer
interrupt and prevent context switching from taking place, thereby allowing it to use the processor
without letting other processes execute.
145. What is race condition ?
A race condition is a situation where two or more processes access shared data concurrently
and final value of shared data depends on timing.
146. Define entry section and exit section.
Each process must request permission to enter its critical section. The section of the code
implementing this request is the entry section. The critical section is followed by an exit
section. The remaining code is the remainder section.
147. Elucidate mutex locks with its procedure.
A mutex lock is a software tool used to solve the critical-section problem. A mutex lock has a
boolean variable, available, whose value indicates if the lock is available or not. If the lock is
available, a call to acquire() succeeds, and the lock is then considered unavailable.
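In practice, mutex locks are provided by the threads library. A minimal Pthreads sketch (POSIX assumed; compile with -pthread) in which two threads increment a shared counter under a mutex:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int counter = 0;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);      /* acquire(): blocks if unavailable */
        counter++;                      /* critical section */
        pthread_mutex_unlock(&lock);    /* release() */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %d\n", counter);  /* 200000 when the lock is used */
    return 0;
}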
148. Name any two file system objects that are neither files nor directories and what the
advantage of doing so is.
Semaphores and monitors are two such system objects. The advantage is to avoid the critical-section problem.
149. What is binary semaphore ?
Binary semaphore is a semaphore with an integer value that can range only between 0 and 1.
150. What is semaphore? Mention its importance in operating systems.
Semaphore is an integer variable. It is a synchronization tool used to solve critical section
problem. The various hardware based solutions to the critical section problem are
complicated for application programmers to use.
151. What is the meaning of the term busy waiting ?
Busy waiting means a process waits by executing a tight loop to check the status/value of a
variable.
152. What is bounded waiting ?
After a process made a request to enter its critical section and before it is granted the permission
to enter, there exists a bound on the number of times that other processes are allowed to enter.
153. Why can't you use a test and set instruction in place of a binary semaphore ?
A binary semaphore requires either a busy wait or a blocking wait, semantics not provided
directly in the Test and Set. The advantage of a binary semaphore is that it does not require an
arbitrary length queue of processes waiting on the semaphore.
154. What is concept behind strong semaphore and spinlock ?
Semaphores can be implemented in user applications and in the kernel. A semaphore in which
the process that has been blocked the longest is released from the queue first is called a strong
semaphore.
Using a simple lock variable alone does not solve the process synchronization problem; to avoid
this, a spinlock is used. A lock that uses busy waiting is called a spin lock.
155. What is bounded buffer problem ?
The bounded-buffer producer-consumer problem assumes that there is a fixed buffer size, i.e. a
finite number of slots is available. The goal is to suspend the producer when the buffer is full, to
suspend the consumer when the buffer is empty, and to make sure that only one process at
a time manipulates the buffer so there are no race conditions or lost updates.
156. State the assumption behind the bounded buffer producer consumer problem.
Assumption: It is assumed that the pool consists of 'n' buffers, each capable of holding one
item. The mutex semaphore provides mutual exclusion for accesses to the buffer pool and is
initialized to the value 1.
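A sketch of one classical solution using POSIX semaphores (assumptions: C, POSIX, compile with -pthread). 'empty' counts free slots, 'full' counts filled slots, and 'mutex' gives mutual exclusion on the buffer itself:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                              /* number of buffer slots */

static int buffer[N];
static int in = 0, out = 0;
static sem_t empty, full, mutex;

static void *producer(void *arg)
{
    (void)arg;
    for (int item = 1; item <= 10; item++) {
        sem_wait(&empty);                /* wait for a free slot */
        sem_wait(&mutex);                /* enter critical section */
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);                /* leave critical section */
        sem_post(&full);                 /* one more filled slot */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    for (int i = 0; i < 10; i++) {
        sem_wait(&full);                 /* wait for a filled slot */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);                /* one more free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    sem_init(&empty, 0, N);              /* all N slots start empty */
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}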
157. What is a critical section and what requirements must a solution to the critical section
problem satisfy ?
Consider a system consisting of several processes, each having a segment of code called a
critical section, in which the process may be changing common variables, updating tables,
etc. The important feature of the system is that when one process is executing its critical
section, no other process is to be allowed to execute its critical section. Execution of the
critical section is mutually exclusive in time.
A solution to the critical section problem must satisfy the following three requirements: 1.
Mutual exclusion 2. Progress 3. Bounded waiting
158. Define 'monitor'. What does it consist of? AU: CSE/IT: Dec.-11
Monitor is a highly structured programming language construct. It consists of private
variables and private procedures that can only be used within a monitor.
159. Explain the use of monitors.
Use of monitors:
a) It provides a mutual exclusion facility.
b) A monitor supports synchronization by the use of condition variables.
c) Shared data structures can be protected by placing them in a monitor.
160. Give the queueing diagram representation of process scheduling.
161. What are kernel threads? AU: May-22
Kernel threads are handled by the operating system directly and the thread management
is done by the kernel. The context information for the process as well as the process
threads is all managed by the kernel.

162. Define Process?[R]


A Process can be thought of as a program in execution. A process will need certain
resources such as CPU time, memory, files & I/O devices to accomplish its task.
163. Draw & briefly explain the process states?[U] or Name and draw five different
process states with proper definition. (NOV/DEC 2017)

 New - The process is being created.
 Running - Instructions are being executed.
 Waiting - The process is waiting for some event to occur.
 Ready - The process is waiting to be assigned to a processor.
 Terminated - The process has finished execution.
164. What is a process control block? List out the data fields associated with a
PCB.
Each process is represented in the operating system by a process control block (PCB), also
called a task control block. The data fields associated with a PCB are:

 Process state
 Process number
 Program counter
 CPU registers
 Memory limits
 List of open files
 CPU scheduling information
 Memory management information
 Accounting information
 I/O status information
165. What is meant by context switching?[R]
Switching the CPU to another process requires saving the state of the old process and
loading the saved state for the new process. This task is known as a context switch.
166. Define co- operating process and independent process.[R]
Independent process:
a. A process is independent if it cannot affect or be affected by the other processes
executing in the system.
b. A process that does not share data with any other process is independent.
Cooperating process:
c. A process is co-operating if it can affect or be affected by other processes
executing in the system.
d. Any process that shares data with any other process is cooperating.
167. What are the benefits of multithreaded programming? [R]
The benefits of multithreaded programming can be broken down into four major categories:
 Responsiveness
 Resource sharing
 Economy
 Scalability (utilization of multiprocessor architectures)
168. What is a thread?[R]
i. A thread, otherwise called a lightweight process (LWP), is a basic unit of CPU
utilization. It comprises a thread ID, a program counter, a register set and a stack. It
shares with other threads belonging to the same process its code section, data section,
and operating system resources such as open files and signals.
169. Under What circumstances CPU scheduling decision takes place.[An]
 When a process switches from the running state to the waiting state.
 When a process switches from the running state to the ready state.
 When a process switches from the waiting state to the ready state.
 When a process terminates.
170. What are the various scheduling criteria for CPU scheduling?[R]
The various scheduling criteria are
 CPU utilization
 Throughput
 Turnaround time
 Waiting time
 Response time
171. Write down the definition of TestAndSet() Instruction.[R]
boolean TestAndSet (boolean *target)
{
    boolean rv = *target;
    *target = true;
    return rv;
}
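A sketch of how TestAndSet() gives mutual exclusion (a spinlock). Real hardware executes TestAndSet atomically; the C function below only simulates its behaviour for illustration, and the helper names are made up:

#include <stdbool.h>

static bool lock = false;              /* false = free, true = held */

static bool TestAndSet(bool *target)   /* atomic on real hardware */
{
    bool rv = *target;
    *target = true;
    return rv;
}

void enter_critical_section(void)
{
    while (TestAndSet(&lock))
        ;                              /* spin until the returned (old) value is false */
}

void leave_critical_section(void)
{
    lock = false;                      /* release the lock */
}

int main(void)
{
    enter_critical_section();
    /* ... critical section ... */
    leave_critical_section();
    return 0;
}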

172. Define busy waiting and spinlock.


Busy waiting:-
When a process is in its critical section, any other process that tries to enter its critical
section must loop continuously in the entry code. This is called busy waiting.
Spinlock:-
Busy waiting wastes CPU cycles that some other process might be able to use
productively. This type of semaphore is also called a spinlock because the process
spins while waiting for the lock.
173. What is meant by monitors? [R]
A monitor is a high-level synchronization construct. A monitor type is an ADT that presents a
set of programmer-defined operations that are provided with mutual exclusion within the
monitor.

174. What are the characterizations of deadlock?[R]


Deadlock can arise if four conditions hold simultaneously:
Mutual exclusion: only one process at a time can use a resource.
Hold and wait: a process holding at least one resource is waiting to acquire additional
resources held by other processes.
No preemption: a resource can be released only voluntarily by the process holding it,
after that process has completed its task.
Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is
waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2,
…, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that
is held by P0.
175. Differentiate a Thread form a Process. (NOV/DEC 2012)[An]
Threads
 Will by default share memory

 Will share file descriptors


 Will share file system context
 Will share signal handling
Processes
 Will by default not share memory
 Most file descriptors not shared
 Don't share file system context
 Don't share signal handling
176. What are the differences b/w user level threads and kernel level threads? (MAY
User threads
User threads are supported above the kernel and are implemented by a thread
library at the user level. Thread creation & scheduling are done in the user space,
without kernel intervention. Therefore they are fast to create and manage. A
blocking system call will cause the entire process to block.
Kernel threads
Kernel threads are supported directly by the operating system. Thread creation,
scheduling and management are done by the operating system. Therefore they are
slower to create & manage compared to user threads. If the thread performs a
blocking system call, the kernel can schedule another thread in the application for
execution.
177. What is the use of fork and exec system calls?[R]
Fork is a system call by which a new process is created. Exec is also a system
call, which is used after a fork by one of the two processes to replace the process's
memory space with a new program.
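A minimal POSIX sketch of fork() followed by exec(): the child replaces its memory image with the ls program while the parent waits for it to finish.

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                           /* create a new process */
    if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);   /* child: load a new program */
        perror("execlp");                         /* reached only if exec fails */
        return 1;
    }
    wait(NULL);                                   /* parent: wait for the child */
    printf("child finished\n");
    return 0;
}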
178. Define thread cancellation & target thread.[R]
Thread cancellation is the task of terminating a thread before it has completed.
A thread that is to be cancelled is often referred to as the target thread. For
example, if multiple threads are concurrently searching through a database and
one thread returns the result, the remaining threads might be cancelled.
179. What are the different ways in which a thread can be cancelled?[An]
Cancellation of a target thread may occur in two different scenarios:
Asynchronous cancellation: One thread immediately terminates the target thread;
this is called asynchronous cancellation.
Deferred cancellation: The target thread can periodically check if it should
terminate, allowing the target thread an opportunity to terminate itself in an
orderly fashion.
180. Define PThreads[R]
PThreads refers to the POSIX standard defining an API for thread creation and
synchronization. This is a specification for thread behavior, not an implementation.
181. What is critical section problem?[R]
Consider a system consisting of 'n' processes. Each process has a segment of code
called a critical section, in which the process may be changing common variables,
updating a table, or writing a file. When one process is executing in its critical section, no
other process can be allowed to execute in its critical section.
182. What are the requirements that a solution to the critical section problem
must satisfy?[R]
The three requirements are
 Mutual exclusion
 Progress
 Bounded waiting
183. Define mutual exclusion. (MAY/JUNE 2013)[R]
Mutual exclusion refers to the requirement of ensuring that no two processes or
threads are in their critical section at the same time, i.e. if process Pi is executing in
its critical section, then no other processes can be executing in their critical
sections.
184. Define entry section and exit section.[R]
The critical section problem is to design a protocol that the processes can use to
cooperate. Each process must request permission to enter its critical section.
Entry Section: The section of the code implementing this request is the entry section.

Exit Section: The section of the code following the critical section is an exit section.
a. The general structure:
do {
    entry section
    critical section
    exit section
    remainder section
} while (1);
185. Give two hardware instructions and their definitions which can be used
for implementing mutual exclusion.[An]
TestAndSet

boolean TestAndSet (boolean &target)
{
    boolean rv = target;
    target = true;
    return rv;
}

Swap

void Swap (boolean &a, boolean &b)
{
    boolean temp = a;
    a = b;
    b = temp;
}
186. What is semaphore? Mention its importance in operating system.
(APRIL/MAY 2010, NOV/DEC 2012)[R]
A semaphore 'S' is a synchronization tool which is an integer value that, apart from
initialization, is accessed only through two standard atomic operations: wait and signal.
Semaphores can be used to deal with the n-process critical-section problem and can
also be used to solve various synchronization problems.
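The classical busy-waiting definitions of the two operations, written as a C-style sketch (a real implementation performs each operation atomically and usually blocks the caller instead of spinning):

void wait(int *S)            /* also written P(S) */
{
    while (*S <= 0)
        ;                    /* busy wait until the semaphore is positive */
    (*S)--;
}

void signal(int *S)          /* also written V(S) */
{
    (*S)++;
}

int main(void)
{
    int S = 1;               /* binary semaphore, initially available */
    wait(&S);                /* enter critical section */
    /* ... critical section ... */
    signal(&S);              /* leave critical section */
    return 0;
}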
187. How may mutual exclusion be violated if the signal and wait operations
are not executed atomically? (MAY/JUNE 2014) [An]
A wait operation atomically decrements the value associated with a semaphore. If two
wait operations are executed on a semaphore when its value is 1, and the two operations
are not performed atomically, then it is possible that both operations might proceed to
decrement the semaphore value, thereby violating mutual exclusion.
188. Define CPU scheduling.[R]
CPU scheduling is the process of switching the CPU among various processes.
CPU scheduling is the basis of multi programmed operating systems. By switching
the CPU among processes, the operating system can make the computer more
productive.
189. What is preemptive and non-preemptive scheduling?
i. Under non preemptive scheduling once the CPU has been allocated to a
process, the process keeps the CPU until it releases the CPU either by
terminating or switching to the waiting state.
ii. Preemptive scheduling can preempt a process which is utilizing the CPU in
between its execution and give the CPU to another process.
190. What is a Dispatcher?[R]
The dispatcher is the module that gives control of the CPU to the process selected
by the short-term scheduler. This function involves:
a. Switching context.

b. Switching to user mode.


c. Jumping to the proper location in the user program to restart that program.
191. Define the term “dispatch latency” (APR/MAY 2015)[R]
The time taken by the dispatcher to stop one process and start another running is known as
dispatch latency.
192. Define throughput?[R]
Throughput in CPU scheduling is the number of processes that are completed per unit
time. For long processes, this rate may be one process per hour; for short transactions,
throughput might be 10 processes per second.
193. What is turnaround time? (NOV/DEC 2013)[R]
Turnaround time is the interval from the time of submission to the time of completion
of a process. It is the sum of the periods spent waiting to get into memory, waiting in
the ready queue, executing on the CPU, and doing I/O.
194. Define race condition.[R]
When several processes access and manipulate the same data concurrently, the outcome of
the execution depends on the particular order in which the access takes place; this is called a
race condition. To avoid a race condition, only one process at a time can manipulate the shared
variable.
195. Write the four situations under which CPU scheduling decisions take place
(MAY/JUNE 2014) [R]
CPU scheduling decisions take place under one of four conditions:

a. When a process switches from the running state to the waiting state, such as for an
I/O request or invocation of the wait ( ) system call.
b. When a process switches from the running state to the ready state, for example in
response to an interrupt.
c. When a process switches from the waiting state to the ready state, say at
completion of I/O or a return from wait ( ).
d. When a process terminates.
196. Define deadlock. (APRIL/MAY 2010)[R]
A process requests resources; if the resources are not available at that time, the
process enters a wait state. Waiting processes may never again change state, because
the resources they have requested are held by other waiting processes. This situation
is called a deadlock.
197. What is the sequence in which resources may be utilized?[R]
Under normal mode of operation, a process may utilize a resource in the following
sequence:
a. Request: If the request cannot be granted immediately, then the requesting process
must wait until it can acquire the resource.
b. Use: The process can operate on the resource.
c. Release: The process releases the resource.
198. What are the conditions under which a deadlock situation may arise?
A deadlock situation can arise if the following four conditions hold simultaneously in a
system:
• Mutual exclusion
• Hold and wait
• No pre-emption
• Circular wait
199. What is a resource-allocation graph? [R]
A resource-allocation graph is a directed graph used to describe deadlocks. The graph
consists of a set of vertices V and a set of edges E. The set of vertices V is partitioned
into two different types of nodes: P, the set consisting of all active processes in the system,
and R, the set consisting of all resource types in the system.
200. Define request edge and assignment edge. [R]
A directed edge from process Pi to resource type Rj (denoted Pi → Rj) is called a request
edge; it signifies that process Pi has requested an instance of resource type Rj and is
currently waiting for that resource. A directed edge from resource type Rj to process Pi
(denoted Rj → Pi) is called an assignment edge; it signifies that an instance of resource
type Rj has been allocated to process Pi.

201. What are the methods for handling deadlocks? (APRIL/MAY 2011)[R]
The deadlock problem can be dealt with in one of the three ways:
 Use a protocol to prevent or avoid deadlocks, ensuring that the system will never
enter a deadlock state.
 Allow the system to enter the deadlock state, detect it and then recover.

 Ignore the problem all together, and pretend that deadlocks never occur in the system.
202. How real-time Scheduling does differs from normal scheduling?
i. In normal scheduling we have two types of processes: user processes and
kernel processes. Kernel processes have time constraints, but user processes
do not.
ii. In an RTOS, all processes are kernel processes, and hence time constraints must be
strictly followed. All processes/tasks (the terms can be used interchangeably) are
priority based, and meeting the time constraints is essential for the system to run correctly.
203. What do you meant by short-term scheduler (NOV/DEC 2010) [R]
The selection process is carried out by the short-term scheduler, or CPU scheduler.
The scheduler selects a process from the processes in memory that are ready to execute
and allocates the CPU to that process.
204. What is the concept behind strong semaphore and spinlock?
A spinlock is one possible implementation of a lock, namely one that is implemented by busy
waiting ("spinning"). A semaphore is a generalization of a lock (or, the other way around, a
lock is a special case of a semaphore). Usually, but not necessarily, spinlocks are only valid
within one process, whereas semaphores can also be used to synchronize between different
processes.
A semaphore has a counter and will allow itself to be acquired by one or several threads,
depending on the value it is initialized with and, in some implementations, on its maximum
allowable value. A strong semaphore additionally keeps its waiting processes in a FIFO queue,
so every waiting process is eventually released and starvation is avoided.
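As a sketch only, a spinlock of the kind described above can be written with C11 atomics; atomic_flag_test_and_set plays the role of the TestAndSet instruction from question 185:

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = unlocked */

void spin_lock(void) {
    while (atomic_flag_test_and_set(&lock))
        ;                                     /* busy-wait ("spin") until the flag was clear */
}

void spin_unlock(void) {
    atomic_flag_clear(&lock);
}

A counting semaphore, by contrast, would block the calling thread (for example via sem_wait()) instead of spinning.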
205. What is the meaning of the term busy waiting? (May/Jun 2016) [R]
Busy waiting means that a process waits for a condition to be satisfied in a tight loop
without relinquishing the processor. Alternatively, a process could wait by relinquishing
the processor, blocking on a condition and waiting to be awakened at some appropriate time
in the future.
206. Distinguish between CPU-bounded and I/O bounded processes.
CPU Bound means the rate at which process progresses is limited by the speed of
the CPU. A task that performs calculations on a small set of numbers, for example
multiplying small matrices, is likely to be CPU bound.
I/O bound means the rate at which a process progresses is limited by the speed of the
I/O subsystem. A task that processes data from disk, for example counting the number
of lines in a file, is likely to be I/O bound.
207. What resources are required to create threads (NOV/DEC 2016) [R]
When a thread is created, it does not require any new resources of its own; the thread
shares the resources, such as memory, of the process to which it belongs. The benefit of
this sharing is that it allows an application to have several different threads of activity
all within the same address space.
208. “Priority inversion is a condition that occurs in real time systems where a low
priority process is starved because higher priority processes have gained hold of the
CPU”-Comment on this statement. (APR/MAY 2017) [An]
Priority inversion is a problematic scheduling scenario in which a high-priority task is
indirectly preempted by a lower-priority task, effectively "inverting" the relative
priorities of the two tasks. This violates the priority model, which says that high-priority
tasks can only be prevented from running by higher-priority tasks, and only briefly by
low-priority tasks that will quickly complete their use of a resource shared by the high-
and low-priority tasks.

209. Differentiate single threaded and multi-threaded processes. (APR/MAY 2017)

Multithreaded programming:
• Multiple threads run at the same time.
• Does not use an event loop with polling.
• CPU time is never wasted and idle time is minimal.
• Results in more efficient programs.
• When one thread is paused for some reason, the other threads continue to run as normal.

Single-threaded programming:
• A single thread runs at a time.
• Uses an event loop with polling.
• CPU time is wasted and idle time is higher.
• Results in less efficient programs.
• When the thread is paused, the system waits until it is resumed.

210. Elucidate mutex locks with its procedure. (NOV/DEC 2017)

A mutex is a program object that allows multiple program threads to share the same
resource, such as file access, but not simultaneously. When a program is started, a mutex
is created with a unique name. After this stage, any thread that needs the resource must
lock the mutex, keeping other threads out, while it is using the resource. The mutex is set
to unlock when the data are no longer needed or the routine is finished. In the mutex-lock
approach, in the entry section of the code a LOCK is acquired over the critical resources
modified and used inside the critical section, and in the exit section that LOCK is released.
Because the resource is locked while a process executes its critical section, no other
process can access it.
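A minimal sketch using POSIX threads (the variable and function names are illustrative, not from the original answer): the lock is acquired in the entry section, the critical section updates the shared data, and the lock is released in the exit section.

#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static int shared_count = 0;              /* resource protected by the mutex */

void increment(void) {
    pthread_mutex_lock(&m);               /* entry section: acquire the LOCK  */
    shared_count++;                       /* critical section                 */
    pthread_mutex_unlock(&m);             /* exit section: release the LOCK   */
}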
211. What are the benefits of synchronous and asynchronous communication?
(APR/MAY 2018)
a. Benefits of synchronous communication:
 The sender and receiver are automatically synchronized, so no extra buffering or
acknowledgement protocol is needed and the sender knows the message has been received.
 It is simpler to implement and easier to reason about.
b. Benefits of asynchronous communication:
 Asynchronous message passing allows more parallelism; there is less pressure on the
receiving system to act on the information or respond immediately.
 Since a process does not block, it can do some computation while the message is in
transit, and one system being down does not stall the other (for example, thousands of
emails can be sent without waiting for replies).
 In the case of receive, a process can express its interest in receiving messages on
multiple ports simultaneously.
212. Give a programming example in which multithreading does not provide
better performance than single-threaded solutions. (APR/MAY 2018)
Multithreading does not improve the performance of an inherently sequential program, for
example a program that calculates an individual tax return. Another example where
multithreading does not help is a shell program such as the Korn shell, which must wait for
each command to finish before reading the next one.
213. Give the queuing diagram representation of process scheduling. (APR/MAY
2019)
In the queuing-diagram representation, a new process is placed in the ready queue and is
eventually dispatched to the CPU. While running it may issue an I/O request (and move to an
I/O/device queue), create a child and wait for it, or be removed from the CPU by an interrupt
or time-slice expiry; in each case it eventually returns to the ready queue, and the cycle
repeats until the process terminates.
214. List out the benefits and challenge of thread handling. (APR/MAY 2019)
Benefits:
a. Responsiveness.
b. Resource sharing
c. Economy
d. Scalability.
Challenges:
a. Dividing activities
b. Balance
c. Data splitting
d. Data dependency
e. Testing and debugging
UNIT III

215. Why are page sizes always powers of 2?

Recall that paging is implemented by breaking up an address into a page number and an offset.
It is most efficient to break the address into X page bits and Y offset bits, rather than
perform arithmetic on the address to calculate the page number and offset. Because each bit
position represents a power of 2, splitting an address between bit positions results in a
page size that is a power of 2.
216. Consider a logical address space of eight pages of 1024 words each,
mapped onto a physical memory of 32 frames.[E]
a. How many bits are there in the logical address?
b. How many bits are there in the physical address?
Each page/frame holds 1024 words, so 10 bits are needed for the offset. Addressing 8 pages
requires 3 bits (2^3 = 8), so the logical address has 3 + 10 = 13 bits. Physical memory has
32 frames, which requires 5 bits (2^5 = 32), so the physical address has 5 + 10 = 15 bits.
Logical address: 13 bits
Physical address: 15 bits
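A small illustrative C fragment (the sample address is arbitrary, not from the original question) showing how such a 13-bit logical address splits into a 3-bit page number and a 10-bit offset:

#include <stdio.h>

int main(void) {
    unsigned logical = 0x16C4 & 0x1FFF;   /* some 13-bit logical address */
    unsigned offset  = logical & 0x3FF;   /* low 10 bits: offset within a 1024-word page */
    unsigned page    = logical >> 10;     /* high 3 bits: page number (0..7) */
    printf("page %u, offset %u\n", page, offset);
    return 0;
}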
217. In the IBM/370, memory protection is provided through the use of keys. A key
is a4-bit quantity. Each 2K block of memory has a key (the storage key) associated
with it. The CPU also has a key (the protection key) associated with it. A store
operation is allowed only if both keys are equal, or if either is zero. Which of the
following memory-management schemes could be used successfully with this
hardware? [E]
a. Bare machine
b. Single-user system
c. Multiprogramming with a fixed number of processes
d. Multiprogramming with a variable number of processes
e. Paging
f. Segmentation
Answer:
a. Protection is not necessary; set the system key to 0.
b. Set the system key to 0 when in supervisor mode.
c. Region sizes must be fixed in increments of 2K bytes; allocate a key with each memory block.
d. Same as above.
e. Frame sizes must be in increments of 2K bytes; allocate a key with each page.
f. Segment sizes must be in increments of 2K bytes; allocate a key with each segment.
218. What is address binding? [R]
The process of associating program instructions and data with physical memory addresses is
called address binding, or relocation.
219. Difference between internal and external fragmentation (NOV/DEC 2013) [An]
i. Internal fragmentation is the area allocated to a process but not used by the
process. This space is unusable by the system until the process releases it.
ii. External fragmentation exists when total free memory is enough for a new
process but it is not contiguous and so cannot satisfy the request; storage is
fragmented into small holes.
220. Consider the following page-reference string: 1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7,
6, 3, 2, 1, 2, 3, 6. How many page faults would occur for the following replacement
algorithms, assuming one, two, three, four, five, six, or seven frames? Remember that all
frames are initially empty, so your first unique pages will each cost one fault.
• LRU replacement • FIFO replacement • Optimal replacement
Answer:
Number of frames LRU FIFO Optimal
1 20 20 20
2 18 18 15
3 15 16 11
4 10 14 8
5 8 10 7
6 7 10 7
7 7 7 7
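The FIFO column of the table can be checked with a short simulation; this is only a sketch, assuming the standard FIFO policy of evicting the page that was loaded earliest:

#include <stdio.h>

/* Count FIFO page faults for the reference string with nframes frames. */
static int fifo_faults(const int *ref, int n, int nframes) {
    int frames[8], next = 0, faults = 0;
    for (int i = 0; i < nframes; i++)
        frames[i] = -1;                       /* all frames start empty */
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < nframes; j++)
            if (frames[j] == ref[i]) { hit = 1; break; }
        if (!hit) {                           /* page fault: replace the oldest page */
            frames[next] = ref[i];
            next = (next + 1) % nframes;
            faults++;
        }
    }
    return faults;
}

int main(void) {
    int ref[] = {1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6};
    for (int f = 1; f <= 7; f++)
        printf("%d frame(s): %d FIFO faults\n", f, fifo_faults(ref, 20, f));
    return 0;
}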

221. Define dynamic loading. [R]


To obtain better memory-space utilization, dynamic loading is used. With dynamic loading,
a routine is not loaded until it is called. All routines are kept on disk in a relocatable
load format. The main program is loaded into memory and executed. The calling routine checks
whether the needed routine has been loaded; if not, the relocatable linking loader is called
to load the desired routine into memory.
222. Define dynamic linking. [R]
i. Dynamic linking is similar to dynamic loading, except that rather than loading being
postponed until execution time, linking is postponed. This feature is usually
used with system libraries, such as language subroutine libraries.
223. What are overlays? Compare swapping and overlays [An]
i. To enable a process to be larger than the amount of memory allocated to
it, overlays are used. The idea of overlays is to keep in memory only those
instructions and data that are needed at a given time. When other
instructions are needed, they are loaded into the space previously occupied by
instructions that are no longer needed.
ii. Comparison with swapping: swapping moves an entire process between main memory and
the backing store and is managed by the operating system, whereas overlays replace only
parts of a single process and are constructed and managed by the programmer.
224. List the strategies for managing free memory in kernel? [R]
• Buddy system: the buddy system allocates memory from a fixed-size segment consisting
of physically contiguous pages. Memory is allocated in units sized as powers of 2, so
each request is satisfied by a power-of-2-sized block.
• Slab allocation: a slab is made up of one or more physically contiguous pages, and a
cache consists of one or more slabs. The slab allocator uses caches to store kernel
objects.
225. What is virtual memory? Mention its advantages.
Virtual memory is a technique that allows the execution of processes that may not be
completely in memory. It is the separation of user logical memory from physical memory.
This separation provides an extremely large virtual memory even when only a smaller
physical memory is available. The main visible advantage of this scheme is that programs
can be larger than physical memory.
225.Define Demand paging and write advantages. [R]

Virtual memory is commonly implemented by demand paging. In demand paging, the pager
brings only the necessary pages into memory instead of swapping in a whole process.
It thus avoids reading into memory pages that will not be used anyway, decreasing the
swap time and the amount of physical memory needed.

226.What is the purpose of paging the page tables? [R]

In certain situations the page tables could become large enough that by paging the page
tables, one could simplify the memory allocation problem (by ensuring that everything is
allocated as fixed-size pages as opposed to variable-sized chunks) and also enable the
swapping of portions of page table that are not currently used.
226.Compare paging with segmentation with respect to the amount of memory required
by the address translation structures in order to convert virtual addresses to physical
addresses. [An]
Paging requires more memory overhead to maintain the translation structures. Segmentation
requires just two registers per segment: one to maintain the base of the segment and the other to
maintain the extent of the segment. Paging on the other hand requires one entry per page, and
this entry provides the physical address in which the page is located.
226. What do you mean by thrashing? (APR/MAY 2015) [R] (APR/MAY 2019)
Thrashing is the coincidence of high page traffic and low CPU utilization.
227.How do you limit the effects of thrashing? [An]
To limit the effect of thrashing we can use local replacement algorithm. With Local
replacement algorithm, if the process starts thrashing, it cannot steal frames from
another process and cause the latter to thrash as well. The problem is not entirely solved.
Thus the effective access time will increase even for the process that is not thrashing.
227.What do mean by page fault? [R]
A page fault is the situation in which a page is not present in memory when the processor
needs to access it.
228.Differentiate between Global and Local page replacement algorithms. [An]

Global page replacement algorithm:
• Allows a process to select a replacement frame from the set of all frames, even if that
frame is currently allocated to another process.
• The number of frames allocated to a process can change, since a process may take frames
from other processes, thus increasing the number of frames allocated to it.
• A process cannot control its own page-fault rate.

Local page replacement algorithm:
• Each process selects only from its own set of allocated frames.
• The number of frames allocated to a process does not change.
• A process can control its own page-fault rate.
227. Define TLB. [R]
a. The Translation Look-aside Buffer (TLB) is a small, fast hardware cache in the processor
that holds information about the pages the processor has accessed recently.
b. The TLB enables faster computing because it allows address translation for those pages
to bypass the slower page-table lookup in memory.
228. Define Pre paging. [R]
a. Prepaging is an attempt to prevent the high level of initial paging; the strategy is to
bring into memory at one time all the pages that will be needed.

Example: Solaris uses prepaging.

229. Define logical address and physical address. [R]


An address generated by the CPU is referred to as a logical address. An address seen by the
memory unit, that is, the one loaded into the memory-address register of the memory, is
commonly referred to as a physical address.
230. What is the main function of the memory-management unit?[R]
The runtime mapping from virtual to physical addresses is done by a hardware device
called a memory management unit (MMU).
231. What is difference between demands paging n pure demand paging? [R]

In demand paging, a page is not loaded into main memory until it is needed. In pure demand
paging, not even a single page is loaded into memory initially, so the very first instruction
reference causes a page fault.
232. Define Copy-on-write. [R]
Copy-on-write finds its main use in virtual memory operating systems; when a process
creates a copy of itself, the pages in memory that might be modified by either the process
or its copy are marked copy-on-write.
233. Define swapping.
A process needs to be in memory to be executed. However a process can be swapped
temporarily out of memory to a backing store and then brought back into memory for
continued execution. This process is called swapping.
234. What are the common strategies to select a free hole from a set of available holes?
The most common strategies are:
a. First fit
b. Best fit
c. Worst fit


235. Define lazy swapper. [R]
Rather than swapping the entire process into main memory, a lazy swapper is used. A
lazy swapper never swaps a page into memory unless that page will be needed.
236. Define effective access time. [R]
i. Let p be the probability of a page fault (0 ≤ p ≤ 1). The value of p is expected
to be close to 0; that is, there will be only a few page faults. The effective
access time is:
ii. Effective access time = (1 − p) × ma + p × page-fault time, where ma is the
memory-access time.
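For example, assuming (these numbers are illustrative, not from the original answer) a memory-access time ma = 200 ns, a page-fault service time of 8 ms, and p = 0.001, the effective access time is 0.999 × 200 ns + 0.001 × 8,000,000 ns ≈ 8,200 ns, i.e. roughly a 40-fold slowdown caused by just one fault per thousand accesses.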

237. What is the basic approach of page replacement? [R]


If no frame is free, find one that is not currently being used and free it. A frame
can be freed by writing its contents to swap space and changing the page table to indicate
that the page is no longer in memory. The freed frame can then be used to hold the page for
which the process faulted.
238. What is the various page replacement algorithms used for page replacement?[R]
a. FIFO page replacement
b. Optimal page replacement
c. LRU page replacement
d. LRU approximation page replacement
e. Counting based page replacement
f. Page buffering algorithm.
239. What are the major problems to implement demand paging?
The two major problems in implementing demand paging are developing:
a. a frame-allocation algorithm
b. a page-replacement algorithm
240. What is a reference string?
An algorithm is evaluated by running it on a particular string of memory references and
computing the number of page faults. The string of memory references is called a reference
string.
241. Differentiate a page from a segment. (APRIL/MAY 2010) [An]
i. In segmentation, the address space is typically divided into a preset number
of segments like data segment (read/write), code segment (read-only), stack
(read/write) etc. And the programs are divided into these segments
accordingly. Logical addresses are represented as tuple <segment, offset>.
While with paging, the address space is divided into a sequence of fixed size
units called "pages". And logical addresses take the form of a tuple <page,
offset>.

242. What is address binding? (NOV/DEC 2010) [R]


i. The process of associating program instructions and data with physical
memory addresses is called address binding, or relocation.
243. How do you limit the effects of thrashing?
i. To limit the effects of thrashing we can use a local replacement algorithm.
With local replacement, if a process starts thrashing, it cannot steal frames
from another process and cause the latter to thrash as well. The problem is not
entirely solved, however: the effective access time will increase even for a
process that is not thrashing.
244. Mention the significance of LDT and GDT. (APR/MAY 2015)[R]

The Global Descriptor Table or GDT is a data structure used by Intel x86-family
processors starting with the 80286 in order to define the characteristics of the various
memory areas used during program execution, including the base address, the size and
access privileges like executability and writability. These memory areas are called
segments.
The Local Descriptor Table (LDT) is a memory table used in the x86 architecture in
protected mode and contains memory segment descriptors: start in linear memory, size,
executability, writability, access privilege, actual presence in memory, etc.
a. The LDT is supposed to contain memory segments which are private to a specific
program, while the GDT is supposed to contain global segments.
b. The x86 processors contain facilities for automatically switching the current LDT
on specific machine events, but no facilities for automatically switching the
GDT.
c. The LDT is the sibling of the Global Descriptor Table (GDT) and defines up to
8192 memory segments accessible to programs.
d. Unlike the GDT, the zeroth entry is a valid entry, and can be used like any other
LDT entry.
e. Unlike the GDT, the LDT cannot be used to store certain system entries: TSSs or
LDTs.
245. Define demand paging in memory management. What are the steps required to
handle a page fault in demand paging. (Nov/Dec 2015) [R]
A demand-paging system is quite similar to a paging system with swapping, where
processes reside in secondary memory and pages are loaded only on demand, not
in advance. When a context switch occurs, the operating system does not copy any
of the old program's pages out to disk or any of the new program's pages into
main memory. Instead, it begins executing the new program after loading its first
page and fetches that program's pages as they are referenced. While executing a
program, if the program references a page that is not in main memory because it was
swapped out a little while earlier, the processor treats this invalid memory reference
as a page fault and transfers control from the program to the operating system.
The steps to handle the fault are: check an internal table to verify that the reference
is valid; find a free frame; schedule a disk operation to read the desired page into that
frame; update the page table; and restart the instruction that was interrupted by the fault.
246. How does the system detect thrashing? (May/Jun 2016) [An]
Thrashing is caused by under allocation of the minimum number of pages required
by a process, forcing it to continuously page fault. The system can detect thrashing
by evaluating the level of CPU utilization as compared to the level of
multiprogramming. It can be eliminated by reducing the level of
multiprogramming.
247. Name two differences between logical and physical addresses. (May/Jun 2016) [R]
A logical address does not refer to an actual existing address; rather, it refers to an
abstract address in an abstract address space. Contrast this with a physical address
that refers to an actual physical address in memory. A logical address is generated by the
CPU and is translated into a physical address by the memory-management unit (MMU).
Therefore, physical addresses are generated by the MMU.
248. Will the optimal page replacement algorithm suffer from Belady's anomaly?
Justify your answer. (APR/MAY 2017)[An]
No. Belady's anomaly is the phenomenon in which increasing the number of page frames
results in an increase in the number of page faults for certain memory-access patterns;
it is commonly experienced with the first-in first-out (FIFO) page replacement algorithm.
Optimal (and LRU) replacement are stack algorithms, for which the set of pages kept in n
frames is always a subset of the set kept in n + 1 frames, so they never exhibit Belady's
anomaly.
249. Write about swapping. Let us assume the user process is of size 1 MB and the
backing store is a standard hard disk with a transfer rate of 5 MB per second. Calculate
the transfer time.

i. Swapping: a process can be swapped temporarily out of memory to a
backing store (swap out) and then brought back into memory for
continued execution (swap in).
ii. Let us assume the user process is of size 1 MB and the backing store is a
standard hard disk with a transfer rate of 5 MB per second.
iii. Transfer time = 1000 KB / 5000 KB per second = 1/5 sec = 200 ms.

250. Consider the following page-reference string: [E] (NOV/DEC 2017)
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12.
251. How many page faults and what page-fault ratio would occur for the FIFO page
replacement algorithm, assuming there are four frames?
Every reference is to a new page, so the number of page faults = 12.
The page-fault ratio is 12/12 = 1 (i.e. 100%).
252. Define external fragmentation. [U] (APR/MAY 2018)
External fragmentation occurs when there is enough total free space in memory to satisfy a
process's request, but the request cannot be satisfied because the available memory is not
contiguous. Either the first-fit or the best-fit memory-allocation strategy can cause
external fragmentation.
253. What is the counting based page replacement algorithm? [U] (APR/MAY 2018)

In this algorithm, we keep a counter of the number of references that have been made to
each page. Two schemes are used:
a. Least Frequently Used (LFU) page replacement: the page with the smallest count is replaced.
b. Most Frequently Used (MFU) page replacement: the page with the largest count is replaced,
based on the argument that the page with the smallest count was probably just brought in
and has yet to be used.
254. Consider the following segmentation table.

Segment   Base   Length
0         219    600
1         2300   14
2         90     100
3         1327   580
4         1952   96

What are the physical addresses for the logical addresses 3,400 and 0,110? (APR/MAY
2019)
a. Logical address 3,400: segment 3 has base 1327 and length 580; since 400 < 580 the
reference is legal and the physical address is 1327 + 400 = 1727.
b. Logical address 0,110: segment 0 has base 219 and length 600; since 110 < 600 the
reference is legal and the physical address is 219 + 110 = 329.
(If an offset were greater than or equal to the segment length, it would be an illegal
reference and would trap to the operating system.)
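A minimal sketch in C of the translation just described (the table values come from the question; the function name is illustrative):

#include <stdio.h>

struct segment { int base, length; };

static struct segment table[] = {
    {219, 600}, {2300, 14}, {90, 100}, {1327, 580}, {1952, 96}
};

/* Return the physical address for (seg, offset), or -1 for an illegal reference. */
static int translate(int seg, int offset) {
    if (offset < 0 || offset >= table[seg].length)
        return -1;                        /* offset beyond the segment limit: trap */
    return table[seg].base + offset;
}

int main(void) {
    printf("(3,400) -> %d\n", translate(3, 400));  /* 1327 + 400 = 1727 */
    printf("(0,110) -> %d\n", translate(0, 110));  /* 219  + 110 = 329  */
    return 0;
}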
255. What is thrashing? and how to resolve this problem? (APR/MAY 2019)
In a virtual storage system (an operating system that manages its logical storage or memory
in units called pages), thrashing is a condition in which excessive paging operations are
taking place. A system that is thrashing can be perceived as either a very slow system or one
that has come to a halt. One recommended way to eliminate thrashing is to add more main
memory; thrashing can also be reduced by lowering the degree of multiprogramming or by
adjusting the size of the swap file.
UNIT IV

256. What is a file? [R]


A file is a named collection of related information that is recorded on secondary storage.
A file contains either programs or data. A file has certain "structure" based on its type.
257. List the various file attributes. (APRIL/MAY 2011,
NOV/DEC2012)(MAY/JUNE 2014)(APRIL/MAY 2015) [R]
a. A file has certain other attributes, which vary from one operating system to
another, but typically consist of these:
b. Identifier

c. Name
d. Type
e. Location
f. Size
g. Protection
h. Time
i. Date
j. User identification
258. What are the various file operations? (NOV/DEC 2012, APRIL/MAY 2015) [R]
a. The six basic file operations are:
b. Creating a file
c. Writing a file
d. Reading a file
e. Repositioning within a file
f. Deleting a file
g. Truncating a file
259. What are all the information‟s associated with an open file? [R]
Several pieces of information are associated with an open file which may be:
a. File pointer
b. File open count
c. Disk location of the file
d. Access rights
260. What are the different accessing methods of a file? (APRIL/MAY 2010) [R]
The different types of accessing a file are:
i. Sequential access: information in the file is accessed in order, one record after
another.
ii. Direct access: information in the file can be accessed without any particular order.
iii. Other access methods: building an index for the file, e.g. the indexed sequential
access method (ISAM).
261. What is Directory? [R]
The device directory, or simply the directory, records information such as name, location,
size, and type for all files on that particular partition. The directory can be viewed as a
symbol table that translates file names into their directory entries.
262. What are the operations that can be performed on a directory? [R]
The operations that can be performed on a directory are
a. Search for a file
b. Create a file
c. Delete a file
d. Rename a file
e. List directory
f. Traverse the file system
263. What are the most common schemes for defining the logical
structure ofa directory? [R] (MAY/JUNE 2012)
The most common schemes for defining the logical structure of directory
a. Single-Level Directory
b. Two-level Directory
c. Tree-Structured Directories
d. Acyclic-Graph Directories
e. General Graph Directory
264. Define UFD and MFD. [R]
In the two-level directory structure, each user has her own user file directory (UFD). Each
UFD has a similar structure, but lists only the files of a single user. When a job starts,
the system's master file directory (MFD) is searched. The MFD is indexed by user name or
account number, and each entry points to the UFD for that user.
265. What is a path name? [R]
A pathname is the path from the root through all subdirectories to a specified file.
In a two-level directory structure a user name and a file name define a path name.
266. What are the various layers of a file system? [R]
The file system is composed of many different levels. Each level in the design uses the
features of the lower levels to create new features for use by higher levels:
• Application programs
• Logical file system
• File-organization module
• Basic file system
• I/O control
• Devices
267. What are the functions of virtual file system (VFS)? [R]
a. It separates file-system-generic operations from their implementation defining a
clean VFS interface. It allows transparent access to different types of file systems
mounted locally.
b. VFS is based on a file-representation structure called a vnode, which contains a
numerical designator for a network-wide unique file. The kernel maintains one vnode
structure for each active file or directory.
268. Define seek time and latency time. [R]
The time taken by the head to move to the appropriate cylinder or track is called the seek
time. Once the head is at the right track, it must wait until the desired block rotates under
the read-write head; this delay is the latency time.
269. What are the allocation methods of a disk space? [R]
Methods of allocating disk space which are widely in use are
a. Contiguous allocation
b. Linked allocation
c. Indexed allocation
270. What are the advantages of Contiguous allocation? [R]
The advantages are
a. Supports direct access
b. Supports sequential access
c. Number of disk seeks is minimal.
271. What are the drawbacks of contiguous allocation of disk space? [R]
The disadvantages are
a. Suffers from external fragmentation.
b. Suffers from internal fragmentation.
c. Difficulty in finding space for a new file.
d. File cannot be extended.
e. Size of the file is to be declared in advance.
272. What are the disadvantages of linked allocation? [R]
The disadvantages are
a. Used only for sequential access of files.
b. Direct access is not supported.
c. Memory space required for the pointers.
d. Reliability is compromised if the pointers are lost or damaged
273. What are the advantages of Indexed allocation? [R]
The advantages are
a. No external-fragmentation problems.
b. Solves the size-declaration problems.
c. Supports direct access.
274. How can the index blocks be implemented in the indexed allocation
scheme? [R]
The index block can be implemented as follows:
a. Linked scheme
b. Multilevel scheme
c. Combined scheme
275. What is garbage collection? (MAY /JUNE 2012 ) [R]
i. Garbage Collection (GC) is a form of automatic memory management. The
garbage collector, or just collector, attempts to reclaim garbage, or memory
occupied by objects that are no longer in use by the program.
276. Mention the objectives of File Management System. (APR/MAY 2010) [R]
i. A file management system is the system that an operating system or program uses to
organize and keep track of files. For example, a hierarchical file system is one that
uses directories to organize files into a tree structure.
277. What is the content of a typical file control block? (APR/MAY 2011,
APR/MAY 2010) [R]
• File permissions
• File dates (create, access, write)
• File owner, group, ACL
• File size
• File data blocks
i. A File Control Block (FCB) is a file-system structure in which the state of an
open file is maintained.
278. What are the two types of system directories? (MAY/JUNE 2012) [R]
i. Device directory, describing physical properties of files.
ii. File directory, giving logical properties of the files.
279. What is meant by polling?(MAY/JUNE 2014) [R]
Polling is the process in which the computer repeatedly checks an external device for its
readiness, doing nothing else while it waits. Polling is often used with low-level hardware.
For example, with a printer connected via a parallel port, the computer waits until the
printer has received the previous character before sending the next one; such transfers can
be as small as a single byte. In general, polling is the continuous (or frequent) checking
by a controlling device or process of other devices, processes, queues, etc.
280. State any three disadvantages of placing functionality in a device controller,
rather than in the kernel. (MAY/JUNE 2014) [R]
Three advantages:
• Bugs are less likely to cause an operating-system crash.
• Performance can be improved by utilizing dedicated hardware and hard-coded
algorithms.
• The kernel is simplified by moving algorithms out of it.
Three disadvantages:
• Bugs are harder to fix; a new firmware version or new hardware is needed.
• Improving the algorithms likewise requires a hardware update rather than just a kernel
or device-driver update.
• Embedded algorithms could conflict with an application's use of the device, causing
decreased performance.
281. How free-space is managed using bit vector implementation?[An]
The free-space list is implemented as a bit map or bit vector. Each block is represented
by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0.
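A small sketch (the bit-map value is made up for illustration) of finding the first free block under this convention, where bit i set to 1 means block i is free:

#include <stdio.h>

static int first_free_block(unsigned int map) {
    for (int i = 0; i < 32; i++)
        if (map & (1u << i))
            return i;          /* bit i is 1, so block i is free */
    return -1;                 /* no free block in this word of the bit map */
}

int main(void) {
    unsigned int bitmap = 0x00F0A000;   /* illustrative 32-block bit map */
    printf("first free block: %d\n", first_free_block(bitmap));
    return 0;
}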
282. List the attributes of a file(MAY/JUNE 2014)[R]
Name, Identifier, Type, Location, Size, Protection, Time, Date and User
authentication.
283. What are the information contained in a boot control block and partition
control block? (MAY/JUNE 2014) [R]
Boot control block:
i. Contains the information needed by the system to boot an operating system from that
partition. If the disk does not contain an operating system, this block can be
empty. It is typically the first block of a partition; in UFS, it is called the
boot block.
Partition control block:
ii. Contains partition details, such as the number of blocks in the partition, the size
of the blocks, the free-block count and free-block pointers, and the free-FCB count
and FCB pointers.
284. Define buffering. [R]
A buffer is a memory area that stores data while they are transferred between
two devices or between a device and an application. Buffering is done for three reasons:
• To cope with a speed mismatch between the producer and consumer of a data stream
• To adapt between devices that have different data-transfer sizes
• To support copy semantics for application I/O
285. Define caching. [R]
A cache is a region of fast memory that holds copies of data. Access to the cached copy is
more efficient than access to the original. Caching and buffering are distinct functions, but
sometimes a region of memory can be used for both purposes.
286. Define spooling. [R]
A spool is a buffer that holds output for a device, such as printer, that cannot accept
interleaved data streams. When an application finishes printing, the spooling system
queues the corresponding spool file for output to the printer. The spooling system copies
the queued spool files to the printer one at a time.
287. Define rotational latency and disk bandwidth. (NOV/DEC 2010, MAY/JUNE
2013) [R]
Rotational latency is the additional time waiting for the disk to rotate the desired sector
to the disk head.
Disk bandwidth is the total number of bytes transferred, divided by the time between the
first request for service and the completion of the last transfer.
288. What are the various disk-scheduling algorithms? [R]
The various disk-scheduling algorithms are:
a. First Come First Served (FCFS) scheduling
b. Shortest Seek Time First (SSTF) scheduling
c. SCAN scheduling
d. C-SCAN scheduling
289. What is the need for disk scheduling? (NOV/DEC 2012) [R]
In operating systems, seek time is very important. Since all device requests are linked in queues,
the seek time is increased causing the system to slow down.
Disk Scheduling Algorithms are used to reduce the total seek time of any request.

290. What is low-level formatting? [R]


Before a disk can store data, it must be divided into sectors that the disk controller can read
and write. This process is called low-level formatting or physical formatting. Low-level
formatting fills the disk with a special data structure for each sector. The data structure for
a sector consists of a header, a data area, and a trailer.
291. What is the use of boot block? [R]
For a computer to start running when it is powered up or rebooted, it needs an initial
program to run. This bootstrap program tends to be simple: it finds the operating system
on the disk, loads that kernel into memory, and jumps to an initial address to begin
operating-system execution. The full bootstrap program is stored in a partition called the
boot blocks, at a fixed location on the disk. A disk that has a boot partition is called a
boot disk or system disk.
292. What is sector sparing? [R]
Low-level formatting also sets aside spare sectors not visible to the operating system. The
controller can be told to replace each bad sector logically with one of the spare sectors.
This scheme is known as sector sparing or forwarding.

293. What is seek time? (MAY /JUNE 2012) [R]


Seek time: the time to position heads over a cylinder (~8 msec on average).
294. What are storage area networks? (April/May 2011) [R]
A storage area network (SAN) is a dedicated network that provides access to consolidated,
block level data storage. SANs are primarily used to make storage devices, such as disk
arrays, tape libraries, and optical jukeboxes, accessible to servers so that the devices appear
like locally attached devices to the operating system.
295. Write a brief note on RAID. (MAY/JUNE 2013) [R]
RAID (redundant array of independent disks; originally redundant array of inexpensive
disks) is a way of storing the same data in different places (thus, redundantly)on multiple
hard disks. By placing data on multiple disks, I/O (input/output) operations can overlap in
a balanced way, improving performance. Since multiple disks increase the mean time
between failures (MTBF), storing data redundantly also increases fault tolerance.
296. What Characteristics determine the disk access speed? (MAY /JUNE 2012)
[R]
a. Seek time

b. Rotational latency
c. Command processing time
d. Settle time
297. Give the importance of Swap space Management. [R] (NOV/DEC 2012,
APR/MAY2010, NOV/DEC 2010)
Swap-space management: swap-space management is a low-level task of the operating
system. The main goal for the design and implementation of swap space is to provide the
best throughput for the virtual memory system.
Swap-space use: the operating system needs to release sufficient main memory to bring
in a process that is ready to execute. The operating system uses swap space in various ways.
Paging systems may simply store pages that have been pushed out of main memory. The UNIX
operating system allows the use of multiple swap spaces; these are usually put on separate
disks, so the load placed on the I/O system by paging and swapping can be spread over the
system's I/O devices.
Swap-space location: swap space can reside in two places:
• A separate disk partition
• The normal file system
Swap-space location: Swap space can reside in two places:
 Separate disk partition
 Normal file system
298. Write three basic functions which are provided by the hardware clocks and
timers. (APRIL/MAY 2011) [R]
Hardware clocks and timers provide three basic functions:
a. Give the current time.
b. Give the elapsed time.
c. Set a timer to trigger an operation at time T.
(In an RTOS such as µC/OS these are exposed through calls such as OSTimeGet(), OSTimeSet()
and OSTickInit().)
299. What are the advantages of Linked allocation? [R]
The advantages are
a. No external fragmentation.
b. Size of the file does not need to be declared.
300. Define FAT(NOV/DEC 2014) [R]
FAT is a much older file-system format that is understood by many systems besides
Windows, such as the software running on cameras. A disadvantage is that the FAT file
system does not restrict file access to authorized users. The only solution for securing
data with FAT is to run an application that encrypts the data before storing it on the
file system.
301. What is Relative block number? (NOV/DEC 2014) [R]
A relative block number is an index relative to the beginning of a file. Thus the first
relative block of the file is 0, the next is 1, and so on.
302. What is double buffering? (NOV/DEC 2014) [R]
OS can use various kinds of buffering:
 Single buffering — OS assigns a system buffer to the user request
 Double buffering — process consumes from one buffer while system fills the next
 Circular buffers — most useful for burst I/O
303. Draw the diagram for the interrupt-driven I/O cycle. (NOV/DEC 2014) [Ap]
(The diagram shows the following cycle: the device driver initiates the I/O operation; the
device controller performs the transfer and raises an interrupt when it finishes; the CPU,
which checks for interrupts after each instruction, transfers control to the interrupt
handler; the handler processes the data and returns from the interrupt; the CPU then
resumes the interrupted processing.)
304. What is HSM? Where it is used? [R]


Hierarchical storage management (HSM) is a data-storage technique which automatically
moves data between high-cost and low-cost storage media. HSM systems exist because
high-speed storage devices, such as hard disk drive arrays, are more expensive (per byte
stored) than slower devices, such as optical discs and magnetic tape drives.

305. How does DMA increase system concurrency? (May/Jun 2016)[An]


DMA increases system concurrency by allowing the CPU to perform tasks while the DMA
system transfers data via the system and memory buses. Hardware design is complicated
because the DMA controller must be integrated into the system and the system must allow
the DMA controller to be a bus master.
306. Why rotational latency is usually not considered in disk scheduling? (May/Jun
2016)[An]
Most disks do not export their rotational position information to the host. Even if they did,
the time for this information to reach the scheduler would be subject to imprecision and
the time consumed by the scheduler is variable, so the rotational position information
would become incorrect. Further, the disk requests are usually given in terms of logical
block numbers, and the mapping between logical blocks and physical locations is very
complex.
307. Define C-SCAN scheduling (NOV/DEC 2016)[R]

C-SCAN (circular SCAN) scheduling moves the disk head from one end of the disk to the
other, servicing requests along the way; when the head reaches the other end, it
immediately returns to the beginning of the disk without servicing any requests on the
return trip. It essentially treats the cylinders as a circular list that wraps around from
the last cylinder to the first one.
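A sketch of the C-SCAN service order for a hypothetical request queue (the queue and head position below are the classic textbook example, assumed here only for illustration): requests at or beyond the head are served in increasing order, then the head wraps around and serves the remaining requests.

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int req[] = {98, 183, 37, 122, 14, 124, 65, 67};   /* pending cylinder requests */
    int n = sizeof req / sizeof req[0], head = 53;
    qsort(req, n, sizeof req[0], cmp);                 /* sort by cylinder number */
    printf("C-SCAN order from head %d:", head);
    for (int i = 0; i < n; i++)                        /* sweep toward the last cylinder */
        if (req[i] >= head) printf(" %d", req[i]);
    for (int i = 0; i < n; i++)                        /* wrap around, finish the rest */
        if (req[i] < head) printf(" %d", req[i]);
    printf("\n");
    return 0;
}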

308. Why it is important to scale up system –bus and device speeds as CPU speed
increases? (NOV/DEC 2016)[An]
Consider a system which performs 50% I/O and 50% computation. Doubling only the CPU
performance of this system would increase total system performance by only 50%;
doubling both system aspects would increase performance by 100%. Generally, it is
important to remove the current system bottleneck, and to increase overall system
performance, rather than blindly increasing the performance of individual system
components.
309. Suppose that the disk rotates at 7200 rpm. What is the average rotational
latency of the disk drive? (APR/MAY 2017) [E]
7200 rpm is 120 rotations per second, so one full rotation takes 1/120 s ≈ 8.33 ms. The
average rotational latency is half a rotation, i.e. about 4.17 ms. (In general, average
disk-access time = average seek time + average rotational latency + transfer time +
controller overhead.)

310. Differentiate between file and directory (NOV/DEC 2016) [An]


A file is any kind of computer document: a named collection of related information
recorded on secondary storage. A directory is a container, like a folder or filing
cabinet, that holds files and other directories and records their names and attributes.
311. Suppose that the disk rotates at 7200 rpm. [E] (NOV/DEC 2017)
a. What is the average rotational latency of the disk drive?
b. What seek distance can be covered in that time?
a. 7200 rpm gives 120 rotations per second. Therefore a full rotation takes 8.33 ms and
the average rotational latency (half a rotation) takes 4.167 ms.
b. Taking the seek time to be t = 0.7561 + 0.2439 × sqrt(L) ms, where L is the seek
distance in cylinders, and setting t = 4.167 ms gives sqrt(L) = (4.167 − 0.7561)/0.2439 ≈ 13.99,
so L ≈ 195.58.
Therefore we can seek over about 195 tracks (about 4% of the disk) during an average
rotational latency.
312. Enlist different types of directory structure. [R](NOV/DEC 2017)
There are many types of directory structure in Operating System. They are as follows
 Single Level Directory
 Two Level Directory
 Tree Structured Directory
 Acyclic Graph Directory
 General Graph Directory
313. State the typical bad-sector transactions. [R] (APR/MAY 2018)
Typical bad-sector transactions might be as follows:
a. The operating system tries to read a logical block.
b. The controller calculates the ECC and finds that the sector is bad. It reports this
finding to the operating system.
c. The next time the system is rebooted, a special command is run to tell the
SCSI controller to replace the bad sector with a spare.
d. After that, whenever the system requests that logical block, the request is translated
into the replacement sector's address by the controller.
314. What is the advantage of bit vector approach in free space management ? [R]
(APR/MAY 2018)
Advantages of bit vector approach:

a. Relatively simple
b. Efficient to find the first free blocks or n consecutive free blocks on the disk
315. Write short notes on file system mounting. (APR/MAY 2019)
The mount procedure is straightforward. The operating system is given the name of the
device and the mount point, the location within the file structure where the file system is
to be attached. Some operating systems require that a file-system type be provided, while
others inspect the structures of the device and determine the type of file system. Typically,
a mount point is an empty directory.
316. What is SSD? (APR/MAY 2019)
A solid-state disk (SSD) is a storage device built from nonvolatile electronic memory
(typically flash) with no moving parts. Because it has no seek time or rotational latency,
it provides much faster access than a magnetic disk while still being accessed through the
ordinary file-system interface; such devices (and, similarly, RAM disks built from main
memory) are used for high-speed storage.

UNIT V

317. What is virtualization? [R]


Virtualization, in computing, refers to the act of creating a virtual (rather than actual)
version of something, including but not limited to a virtual computer-hardware platform,
operating system (OS), storage device, or computer-network resource. It is an abstraction
that hides low-level hardware details from system and application programs.
318. Why Virtualization is required? (Nov/Dec 2015) [An]
Virtualization reduces the number of physical servers, reducing the energy required to
power and cool them, and it saves time: with fewer servers, less time is spent on the
manual tasks required for server maintenance. It is also much faster to deploy a virtual
machine than it is to deploy a new physical server.
319. Write the purpose of using virtualization.[R] (NOV/DEC 2017)
Operating system virtualization provides application-transparent virtualization to users by
decoupling applications from the OS. The OS virtualization technique offers granular control at
the application level by facilitating the transparent migration of individual applications. OS
virtualization can also be used to migrate critical applications to another running operating system
instance.

320. Which layer of iOS contains fundamental system services for apps?
The Core Services layer contains the fundamental system services that all applications use.
It provides an abstraction over the services provided in the Core OS layer and gives
fundamental access to iOS services; it consists of the following components:
 Collections
 Address Book
 Networking
 File Access
 SQLite
 Core Location
 Net Services
 Threading
 Preferences
 URL Utilities

Q1.What is iOS SDK ?


Ans. The iOS Software Development Kit (SDK) contains the tools and interfaces needed to
develop, install, run and test native apps that appear on an iOS device's Home screen.
Q.2 What is media layer in iOS ?
Ans. The media layer contains technologies to implement multimedia functions, such as graphics,
Audio, Video, and Airplay. It contains frameworks for audio/video codecs and OpenGL features.
Q.3 Explain iOS architecture.
Ans. The iOS architecture is written in the Objective-C language and comprises four layers:
Cocoa Touch, Media, Core Services and Core OS. System interfaces are provided by frameworks,
which contain libraries and resources such as headers and images.
Q.4 List frameworks used in the Core OS layer.
Ans. The frameworks are the Accelerate framework, Core Bluetooth framework, External
Accessory framework, Generic Security Services framework, Security framework, and the
System component with 64-bit support.
Q.5 What is Android ?
Ans. Android is a Linux-based operating system designed primarily for touch-screen mobile
devices such as smartphones and tablet computers. The hardware that supports Android
software is based on the ARM architecture platform. Android is an open-source operating
system, meaning it is free and anyone can use it.
Q.6 What is Dalvik Virtual Machine ?
Ans. The Dalvik Virtual Machine (DVM) is an android virtual machine optimized for mobile
devices. It optimizes the virtual machine for memory, battery life and performance.
