
UNIT-II

PART–A
Q.1 Define process. What is the information maintained in a PCB ?
Ans. A process is simply a program in execution, i.e., an instance of a program being executed. The PCB maintains a pointer, the process state, process number, CPU registers, program counter, memory allocation information, etc.
Q.2 Define task control block.
Ans. TCB (task control block) is another name for the PCB.
Q.3 What is PCB? Specify the information maintained in it. AU CSE/IT:
Dec.-12
Ans. Each process is represented in the operating system by a process control
block (PCB). The PCB contains information such as the process state, program counter,
CPU registers, accounting information, etc.
Q.4 What is independent process ?
Ans. An independent process cannot affect or be affected by the execution of another
process.
Q.5 Name and draw five different process states with proper definition. AU:
Dec.-17
Ans. The process states are new, running, waiting, ready and terminated. Fig. 2.16.1
shows the process state diagram.

Q.6 Define context switching.


Ans. Switching the CPU to another process requires saving the state of the old
process and loading the saved state for the new process. This task is known as
context switch.
Q.7 What are the reasons for terminating execution of child process ?
Ans. Parent may terminate execution of children processes via abort system call
for a variety of reasons, such as:
1. The child has exceeded its allocated resources.
2. The task assigned to the child is no longer required.
3. The parent is exiting and the operating system does not allow a child to continue if
its parent terminates.
Q.8 What is ready queue ?
Ans. The processes that are residing in main memory and are ready and waiting to
execute are kept on a list called the ready queue.
Q.9 List out the data fields associated with process control blocks.
Ans. Data fields associated with a process control block are CPU registers, program
counter, process state, memory-management information, I/O status information,
etc.
Q.10 What are the properties of communication link ?
Ans. Properties of communication link
1. Links are established automatically.
2. A link is associated with exactly one pair of communicating processes.
3. Between each pair there exists exactly one link.
4. The link may be unidirectional, but is usually bidirectional.
Q.11 What is socket ?
Ans. A socket is defined as an endpoint for communication.
Q.12 What is non-preemptive scheduling ?
Ans. Non-preemptive scheduling ensures that a process relinquishes control of the
CPU only when it finishes its current CPU burst.
Q.13 Differentiate preemptive and non-preemptive scheduling.
Ans. In preemptive scheduling, the CPU can be taken away from a running process before its CPU burst completes, for example when a higher-priority process arrives or the time quantum expires. In non-preemptive scheduling, a process keeps the CPU until it terminates or switches to the waiting state.
Q.14 What do you mean by short term scheduler ?
Ans. The short-term scheduler, also known as the dispatcher, executes most frequently
and makes the finest-grained decision of which process should execute next. This
scheduler is invoked whenever a scheduling event occurs.
Q.15 Which are the criteria used for CPU scheduling ?
Ans. Criteria used for CPU scheduling are CPU utilization, throughput, turnaround
time, waiting time and response time.
Q.16 Explain why two level scheduling is commonly used.
Ans. It provides a hybrid solution to the problem of providing good system
utilization and good user service simultaneously.
Q.17 Why is it important for the scheduler to distinguish I/O-bound programs
from CPU-bound programs?
Ans. I/O-bound programs have the property of performing only a small amount of
computation before performing I/O. Such programs typically do not use up their
entire CPU quantum. CPU-bound programs, on the other hand, use their entire
quantum without performing any blocking I/O operations. Consequently, one could
make better use of the computer's resources by giving higher priority to I/O-bound
programs, allowing them to execute ahead of the CPU-bound programs.
Q.18 What is response time?
Ans. Response time is the amount of time from when a request is submitted until the
first response is produced, not the time it takes to output that response.
Q.19 Define waiting time.
Ans. Amount of time a process has been waiting in the ready queue.
Q.20 Define scheduling algorithm ?
Ans. In multiprogramming systems, whenever two or more processes are simultaneously
in the ready state, a choice has to be made which process to run next. The part of the OS that
makes the choice is called the scheduler and the algorithm it uses is called the
scheduling algorithm.

Q.21 Define the term 'dispatch latency'.

Ans. Dispatch latency is the time it takes for the dispatcher to stop one process and start another running.
Q.22 What is preemptive priority method?
Ans. A preemptive priority scheduler will preempt the CPU if the priority of the newly
arrived process is higher than the priority of the currently running process.
Q.23 What is medium term scheduling ?
Ans. Medium-term scheduling is used, especially in time-sharing systems, as an
intermediate scheduling level. A swapping scheme is implemented to remove
partially run programs from memory and reinstate them later to continue where
they left off.
Q.24 What is preemptive scheduling ?
Ans. Preemptive scheduling can preempt a process which is utilizing the CPU in
between its execution and give the CPU to another process.
Q.25 What is the difference between long-term scheduling and short-term
scheduling ?
Ans. Long term scheduling adds jobs to the ready queue from the job queue. Short
term scheduling dispatches jobs from the ready queue to the running state.
Q.26 List out any four scheduling criteria.
Ans. Response time, throughput, waiting time and turn around time.
Q.27 Define the term 'Dispatch latency'.
Ans. Dispatch latency is the time it takes for the dispatcher to stop one process and
start another running.
Q.28 Distinguish between CPU-bounded and I/O bounded processes.
Ans. A CPU-bound process spends the majority of its time doing computation and produces long CPU bursts, whereas an I/O-bound process spends the majority of its time performing I/O and produces many short CPU bursts.
Q.29 Define priority inversion problem.
Ans. A higher-priority process waiting for a lower-priority process to finish (for example,
to release a resource it needs) is known as the priority inversion problem.
Q.30 What advantage is there in having different time-quantum sizes on
different levels of a multilevel queuing system?
Ans. Processes that need more frequent servicing, for instance interactive
processes such as editors, can be in a queue with a small time quantum. Processes
with no need for frequent servicing can be in a queue with a larger quantum,
requiring fewer context switches to complete the processing, and thus making
more efficient use of the computer.
Q.31 How does real-time scheduling differ from normal scheduling ?
Ans. Normal scheduling provides no guarantee on when a critical process will be
scheduled; it guarantees only that the process will be given preference over non-
critical processes. Real-time systems have stricter requirements. A task must be
serviced by its deadline; service after the deadline has expired is the same as no
service at all.
Q.32 What is Shortest-Remaining-Time-First (SRTF) ?
Ans. If a new process arrives with a CPU burst length less than the remaining time of
the currently executing process, the current process is preempted. This scheme is known
as Shortest-Remaining-Time-First.
Q.33 What is round robin CPU scheduling ?
Ans. Each process gets a small unit of CPU time (time quantum). After this time
has elapsed, the process is preempted and added to the end of the ready queue.
Q.34 What is meant by starvation in operating system?
Ans. Starvation is a resource management problem where a process does not get
the resources (CPU) it needs for a long time because the resources are being
allocated to other processes.
Q.35 What is aging?
Ans. Aging is a technique to avoid starvation in a scheduling system. It works by
adding an aging factor to the priority of each request. The aging factor must
increase the request's priority as time passes and must ensure that a request will
eventually become the highest-priority request.
Q.36 How to solve the starvation problem in priority CPU scheduling ?
Ans. Aging - as time progresses, increase the priority of the process, so that eventually
the process becomes the highest-priority process and gains the CPU; i.e., the longer a
process waits in the ready queue, the higher its priority becomes.
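A minimal sketch of how such an aging pass could look, assuming a hypothetical ready-queue array of PCBs with an integer priority field where a larger value means higher priority; this is an illustration, not the code of any particular scheduler.

#include <stddef.h>

struct pcb {
    int pid;
    int priority;       /* larger value = higher priority            */
    int waiting_ticks;  /* time already spent in the ready queue     */
};

/* Hypothetical aging pass, called periodically by the scheduler:
 * every process still waiting in the ready queue gets its priority
 * raised a little, so no process can starve indefinitely. */
void age_ready_queue(struct pcb *ready, size_t n, int max_priority)
{
    for (size_t i = 0; i < n; i++) {
        ready[i].waiting_ticks++;
        if (ready[i].priority < max_priority)
            ready[i].priority++;   /* longer wait -> higher priority */
    }
}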
Q.37 What is convoy effect?
Ans. A convoy effect happens when a set of processes need to use a resource for a
short time, and one process holds the resource for a long time, blocking all of the
other processes. Essentially it causes poor utilization of the other resources in the
system.
Q.38 How can starvation / indefinite blocking of processes be avoided in
priority scheduling ?
Ans. A solution to the problem of indefinite blockage of processes is aging. Aging
is a technique of gradually increasing the priority of processes that wait in the
system for a long time.
Q.39 "Priority inversion is a condition that occurs in real time systems where a
low priority process is starved because higher priority processes have gained
hold of the CPU" - Comment on this statement. AU CSE: May-17
Ans. A low-priority thread always starts on a shadow version of the shared
resource, while the original resource remains unchanged. When a high-priority thread
needs a resource engaged by a low-priority thread, the low-priority thread is
preempted, the original resource is restored and the high-priority thread is allowed
to use the original resource.
Q.40 Provide two programming examples in which multithreading provides
better performance than a single-threaded solution.
Ans. A Web server that services each request in a separate thread.
A parallelized application such as matrix multiplication where different parts of the
matrix may be worked on in parallel.
An interactive GUI program such as a debugger where one thread is used to monitor
user input, another thread represents the running application, and a third
thread monitors performance.
Q.41 State what does a thread share with peer threads.
Ans. Threads share the memory and resources of the process to which they belong.
Q.42 Define a thread. State the major advantage of threads.
Ans. A thread is a flow of execution through the process's code, with its own
program counter, system registers and stack.
Advantages: 1. Minimizes context-switching time. 2. Efficient communication.
Q.43 Can a multithreaded solution using multiple user-level threads achieve
better performance on a multiprocessor system than on a single processor
system ?
Ans. A multithreaded system comprising multiple user-level threads cannot
make use of the different processors in a multiprocessor system simultaneously.
The operating system sees only a single process and will not schedule the different
threads of the process on separate processors. Consequently, there is no
performance benefit associated with executing multiple user-level threads on a
multiprocessor system.
Q.44 What are the differences between user-level threads and kernel-level
threads ? (Refer section 2.8.3)

Q.45 What are the benefits of multithreading?


Ans. Benefits of multithreading are responsiveness, resource sharing, economy and
utilization of multiprocessor architectures.
Q.46 Why is a thread called a lightweight process ?
Ans. A thread is lightweight, taking fewer resources than a process. It is called a
lightweight process to emphasize that a thread is like a process but is more
efficient, uses fewer resources, and shares the address space with the other threads
of its process.
Q.47 Name one situation where threaded programming is normally used ?
Ans. Threaded programming is used when a program must carry out
multiple tasks at the same time. A good example of this is a program
running with a GUI.
Q.48 Describe the actions taken by a thread library to context switch between
user-level threads.
Ans. Context switching between user threads is quite similar to switching between
kernel threads, although it depends on the threads library and how it maps
user threads to kernel threads. In general, context switching between user threads
involves taking a user thread off its LWP and replacing it with another thread. This
act typically involves saving and restoring the state of the registers.
Q.49 What is a thread pool ?
Ans. A thread pool is a collection of worker threads that efficiently execute
asynchronous callbacks on behalf of the application. The thread pool is primarily
used to reduce the number of application threads and provide management of the
worker threads.
Q.50 What is deferred cancellation in thread ?
Ans. The target thread periodically checks whether it should terminate, allowing it
an opportunity to terminate itself in an orderly fashion. With deferred cancellation,
one thread indicates that a target thread is to be cancelled, but cancellation occurs
only after the target thread has checked a flag to determine if it should be cancelled
or not. This allows a thread to check whether it should be cancelled at a point when
it can be cancelled safely.
Q.51 What is Pthread ?
Ans. Pthreads refers to the POSIX standard defining an API for thread creation and
synchronization. This is a specification for thread behavior, not an implementation.
Operating system designers may implement the specification in any way they
wish.
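A minimal Pthreads sketch, assuming a made-up worker function, showing the basic pattern of pthread_create followed by pthread_join (compile with -pthread):

#include <pthread.h>
#include <stdio.h>

/* Worker run by the new thread; the argument is just an int id. */
static void *worker(void *arg)
{
    int id = *(int *)arg;
    printf("thread %d running\n", id);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    int id = 1;

    if (pthread_create(&tid, NULL, worker, &id) != 0)
        return 1;                /* thread creation failed           */
    pthread_join(tid, NULL);     /* wait for the thread to terminate */
    return 0;
}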
Q.52 What is thread cancellation ?
Ans. Under normal circumstances, a thread terminates when it exits normally,
either by returning from its thread function or by calling pthread_exit. However, it
is possible for a thread to request that another thread terminate. This is called
cancelling a thread.
Q.53 List the benefits of multithreading.
Ans. Benefits of multithreading:
It takes less time to create a thread than a new process.
It takes less time to terminate a thread than a process.
Q.54 Under what circumstances are user-level threads better than kernel-level
threads?
Ans. User-level threads are much faster to switch between, as there is no kernel context
switch; further, a problem-domain-dependent algorithm can be used to schedule
among them. CPU-bound tasks with interdependent computations, or a task that
will switch among threads often, might best be handled by user-level threads.
Q.55 What resources are required to create threads?
Ans. Thread is smaller than a process, so thread creation typically uses fewer
resources than process creation.
Creating either a user or kernel thread involves allocating a small data structure to
hold a register set, stack, and priority.
Q.56 Differentiate single threaded and multi-threaded processes.
Ans. Single-threading is the processing of one command at a time; when the single
thread is paused, the whole process waits until it resumes. In multithreaded
processes, threads can be distributed over a series of processors to scale; when one
thread is paused for some reason, the other threads continue to run as normal.
Q.57 Give a programming example in which multithreading does not provide
better performance than a single-threaded solution.
Ans. Any kind of sequential program is not a good candidate to be threaded. An
example of this is a program that calculates an individual tax return.
Another example is a "shell" program such as the C-shell or Korn shell. Such a
program must closely monitor its own working space such as open files,
environment variables, and current working directory.
Q.58 Define mutual exclusion.
Ans. If a collection of processes share a resource or collection of resources, then
often mutual exclusion is required to prevent interference and ensure consistency
when accessing the resources.
Q.59 How the lock variable can be used to introduce mutual exclusion?
Ans. We consider a single, shared lock variable, initially 0. When a process
wants to enter its critical section, it first tests the lock. If the lock is 0, the process
first sets it to 1 and then enters the critical section. If the lock is already 1, the
process just waits until the lock variable becomes 0. Thus, 0 means that no process
is in its critical section, and 1 means hold your horses - some process is in its critical
section.
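The idea can be sketched as below (the variable and function names are hypothetical). Note that, as written, the test and the set are two separate steps, so two processes can still slip into the critical section together; making the test-and-set indivisible is exactly the hardware support discussed in Q.60 and Q.92.

int lock = 0;                  /* 0 = free, 1 = some process is inside      */

void enter_critical_section(void)
{
    while (lock == 1)
        ;                      /* busy wait until the lock becomes 0        */
    lock = 1;                  /* NOT atomic with the test above: race here */
}

void leave_critical_section(void)
{
    lock = 0;                  /* release the lock                          */
}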
Q.60 What is the hardware feature provided in order to perform mutual
exclusion operation indivisibly ?
Ans. Hardware features can make any programming task easier and improve
system efficiency. They provide special hardware instructions that allow the user to
test and modify the content of a word indivisibly.
Q.61 Discuss why implementing synchronization primitives by disabling
interrupts is not appropriate in a single-processor system if the
synchronization primitives are to be used in user-level programs.
Ans. If a user level program is given the ability to disable interrupts, then it can
disable the timer interrupt and prevent context switching from taking place,
thereby allowing it to use the processor without letting other processes execute.
Q.62 What is race condition ?
Ans. A race condition is a situation where two or more processes access shared
data concurrently and the final value of the shared data depends on the timing of their execution.
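A small Pthreads sketch that exhibits a race condition: two threads increment a shared counter with no synchronization, so the final value depends on the interleaving and is usually less than the expected 2,000,000. The names and iteration counts are illustrative only.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;           /* shared data, unprotected          */

static void *increment(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                 /* read-modify-write is not atomic   */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final counter = %ld\n", counter);   /* timing-dependent result */
    return 0;
}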
Q.63 Define entry section and exit section.
Ans. Each process must request permission to enter its critical section. The section
of the code implementing this request is the entry section. The critical section is
followed by an exit section. The remaining code is the remainder section.
Q.64 Elucidate mutex locks with its procedure.
Ans. A mutex lock is a software tool used to solve the critical-section problem. A mutex
lock has a boolean variable 'available' whose value indicates whether the lock is available
or not. If the lock is available, a call to acquire() succeeds, and the lock is then
considered unavailable.
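A minimal sketch of the acquire/release pattern using a POSIX mutex standing in for the abstract acquire() and release() of the text; the shared balance variable is hypothetical.

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int balance = 0;            /* shared data protected by the mutex */

void deposit(int amount)
{
    pthread_mutex_lock(&lock);     /* acquire: blocks while unavailable  */
    balance += amount;             /* critical section                   */
    pthread_mutex_unlock(&lock);   /* release: lock is available again   */
}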
Q.65 Name any two file system objects that are neither files nor directories
and what the advantage of doing so is.
Ans. Semaphores and monitors are such system objects. The advantage is that they
help avoid the critical-section problem.
Q.66 What is a binary semaphore?
Ans. Binary semaphore is a semaphore with an integer value that can range only
between 0 and 1.
Q.67 What is semaphore? Mention its importance in operating systems.
Ans. Semaphore is an integer variable. It is a synchronization tool used to solve
critical section problem. The various hardware based solutions to the critical
section problem are complicated for application programmers to use.
Q.68 What is the meaning of the term busy waiting ?
Ans. Busy waiting means a process waits by executing a tight loop to check the
status/value of a variable.
Q.69 What is bounded waiting ?
Ans. After a process has made a request to enter its critical section and before it is
granted permission to enter, there exists a bound on the number of times that
other processes are allowed to enter their critical sections.
Q.70 Why can't you use a test and set instruction in place of a binary
semaphore ?
Ans. A binary semaphore requires either a busy wait or a blocking wait, semantics
not provided directly in the Test and Set. The advantage of a binary semaphore is
that it does not require an arbitrary length queue of processes waiting on the
semaphore.
Q.71 What is the concept behind strong semaphore and spinlock ?
Ans. Semaphores can be implemented in user applications and in the kernel. A semaphore
in which the process that has been blocked the longest is released from the queue first is
called a strong semaphore.
Using a simple lock variable, the process synchronization problem is not solved. To
avoid this, a spinlock is used. A lock that uses busy waiting is called a spinlock.
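A spinlock can be sketched with C11's atomic_flag, whose test-and-set operation is indivisible; the busy-wait loop is the "spin". The function names are assumed for illustration.

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

void spin_lock(void)
{
    /* atomically set the flag and get its old value; keep spinning
     * while another thread already holds the lock (old value true). */
    while (atomic_flag_test_and_set(&lock))
        ;                          /* busy waiting - the "spin"       */
}

void spin_unlock(void)
{
    atomic_flag_clear(&lock);      /* release the lock                */
}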
Q.72 What is bounded buffer problem ?
Ans. The bounded-buffer producer/consumer problem assumes a fixed buffer size,
i.e. a finite number of slots is available. The requirements are to suspend the producers
when the buffer is full, to suspend the consumers when the buffer is empty, and to
make sure that only one process at a time manipulates the buffer so there are no race
conditions or lost updates.
Q.73 State the assumption behind the bounded buffer producer consumer
problem.
Ans. Assumption: it is assumed that the pool consists of 'n' buffers, each capable of
holding one item. The mutex semaphore provides mutual exclusion for accesses to
the buffer pool and is initialized to the value 1.
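A condensed producer/consumer sketch with POSIX semaphores, following the assumption above: n slots, 'empty' initialized to n, 'full' to 0 and 'mutex' to 1. The buffer contents and slot count are simplified for illustration.

#include <semaphore.h>

#define N 8                        /* n buffer slots (assumed)           */

static int   buffer[N];
static int   in = 0, out = 0;
static sem_t empty, full, mutex;

void init_buffer(void)
{
    sem_init(&empty, 0, N);        /* N empty slots initially            */
    sem_init(&full,  0, 0);        /* no full slots initially            */
    sem_init(&mutex, 0, 1);        /* mutual exclusion on the pool       */
}

void producer(int item)
{
    sem_wait(&empty);              /* wait for a free slot               */
    sem_wait(&mutex);              /* enter critical section             */
    buffer[in] = item;
    in = (in + 1) % N;
    sem_post(&mutex);              /* leave critical section             */
    sem_post(&full);               /* one more filled slot               */
}

int consumer(void)
{
    sem_wait(&full);               /* wait for a filled slot             */
    sem_wait(&mutex);
    int item = buffer[out];
    out = (out + 1) % N;
    sem_post(&mutex);
    sem_post(&empty);              /* one more free slot                 */
    return item;
}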
Q.74 What is a critical section and what requirements must a solution to the
critical section problem satisfy ?
Ans. Consider a system consisting of several processes, each having a segment of
code called a critical section, in which the process may be changing common
variables, updating tables, etc. The important feature of the system is that when
one process is executing its critical section, no other process is to be allowed to
execute its critical section. Execution of the critical section is mutually exclusive in
time.
A solution to the critical section problem must satisfy the following three
requirements: 1. Mutual exclusion 2. Progress 3. Bounded waiting
Q.75 Define 'monitor'. What does it consist of? AU: CSE/IT: Dec.-11
Ans. Monitor is a highly structured programming language construct. It consists of
private variables and private procedures that can only be used within a monitor.
Q.76 Explain the use of monitors.
Ans. Use of monitors:
a) It provides a mutual exclusion facility.
b) A monitor supports synchronization by the use of condition variables.
c) A shared data structure can be protected by placing it in a monitor.
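C has no monitor construct, but the same discipline can be approximated with a mutex plus a condition variable; the sketch below guards a hypothetical shared count that a consumer waits on until it becomes positive.

#include <pthread.h>

static pthread_mutex_t m       = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonzero = PTHREAD_COND_INITIALIZER;
static int count = 0;              /* shared data "inside the monitor"     */

void monitor_put(void)             /* analogue of a monitor procedure      */
{
    pthread_mutex_lock(&m);        /* mutual exclusion on entry            */
    count++;
    pthread_cond_signal(&nonzero); /* like signalling a condition variable */
    pthread_mutex_unlock(&m);
}

void monitor_get(void)
{
    pthread_mutex_lock(&m);
    while (count == 0)
        pthread_cond_wait(&nonzero, &m);  /* releases m while waiting      */
    count--;
    pthread_mutex_unlock(&m);
}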
Q.77 Give the queueing diagram representation of process scheduling.
Ans. Refer Fig. 2.2.2.
Q.78 What are kernel threads? AU: May-22
Ans. Kernel threads are handled by the operating system directly and the thread
management is done by the kernel. The context information for the process, as well
as for the process's threads, is all managed by the kernel.

80. Compare and contrast single-threaded and multi-threaded processes.

Ans: Single-threading is the processing of one command/process at a time, whereas
multithreading is a widespread programming and execution model that
allows multiple threads to exist within the context of one process. These threads
share the process's resources, but are able to execute independently.
81. Distinguish between CPU-bounded and I/O-bounded processes. (Nov/Dec 2016)

Ans:

A CPU-bound process spends the majority of its time simply using the CPU (doing
calculations). An I/O-bound process spends the majority of its time in input/output-related
operations.

82. What resources are required to create threads?

Ans: When a thread is created, it does not require any new resources to
execute. The thread shares the resources of the process to which it belongs and it
requires only a small data structure to hold a register set, stack, and priority.

83. What is a thread?

Ans: A thread otherwise called a lightweight process (LWP) is a basic unit of CPU
utilization, it comprises of a thread id, a program counter, a register set and a stack.
It shares with other threads belonging to the same process its code section, data
section, and operating system resources such as open files and signals.

84. What are the benefits of multithreaded programming?

Ans: The benefits of multithreaded programming can be broken down into four
major categories:

• Responsiveness

• Resource sharing

• Economy

• Utilization of multiprocessor architectures.

85. What is the use of fork and exec system calls?

Ans: Fork is a system call by which a new process is created. Exec is also a
system call, which is used after a fork by one of the two processes to replace the
process's memory space with a new program.
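A minimal sketch of the fork-then-exec pattern: the child replaces its memory image with a new program (here ls, chosen only for illustration) while the parent waits for it to finish.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();            /* create a new (child) process       */

    if (pid < 0) {
        perror("fork");
        exit(1);
    } else if (pid == 0) {
        /* child: replace its memory space with a new program            */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");          /* reached only if exec fails         */
        exit(1);
    } else {
        wait(NULL);                /* parent: wait for the child         */
        printf("child complete\n");
    }
    return 0;
}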

86. Define thread cancellation and target thread.

Ans: The thread cancellation is the task of terminating a thread before it has
completed. A thread that is to be cancelled is often referred to as the target thread.
For example, if multiple threads are concurrently searching through a database and
one thread returns the result, the remaining threads might be cancelled.

87. What are the different ways in which a thread can be cancelled?

Ans: Cancellation of a target thread may occur in two different scenarios:

Asynchronous cancellation: one thread immediately terminates the target thread.

Deferred cancellation: the target thread periodically checks whether it should
terminate, allowing the target thread an opportunity to terminate itself in an orderly
fashion.
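A Pthreads sketch of deferred cancellation, with a hypothetical work loop: the target thread names its own safe points with pthread_testcancel(), so a pthread_cancel() request from another thread takes effect only there.

#include <pthread.h>
#include <unistd.h>

static void *target(void *arg)
{
    (void)arg;
    for (;;) {
        /* ... do one unit of work ... */
        pthread_testcancel();      /* deferred cancellation point: the
                                      thread terminates here if a cancel
                                      request is pending                 */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, target, NULL);
    sleep(1);                      /* let the target run for a while     */
    pthread_cancel(tid);           /* request cancellation               */
    pthread_join(tid, NULL);       /* reap the cancelled target thread   */
    return 0;
}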

88. What are the various scheduling criteria for CPU scheduling?

Ans: The various scheduling criteria are,

• CPU utilization

• Throughput

• Turnaround time

• Waiting time

• Response time
89. What are the requirements that a solution to the critical section problem must
satisfy?

Ans: The three requirements are

• Mutual exclusion

• Progress

• Bounded waiting

90. Define: Critical section problem.

Ans: Consider a system consisting of 'n' processes. Each process has a segment of code
called a critical section, in which the process may be changing common variables,
updating a table, or writing a file. When one process is executing in its critical section,
no other process is allowed to execute in its critical section.

91. How will you calculate turn-around time?

Ans: Turnaround time is the interval from the time of submission to the time of
completion of a process. It is the sum of the periods spent waiting to get into
memory, waiting in the ready queue, executing on the CPU, and doing I/O.
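For example (figures assumed only for illustration): if a process is submitted at time 0, waits 4 ms to be loaded into memory, waits 6 ms in the ready queue, executes for 8 ms on the CPU and spends 2 ms doing I/O, its turnaround time is 4 + 6 + 8 + 2 = 20 ms.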

92. Name two hardware instructions, and their definitions, which can be used for
implementing mutual exclusion.

Ans:

• TestAndSet

boolean TestAndSet (boolean &target)
{
    boolean rv = target;
    target = true;
    return rv;
}

• Swap

void Swap (boolean &a, boolean &b)
{
    boolean temp = a;
    a = b;
    b = temp;
}
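As a sketch only (the lock variable and wrapper names are assumed), the TestAndSet instruction above could be used to build mutual exclusion with a busy-wait loop:

boolean lock = false;              /* shared; false means the lock is free    */

void enter(void)
{
    while (TestAndSet(lock))       /* atomically read the old value, set true */
        ;                          /* spin while another process holds it     */
}

void leave(void)
{
    lock = false;                  /* release the lock                        */
}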

93. List two programming examples of multithreading giving improved performance
over a single-threaded solution.

Ans:

• A Web server that services each request in a separate thread.

• A parallelized application such as matrix multiplication where different parts of
the matrix may be worked on in parallel.

• An interactive GUI program such as a debugger where a thread is used to monitor
user input, another thread represents the running application, and a third thread
monitors performance.

DEADLOCK
Q.1 What is a deadlock ?
Ans. A set of processes is deadlocked if each process in the set is waiting for an
event that only another process in the set can cause. Usually the event is release of
a currently held resource.
Q.2 State the conditions for deadlock.
Ans. The four conditions for deadlock are:
a. Mutual exclusion: at least one resource must be held in a non-sharable mode.
b. Hold and wait: a process holding at least one resource is waiting for more resources
held by other processes.
c. No preemption: resources cannot be preempted.
d. Circular wait: there must be a circular chain of waiting processes.
Q.3 What is deadlock state ?
Ans. A system state in which one or more processes are deadlocked is called a deadlock
state.
Q.4 What is resource-allocation graph ?
Ans. Deadlocks can be described more precisely in terms of a directed graph called
a system resource-allocation graph.
Q.5 Is it possible to have a deadlock involving only one process? State your
answer.
Ans. No. This follows directly from the hold-and-wait condition.
Q.6 List two examples of deadlocks that are not related to a computer system
environment.
Ans. a. Two cars crossing a single-lane bridge from opposite directions.
b. A person going down a ladder while another person is climbing up the ladder.
Q7. Define safe state.
Ans. A state is safe if the system can allocate resources to each process (up to its
maximum) in some order and still avoid a deadlock.
Q.8 When is a set of processes deadlocked ?
Ans. Resource deadlock : Each process requests resources held by another process
in the set and it must receive all the requested resources before it can become
unblocked. Communication deadlock: Each process is waiting for communication
from another process and will not communicate until it receives the
communication for which it is waiting.
Q.9 Define request edge and assignment edge.
Ans. There is a request edge from process P to resource R if and only if P is
blocked waiting for an allocation of R. There is an assignment edge from resource
R to process P if and only if P is holding an allocation of R.
Q.10 What is hold and wait?
Ans. Hold and Wait : A process must be holding a resource and waiting for
another.
Q.11 What is a knot ?
Ans. A strongly connected sub-graph of a directed graph, such that starting from
any node in the subset it is impossible to leave the knot by following the edges of
the graph.
Q.12 What is banker's algorithm ?
Ans. Banker's algorithm is a deadlock avoidance algorithm that is applicable to a
resource-allocation system with multiple instances of each resource type.
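A compact sketch of only the safety-check step of the banker's algorithm, with assumed sizes and hypothetical Available, Allocation and Need structures; a full implementation would also handle the resource-request step.

#include <stdbool.h>

#define P 5   /* number of processes      (assumed) */
#define R 3   /* number of resource types (assumed) */

/* Returns true if the state is safe: some ordering of the processes
 * lets each one finish using the currently available resources plus
 * those released by the processes that finish before it. */
bool is_safe(int available[R], int allocation[P][R], int need[P][R])
{
    int  work[R];
    bool finished[P] = { false };

    for (int j = 0; j < R; j++)
        work[j] = available[j];

    int done = 0;
    while (done < P) {
        bool progress = false;
        for (int i = 0; i < P; i++) {
            if (finished[i])
                continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {
                for (int j = 0; j < R; j++)
                    work[j] += allocation[i][j];  /* i releases its resources */
                finished[i] = true;
                progress = true;
                done++;
            }
        }
        if (!progress)
            return false;          /* no unfinished process can proceed: unsafe */
    }
    return true;
}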
Q.13 Which are two options to break a deadlock ?
Ans. There are two options for breaking a deadlock :
1. Process termination: To abort one or more processes to break the circular wait.
2. Resource preemption: To preempt some resources from one or more of the deadlocked processes.
Q.14 Write the three ways to deal with the deadlock problem.
Ans. Three ways to deal with the deadlock problem :
1. Use a protocol to prevent or avoid deadlocks, ensuring that the system will never
enter a deadlock state.
2. Allow the system to enter a deadlock state, detect it and recover.
3. Ignore the problem altogether and pretend that deadlocks never occur in the
system.
Q.15 List out the methods used to recover from the deadlock.
Ans. Recovery-from-deadlock methods are process termination and resource
preemption. In process termination, either all deadlocked processes are aborted, or one
process is aborted at a time until the deadlock cycle is eliminated. The resource-preemption
procedure involves selecting a victim, rollback and dealing with starvation.
16. Define starvation in the context of deadlock.
Ans: A problem related to deadlock is indefinite blocking or starvation, a situation
where processes wait indefinitely within a semaphore. Indefinite blocking may
occur if we add and remove processes from the list associated with a semaphore in
LIFO order.

16. Name some classic problems of synchronization.


Ans: The Bounded-Buffer Problem
The Reader-Writer Problem
The Dining-Philosophers Problem
17. Define 'safe state'.
Ans: A state is safe if the system can allocate resources to each process in some order
and still avoid deadlock.
