OS Unit 2 Formatted
UNIT – II
2 Marks
Resource sharing: As threads can share the memory and resources of their process, an application can perform multiple activities inside the same address space.
Utilization of Multiprocessor Architecture: Different threads can run in parallel on multiple processors, which improves processor utilization and efficiency.
5. What are the requirements that a solution to the critical section problem must
satisfy? (May’16) (NOV’14)
What are Critical Regions? List down various mechanisms used to deal with
critical region problem. (NOV’16)
Define Critical Section [ Nov 2018]. Explain with suitable example [May 2019]
A critical region is an area of code where the code expects shared resources not to be changed or viewed by other threads while it is executing inside the critical region.
A critical section is the part of a program that accesses shared resources. The resource may be any resource in a computer, such as a memory location, a data structure, the CPU, or any I/O device.
The critical section cannot be executed by more than one process at the same time.
Requirements / Solutions
Mutual Exclusion - Mutual exclusion implies that only one process can be inside the critical section at any time. If any other processes require the critical section, they must wait until it is free.
Progress - Progress means that if a process is not using the critical section, then it should not stop any other process from accessing it. In other words, any process can enter a critical section if it is free.
Bounded Waiting - Bounded waiting means that each process must have a limited waiting time. It should not wait endlessly to access the critical section.
Example - Process A changing the data in a memory location while another process B
is trying to read the data from the same memory location.
Medium-term schedulers are those schedulers whose decision will have a mid-
term effect on the performance of the system.
It is responsible for swapping of a process from the Main Memory to
Secondary Memory and vice-versa.
It can re-introduce the process into memory and execution can be continued.
Speed is in between both short and long term scheduler.
A running process may become suspended if it makes an I/O request.
A suspended process cannot make any progress towards completion.
The main objective of medium term scheduler is to remove the process from
the memory and create space for other processes; the suspended process is then
moved to secondary storage.
11. What is the difference between process and thread? (May 2017 )
Process vs. Thread
1. Process: A program under execution, i.e., an active program. Thread: A lightweight process that can be managed independently by a scheduler.
2. Process: Requires more time for context switching as it is heavier. Thread: Requires less time for context switching as it is lighter than a process.
3. Process: Totally independent; does not share memory. Thread: May share some memory with its peer threads.
4. Process: Requires more resources than a thread. Thread: Generally needs fewer resources than a process.
5. Process: Has independent data and code segments. Thread: Shares the data segment, code segment, files, etc. with its peer threads.
6. Process: Each process is treated separately by the operating system. Thread: All user-level peer threads are treated as a single task by the operating system.
12. What are the uses of job queues, ready queue and device queue? (MAY’17)
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps a set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
Device queues − The processes which are blocked due to unavailability of an I/O
device constitute this queue.
14. What are the different CPU Scheduling algorithms [Nov 2017]
First Come First Serve (FCFS), Shortest-Job-First (SJF), Shortest Remaining Time, Priority Scheduling, Round Robin Scheduling, and Multilevel Queue Scheduling.
PART 1 [11 Marks]
1.1 Discuss how to do recovery from deadlock [5] [Apr 2014]
After a deadlock has been detected, a method is required to recover from it so that the system can run again; this method is called deadlock recovery. To recover the system from deadlock, the OS acts either on resources or on processes.
1. Process Termination:
Recovering from deadlock by killing processes is the simplest method of deadlock recovery. Sometimes it is best to kill a process that can be rerun from the beginning with no ill effects.
2. Resource Preemption:
The ability to take a resource away from a process, have another process use it, and
then give it back without the process noticing. It is highly dependent on the nature
of the resource.
Deadlock recovery through preemption is often difficult or sometimes impossible. This method raises three issues:
1. Selecting a victim - which resources and which processes are to be preempted?
2. Rollback - the preempted process must be rolled back to some safe state and restarted from that state.
3. Starvation - the same process may always be picked as a victim. This situation is called starvation and must be avoided. One solution is that a process may be picked as a victim only a finite number of times.
A set of processes is said to be in deadlock when every process in the set is waiting for an event that can be caused only by another process in the same set.
Every process follows the system model: a process requests a resource; if the resource cannot be allocated, the process waits; otherwise it uses the resource and releases it after use.
Methods for handling deadlock - There are mainly four methods for handling
deadlock.
1. Deadlock ignorance
It is the most popular method: the system acts as if no deadlock can occur, and the user restarts the system when one does. Handling deadlock is expensive, since a lot of code needs to be altered, which decreases performance; so for less critical jobs deadlocks are simply ignored.
The Ostrich algorithm is used in deadlock ignorance.
Used in Windows, Linux, etc.
2. Deadlock prevention
It means that we design such a system where there is no chance of having a deadlock.
Mutual exclusion:
o It cannot be eliminated, as it is a hardware property.
o For example, the printer cannot be simultaneously shared by several processes.
o Removing this condition is very difficult because some resources are not sharable.
Hold and wait:
o Hold and wait can be resolved using the conservative approach, where a process can start only if it has acquired all the resources it needs in advance.
Active approach:
o Here the process acquires only the resources it currently requires; whenever it requires a new resource, it must first release all the resources it holds.
Wait timeout:
o Here there is a maximum time bound for which a process can wait for other resources, after which it must release all its resources.
Circular wait:
o In order to remove circular wait, we assign a number to every resource, and a process can request resources only in increasing order; otherwise, the process must release all acquired resources with higher numbers and then make a fresh request (see the sketch after this list).
No pre-emption:
o In no pre-emption, we allow forceful pre-emption, where a resource can be forcefully pre-empted from a process.
o The pre-empted resource is added to the list of resources for which the process is waiting.
o The process can be restarted only when it regains its old resources.
o Priority must be given to a process that is in the waiting state.
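As referenced in the circular-wait item above, here is a minimal C sketch of resource ordering; the resources R1 and R2 and the worker function are hypothetical names. Every thread locks the lower-numbered resource first, so a circular wait can never form:

#include <pthread.h>
#include <stdio.h>

/* Hypothetical resources, numbered 1 and 2. */
pthread_mutex_t R1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t R2 = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    /* Both threads follow the same global order: R1 before R2. */
    pthread_mutex_lock(&R1);
    pthread_mutex_lock(&R2);
    printf("thread %ld holds R1 and R2\n", (long)arg);
    pthread_mutex_unlock(&R2);
    pthread_mutex_unlock(&R1);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, (void *)1L);
    pthread_create(&b, NULL, worker, (void *)2L);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}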
3. Deadlock avoidance
Here, whenever a process enters the system, it must declare its maximum demand, so that the system can address the deadlock problem before a deadlock occurs.
This approach employs an algorithm to assess the possibility that a deadlock could occur and acts accordingly.
Even if the necessary conditions for deadlock are in place, it is still possible to avoid deadlock by allocating resources carefully.
Safe state
When a system can allocate resources to processes in such a way that deadlock is still avoided, the state is called a safe state. When a safe sequence exists, we can say that the system is in a safe state.
The system is in a safe state only if there exists a safe sequence. A sequence of processes P1, P2, ..., Pn is a safe sequence for the current allocation state if, for each Pi, the resource requests that Pi can still make can be satisfied by the currently available resources plus the resources held by all Pj with j < i.
This graph is also a kind of graphical banker's algorithm, where a process is denoted by a circle Pi and a resource is denoted by a rectangle Rj; dots inside the rectangle represent the instances (copies) of that resource.
The presence of a cycle in the resource-allocation graph is a necessary but not a sufficient condition for deadlock. If every resource type has exactly one instance, then the presence of a cycle is both a necessary and a sufficient condition for deadlock.
The system is in an unsafe state when a cycle exists: if P1 requests R2 and P2 requests R1, then deadlock will occur.
2) Banker's algorithm
The resource-allocation graph algorithm is not applicable to systems with multiple instances of each resource type. So, for such systems, the Banker's algorithm is used.
Here whenever a process enters into the system it must declare maximum demand
possible.
At runtime, we maintain data structures like current allocation, current need, and current available. Whenever a process requests some resources, we first check whether granting the request would leave the system in a safe state, i.e., if every process then requested its maximum resources, is there a sequence in which all requests can be satisfied? If yes, the request is granted; otherwise it is rejected.
Safety algorithm - This algorithm is used to find whether the system is in a safe state or not:
First we find the Need matrix: Need = Maximum - Allocation. Then we find the available resources: Available = Total - Allocated.
For example, with resource types A, B, C, D: Total = (6 5 7 6) and Allocated = (3 4 6 4), so Available = (3 1 1 2).
Then we check whether the system is in deadlock or not and find a safe sequence of processes.
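The safety check described above can be sketched in C; the number of processes P, the number of resource types R, and the matrix names are illustrative assumptions:

#include <stdbool.h>

#define P 5   /* number of processes (assumed) */
#define R 4   /* number of resource types, e.g., A, B, C, D */

/* Returns true and fills seq[] with a safe sequence if one exists. */
bool is_safe(const int avail[R], const int max[P][R],
             const int alloc[P][R], int seq[P])
{
    int need[P][R], work[R];
    bool finished[P] = { false };

    /* Need = Maximum - Allocation */
    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++)
            need[i][j] = max[i][j] - alloc[i][j];
    for (int j = 0; j < R; j++)
        work[j] = avail[j];   /* Work starts as Available */

    for (int count = 0; count < P; ) {
        bool found = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool ok = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { ok = false; break; }
            if (ok) {
                /* Pi can run to completion; reclaim its resources */
                for (int j = 0; j < R; j++)
                    work[j] += alloc[i][j];
                finished[i] = true;
                seq[count++] = i;
                found = true;
            }
        }
        if (!found)
            return false;   /* no process can proceed: unsafe state */
    }
    return true;
}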
When the system is in deadlock, one method is to inform the operator, who then deals with the deadlock manually; the second method is that the system recovers from the deadlock automatically. There are two ways to recover from deadlock: process termination and resource preemption, as discussed above.
Bounded buffer problem, which is also called producer consumer problem, is one of the
classic problems of synchronization. Let's start by understanding the problem here, before
moving on to the solution and program code.
Problem Statement: There is a buffer of n slots and each slot is capable of storing one
unit of data. There are two processes running, namely, producer and consumer, which are
operating on the buffer.
At any instant, the current value of empty represents the number of empty slots in the
buffer and full represents the number of occupied slots in the buffer.
The Producer Operation: The pseudocode of the producer function looks like this:
do {
// wait until empty > 0 and then decrement 'empty'
wait(empty);
// acquire lock
wait(mutex);
/* perform the insert operation in a slot */
// release lock
signal(mutex);
// increment 'full'
signal(full);
} while(TRUE);
The Consumer Operation: The pseudocode for the consumer function looks like this:
do
{
// wait until full > 0 and then decrement 'full'
wait(full);
// acquire the lock
wait(mutex);
/* perform the remove operation in a slot */
// release the lock
signal(mutex);
// increment 'empty'
signal(empty);
} while(TRUE);
The consumer waits until there is at least one full slot in the buffer.
Then it decrements the full semaphore because the number of occupied slots will
be decreased by one, after the consumer completes its operation.
After that, the consumer acquires lock on the buffer.
Following that, the consumer completes the removal operation so that the data
from one of the full slots is removed.
Then, the consumer releases the lock.
Finally, the empty semaphore is incremented by 1, because the consumer has just
removed data from an occupied slot, thus making it empty.
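The two pseudocode fragments above can be combined into a runnable C program using POSIX semaphores and pthreads; the buffer size N, the item counts, and the function names are assumptions for the demo:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                 /* number of buffer slots (assumption) */

int buffer[N];
int in = 0, out = 0;        /* next slot to fill / to empty */
sem_t empty_slots, full_slots, mutex;

void *producer(void *arg)
{
    for (int item = 1; item <= 10; item++) {
        sem_wait(&empty_slots);      /* wait until empty > 0, then decrement */
        sem_wait(&mutex);            /* acquire lock */
        buffer[in] = item;           /* insert operation */
        in = (in + 1) % N;
        sem_post(&mutex);            /* release lock */
        sem_post(&full_slots);       /* increment 'full' */
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int k = 0; k < 10; k++) {
        sem_wait(&full_slots);       /* wait until full > 0, then decrement */
        sem_wait(&mutex);            /* acquire lock */
        int item = buffer[out];      /* remove operation */
        out = (out + 1) % N;
        sem_post(&mutex);            /* release lock */
        sem_post(&empty_slots);      /* increment 'empty' */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    sem_init(&empty_slots, 0, N);    /* initially all N slots are empty */
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}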
The readers-writers problem is used for managing synchronization among multiple reader and writer processes so that there are no problems with the data set, i.e., no inconsistency is generated.
The Solution
From the above problem statement, it is evident that readers have higher priority than writers: if a writer wants to write to the resource, it must wait until there are no readers currently accessing the resource.
Instead of having each reader acquire a lock on the shared resource itself, we use the mutex m so that a reader acquires and releases a lock whenever it updates the read_count variable.
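Based on the description further below, the code for the writer process takes the following simple form, where the writer brackets its write with wait and signal on the semaphore w:

while(TRUE)
{
    // wait until no reader (and no other writer) holds w
    wait(w);
    /* perform the write operation */
    // release w so the next writer, or the first reader, can proceed
    signal(w);
}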
And, the code for the reader process looks like this:
while(TRUE)
{
//acquire lock
wait(m);
read_count++;
if(read_count == 1)
wait(w);
//release lock
signal(m);
/* perform the reading operation */
// acquire lock
wait(m);
read_count--;
if(read_count == 0)
signal(w);
// release lock
signal(m);
}
As seen above in the code for the writer, the writer just waits on the w semaphore
until it gets a chance to write to the resource.
After performing the write operation, it increments w so that the next writer can
access the resource.
On the other hand, in the code for the reader, the lock is acquired whenever the
read_count is updated by a process.
When a reader wants to access the resource, first it increments the read_count
value, then accesses the resource and then decrements the read_count value.
The semaphore w is used by the first reader which enters the critical section and
the last reader which exits the critical section.
The reason for this is that when the first reader enters the critical section, the writer is blocked from the resource; only other readers can access the resource at that point.
Similarly, when the last reader exits the critical section, it signals the writer using
the w semaphore because there are zero readers now and a writer can have the
chance to access the resource.
Synchronization
Process Synchronization means sharing system resources by processes in such a way that concurrent access to shared data is handled, thereby minimizing the chance of inconsistent data. Maintaining data consistency demands mechanisms to ensure synchronized execution of cooperating processes.
Process synchronization was introduced to handle problems that arise when multiple processes execute concurrently.
Solution to Critical Section Problem -A solution to the critical section problem must
satisfy the following three conditions:
1. Mutual Exclusion - Out of a group of cooperating processes, only one process can be
in its critical section at a given point of time.
2. Progress -If no process is in its critical section, and if one or more threads want to
execute their critical section then any one of these threads must be allowed to get into its
critical section.
3. Bounded Waiting - After a process makes a request for getting into its critical section,
there is a limit for how many other processes can get into their critical section, before this
process's request is granted. So, after the limit is reached, system must grant the process
permission to get into its critical section.
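For two processes, these three conditions are met by Peterson's solution; the following sketch assumes that loads and stores of flag and turn are atomic (i and j = 1 - i identify the two processes):

// shared variables
boolean flag[2] = { FALSE, FALSE };   // flag[i]: process i wants to enter
int turn;                             // which process yields

// code for process i
do {
    flag[i] = TRUE;              // announce intent to enter
    turn = j;                    // give the other process priority
    while (flag[j] && turn == j)
        ;                        // busy wait while j wants in and has priority
    /* critical section */
    flag[i] = FALSE;             // exit section
    /* remainder section */
} while (TRUE);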
Concurrency
Advantages of Concurrency:
Running of multiple applications - It enables multiple applications to run at the same time.
Better resource utilization - Resources that are unused by one application can be used by other applications.
Better average response time - Without concurrency, each application has to run to completion before the next one can be run.
Better performance - It enables better performance from the operating system. When one application uses only the processor and another application uses only the disk drive, the time to run both applications concurrently to completion will be shorter than the time to run each application consecutively.
Drawbacks of Concurrency:
It is required to protect multiple applications from one another.
Issues of Concurrency:
Non-atomic – Operations that are non-atomic but interruptible by multiple
processes can cause problems.
Race conditions – A race condition occurs when the outcome depends on which of several processes gets to a point first.
Blocking – Processes can block waiting for resources. A process could be blocked for a long period of time waiting for input from a terminal. If the process is required to periodically update some data, this would be very undesirable.
Starvation – It occurs when a process does not obtain the service it needs to make progress.
Deadlock – It occurs when two processes are blocked and hence neither can
proceed to execute.
Problem Statement
Consider there are five philosophers sitting around a circular dining table and their job is
to think and eat alternatively. The dining table has five chopsticks and a bowl of rice in the
middle as shown in the below figure.
At any instant, a philosopher is either eating or thinking. When a philosopher wants to eat,
he uses two chopsticks - one from their left and one from their right. When a philosopher
wants to think, he keeps down both chopsticks at their original place.
A bowl of noodles is placed at the centre of the table along with five chopsticks, one for each philosopher. To eat, a philosopher needs both the chopstick on the left and the chopstick on the right. A philosopher can eat only if both the immediate left and right chopsticks are available. If both are not available, the philosopher puts down whichever chopstick he may have picked up and starts thinking again.
The dining philosophers problem demonstrates a large class of concurrency-control problems; hence it is a classic synchronization problem.
Solution
From the problem statement, it is clear that a philosopher can think for an
indefinite amount of time. But when a philosopher starts eating, he has to stop at
some point of time. The philosopher is in an endless cycle of thinking and eating.
An array of five semaphores, stick[5], is used, one for each of the five chopsticks.
while(TRUE)
{
wait(stick[i]);
/* mod is used because if i=5, next chopstick is 1 (dining table is circular) */
wait(stick[(i+1) % 5]);
/* eat */
signal(stick[i]);
signal(stick[(i+1) % 5]);
/* think */
}
When a philosopher wants to eat the rice, he will wait for the chopstick at his left and
picks up that chopstick. Then he waits for the right chopstick to be available, and then
picks it too. After eating, he puts both the chopsticks down.
But if all five philosophers become hungry simultaneously and each of them picks up one chopstick, then a deadlock situation occurs because each will wait forever for another chopstick. The possible solutions for this are:
A philosopher must be allowed to pick up the chopsticks only if both the left and
right chopsticks are available.
Allow only four philosophers to sit at the table. That way, if all the four
philosophers pick up four chopsticks, there will be one chopstick left on the table.
So, one philosopher can start eating and eventually, two chopsticks will be
available. In this way, deadlocks can be avoided.
This solution imposes the restriction that a philosopher may pick up her chopsticks only if
both of them are available.
// Pickup chopsticks
pickup(int i)
{
    // indicate that I'm hungry
    state[i] = hungry;
    // set state to eating in test() only if my left and right neighbors are not eating
    test(i);
    // if unable to eat now, wait on my condition variable until a neighbor signals me
    if (state[i] != eating)
        self[i].wait();
}
// Putdown chopsticks
putdown(int i)
{
    state[i] = thinking;
    // if right neighbor R=(i+1)%5 is hungry & both of R's neighbors are not eating,
    // set R's state to eating and wake it up by signaling R's CV
    test((i + 1) % 5);
    test((i + 4) % 5);
}
test(int i)
{
    if (state[(i + 1) % 5] != eating
        && state[(i + 4) % 5] != eating
        && state[i] == hungry) {
        state[i] = eating;
        self[i].signal();
    }
}
init()
{
    // Initially every philosopher is thinking. This monitor-based solution is
    // deadlock free and mutually exclusive in that no 2 neighbors can eat simultaneously.
    for (int i = 0; i < 5; i++)
        state[i] = thinking;
}
} // end of monitor
This allows philosopher i to delay herself when she is hungry but is unable to obtain
the chopsticks she needs. We are now in a position to describe our solution to the
dining-philosophers problem.
The distribution of the chopsticks is controlled by the monitor DiningPhilosophers.
Each philosopher, before starting to eat, must invoke the operation pickup().
This act may result in the suspension of the philosopher process. After the successful
completion of the operation, the philosopher may eat.
Following this, the philosopher invokes the putdown() operation. Thus, philosopher i
must invoke the operations pickup() and putdown() in the following sequence:
DiningPhilosophers.pickup(i);
...
eat
...
DiningPhilosophers.putdown(i);
It is easy to show that this solution ensures that no two neighbors are eating
simultaneously and that no deadlocks will occur. We note, however, that it is
possible for a philosopher to starve to death.
Initially the elements of the chopstick array are initialized to 1, as the chopsticks are on the table and not picked up by any philosopher. The structure of philosopher i is given as follows:
do {
    wait( chopstick[i] );
    wait( chopstick[ (i+1) % 5] );
    /* EATING THE RICE */
    signal( chopstick[i] );
    signal( chopstick[ (i+1) % 5] );
    /* THINKING */
} while(1);
1.6 Write short notes on Monitors (OR) Explain the structure of monitors. [Nov
2016]
Monitors are used for process synchronization. With the help of programming languages, we can use a monitor to achieve mutual exclusion among processes.
Characteristics of Monitors
Initialization: – Initialization comprises the code that is run once, when the monitor is created.
Private Data: – Private data is another component of the monitor. It comprises all
the private data, and the private data contains private procedures that can only be
used within the monitor. So, outside the monitor, private data is not visible.
Monitor Procedure: – Monitors Procedures are those procedures that can be
called from outside the monitor.
Monitor Entry Queue: – The monitor entry queue is another essential component of the monitor; it holds all the threads that have called a monitor procedure but have not yet entered the monitor.
Syntax of monitor
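The general structure of a monitor can be sketched as follows (a schematic outline, not tied to any particular programming language):

monitor MonitorName
{
    // declarations of shared (private) variables
    condition x, y;                  // condition variables

    procedure P1 ( ... ) { ... }     // monitor procedures callable from outside
    procedure P2 ( ... ) { ... }

    initialization_code ( ... )      // runs once, when the monitor is created
    { ... }
}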
Condition Variables
There are two types of operations that we can perform on the condition variables of the
monitor:
1. Wait
2. Signal
Wait Operation
a.wait(): – A process that performs a wait operation on a condition variable is suspended and placed in the blocked queue of that condition variable.
Signal Operation
a.signal() : – If a signal operation is performed by the process on the condition
variable, then a chance is provided to one of the blocked processes.
Advantages of Monitor
It makes parallel programming easier: if monitors are used, the code is less error-prone compared to semaphores.
1.7 Demonstrate that monitors and semaphores are equivalent insofar as they can be
used to implement the same types of synchronization problems [Apr 2019]
Describe the difference between wait(A), where A is a semaphore, and B.wait(), where B is a condition variable in a monitor. [Nov 2014]
Semaphores and monitors both allow processes to access shared resources in mutual exclusion; both are process synchronization tools. Nevertheless, they are very different from each other. The key difference for the operations named in the question: wait(A) on a semaphore A decrements the count and blocks only if the count is exhausted, and the effect of a signal is remembered in the count; B.wait() on a condition variable B always suspends the calling process until some other process performs B.signal(), and a signal on a condition variable with no waiting process has no effect (it is not remembered).
Segment Number specifies the segment from which the CPU wants to read data.
Page Number specifies the page of that segment from which the CPU wants to read the data.
Page Offset specifies the word on that page that the CPU wants to read.
Step-02:
For the generated segment number, corresponding entry is located in segment table.
Segment table provides the frame number of the frame storing the page table of the
referred segment.
The frame containing the page table is located.
Step-03:
For the generated page number, corresponding entry is located in the page table.
Page table provides the frame number of the frame storing the required page of the
referred segment.
The frame containing the required page is located.
Step-04:
The frame number combined with page offset forms the required physical address.
For the generated page offset, corresponding word is located in the page and read.
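The four steps above can be sketched in C with bit operations. The field widths (8-bit segment number, 12-bit page number, 12-bit offset) and the way the tables are indexed are illustrative assumptions, not a description of any particular hardware:

#include <stdint.h>

/* Assumed layout of a 32-bit logical address:
   | 8-bit segment # | 12-bit page # | 12-bit offset | */
#define OFFSET_BITS 12
#define PAGE_BITS   12

uint32_t translate(uint32_t logical,
                   const uint32_t seg_table[],   /* segment # -> page table id */
                   const uint32_t page_table[][1 << PAGE_BITS]) /* page # -> frame # */
{
    uint32_t offset = logical & ((1u << OFFSET_BITS) - 1);           /* Step 1 */
    uint32_t page   = (logical >> OFFSET_BITS) & ((1u << PAGE_BITS) - 1);
    uint32_t seg    = logical >> (OFFSET_BITS + PAGE_BITS);

    uint32_t pt    = seg_table[seg];        /* Step 2: locate the segment's page table */
    uint32_t frame = page_table[pt][page];  /* Step 3: locate the required page */

    return (frame << OFFSET_BITS) | offset; /* Step 4: frame number + page offset */
}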
Diagram - The following diagram illustrates the above steps of translating a logical address into a physical address.
Advantages-
1. It reduces memory usage.
2. Page table size is limited by the segment size.
3. Segment table has only one entry corresponding to one actual segment.
4. External Fragmentation is not there.
5. It simplifies memory allocation.
Disadvantages-
1. Internal Fragmentation will be there.
2. The complexity level will be much higher as compared to paging.
3. Page Tables need to be contiguously stored in the memory.
The Bakery algorithm is one of the simplest known solutions to the mutual exclusion problem for the general case of N processes. It is a critical-section solution for N processes that preserves the first-come, first-served property.
Before entering its critical section, each process receives a number; the holder of the smallest number enters the critical section.
If processes Pi and Pj receive the same number,
if i< j
Pi is served first;
else
Pj is served first.
The numbering scheme always generates numbers in increasing order of
enumeration; i.e., 1, 2, 3, 3, 3, 3, 4, 5, …
Shared data – choosing is an array [0..n – 1] of boolean values; & number is an array
[0..n – 1] of integer values. Both are initialized to False & Zero respectively.
Algorithm Pseudocode –
repeat
choosing[i] := true;
number[i] := max(number[0], number[1], ..., number[n - 1])+1;
choosing[i] := false;
for j := 0 to n - 1
do begin
while choosing[j] do no-op;
while number[j] != 0
and (number[j], j) < (number[i], i) do no-op;
end;
critical section
number[i] := 0;
remainder section
until false;
First, the process sets its "choosing" variable to TRUE, indicating its intent to enter the critical section.
Then it is assigned a ticket number one greater than the maximum held by the other processes.
Then the "choosing" variable is set to FALSE, indicating that it now has a new ticket number.
The very purpose of the first three lines is that if a process is modifying its
TICKET value then at that time some other process should not be allowed to
check its old ticket value which is now obsolete.
Inside the for loop, before checking a ticket value, we first make sure that the other process's "choosing" variable is FALSE.
After that, we proceed to check the ticket values: the process with the least ticket number (ties broken by process id) gets inside the critical section.
The exit section just resets the ticket value to zero.
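The pseudocode above can be turned into runnable C using C11 atomics, whose sequentially consistent loads and stores preserve the ordering the algorithm relies on; the thread count N, the worker function, and the shared counter are assumptions for the demo:

#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

#define N 4   /* number of competing threads (assumed) */

atomic_bool choosing[N];
atomic_int  number[N];
int counter = 0;   /* shared data protected by the bakery lock */

static int max_ticket(void)
{
    int m = 0;
    for (int j = 0; j < N; j++) {
        int t = atomic_load(&number[j]);
        if (t > m) m = t;
    }
    return m;
}

static void bakery_lock(int i)
{
    atomic_store(&choosing[i], true);
    atomic_store(&number[i], max_ticket() + 1);   /* take a ticket */
    atomic_store(&choosing[i], false);
    for (int j = 0; j < N; j++) {
        while (atomic_load(&choosing[j]))
            ;   /* wait while j is still picking its ticket */
        while (atomic_load(&number[j]) != 0 &&
               (atomic_load(&number[j]) < atomic_load(&number[i]) ||
                (atomic_load(&number[j]) == atomic_load(&number[i]) && j < i)))
            ;   /* wait while (number[j], j) < (number[i], i) */
    }
}

static void bakery_unlock(int i)
{
    atomic_store(&number[i], 0);   /* exit section: reset the ticket */
}

static void *worker(void *arg)
{
    int i = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        bakery_lock(i);
        counter++;                 /* critical section */
        bakery_unlock(i);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    int id[N];
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, worker, &id[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    printf("counter = %d (expected %d)\n", counter, N * 100000);
    return 0;
}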
PART 2 [11 Marks]
2.1 What is critical section problem and explain two process solution and multiple
process solutions? (APR’15) -
What is critical section problem? How will you find the solution for it? [May 2018]
Refer Questions 1.3, 1.4 and 1.5
2.2 Specify the purpose of Semaphores and its types with an example (APR’16)
[Also Refer Questions 1.7 and 1.5]
What is binary semaphore? How will you implement it?[5] [May 2018]
Wait: – In the wait operation, the value of the argument 'S' is decremented by 1 if the value of 'S' is positive. If the value of 'S' is zero or negative, the wait operation blocks until 'S' becomes positive (no decrement is performed until then).
Signal: – In Signal atomic operation, the value of the argument variable ‘S’ is
incremented.
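The classic busy-waiting definitions of these two operations can be sketched as follows; each body is assumed to execute atomically:

wait(S) {
    while (S <= 0)
        ;          // busy wait until S becomes positive
    S = S - 1;
}

signal(S) {
    S = S + 1;
}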
Characteristics of Semaphore
1. It is a mechanism that can be used to provide synchronization of tasks.
2. It is a low-level synchronization mechanism.
3. Semaphore will always hold a non-negative integer value.
4. Semaphores can be implemented using atomic test operations or by disabling interrupts.
Types of Semaphores
1. Counting Semaphores: – Counting Semaphore is defined as a semaphore that
contains integer values, and these values have an unrestricted value domain. A
counting semaphore is helpful to coordinate the resource access, which includes
multiple instances.
2. Binary Semaphores: – Binary Semaphores are also called Mutex lock. There are
two values of binary semaphores, which are 0 and 1. The value of binary
semaphore is initialized to 1. We use binary semaphore to remove the problem of
the critical section with numerous processes.
In this type of semaphore, the wait operation proceeds only if semaphore = 1, and the signal operation succeeds when semaphore = 0. Binary semaphores are easier to implement than counting semaphores.
Example of Semaphore
shared var mutex: semaphore = 1;
Process i
begin
    ...
    P(mutex);
    execute CS;
    V(mutex);
    ...
end;
Advantages of Semaphore
1. A counting semaphore can allow more than one process to access a resource that has multiple instances.
2. Semaphores are machine-independent, since they are implemented in the machine-independent code of the microkernel.
3. They do not allow multiple processes to enter the critical section, so mutual exclusion is guaranteed.
4. As there is no busy waiting in a blocking semaphore implementation, there is no wastage of process time and resources.
5. They allow flexible management of resources.
Disadvantages of Semaphore
1. One of the biggest limitations of a semaphore is priority inversion.
2. The operating system has to keep track of all calls to wait and signal semaphore.
3. Their use is never enforced, but it is by convention only.
4. In order to avoid deadlocks when using semaphores, the wait and signal operations must be executed in the correct order.
5. Semaphore programming is complicated, so there are chances of not achieving mutual exclusion.
6. It is also not a practical method for large scale use as their use leads to loss of
modularity.
7. Semaphore is more prone to programmer error.
8. It may cause deadlock or violation of mutual exclusion due to programmer error.
A thread is an execution unit that consists of its own program counter, a stack, and a set of registers. Threads are also known as lightweight processes. Threads are a popular way to improve application performance through parallelism. The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in parallel.
As each thread has its own program counter, stack, and registers for execution, multiple tasks can be executed in parallel by increasing the number of threads.
Types of Thread
User threads are implemented above the kernel, without kernel support. These are the threads that application programmers use in their programs.
Kernel threads are supported within the kernel of the OS itself. All modern OSs support
kernel level threads, allowing the kernel to perform multiple simultaneous tasks and/or to
service multiple kernel system calls simultaneously.
Signal Handling
Whenever a multithreaded process receives a signal, to which thread should that signal be delivered? There are four main options for signal delivery:
1. Deliver the signal to the thread to which the signal applies.
2. Deliver the signal to every thread in the process.
3. Deliver the signal to certain threads in the process.
4. Assign a specific thread to receive all signals for the process.
Asynchronous Cancellation
It means cancellation of the thread immediately. Allocation of resources and inter-thread data transfer may be challenging with asynchronous cancellation.
Deferred Cancellation
In this method, a flag is set indicating that the thread should cancel itself when it is feasible. It is up to the cancelled thread to check this flag periodically and exit gracefully when it sees the flag set.
Scheduler Activation
Numerous implementations of threads provide a virtual processor as an interface between user and kernel threads, specifically for the two-tier model. The virtual processor is called a lightweight process (LWP). Kernel threads and LWPs have a one-to-one correspondence. The number of available kernel threads can be changed dynamically, and the OS schedules the kernel threads onto the real processors.
2.5 Explain the scheduling criteria. With an example explain various CPU
Scheduling algorithms [Sep 2020]
CPU scheduling is the process of determining which process will own the CPU for execution while another process is on hold. The main task of CPU scheduling is to make sure that whenever the CPU remains idle, the OS selects one of the processes available in the ready queue for execution. The selection is carried out by the CPU scheduler, which selects one of the processes in memory that are ready for execution.
CPU Scheduling: Scheduling Criteria: There are many different criteria to check when considering the "best" scheduling algorithm. They are:
CPU Utilization
To make the best use of the CPU and not waste any CPU cycle, the CPU should be working most of the time (ideally 100% of the time). In a real system, CPU usage should range from 40% (lightly loaded) to 90% (heavily loaded).
Throughput
It is the total number of processes completed per unit time, that is, the total amount of work done in a unit of time. This may range from 10/second to 1/hour depending on the specific processes.
Turnaround Time
It is the amount of time taken to execute a particular process, i.e. The interval from
time of submission of the process to the time of completion of the process(Wall
clock time).
Waiting Time
The sum of the periods a process spends waiting in the ready queue to acquire control of the CPU.
Load Average
It is the average number of processes residing in the ready queue waiting for their
turn to get into the CPU.
Response Time
Amount of time it takes from when a request was submitted until the first response
is produced. Remember, it is the time till the first response and not the completion
of process execution(final response).
In general CPU utilization and Throughput are maximized and other factors are
reduced for proper optimization.
Preemptive Scheduling
In Preemptive Scheduling, the tasks are mostly assigned with their priorities.
Sometimes it is important to run a task with a higher priority before another lower
priority task, even if the lower priority task is still running.
The lower priority task holds for some time and resumes when the higher priority
task finishes its execution.
Non-Preemptive Scheduling
In this type of scheduling method, the CPU has been allocated to a specific
process. The process that keeps the CPU busy will release the CPU either by
switching context or terminating.
It is the only method that can be used for various hardware platforms. That's
because it doesn't need special hardware (for example, a timer) like preemptive
scheduling.
Types of CPU scheduling Algorithm - There are mainly six types of process scheduling
algorithms
1. First Come First Serve (FCFS)
2. Shortest-Job-First (SJF) Scheduling
3. Shortest Remaining Time
4. Priority Scheduling
5. Round Robin Scheduling
6. Multilevel Queue Scheduling
Example (FCFS): processes P0, P1, P2, P3 arrive at times 0, 1, 2, 3 and are first served at times 0, 5, 8, 16 respectively, so Wait Time = Service Start - Arrival Time:
P0: 0 - 0 = 0
P1: 5 - 1 = 4
P2: 8 - 2 = 6
P3: 16 - 3 = 13
Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75
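This calculation can be reproduced with a short C sketch; the burst times 5, 3, 8 for P0-P2 follow from the service start times 0, 5, 8, 16 above, while P3's burst time is an assumed value that does not affect the waiting times:

#include <stdio.h>

int main(void)
{
    /* Arrival and burst times from the example above (P3's burst is assumed). */
    int arrival[] = { 0, 1, 2, 3 };
    int burst[]   = { 5, 3, 8, 6 };
    int n = 4, time = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {        /* FCFS: run in arrival order */
        if (time < arrival[i])
            time = arrival[i];           /* CPU idles until the process arrives */
        int wait = time - arrival[i];    /* wait = service start - arrival */
        printf("P%d: %d - %d = %d\n", i, time, arrival[i], wait);
        total_wait += wait;
        time += burst[i];                /* the process runs to completion */
    }
    printf("Average Wait Time: %.2f\n", total_wait / (double)n);
    return 0;
}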
Given: Table of processes and their arrival time, execution time, and priority. Here we consider 1 to be the lowest priority.
P3 3 6 3 5
Multiple-level queues are not an independent scheduling algorithm. They make use of
other existing algorithms to group and schedule jobs with common characteristics.
Multiple queues are maintained for processes with common characteristics.
Each queue can have its own scheduling algorithms.
Priorities are assigned to each queue.
For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in
another queue. The Process Scheduler then alternately selects jobs from each queue and
assigns them to the CPU based on the algorithm assigned to the queue.