
OPERATING SYSTEMS (R16 III B.TECH I SEM) UNIT - II
Syllabus: Process Management – Process concept, process scheduling, operations, Inter process
communication. Multi Thread programming models. Process scheduling criteria and algorithms, and
their evaluation.
Process Concept: A process is an instance of a program that is running in a computer. A process
is more than the program code, which is sometimes known as the text section. It also includes the
current activity, as represented by the value of the program counter and the contents of the
processor's registers. A process generally also includes the process stack, which contains temporary
data (such as function parameters, return addresses, and local variables), and a data section, which
contains global variables. A process may also include a heap, which is memory that is dynamically
allocated during process run time. The structure of a process in memory is shown in the following
figure.

A program is a passive entity, such as a file containing a list of instructions stored on disk
(often called an executable file), whereas a process is an active entity, with a program counter
specifying the next instruction to execute and a set of associated resources. A program becomes a
process when an executable file is loaded into memory. Two common techniques for loading
executable files are double-clicking an icon representing the executable file and entering the name of
the executable file on the command line (as in prog.exe or a.out).
Process States (or) Life Cycle of a Process: As a process executes, it changes its state. The
state of a process is defined in part by the current activity of that process. Each process may be in
one of the following states:
 New. The process is being created.
 Running. Instructions are being executed.
 Waiting. The process is waiting for some event to occur (such as an I/O completion or
reception of a signal).
 Ready. The process is waiting to be assigned to a processor.
 Terminated. The process has finished execution.

The state diagram corresponding to these states is as shown below.


Process Control Block: Each process is represented in the operating system by a process control
block (PCB)—also called a task control block. A PCB is shown in the following figure.

It contains many pieces of information associated with a specific process, including these:
 Process state: The state may be new, ready, running, waiting, halted, and so on.
 Program counter: The counter indicates the address of the next instruction to be executed
for this process.
 CPU registers: This field records the registers used by the process. The registers vary
in number and type, depending on the computer architecture. They include accumulators,
index registers, stack pointers, and general-purpose registers, plus any condition-code
information.
 CPU-scheduling information: This information includes a process priority, pointers to
scheduling queues, and any other scheduling parameters.
 Memory-management information: This information may include such information as the
value of the base and limit registers, the page tables, or the segment tables, depending on the
memory system used by the operating system.
 Accounting information: This information includes the amount of CPU and real time used,
time limits, account numbers, job or process numbers, and so on.
 I/O status information: This information includes the list of I/O devices allocated to the
process, a list of open files, and so on.
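
The PCB is typically realized as a structure inside the kernel. The following C sketch is purely illustrative; the field names and types are assumptions, and real kernels (for example, Linux's task_struct) carry many more fields.

/* Illustrative PCB sketch; field names are hypothetical. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

struct pcb {
    int            pid;              /* process number */
    proc_state_t   state;            /* process state */
    unsigned long  program_counter;  /* address of next instruction */
    unsigned long  registers[16];    /* saved CPU registers */
    int            priority;         /* CPU-scheduling information */
    unsigned long  base, limit;      /* memory-management information */
    unsigned long  cpu_time_used;    /* accounting information */
    int            open_files[16];   /* I/O status information */
    struct pcb    *next;             /* link to the next PCB in a queue */
};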

Process Scheduling: The process scheduler selects an available process (possibly from a set of
several available processes) for program execution on the CPU. For this purpose, the operating
system maintains the following three queues:
 Job Queue: Consists of all processes in the system.
 Ready Queue: The processes that are residing in main memory and are ready and waiting to
execute are kept on a list called the ready queue. This queue is generally stored as a linked
list. A ready-queue header contains pointers to the first and final PCBs in the list. Each PCB
includes a pointer field that points to the next PCB in the ready queue.
 Device Queue: The list of processes waiting for a particular I/O device is called a device
queue.
Each queue is maintained in the form of a linked list as shown below.

A common representation for a discussion of process scheduling is a queueing diagram,
which is shown below. Two types of queues are present: the ready queue and a set of device queues.
The circles represent the resources that serve the queues, and the arrows indicate the flow of
processes in the system. A new process is initially put in the ready queue. It waits there until it is
selected for execution, or is dispatched. Once the process is allocated the CPU and is executing, one
of several events could occur:
 The process could issue an I/O request and then be placed in an I/O queue.
 The process could create a new subprocess and wait for the subprocess's termination.
 The process could be removed forcibly from the CPU, as a result of an interrupt, and be put
back in the ready queue.

Schedulers: A process migrates among the various scheduling queues throughout its lifetime. The
operating system must select, for scheduling purposes, processes from these queues in some fashion.

The selection process is carried out by the appropriate scheduler. There are three different types of
schedulers:
The long-term scheduler, or job scheduler, selects processes from the pool of processes on
disk and loads them into memory for execution. The long-term scheduler executes much less frequently.
The long-term scheduler controls the degree of multiprogramming (the number of
processes in memory). If the degree of multiprogramming is stable, then the average rate of process
creation must be equal to the average departure rate of processes leaving the system. Thus, the long-
term scheduler may need to be invoked only when a process leaves the system.
In general, most processes can be described as either I/O bound or CPU bound. A process that
spends more of its time doing I/O is called an I/O-bound process. A process that spends more of its
time doing computations is called a CPU-bound process.
If all processes are I/O bound, the ready queue will almost always be empty. If all processes
are CPU bound, the I/O waiting queue will almost always be empty, devices will go unused, and
again the system will be unbalanced. The system with the best performance will have a combination
of CPU-bound and I/O-bound processes.
The long-term scheduler is also used to control the selection of a good process mix (a combination
of I/O-bound and CPU-bound processes).
The short-term scheduler, or CPU scheduler, selects from among the processes that are
ready to execute and allocates the CPU to one of them.
The key idea behind a medium-term scheduler is that sometimes it can be advantageous to
remove processes from memory and thus reduce the degree of multiprogramming. Later, the process
can be reintroduced into memory, and its execution can be continued where it left off. This scheme
is called swapping. The process is swapped out, and is later swapped in, by the medium-term
scheduler. Swapping may be necessary to improve the process mix. The following diagram
represents swapping process.

Context Switch: Switching the CPU to another process requires performing a state save of the
current process and a state restore of a different process. This task is known as a context switch.
When a context switch occurs, the kernel saves the context of the old process in its PCB and loads
the saved context of the new process scheduled to run.
Operations on Processes: The processes in most systems can execute concurrently, and they
may be created and deleted dynamically. The following are different types of operations that can be
performed on processes.
Process Creation: A process may create several new processes using fork() system call. The
creating process is called a parent process, and the new processes are called the children of that
process. Each of these new processes may in turn create other processes, forming a tree of
processes.
When a process creates a new process, two possibilities exist in terms of execution:
 The parent continues to execute concurrently with its children.
 The parent waits until some or all of its children have terminated.
There are also two possibilities in terms of the address space of the new process:
 The child process is a duplicate of the parent process (it has the same program and data as the
parent).
 The child process has a new program loaded into it.
The following C program illustrates creation of a new process using the fork() system call.

#include <sys/types.h>
#include <sys/wait.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main()
{
    pid_t pid;

    pid = fork();               /* fork a child process */
    if (pid < 0) {              /* error occurred */
        fprintf(stderr, "Fork Failed");
        exit(-1);
    }
    else if (pid == 0) {        /* child process */
        execlp("/bin/ls", "ls", NULL);
    }
    else {                      /* parent process: wait for the child to complete */
        wait(NULL);
        printf("Child Complete");
        exit(0);
    }
}

A new process is created by the fork() system call. The new process consists of a copy of the
address space of the original process. This mechanism allows the parent process to communicate
easily with its child process. Both processes (the parent and the child) continue execution at the
instruction after the fork(), with one difference: the return code from fork() is zero for the new
(child) process, whereas the (nonzero) process identifier of the child is returned to the parent.
Typically, the exec() system call is used after a fork() system call by one of the two processes
to replace the process's memory space with a new program. The exec() system call loads a binary
file into memory and starts its execution.

Process Termination: A process terminates when it finishes executing its final statement and asks
the operating system to delete it by using the exit() system call. At that point, the process may return
a status value (typically an integer) to its parent process (via the wait() system call). All the resources
of the process—including physical and virtual memory, open files, and I/O buffers—are deallocated
by the operating system.
A parent may terminate the execution of one of its children for a variety of reasons, such as
these:
 The child has exceeded its usage of some of the resources that it has been allocated. (To
determine whether this has occurred, the parent must have a mechanism to inspect the state
of its children.)
 The task assigned to the child is no longer required.
 The parent is exiting, and the operating system does not allow a child to continue if its parent
terminates.

Interprocess Communication: Processes executing concurrently in the operating system may
be either independent processes or cooperating processes. A process is independent if it cannot
affect or be affected by the other processes executing in the system. Any process that does not share
data with any other process is independent. A process is cooperating if it can affect or be affected
by the other processes executing in the system. Clearly, any process that shares data with other
processes is a cooperating process.
There are several reasons for providing an environment that allows process cooperation:
 Information sharing: Since several users may be interested in the same piece of
information, sharing provides an environment to allow concurrent access to such
information.
 Computation speedup: To execute a particular task faster, break it into subtasks,
each of which will be executing in parallel with the others.
 Modularity: To construct the system in a modular fashion, divide the system
functions into separate processes or threads.
 Convenience: Even an individual user may work on many tasks at the same time. For
instance, a user may be editing, printing, and compiling in parallel.
Cooperating processes require an interprocess communication (IPC) mechanism that will
allow them to exchange data and information. There are two fundamental models of interprocess
communication: (1) shared memory and (2) message passing. In the shared-memory model, a
region of memory that is shared by cooperating processes is established. Processes can then
exchange information by reading and writing data to the shared region. In the message-passing
model, communication takes place by means of messages exchanged between the cooperating
processes. The two communications models are contrasted in the following figure.

Communications models, (a) Message passing, (b) Shared memory.


Shared-Memory Systems: Interprocess communication using shared memory requires
communicating processes to establish a region of shared memory. Typically, a shared-memory
region resides in the address space of the process creating the shared-memory segment. Other
processes that wish to communicate using this shared-memory segment must attach it to their
address space.
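
As a minimal sketch of this idea, the POSIX shared-memory API can be used as shown below; the segment name "/demo_shm" and the 4 KB size are arbitrary choices for illustration (compile with -lrt on some systems).

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main()
{
    const int SIZE = 4096;
    /* create (or open) a named shared-memory object and size it */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0666);
    ftruncate(fd, SIZE);
    /* attach the segment to this process's address space */
    char *ptr = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    sprintf(ptr, "hello from process %d", getpid());
    /* any cooperating process that maps "/demo_shm" sees the same bytes */
    printf("%s\n", ptr);
    shm_unlink("/demo_shm");    /* remove the object when done */
    return 0;
}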
To illustrate the concept of cooperating processes, let's consider the producer-consumer
problem, which is a common paradigm for cooperating processes. A producer process produces
information that is consumed by a consumer process.
One solution to the producer-consumer problem uses shared memory. To allow producer and
consumer processes to run concurrently, we must have available a buffer of items that can be filled by
the producer and emptied by the consumer. This buffer will reside in a region of memory that is
shared by the producer and consumer processes. A producer can produce one item while the
consumer is consuming another item. The producer and consumer must be synchronized, so that the
consumer does not try to consume an item that has not yet been produced.
Two types of buffers can be used. The unbounded buffer places no practical limit on the
size of the buffer. The consumer may have to wait for new items, but the producer can always
produce new items. The bounded buffer assumes a fixed buffer size. In this case, the consumer
must wait if the buffer is empty, and the producer must wait if the buffer is full.
The following variables reside in a region of memory shared by the producer and consumer
processes:
#define BUFFER_SIZE 10
typedef struct
{
…..
}item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
The shared buffer is implemented as a circular array with two logical pointers: in and out.
The variable in points to the next free position in the buffer; out points to the first full position in the
buffer. The buffer is empty when in == out; the buffer is full when ((in + 1) % BUFFER_SIZE) ==
out.
The code for the producer and consumer processes is shown below.
The Producer process
item nextProduced;
while (true)
{
while (((in + 1) % BUFFER_SIZE) == out)
; /* do nothing */
buffer[in] = nextProduced;
in = (in + 1) % BUFFER_SIZE;
}

The Consumer process


item nextConsumed;
while (true)
{
while (in == out)
; //do nothing
nextConsumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
}
The producer process has a local variable nextProduced in which the new item to be
produced is stored. The consumer process has a local variable nextConsumed in which the item to be
consumed is stored.
Message Passing Systems: Message passing provides a mechanism to allow processes to
communicate and to synchronize their actions without sharing the same address space. A message-
passing facility provides at least two operations: send(message) and receive(message). Messages
sent by a process can be of either fixed or variable size.
If processes P and Q want to communicate, they must send messages to and receive
messages from each other; a communication link must exist between them. This link can be
implemented in a variety of ways.
 Direct or indirect communication
 Synchronous or asynchronous communication
 Automatic or explicit buffering
 Direct or indirect communication: Under symmetric direct communication, each process
that wants to communicate must explicitly name the recipient or sender of the
communication. In this scheme, the send() and receive() primitives are defined as:
 send(P, message)—Send a message to process P.
 receive (Q, message)—Receive a message from process Q.
A communication link in this scheme has the following properties:
 A link is established automatically between every pair of processes that want to
communicate.
 A link is associated with exactly two processes.
 Between each pair of processes, there exists exactly one link.
In asymmetric direct communication, only the sender names the recipient; the
recipient is not required to name the sender. In this scheme, the send() and receive()
primitives are defined as follows:
 send(P, message)—Send a message to process P.
 receive(id, message)—Receive a message from any process; the variable id is
set to the name of the process with which communication has taken place.
With indirect communication, the messages are sent to and received from
mailboxes, or ports. A mailbox can be viewed abstractly as an object into which messages
can be placed by processes and from which messages can be removed. Each mailbox has a
unique identification. In this scheme, a process can communicate with some other process via
a number of different mailboxes. Two processes can communicate only if the processes have
a shared mailbox, however. The send() and receive() primitives are defined as follows:
 send(A, message)—Send a message to mailbox A.
 receive(A, message)—Receive a message from mailbox A.
In this scheme, a communication link has the following properties:
 A link is established between a pair of processes only if both members of the pair
have a shared mailbox.
 A link may be associated with more than two processes.
 Between each pair of communicating processes, there may be a number of
different links, with each link corresponding to one mailbox.
A mailbox may be owned either by a process or by the operating system. If the
mailbox is owned by a process there can be no confusion about who should receive a
message sent to this mailbox. When a process that owns a mailbox terminates, the mailbox
disappears.
In contrast, a mailbox that is owned by the operating system has an existence of its
own. The operating system then must provide a mechanism that allows a process to do the
following:
 Create a new mailbox.
 Send and receive messages through the mailbox.
 Delete a mailbox.
 Synchronous or asynchronous communication: Communication between processes takes
place through calls to the send() and receive() primitives. There are different design options for
implementing each primitive. Message passing may be either blocking or nonblocking, also known
as synchronous and asynchronous.
 Blocking send. The sending process is blocked until the message is received by the
receiving process or by the mailbox.
 Nonblocking send. The sending process sends the message and resumes operation.
 Blocking receive. The receiver blocks until a message is available.
 Nonblocking receive. The receiver retrieves either a valid message or a null.

 Buffering: Whether communication is direct or indirect, messages exchanged by
communicating processes reside in a temporary queue. Basically, such queues can be
implemented in three ways:
 Zero capacity: The queue has a maximum length of zero; thus, the link cannot have any
messages waiting in it. In this case, the sender must block until the recipient receives the
message.
 Bounded capacity: The queue has finite length n; thus, at most n messages can reside in
it. If the queue is not full when a new message is sent, the message is placed in the queue,
and the sender can continue execution without waiting. If the link is full, the sender must
block until space is available in the queue.
 Unbounded capacity: The queue's length is potentially infinite; thus, any number of messages can
wait in it. The sender never blocks.
The zero-capacity case is sometimes referred to as a message system with no
buffering; the other cases are referred to as systems with automatic buffering.
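
A minimal sketch of message passing on UNIX can be built from a pipe, which gives a blocking receive (read() waits until a message arrives) and automatic bounded buffering in the kernel:

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main()
{
    int fd[2];
    char buf[64];
    pipe(fd);                       /* fd[0] = read end, fd[1] = write end */
    if (fork() == 0) {              /* child acts as the sender */
        close(fd[0]);
        const char *msg = "hello via message passing";
        write(fd[1], msg, strlen(msg) + 1);     /* send(message) */
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                   /* parent acts as the receiver */
    read(fd[0], buf, sizeof(buf));  /* blocking receive(message) */
    printf("received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}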
Thread: A thread is a flow of execution through the process code, with its own program counter,
system registers and stack. A thread is also called a lightweight process. Threads provide a way to
improve application performance through parallelism. The following figure shows the working of
single and multithreaded processes.
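
The following sketch uses the POSIX pthreads library to create a second thread of execution inside one process; both threads share the global data, while each has its own stack and registers (compile with -lpthread).

#include <pthread.h>
#include <stdio.h>

int shared = 0;                    /* global data, visible to both threads */

void *worker(void *arg)
{
    shared = 42;                   /* runs concurrently with main() */
    printf("worker thread running\n");
    return NULL;
}

int main()
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);   /* spawn a second thread */
    pthread_join(tid, NULL);                    /* wait for it to finish */
    printf("shared = %d\n", shared);            /* prints 42 */
    return 0;
}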

Difference between Process and Thread:
 A process is heavy weight; a thread is light weight, taking fewer resources than a process.
 Process switching needs interaction with the operating system; thread switching does not need
to interact with the operating system.
 If one process is blocked, then no other process can execute until the first process is unblocked;
while one thread is blocked and waiting, a second thread in the same task can run.
 In multiple processes, each process operates independently of the others; one thread can read,
write, or change another thread's data.

Types of Threads: Threads are implemented in following two ways:

 User Level Threads: User level threads are managed by a user level library. The thread
library contains code for creating and destroying threads, for passing messages and data
between threads, for scheduling thread execution and for saving and restoring thread

contexts. User level threads are typically fast. Creating threads, switching between threads
and synchronizing threads only needs a procedure call.

Advantages:

 Thread switching does not require Kernel mode privileges.


 User level thread can run on any operating system.

 Scheduling can be application specific in the user level thread.

 User level threads are fast to create and manage.

Disadvantages:

 In a typical operating system, most system calls are blocking.


 Multithreaded application cannot take advantage of multiprocessing.

 Kernel Level Threads: In this case, thread management is done by the kernel. There is no
thread management code in the application area. Kernel threads are supported directly by the
operating system. They are slower than user level threads due to the management overhead.

Advantages:

o Kernel can simultaneously schedule multiple threads from the same process on
multiple processors.
o If one thread in a process is blocked, the Kernel can schedule another thread of the
same process.

o Kernel routines themselves can be multithreaded.

Disadvantages:

o Kernel threads are generally slower to create and manage than the user threads.
o Transfer of control from one thread to another within same process requires a mode
switch to the Kernel.

Multi Thread programming Models: Some operating systems provide a combined user level
thread and kernel level thread facility. Solaris is a good example of this combined approach. In a
combined system, multiple threads within the same application can run in parallel on multiple
processors, and a blocking system call need not block the entire process. There are three types of
multithreading models:

 Many to Many Model


 Many to One Model

 One to One Model

Many to Many Model: In this model, many user level threads are multiplexed onto a smaller or
equal number of kernel threads. The number of kernel threads may be specific to either a particular
application or a particular machine. The following diagram shows the many to many model. In this
model, developers can create as many user threads as necessary, and the corresponding kernel
threads can run in parallel on a multiprocessor.

Many to One Model: The many to one model maps many user level threads to one kernel level
thread. Thread management is done in user space. When a thread makes a blocking system call, the
entire process will be blocked. Only one thread can access the kernel at a time, so multiple threads
are unable to run in parallel on multiprocessors. User level thread libraries implemented on
operating systems whose kernels do not support threads use the many to one model.

One to One Model: There is a one to one relationship of a user level thread to a kernel level
thread. This model provides more concurrency than the many to one model. It also allows another
thread to run when a thread makes a blocking system call. It supports multiple threads executing in
parallel on
multiprocessors. The disadvantage of this model is that creating a user thread requires creating the
corresponding kernel thread. OS/2, Windows NT, and Windows 2000 use the one to one model.

Process Scheduling Criteria: Many criteria have been suggested for comparing CPU
scheduling algorithms. The criteria include the following:
o CPU Utilization: In general, CPU utilization can range from 0 to 100 percent. In a real
system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a
heavily used system).
o Throughput: One measure of work is the number of processes that are completed per time
unit, called throughput. For long processes, this rate may be one process per hour; for short
transactions, it may be 10 processes per second.
o Turnaround time: The interval from the time of submission of a process to the time of
completion is the turnaround time. Turnaround time is the sum of the periods spent waiting
to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
o Waiting time: Waiting time is the sum of the periods spent waiting in the ready queue.
o Response time: Response time is the time it takes to start responding, not the time it takes to
output the response. The turnaround time is generally limited by the speed of the output
device.

Process Scheduling Algorithms: CPU scheduling deals with the problem of deciding which
of the processes in the ready queue is to be allocated the CPU. There are many different CPU
scheduling algorithms as shown below.
1. First Come First Serve (FCFS) Scheduling: In this scheduling, the process that requests
the CPU first is allocated the CPU first. The implementation of the FCFS policy is easily
managed with a FIFO queue. When a process enters the ready queue, its PCB is linked onto
the tail of the queue. When the CPU is free, it is allocated to the process at the head of the
queue. The running process is then removed from the queue.
Consider the following set of processes that arrive at time 0, with the length of the CPU
burst given in milliseconds:

Process   Burst Time
P1        24
P2        3
P3        3

If the processes arrive in the order P1, P2, P3, and are served in FCFS order, the result will be as
shown in the following Gantt chart:

| P1 (0-24) | P2 (24-27) | P3 (27-30) |

The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27
milliseconds for process P3. Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds. If the
processes arrive in the order P2, P3, P1, the results will be as shown in the following Gantt chart:

| P2 (0-3) | P3 (3-6) | P1 (6-30) |
The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds. This reduction is substantial.
Thus, the average waiting time under an FCFS policy is generally not minimal and may vary substantially
if the process's CPU burst times vary greatly.
The FCFS scheduling algorithm is nonpreemptive. Once the CPU has been allocated to a process,
that process keeps the CPU until it releases the CPU, either by terminating or by requesting I/O.
Advantages: Suitable for batch systems. It is simple to understand and code.
Disadvantages:
 Waiting time can be large if short requests wait behind the long ones.
 It is not suitable for time sharing systems, where it is important that each user
gets the CPU for an equal time interval.
 A proper mix of jobs is needed to achieve good results from FCFS scheduling.
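
The FCFS waiting times above can be checked with a short sketch; the burst values are taken from the example (24, 3, 3 milliseconds):

#include <stdio.h>

int main()
{
    int burst[] = {24, 3, 3};      /* CPU bursts of P1, P2, P3 */
    int n = 3, wait = 0, total = 0;
    for (int i = 0; i < n; i++) {  /* processes served in arrival order */
        printf("P%d waits %d ms\n", i + 1, wait);
        total += wait;
        wait += burst[i];          /* the next process also waits for this burst */
    }
    printf("average waiting time = %.2f ms\n", (double)total / n);  /* 17.00 */
    return 0;
}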
2. Shortest-Job-First (SJF) Scheduling(shortest-next-CPU-burst algorithm): A different
approach to CPU scheduling is the shortest-job-first (SJF) scheduling algorithm. This
algorithm associates with each process the length of the process's next CPU burst. When the
CPU is available, it is assigned to the process that has the smallest next CPU burst. If the next
CPU bursts of two processes are the same, FCFS scheduling is used to break the tie. Consider the
following set of processes, with the length of the CPU burst given in milliseconds:

Process   Burst Time
P1        6
P2        8
P3        7
P4        3

Using SJF scheduling, the processes would be scheduled according to the following Gantt chart:

| P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24) |
The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9
milliseconds for process P3, and 0 milliseconds for process P4. Thus, the average waiting time is (3 +
16 + 9 + 0)/4 = 7 milliseconds.
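
A short sketch of the shortest-next-CPU-burst selection, using the burst set from the example above:

#include <stdio.h>

int main()
{
    int burst[] = {6, 8, 7, 3};        /* CPU bursts of P1, P2, P3, P4 */
    int n = 4, done[4] = {0}, time = 0;
    double total = 0;
    for (int k = 0; k < n; k++) {
        int s = -1;
        for (int i = 0; i < n; i++)    /* pick the smallest remaining burst */
            if (!done[i] && (s < 0 || burst[i] < burst[s]))
                s = i;
        printf("P%d waits %d ms\n", s + 1, time);
        total += time;
        time += burst[s];
        done[s] = 1;
    }
    printf("average waiting time = %.2f ms\n", total / n);  /* 7.00 */
    return 0;
}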

3. Priority Scheduling: The SJF algorithm is a special case of the general priority scheduling
algorithm. A priority is associated with each process, and the CPU is allocated to the process
with the highest priority. Equal-priority processes are scheduled in FCFS order. An SJF
algorithm is simply a priority algorithm where the priority (p) is the inverse of the (predicted)
next CPU burst. The larger the CPU burst, the lower the priority. Consider the following set of
processes, assumed to have arrived at time 0, in the order P1, P2, ..., P5, with the length of the CPU
burst given in milliseconds:

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

Using priority scheduling, these processes would be scheduled according to the following Gantt
chart. The average waiting time is 8.2 milliseconds.

| P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |
Priority scheduling can be either preemptive or nonpreemptive. When a process
arrives at the ready queue, its priority is compared with the priority of the currently running
process. A preemptive priority scheduling algorithm will preempt the CPU if the priority of
the newly arrived process is higher than the priority of the currently running process. A
nonpreemptive priority scheduling algorithm will simply put the new process at the head of
the ready queue.
A major problem with priority scheduling algorithms is indefinite blocking, or
starvation. A process that is ready to run but waiting for the CPU can be considered
blocked. A solution to the problem of indefinite blockage of low-priority processes is aging.
Aging is a technique of gradually increasing the priority of processes that wait in the system
for a long time. For example, if priorities range from 127 (low) to 0 (high), we could increase
the priority of a waiting process by 1 every 15 minutes.
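
A sketch of nonpreemptive priority selection over the example set above (smaller number means higher priority):

#include <stdio.h>

int main()
{
    int burst[] = {10, 1, 2, 1, 5};    /* CPU bursts of P1..P5 */
    int prio[]  = {3, 1, 4, 5, 2};     /* priorities of P1..P5 */
    int n = 5, done[5] = {0}, time = 0;
    double total = 0;
    for (int k = 0; k < n; k++) {
        int s = -1;
        for (int i = 0; i < n; i++)    /* pick the highest-priority process */
            if (!done[i] && (s < 0 || prio[i] < prio[s]))
                s = i;
        printf("P%d (priority %d) waits %d ms\n", s + 1, prio[s], time);
        total += time;
        time += burst[s];
        done[s] = 1;
    }
    printf("average waiting time = %.1f ms\n", total / n);  /* 8.2 */
    return 0;
}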
4. Round Robin(RR) Scheduling: The round-robin (RR) scheduling algorithm is designed
especially for timesharing systems. It is similar to FCFS scheduling, but preemption is added
to switch between processes. A small unit of time, called a time quantum or time slice, is
defined. A time quantum is generally from 10 to 100 milliseconds. The ready queue is treated
as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to
each process for a time interval of up to 1 time quantum.
To implement RR scheduling, keep the ready queue as a FIFO queue of processes.
New processes are added to the tail of the ready queue. The CPU scheduler picks the first
process from the ready queue, sets a timer to interrupt after 1 time quantum, and dispatches
the process.
The average waiting time under the RR policy is often long. Consider the following set of
processes that arrive at time 0, with the length of the CPU burst given in milliseconds:

Process   Burst Time
P1        24
P2        3
P3        3
If we use a time quantum of 4 milliseconds, then process P1 gets the first 4
milliseconds. Since it requires another 20 milliseconds, it is preempted after the first time
quantum, and the CPU is given to the next process in the queue, process P2. Since process P2
does not need 4 milliseconds, it quits before its time quantum expires. The CPU is then given
to the next process, process P3. Once each process has received 1 time quantum, the CPU is
returned to process P1 for an additional time quantum. The resulting RR schedule is as shown
below.

| P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-14) | P1 (14-18) | P1 (18-22) | P1 (22-26) | P1 (26-30) |
The average waiting time is 17/3 = 5.66 milliseconds. In the RR scheduling
algorithm, no process is allocated the CPU for more than 1 time quantum in a row (unless it
is the only runnable process). If a process's CPU burst exceeds 1 time quantum, that process
is preempted and is put back in the ready queue. The RR scheduling algorithm is thus
preemptive.
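
A sketch simulating the round-robin schedule above (quantum 4 ms, bursts 24, 3, 3) and accumulating each process's waiting time:

#include <stdio.h>

int main()
{
    int remaining[] = {24, 3, 3};      /* CPU bursts of P1, P2, P3 */
    int wait[] = {0, 0, 0};
    int n = 3, q = 4, left = 3;
    while (left > 0) {
        for (int i = 0; i < n; i++) {  /* cycle through the ready queue */
            if (remaining[i] == 0) continue;
            int run = remaining[i] < q ? remaining[i] : q;
            for (int j = 0; j < n; j++)          /* every other unfinished */
                if (j != i && remaining[j] > 0)  /* process waits during */
                    wait[j] += run;              /* this time slice */
            remaining[i] -= run;
            if (remaining[i] == 0) left--;
        }
    }
    double total = 0;
    for (int i = 0; i < n; i++) {
        printf("P%d waits %d ms\n", i + 1, wait[i]);
        total += wait[i];
    }
    printf("average waiting time = %.2f ms\n", total / n);  /* 17/3 = 5.67 */
    return 0;
}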
5. Multilevel Queue Scheduling: A multilevel queue scheduling algorithm partitions the
ready queue into several separate queues as shown below. The processes are permanently
assigned to one queue, generally based on some property of the process, such as memory
size, process priority, or process type. Each queue has its own scheduling algorithm. In
addition, there must be scheduling among the queues, which is commonly implemented as
fixed-priority preemptive scheduling. For example, the foreground queue may have absolute
priority over the background queue. Let's look at an example of a multilevel queue
scheduling algorithm with five queues, listed below in order of priority:

o System processes
o Interactive processes
o Interactive editing processes
o Batch processes
o Student processes

Each queue has absolute priority over lower-priority queues. No process in the batch
queue could run unless the queues for system processes, interactive processes, and
interactive editing processes were all empty. If an interactive editing process entered the
ready queue while a batch process was running, the batch process would be preempted.
Another possibility is to time-slice among the queues. Here, each queue gets a certain
portion of the CPU time, which it can then schedule among its various processes. For
instance, in the foreground-background queue example, the foreground queue can be given
80 percent of the CPU time for RR scheduling among its processes, whereas the background
queue receives 20 percent of the CPU to give to its processes on an FCFS basis.

6. Multilevel Feedback-Queue Scheduling: The multilevel feedback-queue scheduling
algorithm allows a process to move between queues. The idea is to separate processes
according to the characteristics of their CPU bursts. If a process uses too much CPU time, it
will be moved to a lower-priority queue. In addition, a process that waits too long in a lower-
priority queue may be moved to a higher-priority queue. This form of aging prevents
starvation.

For example, consider a multilevel feedback-queue scheduler with three queues,
numbered from 0 to 2, as shown below. The scheduler first executes all processes in queue 0.
Only when queue 0 is empty will it execute processes in queue 1. Similarly, processes in
queue 2 will only be executed if queues 0 and 1 are empty. A process that arrives for queue 1
will preempt a process in queue 2. A process in queue 1 will in turn be preempted by a
process arriving for queue 0.

A process entering the ready queue is put in queue 0. A process in queue 0 is given a
time quantum of 8 milliseconds. If it does not finish within this time, it is moved to the tail of
queue 1. If queue 0 is empty, the process at the head of queue 1 is given a quantum of 16
milliseconds. If it does not complete, it is preempted and is put into queue 2. Processes in
queue 2 are run on an FCFS basis but are run only when queues 0 and 1 are empty.
This scheduling algorithm gives highest priority to any process with a CPU burst of 8
milliseconds or less. Such a process will quickly get the CPU, finish its CPU burst, and go
off to its next I/O burst. Processes that need more than 8 but less than 24 milliseconds are
also served quickly, although with lower priority than shorter processes. Long processes
automatically sink to queue 2 and are served in FCFS order with any CPU cycles left over
from queues 0 and 1.
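
The demotion rule just described can be sketched for a single process; the 20-millisecond burst below is a hypothetical value chosen to show one demotion from queue 0 to queue 1:

#include <stdio.h>

int main()
{
    int burst = 20;                     /* hypothetical CPU burst, in ms */
    int quantum[] = {8, 16};            /* quanta of queue 0 and queue 1 */
    int queue = 0;
    while (burst > 0) {
        if (queue < 2) {                /* queues 0 and 1 use a time quantum */
            int run = burst < quantum[queue] ? burst : quantum[queue];
            printf("queue %d: runs %d ms\n", queue, run);
            burst -= run;
            if (burst > 0) queue++;     /* unfinished: demote one level */
        } else {                        /* queue 2: FCFS, run to completion */
            printf("queue 2: runs %d ms (FCFS)\n", burst);
            burst = 0;
        }
    }
    return 0;
}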

FREQUENTLY ASKED QUESTIONS

1. What is a Process? Explain about various fields of the Process Control Block?


2. With a neat diagram, explain the various states of a process?

3. What is scheduler? Explain various types of schedulers and their roles with help of process state
diagram?
4. Describe the differences among short term, medium term, and long term scheduling?
5. Explain the following operations on processes: Process Creation and Process Termination
6. What are the advantages of Inter Process Communication? How does communication take place
in a Shared memory environment? Explain.
7. Write and explain various issues involved in message passing systems?
8. Define a Thread? Give the benefits of multithreading. Differentiate process and Thread?
9. Explain about different types of multithreading models?
10. Define thread. What are the differences between user level and kernel level thread?
11. What are the criteria for evaluating CPU scheduling algorithms? Why do we need them?
12. Explain Round Robin Scheduling algorithm with an example?
13. Distinguish between preemptive and non preemptive scheduling. Explain each type with an
example?
14. What are the parameters that can be used to evaluate algorithms? Also explain different
algorithmic evaluation methods with advantages and disadvantages?
