Operating Systems (R16 III B.Tech I Sem) Unit - II
A program is a passive entity, such as a file containing a list of instructions stored on disk
(often called an executable file), whereas a process is an active entity, with a program counter
specifying the next instruction to execute and a set of associated resources. A program becomes a
process when an executable file is loaded into memory. Two common techniques for loading
executable files are double-clicking an icon representing the executable file and entering the name of
the executable file on the command line (as in prog.exe or a.out).
Process States (or) Life Cycle of a Process: During execution, a process changes its state. The
state of a process is defined in part by the current activity of that process. Each process may be in
one of the following states:
New. The process is being created.
Running. Instructions are being executed.
Waiting. The process is waiting for some event to occur (such as an I/O completion or
reception of a signal).
Ready. The process is waiting to be assigned to a processor.
Terminated. The process has finished execution.
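The life cycle above can be sketched as a small C enum with a helper that checks the legal transitions. This is an illustration only, not an actual OS data structure; the names are hypothetical.

```c
#include <assert.h>

/* The five process states as a C enum. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

/* Return nonzero if (from -> to) is one of the legal arcs in the
 * process life cycle described above. */
int valid_transition(proc_state from, proc_state to) {
    switch (from) {
    case NEW:     return to == READY;                    /* admitted */
    case READY:   return to == RUNNING;                  /* dispatched by scheduler */
    case RUNNING: return to == READY                     /* interrupted / preempted */
                      || to == WAITING                   /* I/O or event wait */
                      || to == TERMINATED;               /* exit */
    case WAITING: return to == READY;                    /* I/O or event completed */
    default:      return 0;                              /* TERMINATED is final */
    }
}
```

Note, for instance, that a waiting process cannot be dispatched directly; it must first return to the ready state.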
Process Control Block: Each process is represented in the operating system by a process control
block (PCB)—also called a task control block. A PCB is shown in the following figure.
It contains many pieces of information associated with a specific process, including these:
Process state: The state may be new, ready, running, waiting, halted, and so on.
Program counter: The counter indicates the address of the next instruction to be executed
for this process.
CPU registers: The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and
general-purpose registers, plus any condition-code information.
CPU-scheduling information: This information includes a process priority, pointers to
scheduling queues, and any other scheduling parameters.
Memory-management information: This information may include the value of the base and
limit registers, the page tables, or the segment tables, depending on the memory system
used by the operating system.
Accounting information: This information includes the amount of CPU and real time used,
time limits, account numbers, job or process numbers, and so on.
I/O status information: This information includes the list of I/O devices allocated to the
process, a list of open files, and so on.
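As a rough sketch, the fields listed above might be collected into a C structure like the following. The names and sizes here are illustrative assumptions, not the layout of any real kernel (Linux's actual PCB, task_struct, is far larger).

```c
#include <assert.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

/* A minimal, hypothetical PCB mirroring the fields listed above. */
struct pcb {
    int           pid;              /* process number */
    proc_state    state;            /* process state */
    unsigned long program_counter;  /* address of next instruction */
    unsigned long registers[16];    /* saved CPU registers */
    int           priority;         /* CPU-scheduling information */
    unsigned long base, limit;      /* memory-management information */
    unsigned long cpu_time_used;    /* accounting information */
    int           open_files[8];    /* I/O status: open file descriptors */
    struct pcb   *next;             /* link to the next PCB in a queue */
};
```

The final next field is what lets the scheduler string PCBs together into the ready and device queues described in the next section.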
Process Scheduling: The process scheduler selects an available process (possibly from a set of
several available processes) for program execution on the CPU. For this purpose, the operating
system maintains the following three queues:
Job Queue: The job queue consists of all processes in the system.
Ready Queue: The processes that are residing in main memory and are ready and waiting to
execute are kept on a list called the ready queue. This queue is generally stored as a linked
list. A ready-queue header contains pointers to the first and final PCBs in the list. Each PCB
includes a pointer field that points to the next PCB in the ready queue.
Device Queue: The list of processes waiting for a particular I/O device is called a device
queue.
Each queue is maintained in the form of a linked list as shown below.
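The ready queue described above might be sketched in C as follows, with head and tail pointers and a next field in each PCB. The stripped-down pcb type is a stand-in for a full PCB, and the function names are illustrative.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for a full PCB: just a pid and the queue link. */
struct pcb { int pid; struct pcb *next; };

/* A queue header holds pointers to the first and final PCBs. */
struct queue { struct pcb *head, *tail; };

/* Link a PCB onto the tail of the queue (FIFO enqueue). */
void enqueue(struct queue *q, struct pcb *p) {
    p->next = NULL;
    if (q->tail) q->tail->next = p;   /* append after current tail */
    else         q->head = p;         /* queue was empty */
    q->tail = p;
}

/* Unlink and return the PCB at the head, or NULL if the queue is empty. */
struct pcb *dequeue(struct queue *q) {
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL; /* queue became empty */
    }
    return p;
}
```

The same pair of operations serves for a device queue; only the event that triggers the dequeue differs.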
Schedulers: A process migrates among the various scheduling queues throughout its lifetime. The
operating system must select, for scheduling purposes, processes from these queues in some fashion.
The selection process is carried out by the appropriate scheduler. There are three different types of
schedulers:
The long-term scheduler, or job scheduler, selects processes from the pool of processes on
disk and loads them into memory for execution. The long-term scheduler executes much less
frequently than the short-term scheduler.
The long-term scheduler controls the degree of multiprogramming (the number of
processes in memory). If the degree of multiprogramming is stable, then the average rate of process
creation must be equal to the average departure rate of processes leaving the system. Thus, the long-
term scheduler may need to be invoked only when a process leaves the system.
In general, most processes can be described as either I/O bound or CPU bound. A process that
spends most of its time performing I/O is called an I/O-bound process. A process that spends most of
its time doing computation is called a CPU-bound process.
If all processes are I/O bound, the ready queue will almost always be empty. If all processes
are CPU bound, the I/O waiting queue will almost always be empty, devices will go unused, and
again the system will be unbalanced. The system with the best performance will have a combination
of CPU-bound and I/O-bound processes.
The long-term scheduler is also used to control selection of good Process Mix(Combination
of I/O and CPU bound Process).
The short-term scheduler, or CPU scheduler, selects from among the processes that are
ready to execute and allocates the CPU to one of them.
The key idea behind a medium-term scheduler is that sometimes it can be advantageous to
remove processes from memory and thus reduce the degree of multiprogramming. Later, the process
can be reintroduced into memory, and its execution can be continued where it left off. This scheme
is called swapping. The process is swapped out, and is later swapped in, by the medium-term
scheduler. Swapping may be necessary to improve the process mix. The following diagram
represents swapping process.
Context Switch: Switching the CPU to another process requires performing a state save of the
current process and a state restore of a different process. This task is known as a context switch.
When a context switch occurs, the kernel saves the context of the old process in its PCB and loads
the saved context of the new process scheduled to run.
Operations on Processes: The processes in most systems can execute concurrently, and they
may be created and deleted dynamically. The following are different types of operations that can be
performed on processes.
Process Creation: A process may create several new processes using the fork() system call. The
creating process is called a parent process, and the new processes are called the children of that
process. Each of these new processes may in turn create other processes, forming a tree of
processes.
When a process creates a new process, two possibilities exist in terms of execution:
The parent continues to execute concurrently with its children.
The parent waits until some or all of its children have terminated.
There are also two possibilities in terms of the address space of the new process:
The child process is a duplicate of the parent process (it has the same program and data as the
parent).
The child process has a new program loaded into it.
The following C program illustrates creation of a new process using the fork() system call.
#include <sys/types.h>
#include <sys/wait.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main()
{
    pid_t pid;

    /* fork a child process */
    pid = fork();
    if (pid < 0) {
        /* error occurred */
        fprintf(stderr, "Fork Failed");
        exit(-1);
    }
    else if (pid == 0) {
        /* child process */
        execlp("/bin/ls", "ls", NULL);
    }
    else {
        /* parent process: parent will wait for the child to complete */
        wait(NULL);
        printf("Child Complete");
        exit(0);
    }
}
A new process is created by the fork() system call. The new process consists of a copy of the
address space of the original process. This mechanism allows the parent process to communicate
easily with its child process. Both processes (the parent and the child) continue execution at the
instruction after the fork(), with one difference: the return code for the fork() is zero for the new
(child) process, whereas the (nonzero) process identifier of the child is returned to the parent.
Typically, the exec() system call is used after a fork() system call by one of the two processes
to replace the process's memory space with a new program. The exec() system call loads a binary
file into memory and starts its execution.
Process Termination: A process terminates when it finishes executing its final statement and asks
the operating system to delete it by using the exit () system call. At that point, the process may return
a status value (typically an integer) to its parent process (via the wait() system call). All the resources
of the process—including physical and virtual memory, open files, and I/O buffers—are deallocated
by the operating system.
A parent may terminate the execution of one of its children for a variety of reasons, such as
these:
The child has exceeded its usage of some of the resources that it has been allocated. (To
determine whether this has occurred, the parent must have a mechanism to inspect the state
of its children.)
The task assigned to the child is no longer required.
The parent is exiting, and the operating system does not allow a child to continue if its parent
terminates.
Interprocess Communication: Processes executing concurrently in the operating system may
be either independent processes or cooperating processes. A process is independent if it cannot
affect or be affected by the other processes executing in the system. Any process that does not share
data with any other process is independent. A process is cooperating if it can affect or be affected
by the other processes executing in the system. Clearly, any process that shares data with other
processes is a cooperating process.
There are several reasons for providing an environment that allows process cooperation:
Information sharing: Since several users may be interested in the same piece of
information, sharing provides an environment to allow concurrent access to such
information.
Computation speedup: To execute a particular task faster, break it into subtasks,
each of which will execute in parallel with the others.
Modularity: To construct the system in a modular fashion, divide the system
functions into separate processes or threads.
Convenience: Even an individual user may work on many tasks at the same time. For
instance, a user may be editing, printing, and compiling in parallel.
Cooperating processes require an interprocess communication (IPC) mechanism that will
allow them to exchange data and information. There are two fundamental models of interprocess
communication: (1) shared memory and (2) message passing. In the shared-memory model, a
region of memory that is shared by cooperating processes is established. Processes can then
exchange information by reading and writing data to the shared region. In the message-passing
model, communication takes place by means of messages exchanged between the cooperating
processes. The two communications models are contrasted in the following figure.
Buffering: Whether communication is direct or indirect, messages exchanged by
communicating processes reside in a temporary queue. Basically, such queues can be
implemented in three ways:
Zero capacity: The queue has a maximum length of zero; thus, the link cannot have any
messages waiting in it. In this case, the sender must block until the recipient receives the
message.
Bounded capacity: The queue has finite length n; thus, at most n messages can reside in
it. If the queue is not full when a new message is sent, the message is placed in the queue,
and the sender can continue execution without waiting. If the link is full, the sender must
block until space is available in the queue.
Unbounded capacity: The queue's length is potentially infinite; thus, any number of messages can
wait in it. The sender never blocks.
The zero-capacity case is sometimes referred to as a message system with no
buffering; the other cases are referred to as systems with automatic buffering.
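A bounded-capacity queue can be sketched as a ring buffer. In this illustration, send_msg() and recv_msg() (hypothetical names) simply fail when the queue is full or empty; a real kernel would block the calling process instead, as described above.

```c
#include <assert.h>

#define CAP 4   /* bounded capacity: at most CAP messages can reside in it */

/* Ring buffer holding up to CAP integer messages. */
struct msgq { int buf[CAP]; int head, count; };

/* Place a message in the queue; fail if full (sender would have to block). */
int send_msg(struct msgq *q, int msg) {
    if (q->count == CAP) return -1;
    q->buf[(q->head + q->count) % CAP] = msg;
    q->count++;
    return 0;
}

/* Remove the oldest message; fail if empty (receiver would have to block). */
int recv_msg(struct msgq *q, int *msg) {
    if (q->count == 0) return -1;
    *msg = q->buf[q->head];
    q->head = (q->head + 1) % CAP;
    q->count--;
    return 0;
}
```

Setting CAP to zero would model the no-buffering (rendezvous) case, where every send must wait for a matching receive.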
Thread: A thread is a flow of execution through the process code, with its own program counter,
system registers and stack. A thread is also called a lightweight process. Threads provide a way to
improve application performance through parallelism. The following figure shows the working of
single and multithreaded processes.
User Level Threads: User level threads are managed by a user level library. The thread
library contains code for creating and destroying threads, for passing message and data
between threads, for scheduling thread execution and for saving and restoring thread
contexts. User level threads are typically fast: creating, switching between, and synchronizing
threads require only a procedure call.
Advantages:
o Thread switching does not require Kernel mode privileges, so it is fast.
o User level threads can run on any operating system, even one whose Kernel does not
support threads.
o Thread scheduling can be application specific.
Disadvantages:
o If one user level thread makes a blocking system call, the entire process is blocked.
o User level threads cannot take advantage of multiprocessors, because the Kernel is
unaware of them.
Kernel Level Threads: In this case, thread management is done by the Kernel. There is no
thread management code in the application area. Kernel threads are supported directly by the
operating system. They are slower than user level threads due to the management overhead.
Advantages:
o Kernel can simultaneously schedule multiple threads from the same process on
multiple processors.
o If one thread in a process is blocked, the Kernel can schedule another thread of the
same process.
Disadvantages:
o Kernel threads are generally slower to create and manage than the user threads.
o Transfer of control from one thread to another within same process requires a mode
switch to the Kernel.
Multi Thread Programming Models: Some operating systems provide a combined user level
thread and Kernel level thread facility. Solaris is a good example of this combined approach. In a
combined system, multiple threads within the same application can run in parallel on multiple
processors, and a blocking system call need not block the entire process. There are three
multithreading models:
Many to Many Model: In this model, many user level threads are multiplexed to a smaller or equal
number of Kernel threads. The number of Kernel threads may be specific to either a particular
application or a particular machine. The following diagram shows the many to many model. In this
model, developers can create as many user threads as necessary, and the corresponding Kernel
threads can run in parallel on a multiprocessor.
Many to One Model: The many to one model maps many user level threads to one Kernel level
thread. Thread management is done in user space. When a thread makes a blocking system call, the
entire process is blocked. Because only one thread can access the Kernel at a time, multiple threads
are unable to run in parallel on multiprocessors. Thread libraries implemented in user space on
operating systems whose Kernel does not support threads use the many to one model.
One to One Model: This model maps each user level thread to a separate kernel level thread. It
provides more concurrency than the many to one model: it allows another thread to run when a
thread makes a blocking system call, and it supports multiple threads executing in parallel on
multiprocessors. The disadvantage of this model is that creating a user thread requires creating the
corresponding Kernel thread. OS/2, Windows NT, and Windows 2000 use the one to one model.
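On Linux, the pthread library follows the one to one model: each pthread_create() call produces a kernel-scheduled thread. A minimal sketch (threaded_sum is an illustrative name):

```c
#include <pthread.h>

static int  data[] = {1, 2, 3, 4, 5};
static long total;

/* Worker thread: sums the array into the shared variable. */
static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 5; i++) total += data[i];
    return NULL;
}

/* Create one worker thread (one user thread -> one kernel thread),
 * wait for it to terminate, and return its result. */
long threaded_sum(void) {
    pthread_t tid;
    total = 0;
    pthread_create(&tid, NULL, worker, NULL);
    pthread_join(tid, NULL);   /* block until the worker finishes */
    return total;
}
```

Because the worker is a kernel thread, the kernel could schedule it on a different processor from the main thread, which is exactly the concurrency benefit the one to one model provides.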
Process Scheduling Criteria: Many criteria have been suggested for comparing CPU
scheduling algorithms. The criteria include the following:
o CPU Utilization: In general, CPU utilization can range from 0 to 100 percent. In a real
system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a
heavily used system).
o Throughput: One measure of work is the number of processes that are completed per time
unit, called throughput. For long processes, this rate may be one process per hour; for short
transactions, it may be 10 processes per second.
o Turnaround time: The interval from the time of submission of a process to the time of
completion is the turnaround time. Turnaround time is the sum of the periods spent waiting
to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
o Waiting time: Waiting time is the sum of the periods spent waiting in the ready queue.
o Response time: Response time is the time it takes to start responding, not the time it takes
to output the response. The turnaround time is generally limited by the speed of the output
device.
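For a single process, turnaround and waiting time reduce to simple arithmetic. The helper names below are hypothetical, and the waiting-time formula assumes the process does no I/O between submission and completion:

```c
#include <assert.h>

/* Turnaround time: interval from submission to completion. */
int turnaround_time(int arrival, int completion) {
    return completion - arrival;
}

/* Waiting time: turnaround minus the CPU burst actually consumed
 * (valid only if the process does no I/O while in the system). */
int waiting_time(int arrival, int burst, int completion) {
    return turnaround_time(arrival, completion) - burst;
}
```

For example, a process submitted at time 0 with a 3 ms burst that completes at time 27 has a turnaround time of 27 ms, of which 24 ms was spent waiting in the ready queue.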
Process Scheduling Algorithms: CPU scheduling deals with the problem of deciding which
of the processes in the ready queue is to be allocated the CPU. There are many different CPU
scheduling algorithms as shown below.
1. First Come First Serve (FCFS) Scheduling: In this scheduling, the process that requests
the CPU first is allocated the CPU first. The implementation of the FCFS policy is easily
managed with a FIFO queue. When a process enters the ready queue, its PCB is linked onto
the tail of the queue. When the CPU is free, it is allocated to the process at the head of the
queue. The running process is then removed from the queue.
Consider the following set of processes that arrive at time 0, with the length of the CPU
burst given in milliseconds:
Process Burst Time
P1 24
P2 3
P3 3
If the processes arrive in the order P1, P2, P3, and are served in FCFS order, the result will be as
shown in the following Gantt chart:
The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27
milliseconds for process P3. Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds. If the
processes arrive in the order P2, P3, P1, the results will be as shown in the following Gantt chart:
The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds. This reduction is substantial.
Thus, the average waiting time under an FCFS policy is generally not minimal and may vary substantially
if the process's CPU burst times vary greatly.
The FCFS scheduling algorithm is nonpreemptive. Once the CPU has been allocated to a process,
that process keeps the CPU until it releases the CPU, either by terminating or by requesting I/O.
Advantages: Suitable for batch systems. It is simple to understand and code.
Disadvantages:
Waiting time can be large if short requests wait behind the long ones.
It is not suitable for time sharing systems, where it is important that each user
gets the CPU for an equal time interval.
A proper mix of jobs is needed to achieve good results from FCFS scheduling.
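The FCFS waiting-time calculation above can be sketched as a short C function, assuming all processes arrive at time 0 (fcfs_avg_wait is an illustrative name):

```c
#include <assert.h>

/* Under FCFS with all arrivals at time 0, each process waits for the
 * total burst time of every process ahead of it in the FIFO queue. */
double fcfs_avg_wait(const int burst[], int n) {
    int elapsed = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;   /* process i waits for everything before it */
        elapsed += burst[i];     /* then runs to completion */
    }
    return (double)total_wait / n;
}
```

With bursts {24, 3, 3} this reproduces the 17 ms average computed above, and with the order {3, 3, 24} it reproduces the 3 ms average, showing how sensitive FCFS is to arrival order.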
2. Shortest-Job-First (SJF) Scheduling(shortest-next-CPU-burst algorithm): A different
approach to CPU scheduling is the shortest-job-first (SJF) scheduling algorithm. This
algorithm associates with each process the length of the process's next CPU burst. When the
CPU is available, it is assigned to the process that has the smallest next CPU burst. If the next
CPU bursts of two processes are the same, FCFS scheduling is used to break the tie. Consider the
following set of processes, with the length of the CPU burst given in milliseconds:
Process Burst Time
P1 6
P2 8
P3 7
P4 3
Using SJF scheduling, the waiting time is 3 milliseconds for process P1, 16 milliseconds for
process P2, 9 milliseconds for process P3, and 0 milliseconds for process P4. Thus, the average
waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds.
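When every process is available at time 0, SJF is simply FCFS applied to the bursts in ascending order, which the following sketch illustrates (sjf_avg_wait is a hypothetical name; it assumes at most 16 processes):

```c
#include <assert.h>
#include <stdlib.h>

/* Comparison function for qsort: ascending integer order. */
static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

/* SJF with all arrivals at time 0: sort bursts ascending, then
 * accumulate waiting times exactly as in FCFS. */
double sjf_avg_wait(const int burst[], int n) {
    int sorted[16];                     /* sketch: assumes n <= 16 */
    int elapsed = 0, total_wait = 0;
    for (int i = 0; i < n; i++) sorted[i] = burst[i];
    qsort(sorted, n, sizeof(int), cmp_int);   /* shortest burst first */
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;
        elapsed += sorted[i];
    }
    return (double)total_wait / n;
}
```

With the bursts {6, 8, 7, 3} from the example, the run order is P4, P1, P3, P2 and the function returns the 7 ms average derived above.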
3. Priority Scheduling: The SJF algorithm is a special case of the general priority scheduling
algorithm. A priority is associated with each process, and the CPU is allocated to the process
with the highest priority. Equal-priority processes are scheduled in FCFS order. An SJF
algorithm is simply a priority algorithm where the priority (p) is the inverse of the (predicted)
next CPU burst. The larger the CPU burst, the lower the priority. Consider the following set of
processes, assumed to have arrived at time 0, in the order P1, P2, ..., P5, with the length of the CPU
burst given in milliseconds:
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Using priority scheduling, the processes are scheduled in the order P2, P5, P1, P3, P4. The
average waiting time is 8.2 milliseconds.
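The priority example above can be reproduced with a short sketch that repeatedly picks the highest-priority unfinished process (a lower number means a higher priority, as in the example; priority_avg_wait is an illustrative name):

```c
#include <assert.h>

/* Nonpreemptive priority scheduling with all arrivals at time 0:
 * repeatedly run the unfinished process with the smallest priority
 * number, accumulating waiting times. */
double priority_avg_wait(const int burst[], const int prio[], int n) {
    int done[16] = {0};                 /* sketch: assumes n <= 16 */
    int elapsed = 0, total_wait = 0;
    for (int k = 0; k < n; k++) {
        int best = -1;
        for (int i = 0; i < n; i++)     /* select highest-priority unfinished */
            if (!done[i] && (best < 0 || prio[i] < prio[best]))
                best = i;
        done[best] = 1;
        total_wait += elapsed;          /* it waited this long in the queue */
        elapsed += burst[best];
    }
    return (double)total_wait / n;
}
```

Feeding in the bursts {10, 1, 2, 1, 5} and priorities {3, 1, 4, 5, 2} for P1 through P5 gives waiting times of 6, 0, 16, 18, and 1 ms, whose average is the 8.2 ms stated above.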
returned to process P1 for an additional time quantum. The resulting RR schedule is as shown
below.
Multilevel Queue Scheduling: A multilevel queue scheduling algorithm partitions the ready
queue into several separate queues; for example:
o System processes
o Interactive processes
o Interactive editing processes
o Batch processes
o Student processes
Each queue has absolute priority over lower-priority queues. No process in the batch
queue could run unless the queues for system processes, interactive processes, and
interactive editing processes were all empty. If an interactive editing process entered the
ready queue while a batch process was running, the batch process would be preempted.
Another possibility is to time-slice among the queues. Here, each queue gets a certain
portion of the CPU time, which it can then schedule among its various processes. For
instance, in the foreground-background queue example, the foreground queue can be given
80 percent of the CPU time for RR scheduling among its processes, whereas the background
queue receives 20 percent of the CPU to give to its processes on an FCFS basis.
Multilevel Feedback Queue Scheduling: This algorithm allows a process to move between
queues. For example, consider a multilevel feedback queue scheduler with three queues,
numbered 0, 1, and 2. A process entering the ready queue is put in queue 0. A process in
queue 0 is given a time quantum of 8 milliseconds. If it does not finish within this time, it is
moved to the tail of
queue 1. If queue 0 is empty, the process at the head of queue 1 is given a quantum of 16
milliseconds. If it does not complete, it is preempted and is put into queue 2. Processes in
queue 2 are run on an FCFS basis but are run only when queues 0 and 1 are empty.
This scheduling algorithm gives highest priority to any process with a CPU burst of 8
milliseconds or less. Such a process will quickly get the CPU, finish its CPU burst, and go
off to its next I/O burst. Processes that need more than 8 but less than 24 milliseconds are
also served quickly, although with lower priority than shorter processes. Long processes
automatically sink to queue 2 and are served in FCFS order with any CPU cycles left over
from queues 0 and 1.
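The queue-sinking rule described above can be summarized in a tiny sketch: given a process's total CPU burst, mlfq_final_queue (a hypothetical name) returns the queue in which the process completes, assuming the 8 ms and 16 ms quanta and no competing processes:

```c
#include <assert.h>

/* With quanta of 8 ms (queue 0) and 16 ms (queue 1), a process's total
 * CPU burst determines where it finally finishes:
 *   <= 8 ms  -> completes in queue 0
 *   <= 24 ms -> uses queue 0's quantum, then finishes in queue 1
 *   longer   -> sinks to the FCFS queue 2 */
int mlfq_final_queue(int burst_ms) {
    if (burst_ms <= 8)  return 0;
    if (burst_ms <= 24) return 1;
    return 2;
}
```

This captures the feedback idea: the scheduler needs no advance knowledge of burst lengths; long processes demote themselves simply by exhausting their quanta.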
Review Questions:
3. What is a scheduler? Explain the various types of schedulers and their roles with the help of a
process state diagram.
4. Describe the differences among short term, medium term, and long term scheduling?
5. Explain the following operations on processes: Process Creation and Process Termination
6. What are the advantages of Inter Process Communication? How does communication take place in a
shared memory environment? Explain.
7. Write and explain various issues involved in message passing systems?
8. Define a Thread? Give the benefits of multithreading. Differentiate process and Thread?
9. Explain about different types of multithreading models?
10. Define thread. What are the differences between user level and kernel level thread?
11. What are the criteria for evaluating the CPU scheduling algorithms? Why do we need them?
12. Explain Round Robin Scheduling algorithm with an example?
13. Distinguish between preemptive and non-preemptive scheduling. Explain each type with an
example.
14. What are the parameters that can be used to evaluate algorithms? Also explain different
algorithmic evaluation methods with advantages and disadvantages?