OS UNIT-II MATERIAL
UNIT-II
Syllabus: Processes: Process Concept, Process scheduling, Operations on processes, Inter-process
communication. Threads and Concurrency: Multithreading models, Thread libraries, Threading issues.
CPU Scheduling: Basic concepts, Scheduling criteria, Scheduling algorithms, Multiple processor scheduling.
2.1. PROCESS:
A process can be thought of as a program in execution. A process will need certain resources — such
as CPU time, memory, files, and I/O devices— to accomplish its task. These resources are allocated to
the process either when it is created or while it is executing. A process is the unit of work in most
systems. Systems consist of a collection of processes: Operating-system processes execute system
code, and user processes execute user code. All these processes may execute concurrently.
2.1.1. PROCESS CONCEPT:
Process – a program in execution; the execution of a process must progress in a sequential fashion. There is no parallel execution of the instructions of a single process.
A process includes:
o The program code, also called the text section
o Current activity, including the program counter and processor registers
o Stack containing temporary data (function parameters, return addresses, local variables)
o Data section containing global variables
o Heap containing memory dynamically allocated during run time
A program is a passive entity stored on disk (an executable file); a process is an active entity.
o A program becomes a process when an executable file is loaded into memory.
Execution of a program is started via GUI mouse clicks, command-line entry of its name, etc.
One program can be several processes:
o Consider multiple users executing the same program, such as a compiler or a text editor.
2.1.2. PROCESS STATE:
As a process executes, it changes state. The state of a process is defined in part by the current activity
of that process. Each process may be in one of the following states:
new: The process is being created
running: Instructions are being executed
waiting: The process is waiting for some event to occur
ready: The process is waiting to be assigned to a processor
terminated: The process has finished execution
Diagram of Process State
2.1.3. PROCESS CONTROL BLOCK (PCB):
• A process control block (PCB) is a data structure used by
computer operating systems to store all the information about
a process. It is also known as a process descriptor.
• Information associated with each process (also called task
control block)
• Process state – running, waiting, etc.
• Program counter – address of the next instruction to execute
• CPU registers – contents of all process-centric registers
• CPU-scheduling information – priorities, scheduling-queue pointers
• Memory-management information – memory allocated to the process
• Accounting information – CPU time used, clock time elapsed since start, time limits
• I/O status information – I/O devices allocated to the process, list of open files
2.1.4. CPU SWITCH FROM PROCESS TO PROCESS:
2.1.5. Threads:
A thread is a lightweight process that can be managed independently by a
scheduler. It improves the application performance using parallelism. A thread
shares information like data segment, code segment, files etc. with its peer
threads while it contains its own registers, stack, counter etc.
A thread is a basic unit of CPU utilization, consisting of a program counter, a stack, a set of registers, and a thread ID. Traditional (heavyweight) processes have a single thread of control: there is one program counter, and one sequence of instructions that can be carried out at any given time. As shown in the figure below, multi-threaded applications have multiple threads within a single process, each having its own program counter, stack, and set of registers, but sharing common code, data, and certain structures such as open files.
Processes executing concurrently in the operating system may be either independent processes or
cooperating processes.
A process is independent if it cannot affect or be affected by the other processes executing in the
system. Any process that does not share data with any other process is independent.
A process is cooperating if it can affect or be affected by the other processes executing in the system.
Clearly, any process that shares data with other processes is a cooperating process.
There are several reasons for providing an environment that allows process cooperation:
Information sharing. Since several users may be interested in the same piece of information (for
instance, a shared file), we must provide an environment to allow concurrent access to such
information.
Computation speedup. If we want a particular task to run faster, we must break it into subtasks,
each of which will be executing in parallel with the others. Notice that such a speedup can be
achieved only if the computer has multiple processing cores.
Modularity. We may want to construct the system in a modular fashion, dividing the system
functions into separate processes or threads
Convenience. Even an individual user may work on many tasks at the same time. For instance, a
user may be editing, printing, and compiling in parallel.
2.4.1. DIFFERENT MODELS OF INTERPROCESS COMMUNICATION:
a) Shared Memory Model
b) Message Passing Model
a) SHARED MEMORY MODEL:
Shared memory is memory that can be simultaneously accessed by multiple processes; this allows the processes to communicate with each other. All POSIX systems, as well as Windows operating systems, support shared memory.
Advantages of Shared Memory Model
Memory communication is faster on the shared memory model as compared to the message
passing model on the same machine.
Disadvantages of Shared Memory Model
All the processes that use the shared memory model need to make sure that they are not writing
to the same memory location.
Shared memory model may create problems such as synchronization and memory protection that
need to be addressed.
b) MESSAGE PASSING MODEL:
Multiple processes can read and write data to the message queue without being connected to each
other. Messages are stored on the queue until their recipient retrieves them. Message queues are
quite useful for Interprocess communication and are used by most operating systems.
Advantage of Message Passing Model
The message passing model is much easier to implement than the shared memory model.
Disadvantage of Message Passing Model
The message passing model has slower communication than the shared memory model because
the connection setup takes time.
// shared_buff[buff_max] is an array in shared memory that holds
// the items; it is accessed by both processes.
// Two variables keep track of the indexes of the items produced
// and consumed: free_index points to the next free slot, and
// full_index points to the first full slot.
int free_index = 0;
int full_index = 0;
Producer Process Code
item nextProduced;
while (1) {
    // busy-wait while the buffer is full
    while (((free_index + 1) % buff_max) == full_index)
        ; // do nothing
    shared_buff[free_index] = nextProduced;
    free_index = (free_index + 1) % buff_max;
}
Consumer Process Code
item nextConsumed;
while (1) {
    // busy-wait while the buffer is empty
    while (free_index == full_index)
        ; // do nothing
    nextConsumed = shared_buff[full_index];
    full_index = (full_index + 1) % buff_max;
}
2.6. MULTI-THREADING:
2.6.1. Types of Threads:
Threads are implemented in the following two ways −
User Level Threads − user-managed threads.
Kernel Level Threads − operating-system-managed threads acting on the kernel, the operating system core.
Kernel Level Threads: Kernel-level threads are handled by the operating system directly, and thread management is done by the kernel. The context information for the process as well as for the process's threads is all managed by the kernel. Because of this, kernel-level threads are slower to create and manage than user-level threads.
User Level Threads: User-level threads are implemented by user-level libraries, and the kernel is not aware of the existence of these threads; it handles them as if they were single-threaded processes. User-level threads are small and much faster than kernel-level threads. They are represented by a program counter (PC), stack, registers, and a small process control block. Also, there is no kernel involvement in synchronization for user-level threads.
2.6.2. Difference between User-Level & Kernel-Level Thread:
S.No  User-Level Threads                                        Kernel-Level Threads
1     User-level threads are faster to create and manage.       Kernel-level threads are slower to create and manage.
2     A user-level thread is generic and can run on any         A kernel-level thread is specific to the
      operating system.                                         operating system.
| P1 (0-24) | P2 (24-27) | P3 (27-30) |
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
The turnaround time for P1 = 24
The turnaround time for P2 = 27
The turnaround time for P3 = 30
The average turnaround time = (24 + 27 + 30)/3 = 27 milliseconds.
A Gantt chart (named after Henry Gantt) is a type of bar chart that is used to visually display the schedule of a project.
Convoy Effect in FCFS:
FCFS may suffer from the convoy effect if the burst time of the first job is the highest among all.
As in real life, if a convoy is passing along a road, then other people may be blocked until it passes completely. The same situation can be observed in an operating system: if processes with high burst times are at the front of the ready queue, then processes with lower burst times get stuck behind them and may have to wait a very long time before they get the CPU. This is called the convoy effect.
Convoy effect - short process behind long process
Advantages of FCFS:
• Easy to implement
• First come, first serve method
Disadvantages of FCFS:
• FCFS suffers from Convoy effect.
• The average waiting time is much higher than the other algorithms.
• FCFS is very simple and easy to implement, but it is not very efficient.
2.12.2. SHORTEST-JOB-FIRST (SJF) SCHEDULING:
This algorithm associates with each process the length of the process’s next CPU burst. When the CPU
is available, it is assigned to the process that has the smallest next CPU burst. This is also known as
shortest-next-CPU-burst algorithm.
Characteristics of SJF:
Shortest Job first has the advantage of having a minimum average waiting time among all
operating system scheduling algorithms.
Each task is associated with the length of its next CPU burst, i.e., the unit of time it needs to complete.
It may cause starvation if shorter processes keep coming.
Advantages of Shortest Job first:
• As SJF reduces the average waiting time, it is better than the first-come, first-served
scheduling algorithm.
• SJF is generally used for long-term scheduling.
Disadvantages of SJF:
• One of the demerits of SJF is starvation.
• Many times, it is complicated to predict the length of the upcoming CPU request.
| P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24) |
Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
2.12.3. SHORTEST-REMAINING-TIME-FIRST:
The preemptive version of Shortest Job First (SJF) scheduling is known as Shortest Remaining Time First (SRTF). Under the SRTF algorithm, the process with the smallest amount of time remaining until completion is selected to execute next; in other words, processes are scheduled according to their shortest remaining time.
However, the SRTF algorithm involves more overhead than SJF scheduling, because in SRTF the OS must frequently monitor the CPU time of the jobs in the ready queue and perform context switching. In SRTF, the execution of any process can be stopped after any amount of time. On the arrival of every process, the short-term scheduler schedules, from the list of available and running processes, the one with the least remaining burst time. Once all the processes are in the ready queue, no further preemption is done, and the algorithm works the same as SJF scheduling. When a process is removed from execution and the next process is scheduled, the context of the removed process is saved in its Process Control Block; the PCB is accessed again on the next execution of that process.
Advantages of SRTF
The main advantage of the SRTF algorithm is that it processes jobs faster than the SJF
algorithm, provided its overhead is not counted.
Disadvantages of SRTF
In SRTF, context switching is performed many more times than in SJF, consuming more of the
CPU's valuable processing time. This consumed time adds to the overall processing time, which
diminishes the algorithm's advantage of fast processing.
Explanation
• At time 0, only one process, P1, has arrived, so P1 executes for 1 time unit.
• At time 1, process P2 arrives. Now P1 needs 6 more units, while P2 needs only 3 units. So P2 preempts P1 and executes.
• At time 3, process P3 arrives with a burst time of 4 units, which is more than the remaining time of P2 (1 unit), so P2 continues its execution.
• After the completion of P2, P3 needs 4 units for completion while P1 needs 6 units.
• So the algorithm picks P3 over P1, because the remaining time of P3 is less than that of P1.
• P3 completes at time unit 8, and no new processes have arrived.
• So P1 is sent for execution again, and it completes at the 14th unit.
• With the arrival times and burst times of the three processes P1, P2, and P3 given above, let us calculate the completion time, turnaround time, and waiting time.
• As Arrival Time and Burst time for three processes P1, P2, P3 are given in the above diagram.
Let us calculate Turnaround time, completion time, and waiting time.
Process  Arrival Time  Burst Time  Completion Time  Turnaround Time =       Waiting Time =
                                                    Completion - Arrival    Turnaround - Burst
P1       0             7           14               14 - 0 = 14             14 - 7 = 7
P2       1             3           4                4 - 1 = 3               3 - 3 = 0
P3       3             4           8                8 - 3 = 5               5 - 4 = 1
Average waiting time = (waiting time of all processes) / (number of processes)
Average waiting time = (7 + 0 + 1)/3 = 8/3 ≈ 2.67 ms
2.12.4. PRIORITY SCHEDULING:
The SJF algorithm is a special case of the general priority-scheduling algorithm. A priority is associated
with each process, and the CPU is allocated to the process with the highest priority. Equal-priority
processes are scheduled in FCFS order. SJF is priority scheduling where priority is the inverse of
predicted next CPU burst time. A major problem with priority scheduling algorithms is indefinite blocking, or starvation. A process that is ready to run but waiting for the CPU can be considered blocked, and a priority scheduling algorithm can leave some low-priority processes waiting indefinitely. A solution to the problem of indefinite blockage of low-priority processes is aging: gradually increasing the priority of processes that wait in the system for a long time.
Example
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Priority scheduling Gantt chart (a smaller priority number indicates higher priority)
| P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |
Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2 milliseconds
Explanation
First of all, suppose that queues 1 and 2 follow round robin with
time quantum 8 and 16 respectively and queue 3 follows FCFS.
One of the implementations of Multilevel Feedback Queue
Scheduling is as follows:
1. When any process starts executing, it first enters queue 1.
2. In queue 1, the process executes for 8 units. If it completes within these 8 units, or gives up the CPU for an I/O operation within these 8 units, then its priority does not change; if it later re-enters the ready queue, it again starts its execution in queue 1.
3. If a process in queue 1 does not complete in 8 units, its priority is reduced and it is shifted to queue 2.
4. Points 2 and 3 also hold for processes in queue 2, but with a time quantum of 16 units. In general, if a process does not complete within its time quantum, it is shifted to the next lower-priority queue.
5. In the last queue, all processes are scheduled in an FCFS manner.
6. It is important to note that a process in a lower-priority queue can execute only when all the higher-priority queues are empty.
7. Any running process in a lower-priority queue can be interrupted by a process arriving in a higher-priority queue.
Advantages of MFQS
• This is a flexible scheduling algorithm.
• It allows different processes to move between different queues.
• A process that waits too long in a lower-priority queue may be moved to a higher-priority queue, which helps prevent starvation.
Disadvantages of MFQS
• This algorithm is complex.
• As processes move between different queues, more CPU overhead is produced.
• In order to select the best scheduler this algorithm requires some other means to select the values