PROCESSES AND THREADS
● Threads
● Scheduling
● Synchronization
● Deadlocks
PROCESS
Highlights
● Motivation
● Process Operations
● Process Hierarchies
● Process States
● Implementation of a Process
A program does nothing unless its instructions are executed by a CPU. A program in
execution is called a process. In order to accomplish its task, a process needs computer
resources.
More than one process may exist in the system, and several processes may require the
same resource at the same time. Therefore, the operating system has to manage all the
processes and resources in a convenient and efficient way.
Some resources may need to be used by only one process at a time to maintain
consistency; otherwise the system can become inconsistent and deadlock may occur.
The operating system is responsible for the following activities in connection with process
management. Each process is represented in the OS by a process control block (PCB),
which stores attributes of the process such as the following.
1. Process ID
A unique identifier assigned to the process when it is created; the OS uses this ID to refer
to the process.
2. Program counter
The program counter stores the address of the next instruction to be executed, i.e. the point at
which the process was suspended. The CPU uses this address when execution of the process is resumed.
3. Process State
A process, from its creation to its completion, goes through various states such as new, ready,
running and waiting. We will discuss them in detail later.
4. Priority
Every process has its own priority. The process with the highest priority among the ready processes
gets the CPU first. The priority is also stored in the process control block.
5. General Purpose Registers
Every process has its own set of register values, which hold the data generated
during the execution of the process. (A simplified PCB layout is sketched below.)
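As a rough, hypothetical illustration of how these attributes might be grouped together, the C sketch below defines a simplified process control block; the field names and sizes are assumptions for teaching, not the layout used by any real kernel.

#include <stdio.h>

/* Hypothetical, simplified process control block (PCB).
   Field names are illustrative; real kernels store many more attributes. */
enum proc_state { NEW, READY, RUNNING, WAITING };

struct pcb {
    int             pid;              /* process identifier                     */
    enum proc_state state;            /* current process state                  */
    int             priority;         /* scheduling priority                    */
    unsigned long   program_counter;  /* address of the next instruction to run */
    unsigned long   regs[16];         /* saved general purpose register values  */
};

int main(void)
{
    struct pcb p = { .pid = 1, .state = READY, .priority = 5,
                     .program_counter = 0x400000, .regs = {0} };
    printf("pid=%d state=%d priority=%d pc=0x%lx\n",
           p.pid, p.state, p.priority, p.program_counter);
    return 0;
}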
THREADS
Highlights
● Thread Types
● Multi-threading Models
● Issues in Threading
THREADS – Motivation
● A thread is a subset of a process and is also known as a lightweight process.
A process can have more than one thread, and these threads are managed
independently by the scheduler.
● When a process is executing, there is a point at which its execution is
currently located. This locus of execution is what we call a thread
● Therefore, a thread is a flow of execution through the process code, together
with its own associated context (PCB information)
● The thread has a program counter that keeps track of which instruction to
execute next. It has registers, which hold its current working variables. It has a
stack, which contains the execution history.
● Each thread belongs to exactly one process and no thread can exist outside a
process
How does a thread work?
As we have discussed, a thread is a sub-process, that is, an execution unit within a process.
A process can contain anywhere from a single thread to many threads. A thread works as follows:
•When a process starts, the OS assigns memory and resources to it. Each thread within a
process shares the memory and resources of that process.
•Threads are mainly used to improve the responsiveness and throughput of an application.
On a single processor only one thread executes at a time, but fast context switching
between threads gives the illusion that the threads are running in parallel.
•If a process executes only a single thread, it is known as single-threaded. If multiple
threads execute concurrently, it is known as multithreading, as the sketch below illustrates.
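The sketch below, assuming a POSIX system with the pthreads library (compile with cc -pthread), creates two threads inside one process; each thread has its own stack and program counter, yet both read the same global variable because they share the process's memory.

#include <pthread.h>
#include <stdio.h>

int shared = 42;   /* one copy in the process's address space, visible to every thread */

void *worker(void *arg)
{
    long id = (long)arg;
    /* Each thread has its own stack and program counter,
       but both threads read the same shared variable. */
    printf("thread %ld running, shared = %d\n", id, shared);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);   /* two threads of one process */
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);   /* wait for both threads to finish */
    pthread_join(t2, NULL);
    return 0;
}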
THREADS – Motivation
• Some differences between thread and process
✔ Issue of security: processes are isolated from one another, whereas threads
share the address space of their process, so a faulty thread can affect the
other threads of that process
● Advantages of ULT (user-level threads)
✔ They do not require modification to the operating system, and thread
switching does not require kernel-mode privileges
● Disadvantages of KLT (kernel-level threads)
✔ Kernel threads are generally slower to create and manage than
user-level threads
SCHEDULING
Highlights
● Schedulers
● Swapping
● Context switching
● Scheduling Algorithms
SCHEDULING - Motivation
Next → Schedulers
SCHEDULING - Schedulers
● Process scheduling is handled in many ways. The scheduler is special
system software, part of the OS, that performs this handling
● The scheduler's main task is to select the jobs to be submitted into the
system and to decide which process to run next; a toy simulation of this
decision appears after this list
● They are of three types, namely the Long-Term Scheduler, the Short-Term
Scheduler, and the Medium-Term Scheduler.
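As a toy illustration of the decision a short-term scheduler makes, the user-space simulation below dispatches processes from a ready queue in first-come, first-served order; the process ids and burst times are made up, and FCFS is just one possible policy.

#include <stdio.h>

/* Toy simulation of a short-term scheduler using first-come, first-served.
   The "ready queue" is just an array of process ids with CPU burst times. */
struct job { int pid; int burst; };

int main(void)
{
    struct job ready_queue[] = { {1, 5}, {2, 3}, {3, 8} };
    int n = 3, clock = 0;

    for (int i = 0; i < n; i++) {              /* scheduler picks the next job */
        printf("t=%d: dispatch process %d\n", clock, ready_queue[i].pid);
        clock += ready_queue[i].burst;         /* process runs to completion   */
    }
    printf("t=%d: all processes finished\n", clock);
    return 0;
}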
SCHEDULING - Schedulers
● Long Term Scheduler
✔ This is a job scheduler, since it determines which programs are admitted
into the system for processing
Next → Swapping
SCHEDULING - Swapping
● Too many processes in main memory lead to decreased
performance. Swapping is the approach used to handle this:
a process is moved out to secondary storage (the backing
store) and later moved back into main memory for
continued execution. In this way the system keeps
switching from one process to another
● Two concepts here
✔ Swap-out: moving a process from main memory to the backing store
✔ Swap-in: bringing a swapped-out process back into main memory
SYNCHRONIZATION
Highlights
● Synchronization Mechanisms
SYNCHRONIZATION - Motivation
● On the basis of synchronization, processes are categorized
into:
✔ Independent Process: execution of one process does not
affect the execution of other processes
✔ Cooperative Process: execution of one process affects the
execution of other processes.
● When two or more processes cooperate with each other,
their order of execution must be preserved
● Otherwise there can be conflicts in their execution, and
incorrect outputs can be produced because of the shared data
● Such processes need to be synchronized so that their
order of execution can be guaranteed
SYNCHRONIZATION - Motivation
● Two key concepts follow from the above:
● Race Condition
✔ A race condition arises when the outcome of execution depends on
the order in which cooperating processes access shared data
● Critical Section
✔ These are regions of a program that try to access shared
resources and may cause race conditions
✔ To avoid race conditions among the processes, we need to
ensure that only one process at a time can execute within
the critical section
✔ The critical section problem is to design a set of protocols
(mechanisms) which ensure that a race condition among the
processes never arises (the sketch after this list shows such a race)
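The sketch below, assuming POSIX threads, shows such a race: two threads increment a shared counter inside an unprotected critical section, so read-modify-write updates can be lost and the final value is usually less than the expected 2000000 (the exact result varies from run to run).

#include <pthread.h>
#include <stdio.h>

long counter = 0;                      /* shared resource */

void *increment(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                     /* critical section: unprotected read-modify-write */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but lost updates from the race usually make it smaller. */
    printf("counter = %ld\n", counter);
    return 0;
}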
SYNCHRONIZATION - Requirements of Synchronization mechanisms
● Mutual Exclusion; Only one process at a time can execute inside the
critical section
● Progress; If a process does not need to execute inside the critical
section, then it should not stop other processes from getting into
the critical section
SYNCHRONIZATION - Requirements of
Synchronization mechanisms
● Bounded Waiting; We should be able to bound the waiting time for
every process to get into the critical section. A process must not
wait endlessly to get into the critical section.
● Architectural Neutrality (portability); Our mechanism must be
architecturally neutral. It means that if our solution works fine on
one architecture, then it should also run on the other ones as well
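As one example of a synchronization mechanism that provides mutual exclusion for the race sketched earlier, the version below (again assuming POSIX threads) guards the critical section with a pthread mutex, so only one thread updates the counter at a time and the final value is reliably 2000000.

#include <pthread.h>
#include <stdio.h>

long counter = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);     /* enter critical section: mutual exclusion */
        counter++;
        pthread_mutex_unlock(&lock);   /* leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter); /* now reliably 2000000 */
    return 0;
}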
DEADLOCKS
Highlights
● Motivation
● No Preemption
✔ Deadlock persists partly because a process cannot be stopped once it
starts, but if we take the resource away from the process which is
causing the deadlock then we can prevent the deadlock
✔ It is not a good approach at all, since if we take away a resource which is
being used by a process, then all the work it has done till now
can become inconsistent
✔ Consider the example of a printer: taking it away in the middle of a job
would ruin the output
DEADLOCKS - Strategies for Handling
Deadlock
● Circular Wait
✔ One way to avoid circular wait is to number all resources, and to
require that each process requests resources only in increasing
order of those numbers
✔ One big challenge in this scheme is to determine the
relative ordering of the different resources
✔ Among all the methods, violating circular wait is the only
approach that can be implemented practically.
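As a small illustration of breaking circular wait by numbering resources, the sketch below (assuming POSIX threads) treats two mutexes as resource #1 and resource #2 and has every thread acquire them in that fixed order, so a cycle of waiting threads cannot form.

#include <pthread.h>
#include <stdio.h>

/* Resources are 'numbered' by always taking lock_a before lock_b. */
pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;  /* resource #1 */
pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;  /* resource #2 */

void *worker(void *arg)
{
    long id = (long)arg;
    /* Every thread requests resources in ascending order: a, then b.
       With a fixed global order, no circular wait can arise. */
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    printf("thread %ld holds both resources\n", id);
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}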
Next → END
END