process and threads

The document provides an overview of processes and threads in operating systems, detailing their definitions, attributes, operations, and states. It explains the roles of scheduling, synchronization, and deadlocks in process management, as well as the differences between user-level and kernel-level threads. Additionally, it covers various scheduling algorithms and the importance of efficient resource management in multi-threaded environments.

PROCESS AND THREADS


Overall highlights
● Process

● Threads

● Scheduling

● Synchronization

● Deadlocks
PROCESS
Highlights
● Motivation

● Process Operations

● Process Hierarchies

● Process States

● Implementation of a Process
A program does nothing unless its instructions are executed by a CPU. A program in execution is called a process. To accomplish its task, a process needs computer resources.

More than one process may exist in the system, and several processes may require the same resource at the same time. Therefore, the operating system has to manage all the processes and resources in a convenient and efficient way.

Some resources may need to be used by only one process at a time to maintain consistency; otherwise the system can become inconsistent and deadlock may occur.

The operating system is responsible for the following activities in connection with process management:

Scheduling processes and threads on the CPUs.
Creating and deleting both user and system processes.
Suspending and resuming processes.
Providing mechanisms for process synchronization.
Providing mechanisms for process communication.
PROCESS : Motivation
● The software that runs on a computer is organized as a collection of
programs executed in sequential fashion
● An instance of a program in execution, together with its associated
context or components, is basically a process
● A program becomes a process when it is loaded into memory to be
executed
Process Attributes
1. Process ID
When a process is created, a unique id is assigned to the process which is used for unique
identification of the process in the system.

2. Program counter
The program counter stores the address of the next instruction to be executed. When a process is
suspended, this value is saved so that the CPU can resume the process from exactly where it left off.

3. Process State
The Process, from its creation to the completion, goes through various states which are new, ready,
running and waiting. We will discuss about them later in detail.

4. Priority
Every process has its own priority. The process with the highest priority among the processes gets the
CPU first. This is also stored on the process control block.
5. General Purpose Registers
Every process has its own set of registers which are used to hold the data which is
generated during the execution of the process.

6. List of open files

During execution, every process uses some files which need to be present in the
main memory. The OS maintains a list of open files in the Process Control Block (PCB).

7. List of open devices


OS also maintains the list of all open devices that are used during the execution of the
process.
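The attributes above can be collected into a single record; here is a minimal Python sketch of a PCB-like structure (the field names and types are purely illustrative, not any real kernel's layout):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative process control block; real kernels store far more state."""
    pid: int                                          # 1. unique process ID
    program_counter: int = 0                          # 2. saved next-instruction address
    state: str = "new"                                # 3. new / ready / running / waiting
    priority: int = 0                                 # 4. scheduling priority
    registers: dict = field(default_factory=dict)     # 5. saved general-purpose registers
    open_files: list = field(default_factory=list)    # 6. list of open files
    open_devices: list = field(default_factory=list)  # 7. list of open devices

pcb = PCB(pid=42, priority=5)
pcb.open_files.append("log.txt")
```

The OS keeps one such entry per process; saving and restoring it is what makes suspension and resumption transparent to the process.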
PROCESS : Process Operations

● The process operations carried out by an OS are primarily
creation and termination
✔ A process can be created for several reasons, e.g., a user
logs on, a user starts a program, the OS creates a process
to provide a service such as printing, or one program starts
another process.
✔ The principal events that lead to process creation
are: system initialization, execution of a process-creation
system call by a running process, a user request to create
a new process, and initiation of a batch job
✔ Creation, then, is the task of bringing new processes into the system.
PROCESS : Process Operations

● A process is terminated when it finishes executing its last
statement, whereupon its resources are returned to the
system
● Usually processes terminate for the following reasons:
✔ Normal exit, that is, the process has finished its job and
exits.
✔ Error exit, caused by an error detected by the process; for instance, a user
tries to compile a program that does not exist, so the
compiler simply announces this issue and exits.
✔ Fatal error caused by a bug in the program, for
example, executing an illegal instruction, referencing non-
existent memory, or dividing by zero.
✔ Killed by another process, e.g., via a kill system call
PROCESS : Process Operations

● Reasons for process termination include:

✔ program done, exceeded time limit, insufficient memory,
memory-access violation, unauthorized access, arithmetic
fault, exceeded waiting time, I/O fault, invalid instruction,
privileged instruction, operator or OS intervention, and
termination of/by the parent process.

✔ Next → Process Hierarchies


PROCESS : Process Hierarchies

● In some systems, when a process creates another process,
the child process can itself create more processes,
forming a process hierarchy
● In UNIX-like systems, a process and all of its children and
further descendants together form a process group
● Windows has no concept of a process hierarchy; all
processes are equal. A parent is given a special token
(called a handle) that it can use to control the child.
However, it is free to pass this token to some other
process, which differs from UNIX-like systems
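The idea of a process group can be sketched as a walk over a parent-child tree; the `children` table and `process_group` helper below are purely illustrative, not a system API:

```python
# children maps a parent PID to the PIDs it created (illustrative data).
children = {1: [100, 200], 100: [101, 102], 200: [201], 101: [103]}

def process_group(pid):
    """Return pid plus all of its descendants, UNIX-process-group style."""
    group = [pid]
    for child in children.get(pid, []):
        group.extend(process_group(child))
    return group

print(sorted(process_group(100)))  # [100, 101, 102, 103]
```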

Next → Process States


PROCESS – Process States

● Although each process is an independent entity, with
its own program counter and internal state, processes
often need to interact with other processes
● One process may generate some output that another
process uses as input
● A process may be ready to run but have no input
waiting for it; it must then block until some input is
available. It is also possible for a process that is
conceptually ready and able to run to be stopped
because the operating system has decided to allocate
the CPU to another process for a while
PROCESS – Process States

● A "ready" or "waiting" process has been loaded into
main memory and is awaiting execution on a CPU (to
be context switched onto the CPU by the dispatcher,
or short-term scheduler). For example, in a one-
processor system, only one process can be executing
at any one time, and all other "concurrently
executing" processes will be waiting for execution.
PROCESS – Process States

● A process moves into the running state when it is
chosen for execution. The process's instructions are
executed by one of the CPUs (or cores) of the system.
There is at most one running process per CPU or core.
A process can run in either of two modes, namely
kernel mode or user mode
PROCESS – Process States

● A process transitions to a blocked state when it
cannot carry on without an external change in state
or event occurring. For example, a process may block
on a call to an I/O device such as a printer, if the
printer is not available. Processes also commonly
block when they require user input, or require access
to a critical section which must be executed
atomically. Such critical sections are protected using a
synchronization object such as a semaphore or mutex
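The ready/running/blocked behaviour described above can be captured as a table of legal transitions; a minimal sketch, where the transition set is an assumption based on the classic three-state model:

```python
# Legal transitions of the three-state model (state names as in the slides).
TRANSITIONS = {
    ("ready", "running"),    # picked by the scheduler (dispatch)
    ("running", "ready"),    # preempted: CPU given to another process
    ("running", "blocked"),  # cannot continue until some event occurs
    ("blocked", "ready"),    # the awaited event has happened
}

class Process:
    def __init__(self):
        self.state = "ready"

    def move(self, new_state):
        if (self.state, new_state) not in TRANSITIONS:
            raise ValueError(f"illegal transition: {self.state} -> {new_state}")
        self.state = new_state

p = Process()
p.move("running")
p.move("blocked")
p.move("ready")   # a blocked process never runs directly; it becomes ready first
```

Note that ("blocked", "running") is deliberately absent: a blocked process must pass through the ready state and be dispatched again.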
PROCESS – Process States
● Three-state model: a process moves between the ready, running, and blocked states

Next → Process Implementation


PROCESS – Process Implementation

● To implement processes, an OS maintains a table,
called the process table, containing an array of
structures (PCBs) with one entry per process
● The OS maintains this state because a process may be
interrupted thousands of times during its execution,
but the key idea is that after each interrupt the
interrupted process returns to precisely the same
state it was in before the interrupt occurred
THREADS
Highlights
● Motivation

● Thread Types

● Multi-threading Models

● Issues in Threading
THREADS – Motivation
● A thread is a subset of a process and is also known as a lightweight process.
A process can have more than one thread, and these threads are managed
independently by the scheduler.
● When a process is executing, there is a point at which the execution is
located. This locus is what we call a thread
● Therefore, a thread is a flow of execution through the process code, with its own
associated context (PCB information)
● A thread has a program counter that keeps track of which instruction to
execute next, registers which hold its current working variables, and a
stack which contains its execution history.
● Each thread belongs to exactly one process and no thread can exist outside a
process
How does a thread work?
As discussed, a thread is a sub-process or an execution unit within a process.
A process can contain a single thread or multiple threads. A thread works as follows:
•When a process starts, the OS assigns memory and resources to it. Each thread within a
process shares the memory and resources of that process only.
•Threads are mainly used to improve the responsiveness of an application. On a single core,
only one thread executes at a time, but fast context switching between threads
gives the illusion that the threads are running in parallel.
•If a single thread executes in a process, it is known as single-threading; if multiple
threads execute concurrently, it is known as multithreading.
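These points can be demonstrated with Python's standard threading module: two threads of the same process append to one shared list, illustrating that threads share their process's memory:

```python
import threading

shared = []          # lives in the process's memory, visible to every thread
lock = threading.Lock()

def worker(name, n):
    for i in range(n):
        with lock:              # serialize access to the shared structure
            shared.append((name, i))

t1 = threading.Thread(target=worker, args=("A", 3))
t2 = threading.Thread(target=worker, args=("B", 3))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(shared))  # 6: both threads wrote into the same list
```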
THREADS – Motivation
• Some differences between threads and processes

• A process is independent and is not contained within another
process, whereas all threads are logically contained within a process.
• Processes are heavyweight, whereas threads are lightweight.
• A process can exist individually as it contains its own memory and
other resources, whereas a thread cannot have an individual
existence.
• Processes do not generally require synchronization among themselves. In
contrast, threads need to be synchronized in order to avoid
unexpected scenarios.
• Processes can communicate with each other using inter-process
communication only; in contrast, threads can directly communicate
with each other as they share the same address space
THREADS – Motivation
● Advantages of threads:
✔ Threads minimize context-switching time; the use of threads
provides concurrency within a process with efficient
communication. It is more economical to create and context
switch threads, and they allow utilization of multiprocessor
architectures to a greater scale and efficiency.
✔ Scalability. One thread runs on one CPU; in multi-threaded
processes, threads can be distributed over a series of
processors to scale
● Disadvantages of threads:
✔ Blocking a parent thread will stop all its child threads.
✔ Issues of security.

Next → Thread Types


THREADS – Thread Types
● User Level Threads (ULT)
● User-level threads are managed only by user code; the
kernel has no information about them.
✔ They operate through a user-level library; these libraries provide
programmers with an API for the creation and management of
threads
✔ These libraries contain code for creation and termination of
threads, data and message exchange among threads, thread
scheduling, and saving and restoring thread context
✔ Thread switching therefore does not need to call the operating system
or cause an interrupt to the kernel. In fact, the kernel knows
nothing about user-level threads and manages the process as if
it were single-threaded
✔ Examples: POSIX Pthreads, Mach C-threads and Solaris
threads
THREADS – Thread Types

● Advantages of ULT
✔ Do not require modification of the operating system;
easy to represent and manage; user-level threads can
run on any operating system and are thus OS-independent;
user-level threads are fast and efficient
● Disadvantages of ULT
✔ Lack of coordination between threads and the operating
system kernel; ULT require non-blocking system calls

THREADS – Thread Types
● Kernel Level Threads (KLT)
● Kernel-level threads are handled by the operating
system and managed by its kernel.
✔ No run-time system is needed in this case; the OS kernel
provides system calls to create and manage threads.
✔ Examples: Windows 95/98/NT/2000, Solaris, Tru64 UNIX,
BeOS and Linux
● Advantages of KLT
✔ The kernel can simultaneously schedule multiple threads from
the same process on multiple processors; if one thread in a
process is blocked, the kernel can schedule another thread
of the same process; kernel routines themselves can be
multi-threaded
THREADS – Thread Types

● Disadvantages of KLT
✔ Kernel threads are generally slower to create and
manage than user threads; the kernel requires a Thread
Control Block (TCB) for each thread in the pool, hence
complexity increases
✔ To overcome the blocking disadvantage, a jacketing
mechanism can be used, which converts a blocking system
call into a non-blocking system call; alternatively, an
application can be written as multiple processes rather
than multiple threads
THREADS – Thread Types

● The term multi-threading is used to describe the
situation of allowing multiple threads in the same process

Next → Multi-threading Models


THREADS – Multi-threading Models

● The situation of allowing multiple threads in the same
process is what we call multi-threading
● Some operating systems provide a combined user-
level and kernel-level thread facility; Solaris is a
good example of this combined approach
● Multi-threading models are of three types: the Many-
to-One Model, the One-to-One Model and the Many-to-Many
Model
SCHEDULING
Highlights
● Motivation

● Schedulers

● Swapping

● Context switching

● Scheduling Algorithms
SCHEDULING - Motivation

● On the basis of a particular strategy, the running
process is removed from the CPU so that the selection
of another process can begin
● This process-manager activity is what we call process
scheduling; it is an essential part of multi-programming
operating systems
● Thus, an OS allows more than one process to be loaded
into executable memory at a time, and the loaded
processes share the CPU using time multiplexing
● An OS achieves all this by using process schedulers

Next → Schedulers
SCHEDULING - Schedulers
● Process scheduling is handled in many ways. A scheduler is special
system software, part of the OS, that performs this handling
● A scheduler's main task is to select the jobs to be submitted into the
system and to decide which process to run
● Schedulers are of three types, namely the Long-Term Scheduler, the Short-Term
Scheduler, and the Medium-Term Scheduler.
SCHEDULING - Schedulers
● Long Term Scheduler
✔ This is the job scheduler, since it determines which programs are
admitted to the system for processing
✔ It selects processes from the queue and loads them into memory for
execution, which is where CPU scheduling is done
✔ Its primary objective is to provide a balanced mix of jobs, such as I/O-
bound and processor-bound, so it controls the degree of multi-
programming
✔ The long-term scheduler is used when a process changes state from
new to ready


SCHEDULING - Schedulers
● Short Term Scheduler
✔ It aims at increasing system performance in accordance with
a chosen set of criteria, so it is the CPU scheduler
✔ Short-term schedulers are also known as dispatchers, since they make
the decision of which process to execute next
✔ They allocate the CPU to the selected process among the processes
that are ready to execute
✔ This is the change of a process from the ready state to the running state
✔ Short-term schedulers are faster than long-term schedulers


SCHEDULING - Schedulers
● Medium Term Scheduler
✔ It is part of swapping, since it is in charge of
handling swapped-out processes.
✔ As swapping involves removing a process
from memory to secondary storage to make space for
other processes, the medium-term scheduler is
responsible for these swapped-out processes
✔ A running process may become suspended if it makes an
I/O request
✔ Swapping out reduces the degree of multi-programming

Next → Swapping
SCHEDULING - Swapping
● Too many processes in main memory lead to decreased
performance. Swapping is the approach used: a process
is moved to secondary storage (the backing store)
and later moved back into main memory for
continued execution, switching from one process
to another
● Two concepts here:

✔ Backing store – a fast disk large enough to accommodate
copies of all memory images for all users, which must
provide direct access to these memory images
✔ Roll out, roll in – a swapping variant used with priority-based
scheduling algorithms, whereby a lower-priority process is
swapped out so a higher-priority process can be loaded and
executed
Next → Context Switching
SCHEDULING – Scheduling Algorithms

● For a process to be assigned a CPU, a scheduler schedules
the different processes based on particular algorithms
● These algorithms are either non-preemptive or
preemptive
● Non-preemptive algorithms are designed so that once a
process enters the running state, it cannot be preempted
(interrupted) until it completes its CPU burst
● Preemptive scheduling is based on priority: a
scheduler may preempt a low-priority running process
at any time when a high-priority process enters the ready
state
SCHEDULING – Scheduling Algorithms

● For the scheduler to do its work of scheduling with the
selected algorithm, it uses the following criteria:
✔ CPU utilization: keep the CPU as busy as possible
✔ Throughput: number of processes that are completed per
time unit
✔ Turnaround time: time between submission and
completion
✔ Waiting time: time spent in the ready queue (scheduling
affects only waiting time)
✔ Response time: time between submission and first
response
SCHEDULING – Scheduling Algorithms
• Scheduling algorithms
● First Come First Serve (FCFS)

✔ It executes processes from a FIFO queue, so it is non-preemptive
✔ Easy to understand and implement, and used in batch
systems
✔ Parallel utilization of resources is not possible, which leads
to the convoy effect, and hence poor resource (CPU, I/O, etc.)
utilization and poor performance.
✔ When many short processes are waiting
for a resource that is held by one long process,
we say that it is a convoy effect
✔ A demonstration of how non-preemptive FCFS works is
to be done practically in this part
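The behaviour of non-preemptive FCFS can be simulated in a few lines; the arrival and burst times below are hypothetical, chosen so that a long first job illustrates the convoy effect:

```python
def fcfs(jobs):
    """Non-preemptive FCFS; jobs is a list of (name, arrival, burst)."""
    time, stats = 0, {}
    for name, arrival, burst in sorted(jobs, key=lambda j: j[1]):
        time = max(time, arrival)          # CPU may sit idle until the job arrives
        waiting = time - arrival
        time += burst                      # runs to completion: non-preemptive
        stats[name] = {"waiting": waiting, "turnaround": time - arrival}
    return stats

# A long job arriving first makes the short jobs queue behind it: the convoy effect.
fcfs_stats = fcfs([("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)])
print(fcfs_stats["P2"]["waiting"])  # 23
```

Swapping the long job to the back of the queue would cut the short jobs' waiting times dramatically, which is exactly the weakness SJF addresses next.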
SCHEDULING – Scheduling Algorithms

● Shortest Job First (SJF) Scheduling

✔ This approach aims at minimizing waiting time; it comes in
both non-preemptive and preemptive variants
✔ Easy to implement in batch systems where the required CPU
time is known in advance, but not in interactive systems
✔ So, the processor must know in advance how much time
each process will take to execute
✔ Demonstrations of how non-preemptive and
preemptive SJF work are to be done practically in this part
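Non-preemptive SJF can be simulated the same way, picking the shortest available burst each time the CPU frees up (arrival/burst numbers are again hypothetical):

```python
def sjf(jobs):
    """Non-preemptive Shortest Job First; jobs is a list of (name, arrival, burst)."""
    pending = sorted(jobs, key=lambda j: j[1])   # ordered by arrival time
    time, stats = 0, {}
    while pending:
        ready = [j for j in pending if j[1] <= time]
        if not ready:                            # CPU idle until the next arrival
            time = pending[0][1]
            continue
        job = min(ready, key=lambda j: j[2])     # shortest burst among ready jobs
        pending.remove(job)
        name, arrival, burst = job
        stats[name] = {"waiting": time - arrival,
                       "turnaround": time - arrival + burst}
        time += burst
    return stats

sjf_stats = sjf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2)])
```

Here P1 runs first because it is alone at time 0, then P3 (burst 2) jumps ahead of P2 (burst 4), which is the scheduler favouring short jobs.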
SCHEDULING – Scheduling Algorithms

● Priority Based Scheduling

✔ The process with the highest priority is executed first, and so on;
it comes in both non-preemptive and preemptive variants
✔ A common scheduling algorithm in batch systems
✔ Each process is assigned a priority, and processes with the
same priority are executed on a first-come, first-served basis
✔ Internal priorities can be decided based on memory requirements, time
requirements or any other resource requirement, whereas
external priorities are set based on criteria outside the
operating system, like the importance of the process, funds
paid for the computer resource use, market factors, etc.
SCHEDULING – Scheduling Algorithms
✔ With preemptive priority scheduling, the arrival of a new
process with higher priority preempts the currently
running process.
✔ With non-preemptive priority scheduling, when a new process
arrives with a higher priority than the currently running
process, the incoming process is put at the head of the ready
queue, which means it will be processed after the execution
of the current process
✔ Processes with lower priority may wait in the ready
queue for a long duration before getting the CPU for
execution, hence indefinite blocking or starvation
✔ To handle this problem, the aging technique is used, where we
keep increasing the priority of a low-priority process
based on its waiting time
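The aging technique can be sketched as follows: every scheduling tick, each waiting process's effective priority grows with its waiting time, so even a low-priority process eventually gets the CPU (the numbers and the linear aging rate are illustrative choices):

```python
def pick_with_aging(ready, age_rate=1):
    """ready maps name -> [base_priority, ticks_waited] (higher value wins).
    Effective priority grows with waiting time, so starvation is bounded."""
    def effective(item):
        _, (prio, waited) = item
        return prio + age_rate * waited
    winner = max(ready.items(), key=effective)[0]
    for name, entry in ready.items():
        if name == winner:
            entry[1] = 0          # the winner's accumulated age is reset
        else:
            entry[1] += 1         # everyone else waits one more tick
    return winner

ready = {"low": [1, 0], "high": [4, 0]}
order = [pick_with_aging(ready) for _ in range(6)]
```

Without the `age_rate` term, "high" would win every tick and "low" would starve; with it, "low" is scheduled within a few ticks.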
SYNCHRONIZATION
Highlights
● Motivation

● Requirements of Synchronization mechanisms

● Synchronization Mechanisms
SYNCHRONIZATION - Motivation
● On the basis of synchronization, processes are categorized
into:
✔ Independent processes: execution of one process does not
affect the execution of other processes
✔ Cooperative processes: execution of one process affects the
execution of other processes.
● When two or more processes cooperate with each other,
their order of execution must be preserved
● If not, there can be conflicts in their execution and
inappropriate outputs can be produced - the impact of sharing
● Such processes need to be synchronized so that their
order of execution can be guaranteed
SYNCHRONIZATION - Motivation

● The procedure of preserving the appropriate
order of execution of cooperative processes is what
we call process synchronization
● Now let us look at the terms that extend the problem
above
● Race Condition

✔ Typically occurs when two or more threads
read, write and possibly make decisions based on
memory that they are accessing concurrently
SYNCHRONIZATION - Motivation

● Critical Section
✔ These are the regions of a program that access shared
resources and may cause race conditions
✔ To avoid race conditions among processes, we need to
ensure that only one process at a time can execute within
the critical section
✔ The critical-section problem is to design a set of
protocols (mechanisms) which can ensure that a race
condition among the processes will never arise
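The critical-section idea can be demonstrated with a mutex: two threads perform a read-modify-write on a shared counter, and the lock makes that sequence atomic so no update is lost:

```python
import threading

counter = 0
mutex = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with mutex:          # critical section: only one thread at a time
            counter += 1     # read-modify-write on shared memory

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000: with the lock, no increments are lost
```

Removing the `with mutex:` line would allow the two read-modify-write sequences to interleave, which is exactly the race condition described above.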

Next → Requirements of Synchronization Mechanisms


SYNCHRONIZATION - Requirements of
Synchronization mechanisms
• Any solution to the critical-section problem must satisfy the three
requirements below
● Mutual Exclusion: our solution must provide mutual exclusion. By
mutual exclusion, we mean that if one process is executing inside the
critical section then no other process may enter the critical
section
● Progress: if one process does not need to execute
in the critical section, it should not stop other processes from getting into
the critical section
SYNCHRONIZATION - Requirements of
Synchronization mechanisms
● Bounded Waiting: we should be able to bound the waiting time for
every process to get into the critical section. No process must be
endlessly waiting to get into the critical section.
● Architectural Neutrality (portability): our mechanism must be
architecturally neutral. This means that if our solution works fine on
one architecture then it should also run on the other ones as well
DEADLOCKS
Highlights
● Motivation

● Strategies for Handling Deadlocks


DEADLOCKS - Motivation
● Every process needs some resources to complete its execution,
and resources are granted in a sequential order
● A process requests a resource; the OS grants the
resource if it is available, otherwise the process waits. The process
uses it and releases it on completion
● A situation where each process waits for a resource
that is assigned to some other process is what we call
deadlock. In this situation, none of the processes gets executed
DEADLOCKS - Motivation

Next → Conditions for Deadlock


DEADLOCKS - Motivation
• Necessary conditions for deadlock
● Mutual Exclusion: a resource can only be used in a
mutually exclusive manner, implying that two processes
cannot use the same resource at the same time
● Hold and Wait: a process waits for some resources while
holding other resources at the same time
● No Preemption: a resource, once granted, cannot be taken
away; it is released only voluntarily by the process holding it
● Circular Wait: the processes are waiting for
resources in a cyclic manner, so that the last process is
waiting for a resource which is being held by the first
process

Next → Deadlock Handling Strategies


DEADLOCKS - Strategies for Handling
Deadlock
● Deadlock Ignorance
✔ In this approach, the OS assumes that deadlock never
occurs, so it simply ignores deadlock
✔ It is best suited to single-user systems where the user uses
the system only for browsing and other everyday tasks
✔ The performance of the system decreases if it uses a
deadlock-handling mechanism all the time; if deadlock
happens, say, 1 time out of 100, then it is completely unnecessary
to use the deadlock-handling mechanism all the time
✔ The user simply restarts the computer in the case of
deadlock. Windows and Linux mainly use this approach
DEADLOCKS - Strategies for Handling
Deadlock
● Deadlock Prevention
✔ Deadlock happens only when mutual exclusion, hold and
wait, no preemption and circular wait hold
simultaneously
✔ If it is possible to violate one of the four conditions at all
times, then deadlock can never occur in the system
✔ The idea behind the approach is very simple: we have
to defeat one of the four conditions. The big argument,
however, is about its physical implementation in the system

Next → How to fail each of the four conditions


DEADLOCKS - Strategies for Handling
Deadlock
✔ Handling Mutual Exclusion: the concept of spooling may be
used here; for instance, a printer spooler collects many jobs and
prints each one of them in FCFS order. With this
mechanism, a process does not have to wait for the
printer and can continue whatever it was doing, later
collecting the output when it is produced
✔ Hold and Wait: use the negation of hold and wait - !(hold
and wait) = !hold or !wait. This can be implemented
practically if a process declares all its resources initially.
The problem with this approach is that a process often cannot
determine its necessary resources initially
DEADLOCKS - Strategies for Handling
Deadlock
● No Preemption
✔ Deadlock arises partly because a resource cannot be taken from a
process once granted, but if we take the resource away from the
process which is causing deadlock, then we can prevent deadlock
✔ It is not a good approach in general, since if we take away a resource
which is being used by a process, then all the work it has done so far
can become inconsistent
✔ Consider the example of a printer preempted in the middle of a job
DEADLOCKS - Strategies for Handling
Deadlock
● Circular Wait
✔ One way to avoid circular wait is to number all resources, and to
require that processes request resources only in strictly increasing (or
decreasing) order
✔ In other words, in order to request resource Rj, a process must first
release all Ri such that i >= j
✔ We can assign a priority number to each resource, so that a process
cannot request a lower-priority resource while holding a higher one
✔ With this, no cycle of waiting processes can form, hence no deadlock
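Resource ordering can be sketched with numbered locks: no matter what order a process asks for them, they are acquired in increasing number, so no circular wait can form (the numbering scheme and helpers are illustrative assumptions):

```python
import threading

# Resources numbered globally; every acquisition must follow this order.
R1, R2 = threading.Lock(), threading.Lock()
ORDER = {id(R1): 1, id(R2): 2}

def acquire_in_order(*locks):
    """Acquire locks sorted by their global number, preventing circular wait."""
    for lock in sorted(locks, key=lambda l: ORDER[id(l)]):
        lock.acquire()

def release(*locks):
    for lock in locks:
        lock.release()

log = []

def task(name):
    acquire_in_order(R2, R1)   # requested out of order, still acquired in order
    log.append(name)
    release(R1, R2)

ts = [threading.Thread(target=task, args=(n,)) for n in ("A", "B")]
for t in ts:
    t.start()
for t in ts:
    t.join()
```

Without the ordering, one thread taking R1-then-R2 while another takes R2-then-R1 could each hold one lock and wait for the other forever.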


DEADLOCKS - Strategies for Handling Deadlock

● Circular Wait
✔ One big challenge in this scheme is to determine the
relative ordering of the different resources
✔ Among all the methods, violating circular wait is the only
approach that can be implemented practically.

Next → Deadlock Avoidance Strategy


DEADLOCKS - Strategies for Handling Deadlock
● Deadlock Avoidance
✔ In this strategy, the operating system checks whether the
system is in a safe state or an unsafe state at every step
it performs
✔ Processing continues while the system is in a safe state;
once the system would move to an unsafe state, the OS has to
backtrack one step
✔ In simple words, the OS reviews each allocation so that
the allocation does not cause deadlock in the system
✔ To avoid deadlocks, each process must tell the OS the
maximum number of resources it may request to
complete its execution, so the OS can examine whether a circular
wait condition could arise
✔ Dijkstra's banker's algorithm is used here
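The safety check at the heart of the banker's algorithm can be sketched as below; the allocation and maximum matrices are a textbook-style hypothetical example, not data from these slides:

```python
def is_safe(available, max_need, allocation):
    """Return True if some ordering lets every process finish (safe state)."""
    n, m = len(max_need), len(available)
    work = list(available)
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    finished = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):          # process i can finish; reclaim its resources
                    work[j] += allocation[i][j]
                finished[i] = True
                progress = True
    return all(finished)

# Hypothetical system: 5 processes, 3 resource types.
alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maxn  = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
print(is_safe([3, 3, 2], maxn, alloc))  # True: a safe completion order exists
```

The OS runs this check on every tentative grant; if granting a request would make `is_safe` return False, the request is delayed instead.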

Next → Deadlock Detection and Recovery Strategy


DEADLOCKS - Strategies for Handling
Deadlock
● Deadlock Detection and Recovery
✔ This approach lets processes fall into deadlock and then
periodically checks whether deadlock has occurred in the system
or not
✔ The system does not attempt to prevent deadlocks from
occurring. Instead, it lets them occur, tries to detect when
this happens, and then takes some action to recover after
the fact
✔ The OS can detect deadlocks with the help of the
Resource Allocation Graph, a pictorial
representation of the state of the system
DEADLOCKS - Strategies for Handling
Deadlock
✔ The resource allocation graph holds complete information
about all the processes which are holding some resources
or waiting for some resources
✔ It also contains information about all the instances of
all the resources, whether they are available or being used
by processes
✔ If a cycle is formed in a resource allocation graph
where every resource has a single instance, then the
system is deadlocked

Next → Example using Resource Allocation Graph


DEADLOCKS - Strategies for Handling
Deadlock
✔ Consider a system with seven processes, A through G, and
six resources, R through W. The state of which resources
are currently owned and which ones are currently being
requested is as follows;
➔ Process A holds R and wants S.

➔ Process B holds nothing but wants T.

➔ Process C holds nothing but wants S.

➔ Process D holds U and wants S and T.

➔ Process E holds T and wants V.

➔ Process F holds W and wants S.

➔ Process G holds V and wants U.
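The holds/wants list above can be turned into a directed graph and searched for a cycle; a small DFS sketch (the edge encoding is an illustrative choice):

```python
# Edges of the resource allocation graph from the list above:
# resource -> process means "held by"; process -> resource means "wants".
edges = {
    "R": ["A"], "A": ["S"],
    "B": ["T"], "C": ["S"],
    "U": ["D"], "D": ["S", "T"],
    "T": ["E"], "E": ["V"],
    "W": ["F"], "F": ["S"],
    "V": ["G"], "G": ["U"],
}

def find_cycle(graph):
    """Depth-first search; returns one cycle as a list of nodes, or None."""
    def dfs(node, path, on_path):
        if node in on_path:
            return path[path.index(node):]
        for nxt in graph.get(node, []):
            found = dfs(nxt, path + [nxt], on_path | {node})
            if found:
                return found
        return None
    for start in graph:
        cycle = dfs(start, [start], set())
        if cycle:
            return cycle
    return None

cycle = find_cycle(edges)
print(cycle)  # cycle through D, T, E, V, G, U: these processes are deadlocked
```

The search finds the cycle D → T → E → V → G → U → D, so processes D, E and G are deadlocked while A, B, C and F are not.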


DEADLOCKS - Strategies for Handling
Deadlock

Next → Recovery from deadlock using this strategy


DEADLOCKS - Strategies for Handling
Deadlock
● In order to recover the system from deadlock, the OS
considers either resources or processes
● For resources:
✔ Preempt the resource - we can take one of the resources away
from its owning process and give it to another
process, with the expectation that it will complete its
execution and release this resource sooner. But choosing
which resource to preempt is going to be a bit difficult
✔ Rollback to a safe state - the system passes through various states
before getting into the deadlock state, so the OS can roll the system
back to a previous safe state. For this purpose, the OS needs to
implement checkpointing at every state; the moment
a process gets into deadlock, we roll back all the allocations
to get back to the previous safe state
DEADLOCKS - Strategies for Handling
Deadlock
● For processes:
✔ Kill a process - killing a process can solve our problem, but
the bigger concern is deciding which process to kill.
Generally, the OS kills the process which has done the least amount
of work by the time deadlock occurs
✔ Kill all processes - this is not an advisable approach, but it can
be implemented if the problem becomes very serious.
Killing all processes leads to inefficiency in the system
because all the processes must execute again from the start

Next → END
END
