UNIT-II
21BTCS504-Operating System
Class - T.Y. (SEM-I)
Unit - II
PROCESS MANAGEMENT
AY 2024-2025 SEM-I
MIT School of Computing
Department of Computer Science & Engineering
Unit-II Syllabus/Contents
• Process Concept,
• Operations on Processes,
• Process Scheduling,
• Inter-process Communication, Examples of IPC Systems,
• Multithreading Models, Threading Issues,
• CPU Scheduling, Basic Concepts, Scheduling Criteria,
Scheduling Algorithms, Multiple-Processor Scheduling, Thread
Scheduling
Process
• Program under execution
• An instance of a program
• Schedulable/Dispatch unit (CPU)
• Unit of execution (CPU)
Process representation
[Figure: a process in memory, with text, data, heap, and stack sections]
• A process is also known as a job or task. A process is more than the program code, which is sometimes known as the text section. It also includes the current activity, as represented by the value of the program counter and the contents of the processor's registers.
• A process generally also includes the process stack, which contains temporary data (such as function parameters, return addresses, and local variables).
• A data section, which contains global variables.
• A process may also include a heap, which is memory that is dynamically allocated during process run time.
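A minimal C sketch (illustrative, not from the slides) showing which section each kind of variable lives in:

    #include <stdlib.h>

    int counter = 0;                 /* data section: global variable */

    int square(int x) {              /* function code lives in the text section */
        int result = x * x;          /* stack: local variable */
        return result;
    }

    int main(void) {
        int local = square(5);               /* stack: local + return address */
        int *buf = malloc(16 * sizeof(int)); /* heap: dynamic allocation */
        buf[0] = counter + local;
        free(buf);
        return 0;
    }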
Process operations
• Create (space allocation)
• Schedule/run
• Wait/Block
• Suspend/Resume
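A minimal POSIX sketch (an illustrative example, not from the slides) of the create and wait operations, using fork() and waitpid():

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();          /* create: the OS allocates space and a PCB */
        if (pid < 0) {
            perror("fork");
            exit(1);
        }
        if (pid == 0) {              /* child process */
            printf("child %d running\n", getpid());
            exit(0);
        }
        waitpid(pid, NULL, 0);       /* parent waits (blocks) until the child exits */
        printf("parent %d resumes\n", getpid());
        return 0;
    }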
Process Control Block (PCB)
• Process state. The state may be new, ready, running, waiting, halted, and so on.
• Program counter. The counter indicates the address of the next instruction to be executed for this process.
• CPU registers. The registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.
• CPU-scheduling information. This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.
• Memory-management information. This information may include such items as the values of the base and limit registers and the page tables, or the segment tables, depending on the memory system used by the operating system.
• Accounting information. This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
• I/O status information. This information includes the list of I/O devices allocated to the process, a list of open files, and so on.
In brief, the PCB simply serves as the repository for any information that may vary from process to process.
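The fields above could be collected in a structure like the following hypothetical sketch (the field names and sizes are illustrative assumptions, not any real kernel's layout):

    #define MAX_OPEN_FILES 16

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int             pid;                  /* process number */
        enum proc_state state;                /* process state */
        unsigned long   program_counter;      /* next instruction address */
        unsigned long   registers[16];        /* saved CPU registers */
        int             priority;             /* CPU-scheduling information */
        unsigned long   base, limit;          /* memory-management information */
        unsigned long   cpu_time_used;        /* accounting information */
        int             open_files[MAX_OPEN_FILES]; /* I/O status information */
        struct pcb     *next;                 /* link for a scheduling queue */
    };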
Context Switching in OS
• The CPU uses its registers to hold the current process's execution state
• When the process is stopped partway, these values must be saved in its PCB so it can resume from that point onward
• A context switch saves the running process's context (CPU register values) from the CPU into that process's PCB, and loads the other process's context into the CPU registers
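Conceptually, a context switch copies register state between the CPU and two PCBs. A simplified sketch, reusing the hypothetical struct pcb above (real kernels do this in architecture-specific assembly):

    struct cpu { unsigned long pc; unsigned long regs[16]; };

    void context_switch(struct cpu *cpu, struct pcb *prev, struct pcb *next) {
        /* save the running process's context into its PCB */
        prev->program_counter = cpu->pc;
        for (int i = 0; i < 16; i++) prev->registers[i] = cpu->regs[i];
        prev->state = READY;

        /* load the next process's context from its PCB into the CPU */
        cpu->pc = next->program_counter;
        for (int i = 0; i < 16; i++) cpu->regs[i] = next->registers[i];
        next->state = RUNNING;
    }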
Thread in Operating System
• A thread is a single sequence stream within a process. Threads are also called lightweight processes, as they possess some of the properties of processes.
• Each thread belongs to exactly one process. In an operating system that supports multithreading, a process can consist of many threads.
• Threads run truly in parallel only when more than one CPU is available; on a single CPU, two threads must share it through context switching.
Why Do We Need Threads?
• Threads run in parallel, improving application performance. Each thread has its own CPU state and stack, but all threads share the address space of the process and its environment.
• Threads can share common data, so they do not need inter-process communication. Like processes, threads have states such as ready, executing, and blocked.
• Priority can be assigned to threads just as to processes, and the highest-priority thread is scheduled first.
• Each thread has its own Thread Control Block (TCB). As with a process, a context switch occurs for a thread, and the register contents are saved in the TCB. Because threads share the same address space and resources, synchronization is also required for the various activities of the threads.
Types of Thread in Operating System
• Threads are of two types, described below:
• User-Level Thread
• Kernel-Level Thread
User-Level Threads
• Advantages of User-Level Threads
• User-level threads are easier to implement than kernel-level threads.
• Context-switch time is lower for user-level threads.
• User-level threads are more efficient than kernel-level threads.
• A user-level thread has a simple representation: just a program counter, register set, and stack space.
• Disadvantages of User-Level Threads
• There is a lack of coordination between the threads and the kernel.
• In case of a page fault, the whole process can be blocked.
Kernel-Level Threads
• Advantages of Kernel-Level Threads
• The kernel has up-to-date information on all threads.
• Applications whose threads block frequently are better handled by kernel-level threads.
• Whenever a process requires more processing time, kernel-level threads can give it more time.
• Disadvantages of Kernel-Level Threads
• Kernel-level threads are slower than user-level threads.
• Implementation of this type of thread is a little more complex than a user-level thread.
[Figure: process state transition diagram]
Process States: Explanation
• New: newly created processes reside here, still on disk (SSD/HDD)
• Ready: processes loaded into main memory (MM) that are waiting to run on the CPU
• Running: currently executing on the CPU (in MM)
• Blocked: waiting for some I/O or event (in MM)
• Terminated: completed processes are in this state (removed from MM)
Note
• When a new process is created, the OS builds its PCB
• Under non-preemptive scheduling, nothing can force a running process off the CPU; it leaves only when it terminates or voluntarily moves to the waiting state
• A PCB exists while a process is in the running, ready, or waiting state; in these three states the process resides in main memory
CPU scheduling
• Selects which of the ready processes runs on the CPU
• Needed for better performance
• The metric for performance may vary
• Multiple policies exist, according to the metric
Scheduling Queues
• Job queue: holds all processes in the new state
• Ready queue: holds all processes in the ready state
• Device queues: hold the processes in the blocked state
Scheduling Objectives
• Maximize throughput.
• Maximize number of users receiving acceptable
response times.
• Be predictable.
Types of Schedulers
• Long-term scheduler: new to ready
• Short-term scheduler: ready to running
• Medium-term scheduler: performs swapping; swapping a process out suspends it
CPU Scheduling
• Types: preemptive and non-preemptive
• Initial assumption: no process has I/O requirements
• Some terminology:
• Arrival time (AT): when the process arrives (is admitted by the LTS)
• Burst/Service time (BT): the CPU time required to execute the job, in milliseconds
• Completion time (CT): when the process completes execution
• Turnaround time (TAT): time from arrival until completion (TAT = CT - AT)
• Waiting time (WT): time spent waiting for the CPU in the ready queue (WT = TAT - BT)
• Response time: time from arrival to the first response on the CPU
• Scheduling length: max(CT) - min(AT)
• Throughput: number of processes executed per unit time
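For example, a process with AT = 2 and BT = 4 that finishes at CT = 9 has TAT = 9 - 2 = 7 and WT = 7 - 4 = 3.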
Scheduling Criteria
• CPU Utilization
• Throughput
• Turnaround time
• Waiting Time
• Response Time
FCFS (First Come, First Served) Scheduling
• Criteria: earliest arrival time first
• Mode: non-preemptive
Example: consider five processes with the following arrival and burst times.
Process   Arrival Time (AT)   Burst Time (BT)
P1        3                   4
P2        5                   3
P3        0                   2
P4        5                   1
P5        4                   3
Solution:
Gantt chart (FCFS order P3, P1, P5, P2, P4; the CPU idles from t = 2 to t = 3):
| P3 | idle | P1 | P5 | P2 | P4 |
0    2      3    7    10   13   14
Process   Completion Time (CT)   Turnaround Time (TAT = CT - AT)   Waiting Time (WT = TAT - BT)
P1        7                      7 - 3 = 4                         4 - 4 = 0
P2        13                     13 - 5 = 8                        8 - 3 = 5
P3        2                      2 - 0 = 2                         2 - 2 = 0
P4        14                     14 - 5 = 9                        9 - 1 = 8
P5        10                     10 - 4 = 6                        6 - 3 = 3
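From the table, average TAT = (4 + 8 + 2 + 9 + 6)/5 = 5.8 and average WT = (0 + 5 + 0 + 8 + 3)/5 = 3.2 time units.

The schedule can be checked with a small FCFS simulator, a sketch assuming ties in arrival time are broken by lower process number:

    #include <stdio.h>

    int main(void) {
        int at[] = {3, 5, 0, 5, 4};      /* arrival times of P1..P5 */
        int bt[] = {4, 3, 2, 1, 3};      /* burst times  of P1..P5 */
        int done[5] = {0}, time = 0;

        for (int k = 0; k < 5; k++) {
            int next = -1;
            for (int i = 0; i < 5; i++)  /* earliest arrival; ties: lower id */
                if (!done[i] && (next < 0 || at[i] < at[next]))
                    next = i;
            if (time < at[next]) time = at[next];   /* CPU idles until arrival */
            time += bt[next];
            done[next] = 1;
            printf("P%d: CT=%2d TAT=%2d WT=%2d\n",
                   next + 1, time, time - at[next], time - at[next] - bt[next]);
        }
        return 0;
    }

Running it reproduces the CT, TAT, and WT columns above.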
• Disadvantages of FCFS
• No option of preemption
• Convoy effect
Convoy Effect
• Only FCFS suffers from it
• If a slow process (one with a large service time) is scheduled first, it slows down the entire system
SJF (Shortest Job First)
• Criteria: minimum BT first
• Mode: non-preemptive
• Tie-breaker: FCFS
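Since the worked-example slides for SJF are image-only here, a small sketch of the selection rule (the helper name sjf_pick is an assumption): at each scheduling point, pick the arrived process with the smallest burst time, breaking ties by arrival (FCFS).

    /* returns index of the next process under non-preemptive SJF,
       or -1 if no process has arrived yet */
    int sjf_pick(int n, const int at[], const int bt[],
                 const int done[], int time) {
        int best = -1;
        for (int i = 0; i < n; i++) {
            if (done[i] || at[i] > time) continue;       /* not ready yet */
            if (best < 0 || bt[i] < bt[best] ||
                (bt[i] == bt[best] && at[i] < at[best])) /* tie-break: FCFS */
                best = i;
        }
        return best;
    }

This would be called in a loop like the FCFS simulator above, advancing time to the next arrival whenever it returns -1.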
Priority Scheduling
• Criteria: the CPU is allocated to the ready process with the highest priority
• A major problem is starvation of low-priority processes; a common solution is aging (gradually raising the priority of long-waiting processes)
Multilevel Queue
• The ready queue is partitioned into separate queues, e.g.:
• foreground (interactive)
• background (batch)
• Each process is permanently assigned to one queue
• Each queue has its own scheduling algorithm:
• foreground: RR
• background: FCFS
• Scheduling must also be done between the queues:
• Fixed-priority scheduling (i.e., serve all from foreground, then from background); possibility of starvation
• Time slice: each queue gets a certain amount of CPU time, which it can schedule among its processes; e.g., 80% to foreground in RR and 20% to background in FCFS
Multilevel Feedback Queue
• A process can move between the various queues
• A multilevel-feedback-queue scheduler is defined by the following parameters:
• number of queues
• scheduling algorithms for each queue
• method used to determine when to upgrade a process
• method used to determine when to demote a process
• method used to determine which queue a process will
enter when that process needs service
• Scheduling example with three queues Q0, Q1, and Q2 (see the sketch after this list):
• A new job enters queue Q0, which is served FCFS
• When it gains the CPU, the job receives 8 milliseconds
• If it does not finish in 8 milliseconds, the job is moved to queue Q1
• At Q1 the job is again served FCFS and receives 16 additional milliseconds
• If it still does not complete, it is preempted and moved to queue Q2
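A sketch of the demotion rule for this three-queue example (quantum values from the slide; the struct and function names are assumptions, and the queue plumbing is omitted):

    struct job { int remaining; };       /* remaining CPU burst, in ms */

    /* run a job once at queue level q (0, 1, or 2);
       returns the queue it belongs to next, or -1 if finished */
    int run_once(struct job *j, int q) {
        int quantum[] = {8, 16};         /* Q0: 8 ms, Q1: 16 ms */
        int slice = (q < 2) ? quantum[q] : j->remaining; /* Q2: FCFS, run to end */
        int used  = (j->remaining < slice) ? j->remaining : slice;

        j->remaining -= used;            /* simulate running on the CPU */
        if (j->remaining == 0) return -1;
        return (q < 2) ? q + 1 : 2;      /* unfinished after its quantum: demote */
    }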
Thread Scheduling
• Distinction between user-level and kernel-level threads
• When threads are supported, threads are scheduled, not processes
• In the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an LWP
• Known as process-contention scope (PCS), since the scheduling competition is within the process
• Typically done via a priority set by the programmer
• A kernel thread scheduled onto an available CPU uses system-contention scope (SCS): competition among all threads in the system
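POSIX pthreads exposes this choice through the contention-scope attribute: PTHREAD_SCOPE_PROCESS requests PCS and PTHREAD_SCOPE_SYSTEM requests SCS. A minimal sketch (note that Linux supports only system scope):

    #include <pthread.h>
    #include <stdio.h>

    void *work(void *arg) { return NULL; }

    int main(void) {
        pthread_t tid;
        pthread_attr_t attr;
        int scope;

        pthread_attr_init(&attr);
        pthread_attr_getscope(&attr, &scope);   /* query the default scope */
        printf("default scope: %s\n",
               scope == PTHREAD_SCOPE_SYSTEM ? "SCS" : "PCS");

        /* request system contention scope, then create the thread */
        pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
        pthread_create(&tid, &attr, work, NULL);
        pthread_join(tid, NULL);
        return 0;
    }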
Multiple-Processor Scheduling
• CPU scheduling is more complex when multiple CPUs are available
• Homogeneous processors within a multiprocessor
• Asymmetric multiprocessing: only one processor accesses the system data structures, alleviating the need for data sharing
• Symmetric multiprocessing (SMP): each processor is self-scheduling; all processes are in a common ready queue, or each processor has its own private queue of ready processes
• Currently the most common
• Processor affinity: a process has an affinity for the processor on which it is currently running
• soft affinity
• hard affinity
• Variations, including processor sets
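On Linux, hard affinity can be requested with sched_setaffinity(); a minimal Linux-specific sketch pinning the calling process to CPU 0:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                 /* allow CPU 0 only */
        /* pid 0 means the calling process */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned to CPU 0\n");
        return 0;
    }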
Multicore Processors
• Recent trend: multiple compute cores on a single physical chip; the OS sees each core as a separate processor to schedule
Inter-process Communication (IPC) Mechanisms
• Pipes (see the sketch below)
• Server-based (client-server) communication
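A minimal sketch of an ordinary (anonymous) pipe between a parent and a child process:

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];                        /* fd[0]: read end, fd[1]: write end */
        char buf[32];

        if (pipe(fd) == -1) { perror("pipe"); return 1; }

        if (fork() == 0) {                /* child: writer */
            close(fd[0]);
            const char *msg = "hello from child";
            write(fd[1], msg, strlen(msg) + 1);
            close(fd[1]);
            return 0;
        }
        close(fd[1]);                     /* parent: reader */
        read(fd[0], buf, sizeof(buf));
        printf("parent read: %s\n", buf);
        close(fd[0]);
        wait(NULL);
        return 0;
    }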
Advantages
• It permits one application to manage another application
Disadvantages
• As with a pipe, every data block has a maximum length
• The total data length of all blocks in the queue is also bounded
• All processes that use the shared-memory model must take care not to write to the same memory location at the same time
• The shared-memory model raises issues such as synchronization and memory protection that need to be addressed
• Compared to direct function calls, IPC is slower
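A minimal POSIX shared-memory sketch using shm_open() and mmap(). The object name /demo_shm is an arbitrary example, and synchronization is deliberately omitted, which is exactly the pitfall the list above warns about (link with -lrt on some systems):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const char *name = "/demo_shm";   /* arbitrary example name */
        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
        ftruncate(fd, 4096);              /* set the segment size */

        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        strcpy(p, "hello via shared memory"); /* any process that maps the  */
        printf("%s\n", p);                    /* same name sees this data   */

        munmap(p, 4096);
        close(fd);
        shm_unlink(name);                 /* remove the shared object */
        return 0;
    }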