CS Notes
Processes & Process Scheduling
Topics to Study
u Even on a single-user system, a user can run several programs at the
same time.
Context Switch
u When the CPU switches to another process, the system must save the
state of the old process and load the saved state for the new
process via a context switch
u Context of a process represented in the PCB
u Context-switch time is overhead; the system does no useful work
while switching
u The more complex the OS and the PCB ⇒ the longer the context
switch
u Time is dependent on hardware support
u Some hardware provides multiple sets of registers per CPU ⇒
multiple contexts loaded at once
Process Representation in Linux
The PCB in the Linux operating system is represented by the C structure task_struct, which contains fields such as:
pid_t pid;                      /* process identifier */
long state;                     /* state of the process */
unsigned int time_slice;        /* scheduling information */
struct task_struct *parent;     /* this process's parent */
struct list_head children;      /* this process's children */
struct files_struct *files;     /* list of open files */
struct mm_struct *mm;           /* address space of this process */
Within the Linux kernel, all active processes are represented using a doubly linked list of task_struct.
The pointer current points to the process currently executing.
u The state of the process currently running can be changed to the value new_state with the
following: current->state = new_state;
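As a rough illustration (a user-space sketch, not the actual kernel code, which uses struct list_head and macros such as for_each_process()), the following models a doubly linked list of PCB-like structures with a current pointer and changes the state of the running task:

/* Simplified user-space analogy of the kernel's task list.
   Field names loosely mirror task_struct; the real kernel differs. */
#include <stdio.h>

struct task {
    int pid;                  /* process identifier */
    long state;               /* state of the process */
    struct task *prev, *next; /* doubly linked list of all tasks */
};

struct task *current;         /* the task currently executing */

int main(void) {
    /* build a small circular, doubly linked list of three tasks */
    struct task t1 = { .pid = 1, .state = 0 };
    struct task t2 = { .pid = 2, .state = 0 };
    struct task t3 = { .pid = 3, .state = 0 };
    t1.next = &t2; t2.next = &t3; t3.next = &t1;
    t1.prev = &t3; t2.prev = &t1; t3.prev = &t2;

    current = &t2;
    current->state = 1;       /* analogous to current->state = new_state; */

    struct task *p = current; /* walk the whole list starting at current */
    do {
        printf("pid %d, state %ld\n", p->pid, p->state);
        p = p->next;
    } while (p != current);
    return 0;
}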
Threads
u So far, a process is a program that performs a single thread of
execution.
u I.e., a single thread of instructions is being executed. This allows the
process to perform only one task at a time.
u Most modern operating systems have extended the process
concept to allow a process to have multiple threads of execution
and thus to perform more than one task at a time.
u For example, when a process runs a word-processor program, the
user can simultaneously type characters, run the spell checker, and
auto-save, all within the same process.
u Here multiple threads can run in parallel.
u On a system that supports threads, the PCB is expanded to
include information for each thread, such as multiple program
counters.
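As a minimal sketch (not from the notes) of a single process with multiple threads of execution, the POSIX threads API can be used as below; the task names are purely illustrative:

/* One process, two concurrent threads (compile with -pthread). */
#include <pthread.h>
#include <stdio.h>

void *worker(void *arg) {
    printf("thread running task: %s\n", (const char *)arg);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    /* both threads share this process's address space and open files */
    pthread_create(&t1, NULL, worker, "spell check");
    pthread_create(&t2, NULL, worker, "auto-save");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}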
Process Scheduling
u Basic Concepts
u Scheduling Criteria
u Scheduling Algorithms
Basic Concepts
u In a single-processor system, only one process can run at a
time.
u A process is executed until it must wait, typically for the
completion of some I/O request. While it waits, the CPU just sits idle.
u All this waiting time is wasted; no useful work is accomplished.
u The objective of multiprogramming is to have some process
running at all times, to maximize CPU utilization.
u Several processes are kept in memory at one time.
u When one process has to wait, the operating system takes the CPU
away from that process and gives the CPU to another process.
u This pattern continues.
u Every time one process has to wait, another process can take over
use of the CPU.
Basic Concepts: CPU–I/O Burst Cycle
u Process execution consists of a cycle of CPU execution and I/O wait: a CPU burst
is followed by an I/O burst, then another CPU burst, and so on.
Scheduling Criteria
u Throughput – Number of processes that complete their execution per time unit
u Waiting time – the sum of the periods a process spends waiting in the ready queue
u Response time – the amount of time from when a request is submitted
until the first response is produced (not the time to output the full response)
Scheduling Criteria (contd)
u Max throughput
Scheduling Algorithms
u Priority scheduling
Shortest-Job-First (SJF) Scheduling
u Associates with each process the length of its next CPU burst
u When the CPU is available, it is assigned to the process that
has the smallest next CPU burst.
u If the next CPU bursts of two processes are the same, FCFS
scheduling is used to break the tie.
u SJF is also called the shortest-next-CPU-burst algorithm.
Process   Arrival Time   Burst Time
P1        0.0            6
P2        2.0            8
P3        4.0            7
P4        5.0            3
u SJF scheduling chart
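As a sketch of how such a chart is produced, the following program (an illustration, not part of the original notes) simulates non-preemptive SJF with FCFS tie-breaking on the four processes above and prints the resulting schedule and average waiting time:

/* Non-preemptive SJF simulation for the table above (illustrative sketch). */
#include <stdio.h>

struct proc { const char *name; double arrival, burst; int done; };

int main(void) {
    struct proc p[4] = {
        {"P1", 0.0, 6, 0}, {"P2", 2.0, 8, 0},
        {"P3", 4.0, 7, 0}, {"P4", 5.0, 3, 0},
    };
    double t = 0, total_wait = 0;
    int finished = 0;
    while (finished < 4) {
        int pick = -1;
        for (int i = 0; i < 4; i++)   /* shortest next burst among arrived processes */
            if (!p[i].done && p[i].arrival <= t &&
                (pick < 0 || p[i].burst < p[pick].burst))
                pick = i;
        if (pick < 0) { t += 1; continue; }   /* no process ready: CPU sits idle */
        printf("%s runs from %.1f to %.1f\n", p[pick].name, t, t + p[pick].burst);
        total_wait += t - p[pick].arrival;    /* time spent in the ready queue */
        t += p[pick].burst;
        p[pick].done = 1;
        finished++;
    }
    printf("average waiting time = %.2f\n", total_wait / 4);
    return 0;
}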
! Now we add the concepts of varying arrival times and preemption to the analysis
Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5
! At time 0, process P1 is started, since it is the only process in the queue.
! At time 1, Process P2 arrives.
Preemptive SJF Gantt Chart
[Gantt chart: P1 (0–1), P2 (1–5), P4 (5–10), P1 (10–17), P3 (17–26)]
Example of Shortest-remaining-time-first
Process   Arrival Time   Burst Time   Remaining time at t = 0 / 1 / 2 / 3
P1        0              8            8* / 7  / 7  / 7
P2        1              4            –  / 4* / 3* / 2*
P3        2              9            –  / –  / 9  / 9
P4        3              5            –  / –  / –  / 5
(* marks the process with the shortest remaining time, i.e. the one selected to run)
! The remaining time for process P1 (8-1= 7 milliseconds) is larger than the time
required by process P2 (4 milliseconds).
! So process P1 is preempted, and process P2 is scheduled.
! Average waiting time = [(10-1) + (1-1) + (17-2) + (5-3)]/4 = 26/4 = 6.5 msec
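The same schedule can be reproduced programmatically. The sketch below (illustrative, not from the notes) simulates shortest-remaining-time-first one time unit at a time and reports the dispatches and the 6.5 msec average waiting time computed above:

/* Preemptive SJF (shortest-remaining-time-first) simulation for the table above. */
#include <stdio.h>

struct proc { const char *name; int arrival, burst, remaining, finish; };

int main(void) {
    struct proc p[4] = {
        {"P1", 0, 8, 8, 0}, {"P2", 1, 4, 4, 0},
        {"P3", 2, 9, 9, 0}, {"P4", 3, 5, 5, 0},
    };
    int prev = -1;
    for (int t = 0; t < 26; t++) {   /* the four bursts total 26 time units */
        int pick = -1;
        for (int i = 0; i < 4; i++)  /* smallest remaining time among arrived processes */
            if (p[i].arrival <= t && p[i].remaining > 0 &&
                (pick < 0 || p[i].remaining < p[pick].remaining))
                pick = i;
        if (pick != prev) {          /* a different process is dispatched */
            printf("t = %2d: dispatch %s\n", t, p[pick].name);
            prev = pick;
        }
        if (--p[pick].remaining == 0)
            p[pick].finish = t + 1;
    }
    double total_wait = 0;
    for (int i = 0; i < 4; i++)      /* waiting time = turnaround time - burst time */
        total_wait += (p[i].finish - p[i].arrival) - p[i].burst;
    printf("average waiting time = %.2f\n", total_wait / 4);
    return 0;
}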
Priority Scheduling
u A priority number (integer) is associated with each process
u The CPU is allocated to the process with the highest priority
(smallest integer implies highest priority)
u Preemptive or Nonpreemptive
u SJF is a priority scheduling algorithm, where priority is the inverse of
predicted next CPU burst time
u Problem: Starvation or indefinite blocking
u a steady stream of higher-priority processes can prevent a low-
priority process from ever getting the CPU, leaving some low-
priority processes waiting indefinitely
u low priority processes may never execute
u Solution: Aging – gradually increasing the priority of processes that
wait in the system for a long time
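One simple way aging could be implemented is sketched below; the interval, priority range, and data structure are illustrative assumptions, not a specific operating system's policy (smaller integer = higher priority, as above):

/* Aging sketch: every AGING_INTERVAL ticks a waiting process's priority improves. */
#include <stdio.h>

#define AGING_INTERVAL 100            /* ticks between priority boosts (illustrative) */

struct pcb { const char *name; int priority; int waiting_ticks; };

void age(struct pcb *p, int n) {
    for (int i = 0; i < n; i++) {
        p[i].waiting_ticks++;
        if (p[i].waiting_ticks % AGING_INTERVAL == 0 && p[i].priority > 0)
            p[i].priority--;          /* move one step toward the highest priority */
    }
}

int main(void) {
    struct pcb ready[2] = { {"low", 127, 0}, {"high", 0, 0} };
    for (int tick = 0; tick < 12700; tick++)
        age(ready, 2);
    /* after waiting long enough, the low-priority process reaches the top priority */
    printf("%s now has priority %d\n", ready[0].name, ready[0].priority);
    return 0;
}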
Example of Priority Scheduling
Round Robin (RR) Scheduling
u Each process gets a small unit of CPU time (a time quantum q); when the quantum
expires, the process is preempted and added to the end of the ready queue.
u Performance
u If q is large ⇒ same as FIFO
u If q is small ⇒ results in a large number of context switches; overhead is too high
u Typically, higher average turnaround than SJF, but better response
u q should be large compared to context switch time
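As an illustrative calculation (not from the notes): if q = 10 ms and a context switch costs 0.1 ms, roughly 0.1 / (10 + 0.1) ≈ 1% of CPU time goes to switching overhead; shrinking q to 1 ms raises the overhead to about 0.1 / 1.1 ≈ 9%.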
Operations on Processes
[Figure: a tree of processes — init (pid = 1) at the root, with descendant processes including emacs (pid = 9204), tcsch (pid = 4005), and ps (pid = 9298)]
Process Creation (Cont.)
u Execution possibilities
u Parent and children execute concurrently
u Parent waits until children terminate
u A process that has terminated, but whose parent has not yet
called wait(), is known as a zombie process.
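A minimal sketch (not from the notes) of the second possibility: the parent creates a child with fork() and calls wait(), so the terminated child does not remain a zombie:

/* fork()/wait() sketch: the parent blocks until the child terminates. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); exit(1); }

    if (pid == 0) {                        /* child */
        printf("child pid %d running\n", (int)getpid());
        exit(0);                           /* child terminates here ... */
    }
    /* ... and stays a zombie only until the parent calls wait()/waitpid() */
    int status;
    waitpid(pid, &status, 0);
    printf("parent reaped child %d\n", (int)pid);
    return 0;
}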
Independent Processes
u A process is independent if it cannot affect or be affected by
the other processes executing in the system.
u Any process that does not share data with any other process is
independent.
Cooperating Processes
u A process is cooperating if it can affect or be affected by the
other processes executing in the system.
u Clearly, any process that shares data with other processes is a
cooperating process.
Advantages of Cooperating Processes
u Process cooperation can provide information sharing, computation speedup, modularity, and convenience.
[Figure: the two interprocess communication models — message passing, in which processes A and B exchange messages (m0, m1, m2, ..., mn) through a message queue maintained in the kernel, and shared memory, in which a region of memory is shared between process A and process B]
u Message passing is useful for exchanging smaller amounts of data
u Shared memory can be faster than message passing, since message-passing
systems are typically implemented using system calls
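A minimal sketch (illustrative, not from the notes) of shared-memory IPC using POSIX shared memory: a parent and child map the same object and communicate through it. The object name "/demo_shm" is an assumption; on some systems this needs -lrt at link time.

/* POSIX shared memory between a parent and its child (sketch). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    const size_t SIZE = 4096;
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, SIZE);                     /* set the size of the shared object */
    char *ptr = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    if (fork() == 0) {                       /* child acts as the producer */
        strcpy(ptr, "hello from the producer");
        _exit(0);
    }
    wait(NULL);                              /* parent acts as the consumer */
    printf("consumer read: %s\n", ptr);

    munmap(ptr, SIZE);
    shm_unlink("/demo_shm");                 /* remove the shared-memory object */
    return 0;
}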
Interprocess Communication – Shared Memory
Producer-Consumer Problem
[Figure: the shared circular buffer, with the in and out pointers marking the next free and first full positions]
u The shared buffer is implemented as a circular array with two logical pointers: in and out.
u in points to the next free position in the buffer;
u out points to the first full position in the buffer.
u The buffer is empty when in == out.
u The buffer is full when ((in + 1) % BUFFER_SIZE) == out.
u Solution is correct, but can only use BUFFER_SIZE-1 elements
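For reference, the producer and consumer below assume shared declarations along these lines (BUFFER_SIZE and the contents of item are placeholders):

#define BUFFER_SIZE 10

typedef struct {
    int data;              /* whatever makes up one item */
} item;

item buffer[BUFFER_SIZE];  /* circular array shared by producer and consumer */
int in = 0;                /* next free position */
int out = 0;               /* first full position */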
Producer - using Bounded-Buffer
item next_produced;

while (true) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing -- buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}
Consumer - using Bounded-Buffer
item next_consumed;

while (true) {
    while (in == out)
        ; /* do nothing -- buffer is empty */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;

    /* consume the item in next_consumed */
}
u Implementation issues:
u How are links established?
u Can a link be associated with more than two processes?
u How many links can there be between every pair of communicating
processes?
u What is the capacity of a link?
u Is the size of a message that the link can accommodate fixed or variable?
u Is a link unidirectional or bi-directional?
Message Passing (Cont.)
! The producer-consumer problem becomes trivial when we use blocking send() and receive() statements.
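A minimal sketch (illustrative, not from the notes) of that idea using POSIX message queues: mq_send() blocks when the queue is full and mq_receive() blocks when it is empty, so no explicit busy-waiting is needed. The queue name and message format are assumptions; link with -lrt on some systems.

/* Blocking producer/consumer over a POSIX message queue (sketch). */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    if (fork() == 0) {                       /* child: consumer */
        char msg[64];
        for (int i = 0; i < 3; i++) {
            mq_receive(mq, msg, sizeof(msg), NULL);   /* blocks until a message arrives */
            printf("consumed: %s\n", msg);
        }
        _exit(0);
    }

    for (int i = 0; i < 3; i++) {            /* parent: producer */
        char msg[64];
        snprintf(msg, sizeof(msg), "item %d", i);
        mq_send(mq, msg, strlen(msg) + 1, 0);         /* blocks if the queue is full */
    }
    wait(NULL);
    mq_unlink("/demo_mq");
    return 0;
}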