OS Chapter Three
Running state- A process that is currently being executed by the CPU is in the running state.
Not running state- When the Operating System creates a new process, that process enters
the system in the not-running state; processes that are not running are kept in a queue,
waiting for their turn to execute.
3.2 Scheduling criteria
• CPU utilization – keep the CPU as busy as possible
• Throughput – number of processes that complete their execution per time unit
• Turnaround time – amount of time to execute a particular process
• Waiting time – amount of time a process has been waiting in the ready queue
• Response time – amount of time it takes from when a request was submitted until the
first response is produced, not output (for time-sharing environment)
Scheduling algorithm optimization criteria are:
• Maximum CPU utilization
• Maximum throughput
• Minimum turnaround time
• Minimum waiting time
• Minimum response time
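The relationship between these criteria can be illustrated with a short sketch. The arrival, burst, and completion values below are made-up numbers, used only to show that turnaround time = completion − arrival and waiting time = turnaround − burst:

```python
# Hypothetical workload: arrival, burst, and completion times are assumed
# values, chosen only to illustrate how the criteria relate to each other.
procs = {
    "P1": {"arrival": 0, "burst": 5, "completion": 5},
    "P2": {"arrival": 1, "burst": 3, "completion": 8},
    "P3": {"arrival": 2, "burst": 4, "completion": 12},
}

for p in procs.values():
    p["turnaround"] = p["completion"] - p["arrival"]  # total time in the system
    p["waiting"] = p["turnaround"] - p["burst"]       # time spent in the ready queue

avg_waiting = sum(p["waiting"] for p in procs.values()) / len(procs)
avg_turnaround = sum(p["turnaround"] for p in procs.values()) / len(procs)
print(round(avg_waiting, 2), round(avg_turnaround, 2))  # 3.33 7.33
```

Minimizing average waiting time and average turnaround time are exactly the optimization goals listed above.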
3.3 Schedulers
• Schedulers are special system software that handle process scheduling in
various ways.
• Their main task is to select the jobs to be submitted into the system and to
decide which process to run.
Preemptive scheduling –
• A process using the CPU can be removed (preempted) by the system.
• It is used when a process switches from running state to ready state or from
the waiting state to ready state
• The resources (mainly CPU cycles) are allocated to the process for a limited
amount of time and then taken away, and the process is again placed back in
the ready queue if that process still has CPU burst time remaining
• That process stays in the ready queue till it gets its next chance to execute.
• In preemptive scheduling, if a high-priority process frequently arrives in the
ready queue, then a low-priority process may have to wait a long time and may
starve.
• A process can be interrupted in the middle of its execution.
• It is a flexible scheduling type.
• In preemptive scheduling, CPU utilization is high.
• Algorithms based on preemptive scheduling are Round Robin (RR), Shortest
Remaining Time First (SRTF), priority (preemptive version), etc.
Non-preemptive scheduling –
• A process using the CPU cannot be removed by the system.
• It is used when a process terminates, or a process switches from running to the waiting
state.
• In this scheduling, once the resources (CPU cycles) are allocated to a process, the
process holds the CPU till it gets terminated or reaches a waiting state.
• In non-preemptive scheduling, the scheduler does not interrupt a process running on the
CPU in the middle of its execution. Instead, it waits until the process completes its CPU
burst, and then it can allocate the CPU to another process.
• In non-preemptive scheduling, if the CPU is allocated to a process with a large
burst time, then processes with small burst times may have to starve.
• A process cannot be interrupted; it runs until it terminates or moves to the waiting state.
• It is a rigid scheduling type.
• In non-preemptive scheduling, CPU utilization is low.
• Algorithms based on non-preemptive scheduling are First Come First Serve (FCFS),
Shortest Job First (SJF), priority (non-preemptive version), etc.
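The non-preemptive behaviour described above can be shown with a minimal First Come First Serve simulation: once a process gets the CPU it runs its whole burst, and later arrivals simply accumulate waiting time. The workload here is hypothetical:

```python
def fcfs(processes):
    """processes: list of (name, arrival, burst).
    Non-preemptive FCFS: each process runs to completion in arrival order."""
    time, waits = 0, {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)     # CPU may sit idle until the process arrives
        waits[name] = time - arrival  # time spent in the ready queue
        time += burst                 # runs its entire burst (no preemption)
    return waits

print(fcfs([("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]))
# {'P1': 0, 'P2': 23, 'P3': 25}
```

Note how one long burst at the front (P1) makes every later process wait, which is exactly the starvation risk for small-burst processes mentioned above.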
3.4 Types of Scheduler
Long Term Scheduler
• It is also called job scheduler.
• Long term scheduler determines which programs are admitted to the system for
processing.
• Job scheduler selects processes from the queue and loads them into memory for
execution.
• Process loads into the memory for CPU scheduling.
• The primary objective of the job scheduler is to provide a balanced mix of jobs,
such as I/O bound and processor bound.
• It also controls the degree of multiprogramming.
• If the degree of multiprogramming is stable, then the average rate of process
creation must be equal to the average departure rate of processes leaving the system
• When a process changes state from new to ready, the long-term scheduler is
used.
Short Term Scheduler
• It is also called CPU scheduler.
• Its main objective is to increase system performance in accordance with the chosen set of
criteria.
• It carries out the change of a process from the ready state to the running state.
• CPU scheduler selects process among the processes that are ready to execute and
allocates CPU to one of them.
• The short-term scheduler executes most frequently and makes the
fine-grained decision of which process to execute next.
• Short term scheduler is faster than long term scheduler.
• Dispatcher module gives control of the CPU to the process selected by the short-term
scheduler; this involves:
• switching context
• switching to user mode
• jumping to the proper location in the user program to restart that program
• Dispatch latency – time it takes for the dispatcher to stop one process and start another
running
Medium Term Scheduler
• Medium term scheduling is part of the swapping.
• It removes the processes from the memory.
• It reduces the degree of multiprogramming.
• It may decide to swap out a process which has not been active for some time, or a
process which has a low priority, or a process which is taking up a large amount of
memory in order to free up main memory for other processes.
• It swaps the process back in later when more memory is available, or when the
process has been unblocked and is no longer waiting for a resource.
• The medium-term scheduler is in charge of handling the swapped-out processes.
• Running process may become suspended if it makes an I/O request.
• Suspended processes cannot make any progress towards completion.
• In this condition, to remove the process from memory and make space for other
process, the suspended process is moved to the secondary storage.
• This process is called swapping, and the process is said to be swapped out or rolled
out.
• Swapping may be necessary to improve the process mix.
Diagrammatic representation of schedulers
3.5 Dispatcher
• The dispatcher's job is to allocate a processor to a process when a processor
becomes available; it is not itself involved in causing processors to become
available.
• The nature of the dispatcher's job depends on the balance between processes
requiring service and processors available for service
• In a single-processor, or tightly coupled multiprocessor, system with more
processes than processors, the dispatcher is used when a processor becomes
vacant to allocate the processor to some waiting process
• The dispatcher's job is to organize and manage the ready state.
• It must take a process from some pool of ready processes, and to set the
process running on an available processor
• Two components of dispatcher operation
A. Selecting a process to run.
• All ready processes are attached to the end of a single ready queue, and the
dispatcher always removes the process from the head of the queue and sets it
running.
B. Making the process run.
• Once the process is selected, the rest is easy.
• The dispatcher must carry out any necessary housekeeping on the ready
queue (such as detaching from it the process to be executed),
• mark the new active process as running,
• set the new process's register values in the machine registers,
• and finally branch to the appropriate location within the new process's
code.
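The two components above, selecting a process and making it run, can be sketched in a few lines. The process names and the single FIFO ready queue are assumptions for illustration:

```python
from collections import deque

ready_queue = deque(["P1", "P2", "P3"])  # hypothetical ready processes, FIFO order

def dispatch():
    """A: select the process at the head of the ready queue.
    B: do the housekeeping and mark it running (a real dispatcher would also
    restore the machine registers and branch into the process's code)."""
    if not ready_queue:
        return None
    proc = ready_queue.popleft()              # A: detach from the ready queue
    return {"pid": proc, "state": "running"}  # B: mark as the running process

first = dispatch()
print(first)  # {'pid': 'P1', 'state': 'running'}
```

Each call hands the CPU to the process at the head of the queue, mirroring the "always removes the process from the head of the queue" rule above.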
Difference between the Scheduler and Dispatcher
• The procedure of selecting a process from among various processes is done
by the scheduler.
• At that point, the task of the scheduler is complete.
• The dispatcher then comes into the picture: once the scheduler has decided on
a process for execution, it is the dispatcher that takes that process from the
ready queue to the running state; in other words, providing the CPU to that
process is the task of the dispatcher.
Example:
• There are 4 processes in the ready queue, i.e., P1, P2, P3, P4.
• They arrive at t0, t1, t2, t3 respectively.
• The First In First Out (FIFO) scheduling algorithm is used.
• So the scheduler decides that P1 came first, and it is to be executed first.
• The dispatcher then takes P1 to the running state.
• One of the major drawbacks of this scheme is that the average waiting time is
often quite long.
Example of RR with Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3
• Gantt chart:
| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30
• Waiting Time:
• P1: (10-4) = 6
• P2: (4-0) = 4
• P3: (7-0) = 7
• Completion Time:
• P1: 30
• P2: 7
• P3: 10
• Average Waiting Time: (6 + 4 + 7)/3 = 5.67
• Average Completion Time: (30 + 7 + 10)/3 = 15.67
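The example above can be checked with a small simulation. All three processes are assumed to arrive at time 0, as in the example:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round Robin for processes that all arrive at time 0.
    Returns the completion time of each process."""
    queue = deque(bursts)              # (name, remaining burst) pairs
    time, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)  # may finish before the quantum expires
        time += run
        if remaining > run:
            queue.append((name, remaining - run))  # back to the ready queue
        else:
            completion[name] = time
    return completion

done = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
print(done)  # {'P2': 7, 'P3': 10, 'P1': 30}
```

Since everything arrives at time 0, waiting time is completion minus burst, which reproduces 6, 4, and 7 for P1, P2, and P3.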
Example #2 of RR with Time Quantum = 20
• A process can finish before the time quantum expires and release the CPU.
• Waiting Time:
• P1: (68-20) + (112-88) = 72
• P2: (20-0) = 20
• P3: (28-0) + (88-48) + (125-108) = 85
• P4: (48-0) + (108-68) = 88
• Completion Time:
• P1: 125
• P2: 28
• P3: 153
• P4: 112
• Average Waiting Time: (72 + 20 + 85 + 88)/4 = 66.25
• Average Completion Time: (125 + 28 + 153 + 112)/4 = 104.5
D. Shortest remaining time first
• In this scheduling algorithm, the process with the smallest amount of time remaining
until completion is selected to execute.
• Since the currently executing process is the one with the shortest amount of time
remaining by definition, and since that time should only reduce as execution
progresses, processes will always run until they complete or a new process is added
that requires a smaller amount of time.
• Shortest remaining time is advantageous because short processes are handled very
quickly.
• The system also requires very little overhead since it only makes a decision when a
process completes or a new process is added, and when a new process is added the
algorithm only needs to compare the currently executing process with the new process,
ignoring all other processes currently waiting to execute.
• However, it has the potential for process starvation for processes which will require a
long time to complete if short processes are continually added, though this threat can be
minimal when process times follow a heavy-tailed distribution.
• Like shortest job first scheduling, shortest remaining time first scheduling is rarely used
outside of specialized environments because it requires accurate estimations of the
runtime of all processes that are waiting to execute.
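A unit-time simulation makes the preemption behaviour concrete. The three processes and their arrival/burst values below are invented for illustration:

```python
def srtf(processes):
    """processes: list of (name, arrival, burst).
    Preemptive shortest-remaining-time-first, advanced one time unit at a time."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: arr for name, arr, _ in processes}
    time, completion = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1                  # CPU idles until the next arrival
            continue
        n = min(ready, key=lambda x: remaining[x])  # smallest remaining time wins
        remaining[n] -= 1
        time += 1
        if remaining[n] == 0:
            del remaining[n]
            completion[n] = time
    return completion

print(srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2)]))
# {'P3': 4, 'P2': 7, 'P1': 14}
```

P1 is preempted as soon as P2 arrives with a shorter remaining time, and P2 in turn yields to P3: short processes finish quickly while the long one keeps getting pushed back, which is exactly the starvation risk described above.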
E. Priority Scheduling
• A CPU scheduling algorithm that schedules processes based on priority.
• It is used in operating systems for performing batch processes.
• If two jobs having the same priority are READY, it works on a FIRST COME,
FIRST SERVED basis.
• In priority scheduling, a number is assigned to each process that indicates its
priority level.
• The lower the number, the higher the priority.
• In this type of scheduling algorithm, if a newer process arrives, that is having a
higher priority than the currently running process, then the currently running
process is preempted.
• Preemptive: if a higher-priority process enters, it receives the CPU
immediately.
• Non-preemptive: higher-priority processes must wait until the current process
finishes; then the highest-priority ready process is selected.
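The non-preemptive version can be sketched in a few lines. The workload is hypothetical; lower number means higher priority, and ties are broken first come, first served (list order):

```python
def priority_schedule(processes):
    """processes: list of (name, priority, burst), all ready at time 0.
    Non-preemptive priority scheduling; lower number = higher priority,
    ties broken FCFS (by position in the input list)."""
    order = sorted(enumerate(processes), key=lambda t: (t[1][1], t[0]))
    time, completion = 0, {}
    for _, (name, prio, burst) in order:
        time += burst                  # runs to completion once selected
        completion[name] = time
    return completion

# (name, priority, burst) -- hypothetical workload
print(priority_schedule([("P1", 3, 10), ("P2", 1, 1), ("P3", 3, 2)]))
# {'P2': 1, 'P1': 11, 'P3': 13}
```

P2 runs first despite arriving with the others, and the equal-priority P1 and P3 keep their FCFS order, matching the tie-breaking rule above.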
F. Multiple-Level Queues Scheduling
• Multiple-level queues are not an independent scheduling algorithm.
• They make use of other existing algorithms to group and schedule jobs
with common characteristics.
• Multiple queues are maintained for processes with common characteristics.
• Each queue can have its own scheduling algorithms.
• Priorities are assigned to each queue. For example, CPU-bound jobs can be
scheduled in one queue and all I/O-bound jobs in another queue.
• The Process Scheduler then alternately selects jobs from each queue and assigns
them to the CPU based on the algorithm assigned to the queue
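One common inter-queue policy is fixed priority between queues (serve a higher-priority queue before a lower one). Below is a minimal sketch with made-up job names, two queues, and FCFS inside each queue:

```python
from collections import deque

# Two queues with a fixed priority between them: the system queue is always
# served before the interactive queue (a simple multilevel-queue policy).
system_q = deque(["sysd"])                   # e.g. system / CPU-bound jobs, FCFS
interactive_q = deque(["shell", "editor"])   # e.g. I/O-bound jobs, FCFS

def pick_next():
    """Drain the higher-priority queue first; each queue could run its own
    scheduling algorithm internally (here both happen to be FCFS)."""
    for q in (system_q, interactive_q):
        if q:
            return q.popleft()
    return None

schedule = [pick_next() for _ in range(3)]
print(schedule)  # ['sysd', 'shell', 'editor']
```

Real systems often add time-slicing between the queues instead of strict priority, so the lower queue cannot starve; this sketch shows only the simplest policy.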
3.7 Process and Thread
3.7.1 Process
• A process is a program in execution, together with its program state:
• CPU registers
• Stack pointer
• Only one process can be running on the CPU at any given time!
How can a process be created?
• Processes can be created in two ways
• System initialization: one or more processes created when the OS starts up
• Execution of a process creation system call: something explicitly asks for a
new process
• System calls can come from
• User request to create a new process (system call executed from user shell)
• Already running processes
• User programs
• System daemons
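A process-creation system call issued by an already running process can be demonstrated from Python; on Unix, subprocess wraps the underlying fork/exec calls (CreateProcess on Windows):

```python
import subprocess
import sys

# The parent (this script) asks the OS to create a child process running
# another Python interpreter; subprocess issues the system calls for us.
child = subprocess.run(
    [sys.executable, "-c", "print('hello from the child')"],
    capture_output=True, text=True,
)
print(child.stdout.strip())  # hello from the child
print(child.returncode)      # 0 -> normal (voluntary) exit
```

The return code illustrates the termination conditions in the next subsection: 0 signals a normal exit, while a non-zero code signals an error exit.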
When do processes end?
• Conditions that terminate processes can be
• Voluntary
• Involuntary
• Voluntary
• Normal exit
• Error exit
• Involuntary
• Fatal error (only sort of involuntary)
• Killed by another process
Process states
• A process is in one of 5 states:
• Created
• Ready
• Running
• Blocked (waiting)
• Exit
• Transitions between states:
• 1 - Process enters ready queue
• 2 - Scheduler picks this process
• 3 - Scheduler picks a different process
• 4 - Process waits for event (such as I/O)
• 5 - Event occurs
• 6 - Process exits
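The transitions listed above can be encoded as a small table and enforced by a tiny state machine. The lower-case state names are my own labels for the five states:

```python
# Legal transitions of the 5-state model, numbered as in the list above.
TRANSITIONS = {
    ("created", "ready"):   "process enters ready queue",        # 1
    ("ready", "running"):   "scheduler picks this process",      # 2
    ("running", "ready"):   "scheduler picks a different one",   # 3
    ("running", "blocked"): "waits for event (such as I/O)",     # 4
    ("blocked", "ready"):   "event occurs",                      # 5
    ("running", "exit"):    "process exits",                     # 6
}

class Process:
    def __init__(self):
        self.state = "created"

    def move(self, new_state):
        if (self.state, new_state) not in TRANSITIONS:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = Process()
for s in ("ready", "running", "blocked", "ready", "running", "exit"):
    p.move(s)
print(p.state)  # exit
```

Any move not in the table (for example created straight to running) raises an error, mirroring the fact that the diagram's arrows are the only legal transitions.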
Operations on the Process
1. Creation
• Once the process is created, it comes into the ready queue (main memory) and
is ready for execution.
2. Scheduling
• Out of the many processes present in the ready queue, the Operating System
chooses one process and starts executing it.
3. Execution
• Once the process is scheduled for the execution, the processor starts executing
it.
• The process may move to the blocked or wait state during execution; in that
case the processor starts executing the other processes.
4. Deletion/killing
• Once the purpose of the process is served, the OS kills the process.
• The context of the process (PCB) is deleted and the process is terminated by
the Operating System.
3.7.2 Thread
• The unit of execution is usually referred to as a thread or a “lightweight process”.
• A thread is a flow of execution through the process code,
• with its own program counter that keeps track of which instruction to execute next,
• system registers which hold its current working variables and stack which contains
the execution history.
• A thread shares with its peer threads information such as the code segment, data
segment, and open files.
• When one thread alters a code segment memory item, all other threads see that.
• A thread is also called a light weight process.
• Threads provide a way to improve application performance through parallelism.
• Threads represent a software approach to improve performance of operating system by
reducing the overhead
• A thread is sometimes described as a mini-process, or a process within a
process.
• Each thread belongs to exactly one process and no thread can exist outside a process.
Why do we need threads?
• They are easier (i.e., faster) to create and destroy than processes.
• In many systems, creating a thread goes 10–100 times faster than creating a
process. When the number of threads needed changes dynamically and
rapidly, this property is useful to have.
• Threads are useful on systems with multiple CPUs, where real parallelism
is possible.
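Because peer threads share the same data segment, concurrent updates to shared variables need synchronization. A minimal sketch, where several threads update one shared counter:

```python
import threading

counter = 0
lock = threading.Lock()

def worker():
    """Threads share the process's data segment: all of them see (and must
    synchronize access to) the same 'counter' variable."""
    global counter
    for _ in range(10_000):
        with lock:          # without the lock, concurrent updates could be lost
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```

This is the flip side of the point above that "when one thread alters a shared memory item, all other threads see that": sharing makes threads cheap to coordinate, but it also makes unsynchronized updates unsafe.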
Threads and Processes