
Chapter Three

Scheduling and Dispatch


3.1 Scheduling
• Process scheduling is the activity of the process manager that handles the removal
of the running process from the CPU and the selection of another process according
to a particular strategy.
• Process scheduling is an essential part of a multiprogramming operating system.
• There are important process scheduling queues maintained by the Operating System:
• Job queue - This queue keeps all the processes in the system.
• Ready queue - This queue keeps set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
• I/O waiting queues - The processes which are blocked due to unavailability of an
I/O device constitute this queue.
• The OS scheduler determines how processes move between the ready and run
queues; the run queue can hold only one entry per processor core on the system.
Diagrammatic representation of
queues
There is a two-state process model:

 Running state - the process that is currently executing on the CPU. When the
Operating System creates a new process, that process enters the system in the
running state.

 Not-running state - processes that are not running are kept in a queue, waiting
for their turn to execute.

 CPU scheduling decisions may take place when a process:

• Switches from running to waiting state

• Switches from running to ready state

• Switches from waiting to ready state

• Terminates
3.2 Scheduling criteria
• CPU utilization – keep the CPU as busy as possible
• Throughput – number of processes that complete their execution per time unit
• Turnaround time – amount of time to execute a particular process
• Waiting time – amount of time a process has been waiting in the ready queue
• Response time – amount of time it takes from when a request was submitted until the
first response is produced, not output (for time-sharing environment)
Scheduling algorithm optimization criteria are:
• Maximum CPU utilization
• Maximum throughput
• Minimum turnaround time
• Minimum waiting time
• Minimum response time
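The arithmetic relationships among these criteria can be made concrete with a small sketch (the numbers below are illustrative, not from a specific example in this chapter):

```python
# For a single process (illustrative numbers, times in ms):
#   turnaround time = completion time - arrival time
#   waiting time    = turnaround time - CPU burst time
arrival, burst, completion = 0, 24, 30

turnaround = completion - arrival   # total time the process spends in the system
waiting = turnaround - burst        # time spent sitting in the ready queue

print(turnaround, waiting)          # prints: 30 6
```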
3.3 Schedulers
• Schedulers are special system software that handles process scheduling in
various ways.
• Their main task is to select the jobs to be submitted into the system and to
decide which process to run.
Preemptive scheduling –
• processes using the CPU can be removed by the system.
• It is used when a process switches from running state to ready state or from
the waiting state to ready state
• The resources (mainly CPU cycles) are allocated to the process for a limited
amount of time and then taken away, and the process is again placed back in
the ready queue if that process still has CPU burst time remaining
• That process stays in the ready queue till it gets its next chance to execute.
• In preemptive scheduling, if a high-priority process frequently arrives in the
ready queue, a low-priority process may have to wait a long time and can starve.
• A process can be interrupted in the middle of its execution.
• It is a flexible form of scheduling.
• In preemptive scheduling, CPU utilization is high.
• Algorithms based on preemptive scheduling are: Round Robin(RR), Shortest
Remaining Time First(SRTF), priority(preemptive version), etc
Non-preemptive scheduling –
• processes using the CPU cannot be removed by the system.
• It is used when a process terminates, or a process switches from running to the waiting
state.
• In this scheduling, once the resources (CPU cycles) are allocated to a process, the
process holds the CPU till it gets terminated or reaches a waiting state.
• In non-preemptive scheduling, the scheduler does not interrupt a process running on the
CPU in the middle of its execution. Instead, it waits until the process completes its CPU
burst, and only then allocates the CPU to another process.
• In the non-preemptive scheduling, if CPU is allocated to the process having a larger
burst time then the processes with small burst time may have to starve.
• A process cannot be interrupted until it terminates itself or switches to a waiting state.
• It is a rigid scheduling type.
• In non-preemptive scheduling, CPU utilization is low.
• Algorithms based on non-preemptive scheduling are: First Come First Serve, Shortest
Job First(SJF) and priority(non-preemptive version), etc.
3.4 Types of Scheduler
Long Term Scheduler
• It is also called job scheduler.
• Long term scheduler determines which programs are admitted to the system for
processing.
• Job scheduler selects processes from the queue and loads them into memory for
execution.
• The process is loaded into memory for CPU scheduling.
• The primary objective of the job scheduler is to provide a balanced mix of jobs,
such as I/O bound and processor bound.
• It also controls the degree of multiprogramming.
• If the degree of multiprogramming is stable, then the average rate of process
creation must be equal to the average departure rate of processes leaving the system
• When a process changes state from new to ready, the long-term scheduler is
used.
Short Term Scheduler
• It is also called CPU scheduler.
• Main objective is increasing system performance in accordance with the chosen set of
criteria.
• It carries out the change of the process from the ready state to the running state.
• CPU scheduler selects process among the processes that are ready to execute and
allocates CPU to one of them.
• The short-term scheduler, sometimes also called the dispatcher, executes most
frequently and makes the fine-grained decision of which process to execute next.
• Short term scheduler is faster than long term scheduler.
• Dispatcher module gives control of the CPU to the process selected by the short-term
scheduler; this involves:
• switching context
• switching to user mode
• jumping to the proper location in the user program to restart that program
• Dispatch latency – time it takes for the dispatcher to stop one process and start another
running
Medium Term Scheduler
• Medium term scheduling is part of the swapping.
• It removes the processes from the memory.
• It reduces the degree of multiprogramming.
• It may decide to swap out a process which has not been active for some time, or a
process which has a low priority, or a process which is taking up a large amount of
memory in order to free up main memory for other processes.
• Swapping the process back in later when more memory is available, or when the
process has been unblocked and is no longer waiting for a resource.
• The medium term scheduler is in charge of handling the swapped-out processes.
• Running process may become suspended if it makes an I/O request.
• Suspended processes cannot make any progress towards completion.
• In this condition, to remove the process from memory and make space for other
process, the suspended process is moved to the secondary storage.
• This process is called swapping, and the process is said to be swapped out or rolled
out.
• Swapping may be necessary to improve the process mix.
Diagrammatic representation of schedulers
3.5 Dispatcher
• The dispatcher's job is to allocate a processor to a process when a processor
becomes available; it has no involvement in causing processors to become
available.
• The nature of the dispatcher's job depends on the balance between processes
requiring service and processors available for service
• In a single-processor system, or a tightly coupled multiprocessor system with
more processes than processors, the dispatcher is invoked whenever a processor
becomes vacant, to allocate that processor to some waiting process.
• The dispatcher's job is to organize and manage the ready state.
• It must take a process from some pool of ready processes, and to set the
process running on an available processor
• Two components of dispatcher operation
A. Selecting a process to run.
• All ready processes are attached to the end of a single ready queue, and the
dispatcher always removes the process from the head of the queue and sets it
running.
B. Making the process run.
• Once the process is selected, the rest is easy.
• The dispatcher must carry out any necessary housekeeping on the ready
queue (such as detaching from it the process to be executed),
• mark the new active process as running,
• set the new process's register values in the machine registers,
• and finally branch to the appropriate location within the new process's
code.
Difference between the Scheduler and Dispatcher
• The scheduler carries out the procedure of selecting one process from among the
candidate processes.
• At that point, the scheduler's task is complete.
• The dispatcher then comes into the picture: once the scheduler has decided which
process to execute, the dispatcher takes that process from the ready queue to the
running state. In other words, handing the CPU to that process is the task of the
dispatcher.
Example:
• There are four processes in the ready queue: P1, P2, P3 and P4.
• They arrived at times t0, t1, t2 and t3 respectively.
• The First-In-First-Out (FIFO) scheduling algorithm is used.
• The scheduler decides that P1 came first, so it is to be executed first.
• The dispatcher then takes P1 to the running state.
Cont’d…

• The dispatcher should be as fast as possible, given that it is invoked
3.6 Scheduling Algorithms
A. First Come First Served (FCFS)
• First-Come-First-Served algorithm is the simplest scheduling algorithm.
• Whichever process requests the CPU first gets it first.
• It is implemented using a standard FIFO single queue.
• Processes are dispatched according to their arrival time on the ready queue.
• It is a non-preemptive discipline: once a process has the CPU, it runs to completion.
• The FCFS scheduling is fair in the formal sense or human sense of fairness but it is
unfair in the sense that long jobs make short jobs wait and unimportant jobs make
important jobs wait.
• FCFS is more predictable than most other schemes.
• FCFS scheme is not useful in scheduling interactive users because it cannot guarantee
good response time.
• Waiting time can be long and it depends heavily on the order in which
processes request CPU time

• The code for FCFS scheduling is simple to write and understand.

• One of the major drawbacks of this scheme is that the average waiting time is
often quite long.

• The First-Come-First-Served algorithm is rarely used as a master
scheme in modern operating systems, but it is often embedded within
other schemes.
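The FCFS discipline described above can be sketched in a few lines of Python; the function name and structure are illustrative, not code from this chapter:

```python
def fcfs(arrivals, bursts):
    """First-Come-First-Served sketch: processes listed in arrival
    order run to completion one after another (non-preemptive).
    Returns per-process waiting times."""
    time = 0
    waiting = []
    for arrive, burst in zip(arrivals, bursts):
        time = max(time, arrive)       # CPU may idle until the process arrives
        waiting.append(time - arrive)  # time spent sitting in the ready queue
        time += burst                  # run the full CPU burst to completion
    return waiting

print(fcfs([0, 0, 0], [24, 3, 3]))    # prints: [0, 24, 27]
```

With bursts of 24, 3 and 3 arriving together, the waiting times are 0, 24 and 27 (average 17): one long job makes the short jobs wait, illustrating why waiting time under FCFS depends heavily on arrival order.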
Scenario #1
Scenario #2
B. Shortest Job First (SJF)
• Most appropriately called Shortest Next CPU Burst First, because it bases the order
upon an estimate of how long the next CPU burst will be.
• This can be proven to be the optimal scheduling algorithm with the shortest average
processing (and waiting) time.
• Shortest Job First (SJF) is a scheduling policy that selects the waiting process with the
smallest execution time to execute next.
• Shortest job first is advantageous because of its simplicity and because it maximizes
process throughput (in terms of the number of processes run to completion in a given
amount of time).
• However, it has the potential for process starvation for processes which will require a
long time to complete if short processes are continually added.
• Shortest job next scheduling is rarely used outside of specialized environments because
it requires accurate estimations of the runtime of all processes that are waiting to
execute.
• It is provably optimal with respect to minimizing the average waiting time.
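The selection rule can be sketched as follows (an illustrative non-preemptive SJF simulation, with made-up names; ties are broken by arrival order):

```python
def sjf(arrivals, bursts):
    """Non-preemptive Shortest Job First sketch: among the processes
    that have arrived, run the one with the smallest CPU burst to
    completion. Returns per-process waiting times."""
    n = len(bursts)
    done = [False] * n
    time = 0
    waiting = [0] * n
    for _ in range(n):
        ready = [i for i in range(n) if not done[i] and arrivals[i] <= time]
        if not ready:  # nothing has arrived yet; jump to the next arrival
            time = min(arrivals[i] for i in range(n) if not done[i])
            ready = [i for i in range(n) if not done[i] and arrivals[i] <= time]
        j = min(ready, key=lambda k: (bursts[k], arrivals[k]))
        waiting[j] = time - arrivals[j]
        time += bursts[j]   # non-preemptive: run the burst to completion
        done[j] = True
    return waiting

print(sjf([0, 0, 0], [24, 3, 3]))   # prints: [6, 0, 3]
```

For the same workload as the FCFS example (bursts 24, 3, 3 arriving together), SJF yields waiting times of 6, 0 and 3 (average 3) versus FCFS's average of 17, illustrating why SJF minimizes average waiting time.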
C. Round Robin Scheduling
• Round-robin (RR) is one of the simplest scheduling algorithms for processes in an
operating system, which assigns time slices to each process in equal portions and in order,
handling all processes without priority.
• Round-robin scheduling is both simple and easy to implement, and starvation-free.
• Round-robin scheduling can also be applied to other scheduling problems, such as data
packet scheduling in computer networks.
• Round-robin job scheduling may not be desirable if the size of the jobs or tasks are
strongly varying.
• A process that produces large jobs would be favored over other processes. This problem
may be solved by time-sharing, i.e. by giving each job a time slot or quantum (its
allowance of CPU time), and interrupt the job if it is not completed by then.
• Each process is given a fixed time to execute, called a quantum, usually 10–100 ms.
• Once a process has executed for the given time period, it is preempted and another
process executes for its time period.
• The job is resumed next time a time slot is assigned to that process.
• Example: The time slot could be 100 milliseconds. If a job1 takes a total time of
250ms to complete, the round-robin scheduler will suspend the job after 100ms
and give other jobs their time on the CPU. Once the other jobs have had their
equal share (100ms each), job1 will get another allocation of CPU time and the
cycle will repeat. This process continues until the job finishes and needs no more
time on the CPU.
• Advantage: Fairness (each job gets an equal amount of the CPU)
• Disadvantage: Average waiting time can be bad (especially when the number of
processes is large)
Example #1 of RR with Time Quantum = 4
Process  Burst Time
P1       24
P2       3
P3       3

• The Gantt chart is:

P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1
0    4    7    10   14   18   22   26   30
Example of RR with Time Quantum = 4 (cont’d)
Process  Burst Time
P1       24
P2       3
P3       3
(Gantt chart: P1 P2 P3 P1 P1 P1 P1 P1, with boundaries 0, 4, 7, 10, 14, 18, 22, 26, 30)
• Waiting Time:
• P1: (10-4) = 6
• P2: (4-0) = 4
• P3: (7-0) = 7
• Completion Time:
• P1: 30
• P2: 7
• P3: 10
• Average Waiting Time: (6 + 4 + 7)/3 = 5.67
• Average Completion Time: (30 + 7 + 10)/3 = 15.67
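The waiting and completion times in the example above can be checked with a small simulation (a sketch assuming all processes arrive at time 0, as in the example; names are illustrative):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round-robin sketch: processes take turns on the CPU for at most
    one quantum, then rejoin the tail of the ready queue. Returns
    (waiting, completion) times; waiting = completion - own burst."""
    remaining = list(bursts)
    completion = [0] * len(bursts)
    time = 0
    queue = deque(range(len(bursts)))
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])  # a process may finish early
        time += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)               # preempted: back to the tail
        else:
            completion[i] = time
    waiting = [completion[i] - bursts[i] for i in range(len(bursts))]
    return waiting, completion

print(round_robin([24, 3, 3], 4))   # prints: ([6, 4, 7], [30, 7, 10])
```

This reproduces the figures on the slide: waiting times 6, 4, 7 (average 5.67) and completion times 30, 7, 10 (average 15.67).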
Example #2 of RR with Time Quantum = 20
• A process can finish before the time quantum expires, and release the CPU.
• (Burst times, implied by the figures below: P1 = 53, P2 = 8, P3 = 68, P4 = 24.)
• Waiting Time:
• P1: (68-20)+(112-88) = 72
• P2: (20-0) = 20
• P3: (28-0)+(88-48)+(125-108) = 85
• P4: (48-0)+(108-68) = 88
• Completion Time:
• P1: 125
• P2: 28
• P3: 153
• P4: 112
• Average Waiting Time: (72+20+85+88)/4 = 66.25
• Average Completion Time: (125+28+153+112)/4 = 104.5
D. Shortest remaining time first

• Shortest remaining time first is a method of CPU scheduling that is the preemptive
version of shortest job first scheduling.

• In this scheduling algorithm, the process with the smallest amount of time remaining
until completion is selected to execute.

• Since the currently executing process is the one with the shortest amount of time
remaining by definition, and since that time should only reduce as execution
progresses, processes will always run until they complete or a new process is added
that requires a smaller amount of time.
• Shortest remaining time is advantageous because short processes are handled very
quickly.
• The system also requires very little overhead since it only makes a decision when a
process completes or a new process is added, and when a new process is added the
algorithm only needs to compare the currently executing process with the new process,
ignoring all other processes currently waiting to execute.
• However, it has the potential for process starvation for processes which will require a
long time to complete if short processes are continually added, though this threat can be
minimal when process times follow a heavy-tailed distribution.
• Like shortest job first scheduling, shortest remaining time first scheduling is rarely used
outside of specialized environments because it requires accurate estimations of the
runtime of all processes that are waiting to execute.
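A one-time-unit simulation makes the preemptive policy concrete (a sketch; the process data in the usage line are a made-up example, not from this chapter):

```python
def srtf(arrivals, bursts):
    """Shortest Remaining Time First sketch, simulated one time unit
    at a time (simple but slow). At each tick the arrived process with
    the least time remaining runs; a new arrival with a shorter
    remaining time preempts. Returns per-process waiting times."""
    n = len(bursts)
    remaining = list(bursts)
    completion = [0] * n
    time, finished = 0, 0
    while finished < n:
        ready = [i for i in range(n) if remaining[i] > 0 and arrivals[i] <= time]
        if not ready:          # nothing has arrived yet; let the clock tick
            time += 1
            continue
        j = min(ready, key=lambda k: remaining[k])  # least time left wins
        remaining[j] -= 1
        time += 1
        if remaining[j] == 0:
            completion[j] = time
            finished += 1
    return [completion[i] - arrivals[i] - bursts[i] for i in range(n)]

print(srtf([0, 1, 2, 3], [8, 4, 9, 5]))   # prints: [9, 0, 15, 2]
```

In this made-up workload, P2 (burst 4) preempts P1 at time 1, and P1 does not resume until both P2 and P4 have finished, giving an average waiting time of (9+0+15+2)/4 = 6.5.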
E. Priority Scheduling
• A CPU scheduling algorithm that schedules processes based on priority.
• It is used in operating systems for scheduling batch processes.
• If two jobs having the same priority are READY, it works on a FIRST COME,
FIRST SERVED basis.
• In priority scheduling, a number is assigned to each process that indicates its
priority level.
• The lower the number, the higher the priority.
• In this type of scheduling algorithm, if a newer process arrives, that is having a
higher priority than the currently running process, then the currently running
process is preempted.
• Preemptive: if a higher-priority process enters, it receives the CPU
immediately.
• Non-preemptive: higher-priority processes must wait until the current process
finishes; then the highest-priority ready process is selected.
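The non-preemptive variant can be sketched as follows (illustrative names; lower number = higher priority, with FCFS among equal priorities, as stated above):

```python
def priority_np(arrivals, bursts, priorities):
    """Non-preemptive priority scheduling sketch: among arrived
    processes, run the one with the smallest priority number to
    completion. Returns per-process waiting times."""
    n = len(bursts)
    done = [False] * n
    time = 0
    waiting = [0] * n
    for _ in range(n):
        ready = [i for i in range(n) if not done[i] and arrivals[i] <= time]
        if not ready:  # CPU idles until the next arrival
            time = min(arrivals[i] for i in range(n) if not done[i])
            ready = [i for i in range(n) if not done[i] and arrivals[i] <= time]
        # smallest priority number wins; ties broken first-come, first-served
        j = min(ready, key=lambda k: (priorities[k], arrivals[k]))
        waiting[j] = time - arrivals[j]
        time += bursts[j]
        done[j] = True
    return waiting

print(priority_np([0, 0, 0], [10, 1, 2], [3, 1, 2]))   # prints: [3, 0, 1]
```

In this made-up example the highest-priority process (priority 1) runs first despite arriving at the same time as the others.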
F. Multiple-Level Queues Scheduling
• Multiple-level queues are not an independent scheduling algorithm.
• They make use of other existing algorithms to group and schedule jobs
with common characteristics.
• Multiple queues are maintained for processes with common characteristics.
• Each queue can have its own scheduling algorithms.
• Priorities are assigned to each queue. For example, CPU-bound jobs can be
scheduled in one queue and all I/O-bound jobs in another queue.
• The Process Scheduler then alternately selects jobs from each queue and assigns
them to the CPU based on the algorithm assigned to the queue
3.7 Process and Thread

3.7.1 Process

• A process is code, data, and stack.
• It usually (but not always) has its own address space.
• It includes the program state:
• CPU registers
• Program counter (current location in the code)
• Stack pointer
• Only one process can be running in the CPU at any given time!
How can a process be created?
• Processes can be created in two ways
• System initialization: one or more processes created when the OS starts up
• Execution of a process creation system call: something explicitly asks for a
new process
• System calls can come from
• User request to create a new process (system call executed from user shell)
• Already running processes
• User programs
• System daemons
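As an illustration of process creation through a system call, one process can ask the operating system to spawn another; a Python sketch (the child's command string is made up for this example):

```python
import subprocess
import sys

# The parent process asks the OS to create a child process; under the
# hood this is a process-creation system call (fork/exec on Unix,
# CreateProcess on Windows).
result = subprocess.run(
    [sys.executable, "-c", "print('child done')"],
    capture_output=True, text=True,
)

# The child performed a normal (voluntary) exit, so its return code is 0
# and its output is available to the parent.
print(result.returncode, result.stdout.strip())
```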
When do processes end?
• Conditions that terminate processes can be
• Voluntary
• Involuntary
• Voluntary
• Normal exit
• Error exit
• Involuntary
• Fatal error (only sort of involuntary)
• Killed by another process
Process states
• A process is in one of 5 states:
• Created
• Ready
• Running
• Blocked (waiting)
• Exit
• Transitions between states:
1 - Process enters ready queue
2 - Scheduler picks this process
3 - Scheduler picks a different process
4 - Process waits for event (such as I/O)
5 - Event occurs
6 - Process exits
Operations on the Process

1. Creation

• Once the process is created, it enters the ready queue (in main memory)
and is ready for execution.

2. Scheduling

• Out of the many processes present in the ready queue, the operating system
chooses one process and starts executing it.

• Selecting the process to be executed next is known as scheduling.

Cont’d…

3. Execution
• Once the process is scheduled for execution, the processor starts executing
it.
• The process may enter the blocked or wait state during execution; in that
case the processor starts executing other processes.

4. Deletion/killing
• Once the purpose of the process gets over, then the OS will kill the process.
• The Context of the process (PCB) will be deleted and the process gets
terminated by the Operating system.
3.7.2 Thread
• The unit of execution is usually referred to as a thread or a “lightweight process”.
• A thread is a flow of execution through the process code,
• with its own program counter that keeps track of which instruction to execute next,
• system registers which hold its current working variables and stack which contains
the execution history.
• A thread shares some information, such as the code segment, data segment
and open files, with its peer threads.
• When one thread alters a code segment memory item, all other threads see that.
• A thread is also called a light weight process.
• Threads provide a way to improve application performance through parallelism.
• Threads represent a software approach to improve performance of operating system by
reducing the overhead
• A thread is comparable to a classical process; it is also called a mini-process or a
process within a process.
• Each thread belongs to exactly one process and no thread can exist outside a process.
Why do we need threads?

• Because they are lighter weight than processes,

• They are easier (i.e., faster) to create and destroy than processes.

• In many systems, creating a thread goes 10–100 times faster than creating a
process. When the number of threads needed changes dynamically and
rapidly, this property is useful to have.

• Threads are useful on systems with multiple CPUs, where real parallelism
is possible.
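The sharing of data among threads in one process can be demonstrated with a short Python sketch (the function and variable names are illustrative):

```python
import threading

counter = 0
lock = threading.Lock()

def work(n):
    """Each thread increments the shared counter; threads in one
    process share the same data segment, so a lock is needed to keep
    the updates consistent."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

# Creating threads is cheap compared with creating whole processes.
threads = [threading.Thread(target=work, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # prints: 4000 -- all four threads updated the same variable
```

Because all four threads see the same `counter`, the final value is 4000; four separate processes would each have incremented their own private copy instead.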
Threads and Processes
(figure: single threading vs. multi-threading)
Difference between process and thread
Process:
• A process is heavy weight, or resource intensive.
• Process switching needs interaction with the operating system.
• In multiple processing environments, each process executes the same code but has its
own memory and file resources.
• If one process is blocked, then no other process can execute until the first process is
unblocked.
• Multiple processes without using threads use more resources.
• Each process operates independently of the others.
Thread:
• A thread is light weight, taking fewer resources than a process.
• Thread switching does not need to interact with the operating system.
• All threads can share the same set of open files and child processes.
• While one thread is blocked and waiting, a second thread in the same task can run.
• Multiple-threaded processes use fewer resources.
• One thread can read, write or change another thread's data.
Types of Thread
• Threads are implemented in the following two ways:
• User Level Threads – user-managed threads.
• Kernel Level Threads – operating-system-managed threads, acting on the kernel
(the operating system core).
User Level Threads
• In this case, the application manages the threads; the kernel is not aware of the existence of threads.
• The thread library contains code for creating and destroying threads, for passing
message and data between threads, for scheduling thread execution and for saving and
restoring thread contexts.
• The application begins with a single thread and begins running in that thread
Kernel Level Threads
• In this case, thread management is done by the kernel.
• There is no thread management code in the application area.
• Kernel threads are supported directly by the operating system. Any
application can be programmed to be multithreaded.
• All of the threads within an application are supported within a single process.
• The kernel maintains context information for the process as a whole and for
individual threads within the process.
• Scheduling by the Kernel is done on a thread basis.
• The Kernel performs thread creation, scheduling and management in Kernel
space.
• Kernel threads are generally slower to create and manage than the user
threads.
Difference between user and kernel level thread
User Level Threads:
• Faster to create and manage.
• Implementation is by a thread library at the user level.
• Generic; can run on any operating system.
• A multi-threaded application cannot take advantage of multiprocessing.
Kernel Level Threads:
• Slower to create and manage.
• The operating system supports creation of kernel threads.
• Specific to the operating system.
• Kernel routines themselves can be multithreaded.
3.8 Real time system
• A real-time operating system (RTOS) is intended to serve real-time applications that
process data without buffer delays.
• A common characteristic of many real-time systems is that their requirements
specification includes timing information in the form of deadlines.
• Real-time system is a time-bound system with well-defined and fixed time constraints,
and processing must be done within the defined constraints; otherwise, the system will
fail.
• Real-time systems are used in places where fast and timely responses are
required.
• Real-time operating systems involve a set of applications where the operations are
performed on time to run the activities in an external system.
• It uses the quantitative expression of time to analyze the system's performance.
• The quick response of the process is a must in real-time operating systems.
• There is no chance of any delay in completing any process because a little delay can
cause several dangerous issues.
• The deadline, in the context of a real-time system, is the moment in time by
which the job's execution must be completed. Most real-time operating
systems use a preemptive scheduling algorithm.
• Examples of real-time operating system
• The operating system of the microwave oven.
• The operating system of the Washing machine.
• The operating system of the airplane.
• The operating system of digital cameras and many more
• Real-time operating systems are divided into two types:
Hard real-time system
Soft real-time system
• Hard and Soft real-time systems are the variants of real-time systems where the
hard real-time system is more restrictive than the soft real-time system
 Hard real-time system
• The hard real-time system must assure to finish the real-time task within the specified
deadline.
• A hard real-time system treats its timeline as a strict deadline that must not be
missed under any circumstances.
• Hard Real-Time System must generate accurate responses to the events within the
specified time.
• A hard real-time system is a purely deterministic and time constraint system.
• For example, if the user expects the output for a given input within 5 seconds, the
system should process the input and produce the output exactly by the 5th second,
not by the 6th or the 4th second. Here, 5 seconds is the deadline for processing
the given data.
• In the hard real-time system, meeting the deadline is very important if the deadline is
not met, the system performance will fail.
Examples of Hard Real-Time Systems
• Flight Control Systems
• Missile Guidance Systems
• Weapons Defense System
• Medical System
• Inkjet printer system
• Railway signaling system
• Air traffic control systems
• Nuclear reactor control systems
• Anti-missile system
• Chemical plant control
• Autopilot System in Plane
• Pacemakers
Soft Real-Time System
• A soft real-time system is a system whose operation is degraded if results are not
produced according to the specified timing requirement.
• In a soft real-time system, meeting the deadline is not compulsory for every task, but
each process should still be processed and give its result.
• Even a soft real-time system cannot miss the deadline for every task; according to
its priority, a task may meet or miss its deadline.
• If a system is missing the deadline every time, the system's performance will be worse
and cannot be used by the users.
• The best example for the soft real-time system is a personal computer, audio and video
systems, etc.
• Soft real-time systems consider the processes as the main task and control the entire
task.
Examples of Soft Real-Time Systems
• Personal computer
• Audio and video systems
• DVD Players
• Weather Monitoring Systems
• Electronic games
• Multimedia system
• Web browsing
• Online transaction systems
• Telephone switches
• Virtual reality
• Mobile communication