
Republic of the Philippines

Mountain Province State Polytechnic College


Bontoc, Mountain Province

PROCESS SYNCHRONIZATION

Module 3 of 4 modules

PLATFORM TECHNOLOGIES

Brueckner B. Aswigue

Information Technology Department

1st Semester, SY 2020-2021


CP #: 0947-3621-495
[email protected]
INTRODUCTION

This module will discuss process synchronization and Central Processing Unit (CPU)
scheduling in detail. The first lesson concerns synchronization among concurrently
executing processes and covers mutual exclusion, the bounded-buffer problem, and the
readers/writers problem.

Another discussion is how the CPU schedules all the processes. It will introduce
the basic scheduling concepts and discuss CPU scheduling at length. First Come
First Serve (FCFS), Shortest Job First (SJF), Round-Robin, Priority, and the other
scheduling algorithms should become familiar. Gantt charts, simulations, and play acting
are valuable ways to get the ideas across.

This module presents the overview and application of process synchronization
and CPU scheduling. A PowerPoint presentation and an e-book will be given to you for
more details of the module and also as your references.

The number of hours allotted for this module is 14. You are
expected to finish the module in 7 weeks.

LEARNING OUTCOMES

At the end of the module, you should be able to:


a. describe comprehensively the overview and application of process
synchronization;
b. classify the several classical process-synchronization problems; and
c. demonstrate the basis for multiprogrammed operating systems in CPU scheduling
and the scheduling algorithms of several operating systems.

PRE-TEST

The following questions cover general areas of this module. You may not
know the answers to all questions, but please attempt to answer them without
asking others or referring to modules.
Choose the best answer for each question and write the letter of your choice
after the number.

1. The task of scheduling the execution of processes in a predefined order so that
computer operations proceed more smoothly
a. Operating System
b. Process Concepts
c. Process Synchronization
d. Cooperating Process
2. In Peterson's solution, atomic means
a. Cannot be interrupted
b. Can be interrupted
c. It will hold the load
d. It will interrupt the store
3. A synchronization tool that provides more sophisticated ways for processes to
synchronize their activities.
a. Semaphore
b. Mutex Lock
c. Peterson Solution
d. Synchronization Hardware

4. The indefinite blocking of a process is called
a. Deadlock
b. Starvation
c. Priority Inversion
d. Synchronization
5. The process-synchronization problem in which the participants spend their lives
alternating between thinking and eating
a. Dining-Philosophers problem
b. Reader-writers problem
c. Bounded-Buffer Problem
d. All of the above
6. Basis of multiprogramming operating systems
a. CPU and monitor
b. CPU scheduling
c. CPU multitasking
d. CPU utilization
7. Once a process starts running it keeps running until it finishes.
a. Nonpreemptive
b. Preemptive
c. Scheduling
d. I/O Burst
8. The time required by a process to complete execution
a. Finish time
b. Jobs
c. User
d. Burst time
9. The scheduling criterion concerned with keeping the CPU as busy as possible.
a. CPU utilization
b. Process
c. Job
d. Arrival Time
10. One measure of work is the number of processes that are completed per time
unit.
a. CPU utilization
b. Process
c. Job
d. Arrival Time

LESSON 1: Process Synchronization

Objectives:
At the end of the lesson, you should be able to:
a. understand the basics of process synchronization;
b. understand the concepts of critical-section problem and solutions for the
consistency of shared data;
c. analyze the software and hardware solutions of the critical-section problem;
d. understand several classical process-synchronization problems; and,
e. analyze accurately tools that are used to solve process synchronization problems.

Let’s Engage.
In the context of an operating system environment, access to a sharable
object needs a proper synchronization mechanism. Process synchronization is a
mechanism in which a proper time schedule is assigned among all the
cooperating processes. To implement this, a rule for using the sharable object is
needed: at any given time only one process gets a chance to work with the sharable
object, so the remaining cooperating processes have to wait. They become active only
after they get a chance to work with the sharable object. Preparing a time schedule for
this situation requires mechanisms such as mutual exclusion, progress, and bounded
waiting. Mutual exclusion is a technique that allows only one process at a time to
execute in its critical section.

A cooperating process is one that can affect or be affected by other processes
executing in the system. Cooperating processes can either directly share a logical
address space (that is, both code and data) or be allowed to share data only through
files or messages.

Concurrent access to shared data may result in data inconsistency, however. In
this module, we discuss various mechanisms to ensure the orderly execution of
cooperating processes that share a logical address space, so that data consistency is
maintained.

According to a Computer Science and Engineering research study (MIT Press, 1980),
designing correct routines for controlling concurrent activities proved to be one
of the most difficult aspects of systems programming. The ad hoc techniques used by
programmers of early multiprogramming and real-time systems were always vulnerable
to subtle programming errors whose effects could be observed only when certain
relatively rare sequences of actions occurred. The errors are particularly difficult to
locate, since the precise conditions under which they appear are very hard to reproduce.

(Operating System. http://www.os-book.com).

Information Technology may be at the cutting edge of the platform revolution in
technology, but as information makes its way out into the physical world through the
Internet of Things, all physical technologies will increasingly be recognized as platforms
and designed and operated as such. The smart grid will be a platform, the smart
airport will be a platform, the smart city, car, and house will be platforms, and even the
smart door handle will be a platform. These platform technologies are popularly known
as the operating systems installed on server, desktop, and mobile computers.

An operating system is software that manages the computer hardware. The
hardware must provide appropriate mechanisms to ensure the correct operation of the
computer system and to prevent user programs from interfering with the proper
operation of the system.

Internally, operating systems vary greatly in their makeup, since they are
organized along many different lines. The design of a new operating
system is a major task. It is important that the goals of the system be well
defined before the design begins. These goals form the basis for choices
among various algorithms and strategies.

PROCESS SYNCHRONIZATION
1. Background
• Processes can execute concurrently
o May be interrupted at any time, partially completing execution
• Concurrent access to shared data may result in data inconsistency
• Maintaining data consistency requires mechanisms to ensure the orderly
execution of cooperating processes

• Illustration of the problem:


Suppose that we wanted to provide a solution to the consumer-producer
problem that fills all the buffers. We can do so by having an integer counter that
keeps track of the number of full buffers. Initially, counter is set to 0. It is
incremented by the producer after it produces a new buffer and is decremented
by the consumer after it consumes a buffer.
Producer
while (true) {
/* produce an item in next produced */
while (counter == BUFFER_SIZE) ;
/* do nothing */
buffer[in] = next_produced;
in = (in + 1) % BUFFER_SIZE;
counter++;
}

Consumer
while (true) {
while (counter == 0)
; /* do nothing */
next_consumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
counter--;
/* consume the item in next consumed */
}

1. Although the producer and consumer routines shown above are correct
separately, they may not function correctly when executed concurrently. As an
illustration, suppose that the value of the variable counter is currently 5 and that
the producer and consumer processes concurrently execute the statements
“counter++” and “counter--”. Following the execution of these two statements,
the value of the variable counter may be 4, 5, or 6! The only correct result, though,
is counter == 5, which is generated correctly if the producer and consumer
execute separately.

Race Condition
2. A race condition occurs when several processes access and manipulate the same
data concurrently and the outcome of the execution depends on the particular order
in which the access takes place.
3. counter++ could be implemented as
register1=counter
register1=register1+1
counter = register1
4. counter-- could be implemented as

register2=counter
register2=register2-1
counter = register2
5. Consider this execution interleaving with “counter = 5” initially:
S0: producer execute register1 = counter {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = counter {register2 = 5}
S3: consumer execute register2 = register2 – 1 {register2 = 4}
S4: producer execute counter = register1 {counter = 6 }
S5: consumer execute counter = register2 {counter = 4}

To guard against the race condition above, we need to ensure that only one
process at a time can be manipulating the variable counter. To make such
a guarantee, we require that the processes be synchronized in some way.
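To make the race concrete, here is a minimal sketch (not part of the module's
pseudocode) in C with POSIX threads: one thread repeatedly increments the shared
counter while the other decrements it, with no synchronization, so the final value is
usually not zero. The function names and iteration count are illustrative assumptions.

/* race.c -- compile with: cc race.c -pthread */
#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

static long counter = 0;            /* shared variable, intentionally unprotected */

static void *producer_thread(void *arg)
{
    for (long k = 0; k < ITERATIONS; k++)
        counter++;                  /* load, increment, store: not atomic */
    return NULL;
}

static void *consumer_thread(void *arg)
{
    for (long k = 0; k < ITERATIONS; k++)
        counter--;                  /* interleaves with the producer's updates */
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&p, NULL, producer_thread, NULL);
    pthread_create(&c, NULL, consumer_thread, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("final counter = %ld (0 only if no race occurred)\n", counter);
    return 0;
}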

2. The Critical Section Problem


1. Consider system of n processes {p0, p1, … pn-1}
2. Each process has critical section segment of code
a. Process may be changing common variables, updating table, writing file,
etc
b. When one process in critical section, no other may be in its critical
section
3. Critical section problem is to design protocol to solve this
4. The important feature of the system is that, when one process is executing in its
critical section, no other process is allowed to execute in its critical section or we
say “no two processes are executing in their critical sections at the same time”.
5. Each process must ask permission to enter critical section in entry section,
may follow critical section with exit section, then remainder section

General structure of process Pi

6. Algorithm for Process Pi


do {
    while (turn == j)
        ; /* busy wait */
    /* critical section */
    turn = j;
    /* remainder section */
} while (true);

7. A solution to the critical-section problem must satisfy the following three
requirements:
1. Mutual Exclusion - If process Pi is executing in its critical section, then no
other processes can be executing in their critical sections.
2. Progress - If no process is executing in its critical section and there exist some
processes that wish to enter their critical section, then the selection of the
processes that will enter the critical section next cannot be postponed
indefinitely.

3. Bounded Waiting - A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made
a request to enter its critical section and before that request is granted.
i. Assume that each process executes at a nonzero speed
ii. No assumption concerning relative speed of the n processes

8. Critical-Section Handling in OS
• Two approaches depending on whether the kernel is preemptive or non-preemptive
▪ Preemptive – allows preemption of process when running in kernel
mode
▪ Non-preemptive – runs until exits kernel mode, blocks, or
voluntarily yields CPU
• Essentially free of race conditions in kernel mode
3. Peterson’s Solution
• Good algorithmic description of solving the problem
• Two process solution
• Assume that the load and store machine-language instructions are atomic; that
is, cannot be interrupted
• The two processes share two variables:
int turn;
Boolean flag[2]
• The variable turn indicates whose turn it is to enter the critical section
• The flag array is used to indicate if a process is ready to enter the critical section.
flag[i] = true implies that process Pi is ready!

• Algorithm for Process Pi


do {
flag[i] = true;
turn = j;
while (flag[j] && turn == j);
critical section
flag[i] = false;
remainder section
} while (true);
• Provable that the three CS requirements are met:
1. Mutual exclusion is preserved
Pi enters CS only if:
either flag[j] == false or turn == i
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met
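As a concrete illustration, the following is a minimal sketch of Peterson's algorithm for
two threads in C. C11 sequentially consistent atomics stand in for the assumption above
that load and store are atomic (plain variables would not be safe on modern hardware).
The worker function, loop count, and shared counter are illustrative, not part of the
module.

/* peterson.c -- compile with: cc peterson.c -pthread */
#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

static atomic_bool flag[2];             /* flag[i] = true: Pi wants to enter */
static atomic_int turn;                 /* whose turn it is to enter */
static long counter = 0;                /* shared data protected by the algorithm */

static void *worker(void *arg)
{
    int i = (int)(long)arg;             /* this thread's index: 0 or 1 */
    int j = 1 - i;                      /* the other thread's index */

    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);   /* entry section */
        atomic_store(&turn, j);
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                           /* busy wait */
        counter++;                      /* critical section */
        atomic_store(&flag[i], false);  /* exit section */
        /* remainder section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0);
    pthread_create(&t1, NULL, worker, (void *)1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}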

4. Synchronization Hardware
• Many systems provide hardware support for implementing the critical section
code.
• All solutions below based on idea of locking
o Protecting critical regions via locks
• Uniprocessors – could disable interrupts
o Currently running code would execute without preemption
o Generally too inefficient on multiprocessor systems
▪ Operating systems using this not broadly scalable
• Modern machines provide special atomic hardware instructions
o Atomic = non-interruptible
o Either test memory word and set value
o Or swap contents of two memory words

• Solution to Critical-section Problem Using Locks
do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);

• test_and_set Instruction
Definition:
boolean test_and_set (boolean *target)
{
boolean rv = *target;
*target = TRUE;
return rv;
}
1. Executed atomically
2. Returns the original value of passed parameter
3. Set the new value of passed parameter to “TRUE”.

• Solution using test_and_set()


Shared Boolean variable lock, initialized to FALSE
Solution:
do {
while (test_and_set(&lock))
; /* do nothing */
/* critical section */
lock = false;
/* remainder section */
} while (true);
• compare_and_swap Instruction
Definition:
int compare _and_swap(int *value, int expected, int new_value)
{
int temp = *value;
if (*value == expected)
*value = new_value;
return temp;
}
1. Executed atomically
2. Returns the original value of passed parameter “value”
3. Set the variable “value” the value of the passed parameter “new_value”
but only if “value” ==“expected”. That is, the swap takes place only
under this condition.

• Solution using compare_and_swap


o Shared integer “lock” initialized to 0;
o Solution:
do {
while (compare_and_swap(&lock, 0, 1) != 0)

; /* do nothing */

/* critical section */
lock = 0;
/* remainder section */
} while (true);
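To illustrate the idea in a portable way, here is a minimal sketch of a spinlock built on
the C11 atomic_flag type, whose atomic_flag_test_and_set() behaves like the
test_and_set() instruction defined above: it atomically sets the flag and returns its
previous value. The names spin_acquire and spin_release are illustrative.

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;     /* clear == unlocked */

static void spin_acquire(void)
{
    while (atomic_flag_test_and_set(&lock))
        ;   /* busy wait until the previous value was "clear" */
}

static void spin_release(void)
{
    atomic_flag_clear(&lock);
}

/* Usage:
 *   spin_acquire();
 *   ... critical section ...
 *   spin_release();
 */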

• Bounded-waiting Mutual Exclusion with test_and_set


do {
waiting[i] = true;
key = true;
while (waiting[i] && key)
key = test_and_set(&lock);
waiting[i] = false;
/* critical section */
j = (i + 1) % n;
while ((j != i) && !waiting[j])
j = (j + 1) % n;
if (j == i)
lock = false;
else
waiting[j] = false;
/* remainder section */
} while (true);

5. Mutex Locks
• Previous solutions are complicated and generally inaccessible to application
programmers
• OS designers build software tools to solve critical section problem
• Simplest is mutex lock
• Protect a critical section by first acquire() a lock then release() the lock
o Boolean variable indicating if lock is available or not
• Calls to acquire() and release() must be atomic
o Usually implemented via hardware atomic instructions
• But this solution requires busy waiting
o This lock therefore called a spinlock

• acquire() and release()


acquire() {
while (!available)
; /* busy wait */
available = false;
}
release() {
available = true;
}
do {
acquire lock
critical section
release lock
remainder section
} while (true);
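In application code, the acquire()/release() pair above is normally provided by a library
lock. A minimal sketch with a POSIX pthread mutex is shown below; unlike the spinlock,
a contended pthread mutex typically blocks the calling thread rather than busy waiting.
The deposit() function and shared balance are illustrative assumptions.

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_balance = 0;      /* illustrative shared data */

void deposit(int amount)
{
    pthread_mutex_lock(&lock);      /* acquire() */
    shared_balance += amount;       /* critical section */
    pthread_mutex_unlock(&lock);    /* release() */
    /* remainder section */
}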
6. Semaphore
o Synchronization tool that provides more sophisticated ways (than Mutex locks)
for process to synchronize their activities.

o Semaphore S – integer variable
o Can only be accessed via two indivisible (atomic) operations
o wait() and signal()
▪ Originally called P() and V()
o Definition of the wait() operation
wait(S) {
while (S <= 0)
; // busy wait
S--;
}
o Definition of the signal() operation
signal(S) {
S++;
}

1. Semaphore Usage
o Counting semaphore – integer value can range over an unrestricted
domain
o Binary semaphore – integer value can range only between 0 and 1
o Same as a mutex lock
o Can solve various synchronization problems
o Consider P1 and P2 that require S1 to happen before S2
Create a semaphore “synch” initialized to 0
P1:
    S1;
    signal(synch);
P2:
    wait(synch);
    S2;
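A minimal sketch of this S1-before-S2 ordering with POSIX semaphores is shown below:
synch is initialized to 0, so the thread running S2 blocks in sem_wait() until the thread
running S1 calls sem_post(). The printf calls stand in for the statements S1 and S2.

/* order.c -- compile with: cc order.c -pthread */
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t synch;

static void *run_p1(void *arg)
{
    printf("S1 executes first\n");      /* statement S1 */
    sem_post(&synch);                   /* signal(synch) */
    return NULL;
}

static void *run_p2(void *arg)
{
    sem_wait(&synch);                   /* wait(synch) blocks until S1 is done */
    printf("S2 executes after S1\n");   /* statement S2 */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&synch, 0, 0);             /* shared between threads, initial value 0 */
    pthread_create(&t2, NULL, run_p2, NULL);
    pthread_create(&t1, NULL, run_p1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&synch);
    return 0;
}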
1. Can implement a counting semaphore S as a binary semaphore

2. Semaphore Implementation
o Must guarantee that no two processes can execute the wait() and signal()
on the same semaphore at the same time
o Thus, the implementation becomes the critical section problem where the
wait and signal code are placed in the critical section
o Could now have busy waiting in critical section implementation
a. But implementation code is short
b. Little busy waiting if critical section rarely occupied
o Note that applications may spend lots of time in critical sections and
therefore this is not a good solution

o Semaphore Implementation with no Busy waiting


o With each semaphore there is an associated waiting queue
o Each entry in a waiting queue has two data items:
a. value (of type integer)
b. pointer to next record in the list
o Two operations:
a. block – place the process invoking the operation on the
appropriate waiting queue
b. wakeup – remove one of the processes in the waiting queue and
place it in the ready queue
o typedef struct{

int value;
struct process *list;
} semaphore;
wait(semaphore *S) {
S->value--;
if (S->value < 0) {
add this process to S->list;
block();
}
}
signal(semaphore *S) {
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P);
}
}
3. Deadlock and Starvation
o Deadlock – two or more processes are waiting indefinitely for an
event that can be caused by only one of the waiting processes
o Let S and Q be two semaphores initialized to 1
P0 P1
wait(S); wait(Q);
wait(Q); wait(S);
... ...
signal(S); signal(Q);
signal(Q); signal(S);
o Starvation – indefinite blocking
o A process may never be removed from the semaphore queue in
which it is suspended
o Priority Inversion – Scheduling problem when lower-priority
process holds a lock needed by higher-priority process
o Solved via priority-inheritance protocol

7. Classical Problems of Synchronization


• These problems are used for testing nearly every newly proposed synchronization
scheme. In our solutions to the problems, we use semaphores for
synchronization, since that is the traditional way to present such solutions.
However, actual implementations of these solutions could use mutex locks in
place of binary semaphores.
• Classical problems used to test newly-proposed synchronization schemes
o Bounded-Buffer Problem
o Readers and Writers Problem
o Dining-Philosophers Problem
1. Bounded-Buffer Problem
• n buffers, each can hold one item
• Semaphore mutex initialized to the value 1
• Semaphore full initialized to the value 0
• Semaphore empty initialized to the value n

• The structure of the producer process
do {
...
/* produce an item in next_produced */
...
wait(empty);
wait(mutex);
...
/* add next produced to the buffer */
...
signal(mutex);
signal(full);
} while (true);

• The structure of the consumer process

do {
wait(full);
wait(mutex);
...
/* remove an item from buffer to next_consumed */
...
signal(mutex);
signal(empty);
...
/* consume the item in next consumed */
...
} while (true);
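The pseudocode above can be realized directly with POSIX primitives. The following
minimal sketch uses two counting semaphores for the empty and full slots and a
pthread mutex in place of the binary semaphore mutex; BUFFER_SIZE, the int items,
and the function names are illustrative assumptions.

#include <pthread.h>
#include <semaphore.h>

#define BUFFER_SIZE 10

static int buffer[BUFFER_SIZE];
static int in = 0, out = 0;

static sem_t empty_slots;           /* initialize to BUFFER_SIZE with sem_init() */
static sem_t full_slots;            /* initialize to 0 with sem_init() */
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void produce(int item)
{
    sem_wait(&empty_slots);         /* wait(empty) */
    pthread_mutex_lock(&mutex);     /* wait(mutex) */
    buffer[in] = item;              /* add the produced item to the buffer */
    in = (in + 1) % BUFFER_SIZE;
    pthread_mutex_unlock(&mutex);   /* signal(mutex) */
    sem_post(&full_slots);          /* signal(full) */
}

int consume(void)
{
    sem_wait(&full_slots);          /* wait(full) */
    pthread_mutex_lock(&mutex);     /* wait(mutex) */
    int item = buffer[out];         /* remove an item from the buffer */
    out = (out + 1) % BUFFER_SIZE;
    pthread_mutex_unlock(&mutex);   /* signal(mutex) */
    sem_post(&empty_slots);         /* signal(empty) */
    return item;
}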

2. Readers-Writers Problem
• A data set is shared among a number of concurrent processes
o Readers – only read the data set; they do not perform any updates
o Writers – can both read and write
• Problem – allow multiple readers to read at the same time
o Only one single writer can access the shared data at the same time
• Several variations of how readers and writers are considered – all involve
some form of priorities
• Shared Data
o Data set
o Semaphore rw_mutex initialized to 1
o Semaphore mutex initialized to 1
o Integer read_count initialized to 0

• The structure of a writer process


do {
wait(rw_mutex);
...
/* writing is performed */
...
signal(rw_mutex);
} while (true);

• The structure of a reader process
do {
wait(mutex);
read_count++;
if (read_count == 1)
wait(rw_mutex);
signal(mutex);
...
/* reading is performed */
...
wait(mutex);
read_count--;
if (read_count == 0)
signal(rw_mutex);
signal(mutex);
} while (true);

Readers-Writers Problem Variations


• First variation – no reader kept waiting unless writer has permission
to use shared object
• Second variation – once writer is ready, it performs the write ASAP
• Both may have starvation leading to even more variations
• Problem is solved on some systems by kernel providing reader-writer
locks
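As the last bullet notes, some systems expose reader-writer locks directly. A minimal
sketch using the POSIX pthread_rwlock_t is shown below; the kernel-provided lock
replaces the rw_mutex/mutex/read_count bookkeeping of the semaphore solution. The
shared_data variable and function names are illustrative.

#include <pthread.h>

static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
static int shared_data = 0;         /* the shared data set */

int reader(void)
{
    pthread_rwlock_rdlock(&rw);     /* many readers may hold the lock at once */
    int value = shared_data;        /* reading is performed */
    pthread_rwlock_unlock(&rw);
    return value;
}

void writer(int value)
{
    pthread_rwlock_wrlock(&rw);     /* a writer gets exclusive access */
    shared_data = value;            /* writing is performed */
    pthread_rwlock_unlock(&rw);
}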

3. Dining-Philosophers Problem
• Philosophers spend their lives alternating thinking and eating
• Don’t interact with their neighbors, occasionally try to pick up 2
chopsticks (one at a time) to eat from bowl
o Need both to eat, then release both when done
• In the case of 5 philosophers
o Shared data
▪ Bowl of rice (data set)
▪ Semaphore chopstick [5] initialized to 1

• Dining-Philosophers Problem Algorithm


• The structure of Philosopher i:
do {
wait (chopstick[i] );
wait (chopstick[ (i + 1) % 5] );

// eat
signal (chopstick[i] );
signal (chopstick[ (i + 1) % 5] );

// think
} while (TRUE);
• What is the problem with this algorithm?
o Deadlock handling
▪ Allow at most 4 philosophers to be sitting simultaneously at
the table.

▪ Allow a philosopher to pick up the chopsticks only if both are
available (the picking must be done in a critical section).
▪ Use an asymmetric solution: an odd-numbered philosopher picks
up first the left chopstick and then the right chopstick, while an
even-numbered philosopher picks up first the right chopstick and
then the left chopstick (see the sketch below).
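Here is a minimal sketch of that asymmetric solution, with each chopstick represented
by a POSIX semaphore initialized to 1 (via sem_init elsewhere): odd-numbered
philosophers pick up the left chopstick first, even-numbered philosophers the right one
first, which prevents the circular wait. The philosopher() function is illustrative.

#include <semaphore.h>

#define N 5
static sem_t chopstick[N];          /* each initialized to 1 with sem_init() */

void philosopher(int i)
{
    int left   = i;
    int right  = (i + 1) % N;
    int first  = (i % 2 == 1) ? left  : right;   /* odd: left first */
    int second = (i % 2 == 1) ? right : left;    /* even: right first */

    for (;;) {
        /* think */
        sem_wait(&chopstick[first]);     /* pick up the first chopstick */
        sem_wait(&chopstick[second]);    /* pick up the second chopstick */
        /* eat */
        sem_post(&chopstick[second]);    /* put down both chopsticks */
        sem_post(&chopstick[first]);
    }
}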

• Problems with Semaphores


o Incorrect use of semaphore operations:
▪ signal (mutex) …. wait (mutex)
▪ wait (mutex) … wait (mutex)
▪ Omitting of wait (mutex) or signal (mutex) (or both)
o Deadlock and starvation are possible.

8. Monitors
• A high-level abstraction that provides a convenient and effective mechanism for
process synchronization
• Abstract data type, internal variables only accessible by code within the
procedure
• Only one process may be active within the monitor at a time
• But not powerful enough to model some synchronization schemes
monitor monitor-name
{
    // shared variable declarations
    procedure P1 (…) { …. }
    …
    procedure Pn (…) { …. }
    initialization code (…) { … }
}
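C has no built-in monitor construct, but the same discipline can be approximated with a
mutex (so only one thread is active inside the "monitor" at a time) plus a condition
variable for waiting on a monitor condition. The following is a minimal sketch under
those assumptions; the counter, LIMIT, and function names are illustrative and not from
the module.

#include <pthread.h>

static pthread_mutex_t monitor_lock   = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  slot_available = PTHREAD_COND_INITIALIZER;
static int count = 0;
static const int LIMIT = 10;

void enter(void)                         /* a monitor "procedure" */
{
    pthread_mutex_lock(&monitor_lock);   /* only one thread active in the monitor */
    while (count == LIMIT)
        pthread_cond_wait(&slot_available, &monitor_lock);  /* wait on the condition */
    count++;
    pthread_mutex_unlock(&monitor_lock);
}

void leave(void)                         /* another monitor "procedure" */
{
    pthread_mutex_lock(&monitor_lock);
    count--;
    pthread_cond_signal(&slot_available);   /* wake one waiting thread */
    pthread_mutex_unlock(&monitor_lock);
}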

Additional Notes:
1. Solaris Synchronization
• Implements a variety of locks to support multitasking, multithreading
(including real-time threads), and multiprocessing
• Uses adaptive mutexes for efficiency when protecting data from short code
segments
o Starts as a standard semaphore spin-lock
o If lock held, and by a thread running on another CPU, spins
o If lock held by non-run-state thread, block and sleep waiting for signal
of lock being released
• Uses condition variables
• Uses readers-writers locks when longer sections of code need access to data
• Uses turnstiles to order the list of threads waiting to acquire either an
adaptive mutex or reader-writer lock
o Turnstiles are per-lock-holding-thread, not per-object
• Priority-inheritance per-turnstile gives the running thread the highest of the
priorities of the threads in its turnstile

2. Windows XP Synchronization
• Uses interrupt masks to protect access to global resources on uniprocessor
systems
• Uses spinlocks on multiprocessor systems
a. A spinlocking thread will never be preempted

• Also provides dispatcher objects in user-land which may act as mutexes,
semaphores, events, and timers
a. Events
i. An event acts much like a condition variable
b. Timers notify one or more threads when the time has expired
c. Dispatcher objects are in either a signaled state (object available) or a
non-signaled state (thread will block)
3. Linux Synchronization
• Linux:
a. Prior to kernel Version 2.6, disables interrupts to implement short critical
sections
b. Version 2.6 and later, fully preemptive
• Linux provides:
a. semaphores
b. spinlocks
c. reader-writer versions of both
• On single-cpu system, spinlocks replaced by enabling and disabling kernel
preemption

If you want to know more interesting facts about this module, visit the
following:
• Operating system. http://www.wiley.com/college and clicking
“Who’s my rep?”
• Operating System. http://www.os-book.com.
• It may be a good idea to review the basic concepts of machine
organization and assembly language programming. You should be
comfortable with the concepts of memory, CPU, registers, I/O,
interrupts, instructions, and the instruction execution cycle. Since
the operating system is the interface between the hardware and
user programs, a good understanding of operating systems
requires an understanding of both hardware and programs.
• The powerpoint presentation will be additional references for you
to elaborate some of the topics see the attachment.

IT’S YOUR TURN:

Direction: Read the passage carefully and plan what you will write. Place your answer
on pad paper, whether yellow or white, to be submitted. The essay rubric has
corresponding points that will guide you in your essay. Explain how it is possible for
static methods also to be declared as synchronized.

Features: 9-10 points (Expert), 7-8 points (Accomplished), 4-6 points (Capable), 1-3 points (Beginner)

Understanding: Expert – writing shows strong understanding; Accomplished – writing shows a
clear understanding; Capable – writing shows adequate understanding; Beginner – writing shows
little understanding.

Quality of Writing: Expert – piece was written in an extraordinary style, very informative and
well-organized; Accomplished – piece was written in an interesting style, somewhat informative
and organized; Capable – piece had little style, gives some new information but poorly organized;
Beginner – piece had no style, gives no new information and very poorly organized.

Grammar, Usage & Mechanics: Expert – virtually no spelling, punctuation or grammatical errors;
Accomplished – few spelling and punctuation errors, minor grammatical errors; Capable – a
number of spelling, punctuation or grammatical errors; Beginner – so many spelling, punctuation
and grammatical errors that it interferes with the meaning.

Question: In this lesson, we used the synchronized statement with instance methods.
Calling an instance method requires associating the method with an object. Entering a
synchronized method requires owning the object’s lock. Static methods are unlike
instance methods in that they do not require association with an object when they are
called.
“Construct your determination with Sustained Effort,
Controlled Attention and Concentrated Energy, Opportunities never come to those who
wait… they are captured by those who dare to attack” – Paul J. Meyer

Lesson 2: CPU Scheduling

Objectives:
At the end of the lesson, you should be able to:
1. distinguish precisely the basis for multiprogrammed OS in CPU scheduling;
2. formulate thoroughly various CPU-scheduling algorithms; and
3. analyze precisely the scheduling algorithms of several operating systems.

CPU scheduling is the basis of multiprogrammed operating systems. By switching
the CPU among processes, the operating system can make the computer more
productive. In this lesson, we introduce basic CPU-scheduling concepts and present
several CPU-scheduling algorithms. We also consider the problem of selecting an
algorithm for a particular system.

CPU scheduling is the process of determining which process will own the CPU for
execution while another process is on hold. The main task of CPU scheduling is to make
sure that whenever the CPU remains idle, the OS selects one of the processes
available in the ready queue for execution. The selection is carried out by
the CPU scheduler, which selects one of the processes in memory that are ready for
execution.

In previous lessons, we introduced threads to the process model. On
operating systems that support them, it is kernel-level threads—not processes—that are
in fact being scheduled by the operating system. However, the terms "process
scheduling" and "thread scheduling" are often used interchangeably.

In this lesson, we use process scheduling when discussing general scheduling
concepts and thread scheduling to refer to thread-specific ideas.

1. Basic Concepts
• Maximum CPU utilization obtained with multiprogramming

• CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution
and I/O wait
• CPU burst followed by I/O burst
• CPU burst distribution is of main concern

Figure 1. Alternating sequence of CPU and I/O bursts (Operating System. http://www.os-book.com).

1. CPU–I/O Burst Cycle


The success of CPU scheduling depends on an observed property of
processes: process execution consists of a cycle of CPU execution and I/O
wait. Processes alternate between these two states. Process execution begins
with a CPU burst. That is followed by an I/O burst, which is followed by
another CPU burst, then another I/O burst, and so on. Figure 2 shows
Histogram of CPU-burst Times where the final CPU burst ends with a system
request to terminate execution.

Figure 2. Histogram of CPU-burst durations (Operating System. http://www.os-book.com).

2. CPU Scheduler
• Short-term scheduler selects from among the processes in ready queue, and
allocates the CPU to one of them
o Queue may be ordered in various ways
• CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
• Scheduling under 1 and 4 is nonpreemptive or cooperative.

• Under these conditions, once a process starts running it keeps running,
until it either voluntarily blocks or until it finishes. Otherwise the system is
said to be preemptive.
• All other scheduling is preemptive
o Consider access to shared data
o Consider preemption while in kernel mode
o Consider interrupts occurring during crucial OS activities

a. Dispatcher
• Dispatcher module gives control of the CPU to the process selected by the
short-term scheduler; this involves:
o switching context
o switching to user mode
o jumping to the proper location in the user program to restart that
program
• Dispatch latency – time it takes for the dispatcher to stop one process and
start another running.
• The dispatcher should be as fast as possible, given that it is invoked during
every process switch. The time taken by the dispatcher to stop one process
and start another process is known as the Dispatch Latency. Dispatch
Latency can be explained using the below figure:

Figure 3. Dispatch Latency (Operating System. http://www.os-book.com).

Important CPU scheduling Terminologies


o Burst Time/Execution Time: It is a time required by the process to
complete execution. It is also called running time.
o Arrival Time: when a process enters in a ready state
o Finish Time: when process complete and exit from a system
o Multiprogramming: A number of programs which can be present in
memory at the same time.
o Jobs: It is a type of program without any kind of user interaction.
o User: It is a kind of program having user interaction.
o Process: It is the reference that is used for both job and user.
o CPU/IO burst cycle: Characterizes process execution, which alternates
between CPU and I/O activity. CPU times are usually shorter than the
time of I/O.

2. Scheduling Criteria
Many criteria have been suggested for comparing CPU-scheduling
algorithms. Which characteristics are used for comparison can make a
substantial difference in which algorithm is judged to be best. The criteria
include the following:

o CPU utilization. We want to keep the CPU as busy as possible.
Conceptually, CPU utilization can range from 0 to 100 percent. In a real
system, it should range from 40 percent (for a lightly loaded system) to 90
percent (for a heavily loaded system).
o Throughput. If the CPU is busy executing processes, then work is being
done. One measure of work is the number of processes that are completed
per time unit, called throughput. For long processes, this rate may be one
process per hour; for short transactions, it may be ten processes per
second.
o Turnaround time. From the point of view of a particular process, the
important criterion is how long it takes to execute that process. The
interval from the time of submission of a process to the time of completion
is the turnaround time. Turnaround time is the sum of the periods spent
waiting to get into memory, waiting in the ready queue, executing on the
CPU, and doing I/O.
o Waiting time. The CPU-scheduling algorithm does not affect the amount
of time during which a process executes or does I/O. It affects only the
amount of time that a process spends waiting in the ready queue. Waiting
time is the sum of the periods spent waiting in the ready queue.
o Response time. In an interactive system, turnaround time may not be the
best criterion. Often, a process can produce some output fairly early and
can continue computing new results while previous results are being
output to the user. Thus, another measure is the time from the submission
of a request until the first response is produced. This measure, called
response time, is the time it takes to start responding, not the time it takes
to output the response. The turnaround time is generally limited by the
speed of the output device.

Scheduling Algorithm Optimization Criteria


It is desirable to maximize CPU utilization and throughput and to
minimize turnaround time, waiting time, and response time. In most cases,
we optimize the average measure. However, under some circumstances, we
prefer to optimize the minimum or maximum values rather than the
average. For example, to guarantee that all users get good service, we may
want to minimize the maximum response time.
o Max CPU utilization
o Max throughput
o Min turnaround time
o Min waiting time
o Min response time

Investigators have suggested that, for interactive systems (such as desktop
systems), it is more important to minimize the variance in the
response time than to minimize the average response time. A system with
reasonable and predictable response time may be considered more desirable
than a system that is faster on the average but is highly variable. However,
little work has been done on CPU-scheduling algorithms that minimize
variance.

3. Scheduling Algorithms
CPU scheduling deals with the problem of deciding which of the
processes in the ready queue is to be allocated the CPU. There are many
different CPU-scheduling algorithms.

PREEMPTIVE VS NON-PREEMPTIVE SCHEDULING: KEY DIFFERENCES

What is Preemptive Scheduling?

Preemptive Scheduling is a scheduling method where the tasks are mostly
assigned with their priorities. Sometimes it is important to run a task with a
higher priority before another lower priority task, even if the lower priority task
is still running.

At that time, the lower priority task holds for some time and resumes when
the higher priority task finishes its execution.

What is Non- Preemptive Scheduling?

In this type of scheduling method, the CPU has been allocated to a specific
process. The process that keeps the CPU busy will release the CPU either by
switching context or terminating.

It is the only method that can be used for various hardware platforms.
That's because it doesn't need specialized hardware (for example, a timer) like
preemptive Scheduling.

Non-Preemptive Scheduling occurs when a process voluntarily enters the
wait state or terminates.

Difference Between Preemptive and Non-Preemptive Scheduling in OS

Preemptive Scheduling: A processor can be preempted to execute a different process in the
middle of any current process execution.
Non-preemptive Scheduling: Once the processor starts its execution, it must finish it before
executing another process. It cannot be paused in the middle.

Preemptive Scheduling: CPU utilization is more efficient compared to non-preemptive scheduling.
Non-preemptive Scheduling: CPU utilization is less efficient compared to preemptive scheduling.

Preemptive Scheduling: Waiting and response times are lower.
Non-preemptive Scheduling: Waiting and response times are higher.

Preemptive Scheduling: It is prioritized; the highest-priority process is the one currently using
the CPU.
Non-preemptive Scheduling: When any process enters the running state, it is never removed
from the scheduler until it finishes its job.

Preemptive Scheduling: Flexible.
Non-preemptive Scheduling: Rigid.

Preemptive Scheduling: Examples: Shortest Remaining Time First, Round Robin, etc.
Non-preemptive Scheduling: Examples: First Come First Serve, Shortest Job First, Priority
Scheduling, etc.

Preemptive Scheduling: A running process can be preempted, that is, rescheduled.
Non-preemptive Scheduling: A running process cannot be rescheduled.

Preemptive Scheduling: The CPU is allocated to a process for a specific time period.
Non-preemptive Scheduling: The CPU is allocated to a process until it terminates or switches
to the waiting state.

Preemptive Scheduling: Has the overhead of switching the process from the ready state to the
running state and vice versa.
Non-preemptive Scheduling: Has no such overhead of switching the process from the running
state to the ready state.

Advantages of Preemptive Scheduling

Here, are pros/benefits of Preemptive Scheduling method:

• The preemptive scheduling method is a more robust approach, so one process
cannot monopolize the CPU.
• The choice of running task is reconsidered after each interruption.
• Each event causes an interruption of running tasks.
• The OS makes sure that CPU usage is the same by all running processes.
• In this, the usage of the CPU is the same, i.e., all the running processes will
make use of the CPU equally.
• This scheduling method also improves the average response time.
• Preemptive Scheduling is beneficial when we use it for the multi-
programming environment.

Advantages of Non-preemptive Scheduling

Here, are pros/benefits of Non-preemptive Scheduling method:

• Offers low scheduling overhead


• Tends to offer high throughput
• It is a conceptually very simple method
• Fewer computational resources are needed for scheduling

Disadvantages of Preemptive Scheduling

Here, are cons/drawback of Preemptive Scheduling method:

• Need limited computational resources for Scheduling


• The scheduler takes more time to suspend the running task,
switch the context, and dispatch the new incoming task.
• A process which has low priority needs to wait for a longer time if some
high-priority processes arrive continuously.

Disadvantages of Non-Preemptive Scheduling

Here, are cons/drawback of Non-Preemptive Scheduling method:

• It can lead to starvation, especially for real-time tasks


• Bugs can cause a machine to freeze up
• It can make real-time and priority Scheduling difficult
• Poor response time for processes

Example of Non-Preemptive Scheduling

In non-preemptive SJF scheduling, once the CPU cycle is allocated to a process,
the process holds it till it reaches a waiting state or terminates.

Consider the following five processes each having its own unique burst time
and arrival time.

Process Queue Burst Time Arrival Time


P1 6 2
P2 2 5
P3 8 1
P4 3 0
P5 4 4

Step 0) At time=0, P4 arrives and starts execution.

Step 1) At time = 1, process P3 arrives. But P4 still needs 2 execution units to
complete. It will continue execution.

Step 2) At time =2, process P1 arrives and is added to the waiting queue. P4
will continue execution.

Step 3) At time = 3, process P4 will finish its execution. The burst time of P3
and P1 is compared. Process P1 is executed because its burst time is less
compared to P3.

Step 4) At time = 4, process P5 arrives and is added to the waiting queue. P1
will continue execution.

Step 5) At time = 5, process P2 arrives and is added to the waiting queue. P1
will continue execution.

Step 6) At time = 9, process P1 will finish its execution. The burst time of P3,
P5, and P2 is compared. Process P2 is executed because its burst time is the
lowest.

Step 7) At time=10, P2 is executing, and P3 and P5 are in the waiting queue.

Step 8) At time = 11, process P2 will finish its execution. The burst time of P3
and P5 is compared. Process P5 is executed because its burst time is lower.

Step 9) At time = 15, process P5 will finish its execution.

Step 10) At time = 23, process P3 will finish its execution.

Step 11) Let's calculate the average waiting time for above example.

Wait time
P4= 0-0=0
P1= 3-2=1
P2= 9-5=4
P5= 11-4=7
P3= 15-1=14
Average Waiting Time = (0+1+4+7+14)/5 = 26/5 = 5.2

Example of Pre-emptive Scheduling

Consider this following three processes in Round-robin

Process Queue Burst Time


P1 4
P2 3
P3 5

Step 1) The execution begins with process P1, which has burst time 4. Here,
every process executes for 2 seconds. P2 and P3 are still in the waiting queue.

Step 2) At time = 2, P1 is added to the end of the queue and P2 starts executing.

Step 3) At time = 4, P2 is preempted and added at the end of the queue. P3 starts
executing.

Step 4) At time = 6, P3 is preempted and added at the end of the queue. P1 starts
executing.

Step 5) At time = 8, P1 has a burst time of 4. It has completed execution. P2
starts execution.

Step 6) P2 has a burst time of 3. It has already executed for 2 intervals. At
time = 9, P2 completes execution. Then, P3 starts execution till it completes.

Step 7) Let's calculate the average waiting time for above example.

Wait time
P1= 0+ 4= 4
P2= 2+4= 6
P3= 4+3= 7
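The same figures can be verified with a short program. The following minimal sketch
simulates round-robin for the example above (burst times 4, 3, and 5, time quantum 2,
all processes arriving at time 0) and prints each process's waiting time, which comes
out to 4, 6, and 7 as computed above. The variable names are illustrative.

#include <stdio.h>

int main(void)
{
    int burst[]     = {4, 3, 5};            /* P1, P2, P3 */
    int remaining[] = {4, 3, 5};
    int finish[3]   = {0};
    int n = 3, quantum = 2, time = 0, done = 0;

    while (done < n) {                      /* cycle through the ready processes */
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0)
                continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;                  /* run the process for one time slice */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                finish[i] = time;           /* completion time */
                done++;
            }
        }
    }
    for (int i = 0; i < n; i++)             /* waiting = turnaround - burst, arrival 0 */
        printf("P%d: waiting time = %d\n", i + 1, finish[i] - burst[i]);
    return 0;
}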

KEY DIFFERENCES

• In Preemptive Scheduling, the CPU is allocated to the processes for a
specific time period, while in non-preemptive scheduling the CPU is allocated to
the process until it terminates.
• In Preemptive Scheduling, tasks are switched based on priority, while in
non-preemptive scheduling no switching takes place.
• Preemptive algorithm has the overhead of switching the process from the
ready state to the running state while Non-preemptive Scheduling has no
such overhead of switching.
• Preemptive Scheduling is flexible while Non-preemptive Scheduling is
rigid.

1. First- Come, First-Served (FCFS) Scheduling
o FCFS is very simple - Just a FIFO queue, like customers waiting in line at
the bank or the post office or at a copying machine.
o Unfortunately, however, FCFS can yield some very long average wait times,
particularly if the first process to get there takes a long time. For example,
consider the following three processes:

Suppose that the processes arrive in the order: P2, P3, P1, with burst times (implied by
the turnaround figures below) of P1 = 24, P2 = 3, and P3 = 3, all arriving at time 0.

o Turnaround Time = the time at which the process/job ends in the Gantt chart (G.C.)
o The turnaround time for P1 = 30; P2 = 3; P3 = 6
o Average Turnaround Time = (30 + 3 + 6) / 3 = 13
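The following minimal sketch computes the FCFS waiting and turnaround times for this
example, assuming the burst times of 3, 3, and 24 implied by the turnaround figures
above and the arrival order P2, P3, P1 (all at time 0). The variable names are
illustrative.

#include <stdio.h>

int main(void)
{
    const char *name[] = {"P2", "P3", "P1"};   /* order of arrival */
    int burst[]        = {3, 3, 24};
    int n = 3, time = 0;
    double total_wait = 0, total_turnaround = 0;

    for (int i = 0; i < n; i++) {
        int waiting    = time;                 /* waits for everything ahead of it */
        int turnaround = time + burst[i];      /* finish time, since arrival is 0 */
        printf("%s: waiting = %d, turnaround = %d\n", name[i], waiting, turnaround);
        total_wait       += waiting;
        total_turnaround += turnaround;
        time += burst[i];                      /* CPU moves on to the next process */
    }
    printf("average waiting = %.2f, average turnaround = %.2f\n",
           total_wait / n, total_turnaround / n);
    return 0;
}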

CONVOY EFFECT. When one CPU intensive process blocks the CPU, a number
of I/O intensive processes can get backed up behind it, leaving the I/O devices
idle. When the CPU hog finally relinquishes the CPU, then the I/O processes pass
through the CPU quickly, leaving the CPU idle while everyone queues up for I/O,
and then the cycle repeats itself when the CPU intensive process gets back to the
ready queue.

Advantages
o Better for long processes
o Simple method (i.e., minimum overhead on processor)
o No starvation
Disadvantages
o Convoy effect occurs. Even a very small process must wait for its turn to
come to utilize the CPU. A short process behind a long process results in
lower CPU utilization.
o Throughput is not emphasized.

2. Shortest-Job-First (SJF) Scheduling
• The idea behind the SJF algorithm is to pick the quickest fastest little job
that needs to be done, get it out of the way first, and then pick the next
smallest fastest job to do next.
• Technically this algorithm picks a process based on the next shortest CPU
burst, not the overall process time.
• Associate with each process the length of its next CPU burst
o Use these lengths to schedule the process with the shortest time
• SJF is optimal – gives minimum average waiting time for a given set of
processes
o The difficulty is knowing the length of the next CPU request
o Could ask the user
• Example (assuming the classic four processes with burst times P1 = 6, P2 = 8,
P3 = 7, and P4 = 3, all arriving at time 0, so SJF runs them in the order P4, P1, P3, P2)

Ave. Turnaround Time = (9 + 24 + 16 + 3) / 4 = 13
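The following minimal sketch reproduces that average by simulating non-preemptive
SJF for the four processes assumed above (burst times 6, 8, 7, and 3, all arriving at
time 0): it repeatedly picks the shortest job that has not yet run.

#include <stdio.h>
#include <stdbool.h>

int main(void)
{
    int burst[] = {6, 8, 7, 3};              /* P1, P2, P3, P4 */
    int n = 4, time = 0;
    bool done[4] = {false, false, false, false};
    double total_turnaround = 0;

    for (int completed = 0; completed < n; completed++) {
        int next = -1;
        for (int i = 0; i < n; i++)          /* choose the shortest remaining job */
            if (!done[i] && (next == -1 || burst[i] < burst[next]))
                next = i;
        time += burst[next];                 /* run it to completion */
        done[next] = true;
        printf("P%d finishes at %d\n", next + 1, time);
        total_turnaround += time;            /* turnaround = finish, arrival is 0 */
    }
    printf("average turnaround = %.2f\n", total_turnaround / n);
    return 0;
}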

Advantages
o It gives superior turnaround-time performance because a short job is
given immediate preference over a longer running job.
o Throughput is high.
Disadvantages
o Elapsed time (i.e., execution-completed time) must be recorded, which
results in additional overhead on the processor.
o Starvation may be possible for the longer processes.

3. Shortest-remaining-time-first (SRTF)
o Preemptive SJF is sometimes referred to as shortest remaining time
first scheduling.
o Preemption occurs when a new process arrives in the ready queue that
has a predicted burst time shorter than the time remaining in the
process whose burst is currently on the CPU.
o Now we add the concepts of varying arrival times and preemption to the
analysis.

Using the Shortcut Method
Determine the average turnaround time and waiting time (based on the Gantt chart (G.C.)):

Process   Finish Time - A.T. = T.T.    T.T. - B.T. = W.T.
P1        17 - 0 = 17                  17 - 8 = 9
P2        5 - 1 = 4                    4 - 4 = 0
P3        26 - 2 = 24                  24 - 9 = 15
P4        10 - 3 = 7                   7 - 5 = 2
          TOTAL T.T. = 52              TOTAL W.T. = 26
          Ave. T.T. = 13               Ave. W.T. = 6.5

Note: P1 appears twice in the Gantt chart, at times 1 and 17. Use only the time at which
the process ends in the Gantt chart; P1 ends at time 17. This formula is applicable to all
CPU scheduling algorithms.

CPU Utilization = (total of burst times) divided by (the last number in the Gantt
chart) x 100%
= 26/26 x 100% = 100%

o Compare the average waiting time of 6.5 ms to 7.75 ms for non-preemptive SJF or
8.75 ms for FCFS.

4. Priority Scheduling
• Priority scheduling is a more general case of SJF, in which each job is
assigned a priority and the job with the highest priority gets scheduled first.
(SJF uses the inverse of the next expected burst time as its priority - The
smaller the expected burst, the higher the priority. )
• Note that in practice, priorities are implemented using integers within a
fixed range, but there is no agreed-upon convention as to whether "high"
priorities use large numbers or small numbers.

Types of Priority Scheduling


Priority scheduling divided into two main types:

a. Preemptive Scheduling (PP)


In Preemptive Scheduling, the tasks are mostly assigned with their
priorities. Sometimes it is important to run a task with a higher priority before
another lower priority task, even if the lower priority task is still running. The
lower priority task holds for some time and resumes when the higher priority
task finishes its execution.

b. Non-Preemptive Scheduling (NPP)
In this type of scheduling method, the CPU has been allocated to a
specific process. The process that keeps the CPU busy, will release the CPU
either by switching context or terminating. It is the only method that can be
used for various hardware platforms. That's because it doesn't need special
hardware (for example, a timer) like preemptive scheduling.
Characteristics of Priority Scheduling
• A CPU algorithm that schedules processes based on priority.
• It is used in operating systems for performing batch processes.
• If two jobs having the same priority are READY, it works on a FIRST
COME, FIRST SERVED basis.
• In priority scheduling, a number is assigned to each process that
indicates its priority level.
• The lower the number, the higher the priority.
• In this type of scheduling algorithm, if a newer process arrives that
has a higher priority than the currently running process, then the
currently running process is preempted.

Example of Priority Scheduling

Consider following five processes P1 to P5. Each process has its unique
priority, burst time, and arrival time.

Process Priority Burst time Arrival time


P1 1 4 0
P2 2 3 0
P3 1 7 6
P4 3 4 11
P5 2 2 12

Step 0) At time=0, Process P1 and P2 arrive. P1 has higher priority than P2.
The execution begins with process P1, which has burst time 4.

Step 1) At time=1, no new process arrive. Execution continues with P1.

Step 2) At time 2, no new process arrives, so you can continue with P1.
P2 is in the waiting queue.

Step 3) At time 3, no new process arrives so you can continue with P1.
P2 process still in the waiting queue.

Step 4) At time 4, P1 has finished its execution. P2 starts execution.

Step 5) At time= 5, no new process arrives, so we continue with P2.

Step 6) At time = 6, P3 arrives. P3 is at a higher priority (1) compared to P2,
which has priority (2). P2 is preempted, and P3 begins its execution.

Process Priority Burst time Arrival time


P1 1 4 0
P2 2 1 out of 3 pending 0
P3 1 7 6
P4 3 4 11
P5 2 2 12

Step 7) At time 7, no new process arrives, so we continue with P3. P2 is
in the waiting queue.

Step 8) At time= 8, no new process arrives, so we can continue with P3.

Step 9) At time= 9, no new process comes so we can continue with P3.

Step 10) At time interval 10, no new process comes, so we continue with
P3

Step 11) At time = 11, P4 arrives with priority 3. P3 has higher priority, so
it continues its execution.

Process Priority Burst time Arrival time


P1 1 4 0
P2 2 1 out of 3 pending 0
P3 1 2 out of 7 pending 6
P4 3 4 11
P5 2 2 12

Step 12) At time=12, P5 arrives. P3 has higher priority, so it continues
execution.

Step 13) At time = 13, P3 completes execution. We have P2, P4, and P5 in the ready
queue. P2 and P5 have equal priority. The arrival time of P2 is before that of P5, so
P2 starts execution.

Process Priority Burst time Arrival time


P1 1 4 0
P2 2 1 out of 3 pending 0
P3 1 7 6
P4 3 4 11
P5 2 2 12

Step 14) At time =14, the P2 process has finished its execution. P4 and
P5 are in the waiting state. P5 has the highest priority and starts
execution.

Step 15) At time =15, P5 continues execution.

Step 16) At time= 16, P5 is finished with its execution. P4 is the only
process left. It starts execution.

Step 17) At time = 20, P4 has completed execution and no process is left.

Step 18) Let's calculate the average waiting time and turnaround time
for the above example.

Using the Long-cut Method:

Waiting Time = start time - arrival time + wait time for the next burst
P1 = 0 - 0 = 0
P2 = 4 - 0 + 7 = 11
P3 = 6 - 6 = 0
P4 = 16 - 11 = 5
P5 = 14 - 12 = 2
Average Waiting Time = (0+11+0+5+2)/5 = 18/5 = 3.6

Using the Shortcut Method
Determine the average turnaround time and waiting time (based on the Step 17 Gantt chart (G.C.)):

Process   Finish Time - A.T. = T.T.    T.T. - B.T. = W.T.
P1        4 - 0 = 4                    4 - 4 = 0
P2        14 - 0 = 14                  14 - 3 = 11
P3        13 - 6 = 7                   7 - 7 = 0
P4        20 - 11 = 9                  9 - 4 = 5
P5        16 - 12 = 4                  4 - 2 = 2
          TOTAL T.T. = 38              TOTAL W.T. = 18
          Ave. T.T. = 7.6              Ave. W.T. = 3.6

Note: P2 appears twice in the Gantt chart, at times 6 and 14. Use only the time at which
the process ends in the Gantt chart; P2 finishes at time 14. This formula is applicable to
all CPU scheduling algorithms.

CPU Utilization = (total of burst times) divided by (the last number in the Gantt
chart) x 100%
= 20/20 x 100% = 100%

Advantages of priority scheduling


• Easy-to-use scheduling method
• Processes are executed on the basis of priority, so high-priority
processes do not need to wait long, which saves time
• This method provides a good mechanism in which the relative
importance of each process may be precisely defined.
• Suitable for applications with fluctuating time and resource
requirements.

Disadvantages of priority scheduling
• If the system eventually crashes, all low priority processes get lost.
• If high priority processes take lots of CPU time, then the lower
priority processes may starve and will be postponed for an
indefinite time.
• This scheduling algorithm may leave some low priority processes
waiting indefinitely.
• A process will be blocked when it is ready to run but has to wait
for the CPU because some other process is running currently.
• If a new higher priority process keeps on coming in the ready
queue, then the process which is in the waiting state may need to
wait for a long duration of time.

Summary:
• Priority scheduling is a method of scheduling processes that is
based on priority. In this algorithm, the scheduler selects the
tasks to work as per the priority.
• In Priority Preemptive Scheduling, the tasks are mostly assigned
with their priorities.
• In Priority Non-preemptive scheduling method, the CPU has been
allocated to a specific process.
• Processes are executed on the basis of priority so high priority
does not need to wait for long which saves time
• If high priority processes take lots of CPU time, then the lower
priority processes may starve and will be postponed for an
indefinite time.

5. Round-Robin Scheduling

The name of this algorithm comes from the round-robin principle, where
each person gets an equal share of something in turns. It is the oldest, simplest
scheduling algorithm, which is mostly used for multitasking.

In round-robin scheduling, each ready task runs turn by turn only in a
cyclic queue for a limited time slice. This algorithm also offers starvation-free
execution of processes.

Characteristics of Round-Robin Scheduling

Here are the important characteristics of Round-Robin


Scheduling:

• Round robin is a pre-emptive algorithm


• The CPU is shifted to the next process after fixed interval time, which is
called time quantum/time slice.
• The process that is preempted is added to the end of the queue.
• Round robin is a hybrid model which is clock-driven
• The time slice should be the minimum that is assigned to a specific task to be
processed. However, it may differ from OS to OS.
• It is a real time algorithm which responds to the event within a specific
time limit.
• Round robin is one of the oldest, fairest, and easiest algorithm.
• Widely used scheduling method in traditional OS.

Example of Round-robin Scheduling

Consider this following three processes

Process Queue Burst time

P1 4

P2 3

P3 5

Step 1) The execution begins with process P1, which has burst time 4. Here,
every process executes for 2 seconds. P2 and P3 are still in the waiting queue.

Step 2) At time = 2, P1 is added to the end of the queue and P2 starts executing.

Step 3) At time = 4, P2 is preempted and added at the end of the queue. P3 starts
executing.

Step 4) At time = 6, P3 is preempted and added at the end of the queue. P1 starts
executing.

Step 5) At time = 8, P1 has a burst time of 4. It has completed execution. P2
starts execution.

Step 6) P2 has a burst time of 3. It has already executed for 2 intervals. At
time = 9, P2 completes execution. Then, P3 starts execution till it completes.

Step 7) Let's calculate the average waiting time for above example.

Wait time
P1= 0+ 4= 4
P2= 2+4= 6
P3= 4+3= 7

Try yourself: Compute the Average of waiting time, turnaround time, and CPU utilization.

Advantage of Round-robin Scheduling


• It doesn't face the issues of starvation or convoy effect.
• All the jobs get a fair allocation of CPU.
• It deals with all processes without any priority
• If you know the total number of processes on the run queue, then you can
also assume the worst-case response time for the same process.
• This scheduling method does not depend upon burst time. That's why it is
easily implementable on the system.
• Once a process has executed for its given time period, it is preempted, and
another process executes for that time period.
• Allows OS to use the Context switching method to save states of
preempted processes.
• It gives the best performance in terms of average response time.

Disadvantages of Round-robin Scheduling (drawbacks/consequences)


• If the OS's time slice is short, processor throughput is reduced.
• This method spends more time on context switching

• Its performance heavily depends on time quantum.
• Priorities cannot be set for the processes.
• Round-robin scheduling doesn't give special priority to more important
tasks.
• Decreases comprehension
• A lower time quantum results in higher context-switching overhead in
the system.
• Finding a correct time quantum is a quite difficult task in this system.
Worst Case Latency
This term is used for the maximum time taken for execution of all the tasks.
• dt = detection time (when a task is brought into the list)
• st = switching time from one task to another
• et = task execution time

Formula:
Tworst = (dt1 + st1 + et1) + (dt2 + st2 + et2) + ... + (dtN + stN + etN) + tISR

where N is the number of tasks and tISR is the total execution time of the
interrupt service routines.
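As a quick illustration of the formula, the worst-case figure is just the sum of
the per-task (detection + switching + execution) terms plus the interrupt-service
time. The numbers below are hypothetical and only show the arithmetic:

def worst_case_latency(tasks, t_isr=0.0):
    # tasks: list of (dt, st, et) tuples -- detection, switching, execution times
    return sum(dt + st + et for dt, st, et in tasks) + t_isr

# Hypothetical values, all in milliseconds:
tasks = [(0.1, 0.2, 5.0), (0.1, 0.2, 3.0), (0.1, 0.2, 8.0)]
print(worst_case_latency(tasks, t_isr=1.5))   # 18.4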

Summary:
• The name of this algorithm comes from the round-robin principle, where
each person gets an equal share of something in turns.
• Round robin is one of the oldest, fairest, and easiest algorithms and
widely used scheduling methods in traditional OS.
• Round robin is a pre-emptive algorithm
• The biggest advantage of the round-robin scheduling method is that if
you know the total number of processes on the run queue, then you can
also estimate the worst-case response time for any of them.
• This method spends more time on context switching
• Worst-case latency is a term used for the maximum time taken for the
execution of all the tasks.

6. Multilevel Queue Scheduling


• When processes can be readily categorized, then multiple separate queues can
be established, each implementing whatever scheduling algorithm is most
appropriate for that type of job, and/or with different parametric adjustments.
• Scheduling must also be done between queues, i.e., deciding how much CPU
time one queue gets relative to the other queues. Two common options are strict
priority (no job in a lower priority queue runs until all higher priority queues
are empty) and round-robin (each queue gets a time slice in turn, possibly of
different sizes).
• Note that under this algorithm jobs cannot switch from queue to queue - once
a job is assigned to a queue, it stays there until it finishes.

Figure 4 - Multilevel queue scheduling (Operating System. http://www.os-book.com).

Multilevel Feedback-Queue Scheduling

• Multilevel feedback queue scheduling is similar to the ordinary multilevel
queue scheduling described above, except jobs may be moved from one
queue to another for a variety of reasons:
o If the characteristics of a job change between CPU-intensive and
I/O intensive, then it may be appropriate to switch a job from one
queue to another.
o Aging can also be incorporated, so that a job that has waited for a
long time can get bumped up into a higher priority queue for a
while.
• Multilevel feedback queue scheduling is the most flexible, because it can
be tuned for any situation. But it is also the most complex to implement
because of all the adjustable parameters. Some of the parameters which
define one of these systems include:
o The number of queues.
o The scheduling algorithm for each queue.
o The methods used to upgrade or demote processes from one queue
to another which may be different.
o The method used to determine which queue a process enters
initially.
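The sketch below shows one tiny, concrete choice of those parameters: three
queues whose time quanta double at each level, with a job demoted one level
whenever it uses up a full quantum. It deliberately ignores aging/boosting and
assumes all jobs arrive at time 0, so it illustrates the idea rather than a real
scheduler:

from collections import deque

def mlfq(burst_times, quanta=(2, 4, 8)):
    # Level 0 is the highest priority; the last level keeps jobs until they finish.
    levels = [deque() for _ in quanta]
    for pid in burst_times:
        levels[0].append(pid)              # new jobs enter the top queue
    remaining = dict(burst_times)
    clock, completion = 0, {}
    while any(levels):
        lvl = next(i for i, q in enumerate(levels) if q)   # highest non-empty queue
        pid = levels[lvl].popleft()
        run = min(quanta[lvl], remaining[pid])
        clock += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            completion[pid] = clock
        else:
            # the job used its whole quantum without finishing: demote it
            levels[min(lvl + 1, len(levels) - 1)].append(pid)
    return completion

print(mlfq({"A": 3, "B": 20, "C": 6}))

Short jobs finish in the upper queues while CPU-bound jobs sink to the bottom,
which is exactly the behavior described above.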

Figure 5 - Multilevel feedback queues (Operating System. http://www.os-book.com).

Thread Scheduling

• The process scheduler schedules only the kernel threads.


• User threads are mapped to kernel threads by the thread library - The OS (and
in particular the scheduler) is unaware of them.

Contention Scope

• Contention scope refers to the scope in which threads compete for the use of
physical CPUs.
• On systems implementing many-to-one and many-to-many threads, Process
Contention Scope, PCS, occurs, because competition occurs between threads
that are part of the same process. ( This is the management / scheduling of
multiple user threads on a single kernel thread, and is managed by the thread
library. )
• System Contention Scope, SCS, involves the system scheduler scheduling
kernel threads to run on one or more CPUs. Systems implementing one-to-one
threads ( XP, Solaris 9, Linux ), use only SCS.
• PCS scheduling is typically done with priority, where the programmer can set
and/or change the priority of threads created by his or her programs. Even time
slicing is not guaranteed among threads of equal priority.

Multiple-Processor Scheduling

• When multiple processors are available, then the scheduling gets more
complicated, because now there is more than one CPU which must be kept
busy and in effective use at all times.
• Load sharing revolves around balancing the load between multiple processors.
• Multi-processor systems may be heterogeneous, ( different kinds of CPUs ),
or homogenous, ( all the same kind of CPU ). Even in the latter case there may
be special scheduling constraints, such as devices which are connected via a
private bus to only one of the CPUs. This module will restrict its discussion to
homogenous systems.

Approaches to Multiple-Processor Scheduling

• One approach to multi-processor scheduling is asymmetric
multiprocessing, in which one processor is the master, controlling all
activities and running all kernel code, while the others run only user
code. This approach is relatively simple, as there is no need to share
critical system data.
• Another approach is symmetric multiprocessing, SMP, where each
processor schedules its own jobs, either from a common ready queue or
from separate ready queues for each processor.
• Virtually all modern OSes support SMP, including XP, Win 2000, Solaris,
Linux, and Mac OSX.

Processor Affinity
• Processors contain cache memory, which speeds up repeated accesses to
the same memory locations.
• If a process were to switch from one processor to another each time it got
a time slice, the data in the cache ( for that process ) would have to be
invalidated and re-loaded from main memory, thereby obviating the
benefit of the cache.
• Therefore SMP systems attempt to keep processes on the same
processor, via processor affinity. Soft affinity occurs when the system
attempts to keep processes on the same processor but makes no
guarantees. Linux and some other OSes support hard affinity, in which
a process specifies that it is not to be moved between processors.
• Main memory architecture can also affect process affinity, if particular
CPUs have faster access to memory on the same chip or board than to
other memory loaded elsewhere. ( Non-Uniform Memory Access, NUMA. )
As shown below, if a process has an affinity for a particular CPU, then it
should preferentially be assigned memory storage in "local" fast access
areas.

Figure 6 - NUMA and CPU scheduling (Operating System. http://www.os-book.com).
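On Linux, hard affinity can be requested directly from a program. The snippet
below is a small sketch using Python's os.sched_getaffinity / os.sched_setaffinity
wrappers around the Linux system calls; the CPU numbers are arbitrary examples
and the calls are Linux-only:

import os

pid = 0                                   # 0 means "the calling process"
print("allowed CPUs:", os.sched_getaffinity(pid))

# Pin this process to CPUs 0 and 1 (hard affinity). The CPU set is an example;
# it must name CPUs that actually exist on the machine.
os.sched_setaffinity(pid, {0, 1})
print("now pinned to:", os.sched_getaffinity(pid))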

Load Balancing

• Obviously an important goal in a multiprocessor system is to balance the load
between processors, so that one processor won't be sitting idle while another is
overloaded.
• Systems using a common ready queue are naturally self-balancing, and do not
need any special handling. Most systems, however, maintain separate ready
queues for each processor.
• Balancing can be achieved through either push migration or pull migration:
o Push migration involves a separate process that runs periodically, ( e.g.
every 200 milliseconds ), and moves processes from heavily loaded
processors onto less loaded ones.
o Pull migration involves idle processors taking processes from the ready
queues of other processors.
o Push and pull migration are not mutually exclusive.
• Note that moving processes from processor to processor to achieve load
balancing works against the principle of processor affinity, and if not carefully
managed, the savings gained by balancing the system can be lost in rebuilding
caches. One option is to only allow migration when imbalance surpasses a given
threshold.
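A threshold-based push-migration pass can be sketched in a few lines. Everything
here (the run-queue representation and the threshold value) is hypothetical and
only meant to show why a threshold is used at all: every migration sacrifices the
cache affinity discussed above.

def push_migration(run_queues, threshold=2):
    # run_queues: dict mapping cpu id -> list of task names
    busiest = max(run_queues, key=lambda c: len(run_queues[c]))
    idlest = min(run_queues, key=lambda c: len(run_queues[c]))
    if len(run_queues[busiest]) - len(run_queues[idlest]) > threshold:
        task = run_queues[busiest].pop()
        run_queues[idlest].append(task)   # migrate one task toward the idle CPU
    return run_queues

queues = {0: ["A", "B", "C", "D"], 1: ["E"]}
print(push_migration(queues))             # one task moves from CPU 0 to CPU 1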

Multicore Processors

• Traditional SMP required multiple CPU chips to run multiple kernel threads
concurrently.
• Recent trends are to put multiple CPUs ( cores ) onto a single chip, which
appear to the system as multiple processors.
• Compute cycles can be blocked by the time needed to access memory, whenever
the needed data is not already present in the cache. ( Cache misses. ) In Figure
7 below, as much as half of the CPU cycles are lost to memory stall.

Figure 7 - Memory stall (Operating System. http://www.os-book.com).

• By assigning multiple kernel threads to a single processor, memory stall can be
avoided ( or reduced ) by running one thread on the processor while the other
thread waits for memory.

Figure 8 - Multithreaded multicore system (Operating System. http://www.os-book.com).

• A dual-threaded dual-core system has four logical processors available to the
operating system. The UltraSPARC T1 CPU has 8 cores per chip and 4
hardware threads per core, for a total of 32 logical processors per chip.
• There are two ways to multi-thread a processor:

1. Coarse-grained multithreading switches between threads only when one
thread blocks, say on a memory read. Context switching is similar to
process switching, with considerable overhead.
2. Fine-grained multithreading occurs on smaller regular intervals, say on
the boundary of instruction cycles. However the architecture is designed
to support thread switching, so the overhead is relatively minor.
• Note that for a multi-threaded multi-core system, there are two levels of
scheduling, at the kernel level:
1. The OS schedules which kernel thread(s) to assign to which logical
processors, and when to make context switches using algorithms as
described above.
2. On a lower level, the hardware schedules logical processors on each
physical core using some other algorithm.
▪ The UltraSPARC T1 uses a simple round-robin method to
schedule the 4 logical processors (kernel threads) on each physical
core.
▪ The Intel Itanium is a dual-core chip which uses a 7-level priority
scheme (urgency) to determine which thread to schedule when one
of 5 different events occurs.

• Virtualization adds another layer of complexity and scheduling.


• Typically there is one host operating system operating on "real" processor(s) and
a number of guest operating systems operating on virtual processors.
• The Host OS creates some number of virtual processors and presents them to
the guest OSes as if they were real processors.
• The guest OSes don't realize their processors are virtual, and make scheduling
decisions on the assumption of real processors.
• As a result, interactive and especially real-time performance can be severely
compromised on guest systems. The time-of-day clock will also frequently be
off.

Real-Time CPU Scheduling

Real-time systems are those in which the time at which tasks complete is crucial to
their performance.

• Soft real-time systems have degraded performance if their timing needs
cannot be met. Example: streaming video.
• Hard real-time systems have total failure if their timing needs cannot be
met. Examples: Assembly line robotics, automobile air-bag deployment.

Minimizing Latency

• Event Latency is the time between the occurrence of a triggering event and
the ( completion of the ) system's response to the event:

Figure 9 - Event latency.

• In addition to the time it takes to actually process the event, there are two
additional steps that must occur before the event handler ( Interrupt Service
Routine, ISR ), can even start:
o Interrupt processing determines which interrupt(s) have occurred,
and which interrupt handler routine to run. In the event of
simultaneous interrupts, a priority scheme is needed to determine
which ISR has the highest priority and gets to go next.
o Context switching involves removing a running process from the CPU,
saving its state, and loading up the ISR so that it can run.

Figure 10 - Interrupt latency.

o The total dispatch latency ( context switching ) consists of two parts:
▪ Removing the currently running process from the CPU, and
freeing up any resources that are needed by the ISR. This step
can be sped up a lot by the use of pre-emptive kernels.
▪ Loading the ISR up onto the CPU ( dispatching ).

Figure 11 - Dispatch latency.

Priority-Based Scheduling

• Real-time systems require pre-emptive priority-based scheduling systems.


• Soft real-time systems do the best they can, but make no guarantees. They
are generally discussed elsewhere.
• Hard real-time systems, described here, must be able to provide firm
guarantees that a task's scheduling needs can be met.
o One technique is to use an admission-control algorithm, in which
each task must specify its needs at the time it attempts to launch,
and the system will only launch the task if it can guarantee that its
needs can be met.

• Hard real-time systems are often characterized by tasks that must run at
regular periodic intervals, each having a period p, a constant time required
to execute, ( CPU burst ), t, and a deadline after the beginning of each period
by which the task must be completed, d.
• In all cases, t <= d <= p

Figure 12 - Periodic task.

Rate-Monotonic Scheduling

• The rate-monotonic scheduling algorithm uses pre-emptive scheduling with
static priorities.
• Priorities are assigned inversely to the period of each task, giving higher
(better) priority to tasks with shorter periods.
• Let's consider an example, first showing what can happen if the task with
the longer period is given higher priority:
o Suppose that process P1 has a period of 50, an execution time of 20,
and a deadline that matches its period (50).
o Similarly suppose that process P2 has period 100, execution time of
35, and deadline of 100.
o The total CPU utilization is 20 / 50 = 0.4 for P1, and 35 / 100 =
0.35 for P2, or 0.75 ( 75% ) overall.
o However if P2 is allowed to go first, then P1 cannot complete before its
deadline:

Figure 13 - Scheduling of tasks when P2 has a higher priority than P1.

o Now on the other hand, if P1 is given higher priority, it gets to go first,
and P2 starts after P1 completes its burst.
o At time 50 when the next period for P1 starts, P2 has only completed
30 of its 35 needed time units, but it gets pre-empted by P1.
o At time 70, P1 completes its task for its second period, and the P2 is
allowed to complete its last 5 time units.
o Overall both processes complete at time 75, and the cpu is then idle
for 25 time units, before the process repeats.

Figure 14 - Rate-monotonic scheduling.

o Rate-monotonic scheduling is considered optimal among algorithms
that use static priorities, because any set of processes that cannot be
scheduled with this algorithm cannot be scheduled with any other
static-priority scheduling algorithm either.
o There are, however, some sets of processes that cannot be scheduled
with static priorities.

▪ For example, suppose that process P1 has period 50 and execution
time 25, process P2 has period 80 and execution time 35, and the
deadlines match the periods.
▪ Overall CPU usage is 25/50 = 0.5 for P1, 35 / 80 =0.44 for P2,
or 0.94 ( 94% ) overall, indicating it should be possible to
schedule the processes.
▪ With rate-monotonic scheduling, P1 goes first, and completes
its first burst at time 25.
▪ P2 goes next, and completes 25 out of its 35 time units before
it gets pre-empted by P1 at time 50.
▪ P1 completes its second burst at 75, and then P2 completes its
last 10 time units at time 85, missing its deadline of 80 by 5
time units. :-(

Figure 15 - Missing deadlines with rate-monotonic scheduling.

o The worst-case CPU utilization for which N processes are guaranteed to be
schedulable under this algorithm is N * ( 2^(1/N) - 1 ), which is 100% for a
single process, but drops to about 83% for two processes and to about 69%
as N approaches infinity. Note that in our example above 94% is higher
than 83%, so rate-monotonic scheduling cannot guarantee the deadlines.
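The bound is easy to check numerically. The helper below is only a sketch, but
it reproduces both examples from this section:

def rm_utilization_bound(n):
    # Liu & Layland bound: N * (2**(1/N) - 1)
    return n * (2 ** (1.0 / n) - 1)

def rm_check(tasks):
    # tasks: list of (execution_time, period) pairs
    utilization = sum(t / p for t, p in tasks)
    return utilization, rm_utilization_bound(len(tasks))

print(rm_check([(20, 50), (35, 100)]))   # (0.75, ~0.83)  -> schedulability guaranteed
print(rm_check([(25, 50), (35, 80)]))    # (~0.94, ~0.83) -> no guarantee

The first pair stays below the two-process bound of about 0.83, so rate-monotonic
scheduling is guaranteed to meet the deadlines; the second exceeds it, which is
consistent with the missed deadline in Figure 15.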

Earliest-Deadline-First Scheduling

• Earliest Deadline First ( EDF ) scheduling varies priorities dynamically,
giving the highest priority to the process with the earliest deadline.
• Figure 16 shows our previous example repeated under EDF scheduling:
o At time 0 P1 has the earliest deadline and highest priority, so it goes first,
followed by P2 at time 25 when P1 completes its first burst.
o At time 50 process P1 begins its second period, but since P2 has a
deadline of 80 and the deadline for P1 is not until 100, P2 is allowed
to stay on the CPU and complete its burst, which it does at time 60.
o P1 then starts its second burst, which it completes at time 85. P2
started its second period at time 80, but since P1 had an earlier
deadline, P2 did not pre-empt P1.
o P2 starts its second burst at time 85, and continues until time 100,
at which time P1 starts its third period.
o At this point P1 has a deadline of 150 and P2 has a deadline of 160,
so P1 preempts P2.
o P1 completes its third burst at time 125, at which time P2 resumes,
completing its second burst at time 145.
o The CPU sits idle for 5 time units, until P1 starts its next period at
150 and P2 at 160.
o Question: Which process will get to run at time 160, and why?
▪ Answer: Process P1, because its deadline of 200 is earlier than
P2's new deadline of 240.

Figure 16 - Earliest-deadline-first scheduling (Operating System. http://www.os-book.com).
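The walkthrough above can be reproduced with a small unit-time EDF simulator.
The code is a sketch written for this module (none of the names are a standard
API), and it assumes each deadline falls at the end of the task's period:

def edf(tasks, horizon):
    # tasks: dict name -> (execution_time, period); deadline = end of period
    remaining = {n: 0 for n in tasks}          # work left in the current period
    deadline = {n: 0 for n in tasks}
    schedule = []
    for t in range(horizon):
        for n, (c, p) in tasks.items():
            if t % p == 0:                     # a new period begins: release the job
                remaining[n] = c
                deadline[n] = t + p
        ready = [n for n in tasks if remaining[n] > 0]
        if ready:
            n = min(ready, key=lambda x: deadline[x])   # earliest deadline wins
            remaining[n] -= 1
            schedule.append(n)
        else:
            schedule.append(None)              # CPU idle during this time unit
    return schedule

s = edf({"P1": (25, 50), "P2": (35, 80)}, horizon=160)
print(s.index(None))                           # 145, the first idle slot in the trace above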

Proportional Share Scheduling

• Proportional share scheduling works by dividing the total amount of time
available into an equal number of shares, and then each process must
request a certain share of the total when it tries to start.
• Say for example that the total shares, T, = 100, and then a particular
process asks for N = 30 shares when it launches. It will then be guaranteed
30 / 100 or 30% of the available time.
• Proportional share scheduling works with an admission-control policy, not
starting any task if it cannot guarantee the shares that the task says that it
needs.
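A minimal admission-control sketch for proportional sharing might look like this
(the class and method names are made up for illustration):

class ProportionalShareScheduler:
    def __init__(self, total_shares=100):
        self.total = total_shares
        self.allocated = {}

    def admit(self, name, shares):
        # Admit the task only if its requested shares can still be guaranteed.
        if sum(self.allocated.values()) + shares > self.total:
            return False                       # request denied
        self.allocated[name] = shares
        return True

sched = ProportionalShareScheduler(total_shares=100)
print(sched.admit("A", 50))   # True  -> A is guaranteed 50% of the CPU
print(sched.admit("B", 30))   # True  -> B gets 30%
print(sched.admit("C", 40))   # False -> only 20 shares remain unallocated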

POSIX Real-Time Scheduling

• POSIX defines two scheduling classes for real time threads, SCHED_FIFO
and SCHED_RR, depending on how threads of equal priority share time.
• SCHED_FIFO schedules tasks in a first-in-first-out order, with no time
slicing among threads of equal priority.
• SCHED_RR performs round-robin time slicing among threads of equal
priority.
• POSIX provides methods for getting and setting the thread scheduling
policy, as shown below:
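In the POSIX C interface these are the pthread_attr_getschedpolicy() and
pthread_attr_setschedpolicy() functions (with pthread_attr_setschedparam() for
the priority). Since code in this module is sketched in Python, the snippet
below instead uses Python's os.sched_getscheduler / os.sched_setscheduler
wrappers, which expose the same SCHED_FIFO and SCHED_RR policies on Linux at the
process level; treat it as an illustration rather than the thread-level API the
standard defines:

import os

pid = 0                                   # 0 means "the calling process"

policy = os.sched_getscheduler(pid)
names = {os.SCHED_OTHER: "SCHED_OTHER",
         os.SCHED_FIFO: "SCHED_FIFO",
         os.SCHED_RR: "SCHED_RR"}
print("current policy:", names.get(policy, policy))

# Switching to a real-time policy normally requires elevated privileges.
try:
    os.sched_setscheduler(pid, os.SCHED_RR, os.sched_param(10))
    print("now running under SCHED_RR with priority 10")
except PermissionError:
    print("need elevated privileges to set a real-time policy")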

OPERATING SYSTEM EXAMPLES

Example: Linux Scheduling

• Prior to version 2.5, Linux used a traditional UNIX scheduling algorithm.


• Version 2.6 used an algorithm known as O(1), that ran in constant time
regardless of the number of tasks, and provided better support for SMP
systems. However it yielded poor interactive performance.
• Starting in 2.6.23, the Completely Fair Scheduler, CFS, became the standard
Linux scheduling system. See sidebar below

CFS ( Completely Fair Scheduler ) Performance

The Linux CFS scheduler provides an efficient algorithm for selecting which
task to run next. Each runnable task is placed in a red-black tree—a balanced
binary search tree whose key is based on the value of vruntime. This tree is
shown below:

When a task becomes runnable, it is added to the tree. If a task on the
tree is not runnable ( for example, if it is blocked while waiting for I/O ), it is
removed. Generally speaking, tasks that have been given less processing time
( smaller values of vruntime ) are toward the left side of the tree, and tasks
that have been given more processing time are on the right side. According
to the properties of a binary search tree, the leftmost node has the smallest
key value, which for the sake of the CFS scheduler means that it is the task
with the highest priority. Because the red-black tree is balanced, navigating
it to discover the leftmost node will require O(lgN) operations (where N
is the number of nodes in the tree). However, for efficiency reasons, the
Linux scheduler caches this value in the variable rb_leftmost, and thus
determining which task to run next requires only retrieving the cached value.


• The Linux scheduler is a preemptive priority-based algorithm with two priority
ranges - a real-time range from 0 to 99 and a nice range from 100 to 140.
• Unlike Solaris or XP, Linux assigns longer time quanta to higher priority
tasks.

Figure 17 - Scheduling priorities on a Linux system (Operating System. http://www.os-book.com).

Figure 18 – The relationship between priorities and time-slice length (Operating System. http://www.os-book.com).

• A runnable task is considered eligible for execution as long as it has not
consumed all the time available in its time slice. Those tasks are stored in
an active array, indexed according to priority.

o When a process consumes its time slice, it is moved to an expired
array. The task's priority may be re-assigned as part of the transferal.
o When the active array becomes empty, the two arrays are swapped.
o These arrays are stored in runqueue structures. On multiprocessor
machines, each processor has its own scheduler with its own runqueue.

Figure 19 – List of tasks indexed according to priority (Operating System.
http://www.os-book.com).

Example: Windows XP Scheduling

• Windows XP uses a priority-based preemptive scheduling algorithm.


• The dispatcher uses a 32-level priority scheme to determine the order of thread
execution, divided into two classes - variable class from 1 to 15 and real-time
class from 16 to 31, (plus a thread at priority 0 managing memory. )
• There is also a special idle thread that is scheduled when no other threads are
ready.
• Win XP identifies six priority classes and seven relative priorities within
each class ( see the table below ).
• Processes are also each given a base priority within their priority class. When
variable class processes consume their entire time quanta, then their priority
gets lowered, but not below their base priority.
• Processes in the foreground ( active window ) have their scheduling quanta
multiplied by 3, to give better response to interactive processes in the
foreground.

Figure 20 - Windows thread priorities (Operating System. http://www.os-book.com).

Example: Solaris Scheduling

• Priority-based kernel thread scheduling.


• Four classes ( real-time, system, interactive, and time-sharing ), and multiple
queues / algorithms within each class.
• Default is time-sharing.
o Process priorities and time slices are adjusted dynamically in a
multilevel-feedback priority queue system.
o Time slices are inversely proportional to priority - Higher priority jobs get
smaller time slices.
o Interactive jobs have higher priority than CPU-Bound ones.
o See the table below for some of the 60 priority levels and how they shift.
"Time quantum expired" and "return from sleep" indicate the new priority
when those events occur. ( Larger numbers are a higher, i.e. better,
priority. )

Figure 21 - Solaris dispatch table for time-sharing and interactive threads (Operating System. http://www.os-book.com).

• Solaris 9 introduced two new scheduling classes: Fixed priority and fair share.
o Fixed priority is similar to time sharing, but not adjusted dynamically.
o Fair share uses shares of CPU time rather than priorities to schedule
jobs. A certain share of the available CPU time is allocated to a project,
which is a set of processes.
• System class is reserved for kernel use. ( User programs running in kernel
mode are NOT considered in the system scheduling class. )

Figure 22 - Solaris scheduling (Operating System. http://www.os-book.com).

ALGORITHM EVALUATION

• The first step in determining which algorithm (and what parameter settings
within that algorithm) is optimal for a particular operating environment is to
determine what criteria are to be used, what goals are to be targeted, and what
constraints if any must be applied. For example, one might want to "maximize
CPU utilization, subject to a maximum response time of 1 second".
• Once criteria have been established, then different algorithms can be analyzed
and a "best choice" determined. The following sections outline some different
methods for determining the "best choice".

Deterministic Modeling

• If a specific workload is known, then the exact values for major criteria can be
fairly easily calculated, and the "best" determined. For example, consider the
following workload ( with all processes arriving at time 0 ), and the resulting
schedules determined by three different algorithms:

Process    Burst Time
P1         10
P2         29
P3         3
P4         7
P5         12

Gantt charts for FCFS, non-preemptive SJF, and Round Robin are shown in the
source figures but are not reproduced here.
• The average waiting times for FCFS, SJF, and RR are 28ms, 13ms, and 23ms
respectively.

• Deterministic modeling is fast and easy, but it requires specific known input,
and the results only apply for that particular set of input. However by
examining multiple similar cases, certain trends can be observed. ( Like the fact
that for processes arriving at the same time, SJF will always yield the shortest
average wait time. )
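The three averages quoted above are easy to reproduce. The sketch below assumes
a time quantum of 10 for the round-robin case (the value that yields the 23 ms
figure); the helper names are ours, not a library API:

def fcfs_wait(bursts):
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)
        elapsed += b
    return waits

def sjf_wait(bursts):
    # Non-preemptive SJF on simultaneous arrivals = FCFS over the sorted bursts
    return fcfs_wait(sorted(bursts))

def rr_wait(bursts, quantum):
    remaining, finish = list(bursts), [0] * len(bursts)
    clock, queue = 0, list(range(len(bursts)))
    while queue:
        i = queue.pop(0)
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] == 0:
            finish[i] = clock
        else:
            queue.append(i)
    return [finish[i] - bursts[i] for i in range(len(bursts))]  # waiting = turnaround - burst

bursts = [10, 29, 3, 7, 12]                  # P1..P5, all arriving at time 0
for name, waits in [("FCFS", fcfs_wait(bursts)),
                    ("SJF", sjf_wait(bursts)),
                    ("RR, q=10", rr_wait(bursts, 10))]:
    print(name, sum(waits) / len(waits))     # 28.0, 13.0, 23.0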

Queuing Models

• Specific process data is often not available, particularly for future times.
However a study of historical performance can often produce statistical
descriptions of certain important parameters, such as the rate at which new
processes arrive, the ratio of CPU bursts to I/O times, the distribution of CPU
burst times and I/O burst times, etc.
• Armed with those probability distributions and some mathematical formulas, it
is possible to calculate certain performance characteristics of individual waiting
queues. For example, Little's Formula says that for an average queue length of
N, an average waiting time in the queue of W, and an average arrival rate of
new jobs into the queue of Lambda, these three terms are related by:

N = Lambda * W
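As a quick numeric illustration (the figures are hypothetical), if jobs arrive at
7 per second and each spends 2 seconds waiting, Little's formula gives the
average queue length directly:

arrival_rate = 7            # Lambda: jobs arriving per second
avg_wait = 2                # W: average seconds spent waiting in the queue
avg_queue_length = arrival_rate * avg_wait
print(avg_queue_length)     # N = 14 jobs in the queue on average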

• Queuing models treat the computer as a network of interconnected queues,


each of which is described by its probability distribution statistics and formulas
such as Little's formula. Unfortunately real systems and modern scheduling
algorithms are so complex as to make the mathematics intractable in many
cases with real systems.

Simulations

• Another approach is to run computer simulations of the different proposed


algorithms ( and adjustment parameters ) under different load conditions, and
to analyze the results to determine the "best" choice of operation for a particular
load pattern.
• Operating conditions for simulations are often randomly generated using
distribution functions similar to those described above.
• A better alternative when possible is to generate trace tapes, by monitoring
and logging the performance of a real system under typical expected work loads.
These are better because they provide a more accurate picture of system loads,
and also because they allow multiple simulations to be run with the identical
process load, and not just statistically equivalent loads. A compromise is to
randomly determine system loads and then save the results into a file, so that
all simulations can be run against identical randomly determined system loads.
• Although trace tapes provide more accurate input information, they can be
difficult and expensive to collect and store, and their use increases the
complexity of the simulations significantly. There is also some question as to
whether the future performance of the new system will really match the past
performance of the old system. ( If the system runs faster, users may take fewer
coffee breaks, and submit more processes per hour than under the old system.
Conversely if the turnaround time for jobs is longer, intelligent users may think
more carefully about the jobs they submit rather than randomly submitting
jobs and hoping that one of them works out. )

Figure 23 - Evaluation of CPU schedulers by simulation (Operating System.
http://www.os-book.com).

Implementation

• The only real way to determine how a proposed scheduling algorithm is going to
operate is to implement it on a real system.
• For experimental algorithms and those under development, this can cause
difficulties and resistance among users who don't care about developing OSes
and are only trying to get their daily work done.
• Even in this case, the measured results may not be definitive, for at least two
major reasons: (1) System work loads are not static, but change over time as
new programs are installed, new users are added to the system, new hardware
becomes available, new work projects get started, and even societal changes.
(For example the explosion of the Internet has drastically changed the amount
of network traffic that a system sees and the importance of handling it with
rapid response times). (2) As mentioned above, changing the scheduling system
may have an impact on the work load and the ways in which users use the
system. (The lesson gives an example of a programmer who modified his code to
write an arbitrary character to the screen at regular intervals, just so his job
would be classified as interactive and placed into a higher priority queue).
• Most modern systems provide some capability for the system administrator to
adjust scheduling parameters, either on the fly or as the result of a reboot or a
kernel rebuild.

It’s Your Turn.


If you want to know more interesting facts about VMware virtual
appliances, visit the following:

• In addition, the rise of virtualization as a mainstream (and
frequently free) computer function makes it possible to run many
operating systems on top of one core system. For example, VMware
(http://www.vmware.com) provides a free “player” for Windows on
which hundreds of free “virtual appliances” can run. VirtualBox
(http://www.virtualbox.com) provides a free, open-source virtual
machine manager on many operating systems. Using such tools,
students can try out hundreds of operating systems without
dedicated hardware.

• Operating System. http://www.wiley.com/college (and clicking
“Who’s my rep?”).

• Operating System. http://www.os-module.com.

If you want to know more interesting facts about Operating System
structures, visit the following:
• Operating system. http://www.wiley.com/college and clicking
“Who’s my rep?”

• Operating System. http://www.os-book.com.

• It may be a good idea to review the basic concepts of machine organization


and assembly language programming. You should be comfortable with the
concepts of memory, CPU, registers, I/O, interrupts, instructions, and the
instruction execution cycle. Since the operating system is the interface
between the hardware and user programs, a good understanding of operating
systems requires an understanding of both hardware and programs.

• The powerpoint and movies presentation will be additional references for you
to elaborate some of the topics see the attachment.

Activity:
Direction: Read the passage carefully and plan what you will write. Write your
answers on pad paper (yellow or white) to be submitted. Each question is worth
10 points. The essay rubric below shows the corresponding points that will guide
you in your essay.
“Construct your determination with Sustained Effort, Controlled Attention and Concentrated
Energy, Opportunities never come to those who wait… they are captured by those who dare to
attack” – Paul J. Meyer
Essay rubric (per feature):

Understanding
• Expert (9-10 points): Writing shows strong understanding
• Accomplished (7-8 points): Writing shows a clear understanding
• Capable (4-6 points): Writing shows adequate understanding
• Beginner (1-3 points): Writing shows little understanding

Quality of Writing
• Expert (9-10 points): Piece was written in an extraordinary style; very informative and well-organized
• Accomplished (7-8 points): Piece was written in an interesting style; somewhat informative and organized
• Capable (4-6 points): Piece had little style; gives some new information but poorly organized
• Beginner (1-3 points): Piece had no style; gives no new information and is very poorly organized

Grammar, Usage & Mechanics
• Expert (9-10 points): Virtually no spelling, punctuation or grammatical errors
• Accomplished (7-8 points): Few spelling and punctuation errors, minor grammatical errors
• Capable (4-6 points): A number of spelling, punctuation or grammatical errors
• Beginner (1-3 points): So many spelling, punctuation and grammatical errors that it interferes with the meaning
QUESTIONS:
1. A CPU-scheduling algorithm determines an order for the execution of its scheduled
processes. Given n processes to be scheduled on one processor, how many different
schedules are possible? Give a formula in terms of n.

2. Explain the difference between preemptive and nonpreemptive scheduling.

3. Suppose that the following processes arrive for execution at the times indicated.
Each process will run for the amount of time listed. In answering the questions, use
nonpreemptive scheduling, and base all decisions on the information you have at the
time the decision must be made.
Process Arrival Time Burst Time

P1 0.0 8

P2 0.4 4

P3 1.0 1

a. What is the average turnaround time for these processes with the
FCFS scheduling algorithm?
b. What is the average turnaround time for these processes with the
SJF scheduling algorithm?
c. The SJF algorithm is supposed to improve performance, but notice
that we chose to run process P1 at time 0 because we did not know
that two shorter processes would arrive soon. Compute what the
average turnaround time will be if the CPU is left idle for the first
1 unit and then SJF scheduling is used. Remember that processes
P1 and P2 are waiting during this idle time, so their waiting time
may increase. This algorithm could be called future-knowledge
scheduling.
4. What advantage is there in having different time-quantum sizes at
different levels of a multilevel queueing system?

POST TEST
Directions: The following questions cover general areas of this module. You may not
know the answers to all questions, but please attempt to answer them without asking
others or referring to books. Write your answers on a separate page or pad paper
to be submitted.
Choose the best answer for each question and write the letter of your
choice after the number.
1. It is the situation in which several processes access and manipulate the same data concurrently.
a. Critical Section Problem
b. Race Condition
c. General Structure
d. Entry Section
2. A solution to the critical-section problem that is complicated and generally inaccessible to
application programmers
a. Mutex Locks
b. Compare_and_swap
c. Bounded-waiting mutual exclusion
d. Test-and-set
3. Two or more processes are waiting indefinitely for an event that can be caused by
only one of the waiting processes
a. Deadlock
b. Semaphore
c. Synchronization
d. Starvation

4. The party in the readers-writers problem that does not perform any updates
a. Mutex lock
b. Dataset
c. Writers
d. Readers
5. A high level abstraction that provides a convenient and effective mechanism for
process synchronization.
a. Monitor
b. Solaris
c. Turnstile
d. Readers-writer
6. The short-term scheduler selects from among the processes in the ready queue and allocates
the CPU to one of them
a. CPU – I/O
b. CPU scheduler
c. CPU scheduling
d. CPU Utilization
7. Gives control of the CPU to the process selected by the short-term scheduler
a. Latency
b. Dispatcher
c. Response Time
d. Switch context
8. The number of programs that can be present in memory at the same time.
a. Multiprogramming
b. Jobs
c. User
d. CPU/IO Burst
9. An important criterion that measures how long it takes to execute a particular process.
a. CPU utilization
b. Throughput
c. Turnaround Time
d. Waiting Time
10. The sum of the periods spent waiting in the ready queue.
a. CPU utilization
b. Throughput
c. Turnaround Time
d. Waiting Time

REFERENCES
Silberschatz, A. et. al. (2013). Operating System Concepts. 9th Edition. Hoboken, New
Jersey, USA: John Wiley & Sons, Inc.

Stallings, William (2012). Operating System Internals and Design Principles. 7th edition.
Upper Saddle River, New Jersey: Pearson Education, Inc.
