CPU Scheduling
CPU scheduling is the activity of letting one process use the CPU while the
execution of another process is on hold (in the waiting state) because some
resource, such as I/O, is unavailable, thereby making full use of the CPU. The
aim of CPU scheduling is to make the system efficient, fast, and fair.
Whenever the CPU becomes idle, the operating system must select one of the
processes in the ready queue to be executed. The selection process is carried out by
the short-term scheduler (or CPU scheduler). The scheduler selects from among
the processes in memory that are ready to execute, and allocates the CPU to one of
them.
Once the scheduler has chosen a process, the dispatcher gives it control of the
CPU; this involves switching context, switching to user mode, and jumping to the
proper location in the user program to restart it.
CPU-scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for
example, as the result of an I/O request or an invocation of wait() for the
termination of one of its child processes).
2. When a process switches from the running state to the ready state (for
example, when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for
example, at completion of I/O).
4. When a process terminates.
In circumstances 1 and 4, there is no choice in terms of scheduling: a new
process (if one exists in the ready queue) must be selected for execution. There
is a choice, however, in circumstances 2 and 3. When scheduling takes place only
under circumstances 1 and 4, we say the scheduling scheme is non-preemptive;
otherwise the scheduling scheme is preemptive.
Non-Preemptive Scheduling
Under non-preemptive scheduling, once the CPU has been allocated to a process,
the process keeps the CPU until it releases it, either by terminating or by
switching to the waiting state. This scheduling method was used by Microsoft
Windows 3.1 and by the early Apple Macintosh operating systems.
Preemptive Scheduling
In this type of scheduling, tasks are usually assigned priorities. At times it
is necessary to run a task with a higher priority before another task, even
though that task is currently running. The running task is therefore interrupted
for some time and resumed later, once the higher-priority task has finished its
execution.
Scheduling Criteria
CPU Utilization: To make the best use of the CPU and not waste any CPU cycle,
the CPU should be kept busy most of the time (ideally 100% of the time). In a
real system, CPU utilization should range from 40% (lightly loaded) to 90%
(heavily loaded).
Waiting Time: The sum of the periods a process spends waiting in the ready
queue before it gets control of the CPU.
In general, CPU utilization and throughput are maximized while the other
criteria are minimized for proper optimization.
Scheduling Algorithms
To decide which process to execute first and which to execute last so as to
achieve maximum CPU utilization, computer scientists have defined the following
algorithms:
Shortest Job First (SJF): The process with the shortest burst time is
scheduled first. If two processes have the same burst time, FCFS is used to
break the tie. It is a non-preemptive scheduling algorithm.
This method is mostly applied in batch environments, where short jobs are
required to be given preference.
It is not an ideal method for a shared system, where the required CPU time is
unknown.
Associate with each process the length of its next CPU burst; the operating
system uses these lengths to schedule the process with the shortest next burst.
Shortest Remaining Time First (SRTF): This is the preemptive mode of the SJF
algorithm, in which jobs are scheduled according to the shortest remaining time.
Turn Around Time: The difference between completion time and arrival time.
Waiting Time (W.T): The difference between turnaround time and burst time.
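As a rough illustration, the following sketch (with hypothetical process IDs and
burst times, and all arrivals at time 0) computes the waiting and turnaround
times produced by non-preemptive SJF:
#include <stdio.h>
#include <stdlib.h>

struct proc { int pid; int burst; };

/* comparator for qsort: shortest burst time first */
static int by_burst(const void *a, const void *b) {
    return ((const struct proc *)a)->burst - ((const struct proc *)b)->burst;
}

int main(void) {
    struct proc p[] = { {1, 6}, {2, 8}, {3, 7}, {4, 3} };  /* hypothetical data */
    int n = sizeof p / sizeof p[0];
    qsort(p, n, sizeof p[0], by_burst);       /* shortest job first */
    int clock = 0, total_wait = 0, total_tat = 0;
    for (int i = 0; i < n; i++) {
        int wait = clock;                     /* time spent in the ready queue */
        int tat  = wait + p[i].burst;         /* turnaround = completion - arrival (arrival = 0) */
        printf("P%d: waiting=%d turnaround=%d\n", p[i].pid, wait, tat);
        total_wait += wait;
        total_tat  += tat;
        clock += p[i].burst;                  /* CPU runs this burst to completion */
    }
    printf("avg waiting=%.2f avg turnaround=%.2f\n",
           (double)total_wait / n, (double)total_tat / n);
    return 0;
}
For these bursts the schedule is P4, P1, P3, P2, giving an average waiting time
of 7 and an average turnaround time of 13.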
Objectives of a Process Scheduling Algorithm:
Max throughput [number of processes that complete their execution per time unit]
Min waiting time [time a process waits in the ready queue]
Some useful facts about scheduling algorithms: FCFS can cause long waiting
times, especially when the first job takes too much CPU time.
Both SJF and Shortest Remaining Time First algorithms may cause starvation:
consider a situation in which a long process waits in the ready queue while
shorter processes keep arriving ahead of it.
Topic Thread
A thread is a single sequential stream of execution within a process. Because
threads have some of the properties of processes, they are sometimes called
lightweight processes. Threads allow multiple streams of execution within a
process, and in many respects they are a popular way to improve application
performance through parallelism. A thread can be in any of several states
(Running, Blocked, Ready, or Terminated). Each thread has its own stack: since
each thread will generally call different procedures and thus have a different
execution history, it needs a stack of its own. In an operating system that
provides a thread facility, the basic unit of CPU utilization is a thread. A
thread consists of a program counter (PC), a register set, and a stack space.
Threads are not independent of one another the way processes are; as a result,
a thread shares with the other threads of its process (also known as a task)
the code section, the data section, and OS resources such as open files and
signals.
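As a minimal sketch (using POSIX threads; the worker function name and the
shared counter are illustrative, not from the text), the following program
creates two threads that run in the same address space and touch the same
global data:
#include <pthread.h>
#include <stdio.h>

int shared = 0;   /* data section: visible to every thread in the task */

void *worker(void *arg) {
    const char *name = arg;
    shared++;     /* threads can touch shared data directly, no IPC needed;
                     unsynchronized updates like this are unsafe in general
                     (see the discussion of race conditions later) */
    printf("%s running, shared=%d\n", name, shared);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "thread-1");
    pthread_create(&t2, NULL, worker, "thread-2");
    pthread_join(t1, NULL);   /* wait for both threads to finish */
    pthread_join(t2, NULL);
    return 0;
}
Each thread gets its own stack and registers, but both share the process's
code, data, and open files.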
Processes Vs Threads
As mentioned earlier, in many respects threads operate in the same way as
processes. Some of the similarities and differences are:
Similarities: Like processes, threads share the CPU, and only one thread is
active (running) at a time.
Like a process, if one thread is blocked, another thread can run.
Differences: Unlike processes, threads are not independent of one another.
Unlike processes, all threads can access every address in the task.
Unlike processes, threads are designed to assist one another. Note that
processes might or might not assist one another, because processes may
originate from different users.
Why Threads?
Following are some reasons why we use threads in designing operating systems. A
process with multiple threads makes a great server, for example a print server.
Because threads can share common data, they do not need to use interprocess
communication. By their very nature, threads can take advantage of
multiprocessors. Threads are cheap in the sense that they only need a stack and
storage for registers; therefore, threads are cheap to create. Threads use very
few resources of the operating system in which they are working: they do not
need a new address space, global data, program code, or operating system
resources. Context switching is fast when working with threads, because we only
have to save and/or restore the PC, the SP, and the registers. But this
cheapness does not come free: the biggest drawback is that there is no
protection between threads.
Advantages of Thread
Threads minimize the context switching time.
Use of threads provides concurrency within a process.
Efficient communication.
It is more economical to create and context switch threads.
Threads allow utilization of multiprocessor architectures to a greater scale
and efficiency.
Process Components
s.no Component Description
1 Stack The process stack contains temporary data such as method/function parameters, return addresses, and local variables.
2 Heap This is memory that is dynamically allocated to a process during its run time.
Program
#include <stdio.h>
int main() {
    return 0;   /* a minimal program: its compiled code forms the text section */
}
Critical Section
A critical section is a code segment that accesses shared variables and has to
be executed as an atomic action. This means that in a group of cooperating
processes, at a given point of time, only one process may be executing its
critical section. If any other process also wants to execute its critical
section, it must wait until the first one finishes.
A solution to the critical section problem must satisfy the following three
conditions:
1. Mutual Exclusion: At any given point of time, only one process may be
executing in its critical section.
2. Progress: If no process is in its critical section, and one or more
processes want to execute their critical sections, then one of them must be
allowed to enter its critical section.
3. Bounded Waiting: After a process makes a request to enter its critical
section, there is a limit on how many other processes can enter their critical
sections before this process's request is granted. Once the limit is reached,
the system must grant the process permission to enter its critical section.
Race Condition: When more than one process executes the same code, or accesses
the same memory or shared variable, there is a possibility that the output or
the value of the shared variable ends up wrong; the processes effectively race,
each one's result depending on who got there first. This is known as a race
condition. When several processes access and manipulate the same data
concurrently, the outcome depends on the particular order in which the accesses
take place.
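The following sketch (with an illustrative counter and iteration count)
demonstrates a race condition: two threads increment a shared counter without
synchronization, and because counter++ is a separate load, add, and store,
updates are lost:
#include <pthread.h>
#include <stdio.h>

#define N_ITERS 1000000
long counter = 0;            /* shared variable, no protection */

void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < N_ITERS; i++)
        counter++;           /* read-modify-write: interleavings lose updates */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* the result is usually less than 2 * N_ITERS, and varies run to run */
    printf("expected %d, got %ld\n", 2 * N_ITERS, counter);
    return 0;
}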
Peterson’s Solution
Peterson’s Solution is a classical software-based solution to the critical
section problem. In Peterson’s solution we have two shared variables:
boolean flag[i]: initialized to FALSE; initially no one is interested in
entering the critical section.
int turn: the process whose turn it is to enter the critical section.
Peterson’s Solution preserves all three conditions: Mutual Exclusion is
assured, as only one process can access the critical section at any time.
Progress is also assured, as a process outside the critical section does not
block other processes from entering it. Bounded Waiting is preserved, as every
process gets a fair chance.
It is limited to 2 processes.
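A minimal sketch of Peterson’s entry and exit sections, using the flag[] and
turn variables named above (the function names are illustrative; on modern
hardware the shared variables would need to be atomic or fenced for this to be
reliable):
#include <stdbool.h>

bool flag[2] = { false, false }; /* flag[i]: process i wants to enter */
int turn = 0;                    /* whose turn it is to enter */

void enter_critical_section(int i) {
    int j = 1 - i;               /* the other process */
    flag[i] = true;              /* announce interest */
    turn = j;                    /* politely yield the turn to the other */
    while (flag[j] && turn == j)
        ;                        /* busy-wait while the other is inside */
}

void exit_critical_section(int i) {
    flag[i] = false;             /* no longer interested */
}
Process i calls enter_critical_section(i) before its critical section and
exit_critical_section(i) after it.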
Synchronization Hardware
Many systems provide hardware support for critical section code. The critical
section problem could be solved easily in a single-processor environment if we
could prevent interrupts from occurring while a shared variable or resource is
being modified; we could then be sure that the current sequence of instructions
would execute in order, without preemption. Unfortunately, this solution is not
feasible in a multiprocessor environment. Disabling interrupts in a
multiprocessor environment can be time consuming, as the message must be passed
to all the processors. This message transmission lag delays entry of threads
into the critical section, and system efficiency decreases.
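As a sketch of the hardware approach, a spinlock can be built from an atomic
test-and-set primitive; here C11's atomic_flag stands in for the hardware
instruction (the lock and function names are illustrative):
#include <stdatomic.h>

atomic_flag cs_lock = ATOMIC_FLAG_INIT;   /* initially clear (unlocked) */

void acquire(void) {
    /* atomically set the flag and get its previous value;
       keep spinning until the previous value was 'clear' */
    while (atomic_flag_test_and_set(&cs_lock))
        ;
}

void release(void) {
    atomic_flag_clear(&cs_lock);          /* unlock */
}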
Mutex Locks
As the synchronization hardware solution is not easy for everyone to implement,
a strict software approach called Mutex Locks was introduced. In this approach,
in the entry section of the code, a LOCK is acquired over the critical
resources modified and used inside the critical section, and in the exit
section that LOCK is released. As the resource is locked while a process
executes its critical section, no other process can access it.
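A minimal sketch of this entry/exit pattern with a POSIX mutex (the deposit
function and balance variable are illustrative):
#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int balance = 0;                   /* shared resource */

void deposit(int amount) {
    pthread_mutex_lock(&lock);     /* entry section: acquire the LOCK */
    balance += amount;             /* critical section */
    pthread_mutex_unlock(&lock);   /* exit section: release the LOCK */
}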
TOPIC Semaphores
A semaphore is an integer variable that can be accessed only through two
operations, wait() and signal(). There are two types of semaphores: binary
semaphores and counting semaphores.
Binary Semaphores: They can only be 0 or 1. They are also known as mutex
locks, as they can provide mutual exclusion. All the processes can share the
same mutex semaphore, initialized to 1. A process has to wait() until the
semaphore's value is 1; the wait() then sets it to 0 and the process enters its
critical section. When the process completes its critical section, it performs
signal(), resetting the value of the mutex semaphore to 1 so that some other
process can enter its critical section.
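A minimal sketch of a binary semaphore used this way, with POSIX semaphores
(the function names are illustrative):
#include <semaphore.h>

sem_t mutex;

void init(void) { sem_init(&mutex, 0, 1); }  /* value 1: unlocked; 0 = share between threads */

void critical(void) {
    sem_wait(&mutex);    /* wait(): blocks until the value is positive, then decrements */
    /* ... critical section ... */
    sem_post(&mutex);    /* signal(): increments, releasing the semaphore */
}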
Counting Semaphores: They can take any non-negative value and are not
restricted to 0 and 1. They can be used to control access to a resource that
has a limit on the number of simultaneous accesses. The semaphore is
initialized to the number of instances of the resource. Whenever a process
wants to use the resource, it checks whether the number of remaining instances
is more than zero, i.e., whether an instance is available. If so, the process
can enter its critical section, decreasing the value of the counting semaphore
by 1. When the process is done using the instance of the resource, it leaves
the critical section, adding 1 back to the number of available instances of the
resource.
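A minimal sketch of a counting semaphore guarding a resource with, say, 3
instances (the count and function names are illustrative), again using POSIX
semaphores:
#include <semaphore.h>

#define N_INSTANCES 3
sem_t resource;

void init(void) { sem_init(&resource, 0, N_INSTANCES); }  /* start with 3 available */

void use_resource(void) {
    sem_wait(&resource);   /* claim one instance (blocks if none remain) */
    /* ... use the resource instance ... */
    sem_post(&resource);   /* return the instance to the pool */
}
A binary semaphore is just the special case where the count is initialized to 1.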