Unit III

A task is a unit of work in software that corresponds to OS threads when deployed to an embedded processor. A task in RTOS is a piece of schedulable code that accomplishes a useful purpose. A process is an actively executing program with its own memory space, while a thread is a lightweight sub-process that can be scheduled independently and shares resources with other threads of the same process.



What is a Task?

A task is a unit of execution or unit of work in a software application. Typically, task
execution in an embedded processor is managed by the operating system (OS). When
deployed to the embedded processor, a task corresponds to an OS thread.

What is a task in RTOS?


In a general-purpose operating system, we call it a process, but in an RTOS, we call
it a task. A task is nothing but a piece of code that is schedulable and which
accomplishes a useful purpose.

What is a Process?

A process is an active program, i.e., a program that is under execution. It is more than
the program code, as it includes the program counter, process stack, registers, program
code etc. Compared to this, the program code is only the text section.

When a computer program is triggered to execute, it does not run directly; the system
first determines the steps required for the program's execution, and carrying out these
steps is referred to as a process. An individual process takes its own memory space
and does not share this space with other processes.

Processes can be classified into two types, namely the parent process and the clone
process. A clone process, also called a child process, is one which is created by another
process, while the parent process is the main process, responsible for creating other
processes so that multiple tasks can be performed at a time.

What is a Thread?

A thread is a lightweight process that can be managed independently by a scheduler. It
improves application performance through parallelism. A thread shares information such
as the data segment, code segment, files, etc. with its peer threads, while it contains
its own registers, stack, counter, etc.

A thread is basically a subpart of a large process. Within a process, all threads are
interrelated. A typical thread works on information like the data segment, code segment,
etc., which is shared with its peer threads during execution. The most important feature
of threads is that they share memory, data, resources, etc. with the peer threads of the
process to which they belong. Also, all the threads within a process must be synchronized
to avoid unexpected results.
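The shared-memory property described above can be seen in a short sketch using Python's threading module (the counter variable, thread count, and iteration count here are illustrative, not from the source):

```python
import threading

counter = 0                # shared data: visible to every thread of this process
lock = threading.Lock()    # synchronization, to avoid unexpected results

def increment(n):
    global counter
    for _ in range(n):
        with lock:         # without the lock, updates from peer threads could interleave
            counter += 1

# Two peer threads of the same process update the same 'counter' memory.
threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 -- both threads wrote to the same variable
```

Two separate processes would each get their own copy of `counter`; sharing it between processes would require explicit inter-process communication.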

Difference between Process and Thread

The following table highlights the major differences between a process and a thread −

Basis of Comparison | Process | Thread
Definition | A process is a program under execution, i.e., an active program. | A thread is a lightweight process that can be managed independently by a scheduler.
Context switching time | Processes require more time for context switching as they are heavier. | Threads require less time for context switching as they are lighter than processes.
Memory sharing | Processes are totally independent and do not share memory. | A thread may share some memory with its peer threads.
Communication | Communication between processes requires more time than between threads. | Communication between threads requires less time than between processes.
Blocking | If a process gets blocked, the remaining processes can continue execution. | If a user-level thread gets blocked, all of its peer threads also get blocked.
Resource consumption | Processes require more resources than threads. | Threads generally need fewer resources than processes.
Dependency | Individual processes are independent of each other. | Threads are parts of a process and so are dependent.
Data and code sharing | Processes have independent data and code segments. | A thread shares the data segment, code segment, files, etc. with its peer threads.
Treatment by OS | All the different processes are treated separately by the operating system. | All user-level peer threads are treated as a single task by the operating system.
Time for creation | Processes require more time for creation. | Threads require less time for creation.
Time for termination | Processes require more time for termination. | Threads require less time for termination.

Conclusion

The most significant difference between a process and a thread is that a process is an
active program being executed by the computer, whereas a thread is a lightweight
process that can be managed independently by a scheduler.
Preemptive Scheduling
Preemptive scheduling is used when a process switches from the running state to the
ready state or from the waiting state to the ready state. The resources (mainly CPU
cycles) are allocated to the process for a limited amount of time and then taken away,
and the process is again placed back in the ready queue if that process still has CPU
burst time remaining. That process stays in the ready queue till it gets its next chance
to execute.

Algorithms based on preemptive scheduling are Round Robin (RR), Shortest Remaining
Time First (SRTF), Priority (preemptive version), etc.

Preemptive scheduling has a number of advantages and disadvantages. The following
are preemptive scheduling's benefits and drawbacks:
Advantages
1. Because a process may not monopolize the processor, it is a more reliable
method.
2. A running task can be interrupted when an event occurs, so urgent work is
attended to promptly.
3. The average response time is improved.
4. Utilizing this method in a multi-programming environment is more
advantageous.
5. The operating system makes sure that every process gets a fair share of
CPU time.
Disadvantages
1. It consumes limited computational resources, since scheduling decisions
themselves take CPU time.
2. Suspending the running process, changing the context, and dispatching the new
incoming process all take more time.
3. A low-priority process would have to wait if multiple high-priority processes
arrived at the same time.
Non-Preemptive Scheduling
Non-preemptive scheduling is used when a process terminates, or a process switches
from the running state to the waiting state. In this scheduling, once the resources (CPU
cycles) are allocated to a process, the process holds the CPU till it gets terminated or
reaches a waiting state. Non-preemptive scheduling does not interrupt a process running
on the CPU in the middle of its execution. Instead, it waits till the process completes
its CPU burst time, and then it can allocate the CPU to another process.
Algorithms based on non-preemptive scheduling are Shortest Job First (SJF, basically
non-preemptive) and Priority (non-preemptive version), etc.

Non-preemptive scheduling has both advantages and disadvantages. The following are
non-preemptive scheduling’s benefits and drawbacks:
Advantages
1. It has a minimal scheduling burden.
2. It is a very easy procedure.
3. Less computational resources are used.
4. It has a high throughput rate.
Disadvantages
1. Its response time to processes is poor.
2. Bugs can cause a computer to freeze up.

Key Differences Between Preemptive and Non-Preemptive Scheduling


1. In preemptive scheduling, the CPU is allocated to the processes for a limited
time whereas, in Non-preemptive scheduling, the CPU is allocated to the
process till it terminates or switches to the waiting state.
2. The executing process in preemptive scheduling is interrupted in the middle
of execution when a higher-priority one arrives whereas, the executing process
in non-preemptive scheduling is not interrupted in the middle of execution and
runs till it completes.
3. In preemptive scheduling, there is the overhead of switching the process from
the ready state to the running state and vice versa, and of maintaining the ready
queue, whereas non-preemptive scheduling has no overhead of switching the
process from the running state to the ready state.
4. In preemptive scheduling, if a high-priority process frequently arrives in the
ready queue, then a low-priority process has to wait for a long time, and it may
have to starve. In non-preemptive scheduling, if the CPU is allocated to a process
having a larger burst time, then the processes with a small burst time may have to
starve.
5. Preemptive scheduling attains flexibility by allowing the critical processes to
access the CPU as they arrive in the ready queue, no matter what process is
executing currently. Non-preemptive scheduling is called rigid as even if a
critical process enters the ready queue the process running CPU is not
disturbed.
6. Preemptive scheduling has to maintain the integrity of shared data, which is
why it has an associated cost; this is not the case with non-preemptive scheduling.

Comparison Chart

Parameter | Preemptive Scheduling | Non-Preemptive Scheduling
Basic | Resources (CPU cycles) are allocated to a process for a limited time. | Once resources (CPU cycles) are allocated to a process, the process holds them till it completes its burst time or switches to the waiting state.
Interrupt | A process can be interrupted in between. | A process cannot be interrupted until it terminates itself or its time is up.
Starvation | If a process having high priority frequently arrives in the ready queue, a low-priority process may starve. | If a process with a long burst time is running on the CPU, a later-coming process with less CPU burst time may starve.
Overhead | It has the overhead of scheduling the processes; the overhead is higher due to frequent context switching. | It does not have scheduling overhead, since context switching is less frequent.
Flexibility | Flexible. | Rigid.
Cost | Cost associated. | No cost associated.
CPU utilization | High. | Low.
Waiting time | Less. | High.
Response time | Less. | High.
Decision making | Decisions are made by the scheduler and are based on priority and time-slice allocation. | Decisions are made by the process itself, and the OS just follows the process's instructions.
Process control | The OS has greater control over the scheduling of processes. | The OS has less control over the scheduling of processes.
Examples | Examples of preemptive scheduling are Round Robin and Shortest Remaining Time First. | Examples of non-preemptive scheduling are First Come First Serve and Shortest Job First.

Round Robin Scheduling:

Round Robin is a CPU scheduling algorithm where each process is cyclically assigned a
fixed time slot. It is the preemptive version of the First come First Serve CPU Scheduling
algorithm.
• The Round Robin CPU algorithm generally focuses on the Time Sharing technique.
• The period of time for which a process or job is allowed to run in a
preemptive method is called the time quantum.
• Each process or job present in the ready queue is assigned the CPU for that
time quantum; if the execution of the process is completed during that time,
then the process ends, else the process goes back to the ready queue
and waits for its next turn to complete its execution.
Characteristics of Round Robin Algorithm
• It is simple, easy to implement, and starvation-free as all processes get a
fair share of CPU.
• It is one of the most commonly used techniques in CPU scheduling.
• It is preemptive as processes are assigned CPU only for a fixed slice of time
at most.
• The disadvantage of it is more overhead of context switching.

Examples to show working of Round Robin Scheduling Algorithm

Example-1: Consider the following table of arrival time and burst time for four
processes P1, P2, P3, and P4 and given Time Quantum = 2

Process | Burst Time | Arrival Time
P1 | 5 ms | 0 ms
P2 | 4 ms | 1 ms
P3 | 2 ms | 2 ms
P4 | 1 ms | 4 ms

The Round Robin CPU Scheduling Algorithm will work on the basis of steps as
mentioned below:

At time = 0,
• The execution begins with process P1, which has burst time 5.
• Every process executes for 2 milliseconds (the Time Quantum period).
P2 and P3 arrive during this interval and wait in the ready queue.

Time Instance | Process | Arrival Time | Ready Queue | Running Queue | Execution Time | Initial Burst Time | Remaining Burst Time
0-2ms | P1 | 0ms | P2, P3 | P1 | 2ms | 5ms | 3ms

At time = 2,
• P3 has arrived and P1 returns to the ready queue; P2 starts executing
for the TQ period.

Time Instance | Process | Arrival Time | Ready Queue | Running Queue | Execution Time | Initial Burst Time | Remaining Burst Time
2-4ms | P1 | 0ms | P3, P1 | P2 | 0ms | 3ms | 3ms
2-4ms | P2 | 1ms | P3, P1 | P2 | 2ms | 4ms | 2ms

At time = 4,
• The process P4 arrives in the ready queue.
• Then P3 executes for the TQ period.

Time Instance | Process | Arrival Time | Ready Queue | Running Queue | Execution Time | Initial Burst Time | Remaining Burst Time
4-6ms | P1 | 0ms | P1, P4, P2 | P3 | 0ms | 3ms | 3ms
4-6ms | P2 | 1ms | P1, P4, P2 | P3 | 0ms | 2ms | 2ms
4-6ms | P3 | 2ms | P1, P4, P2 | P3 | 2ms | 2ms | 0ms

At time = 6,
• Process P3 completes its execution.
• Process P1 starts executing for the TQ period as it is next in the ready queue.

Time Instance | Process | Arrival Time | Ready Queue | Running Queue | Execution Time | Initial Burst Time | Remaining Burst Time
6-8ms | P1 | 0ms | P4, P2 | P1 | 2ms | 3ms | 1ms
6-8ms | P2 | 1ms | P4, P2 | P1 | 0ms | 2ms | 2ms
Advantages of Round Robin CPU Scheduling Algorithm


• There is fairness, since every process gets an equal share of the CPU.
• The newly created process is added to the end of the ready queue.
• A round-robin scheduler generally employs time-sharing, giving each job a
time slot or quantum.
• While performing round-robin scheduling, a particular time quantum is
allotted to different jobs.
• Each process gets a chance to be rescheduled after a particular quantum time
in this scheduling.
Disadvantages of Round Robin CPU Scheduling Algorithm
• There is a larger waiting time and response time.
• There is low throughput.
• There are frequent context switches.
• The Gantt chart becomes very large if the time quantum is small (for
example, 1 ms).
• Scheduling is time-consuming for a small quantum.

First Come First Serve Scheduling:

(Non Preemptive)
What is First Come First Serve Scheduling?

The First Come First Serve scheduling algorithm is non-preemptive in nature, i.e., if a
process is already running, then it is not interrupted by another process until the
currently running process has executed completely.

FCFS is the simplest CPU scheduling algorithm; it schedules processes according to their
arrival times. The First Come First Serve scheduling algorithm states that the process
that requests the CPU first is allocated the CPU first. It is implemented by using a FIFO
queue. When a process enters the ready queue, its PCB is linked to the tail of the queue.
When the CPU is free, it is allocated to the process at the head of the queue. The running
process is then removed from the queue. FCFS is a non-preemptive scheduling
algorithm.
Characteristics of FCFS
• FCFS is a non-preemptive CPU scheduling algorithm.
• Tasks are always executed on a First-come, First-serve concept.
• FCFS is easy to implement and use.
• This algorithm is not very efficient in performance, and the wait time is quite
high.
Algorithm for FCFS Scheduling
• The waiting time for the first process is 0 as it is executed first.
• The waiting time for the upcoming process can be calculated by:
wt[i] = ( at[i – 1] + bt[i – 1] + wt[i – 1] ) – at[i]
where
• wt[i] = waiting time of current process
• at[i-1] = arrival time of previous process
• bt[i-1] = burst time of previous process
• wt[i-1] = waiting time of previous process
• at[i] = arrival time of current process
• The Average waiting time can be calculated by:
Average Waiting Time = (sum of all waiting time)/(Number of processes)
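The formula above can be applied directly to the five processes of the example that follows (a sketch; the max(..., 0) guard is an added safety check, since the formula as stated assumes the CPU is never idle):

```python
# Processes of Example-1: (name, arrival time, burst time), sorted by arrival
procs = [("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1), ("P4", 3, 2), ("P5", 4, 5)]

wt = [0]  # waiting time of the first process is 0, as it executes immediately
for i in range(1, len(procs)):
    _, at_prev, bt_prev = procs[i - 1]
    _, at_cur, _ = procs[i]
    # wt[i] = (at[i-1] + bt[i-1] + wt[i-1]) - at[i], clamped at 0 if the CPU was idle
    wt.append(max(at_prev + bt_prev + wt[-1] - at_cur, 0))

print(wt)                 # [0, 3, 5, 5, 6]
print(sum(wt) / len(wt))  # 3.8
```

Working it by hand: wt[P2] = (0 + 4 + 0) - 1 = 3, wt[P3] = (1 + 3 + 3) - 2 = 5, and so on, giving an average waiting time of 19/5 = 3.8 ms.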
Examples to Show Working of Non-Preemptive First come First Serve CPU Scheduling
Algorithm
Example-1: Consider the following table of arrival time and burst time for five
processes P1, P2, P3, P4 and P5.

Process | Arrival Time | Burst Time
P1 | 0 | 4
P2 | 1 | 3
P3 | 2 | 1
P4 | 3 | 2
P5 | 4 | 5

The First come First serve CPU Scheduling Algorithm will work on the basis of steps
as mentioned below:
Step 0: At time = 0,
• The execution begins with P1, as it has arrival time 0.

Time Instance | Process | Arrival Time | Waiting Table | Execution Time | Initial Burst Time | Remaining Burst Time
0-1ms | P1 | 0ms | - | 1ms | 4ms | 3ms

Step 1: At time = 1,
• The process P2 arrives, but process P1 is still executing.
• Thus, P2 is kept in the waiting table and waits for its execution.

Time Instance | Process | Arrival Time | Waiting Table | Execution Time | Initial Burst Time | Remaining Burst Time
1-2ms | P1 | 0ms | - | 1ms | 3ms | 2ms
1-2ms | P2 | 1ms | P2 | 0ms | 3ms | 3ms

Step 2: At time = 2,
• The process P3 arrives and is kept in the waiting table.
• Process P1 is still executing, as its burst time is 4.

Time Instance | Process | Arrival Time | Waiting Table | Execution Time | Initial Burst Time | Remaining Burst Time
2-3ms | P1 | 0ms | - | 1ms | 2ms | 1ms
2-3ms | P2 | 1ms | P2 | 0ms | 3ms | 3ms
2-3ms | P3 | 2ms | P2, P3 | 0ms | 1ms | 1ms

Step 3: At time = 3,
• The process P4 arrives and is kept in the waiting table.
• Process P1 is still executing, as its burst time is 4.

Time Instance | Process | Arrival Time | Waiting Table | Execution Time | Initial Burst Time | Remaining Burst Time
3-4ms | P1 | 0ms | - | 1ms | 1ms | 0ms
3-4ms | P2 | 1ms | P2 | 0ms | 3ms | 3ms
3-4ms | P3 | 2ms | P2, P3 | 0ms | 1ms | 1ms
3-4ms | P4 | 3ms | P2, P3, P4 | 0ms | 2ms | 2ms

Step 4: At time = 4,
• The process P1 completes its execution.
• Process P5 arrives in the waiting table while process P2 starts executing.

Time Instance | Process | Arrival Time | Waiting Table | Execution Time | Initial Burst Time | Remaining Burst Time
4-5ms | P2 | 1ms | - | 1ms | 3ms | 2ms
4-5ms | P3 | 2ms | P3 | 0ms | 1ms | 1ms
4-5ms | P4 | 3ms | P3, P4 | 0ms | 2ms | 2ms
4-5ms | P5 | 4ms | P3, P4, P5 | 0ms | 5ms | 5ms
Advantages of FCFS

• It is the simplest and most basic form of CPU scheduling algorithm.
• Easy to implement.
• First come, first served.
• It is well suited for batch systems, where the longer time periods for each process are
often acceptable.

Disadvantages of FCFS

• As it is a non-preemptive CPU scheduling algorithm, a process runs till it finishes its
execution.
• The average waiting time in FCFS is much higher than in other algorithms.
• It suffers from the convoy effect.
• It is not very efficient due to its simplicity.
• Processes that are at the end of the queue have to wait longer to finish.
• It is not suitable for time-sharing operating systems, where each process should get the
same amount of CPU time.
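The convoy effect mentioned above is easy to quantify: one long job arriving first makes every short job behind it wait. A minimal sketch (the burst times are illustrative, with all jobs assumed to arrive at time 0):

```python
def fcfs_avg_wait(bursts):
    """Average waiting time under FCFS when all jobs arrive at time 0."""
    wait, elapsed = 0, 0
    for bt in bursts:
        wait += elapsed    # this job waited for every job ahead of it
        elapsed += bt
    return wait / len(bursts)

print(fcfs_avg_wait([24, 3, 3]))  # 17.0 -- long job first: convoy effect
print(fcfs_avg_wait([3, 3, 24]))  # 3.0  -- short jobs first
```

The same three jobs give an average wait of 17 ms or 3 ms depending purely on arrival order, which is the inefficiency that Shortest Job First (next section) is designed to remove.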

Shortest Job First Scheduling:


What is Shortest Job First Scheduling?

Shortest Job First (SJF) is an algorithm in which the process having the smallest
execution time is chosen for the next execution. This scheduling method can be
preemptive or non-preemptive. It significantly reduces the average waiting time for
other processes awaiting execution. The full form of SJF is Shortest Job First.

There are basically two types of SJF methods:
• Non-Preemptive SJF
• Preemptive SJF
Characteristics of SJF Scheduling

• Each job has an associated unit of time (its burst time) in which to complete.
• This algorithm method is helpful for batch-type processing, where waiting for jobs
to complete is not critical.
• It can improve process throughput by making sure that shorter jobs are executed
first, and hence possibly have a short turnaround time.

Non-Preemptive SJF
In non-preemptive scheduling, once the CPU cycle is allocated to a process, the process
holds it till it reaches a waiting state or terminates.

Consider the following five processes each having its own unique burst time and arrival
time.

Process | Burst Time | Arrival Time
P1 | 6 | 2
P2 | 2 | 5
P3 | 8 | 1
P4 | 3 | 0
P5 | 4 | 4
Step 0) At time=0, P4 arrives and starts execution.

Step 1) At time = 1, process P3 arrives. But P4 still needs 2 execution units to
complete, so it will continue execution.
Step 2) At time =2, process P1 arrives and is added to the waiting queue. P4 will
continue execution.

Step 3) At time = 3, process P4 will finish its execution. The burst time of P3 and P1 is
compared. Process P1 is executed because its burst time is less compared to P3.

Step 4) At time = 4, process P5 arrives and is added to the waiting queue. P1 will
continue execution.

Step 5) At time = 5, process P2 arrives and is added to the waiting queue. P1 will
continue execution.
Step 6) At time = 9, process P1 will finish its execution. The burst time of P3, P5, and
P2 is compared. Process P2 is executed because its burst time is the lowest.

Step 7) At time=10, P2 is executing and P3 and P5 are in the waiting queue.

Step 8) At time = 11, process P2 will finish its execution. The burst time of P3 and P5 is
compared. Process P5 is executed because its burst time is lower.
Step 9) At time = 15, process P5 will finish its execution.

Step 10) At time = 23, process P3 will finish its execution.
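The step-by-step schedule above can be checked with a short non-preemptive SJF simulation (a sketch; the process tuples and the loop structure are illustrative, not from the source):

```python
# (name, burst time, arrival time) from the table above
procs = [("P1", 6, 2), ("P2", 2, 5), ("P3", 8, 1), ("P4", 3, 0), ("P5", 4, 4)]

t, done, completion = 0, set(), {}
while len(done) < len(procs):
    # among processes that have arrived and not finished, pick the shortest burst
    ready = [p for p in procs if p[2] <= t and p[0] not in done]
    if not ready:
        # CPU idle: jump forward to the next arrival
        t = min(p[2] for p in procs if p[0] not in done)
        continue
    name, bt, _ = min(ready, key=lambda p: p[1])
    t += bt                    # non-preemptive: the chosen job runs to completion
    completion[name] = t
    done.add(name)

print(completion)  # {'P4': 3, 'P1': 9, 'P2': 11, 'P5': 15, 'P3': 23}
```

The completion times match the walkthrough: P4 at 3, P1 at 9, P2 at 11, P5 at 15, and P3 at 23.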

Advantages of SJF

Here are the benefits/pros of using SJF method:

SJF is frequently used for long term scheduling.

It reduces the average waiting time over FIFO (First in First Out) algorithm.

SJF method gives the lowest average waiting time for a specific set of processes.

It is appropriate for the jobs running in batch, where run times are known in advance.

For the batch system of long-term scheduling, a burst time estimate can be obtained
from the job description.

For Short-Term Scheduling, we need to predict the value of the next burst time.

Probably optimal with regard to average turnaround time.

Disadvantages/Cons of SJF

Here are some drawbacks/cons of SJF algorithm:

Job completion time must be known earlier, but it is hard to predict.

It is often used in a batch system for long term scheduling.


SJF can’t be implemented for CPU scheduling for the short term. It is because there is no
specific method to predict the length of the upcoming CPU burst.

This algorithm may cause very long turnaround times or starvation.

Requires knowledge of how long a process or job will run.

It can lead to starvation, which increases the average turnaround time.

It is hard to know the length of the upcoming CPU request.

Elapsed time should be recorded, which results in more overhead on the processor.

Summary

SJF is an algorithm in which the process having the smallest execution time is chosen for
the next execution.

SJF Scheduling is associated with each job as a unit of time to complete.

This algorithm method is helpful for batch-type processing, where waiting for jobs to
complete is not critical.

There are basically two types of SJF methods 1) Non-Preemptive SJF and 2) Preemptive
SJF.

In non-preemptive scheduling, once the CPU cycle is allocated to process, the process
holds it till it reaches a waiting state or terminated.

In Preemptive SJF Scheduling, jobs are put into the ready queue as they come.

When a process with a shorter burst time arrives, the current process is removed or
preempted from execution, and the shorter job is executed first.

SJF is frequently used for long term scheduling.

It reduces the average waiting time over FIFO (First in First Out) algorithm.

In SJF scheduling, Job completion time must be known earlier, but it is hard to predict.

SJF can’t be implemented for CPU scheduling for the short term. It is because there is no
specific method to predict the length of the upcoming CPU burst.
