
Unit-2

REAL TIME OPERATING SYSTEM

Operating System Basics

• The operating system (OS) acts as a bridge between the user
application/task and the underlying system resources.
• Primary functions of the OS:
• (1) Make the system convenient to use.
• (2) Organize and manage the system resources efficiently
and correctly.
Types of Operating Systems

• Stand-Alone Operating Systems
• Network Operating Systems
• Embedded Operating Systems

• Stand-Alone Operating System
• A complete operating system that works on a desktop or
notebook computer.
• Examples: DOS, Windows 2000 Professional, Mac OS X.
Types of Operating Systems

• Network Operating System
• An operating system that provides extensive support for
computer networks. A network operating system typically
resides on a server.
• Examples: Windows 2000 Server, Unix, Linux, Solaris.
Types of Operating Systems

• Embedded Operating System
• Found on handheld computers and small devices; it typically
resides on a ROM chip.
• Examples: Windows CE, Pocket PC 2002, Palm OS.
• The above classification is based on the computing hardware
environment towards which the OS is targeted. All three
types can be either of a real-time or a non-real-time type.
• For example, VxWorks is an RTOS of the first category, RT-Linux
and ARTS are of the second category, and Windows CE is of the
third category.
Operating Systems (Characteristics)
• General Purpose Operating Systems (GPOS)
• The OS is more generalised and contains all kinds of services
required for executing generic applications.
• GPOS are often non-deterministic in behavior.
• Their services can introduce random delays.
• This leads to slow responsiveness of an application at
unexpected times.
• These OSes are usually deployed in computing systems where
deterministic behavior is not an important criterion.
Real Time Operating Systems
(Characteristics)
• Real time implies deterministic timing behavior.
• Deterministic timing – the OS consumes only known and
expected amounts of time regardless of the number of services.
• A well-designed RTOS delivers predictable performance.
• This is best achieved by consistent application of policies
and rules.
• Examples: VxWorks, MicroC/OS-II.
Real Time Operating Systems
(Characteristics)
• Responsiveness:
• How long does the operating system delay after acknowledging
an interrupt?
• Aspects of responsiveness include the following:
• (1) The amount of time required to begin executing the ISR.
• If the ISR requires a process switch – longer time.
• If the ISR runs within the current process – less time.
• (2) The time required to execute the ISR (depends on hardware).
• (3) The effect of interrupt nesting (if an ISR is interrupted by
another interrupt).
Real Time Operating Systems
(Characteristics)
• User control:
• In a non-real-time system, the user has no control over the
scheduling function of the operating system.
• In a real-time system, the user has control over:
• Task priorities.
• Process swapping (which processes must reside in main memory).
• Selection of the scheduling algorithm.
• Reliability
• Fail-soft operation
Real Time Kernel

• The real-time kernel is highly specialised and contains only
the minimal set of services required for running the user
application/task.
Real Time Kernel
• The basic functions of a Real Time Kernel are:
• Task/Process management
• Task / Process scheduling
• Task / Process synchronization
• Error/ Exception handling
• Memory management
• Interrupt handling
• Time management
Task/Process Management
• Deals with setting up the memory space for the task, loading
the task's code into the memory space, allocating system
resources, and setting up a task control block (TCB).
• Task management services utilise the TCB of a task in the
following ways:
• Create a TCB for a task when the task is created.
• Delete/remove the TCB of a task when the task is terminated.
• Read the TCB to get the state of a task.
• Update the TCB with parameters (context switch).
• Modify the TCB to change the priority of the task dynamically.
Task/Process Scheduling

• Deals with sharing the CPU among various tasks/processes.
• The scheduler handles task scheduling.
• Efficient and optimal scheduling of tasks provides
deterministic behavior.
Error/Exception Handling

• Deals with registering and handling the errors that occur and
the exceptions that are raised during the execution of a task.
• Insufficient memory, timeout, deadlock, deadline miss and
divide-by-zero are examples of errors/exceptions.
• Errors/exceptions can happen at the kernel level or the task
level.
• Deadlock – kernel level
• Timeout – task level
• A watchdog timer is a mechanism for handling timeouts for
tasks.
Memory Management
• Memory management is one of the most important subsystems
of any operating system for computer control systems, and is
even more critical in an RTOS than in standard operating
systems.
• Predictable timing and deterministic behavior are the primary
focus of an RTOS.
• An RTOS makes use of a block-based memory allocation
technique instead of dynamic allocation.
• In block-based memory allocation, a block of fixed-size
memory is always allocated for a task on a need basis.
• It consumes a fixed amount of time for memory allocation.
• No fragmentation issues.
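A minimal sketch of the block-based idea described above (names and sizes are illustrative assumptions, not from any particular RTOS): every block is pre-allocated at a fixed size, so allocation and release take constant time and the pool cannot fragment.

```python
# Hypothetical fixed-block memory pool: alloc/free are O(1) and
# fragmentation-free because every block has the same size.
class BlockPool:
    def __init__(self, block_size, num_blocks):
        self.block_size = block_size
        # Pre-allocate every block up front.
        self.free_list = [bytearray(block_size) for _ in range(num_blocks)]

    def alloc(self):
        # Constant time: pop a free block, or fail deterministically.
        return self.free_list.pop() if self.free_list else None

    def free(self, block):
        # Constant time: return the block to the pool.
        self.free_list.append(block)

pool = BlockPool(block_size=64, num_blocks=4)
b1 = pool.alloc()
b2 = pool.alloc()
pool.free(b1)      # pool now has 3 free blocks again
```

Note the contrast with a general-purpose heap: there is no search for a best-fit region, so allocation latency is bounded and predictable.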
Interrupt Handling
• Interrupts inform the processor that an external device or an
associated task requires the immediate attention of the CPU.
• Interrupts can be synchronous or asynchronous.
• Interrupts which occur in sync with the currently executing
task are synchronous interrupts (example: divide by zero).
• Interrupts which do not occur in sync with the currently
executing task are asynchronous interrupts (examples:
timer overflow, TX/RX interrupt).
• The ISR is written as a separate task.
• Context switching happens for interrupt handling.
Time Management
• The time reference for the kernel is provided by a
high-resolution Real-Time Clock (RTC).
• A hardware timer is programmed to interrupt the processor at
regular intervals (the timer tick).
• The kernel uses these timer ticks for all time-based actions.
Types of Real Time Systems

• A hard real-time system (also known as an immediate
real-time system) is hardware or software that must operate
within the confines of a stringent deadline.
• The application may be considered to have failed if it does not
complete its function within the allotted time span.
• Examples of hard real-time systems include components of
pacemakers, anti-lock brakes and aircraft control systems.
• "A late answer is a wrong answer"
Types of Real Time Systems

• Soft real-time systems can miss some deadlines, but
performance will eventually degrade if too many are missed.
• A good example is the sound system in your computer. If you
miss a few bits, it is no big deal, but miss too many and you
will eventually degrade the system.
• "A late answer is an acceptable answer, but it could have been
a bit faster"
Process

• A process consists of an executable program (code), the state
of which is controlled by the OS.
• A process is also known as an instance of a program.
• Multiple instances of the same program can execute
simultaneously.
• A process requires various resources: the CPU for execution,
memory for storing code and associated variables, and I/O
devices.
The Structure of a Process
• A process mimics a processor in its properties and holds a set
of registers, a process status, a program counter and a stack.
• When a process gets its turn, its saved registers and program
counter are loaded into the physical registers of the CPU.
The Structure of a Process
• From the memory perspective, the memory occupied by a
process is segregated into three regions: stack memory, data
memory and code memory.
• Stack – stores temporary data (local variables)
• Data – stores global data
• Code – stores the program instructions
• The kernel tracks the process through its process control
block (PCB).
Process States

• The process traverses through a series of states during its
transition from the newly created state to the terminated
state (the process life cycle).
Multiprocessing and Multitasking
• The ability of an OS to hold multiple programs in memory,
ready for execution, is referred to as multiprogramming.
• In a uniprocessor system, it is not possible to execute multiple
programs simultaneously. However, it is possible to achieve
some degree of pseudo-parallelism by switching the execution
among processes.
• The ability of the OS to switch the processor (CPU) from
executing one process to another process is known as
multitasking.
• Multitasking creates the illusion of multiple tasks executing in
parallel.
Multiprocessing and Multitasking

• The act of switching the CPU among processes, or changing
the current execution context, is known as context switching.
• The act of saving the current context, which contains the
context details of the currently running process, is known as
context saving.
• The process of retrieving the saved context details for a
process which is going to execute is known as context
retrieval.
Types of Multitasking

• Co-operative multitasking
• Preemptive multitasking
• Non-preemptive multitasking

Types of Multitasking
• Co-operative multitasking:
• A task/process gets a chance only when the currently executing
task/process voluntarily relinquishes the CPU.
• In this method a task/process can hold the CPU for as much
time as it wants.
• Each ready task cooperates by letting the running one finish.
• None of the tasks is expected to block anywhere between its
ready and finished states.
• If the currently executing task is non-cooperative, the other
tasks may have to wait for a long time to get the CPU.
Types of Multitasking

• Preemptive multitasking:
• Preemptive multitasking ensures that every task/process gets
a chance to execute.
• The currently running task/process is preempted to give other
tasks/processes a chance to execute.
• The preemption of a task may be based on time slots or on
task/process priority.
Types of Multitasking

• Non-preemptive multitasking:
• In non-preemptive multitasking, the process/task which is
currently given CPU time is allowed to execute until it
terminates or enters a Blocked/Wait state.
• In co-operative multitasking, the currently executing
process/task need not relinquish the CPU when it enters a
Blocked/Wait state.
• In non-preemptive multitasking, the currently executing task
relinquishes the CPU when it enters a Blocked/Wait state.
Task Scheduling
• Determining which task/process is to be executed at a given
point of time is known as scheduling.
• Scheduling policies (algorithms) form the guidelines for
determining which task/process is to be executed when.
• The kernel service which implements the scheduling algorithm
is known as the scheduler.
• A process scheduling decision may take place when a process
switches its state to:
• 1. Ready state from Running state (preemptive).
• 2. Blocked/Wait state from Running state (preemptive or
non-preemptive).
• 3. Ready state from Blocked/Wait state.
• 4. Completed state from Running state (preemptive or
non-preemptive).
Task Scheduling

• The selection of a scheduling algorithm should consider the
following factors:
• CPU utilisation: the scheduling algorithm should always keep
CPU utilisation high.
• Throughput: gives an indication of the number of processes
executed per unit of time.
• Turnaround time: the amount of time taken by a process to
complete its execution. It includes time spent in the ready
queue, time spent waiting for I/O, and time spent in execution.
• Waiting time: time spent in the Ready queue waiting to get
CPU time for execution.
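The metrics above can be made concrete with a small worked example (the process names and burst times are hypothetical): three processes arrive at t = 0 and are served one after another, so each process's waiting time is the total burst time of everything served before it.

```python
# Worked example with assumed numbers: waiting and turnaround time
# for three processes served in arrival order, all arriving at t = 0.
bursts = {"P1": 5, "P2": 3, "P3": 1}   # CPU burst times in ms

clock, waiting, turnaround = 0, {}, {}
for name, burst in bursts.items():     # served in order P1, P2, P3
    waiting[name] = clock              # time spent in the Ready queue
    clock += burst
    turnaround[name] = clock           # completion - arrival (arrival = 0)

avg_wait = sum(waiting.values()) / 3            # (0 + 5 + 8) / 3
avg_turnaround = sum(turnaround.values()) / 3   # (5 + 8 + 9) / 3
```

Here turnaround time is waiting time plus execution time, since no process waits for I/O in this toy example.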
Task Scheduling

• The various queues maintained by the OS in association with
CPU scheduling are:
• Job queue: contains all the processes in the system.
• Ready queue: contains all processes ready for execution and
waiting for the CPU. The Ready queue is empty when there is
no process ready for running.
• Device queue: contains the set of processes which are waiting
for an I/O device.
Types of Scheduling
Preemptive Scheduling

• In preemptive scheduling, the tasks are mostly assigned
priorities.
• Sometimes it is important to run a task with a higher priority
before another, lower-priority task, even if the lower-priority
task is still running.
• The lower-priority task holds for some time and resumes when
the higher-priority task finishes its execution.
• In preemptive scheduling, every task in the Ready queue gets
a chance to execute.
Preemptive Scheduling

• The scheduler can preempt (stop temporarily) the currently
executing task and select another from the Ready queue for
execution.
• When to preempt a task, and which task is to be picked up
from the Ready queue for execution, depends purely on the
scheduling algorithm.
• The act of moving a running task into the Ready queue by the
scheduler, without the task requesting it, is known as
preemption.
• Two important approaches adopted in preemptive scheduling
are time-based preemption and priority-based preemption.
Non-Preemptive Scheduling
• In this type of scheduling method, once the CPU has been
allocated to a specific process, that process keeps the CPU
busy until it releases it, either by switching context or by
terminating.
• It is the only method that can be used across various hardware
platforms, because it doesn't need special hardware (for
example, a timer) like preemptive scheduling does.
Non-Preemptive Scheduling

• Non-preemptive scheduling is employed in non-preemptive
multitasking systems.
• In this scheduling type, the currently executing task/process is
allowed to run until it terminates or enters the wait state
waiting for an I/O or system resource.
• The various types of non-preemptive scheduling algorithms
are:
• First Come First Served (FCFS)
• Shortest Job First (SJF)
• Last Come First Served (LCFS)
Types of Scheduling Algorithms
• First Come First Serve
• FCFS (First Come First Serve) is the easiest and simplest CPU
scheduling algorithm.
• In this type of algorithm, the process which requests the CPU
first gets the CPU allocation first. This scheduling method can
be managed with a FIFO queue.
• As a process enters the ready queue, its PCB (Process Control
Block) is linked to the tail of the queue.
• When the CPU becomes free, it is assigned to the process at
the head of the queue.
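The FIFO-queue behaviour described above can be sketched as a short simulation (process names, arrival times and burst times are illustrative assumptions):

```python
from collections import deque

# Minimal FCFS sketch: processes enter a FIFO ready queue in arrival
# order; the CPU always serves the process at the head of the queue.
procs = [("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 2)]  # (name, arrival, burst)

queue = deque(sorted(procs, key=lambda p: p[1]))    # FIFO by arrival time
clock, schedule = 0, []
while queue:
    name, arrival, burst = queue.popleft()
    clock = max(clock, arrival)     # CPU may idle until the process arrives
    schedule.append((name, clock, clock + burst))   # (name, start, finish)
    clock += burst
# schedule: [("P1", 0, 4), ("P2", 4, 7), ("P3", 7, 9)]
```

Once a process reaches the CPU it runs to completion, which is why FCFS is naturally non-preemptive.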
Types of Scheduling Algorithms
• Shortest Remaining Time
• SRT (Shortest Remaining Time) is also known as preemptive
SJF scheduling. In this method, the CPU is allocated to the
process which is closest to its completion. This method
prevents a newer ready-state process from holding up the
completion of an older process.
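A small SRT simulation under assumed task timings (names, arrivals and bursts are illustrative): at every time unit the scheduler re-evaluates and runs the ready task with the least remaining work, so a short newcomer can preempt a long-running task.

```python
# Sketch of Shortest Remaining Time (preemptive SJF), decided per tick.
tasks = {"A": (0, 7), "B": (2, 4), "C": (4, 1)}   # name: (arrival, burst)
remaining = {n: burst for n, (arr, burst) in tasks.items()}

timeline, clock = [], 0
while any(remaining.values()):
    ready = [n for n, (arr, _) in tasks.items()
             if arr <= clock and remaining[n] > 0]
    # Run the ready task closest to completion (shortest remaining time).
    current = min(ready, key=lambda n: remaining[n])
    timeline.append(current)
    remaining[current] -= 1
    clock += 1
# timeline: A A B B C B B A A A A A  -> B preempts A at t=2, C at t=4
```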
• Priority Based Scheduling
• A method of scheduling processes based on priority. In this
method, the scheduler selects the tasks to work on as per
their priority.
• Priority scheduling also allows the OS to assign and manage
priorities explicitly.
Types of Scheduling Algorithms
• Processes with higher priority are carried out first, whereas
jobs with equal priorities are carried out on a round-robin or
FCFS basis. Priority can be decided based on memory
requirements, time requirements, etc.
• Round-Robin Scheduling
• Round robin is one of the oldest and simplest scheduling
algorithms. The name comes from the round-robin principle,
where each person gets an equal share of something in turn.
• It is mostly used for scheduling in multitasking systems. This
method helps provide starvation-free execution of processes.
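The round-robin principle can be sketched with a fixed time quantum (the quantum and burst times are illustrative assumptions; all processes are taken to be ready at t = 0):

```python
from collections import deque

# Round-robin sketch: each process runs for at most one quantum, then
# goes to the tail of the ready queue if it is unfinished.
quantum = 2
remaining = {"P1": 5, "P2": 3, "P3": 2}
ready = deque(remaining)            # FIFO order P1, P2, P3

order = []
while ready:
    name = ready.popleft()
    run = min(quantum, remaining[name])
    order.append((name, run))       # (process, time slice actually used)
    remaining[name] -= run
    if remaining[name] > 0:         # unfinished: back to the tail
        ready.append(name)
# order: P1x2, P2x2, P3x2, P1x2, P2x1, P1x1 -- every process gets regular
# turns, which is why round robin is starvation-free.
```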
Types of Scheduling Algorithms
• Shortest Job First
• SJF (Shortest Job First) is a scheduling algorithm in which the
process with the shortest execution time is selected for
execution next.
• This scheduling method can be preemptive or non-preemptive.
• It significantly reduces the average waiting time for other
processes awaiting execution.
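A non-preemptive SJF sketch under assumed burst times (names and values are illustrative): whenever the CPU is free, the waiting process with the shortest total burst runs next, which minimizes the average waiting time for this batch.

```python
# Non-preemptive SJF: pick the shortest waiting job each time the CPU
# becomes free. All processes are assumed ready at t = 0.
procs = [("P1", 6), ("P2", 2), ("P3", 4)]   # (name, burst)

order = sorted(procs, key=lambda p: p[1])   # shortest job first
clock, waiting = 0, {}
for name, burst in order:
    waiting[name] = clock                   # time spent waiting so far
    clock += burst
# waiting: P2 = 0, P3 = 2, P1 = 6  -> average 8/3, lower than FCFS order
# P1, P2, P3 would give (0 + 6 + 8)/3.
```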
Types of Scheduling Algorithms
• Multiple-Level Queues Scheduling
• This algorithm separates the ready queue into several
separate queues.
• In this method, processes are assigned to a queue based on a
specific property of the process, such as the process priority,
size of memory, etc.
• However, this is not an independent scheduling algorithm, as
it needs to use other types of algorithms in order to schedule
the jobs.
Overcoming the Drawbacks

• Preemptive priority-based scheduling possesses the same
drawback as non-preemptive priority scheduling: "starvation".
• Starvation – a process whose priority is low may not get a
chance to execute if more and more higher-priority processes
enter the Ready queue.
• This can be solved by dynamically raising the priority of the
low-priority process which is under starvation.
• The technique of gradually raising the priority of processes
which are waiting in the Ready queue as time progresses, to
prevent starvation, is known as "aging".
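Aging can be sketched as follows (priority values, names and the aging step are illustrative assumptions; here a lower number means higher priority): every timer tick, each waiting process's priority value is improved, so even a starved task eventually outranks newcomers.

```python
# Aging sketch: waiting processes have their priority raised each tick.
ready = {"low_prio_task": 10, "hot_task": 1}   # lower value = higher priority
AGING_STEP = 1

def tick(queue, running):
    # Each tick, every waiting (non-running) process "ages" upward.
    for name in queue:
        if name != running:
            queue[name] = max(0, queue[name] - AGING_STEP)

for _ in range(9):
    tick(ready, running="hot_task")
# After 9 ticks the starved task has aged from priority 10 to 1 and can
# now compete with hot_task for the CPU.
```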
Scheduling Algorithms
Priority Driven Scheduling Policies for RTOS
• Periodic tasks are real-time tasks which are activated
(released) regularly at a fixed rate (period).
• A periodic task must be executed once per period.
• A large set of real-time tasks are periodic.
• Example – sensor inputs are sampled at regular intervals
which are fixed a priori.
• Scheduling policies:
– Rate monotonic scheduling (RMS)
– Earliest-deadline-first (EDF)
Metrics

• How do we evaluate a scheduling policy?
– Ability to satisfy all deadlines.
– CPU utilization --- percentage of time devoted to useful work.
– Scheduling overhead --- time required to make a scheduling
decision.
Rate Monotonic Scheduling

• RMS priorities
• Optimal (fixed) priority assignment:
– the shortest-period process gets the highest priority;
– priority is inversely proportional to period;
– break ties arbitrarily.
• No fixed-priority scheme does better.
RMS Example

[Timing diagram over time 0–12: P1 has the shortest period and hence
the highest priority, P2 the next, P3 the longest period and lowest
priority; each period boundary releases a new job.]

© 2008 Wayne Wolf, Overheads for Computers as Components, 2nd ed.
RM (Rate Monotonic)
• Executes the job with the shortest period.

[Timing diagrams for the task set T1 (period 4, execution 1),
T2 (period 5, execution 2), T3 (period 7, execution 2) over 15 time
units: under RM priorities (T1 > T2 > T3), T3 suffers a deadline
miss.]
Conditions

• Sufficient condition for schedulability under the RMS
algorithm:

    sum(i = 1 to n) of Ci / Pi  <=  n (2^(1/n) - 1)

• The LHS of the inequality is the total CPU utilization for n
tasks, where Ci is the execution time and Pi is the period of
task Ti.
• If this condition is satisfied, the RM algorithm will be able to
schedule the tasks within their respective deadlines.
• This is a sufficient condition, but not a necessary condition.
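The utilization test is easy to evaluate numerically. For the task set from the RM example above, T1(4,1), T2(5,2), T3(7,2):

```python
# Checking the RMS sufficient condition U = sum(Ci/Pi) <= n(2^(1/n) - 1)
# for the task set T1(4,1), T2(5,2), T3(7,2) used in the RM example.
tasks = [(1, 4), (2, 5), (2, 7)]          # (Ci, Pi)

n = len(tasks)
U = sum(c / p for c, p in tasks)          # 1/4 + 2/5 + 2/7 ~= 0.936
bound = n * (2 ** (1 / n) - 1)            # 3 * (2^(1/3) - 1) ~= 0.780
schedulable_by_test = U <= bound          # False: the test fails here
```

Because the condition is only sufficient, failing it is inconclusive in general; in this particular example, however, RM does in fact miss a deadline for T3.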
Earliest-Deadline-First Scheduling

• EDF: a dynamic priority scheduling scheme.
• The process closest to its deadline has the highest priority.
• Requires recalculating process priorities at every timer
interrupt.
• A task set that cannot be scheduled by RMS may be
schedulable by this method.
EDF Analysis

• The condition for scheduling a set of independent periodic
tasks with the EDF algorithm is that the total CPU utilization
must not exceed 1.
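A tick-by-tick EDF sketch on the same task set that RM could not schedule (T1(4,1), T2(5,2), T3(7,2); each job's deadline is taken as the end of its period). Since U ≈ 0.936 ≤ 1, EDF should meet every deadline:

```python
# EDF sketch: at every tick, run the released job with the earliest
# absolute deadline. Task set: (period, execution), deadline = period end.
tasks = {"T1": (4, 1), "T2": (5, 2), "T3": (7, 2)}

remaining, deadline = {}, {}
timeline, missed = [], []
for t in range(20):
    for name, (period, exe) in tasks.items():
        if t % period == 0:                 # new job released
            if remaining.get(name, 0) > 0:
                missed.append((name, t))    # previous job not finished
            remaining[name] = exe
            deadline[name] = t + period
    pending = [n for n in tasks if remaining.get(n, 0) > 0]
    if pending:
        current = min(pending, key=lambda n: deadline[n])
        timeline.append(current)
        remaining[current] -= 1
# missed stays empty: EDF meets all deadlines over this window.
```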
EDF Implementation

• On each timer interrupt:
– compute the time to each deadline;
– choose the process closest to its deadline.
• Disadvantages of EDF:
• Generally considered too expensive (more overhead) to use in
practice.
• Dynamic priority schemes are not usually used in systems
which require absolute predictability.
Fixing Scheduling Problems

• What if your set of processes is unschedulable?
– Change deadlines in requirements.
– Reduce execution times of processes.
– Get a faster CPU.
Shared Memory
– Processes have some memory in common.
– They must cooperate to avoid destroying/missing messages.

[Diagram: CPU 1 and CPU 2 both connected to a shared memory.]

• Shared memory communication is very efficient and fast.
• Drawback --- race conditions.
Race Condition in Shared Memory
• A problem arises when two CPUs try to write the same
location.
• There must be a flag that tells one CPU when the data from
the other CPU are ready:
– CPU 1 reads the flag and sees 0.
– CPU 2 reads the flag and sees 0.
– CPU 1 sets the flag to one and writes the location.
– CPU 2 sets the flag to one and overwrites the location.
– Two processes want to access the shared memory at the
same time.
• The problem can be solved with an atomic test-and-set:
– a single bus operation reads the memory location, tests it,
and writes it.
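The effect of an atomic primitive can be demonstrated with threads (this is an illustrative sketch, not the bus-level test-and-set itself; Python's `threading.Lock` plays the role of the atomically tested flag):

```python
import threading

# Two writers incrementing a shared counter. The lock's acquire is
# atomic, so the read-modify-write of the counter cannot interleave.
counter = 0
lock = threading.Lock()

def writer(n):
    global counter
    for _ in range(n):
        with lock:            # only one thread "wins the test-and-set"
            counter += 1      # race-free read-modify-write

threads = [threading.Thread(target=writer, args=(10_000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
# counter == 20000 on every run; without the lock the two writers
# could interleave and lose updates.
```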
Task Synchronisation

• When two processes want to access shared resources at the
same time --- a race condition.
• The solution is to make each process aware of the access of a
shared resource, either directly or indirectly.
• The act of making processes aware of each other's access to
shared resources, to avoid conflicts, is known as task
synchronisation.
Deadlock

• Conditions favoring deadlock:
• Mutual exclusion
• Hold and wait
• No resource preemption
• Circular wait
• Race condition: the situation in which multiple processes
compete with each other to access and manipulate shared
data concurrently. In a race condition, the final value of the
shared data depends on the process which acted on the data
last.
• Deadlock: a race condition produces incorrect results, whereas
a deadlock condition creates a situation where none of the
processes is able to make any progress in its execution.
• Livelock: the livelock condition is similar to the deadlock
condition, except that a process in a livelock condition changes
its state with time while still making no progress.
• Starvation: the condition in which a process does not get the
resources required to continue its execution for a long time.
The Dining Philosophers Problem

[Illustrative figures: the dining philosophers, deadlock and
starvation, racing, livelock and starvation.]
Shared Data Problem

[Diagram: shared memory holds "level" and "time". Process A (low
priority) writes the level at T1 and the time at T2; Process B (high
priority) reads both the level and the time.]

If Process A is preempted after updating the level but before
updating the time, and Process B is allowed to read both the level
and the time, B reads an inconsistent pair (a wrong result).
Semaphore
Using a Semaphore

[Diagram: both processes bracket their level/time accesses with
Take semaphore( ) ... Release semaphore( ), making the update and
the read critical regions.]

Process A (low priority) takes the semaphore and enters its critical
region to write the level and time. Process B (high priority) asks
for the semaphore and is blocked. When Process A releases the
semaphore, Process B gets it and reads a consistent level and time.
P and V Semaphores

• The semaphore, apart from initialization, is accessed only
through two standard atomic operations --- P and V.
• The P semaphore function signals that the task requires a
resource, and if it is not available, the task waits for it.
• The V semaphore function is the signal the task passes to the
OS that the resource is now free for other users.
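P and V can be sketched with Python's `threading.Semaphore` (an illustrative mapping: `acquire()` plays the role of P, `release()` plays V; task names are assumptions):

```python
import threading

# A binary semaphore guarding a critical region shared by two tasks.
sem = threading.Semaphore(1)
log = []

def task(name):
    sem.acquire()          # P: wait until the shared resource is free
    log.append(f"{name} enters critical region")
    log.append(f"{name} leaves critical region")
    sem.release()          # V: resource is now free for other tasks

t1 = threading.Thread(target=task, args=("A",))
t2 = threading.Thread(target=task, args=("B",))
t1.start(); t2.start()
t1.join(); t2.join()
# The enter/leave lines always appear in matched pairs: the semaphore
# prevents A and B from being inside the critical region at once.
```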
Priority Inversion Problem

• Priority inversion is the condition in which a high-priority task
needs to wait for a low-priority task to release a resource
which is shared between the high-priority task and the
low-priority task.
• The synchronization technique is only interested in avoiding
conflicts that may arise due to concurrent access to the
shared resources; it is not at all concerned with the priority of
the process which tries to access them.
Priority Inversion Problem

• Priority inversion produces an unwanted delay in the execution
of the high-priority task, which is supposed to be executed
immediately when it is Ready.
• Priority inversion can lead to potential damage as a result of
missing critical deadlines.
• A proper workaround mechanism should be adopted for
handling the priority inversion problem:
• (1) Priority Inheritance
• (2) Priority Ceiling
Priority Inheritance
• Temporarily boost the priority of the low-priority task which is
holding the resource, the moment the high-priority task raises
its request.
• When it releases the resource, its priority is brought back to
the original value.
• "A true form of selfishness"
• It does not eliminate the delay of the high-priority task waiting
to get the resource from the low-priority task.
• It helps the low-priority task to continue its execution and
release the shared resource as soon as possible.
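A toy sketch of the inheritance rule (task names and priority values are illustrative assumptions; here a higher number means higher priority): when a high-priority task requests a mutex held by a low-priority task, the holder temporarily inherits the requester's priority, so a medium-priority task cannot preempt it while it finishes with the resource.

```python
# Toy priority-inheritance bookkeeping (higher number = higher priority).
tasks = {"low": 1, "med": 5, "high": 10}
base = dict(tasks)                 # remember original priorities
mutex_holder = "low"               # "low" currently holds the resource

def request_mutex(requester):
    # Boost the current holder if the requester outranks it.
    if tasks[requester] > tasks[mutex_holder]:
        tasks[mutex_holder] = tasks[requester]

def release_mutex(holder):
    tasks[holder] = base[holder]   # restore the original priority

request_mutex("high")
boosted = tasks["low"]    # "low" now outranks "med": no preemption
release_mutex("low")
restored = tasks["low"]   # priority drops back once the resource is free
```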
Priority Ceiling

• Whenever a task accesses a shared resource, the scheduler
elevates the priority of the task to the ceiling priority of that
resource.
• Drawback – "hidden priority inversion"
