Module 3 RTOS
The Operating System acts as a bridge between the user applications/tasks and
the underlying system resources through a set of system functionalities and
services
The Kernel:
Kernel acts as the abstraction layer between system resources and user
applications
For a general purpose OS, the kernel contains different services like
Process Management
Protection
Time management
Interrupt Handling
Kernel
All user applications are loaded to a specific area of primary memory and this
memory area is referred as User Space
The partitioning of memory into kernel and user space is purely Operating
System dependent
An operating system with virtual memory support, loads the user applications
into its corresponding virtual memory space with demand paging technique
Most of the operating systems keep the kernel application code in main
memory and it is not swapped out into the secondary memory
Monolithic Kernel:
All kernel modules run within the same memory space under a single kernel
thread
The tight internal integration of kernel modules in the monolithic kernel
architecture allows effective utilization of the low-level features of the
underlying system
The major drawback of the monolithic kernel is that any error or failure in
any one of the kernel modules leads to the crashing of the entire kernel
[Figure: monolithic kernel, with all operating system services running in
kernel space beneath the applications]
LINUX, SOLARIS and MS-DOS kernels are examples of monolithic kernels
Microkernel:
The microkernel design incorporates only the essential set of operating
system services into the kernel; the remaining services are implemented as
programs known as 'servers', which run in user space
This gives a highly modular and robust design, since a failure in a server
does not crash the kernel abstraction
1. General Purpose Operating System (GPOS):
The kernel is more generalized and contains all the required services to
execute generic applications
May inject random delays into application software and thus cause slow
responsiveness of an application at unexpected times
The Real Time Kernel: The kernel of a Real Time Operating System is referred to
as the Real Time kernel. In contrast to the conventional OS kernel, the Real Time
kernel is highly specialized and contains only the minimal set of services required
for running the user applications/tasks. The basic functions of a Real Time kernel are
a) Task/Process management
b) Task/Process scheduling
c) Task/Process synchronization
d) Error/Exception handling
e) Memory Management
f) Interrupt handling
g) Time management
Task/Process management: Deals with setting up the memory space for a task,
loading the task's code into memory, allocating system resources, setting up a
Task Control Block (TCB) for the task, and task/process termination/deletion.
A Task Control Block (TCB) is used for holding the information corresponding
to a task. A TCB usually contains the following set of information
Task Type: Indicates the type of the task; the task can be a hard real time,
soft real time or background task.
Task Priority: The priority of the task (e.g. Task Priority = 1 for a task with
priority 1)
Task Memory Pointers: Pointers to the code memory, data memory and
stack memory for the task
Task Pointers: Pointers to other TCBs (TCBs for preceding, next and
waiting tasks)
The parameters and implementation of the TCB are kernel dependent. The TCB
parameters vary across different kernels, based on the task management
implementation
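As a concrete illustration, the TCB fields described above can be sketched as a data structure. This is a hypothetical Python sketch, not the layout of any particular kernel; the field names are illustrative.

```python
from dataclasses import dataclass
from enum import Enum

class TaskType(Enum):
    HARD_REAL_TIME = 1
    SOFT_REAL_TIME = 2
    BACKGROUND = 3

@dataclass
class TCB:
    """Illustrative Task Control Block; fields mirror the list above."""
    task_id: int
    task_type: TaskType
    priority: int                  # convention (e.g. 0 = highest) is kernel dependent
    code_ptr: int = 0              # pointer into the task's code memory
    data_ptr: int = 0              # pointer into the task's data memory
    stack_ptr: int = 0             # pointer into the task's stack memory
    prev_tcb: "TCB | None" = None  # TCB of the preceding task
    next_tcb: "TCB | None" = None  # TCB of the next task

tcb = TCB(task_id=1, task_type=TaskType.HARD_REAL_TIME, priority=0)
```

A real kernel would store these fields in a fixed C struct and link TCBs into the ready/wait queues via the pointer fields.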
Memory Management:
The memory allocation time increases depending on the size of the block
of memory to be allocated and the state of the allocated memory block
(an initialized memory block consumes more allocation time than an
un-initialized memory block)
Since predictable timing and deterministic behavior are the primary focus
of an RTOS, the RTOS achieves this by compromising the effectiveness of
memory allocation: it avoids the usual dynamic memory allocation
techniques used by the GPOS
Instead, the RTOS kernel uses blocks of a fixed size of dynamic memory,
and the free blocks are maintained in a 'Free Buffer Queue'
Most RTOS kernels allow tasks to access any of the memory blocks
without any memory protection, to achieve predictable timing and avoid
the timing overheads
RTOS kernels assume that the whole design is proven correct and
protection is unnecessary. Some commercial RTOS kernels offer memory
protection as an option, and the kernel enters a fail-safe mode when an
illegal memory access occurs
A fixed-size block is always allocated to a task on a need basis and is taken
as a unit. Hence, there will not be any memory fragmentation issues.
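The fixed-size block scheme above can be sketched as follows. `FixedBlockPool` and its method names are illustrative, not a real RTOS API; the point is that allocation and release are constant-time queue operations, so the timing is deterministic and no fragmentation can occur.

```python
class FixedBlockPool:
    """Sketch of an RTOS fixed-size block allocator with a free buffer queue."""

    def __init__(self, block_size, num_blocks):
        self.block_size = block_size
        # the free buffer queue holds the indices of all unallocated blocks
        self.free_queue = list(range(num_blocks))

    def alloc(self):
        if not self.free_queue:
            return None              # pool exhausted; the caller must handle this
        return self.free_queue.pop(0)  # deterministic: independent of block size

    def free(self, block_index):
        self.free_queue.append(block_index)  # block returns whole, as a unit

pool = FixedBlockPool(block_size=64, num_blocks=4)
a = pool.alloc()   # first free block
b = pool.alloc()   # second free block
pool.free(a)       # released block rejoins the free buffer queue
```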
Interrupt Handling:
Interrupts which occur in sync with the currently executing task are known
as synchronous interrupts. Usually, software interrupts fall under the
synchronous interrupt category. Divide-by-zero and memory segmentation
errors are examples of synchronous interrupts.
For synchronous interrupts, the interrupt handler runs in the same context
as the interrupting task.
Asynchronous interrupts occur at any point of execution of any task and are
not in sync with the currently executing task; their handlers run in a
different context. Hence, a context switch happens while handling
asynchronous interrupts.
Priority levels can be assigned to the interrupts, and each interrupt can be
enabled or disabled individually.
Nested Interrupts
Interrupt nesting allows the pre-emption (interruption) of an Interrupt
Service Routine (ISR), servicing an interrupt, by a higher priority interrupt.
Time Management:
Accurate time management is essential in an RTOS for providing a precise
time reference to all tasks. The time reference is generated by a timer tick
interrupt, whose interval typically varies in the microseconds range
The time parameters for tasks are expressed as multiples of the timer tick
If the System time register is 32 bits wide and the timer tick interval is
1 microsecond, the System time register will reset in
2^32 * 10^-6 / (24 * 60 * 60) = ~0.0497 days = 1.19 hours
If the interval is 1 millisecond, the System time register will reset in
2^32 * 10^-3 / (24 * 60 * 60) = 49.7 days (about 50 days)
Increment the System time register by one. Generate a timing error and reset
the System time register if the timer tick count is greater than the maximum
range available for the System time register
Invoke the scheduler and schedule the tasks again based on the scheduling
algorithm
Delete all the terminated tasks and their associated data structures (TCBs)
Load the context for the first task in the ready queue. Due to the re-scheduling,
the ready task might be a new one, different from the task that was pre-empted
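The tick-handler activities above can be sketched as a minimal simulation. The `Kernel` class and the task dictionaries are illustrative, not a real kernel API; the scheduling policy assumed here is simply "lowest priority number first".

```python
MAX_TICKS = 2**32  # 32-bit system time register, as in the example above

class Kernel:
    """Sketch of the work done on each timer tick interrupt."""

    def __init__(self):
        self.system_time = 0
        self.ready_queue = []     # list of task records: {"name", "priority", "state"}
        self.timing_error = False

    def timer_tick(self):
        # 1. Increment the system time register; flag an error and reset on overflow
        self.system_time += 1
        if self.system_time >= MAX_TICKS:
            self.timing_error = True
            self.system_time = 0
        # 2. Invoke the scheduler (here: highest priority = lowest number first)
        self.ready_queue.sort(key=lambda task: task["priority"])
        # 3. Delete all terminated tasks (and, in a real kernel, their TCBs)
        self.ready_queue = [t for t in self.ready_queue if t["state"] != "terminated"]
        # 4. "Load the context" of the first ready task: return its name here
        return self.ready_queue[0]["name"] if self.ready_queue else None
```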
Hard Real-time System:
A Hard Real Time system must meet the deadlines for a task without any
slippage
Missing any deadline may produce catastrophic results for Hard Real Time
Systems, including permanent data loss and irrecoverable damage to the
system/users
Air bag control systems and Anti-lock Brake Systems (ABS) of vehicles
are typical examples of Hard Real Time Systems
As a rule of thumb, Hard Real Time Systems do not implement the
virtual memory model for handling the memory. This eliminates the delay
in swapping the code corresponding to a task in and out of primary memory
The presence of a Human in the Loop (HITL) for tasks introduces
unexpected delays in task execution. Most Hard Real Time Systems are
therefore fully automatic and do not contain a human in the loop
Soft Real-time System:
Real Time Operating Systems that do not guarantee meeting deadlines,
but offer the best effort to meet the deadline
A late answer is an acceptable answer, but it could have been delivered
a bit earlier
Soft Real Time systems most often have a human in the loop (HITL)
Automatic Teller Machine (ATM) is a typical example of Soft Real Time
System. If the ATM takes a few seconds more than the ideal operation time,
nothing fatal happens.
An audio-video playback system is another example of a Soft Real Time
system. No potential damage arises if a sample comes late by a fraction of
a second during playback.
Multitasking creates the illusion of simultaneous execution of tasks and
thereby the efficient utilization of the CPU and other system resources
Process
A process, which inherits all the properties of the CPU, can be considered
as a virtual processor awaiting its turn to have its properties switched into
the physical processor
When the process gets its turn, its registers and program counter register
become mapped to the physical registers of the CPU
[Figure: memory organization of a process, comprising stack memory (with
stack pointer), data memory and code memory]
The stack memory usually starts at the highest memory address of the
memory area allocated for the process (depending on the OS kernel
implementation)
The process traverses through a series of states during its transition from the
newly created state to the terminated state. The cycle through which a process
changes its state from 'newly created' to 'execution completed' is known as the
Process Life Cycle. The process state indicates the current status of the process
with respect to time and also provides information on what it is allowed to do next
Ready State: The state where a process is incepted into memory and is
awaiting processor time for execution is known as the Ready State. At
this stage, the process is placed in the 'Ready list' queue maintained by the
OS
Running State: The state where the source code instructions corresponding
to the process are being executed is known as the Running State. The Running
state is the state at which the process execution happens
Blocked State/Wait State: Refers to a state where a running process is
temporarily suspended from execution, e.g. while waiting for an I/O event
or for access to a shared resource
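The life-cycle states and transitions described above can be captured as a small state machine. The transition set below is a simplified sketch of the cycle in these notes (Created, Ready, Running, Blocked/Wait, Completed); real kernels may add further states.

```python
from enum import Enum, auto

class State(Enum):
    CREATED = auto()
    READY = auto()
    RUNNING = auto()
    BLOCKED = auto()
    COMPLETED = auto()

# Legal transitions in the process life cycle described above
TRANSITIONS = {
    State.CREATED: {State.READY},          # incepted into memory, placed in Ready list
    State.READY: {State.RUNNING},          # dispatched by the scheduler
    State.RUNNING: {State.READY,           # pre-empted by the scheduler
                    State.BLOCKED,         # waits for I/O / shared resource / event
                    State.COMPLETED},      # execution completed
    State.BLOCKED: {State.READY},          # awaited event occurred
    State.COMPLETED: set(),                # terminal state
}

def move(current, new):
    """Apply one life-cycle transition, rejecting illegal ones."""
    if new not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {new}")
    return new

s = State.CREATED
for nxt in (State.READY, State.RUNNING, State.BLOCKED, State.READY):
    s = move(s, nxt)   # walks one legal path through the life cycle
```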
Threads
Different threads, which are part of a process, share the same address space,
meaning they share the data memory, code memory and heap memory area
[Figure: a process with multiple threads (Thread 1, Thread 2, Thread 3), each
with its own registers and stack, sharing the process's code, data and heap
memory]
When one thread enters a wait state, the CPU can be utilized by other threads
of the process that do not require the event for which that thread is waiting.
This speeds up the execution of the process.
Thread V/s Process
Thread: A single unit of execution, part of a process.
Process: A program in execution, containing one or more threads.

Thread: Does not have its own data memory and heap memory; it shares the
data memory and heap memory with other threads of the same process.
Process: Has its own code memory, data memory and stack memory.

Thread: Cannot live independently; it lives within the process. There can be
multiple threads in a process; the first thread (main thread) calls the main
function and occupies the start of the stack memory of the process.
Process: Contains at least one thread. Threads within a process share the
code, data and heap memory; each thread holds a separate memory area for
its stack (sharing the total stack memory of the process).

Thread: Very inexpensive to create.
Process: Very expensive to create; involves many OS overheads.

Thread: Context switching is inexpensive and fast.
Process: Context switching is complex, involves a lot of OS overhead and is
comparatively slower.

Thread: If a thread expires, its stack is reclaimed by the process.
Process: If a process dies, the resources allocated to it are reclaimed by the
OS and all the associated threads of the process also die.
Advantages of Threads:
1. Better memory utilization: Multiple threads of the same process share the
address space for data memory. This also reduces the complexity of inter
thread communication since variables can be shared across the threads.
3. Speeds up the execution of the process: The process is split into different
threads; when one thread enters a wait state, the CPU can be utilized by other
threads of the process that do not require the event for which that thread is
waiting. This speeds up the execution of the process.
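The shared-address-space property can be demonstrated with Python's `threading` module: all threads of the process update the same `counter` object, because they share the process's data memory. The lock is needed precisely because that memory is shared.

```python
import threading

counter = {"value": 0}          # data memory shared by all threads of the process
lock = threading.Lock()

def worker(n):
    for _ in range(n):
        with lock:              # shared data still needs synchronization
            counter["value"] += 1

# Four threads of one process, all writing to the same shared object
threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter["value"] is now 4000: every thread updated the same memory
```

Creating these threads is far cheaper than spawning four separate processes, which would each need their own address space and inter-process communication to share the counter.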
Multitasking and Context Switching:
The act of switching the CPU from one process to another, by saving the
context (registers, program counter, stack pointer) of the currently executing
process and retrieving the saved context of the process to be executed next,
is known as context switching
Types of Multitasking :
Co-operative Multitasking: Co-operative multitasking is the most primitive
form of multitasking, in which a task/process gets a chance to execute only
when the currently executing task/process voluntarily relinquishes the CPU.
In this method, any task/process can hold the CPU for as much time as it wants.
Since this type of implementation depends on the mercy of the tasks towards
each other for getting CPU time for execution, it is known as co-operative
multitasking. If the currently executing task is non-cooperative, the other tasks
may have to wait for a long time to get the CPU
Non-preemptive Multitasking: The process/task which is currently given the
CPU time is allowed to execute until it terminates (enters the 'Completed'
state) or enters a wait state, waiting for an I/O. Co-operative and
non-preemptive multitasking differ in their behavior when they are in the
'Blocked/Wait' state. In co-operative multitasking, the currently executing
process/task need not relinquish the CPU when it enters the 'Blocked/Wait'
state, waiting for an I/O, a shared resource access or an event to occur,
whereas in non-preemptive multitasking the currently executing task
relinquishes the CPU when it waits for an I/O.
Task Scheduling:
In a multitasking system, there should be some mechanism in place to share
the CPU among the different tasks and to decide which process/task is to be
executed at a given point of time
Task scheduling forms the basis of multitasking
Depending on the scheduling policy, the process scheduling decision may take
place when a process switches its state, e.g. from Running to Ready, from
Running to Blocked/Wait, from Blocked/Wait to Ready, or from Running to
Completed
Waiting Time: The time spent by a process in the Ready queue, waiting to get
the CPU time for execution. The waiting time should be minimal for a good
scheduling algorithm.
Response Time: The time elapsed between the submission of a process
and its first response. For a good scheduling algorithm, the response time
should be as low as possible.
[Figure: process scheduling infrastructure. Admitted processes enter the Job
queue; the scheduler moves them to the Ready queue; the CPU runs a process
to completion or moves a preempted process back to the Ready queue; a
process waiting on I/O is moved to the Device Manager's queue.]
Non-preemptive scheduling - First Come First Served (FCFS)/First In
First Out (FIFO) Scheduling:
Allocates CPU time to the processes based on the order in which they enter
the Ready queue
The first entered process is serviced first
It is the same as any real-world application where queue systems are used;
e.g. ticketing
Drawbacks:
Favors monopoly of process. A process, which does not contain any I/O
operation, continues its execution until it finishes its task
In general, FCFS favors CPU bound processes and I/O bound processes may
have to wait until the completion of CPU bound process, if the currently
executing process is a CPU bound process. This leads to poor device
utilization.
The average waiting time is not minimal for FCFS scheduling algorithm
EXAMPLE: Three processes with process IDs P1, P2, P3 and estimated
completion times 10, 5, 7 milliseconds respectively enter the ready queue together
in the order P1, P2, P3. Calculate the waiting time and Turn Around Time (TAT) for
each process and the average waiting time and average Turn Around Time (assuming
there is no I/O waiting for the processes).
The order of execution is:
P1 P2 P3
0 10 15 22
10 5 7
Assuming the CPU is readily available at the time of arrival of P1, P1 starts
executing immediately, and hence its waiting time is zero.
Waiting Time for P2 = 10 ms (P2 starts executing after completing P1)
Waiting Time for P3 = 15 ms (P3 starts executing after completing P1 and P2)
Average waiting time = (Waiting time for all processes) / No. of Processes
= (0 + 10 + 15)/3 = 25/3
= 8.33 milliseconds
Turn Around Time (TAT) for P1 = 10 ms, for P2 = 15 ms, for P3 = 22 ms
(Time spent in Ready Queue + Execution Time)
Average Turn Around Time = (Turn Around Time for all processes) / No. of
Processes
= (10 + 15 + 22)/3 = 47/3
= 15.66 milliseconds
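The FCFS arithmetic above can be checked with a short helper (`fcfs` is an illustrative name, not a library function); it assumes all processes arrive together, as in the example.

```python
def fcfs(bursts):
    """Waiting and turnaround times for processes arriving together, FCFS order.

    bursts: list of burst times in arrival order.
    Returns (waiting_times, turnaround_times) in the same order.
    """
    waiting, tat, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)   # time already spent in the Ready queue
        clock += burst          # run this process to completion
        tat.append(clock)       # TAT = waiting time + execution time
    return waiting, tat

w, t = fcfs([10, 5, 7])         # P1, P2, P3 from the example
avg_wait = sum(w) / len(w)      # (0 + 10 + 15) / 3 = 8.33 ms
avg_tat = sum(t) / len(t)       # (10 + 15 + 22) / 3 = 15.66 ms
```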
Non-preemptive scheduling - Last Come First Served (LCFS)/Last In
First Out (LIFO) Scheduling:
Allocates CPU time to the processes based on the order in which they enter
the Ready queue
LCFS scheduling is also known as Last In First Out (LIFO), where the
process that last enters the Ready queue is serviced first
Drawbacks:
Favors monopoly of process. A process, which does not contain any I/O
operation, continues its execution until it finishes its task
In general, LCFS favors CPU bound processes and I/O bound processes may
have to wait until the completion of CPU bound process, if the currently
executing process is a CPU bound process. This leads to poor device
utilization.
The average waiting time is not minimal for LCFS scheduling algorithm
EXAMPLE: Three processes with process IDs P1, P2, P3 with estimated
completion times 10, 5, 7 milliseconds respectively enter the ready queue together
in the order P1, P2, P3 (assume only P1 is present in the Ready queue when the
scheduler picks it up, and P2 and P3 enter the Ready queue after that). A new
process P4 with estimated completion time 6 ms enters the Ready queue after 5 ms
of scheduling P1. Calculate the waiting time and Turn Around Time (TAT) for each
process and the average waiting time and Turn Around Time (assuming there is no
I/O waiting for the processes). Assume all the processes contain only CPU operations
and no I/O operations are involved.
Solution: Initially there is only P1 available in the Ready queue, and the scheduling
sequence will be P1, P3, P2. P4 enters the queue during the execution of P1 and
becomes the last process entered in the Ready queue. The scheduling sequence then
changes to P1, P4, P3, P2, as given below.
P1 P4 P3 P2
0 10 16 23 28
10 6 7 5
Waiting Time for P4 = 5 ms (P4 starts executing after completing P1. But P4
arrived after 5 ms of execution of P1; hence its waiting time = Execution start
time - Arrival time = 10 - 5 = 5)
Waiting Time for P3 = 16 ms (P3 starts executing after completing P1 and P4)
Waiting Time for P2 = 23 ms (P2 starts executing after completing P1, P4 and P3)
Average waiting time = (Waiting time for all processes) / No. of Processes
= (Waiting time for (P1+P4+P3+P2)) / 4
= (0 + 5 + 16 + 23)/4 = 44/4
= 11 milliseconds
Turn Around Time (TAT) for P1 = 10 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4 = 11 ms (Completion time - Arrival time = 16 - 5 = 11)
Turn Around Time (TAT) for P3 = 23 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 28 ms (Time spent in Ready Queue + Execution Time)
Average Turn Around Time = (Turn Around Time for all processes) / No. of Processes
= (Turn Around Time for (P1+P4+P3+P2)) / 4
= (10+11+23+28)/4 = 72/4
= 18 milliseconds
Non-preemptive scheduling - Shortest Job First (SJF) Scheduling:
Allocates CPU time to the processes based on the estimated execution
completion time of the tasks; the process with the shortest estimated
completion time is serviced first
The average waiting time for a given set of processes is minimal in SJF
scheduling
Drawbacks:
A process whose estimated execution completion time is high may not get a
chance to execute if processes with shorter estimated execution times keep
entering the Ready queue before it starts its execution (starvation)
It is difficult to know in advance the next shortest process in the Ready queue
for scheduling, since new processes with different estimated execution times
keep entering the Ready queue at any point of time
Non-preemptive scheduling - Priority based Scheduling:
A priority is assigned to each task at the time of its creation; it can also be
changed dynamically (if the operating system supports this feature)
The non-preemptive priority-based scheduler sorts the Ready queue based
on the priority and picks the process with the highest level of priority for
execution
EXAMPLE: Three processes with process IDs P1, P2, P3 with estimated
completion times 10, 5, 7 milliseconds and priorities 0, 3, 2 (0 = highest priority,
3 = lowest priority) respectively enter the ready queue together. Calculate the
waiting time and Turn Around Time (TAT) for each process and the average
waiting time and average Turn Around Time (assuming there is no I/O waiting
for the processes) under the priority based scheduling algorithm.
Solution: The scheduler sorts the Ready queue based on the priority and schedules
the process with the highest priority (P1 with priority number 0) first, the next
highest priority process (P3 with priority number 2) second, and so on. The order
in which the processes are scheduled for execution is represented as
P1 P3 P2
0 10 17 22
10 7 5
Waiting Time for P1 = 0 ms (P1 starts executing first)
Waiting Time for P3 = 10 ms (P3 starts executing after completing P1)
Waiting Time for P2 = 17 ms (P2 starts executing after completing P1 and P3)
Average waiting time = (Waiting time for all processes) / No. of Processes
= (0+10+17)/3 = 27/3
= 9 milliseconds
Turn Around Time (TAT) for P1 = 10 ms, for P3 = 17 ms, for P2 = 22 ms
(Time spent in Ready Queue + Execution Time)
Average Turn Around Time= (Turn Around Time for all processes) / No. of Processes
= (10+17+22)/3 = 49/3
= 16.33 milliseconds
Drawbacks:
Processes with low priority may have to wait indefinitely if higher-priority
processes keep entering the Ready queue (starvation). The technique of
gradually raising the priority of processes which have been waiting for a
long time, to prevent starvation, is known as 'Aging'.
Preemptive scheduling:
Employed in systems which implement the preemptive multitasking model
Every task in the Ready queue gets a chance to execute; when and how often
each process gets a chance to execute (gets the CPU time) is dependent on the
type of preemptive scheduling algorithm used for scheduling the processes
The act of moving a Running process into the Ready queue by the scheduler,
without the process requesting it, is known as preemption
Preemptive SJF scheduling - Shortest Remaining Time (SRT):
When a new process enters the Ready queue, if its execution completion time
is less than the remaining execution time of the currently executing process,
the currently executing process is preempted and the new process is scheduled
for execution
The scheduler always compares the execution completion time (i.e. the
remaining execution time) of a new process entering the Ready queue with
the remaining time for completion of the currently executing process, and
schedules the process with the shortest remaining time for execution.
EXAMPLE: Three processes with process IDs P1, P2, P3 with estimated
completion times 10, 5, 7 milliseconds respectively enter the ready queue together.
A new process P4 with estimated completion time 2 ms enters the Ready queue
after 2 ms. Assume all the processes contain only CPU operations and no I/O
operations are involved.
Solution: At the beginning, there are only three processes (P1, P2 and P3) available
in the Ready queue, and the SRT scheduler picks up the process with the shortest
remaining time for execution completion (in this example, P2 with remaining time
5 ms) for scheduling. Now process P4 with estimated execution completion time 2 ms
enters the Ready queue after 2 ms, preempting P2. The processes are re-scheduled
for execution in the following order
P2 P4 P2 P3 P1
0 2 4 7 14 24
2 2 3 7 10
Waiting Time for P2 = 0 ms + (4 -2) ms = 2ms (P2 starts executing first and is
interrupted by P4 and has to wait till the completion of
P4 to get the next CPU slot)
Waiting Time for P4 = 0 ms (P4 starts executing by preempting P2 since the
execution time for completion of P4 (2ms) is less
than that of the Remaining time for execution
completion of P2 (Here it is 3ms))
Waiting Time for P3 = 7 ms (P3 starts executing after completing P4 and P2)
Waiting Time for P1 = 14 ms (P1 starts executing after completing P4, P2 and P3)
Average waiting time = (Waiting time for all the processes) / No. of Processes
= (Waiting time for (P4+P2+P3+P1)) / 4
= (0 + 2 + 7 + 14)/4 = 23/4
= 5.75 milliseconds
Turn Around Time (TAT) for P2 = 7 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4 = 2 ms
(Time spent in Ready Queue + Execution Time = (Execution Start Time - Arrival
Time) + Estimated Execution Time = (2 - 2) + 2)
Turn Around Time (TAT) for P3 = 14 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P1 = 24 ms (Time spent in Ready Queue + Execution Time)
Average Turn Around Time = (Turn Around Time for all the processes) / No. of Processes
= (7 + 2 + 14 + 24)/4 = 47/4
= 11.75 milliseconds
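The SRT schedule above can be reproduced with a millisecond-step simulation. This is a sketch: `srt` is an illustrative helper, processes are given as `(name, arrival, burst)` tuples, and ties in remaining time are broken by list order.

```python
def srt(processes):
    """Shortest Remaining Time simulation, 1 ms per step.

    processes: list of (name, arrival_ms, burst_ms).
    Returns {name: (waiting_ms, turnaround_ms)}.
    """
    remaining = {n: b for n, a, b in processes}
    finish, clock = {}, 0
    while remaining:
        ready = [n for n, a, b in processes if a <= clock and n in remaining]
        if not ready:
            clock += 1            # CPU idle until the next arrival
            continue
        # Preemptive choice: the ready process with the shortest remaining time
        name = min(ready, key=lambda n: remaining[n])
        remaining[name] -= 1
        clock += 1
        if remaining[name] == 0:
            finish[name] = clock
            del remaining[name]
    # waiting time = turnaround time - burst time
    return {n: (finish[n] - a - b, finish[n] - a) for n, a, b in processes}

res = srt([("P1", 0, 10), ("P2", 0, 5), ("P3", 0, 7), ("P4", 2, 2)])
# matches the worked example: P4 preempts P2 at t = 2 ms
```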
Preemptive scheduling - Round Robin (RR) Scheduling:
Each process in the Ready queue is executed for a pre-defined time slice
Figure 11 Round Robin Scheduling
Once each process in the Ready queue has executed for the pre-defined time
period, the scheduler comes back and picks the first process in the Ready
queue again for execution.
Round Robin scheduling is similar to the FCFS scheduling and the only
difference is that a time slice based preemption is added to switch the
execution between the processes in the queue
EXAMPLE: Three processes with process IDs P1, P2, P3 with estimated
completion times 6, 4, 2 milliseconds respectively enter the ready queue together in
the order P1, P2, P3. Calculate the waiting time and Turn Around Time (TAT) for
each process and the average waiting time and average Turn Around Time (assuming
there is no I/O waiting for the processes) under the RR algorithm with time slice = 2 ms.
P1 P2 P3 P1 P2 P1
0 2 4 6 8 10 12
2 2 2 2 2 2
The waiting time for all the processes are given as
Waiting Time for P1 = 0 + (6-2) + (10-8) = 0+4+2= 6ms (P1 starts executing first
and waits for two time slices to get execution back and
again 1 time slice for getting CPU time)
Waiting Time for P2 = (2-0) + (8-4) = 2+4 = 6ms (P2 starts executing after P1
executes for 1 time slice and waits for two time
slices to get the CPU time)
Waiting Time for P3 = (4 -0) = 4ms (P3 starts executing after completing the first
time slices for P1 and P2 and completes its execution in a single time slice.)
Average waiting time = (Waiting time for all the processes) / No. of Processes
= (6+6+4)/3 = 16/3
= 5.33 milliseconds
Turn Around Time (TAT) for P1 = 12 ms, for P2 = 10 ms, for P3 = 6 ms
(Time spent in Ready Queue + Execution Time)
Average Turn Around Time = (Turn Around Time for all the processes) / No. of Processes
= (12+10+6)/3 = 28/3
= 9.33 milliseconds.
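The RR example can be reproduced with a small simulation over a deque-based ready queue. This is an illustrative sketch (`round_robin` is not a library function) and it assumes all processes arrive together, as in the example.

```python
from collections import deque

def round_robin(bursts, slice_ms):
    """Round Robin simulation for processes arriving together.

    bursts: {name: burst_ms} in arrival (FIFO) order.
    Returns {name: turnaround_ms}; waiting time = turnaround - burst.
    """
    queue = deque(bursts)        # ready queue in arrival order
    remaining = dict(bursts)
    finish, clock = {}, 0
    while queue:
        name = queue.popleft()
        run = min(slice_ms, remaining[name])  # run for one time slice at most
        clock += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = clock
        else:
            queue.append(name)   # pre-empted: back to the end of the ready queue
    return finish

tat = round_robin({"P1": 6, "P2": 4, "P3": 2}, slice_ms=2)
# execution order P1 P2 P3 P1 P2 P1, as in the example
```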
Preemptive scheduling - Priority based Scheduling:
Same as the non-preemptive priority based scheduling, except that a
higher-priority process entering the Ready queue preempts the currently
executing lower-priority process
EXAMPLE: Three processes with process IDs P1, P2, P3 with estimated
completion times 10, 5, 7 milliseconds and priorities 1, 3, 2 (0 = highest priority,
3 = lowest priority) respectively enter the ready queue together. A new process P4
with estimated completion time 6 ms and priority 0 enters the Ready queue after
5 ms of the start of execution of P1. Assume all the processes contain only CPU
operations and no I/O operations are involved.
Solution: At the beginning, there are only three processes (P1, P2 and P3) available
in the Ready queue, and the scheduler picks the process with the highest priority
(in this example, P1 with priority 1) for scheduling. Now process P4 with estimated
execution completion time 6 ms and priority 0 enters the Ready queue after 5 ms of
the start of execution of P1. The processes are re-scheduled for execution in the
following order
P1 P4 P1 P3 P2
0 5 11 16 23 28
5 6 5 7 5
The waiting time for all the processes are given as
Waiting Time for P1 = 0 + (11-5) = 0+6 =6 ms (P1 starts executing first and gets
Preempted by P4 after 5ms and again gets the CPU time
after completion of P4)
Waiting Time for P3 = 16 ms (P3 starts executing after completing P1 and P4)
Waiting Time for P2 = 23 ms (P2 starts executing after completing P1, P4 and P3)
Average waiting time = (Waiting time for all the processes) / No. of Processes
= (6 + 0 + 16 + 23)/4 = 45/4
= 11.25 milliseconds
Turn Around Time (TAT) for P1 = 16 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4 = 6 ms (Time spent in Ready Queue + Execution Time
= (Execution Start Time - Arrival Time) + Estimated Execution Time = (5 - 5) + 6 = 0 + 6)
Turn Around Time (TAT) for P3 = 23 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 28 ms (Time spent in Ready Queue + Execution Time)
Average Turn Around Time= (Turn Around Time for all the processes) / No. of Processes
= (16+6+23+28)/4 = 73/4
= 18.25 milliseconds
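The preemptive priority schedule above can also be reproduced with a millisecond-step simulation. As before, `preemptive_priority` is an illustrative helper; processes are `(name, arrival, burst, priority)` tuples, with 0 as the highest priority.

```python
def preemptive_priority(processes):
    """Preemptive priority scheduling simulation, 1 ms per step.

    processes: list of (name, arrival_ms, burst_ms, priority), 0 = highest.
    Returns {name: (waiting_ms, turnaround_ms)}.
    """
    remaining = {n: b for n, a, b, p in processes}
    prio = {n: p for n, a, b, p in processes}
    finish, clock = {}, 0
    while remaining:
        ready = [n for n, a, b, p in processes if a <= clock and n in remaining]
        if not ready:
            clock += 1            # CPU idle until the next arrival
            continue
        # A newly arrived higher-priority process preempts the running one
        name = min(ready, key=lambda n: prio[n])
        remaining[name] -= 1
        clock += 1
        if remaining[name] == 0:
            finish[name] = clock
            del remaining[name]
    # waiting time = turnaround time - burst time
    return {n: (finish[n] - a - b, finish[n] - a) for n, a, b, p in processes}

res = preemptive_priority(
    [("P1", 0, 10, 1), ("P2", 0, 5, 3), ("P3", 0, 7, 2), ("P4", 5, 6, 0)])
# matches the worked example: P4 preempts P1 at t = 5 ms
```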
How to choose an RTOS:
The selection of an RTOS for an embedded design is very critical. The
requirements to be evaluated fall into two categories:
1. Functional requirements
2. Non-functional requirements
1. Functional Requirements:
1. Processor support:
It is not necessary that all RTOSs support all kinds of processor
architectures; ensure that the RTOS under consideration supports the
target processor architecture.
2. Memory Requirements:
The RTOS requires ROM memory for holding the OS files and it is
normally stored in a non-volatile memory like FLASH.
3. Real-Time Capabilities:
It is not mandatory that the OS for all embedded systems be real-time;
analyze whether the product under consideration really demands real-time
behavior.
4. Kernel and Interrupt Latency:
The kernel of the OS may disable interrupts while executing certain services
and it may lead to interrupt latency.
For an embedded system whose response requirements are high, this latency
should be minimal.
6. Modularization Support:
The OS kernel may provide stack implementation and driver support for a
bunch of communication interfaces and networking.
Ensure that the OS under consideration provides support for all the
interfaces required by the embedded product.
2. Non-Functional Requirements:
2. Cost:
The total cost of developing or buying the OS and maintaining it, for both
commercial products and custom builds, needs to be evaluated before taking
a decision on the selection of the OS.
4. Ease of Use:
How easy it is to use a commercial RTOS is another important feature that
needs to be considered.
5. After Sales:
For a commercial embedded RTOS, after-sales support in the form of e-mail,
on-call services etc. for bug fixes, critical patch updates and support for
production issues should be analyzed thoroughly.