ESD Module 3
Syllabus:
Operating System basics,
Types of operating systems,
Task, process and threads (Only POSIX Threads with an example program),
Thread preemption, Preemptive Task scheduling techniques,
Task Communication,
Task synchronization issues – Racing and Deadlock.
How to choose an RTOS,
Integration and testing of Embedded hardware and firmware,
Embedded system Development Environment – Block diagram (excluding Keil).
(Text 1: Ch-10 (Sections 10.1, 10.2, 10.3, 10.5.2, 10.7, 10.8.1.1, 10.8.1.2 only), Ch-12, Ch-13 (a block diagram before 13.1 only))
The Kernel
The kernel is the core of the operating system and is responsible for managing the
system resources and the communication among the hardware and other system
services.
Kernel acts as the abstraction layer between system resources and user applications.
Kernel contains a set of system libraries and services.
For a general purpose OS, the kernel contains different services for handling the following:
• Process Management
• Primary Memory Management
• File System Management
• I/O System (Device) Management
• Secondary Storage Management
• Protection Systems
• Interrupt Handler
Process Management:
• Process management deals with managing the processes/tasks.
• Process management includes
• Setting up the memory space for the process
• Loading the process's code into the memory space
• Allocating system resources
• Scheduling and managing the execution of the process
• Setting up and managing the Process Control Block (PCB)
• Inter Process Communication and synchronisation
• Process termination/deletion, etc.
Microkernel
• The microkernel design incorporates only the essential set of Operating System services
into the kernel.
• The rest of the Operating System services are implemented in programs known as 'Servers', which run in user space.
• This provides a highly modular design and an 'OS-neutral' abstraction to the kernel.
• Memory management, process management, timer systems and interrupt handlers are the essential services which form part of the microkernel.
• The Mach, QNX and Minix 3 kernels are examples of microkernels.
• Microkernel based design approach offers the following benefits:
• Robustness
• If a problem is encountered in any of the services which run as 'Server' applications, the same can be reconfigured and restarted without the need for restarting the entire OS.
• This approach is highly useful for systems which demand high 'availability'.
• Since the services which run as 'Servers' execute in a different memory space, the chances of corruption of kernel services are ideally zero.
• Configurability
• Any service which runs as a 'Server' application can be changed without the need to restart the whole system.
• This makes the system dynamically configurable.
Types of Operating Systems
Depending on the type of kernel and kernel services, purpose and type of computing systems
where the OS is deployed and the responsiveness to applications, Operating Systems are
classified into different types.
• General Purpose Operating System (GPOS)
• Real-Time Operating System (RTOS)
General Purpose Operating System (GPOS)
The operating systems which are deployed in general computing systems are referred as
General Purpose Operating Systems (GPOS).
• The kernel of such a GPOS is more generalised and it contains all kinds of services
required for executing generic applications.
• General purpose operating systems are often quite non-deterministic in behaviour.
• GPOS are usually deployed in computing systems where deterministic behaviour is not
an important criterion.
• Personal Computer/Desktop system is a typical example for a system where GPOSs are
deployed.
• Windows (11, 10, 7, XP), MS-DOS, etc. are examples of General Purpose Operating Systems.
Real-Time Operating System (RTOS)
'Real-Time' refers to deterministic timing behaviour, meaning OS services take a known, fixed time regardless of the number of tasks.
An RTOS (Real-Time Operating System) enforces policies and rules for time-critical resource
allocation. It determines task execution order and time allocation, ensuring predictable
performance.
A well-designed RTOS maintains consistency through policies that shape its structure and rules
that enforce them while resolving conflicts.
Examples of RTOS include Windows CE, QNX, VxWorks, and MicroC/OS-II.
The Real-Time Kernel
• The kernel of a Real-Time Operating System is referred as Real-Time kernel.
• The Real-Time kernel is highly specialised and it contains only the minimal set of
services required for running the user applications/tasks.
• The basic functions of a Real-Time kernel are:
• Task/Process Management
• Task/Process Scheduling
• Task/Process Synchronisation
• Error/Exception Handling
• Memory Management
• Interrupt Handling
• Time Management
Task/Process Management
Deals with:
Setting up the memory space for the tasks
Loading the task's code into the memory space
Allocating system resources
Setting up a Task Control Block (TCB) for the task
Task/process termination/deletion
A Task Control Block (TCB) is used for holding the information corresponding to a task.
TCB usually contains the following set of information:
Task ID: Task Identification Number
Task State: The current state of the task (e.g. State = 'Ready' for a task which is ready
to execute)
Task Type: Indicates the type of this task. The task can be a hard real-time, soft real-time or background task.
Task Priority: Task priority (e.g. Task priority = 1 for task with priority = 1)
Task Context Pointer: Pointer for context saving
Task Memory Pointers: Pointers to the code memory, data memory and stack memory
for the task
Task System Resource Pointers: Pointers to system resources (semaphores, mutex,
etc.) used by the task
Task Pointers: Pointers to other TCBs (TCBs for preceding, next and waiting tasks)
Other Parameters: Other relevant task parameters
The parameters and implementation of the TCB are kernel dependent.
The TCB parameters vary across different kernels, based on the task management
implementation.
Task management service utilises the TCB of a task in the following way:
Creates a TCB for a task on creating the task
Deletes/removes the TCB of a task when the task is terminated or deleted
Reads the TCB to get the state of a task
Updates the TCB with new parameters on a need basis (e.g. on a context switch)
Modifies the TCB to change the priority of the task dynamically
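As a concrete illustration, a TCB can be pictured as a C structure along the following lines. This is only a sketch: the field names and types here are assumptions for illustration, since, as noted above, the actual layout is kernel dependent.

/* Hypothetical TCB layout -- the field names and types are illustrative
   only; every kernel defines its own (kernel dependent) structure. */
typedef enum { TASK_READY, TASK_RUNNING, TASK_BLOCKED } task_state_t;
typedef enum { TASK_HARD_RT, TASK_SOFT_RT, TASK_BACKGROUND } task_type_t;

typedef struct tcb {
    unsigned int task_id;      /* Task Identification Number */
    task_state_t state;        /* current state (e.g. TASK_READY) */
    task_type_t  type;         /* hard real-time / soft real-time / background */
    int          priority;     /* e.g. 0 = highest priority, by convention */
    void        *context;      /* pointer for context saving */
    void        *code_mem;     /* pointer to code memory */
    void        *data_mem;     /* pointer to data memory */
    void        *stack_mem;    /* pointer to stack memory */
    void        *resources;    /* pointers to semaphores, mutexes, etc. */
    struct tcb  *prev, *next;  /* pointers to preceding/next TCBs */
} tcb_t;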
Task/Process Scheduling
Deals with sharing the CPU among various tasks/processes.
A kernel application called 'Scheduler' handles the task scheduling.
Scheduler is nothing but an algorithm implementation, which performs the efficient and
optimum scheduling of tasks to provide a deterministic behaviour.
Task/Process Synchronisation
Deals with synchronising the concurrent access of a resource, which is shared across
multiple tasks and the communication between various tasks.
Error/Exception Handling
Deals with registering and handling the errors/exceptions that occur during the execution of tasks.
Insufficient memory, timeouts, deadlocks, deadline missing, bus error, divide by zero,
unknown instruction execution, etc. are examples of errors/exceptions.
Errors/Exceptions can happen at the kernel level services or at task level.
Deadlock is an example for kernel level exception, whereas timeout is an example for
a task level exception.
The OS kernel gives the information about the error in the form of a system call (API).
Watchdog timer is a mechanism for handling the timeouts for tasks.
Memory Management
RTOS makes use of 'block' based memory allocation technique, instead of the usual
dynamic memory allocation techniques used by the GPOS.
The RTOS kernel uses fixed-size blocks of dynamic memory, and a block is allocated to a task on a need basis.
The blocks are stored in a 'Free Buffer Queue’.
To achieve predictable timing and avoid the timing overheads, most of the RTOS
kernels allow tasks to access any of the memory blocks without any memory protection.
RTOS kernels assume that the whole design is proven correct and protection is
unnecessary.
Some commercial RTOS kernels allow memory protection as optional.
Interrupt Handling
Deals with the handling of various types of interrupts.
Interrupts provide Real-Time behaviour to systems.
Interrupts inform the processor that an external device or an associated task requires
immediate attention of the CPU.
Interrupts can be either Synchronous or Asynchronous.
Synchronous interrupts:
Occur in sync with the currently executing task.
Usually the software interrupts fall under this category.
Divide by zero, memory segmentation error, etc. are examples of synchronous interrupts. For synchronous interrupts, the interrupt handler runs in the same context as the interrupting task.
Asynchronous interrupts:
Occur at any point of execution of any task, and are not in sync with the currently
executing task.
The interrupts generated by external devices (by asserting the interrupt line of the
processor/controller to which the interrupt line of the device is connected) connected to
the processor/controller, timer overflow interrupts, serial data reception/ transmission
interrupts, etc. are examples for asynchronous interrupts.
For asynchronous interrupts, the interrupt handler is usually written as a separate task and runs in a different context.
Hence, a context switch happens while handling the asynchronous interrupts.
Priority levels can be assigned to the interrupts and each interrupt can be enabled or
disabled individually.
Most of the RTOS kernels implement a 'Nested Interrupts' architecture.
Interrupt nesting allows the pre-emption (interruption) of an Interrupt Service Routine
(ISR), servicing an interrupt, by a high priority interrupt.
Time Management
Accurate time management is essential for providing precise time reference for all
applications.
The time reference to kernel is provided by a high-resolution Real-Time Clock (RTC)
hardware chip (hardware timer).
The hardware timer is programmed to interrupt the processor/controller at a fixed rate.
This timer interrupt is referred as ‘Timer tick’ and is taken as the timing reference by
the kernel.
The 'Timer tick' interval may vary depending on the hardware timer.
Usually the 'Timer tick' varies in the microseconds range.
The time parameters for tasks are expressed as the multiples of the ‘Timer tick'.
Hard Real-Time Systems
Real-Time Operating Systems that strictly adhere to the timing constraints for a task are
referred as 'Hard Real-Time' systems.
Missing any deadline may produce catastrophic results for Hard Real-Time Systems, including
permanent data loss and irrecoverable damages to the system/users.
Air bag control systems and Anti-lock Brake Systems (ABS) of vehicles are typical examples
for Hard Real-Time Systems.
Any delay in the deployment of the air bags puts the lives of the passengers under threat.
• Hard Real-Time Systems do not implement the virtual memory model for handling the memory.
• This eliminates the delay in swapping the code corresponding to the task in and out of the primary memory.
• Most of the Hard Real-Time Systems are automatic and do not contain a Human In The Loop (HITL).
• The presence of a human in the loop for tasks introduces unexpected delays in the task execution.
Soft Real-Time System
• Real-Time Operating Systems that do not guarantee meeting deadlines, but offer the best effort to meet them, are referred as 'Soft Real-Time' systems.
• Missing deadlines for tasks is acceptable for a Soft Real-Time system if the frequency of deadline missing is within the compliance limit of the Quality of Service (QoS).
• A Soft Real-Time system emphasises the principle 'A late answer is an acceptable answer, but it could have been done a bit faster'.
• Soft Real-Time systems most often have a human in the loop (HITL).
• Automated Teller Machine (ATM) is a typical example for Soft-Real-Time System.
• If the ATM takes a few seconds more than the ideal operation time, nothing
fatal happens.
• An audio-video playback system is another example for Soft Real-Time system.
• No potential damage arises if a sample comes late by fraction of a second, for
playback.
The Concept of Multithreading: A process is split into multiple threads, each of which executes a portion of the process; there will be a main thread, and the rest of the threads are created within the main thread.
Use of multiple threads to execute a process brings the following advantages:
Better memory utilization: Multiple threads of the same process share the address space for
data memory. This also reduces the complexity of inter thread communication since variables
can be shared across the threads.
Speeds up execution: Since the process is split into different threads, when one thread enters a wait state, the CPU can be utilised by other threads of the process that do not depend on the event the waiting thread is blocked on. This speeds up the execution of the process.
Efficient CPU utilization: The CPU is engaged all the time.
Thread Standards:
Thread standards deal with the different standards available for thread creation and
management. These standards are utilized by the Operating Systems for thread creation and
thread management.
These are implemented as sets of thread class libraries. The commonly available thread class libraries are:
POSIX Threads: POSIX stands for Portable Operating System Interface. The POSIX.4 standard deals with the Real-Time extensions and the POSIX.4a standard deals with thread extensions. The POSIX standard library for thread creation and management is 'Pthreads'. The 'Pthreads' library defines the set of POSIX thread creation and management functions in 'C' language.
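A minimal illustrative Pthreads program in 'C' is given below, assuming a POSIX system (compile with: gcc threads.c -o threads -lpthread). The main thread creates one child thread; both print messages, and the main thread waits for the child with pthread_join().

#include <stdio.h>
#include <pthread.h>

/* function executed by the child thread */
void *child_thread(void *arg)
{
    for (int i = 0; i < 5; i++)
        printf("Child thread: iteration %d\n", i);
    pthread_exit(NULL);   /* natural termination of the child thread */
}

int main(void)
{
    pthread_t tid;
    /* the child thread is created within the main thread */
    if (pthread_create(&tid, NULL, child_thread, NULL) != 0) {
        perror("pthread_create");
        return 1;
    }
    for (int i = 0; i < 5; i++)
        printf("Main thread: iteration %d\n", i);
    pthread_join(tid, NULL);   /* main thread waits for the child to finish */
    return 0;
}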
Thread Termination:
Natural termination: The thread completes its execution and returns to the main thread
through a simple return or by executing the pthread_exit() call.
Forced termination: This can be achieved by the call pthread_cancel() or through the
termination of the main thread with exit or exec functions. pthread_cancel() call is used by a
thread to terminate another thread.
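A short sketch of forced termination, assuming the same POSIX environment as above: the main thread cancels a worker thread with pthread_cancel() after a few seconds.

#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

void *worker(void *arg)
{
    while (1) {          /* runs until forcibly terminated */
        printf("worker: running...\n");
        sleep(1);        /* sleep() is a cancellation point */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    sleep(3);
    pthread_cancel(tid);     /* one thread terminating another thread */
    pthread_join(tid, NULL); /* reap the cancelled thread */
    printf("main: worker cancelled\n");
    return 0;
}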
Thread Preemption
Thread pre-emption is the act of pre-empting the currently running thread. It means, stopping
the currently running thread temporarily.
Thread pre-emption is performed for sharing the CPU time among all the threads. The
execution switching among threads is known as 'Thread context switching’.
Thread context switching is dependent on the Operating system's scheduler and the type of the
thread.
Types of Threads
User Level Threads
User level threads do not have kernel/Operating System support and they exist solely
in the running process.
Even if a process contains multiple user level threads, the OS treats it as a single thread and will not switch the execution among its different threads.
It is the responsibility of the process to schedule each thread as and when required.
In summary, user level threads of a process are non-preemptive at thread level from OS
perspective.
The execution switching (thread context switching) happens only when the currently
executing user level thread is voluntarily blocked.
Hence, no OS intervention and system calls are involved in the context switching of
user level threads.
This makes context switching of user level threads very fast.
Kernel Level Threads
Kernel level threads are individual units of execution, which the OS treats as separate
threads. The OS interrupts the execution of the currently running kernel thread and
switches the execution to another kernel thread based on the scheduling policies
implemented by the OS. In summary, kernel level threads are pre-emptive. Kernel level
threads involve lots of kernel overhead and involve system calls for context switching.
Thread Binding Models
There are many ways for binding user level threads with system/kernel level threads.
Many-to-One Model
Here, many user level threads are mapped to a single kernel thread. In this model, the
kernel treats all user level threads as single thread and the execution switching among
the user level threads happens when a currently executing user level thread
voluntarily blocks itself or relinquishes the CPU. Solaris Green threads and GNU
Portable Threads are examples for this.
The 'PThread’ example is an illustrative example for application with Many-to-One
thread model.
One-to-One Model
• Here, each user level thread is bound to a kernel/system level thread.
• Windows XP/NT/2000 and Linux threads are examples for One-to-One thread
models. The modified 'PThread' example is an illustrative example for
application with One-to-One thread model.
Many-to-Many Model
• In this model, many user level threads are allowed to be mapped to many kernel
threads. Windows NT/2000 with ThreadFibre package is an example for this.
Thread vs. Process
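The commonly cited points of comparison are:
• A thread is a single unit of execution within a process and shares the address space and resources of its process; a process is a program in execution, with its own separate memory address space and resources.
• One process cannot directly access the variables of another process, whereas all threads of a process can share the process's variables directly.
• Thread creation, termination and context switching are faster and cheaper than the corresponding process operations, since no separate address space needs to be set up or switched.
• Processes are isolated from each other, whereas a misbehaving thread can affect the other threads of the same process.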
Task Scheduling
• Determining which task/process is to be executed at a given point of time is known as task/process scheduling.
• Scheduling policies form the guidelines for determining which task is to be executed when.
• The scheduling policies are implemented in an algorithm and run by the kernel as a service.
Based on the scheduling algorithm used, scheduling can be classified into:
• Non-preemptive Scheduling: The currently executing task/process is allowed to run
until it terminates or enters the ‘Wait’ state waiting for an I/O or system resource.
• Preemptive Scheduling: The currently executing task/process is preempted (stopped
temporarily) and another task from the Ready queue is selected for execution.
Preemptive Scheduling
In preemptive scheduling, the scheduler can preempt (stop temporarily) the currently
executing task/process and select another task from the 'Ready' queue for execution.
When to pre-empt a task and which task is to be picked up from the 'Ready' queue for
execution after preempting the current task is purely dependent on the scheduling
algorithm. A task which is preempted by the scheduler is moved to the 'Ready' queue. The act of moving a 'Running' process/task into the 'Ready' queue by the scheduler, without the process requesting it, is known as 'Preemption'.
Preemptive scheduling can be implemented in different approaches.
Time-based preemption
Priority-based preemption
The various types of preemptive scheduling adopted in task/process scheduling are:
Preemptive Shortest Job First (SJF)/Shortest Remaining Time (SRT) Scheduling
Round Robin (RR) Scheduling
Priority Based Scheduling
Preemptive Shortest Job First (SJF)/Shortest Remaining Time (SRT) Scheduling
In SJF, the process with the shortest estimated run time is scheduled first, followed by the next shortest process, and so on. The preemptive SJF scheduling algorithm sorts the 'Ready' queue when a new process enters it and checks whether the execution time of the new process is shorter than the remaining estimated time of the currently executing process. If so, the currently executing process is preempted and the new process is scheduled for execution. Thus preemptive SJF scheduling always compares the execution completion time of a new process entering the 'Ready' queue (which is the same as its remaining time) with the remaining time for completion of the currently executing process, and schedules the process with the shortest remaining time for execution. Preemptive SJF scheduling is also known as Shortest Remaining Time (SRT) scheduling.
Example:
Question: Three processes with process IDs P1, P2, P3 with estimated completion time 10, 5,
7 milliseconds respectively enter the ready queue together. A new process P4 with estimated
completion time 2 ms enters the 'Ready' queue after 2 ms. Assume all the processes contain
only CPU operation and no I/O operations are involved. Calculate the waiting time and Turn
Around Time (TAT) for each process and the average waiting time and Turn Around Time in
the SRT scheduling.
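A worked solution, assuming zero context-switch overhead:
At t = 0, the 'Ready' queue holds P1(10 ms), P2(5 ms) and P3(7 ms); P2 has the shortest time and starts executing.
At t = 2 ms, P4(2 ms) enters the queue. The remaining time of P2 is 3 ms, which is greater than 2 ms, so P2 is preempted and P4 executes from t = 2 to 4.
P2 then resumes (t = 4 to 7), followed by P3 (t = 7 to 14) and P1 (t = 14 to 24).
Waiting times: P1 = 14 ms, P2 = 4 − 2 = 2 ms, P3 = 7 ms, P4 = 2 − 2 = 0 ms.
Average waiting time = (14 + 2 + 7 + 0)/4 = 5.75 ms.
Turn Around Times: P1 = 24 ms, P2 = 7 ms, P3 = 14 ms, P4 = 4 − 2 = 2 ms.
Average TAT = (24 + 7 + 14 + 2)/4 = 11.75 ms.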
Round Robin (RR) Scheduling
In Round Robin scheduling, each process in the 'Ready' queue is executed for a pre-defined
time slot. 'Round Robin' brings the message "Equal chance to all". The execution starts with
picking up the first process in the 'Ready' queue.
• Round Robin scheduling ensures that every process gets a fixed amount of CPU time for execution. The order in which each process gets its time slice is determined by the First Come First Serve (FCFS) policy. If a process terminates before the elapse of its time slice, the process releases the CPU voluntarily and the next process in the queue is scheduled for execution by the scheduler.
Question: Three processes with process IDs P1, P2, P3 with estimated completion time 6, 4, 2
milliseconds respectively, enter the ready queue together in the order P1, P2, P3. Calculate the
waiting time and Turn Around Time (TAT) for each process and the Average waiting time and
Turn Around Time (Assuming there is no I/O waiting for the processes) in RR algorithm with
Time slice = 2 ms.
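A worked solution, assuming zero context-switch overhead:
Execution order with a 2 ms time slice: P1(0–2), P2(2–4), P3(4–6, completes), P1(6–8), P2(8–10, completes), P1(10–12, completes).
Waiting times: P1 = 0 + (6 − 2) + (10 − 8) = 6 ms, P2 = 2 + (8 − 4) = 6 ms, P3 = 4 ms.
Average waiting time = (6 + 6 + 4)/3 ≈ 5.33 ms.
Turn Around Times: P1 = 12 ms, P2 = 10 ms, P3 = 6 ms.
Average TAT = (12 + 10 + 6)/3 ≈ 9.33 ms.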
Priority Based Scheduling
The Priority Based Preemptive Scheduling ensures that a process with high priority is serviced
at the earliest compared to other low priority processes in the ‘Ready’ queue.
Any high priority process entering the 'Ready' queue is immediately scheduled for
execution. The priority of a task/process can be indicated through various mechanisms.
While creating the process/task, the priority can be assigned to it. The priority number
associated with a task/process is the direct indication of its priority. The priority number
0 indicates the highest priority. This convention need not be universal and it depends
on the kernel level implementation of the priority structure. Whenever a new process
enters the ‘Ready’ queue, the scheduler sorts the 'Ready' queue based on priority and
picks the process with the highest level of priority for execution.
Question: Three processes with process IDs P1, P2, P3 with estimated completion time 10, 5, 7
milliseconds and priorities 1, 3, 2 (0 – highest priority, 3 - lowest priority) respectively enter the ready
queue together. A new process P4 with estimated completion time 6 ms and priority 0 enters the 'Ready'
queue after 5 ms of start of execution of P1. Calculate the waiting time and Turn Around Time (TAT)
for each process and the Average waiting time and Turn Around Time (Assuming there is no I/O waiting
for the processes) in priority based scheduling algorithm.
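A worked solution, assuming zero context-switch overhead:
At t = 0, P1 (priority 1) is the highest priority process in the 'Ready' queue and starts executing. At t = 5 ms, P4 (priority 0) enters the 'Ready' queue and preempts P1, executing from t = 5 to 11. P1 resumes (t = 11 to 16), then P3 (priority 2) executes (t = 16 to 23), and finally P2 (priority 3) executes (t = 23 to 28).
Waiting times: P1 = 0 + (11 − 5) = 6 ms, P2 = 23 ms, P3 = 16 ms, P4 = 5 − 5 = 0 ms.
Average waiting time = (6 + 23 + 16 + 0)/4 = 11.25 ms.
Turn Around Times: P1 = 16 ms, P2 = 28 ms, P3 = 23 ms, P4 = 11 − 5 = 6 ms.
Average TAT = (16 + 28 + 23 + 6)/4 = 18.25 ms.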
Task Communication
• In a multitasking system, multiple tasks/processes run concurrently (in pseudo parallelism) and each process may or may not interact with the others.
• Based on the degree of interaction, the processes running on an OS are classified as
Co-operating Processes: One process requires the inputs from other processes to complete its
execution.
Competing Processes:
• The competing processes do not share anything among themselves but they share the
system resources.
• The competing processes compete for the system resources such as file, display device,
etc.
• Co-operating processes exchange information and communicate through the following methods:
• Co-operation through Sharing:
• The co-operating process exchange data through some shared resources.
• Co-operation through Communication:
• No data is shared between the processes.
• But they communicate for synchronisation.
• The mechanism through which processes/tasks communicate with each other is known as Inter Process/Task Communication (IPC).
• Inter Process Communication is essential for process co-ordination.
• The various types of Inter Process Communication (IPC) mechanisms adopted by
process are kernel (Operating System) dependent.
• Some of the important IPC mechanisms adopted by various kernels are:
• Shared Memory
• Pipes and Memory Mapped Objects
• Message Passing
• Message Queue, Mailbox and Signalling
• Remote Procedure Call and Sockets
1. Shared Memory
Memory Mapped Objects
• Memory mapped object is a shared memory technique adopted by certain Real-Time Operating Systems for allocating a shared block of memory which can be accessed by multiple processes simultaneously.
• In this approach, a mapping object is created and physical storage for it is reserved and
committed.
• A process can map the entire committed physical area or a block of it to its virtual
address space.
• All read and write operations to this virtual address space by a process are directed to its committed physical area.
• Any process which wants to share data with other processes can map the physical
memory area of the mapped object to its virtual memory space and use it for sharing
the data.
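As a hedged illustration, the POSIX analogue of this technique uses shm_open() and mmap(); the object name and size below are assumptions, RTOS-specific APIs differ, and error checking is omitted for brevity (compile with: gcc shm_demo.c -lrt).

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>

int main(void)
{
    /* create the mapping object and commit physical storage for it */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0666);
    ftruncate(fd, 4096);

    /* map the committed area into this process's virtual address space */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    strcpy(p, "hello");   /* writes are directed to the shared physical area */
    printf("%s\n", p);    /* another process mapping /demo_shm sees the data */

    munmap(p, 4096);
    close(fd);
    shm_unlink("/demo_shm");   /* remove the object when no longer needed */
    return 0;
}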
2. Message Passing
• Message passing is a synchronous/asynchronous information exchange mechanism used for Inter Process/Thread Communication.
• The major difference between the shared memory and message passing techniques is that shared memory can share large amounts of data, whereas only a limited amount of information/data is passed through message passing.
• Also, message passing is relatively fast and free from the synchronisation overheads of shared memory.
• Based on the message passing operation between the processes, message passing is
classified into:
• Message Queue
• Mailbox
• Signalling
Message Queue
• 'Message queue' is a First-In-First-Out (FIFO) queue which stores messages temporarily in a system defined memory object to pass them to the desired process.
• Usually the process which wants to talk to another process posts the message to a
message queue.
• Messages are sent and received through send and receive methods.
• send (Name of the process to which the message is to be sent, message)
• receive (Name of the process from which the message is to be received, message)
• The implementation of the message queue, send and receive methods are OS kernel
dependent.
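A minimal sketch using the POSIX message queue API as one example implementation; the queue name and sizes below are assumptions, and, as noted above, the actual implementation is OS kernel dependent (compile with: gcc mq_demo.c -lrt).

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <mqueue.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    /* create/open the FIFO message queue object */
    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0666, &attr);

    /* post a message to the queue (analogous to the send method above) */
    mq_send(mq, "hello", strlen("hello") + 1, 0);

    /* retrieve the oldest message (analogous to the receive method above) */
    char buf[64];
    mq_receive(mq, buf, sizeof(buf), NULL);
    printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_mq");
    return 0;
}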
Mailbox
• Mailbox is an alternate form of ‘Message queue’ and it is used in RTOS for IPC usually
for one way messaging.
• The task/thread which wants to send a message to other tasks/threads creates a mailbox
for posting the messages.
• The threads which are interested in receiving the messages posted to the mailbox by the
mailbox creator thread can subscribe to the mailbox.
• The thread which creates the mailbox is known as 'mailbox server' and the threads
which subscribe to the mailbox are known as 'mailbox clients’.
• The mailbox server posts messages to the mailbox and notifies it to the clients which
are subscribed to the mailbox.
• The clients read the message from the mailbox on receiving the notification.
Signalling
• Signalling is a primitive way of communication between processes/threads.
• Signals are used for asynchronous notifications, where one process/thread fires a signal indicating the occurrence of a scenario which the other process(es)/thread(s) is waiting for.
• Signals are not queued and they do not carry any data.
• The communication mechanism used in the RTX51 Tiny OS and the inter process communication in the VxWorks OS kernel are examples of signalling.
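A small sketch using POSIX signals as an illustrative analogue (RTOS signal APIs such as those in VxWorks differ in detail): the parent process fires SIGUSR1, and the child, which is waiting for that scenario, is notified with no data attached.

#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/wait.h>

static void on_signal(int sig)
{
    /* only async-signal-safe calls belong here; printf is used for brevity */
    printf("child: notified by signal %d (no data attached)\n", sig);
}

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {                   /* child: waits for the scenario */
        signal(SIGUSR1, on_signal);
        pause();                      /* block until a signal arrives */
        return 0;
    }
    sleep(1);                         /* crude wait for the child to install its handler */
    kill(pid, SIGUSR1);               /* parent fires the asynchronous notification */
    wait(NULL);
    return 0;
}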
3. Remote Procedure Call (RPC) and Sockets
• Remote Procedure Call (RPC) is the Inter Process Communication (IPC) mechanism
used by a process to call a procedure of another process running on the same CPU or
on a different CPU which is interconnected in a network.
• In the object oriented language terminology, RPC is also known as Remote Invocation
or Remote Method Invocation (RMI).
• RPC is mainly used for distributed applications like client-server applications.
• With RPC it is possible to communicate over a heterogeneous network (i.e. a network where the client and server applications are running on different operating systems).
• The CPU/process containing the procedure which needs to be invoked remotely is known as the server.
• The CPU/process which initiates an RPC request is known as the client.
• Sockets are used for RPC communication.
• Socket is a logical endpoint in a two-way communication link between two applications
running on a network.
• A port number is associated with a socket so that the network layer of the
communication channel can deliver the data to the designated application.
• Sockets are of different types, namely, Internet sockets (INET), UNIX sockets, etc.
• The INET socket works on internet communication protocol.
• TCP/IP, UDP, etc. are the communication protocols used by INET sockets.
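A minimal INET (TCP) client sketch showing the socket and port-number ideas described above; the address 127.0.0.1 and port 8080 are placeholder assumptions, and a matching server must be listening for connect() to succeed.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);   /* INET socket, TCP protocol */

    struct sockaddr_in server = {0};
    server.sin_family = AF_INET;
    server.sin_port = htons(8080);                /* port number identifies the application */
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

    if (connect(sock, (struct sockaddr *)&server, sizeof(server)) == 0) {
        send(sock, "hello", 5, 0);                /* two-way link: send... */
        char buf[128];
        ssize_t n = recv(sock, buf, sizeof(buf) - 1, 0);   /* ...and receive */
        if (n > 0) { buf[n] = '\0'; printf("reply: %s\n", buf); }
    }
    close(sock);
    return 0;
}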
Task Synchronisation Issues
• In a multitasking environment, multiple processes run concurrently (in pseudo
parallelism) and share the system resources.
• The processes communicate with each other with different IPC mechanisms including
shared memory and variables.
• Imagine a situation where two processes try to access the display hardware connected to the system, or two processes try to access a shared memory area where one process tries to write to a memory location while the other process is trying to read from it.
• This would produce unexpected results.
• This can be solved by making each process aware of the access of a shared
resource either directly or indirectly.
• The act of making processes aware of the access of shared resources by each process
to avoid conflicts is known as ‘Task/Process Synchronisation’.
• Various task communication/synchronisation issues may arise in a multitasking
environment if processes are not synchronised properly.
• Racing
• Deadlock
Racing:
• Racing or Race condition is the situation in which multiple processes compete (race) with each other to access and manipulate shared data concurrently.
• In a Race condition, the final value of the shared data depends on the process which acted on the data last.
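A small Pthreads sketch of a race condition: two threads increment a shared counter without synchronisation, so increments are lost and the final value depends on the interleaving (compile with: gcc race.c -lpthread).

#include <stdio.h>
#include <pthread.h>

long counter = 0;                  /* shared data */

void *increment(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        counter++;                 /* read-modify-write: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* expected 2000000, but typically less: the result depends on which
       thread acted on the shared data last during each lost update */
    printf("counter = %ld\n", counter);
    return 0;
}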
Deadlock
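Deadlock is the condition in which a set of processes waits for each other indefinitely: each process holds a resource and waits for a resource held by another process in the set, so none of them can proceed. The four conditions that together favour a deadlock situation (referred to later under 'Prevent Deadlocks') are:
• Mutual Exclusion: a resource can be used by only one process at a time.
• Hold and Wait: a process holds a resource while waiting for additional resources held by other processes.
• No Resource Preemption: a resource can be released only voluntarily by the process holding it.
• Circular Wait: a circular chain of processes exists, in which each process waits for a resource held by the next process in the chain.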
Deadlock Handling
A smart OS may foresee a deadlock condition and act proactively to avoid such a situation.
If a deadlock occurs, the way the OS reacts to it is not uniform across operating systems.
The OS may adopt any of the following techniques to detect and prevent deadlock
conditions.
Ignore Deadlocks:
Always assume that the system design is deadlock free.
This is acceptable on the grounds that the cost of removing a deadlock is large compared to the chance of a deadlock happening.
UNIX is an example for an OS following this principle.
A life critical system cannot pretend that it is deadlock free for any reason.
Detect and Recover:
This approach suggests the detection of a deadlock situation and recovery from it.
This is similar to the deadlock condition that may arise at a traffic junction.
When the vehicles from different directions compete to cross the junction, deadlock
(traffic jam) condition is resulted.
Once a deadlock (traffic jam) has happened at the junction, the only solution is to back
up the vehicles from one direction and allow the vehicles from opposite direction to
cross the junction.
If the traffic is too high, lots of vehicles may have to be backed up to resolve the traffic
jam.
This technique is also known as 'back up cars' technique.
Operating systems keep a resource graph in their memory.
• The resource graph is updated on each resource request and release.
• A deadlock condition can be detected by analysing the resource graph by graph analyser
algorithms.
• Once a deadlock condition is detected, the system can terminate a process or preempt
the resource to break the deadlocking cycle.
• Avoid Deadlocks:
• Deadlock is avoided by the careful resource allocation techniques by the
Operating System.
• It is similar to the traffic light mechanism at junctions to avoid the traffic jams.
• Prevent Deadlocks:
• Prevent the deadlock condition by negating one of the four conditions favouring
the deadlock situation.
• Ensure that a process does not hold any other resources when it requests a resource:
A process must request all its required resources, and the resources should be allocated, before the process begins its execution.
Grant resource allocation requests from processes only if the process does not currently hold a resource.
• Ensure that resource preemption (resource releasing) is possible at the operating system level:
Release all the resources currently held by a process if a request made by the process for a new resource cannot be fulfilled immediately.
Add the resources which are preempted (released) to a resource list describing the resources which the process requires to complete its execution.
Reschedule the process for execution only when the process gets both its old resources and the new resource which it requested.
Livelock
The Livelock condition is similar to the deadlock condition except that a process in livelock
condition changes its state with time.
While in deadlock a process enters a wait state for a resource and continues in that state forever without making any progress in execution, in a livelock condition a process always does something but is unable to make any progress towards execution completion.
The livelock condition is better explained with a real world example: two people attempting to cross each other in a narrow corridor.
Both the persons move towards each side of the corridor to allow the opposite person
to cross.
Since the corridor is narrow, none of them are able to cross each other.
Here both of the persons perform some action but still they are unable to achieve their
target, cross each other.
Starvation
In the multitasking context, starvation is the condition in which a process does not get the
resources required to continue its execution for a long time.
As time progresses, the process starves on resource.
Starvation may arise due to various conditions, such as a byproduct of the preventive measures against deadlock, scheduling policies favouring high priority tasks and tasks with the shortest execution time, etc.
How to Choose an RTOS
• The decision of choosing an RTOS for an embedded design is very crucial.
• A lot of factors need to be analysed carefully before making a decision on the selection of an RTOS.
• The requirements that need to be analysed in the selection of an RTOS for an embedded design fall under two categories:
• Functional requirements
• Non-functional requirements
Functional Requirements
Processor Support
It is not necessary that all RTOSs support all kinds of processor architectures.
It is essential to ensure the processor support by the RTOS.
Memory Requirements
The OS requires ROM for holding the OS files, which are normally stored in a non-volatile memory like FLASH.
The OS also requires working memory (RAM) for loading the OS services.
Since embedded systems are memory constrained, it is essential to evaluate the minimal
ROM and RAM requirements for the OS under consideration.
Real-time Capabilities
It is not mandatory that the operating system for all embedded systems be Real-Time; not all embedded Operating Systems are 'Real-Time' in behaviour.
The task/process scheduling policies play an important role in the 'Real-Time' behaviour of an OS.
Analyse the real-time capabilities of the OS under consideration and the
standards met by the operating system for real-time capabilities.
Kernel and Interrupt Latency
The kernel of the OS may disable interrupts while executing certain services and it may
lead to interrupt latency.
For an embedded system whose response requirements are high, this latency should be
minimal.
Inter Process Communication and Task Synchronisation
The implementation of Inter Process Communication and Synchronisation is OS kernel dependent.
Certain kernels may provide a bunch of options whereas others provide very limited
options.
Modularisation Support
Most of the operating systems provide a bunch of features.
At times some of these features may not be necessary for the functioning of an embedded product.
It is very useful if the OS supports modularisation, wherein the developer can choose the essential modules and re-compile the OS image for the required functioning.
Windows CE is an example for a highly modular operating system.
Support for Networking and Communication
The OS kernel may provide stack implementation and driver support for a bunch of
communication interfaces and networking.
Ensure that the OS under consideration provides support for all the interfaces required
by the embedded product.
Development Language Support
Certain operating systems include the run time libraries required for running
applications written in languages like Java and C#.
A Java Virtual Machine (JVM) customised for the Operating System is essential for
running java applications.
Similarly the .NET Compact Framework (.NETCF) is required for running Microsoft
.NET applications on top of the Operating System.
The OS may include these components as built-in components; if not, check the availability of the same from a third party vendor for the OS under consideration.
Non-Functional Requirements
Custom Developed or Off the Shelf
Depending on the OS requirement, it is possible to go for the complete development of an operating system suiting the embedded system needs, or to use an off-the-shelf, readily available operating system, either a commercial product or an Open Source product, which closely matches the system requirements.
Sometimes it may be possible to build the required features by customising an Open source
OS.
The decision on which to select is purely dependent on the development cost, licensing fees for
the OS, development time and availability of skilled resources.
Cost
The total cost for developing or buying the OS and maintaining it in terms of commercial
product and custom build needs to be evaluated before taking a decision on the selection of
OS.
Development and Debugging Tools Availability
The availability of development and debugging tools is a critical decision making factor in the
selection of an OS for embedded design.
Certain Operating Systems may be superior in performance, but the availability of tools for
supporting the development may be limited.
Explore the different tools available for the OS under consideration.
Ease of Use
How easy it is to use a commercial RTOS is another important feature that needs to be
considered in the RTOS selection.
After Sales
For a commercial embedded RTOS, after sales in the form of e-mail, on-call services, etc. for
bug fixes, critical patch updates and support for production issues, etc. should be analysed
thoroughly.
1. Out-of-Circuit Programming
• The sequence of operations for embedding the firmware with a programmer is listed
below:
1. Connect the programming device to the specified port of PC (USB/COM
port/parallel port).
2. Power up the device (Ensure that the power indication LED is ON).
3. Execute the programming utility on the PC and ensure proper connectivity is
established between PC and programmer. In case of error, turn off device power
and try connecting it again.
4. Unlock the ZIF socket by turning the lock pin.
5. Insert the device to be programmed into the open socket.
6. Lock the ZIF socket.
7. Select the device name from the list of supported devices.
8. Load the hex file which is to be embedded into the device.
9. Program the device by 'Program' option of utility program.
10. Wait till the completion of programming operation (Till busy LED of
programmer is OFF).
11. Ensure that programming is successful by checking the status LED on the
programmer (Usually 'Green' for success and 'Red' for error condition)
or by noticing the feedback from the utility program.
12. Unlock the ZIF socket and take the device out of programmer.
Drawbacks
The major drawback of out-of-circuit programming is the high development time. Whenever the firmware is changed, the chip should be taken out of the development board for re-programming. This is tedious and prone to chip damage due to frequent insertion and removal. The programmer facilitates programming of only one chip at a time, so it is not suitable for batch production. This can be resolved using a 'Gang Programmer', which contains multiple ZIF sockets (4 to 8) and is capable of programming multiple devices at a time, but it is a bit expensive compared to an ordinary programmer. Another big drawback of out-of-circuit programming is that once the product is deployed in the market in a production environment, it is very difficult to upgrade the firmware.
Applications
• The out-of-system programming technique is used for firmware integration for low end embedded products which run without an operating system.
• Out-of-circuit programming is commonly used for the development of low volume products and Proof of Concept (PoC) product development.
2. In System Programming (ISP)
• Here, the programming is done 'within the system', meaning the firmware is embedded
into the target device without removing it from the target board.
• It is the most flexible and easy way of firmware embedding.
• The only pre-requisite is that the target device must have an ISP support.
• Apart from the target board, PC, ISP cable and ISP utility, no other additional hardware
is required for ISP.
• The target board can be interfaced to the utility program running on PC through Serial
Port/Parallel Port/USB.
• The communication between the target device and ISP utility will be in a serial format.
• The serial protocols used for ISP may be 'Joint Test Action Group (JTAG)' or
'Serial Peripheral Interface (SPI)' or any other proprietary protocol.
In order to perform ISP operations, the target device should be powered up in a special ‘ISP
mode’. ISP mode allows the device to communicate with an external host, such as a PC or
terminal, through a serial interface. The device receives commands and data from the host,
erases and reprograms code memory according to the received command. Once the ISP
operations are completed, the device is re-configured so that it will operate normally by
applying a reset or a re-power up.
Devices with SPI In System Programming support contain a built-in Serial Peripheral Interface (SPI), and the on-chip EEPROM or FLASH memory is programmed through this interface.
• The primary I/O lines involved in SPI - In System Programming are:
• MOSI - Master Out Slave In
• MISO - Master In Slave Out
• SCK - System Clock
• RST - Reset of Target Device
• GND - Ground of Target Device
The PC acts as the master and the target device acts as the slave in ISP. The program data is sent to the MOSI pin of the target device and the device acknowledgement originates from the MISO pin of the device.
• SCK pin acts as the clock for data transfer.
• Since the target device works under a supply voltage less than 5V (TTL/CMOS), it is convenient to connect these lines of the target device to the parallel port of the PC. Since parallel port operations are also at 5V logic, there is no need for any intermediate hardware for signal conversion. Standard SPI-ISP utilities are freely available on the internet and there is no need to write one's own program.
The power up sequence for In System Programming for Atmel's AT89S series
microcontroller family is listed below:
1. Apply supply voltage between VCC and GND pins of target chip.
2. Set RST pin to "HIGH" state.
3. If a crystal is not connected across pins XTAL1 and XTAL2, apply a 3 MHz to 24 MHz
clock to XTAL1 pin and wait for at least 10 milliseconds.
4. Enable serial programming by sending the Programming Enable serial instruction to
pin MOSI/P1.5. The frequency of the shift clock supplied at pin SCK/P1.7 needs to be
less than the CPU clock at XTAL1 divided by 40.
5. The Code or Data array is programmed one byte at a time by supplying the address and
data together with the appropriate Write instruction. The selected memory location is
first erased before the new data is written. The write cycle is self-timed and typically
takes less than 2.5 ms at 5V.
6. Any memory location can be verified by using the Read instruction, which returns the content at the selected address at serial output MISO/P1.6.
7. After successfully programming the device, set RST pin low or turn off the chip power
supply and turn it ON to commence the normal operation.
3. In Application Programming
In Application Programming (IAP) is a technique used by the firmware running on the
target device for modifying a selected portion of the code memory. It is not a technique for first
time embedding of user written firmware. It modifies the program code memory under the
control of the embedded application. Updating calibration data, look-up tables, etc., which are
stored in code memory, are typical examples of IAP.
• The Boot ROM resident API instructions, which perform various functions such as programming, erasing, and reading the Flash memory during ISP mode, are made available to the end-user written firmware for IAP.
• Thus, it is possible for an end-user application to perform operations on the Flash
memory.
• A common entry point to these API routines is provided for interfacing them to the end-
user's application.
• Functions are performed by setting up specific registers as required by a specific
operation and performing a call to the common entry point.
• Like any other subroutine call, after completion of the function, control will return to
the end-user's code.
• The Boot ROM is shadowed with the user code memory in its address range. This
shadowing is controlled by a status bit.
• When this status bit is set, accesses to the internal code memory in this address range
will be from the Boot ROM.
• When cleared, accesses will be from the user's code memory. Hence the user should set
the status bit prior to calling the common entry point for IAP operations.
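A purely hypothetical sketch of the call mechanism described above: the entry address, function code and parameter block below are placeholders, since the real Boot ROM API, register usage and shadowing control are device specific and must be taken from the device datasheet.

#include <stdint.h>

/* All values below are hypothetical placeholders. */
#define IAP_ENTRY   0xFFF0u    /* hypothetical common entry point in Boot ROM */
#define IAP_PROGRAM 0x01u      /* hypothetical 'program byte' function code   */

struct iap_params {            /* stands in for "setting up specific registers" */
    unsigned char function;    /* requested Flash operation */
    unsigned int  address;     /* target code-memory address */
    unsigned char data;        /* byte to be programmed */
};

static void iap_call(struct iap_params *p)
{
    /* cast the fixed entry address to a function pointer and call it;
       control returns here afterwards, like any other subroutine call */
    void (*entry)(struct iap_params *) =
        (void (*)(struct iap_params *))(uintptr_t)IAP_ENTRY;
    entry(p);
}

void update_calibration(unsigned int addr, unsigned char value)
{
    struct iap_params p = { IAP_PROGRAM, addr, value };
    /* on a real part, the Boot ROM must be mapped in first by setting the
       shadowing status bit, and the bit cleared again after the call */
    iap_call(&p);
}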