Chapter 3

Real-time Task

1
Definition

Process (or task)

 A sequence of instructions that, in the absence of
other activities, is continuously executed by the
processor until completion.

[Timeline of task τi: arrival time ai, start time si, finishing time fi, over time t]

2
Definition

 A real-time task is generated due to certain
event occurrences
• Either internal or external
Example:
• A task may be generated when a temperature
sensor senses a high level.

3
Task States

A task is said to be:


• ACTIVE: if it can be executed by the CPU.
• BLOCKED: if it is waiting for an event.
An active task can be:
• RUNNING: if it is being executed by the
CPU.
• READY: if it is waiting for the CPU.
4
Task State Transitions

[State transition diagram: activation moves a task to READY; dispatching moves
READY → RUNNING; preemption moves RUNNING → READY; a wait on an event moves
RUNNING → BLOCKED; a signal moves BLOCKED → READY; a RUNNING task may terminate.]

5
Ready Queue

 The ready tasks are kept in a waiting queue,
called the ready queue
 The strategy for choosing the ready task to be
executed on the CPU is the scheduling algorithm.

[Ready queue diagram: activated tasks τ3, τ2, τ1 wait in the ready queue and
are dispatched to the CPU, where they run until termination.]
6
Real-Time Task

[Timeline of task τi: request time ri, start time si, finishing time fi, and
absolute deadline di; Ci spans the execution, Di the relative deadline, Ti the
period.]

• Ti inter-arrival time (period), applicable for periodic tasks
• ri request time (arrival time ai)
• si start time
• Ci worst case execution time (wcet)
• di absolute deadline
• Di relative deadline (relative to the arrival time)
• fi finishing time
7
Other parameters

[Timeline of task τi at time t: the remaining execution ci(t) and the slack
lie between t and the deadline di.]

• Lateness: Li = fi − di
• Residual wcet ci(t): remaining execution time at time t, with ci(ri) = Ci
• Laxity (or slack): di − t − ci(t)

8
Other parameters

 Start-time Jitter
[Figure: variation among the start times si,1, si,2, si,3 of successive
instances of τi.]

 Completion-time Jitter (I/O Jitter)
[Figure: variation among the start and finishing times (si,k, fi,k) of
successive instances of τi.]
9
Task Criticality

 HARD real time tasks
• Missing a deadline is highly undesirable
• May cause catastrophic effects on the
system
 SOFT real time tasks
• Occasional miss of a deadline is tolerable
• Causes a performance degradation
An operating system able to handle hard RT
tasks is called a hard real-time system
10
Examples

HARD real time tasks


• Flight control system
• Car brake system
SOFT real time tasks
• Reading data from the keypad
• Playing video

11
Real-time tasks

Periodic
• Periodic tasks repeat after a certain
fixed time interval
Sporadic
• Sporadic tasks recur at random instants
Aperiodic
• Same as sporadic except that the minimum
separation between two instances can be 0.
12
Activation modes

Time driven: periodic tasks


• The task is automatically activated by the
kernel at regular intervals
Event driven: aperiodic/sporadic tasks
• The task is activated upon the arrival of
an event or through an explicit invocation
of the activation primitive
• Sporadic: known minimum inter-arrival
times; often used for worst case
calculations
13
Phases of periodic task

 The phase is the time from zero till the occurrence
of the first instance of the task.
 Denoted by Φ

[Timeline from 0: the task's instances occur at e1, e2, e3 after the phase.]
14
Periodic task model

 τi (Ci, Ti, Di)

 ri,1 = Φi
 ri,k+1 = ri,k + Ti   (Ti: inter-arrival time)
 ri,k = Φi + (k − 1)Ti
 di,k = ri,k + Di   [often Di = Ti]

[Timeline: instance k of τi arrives at ri,k, executes for Ci within the
period Ti, and the next instance arrives at ri,k+1.]
15
Types of Constraints

Timing Constraints
• Activation, Completion, Jitter
Precedence Constraints
• They impose an ordering in the execution
Resource Constraints
• They enforce a synchronization in the
access to mutually exclusive resources.
16
Time Constraints

Can be explicit or implicit


• Explicit constraints
 Are included in the specification of the
system
• Example
 Open the valve in 10 seconds
 Send the position within 40 ms
 Read the altimeter every 200 ms
17
Time Constraints

• Implicit constraints
 Do not appear in the system specification but
must be respected to meet the requirements.
• Example
 Avoid obstacles while driving at speed V
 The speed has to be carefully calculated such
that all obstacles are avoided.

18
Precedence Constraints

Sometimes tasks must be executed with specific


precedence relations, specified by a Directed
Acyclic Graph.

[Example DAG over τ1…τ5: τ1 is a predecessor of τ4, and τ4 is an immediate
predecessor of τ5.]

19
Resource Constraints

 To preserve data consistency, shared resources
must be accessed in mutual exclusion.
 However, mutual exclusion introduces extra
delays

20
Task Scheduling

21
General Definition

A set of tasks Γ (Gamma) is said to be schedulable


if there exists a feasible schedule for it.
A schedule is a particular assignment of tasks to
the processor in time.
A schedule σ is said to be feasible if all the tasks
are able to complete within a set of constraints.

22
Scheduling Terminology

Valid Schedule:
• At most one task is assigned to a
processor at a time.
• No task is scheduled before it is ready
• Precedence and resource constraints of
tasks are satisfied.

23
Scheduling Terminology

Feasible Schedule:
• A valid schedule in which all tasks
meet their timing constraints
Optimal Scheduler:
• An optimal scheduler can feasibly
schedule any task set that can be
scheduled by any other scheduler.
24
The General Scheduling Problem

 Given a set Γ of n tasks, a set P of m processors,
and a set R of r resources, find an assignment of P
and R to Γ which produces a feasible schedule.

[Diagram: the scheduling algorithm maps Γ, P, and R to a feasible schedule σ.]
25
Complexity

In 1975, Garey and Johnson showed that the


general scheduling problem is NP-hard.
However, polynomial time algorithms can be
found under particular conditions.

26
Scheduling algorithm taxonomy

 Preemptive vs. non-preemptive
 Static vs. dynamic
 Best effort vs. optimal

27
Scheduling

 A scheduling algorithm is said to be:
• Preemptive: if the running task can be
temporarily suspended in the ready
queue to execute a more important task.
• Non-preemptive: if the running task cannot
be suspended before completion.
• Deferred preemptive: the running task is
allowed to run for a bounded time before
being preempted.
28
Preemptive System

[Figure: task timeline with preemptions.]

 Interrupt a task when a higher priority task
wants to execute.
 Easy to provide guarantees
 Higher overhead of context switching and memory
29
Non-Preemptive System

[Figure: task timeline without preemption; each task runs to completion.]

30
Deferred Preemptive System

 When a high priority task arrives, the low
priority task is allowed to continue for a
bounded time (not necessarily to completion).
 Essentially, each task can be considered to be
composed of small non-preemptive pieces, each
with a bounded duration.
31
Deferred Preemptive System

[Timeline 0–20: τ1 executes, accessing resource R1.]

32
Deferred Preemptive System

[Timeline 0–20: τ1 (accessing R1) and τ2 (accessing R2).]

33
Deferred Preemptive System

[Timeline 0–28: τ1 (accessing R1) and τ2 (accessing R2); preemption is
deferred until a bounded time has elapsed.]

34
Static vs. Dynamic

A scheduler is called static (or pre-run-time) if it


makes its scheduling decisions at compile time.
It generates a dispatching table for the run-time
dispatcher off-line.

35
Static vs. Dynamic (Cont.)

For this purpose it needs complete prior


knowledge about the task-set characteristics,
e.g., maximum execution times, precedence
constraints, mutual exclusion constraints, and
deadlines.

36
Static vs. Dynamic (Cont.)

A scheduler is called dynamic (or on-line) if it


makes its scheduling decisions at run time,
selecting one out of the current set of ready
tasks.
 Dynamic schedulers are flexible and adapt to an
evolving task scenario. They consider only the
current task requests.
37
Scheduling Policies

38
Tasks (Rep.)

 Let {Ti} be a set of tasks, where
• ci is the execution time of Ti
• di is the deadline interval, that is, the
time between Ti becoming available and the
time by which Ti has to finish execution
• li is the laxity or slack, defined as
li = di − ci
39
Scheduling Point

 At these points on the time line
• The scheduler makes a decision regarding
the task to be run next
 Clock-driven
• Scheduling points defined by interrupts
from a periodic timer
 Event-driven
• Scheduling points defined by event
occurrences, such as task arrival and
completion

40
Task Scheduling On Uni-processor

 Real-time task schedulers can be broadly
classified into
• Clock-driven
• Event-driven

41
Clock-driven scheduling

 The decision regarding which job to run next is
made only at the clock interrupt instances.
• Timers are used to determine the
scheduling points
• The job list, as well as which task is to
be run for how long, is stored in a table
42
Clock-driven scheduling

Popular Examples
• Table-driven scheduler
• Cyclic scheduler

43
Clock-driven scheduler

 Also called an offline or static scheduler
 Cannot schedule aperiodic tasks

44
Table-driven scheduler

 Task   Start Time   Stop Time

 T1       0            100
 T2     101            150
 T3     151            225

45
Table-driven scheduling

 For scheduling n periodic tasks
• The scheduler develops a permanent
schedule for a period LCM(P1, P2, …, Pn)
(see the sketch below)
• Stores the schedule in a table
• The schedule is repeated forever.

46
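As a small illustration of the LCM computation above (the task periods are
hypothetical):

#include <stdio.h>

/* Greatest common divisor (Euclid's algorithm). */
static unsigned gcd(unsigned a, unsigned b) {
    while (b != 0) {
        unsigned t = b;
        b = a % b;
        a = t;
    }
    return a;
}

/* Least common multiple of two periods. */
static unsigned lcm(unsigned a, unsigned b) {
    return a / gcd(a, b) * b;
}

int main(void) {
    /* Hypothetical periods P1..P4 in milliseconds. */
    unsigned periods[] = {20, 40, 50, 100};
    unsigned major_cycle = periods[0];
    for (int i = 1; i < 4; i++)
        major_cycle = lcm(major_cycle, periods[i]);
    printf("Major cycle: %u ms\n", major_cycle);  /* prints 200 ms */
    return 0;
}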
Table-driven scheduling

 Table-driven schedulers are
• Simple: used in low cost applications
• Efficient: very little runtime overhead
• Inflexible: very difficult to accommodate
dynamic tasks.

47
Disadvantage

 When the number of tasks is large
• It requires setting a large number of timers
• The number of timers supported by an
operating system is restricted for
efficiency reasons.

48
Cyclic Schedulers

 Cyclic schedulers are very popular
• They are extensively used in industry
• A large majority of the small embedded
applications being manufactured use a
cyclic scheduler.

49
Cyclic Schedulers

Repeats a pre-computed schedule


The schedule needs to be stored for a major
cycle
A major cycle is divided into one or more minor
cycles (frames)

50
Cyclic Schedulers (Cont.)

Scheduling point for a cyclic scheduler


• Occurs at the beginning of frames
Each task is assigned to run in one or more
frames

51
Major Cycle

 A major cycle spans a fixed set of task instances
• In each major cycle, the different tasks
recur identically

[Timeline: successive major cycles repeat back to back.]

52
Minor Cycle

 Each major cycle
• Usually has an integral number of minor
cycles or frames
• The period of the major cycle = LCM(P1, P2, …, Pn)
 Frame boundaries are marked
• Through interrupts from a periodic timer

53
Selecting an appropriate frame size
(F)

 Minimize context switches
• F should be at least as large as the
execution time of each task
 Minimize table size
• F should squarely divide the major cycle
 Satisfaction of task deadlines
• Between the arrival of a task and its
deadline, at least one full frame must
exist (see the sketch below).
54
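A minimal sketch of these three constraints. For the deadline condition it
assumes the common textbook form 2F − gcd(F, Pi) ≤ Di, and the task values
are hypothetical:

#include <stdio.h>

static unsigned gcd(unsigned a, unsigned b) {
    while (b) { unsigned t = b; b = a % b; a = t; }
    return a;
}

/* Check whether frame size F satisfies the three constraints above for a
   task set with periods P[], execution times C[], relative deadlines D[],
   and major cycle M. */
static int frame_ok(unsigned F, unsigned M,
                    const unsigned P[], const unsigned C[],
                    const unsigned D[], int n) {
    if (M % F != 0) return 0;              /* F must squarely divide M   */
    for (int i = 0; i < n; i++) {
        if (C[i] > F) return 0;            /* frame must fit each task   */
        if (2 * F - gcd(F, P[i]) > D[i])   /* full frame before deadline */
            return 0;
    }
    return 1;
}

int main(void) {
    /* Hypothetical task set: (Ci, Pi, Di) with Di = Pi. */
    unsigned P[] = {4, 5, 20}, C[] = {1, 2, 2}, D[] = {4, 5, 20};
    unsigned M = 20;  /* LCM(4, 5, 20) */
    for (unsigned F = 1; F <= M; F++)
        if (frame_ok(F, M, P, C, D, 3))
            printf("Feasible frame size: %u\n", F);  /* prints F = 2 */
    return 0;
}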
Disadvantages of cyclic scheduler

 As the number of tasks increases
• It becomes difficult to select a suitable
frame size
 CPU time in many frames is wasted

55
Event-Driven Schedulers

 Unlike clock-driven schedulers:
• These can handle both sporadic and
aperiodic tasks
• Used in more complex applications
 Scheduling points are marked
• By event occurrences such as task arrival
and completion, not by a periodic timer

56
Event-Driven Scheduling

 Scheduling points:
• Defined by task completions and event arrivals
 Preemptive scheduler
• On arrival of a higher priority task, the
running task may be preempted.
 These are greedy schedulers:
• They never keep the processor idle if a
task is ready.
57
Event-Driven Static Schedulers

 The task priorities, once assigned by the
programmer
• Do not change during runtime.
• RMA (Rate Monotonic Algorithm) is the
optimal static priority scheduling
algorithm (see the sketch below).

58
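A minimal sketch of rate-monotonic priority assignment plus the Liu-Layland
sufficient utilization test U ≤ n(2^(1/n) − 1), which is commonly paired
with RMA; the task values are hypothetical:

#include <math.h>
#include <stdio.h>

/* Rate-monotonic rule: the shorter the period, the higher the priority.
   Tasks below are listed in period order, so T1 > T2 > T3 in priority. */
int main(void) {
    double C[] = {1.0, 2.0, 3.0};   /* execution times */
    double P[] = {4.0, 10.0, 20.0}; /* periods         */
    int n = 3;

    double U = 0.0;
    for (int i = 0; i < n; i++)
        U += C[i] / P[i];

    double bound = n * (pow(2.0, 1.0 / n) - 1.0);
    printf("U = %.3f, Liu-Layland bound = %.3f\n", U, bound);
    if (U <= bound)
        printf("Schedulable under RMA (sufficient test passed)\n");
    else
        printf("Inconclusive: the test is sufficient, not necessary\n");
    return 0;
}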
Event-Driven Dynamic Schedulers

The task priorities can change during runtime


• Based on the relative urgency of
completion of tasks
• EDF (Earliest Deadline First) is the
optimal uni-processor scheduling
algorithm

59
Earliest Deadline First (EDF)

 Algorithm
• Each time a new ready task arrives:
• It is inserted into a queue of ready tasks,
sorted by their deadlines (see the sketch below).
• If the newly arrived task is inserted at the
head of the queue, the currently
executing task is preempted.

60
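A minimal sketch of the deadline-sorted ready queue described above; the
list-based structure and names are illustrative (a production kernel would
typically use a heap, as the next slide notes):

#include <stdio.h>
#include <stdlib.h>

typedef struct Task {
    const char *name;
    long abs_deadline;      /* absolute deadline di */
    struct Task *next;
} Task;

static Task *ready = NULL;  /* head = earliest deadline = runs next */

/* Insert keeping the list sorted; returns 1 if the new task became the
   head, i.e., the currently running task should be preempted. */
static int edf_insert(Task *t) {
    Task **p = &ready;
    while (*p && (*p)->abs_deadline <= t->abs_deadline)
        p = &(*p)->next;
    t->next = *p;
    *p = t;
    return ready == t;
}

int main(void) {
    Task t1 = {"T1", 33, NULL}, t2 = {"T2", 28, NULL};
    edf_insert(&t1);
    if (edf_insert(&t2))                 /* 28 < 33: t2 goes to the head */
        printf("%s preempts the running task\n", t2.name);
    return 0;
}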
Earliest Deadline First (EDF)

Scheduling is complex
• Processes to be sorted according to
deadline at each scheduling instant.
• Sorted list maintained as a heap.
• Each update requires O(log n).

61
Example

Earliest Deadline First (EDF)

 Task  Arrival  Duration  Deadline

 T1       0        10        33
 T2       4         3        28
 T3       5        10        29

[Gantt chart, time 0–24: T1 runs in [0,4]; T2 (earliest deadline, 28)
preempts and runs in [4,7]; T3 runs in [7,17]; T1 resumes and finishes
in [17,23]. All deadlines are met.]

62
Accumulated Utilization

 Accumulated utilization:  μ = Σ (i = 1 to n) ci / pi

 Necessary condition for schedulability
(with m = number of processors):  μ ≤ m

63
EDF Schedulability Check

 The sum of the utilizations of the tasks is at most one:

   Σ (i = 1 to n) ei / pi = Σ ui ≤ 1

 This is both the necessary and the sufficient condition
for schedulability (a check is sketched below)

64
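A small illustration of the utilization check, with a hypothetical periodic
task set:

#include <stdio.h>

/* EDF schedulability on a uniprocessor: sum of ei/pi must not exceed 1. */
int main(void) {
    double e[] = {1.0, 2.0, 3.0};   /* execution times */
    double p[] = {4.0, 8.0, 12.0};  /* periods         */
    double u = 0.0;
    for (int i = 0; i < 3; i++)
        u += e[i] / p[i];
    printf("Total utilization: %.3f -> %s\n", u,
           u <= 1.0 ? "schedulable under EDF" : "not schedulable");
    return 0;
}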
EDF is a dynamic algorithm

 The priority of a task can be determined at any
point of time.
 The longer a task waits in the ready queue, the
higher the chance of its being taken up for
scheduling.

65
Implementation of EDF

Simple FIFO queue


• A freshly arriving task is inserted at the
end of the queue
Sorted queue
• Priority queue

66
Implementation of EDF

 A queue is maintained for each distinct deadline
 When a task arrives:
• Its absolute deadline is computed and the
task is inserted into the corresponding queue.

67
EDF Properties

 EDF is optimal for a uni-processor with task
preemption being allowed
• If a feasible schedule exists, then EDF will
schedule the tasks
 EDF can achieve 100% processor utilization
• When an EDF system is overloaded and
misses a deadline, it will have run at 100%
capacity for some time before the deadline
is missed.
68
Minimum Laxity First (MLF)

 Priorities are a decreasing function of the laxity
 Laxity = relative deadline − time required to
complete the task's execution.

69
Minimum Laxity First (MLF)

The task that is most likely to fail first is assigned


highest priority
• Less laxity, higher priority
• Dynamically changing priority
• Pre-emptive

70
Minimum Laxity First (MLF)

 Requires calling the scheduler periodically to
re-compute the laxities; the overhead of many
scheduler calls and many context switches
(see the sketch below).
 Detects missed deadlines early
 Requires knowledge of the execution times.

71
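A minimal sketch of the laxity computation and minimum-laxity selection;
the names are illustrative, and the snapshot values match the example on
the next slide:

#include <stdio.h>

/* At a scheduling point t, laxity = di - t - ci(t), where ci(t) is the
   remaining execution time; the ready task with the smallest laxity runs. */
typedef struct {
    const char *name;
    long abs_deadline;   /* di    */
    long remaining;      /* ci(t) */
} Task;

static const Task *pick_mlf(const Task *tasks, int n, long t) {
    const Task *best = NULL;
    long best_laxity = 0;
    for (int i = 0; i < n; i++) {
        long laxity = tasks[i].abs_deadline - t - tasks[i].remaining;
        if (!best || laxity < best_laxity) {
            best = &tasks[i];
            best_laxity = laxity;
        }
    }
    return best;
}

int main(void) {
    /* Snapshot at t = 5 from the example that follows. */
    Task tasks[] = {{"T1", 33, 6}, {"T2", 28, 2}, {"T3", 29, 10}};
    const Task *next = pick_mlf(tasks, 3, 5);
    printf("Run %s next\n", next->name);   /* T3: laxity 29-5-10 = 14 */
    return 0;
}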
Example

 Task  Arrival  Duration  Deadline

 T1       0        10        33
 T2       4         3        28
 T3       5        10        29

[Gantt chart, time 0–24: T1 runs in [0,4]; T2 (least laxity) runs from
t = 4; T3 (laxity 14) takes over at t = 5 and runs in [5,15]; T2 completes
in [15,17]; T1 finishes in [17,23].]

 At t = 4:  l(T1) = 33 − 4 − 6 = 23,  l(T2) = 28 − 4 − 3 = 21
 At t = 5:  l(T1) = 33 − 5 − 6 = 22,  l(T2) = 28 − 5 − 2 = 21,
            l(T3) = 29 − 5 − 10 = 14
 At t = 15: l(T1) = 33 − 15 − 6 = 12, l(T2) = 28 − 15 − 2 = 11

72
Resource Sharing for
Real-Time Tasks

73
Introduction

 So far, the only resource that we have considered
is the CPU
• The CPU is serially reusable
• It can be used by one task at a time.
• A task can be preempted at any time
without affecting correctness.

74
Introduction

 Two tasks conflict with one another if they
require the same resource.
 Two tasks contend with one another if one
requests a resource already held by another
task.
 When a task does not get the requested
resource, it is blocked, i.e., removed from the
ready queue.
75
Critical Sections

 Tasks in reality need to share many types of
resources
• Files, data structures, devices
• These are non-preemptable resources
 A piece of code in which a shared non-
preemptable resource is accessed is
• Called a critical section in the operating
systems literature.
76
Critical Section Execution

Traditional OS solution to execute critical


sections:
• Semaphores
 However, in real-time systems this solution does
not work well; the result is
• Priority inversion
• Unbounded priority inversion
77
Priority Inversion

 When a resource needs to be shared in exclusive
mode
• A task may be blocked by a lower priority
task which is already holding the resource.
 A task instance using a critical resource cannot
be preempted
• Until it completes its use of the resource
78
Priority Inversion

 Consequence: a higher priority task cannot
make progress and keeps waiting
• While the lower priority task progresses
with its computation.

79
Unbounded Priority Inversion

 Consider the following situation
• A lower priority task is holding a resource
• A higher priority task is waiting
• However, an intermediate priority task
which does not need the resource preempts
the lower priority task.

80
Unbounded Priority Inversion

[Figure: CPU usage over time for tasks T1–T6. T6 (lowest priority) locks
resource RC; a higher priority task blocks on RC while the intermediate
priority tasks preempt T6, prolonging the inversion until T6 unlocks RC.]

81
Unbounded Priority Inversion

 The number of priority inversions suffered by a
high priority task
• Can be too large, causing it to miss its
deadline

82
Solution for Simple Priority Inversion

A simple priority inversion can be tolerated:


• Limit the time for which a task executes
its critical section.
• A simple priority inversion can be limited
to tolerable levels by careful
programming.

83
Priority Inheritance Protocols

 The main idea behind this scheme:
• A task in a critical section cannot be
preempted
 It should be allowed to complete as early as possible
 How do you make a task complete as early as
possible?
• Raise its priority, so that lower priority
tasks are not able to preempt it.
84
Priority Inheritance Protocols

 By how much should its priority be raised?
• Raise its priority to that of the
task it is blocking.

85
Priority Inheritance Protocols

 When a resource is busy:
• Requests to lock the resource are queued
in FIFO order.
• The inheritance clause is applied when a
higher priority task blocks
• That is, the holder is raised to the
highest priority in the queue

86
Priority Inheritance Protocols

 As soon as the task releases the resource,
• It gets back its original priority
value if it is holding no other critical
resource
 In case it is holding other critical resources,
• It inherits the priority of the highest
priority task waiting for any of those
resources (see the sketch below).
87
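A minimal sketch of the inherit/restore logic described above; the struct
and function names are illustrative, not a real RTOS API (a larger number
means a higher priority here):

#include <stdio.h>

typedef struct Task {
    int base_prio;     /* priority assigned by the programmer */
    int cur_prio;      /* possibly boosted priority           */
} Task;

typedef struct Resource {
    Task *holder;      /* NULL when free */
} Resource;

/* Called when 'blocked' finds 'res' already held: the holder inherits the
   blocked task's priority if it is higher (the inheritance clause). */
static void pip_block(Resource *res, Task *blocked) {
    if (res->holder && blocked->cur_prio > res->holder->cur_prio)
        res->holder->cur_prio = blocked->cur_prio;
    /* ...enqueue 'blocked' in the resource's FIFO wait queue... */
}

/* Called on release: the holder returns to its base priority, assuming it
   holds no other critical resource (re-inheritance is omitted here). */
static void pip_release(Resource *res, Task *holder) {
    res->holder = NULL;
    holder->cur_prio = holder->base_prio;
}

int main(void) {
    Task lo = {2, 2}, hi = {9, 9};
    Resource r = {&lo};        /* low priority task holds the resource */
    pip_block(&r, &hi);        /* high priority task blocks on it      */
    printf("holder priority boosted to %d\n", lo.cur_prio);   /* 9 */
    pip_release(&r, &lo);
    printf("holder priority restored to %d\n", lo.cur_prio);  /* 2 */
    return 0;
}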
Working of PIP

[Figure, four instants: (1) Ti runs at Pri(Ti) = 5 and acquires CR;
(2) Tj (Pri(Tj) = 10) arrives and blocks on CR; (3) Ti inherits priority 10
while holding CR; (4) Ti releases CR, returns to priority 5, and Tj
proceeds with CR.]


88
Shortcomings of the basic priority
inheritance scheme

 PIP suffers from two important drawbacks:
• Deadlock
• Chain blocking
 That is, PIP is susceptible to chain blocking
• It also does nothing to prevent deadlock

89
Deadlock

 Consider two tasks T1 and T2 accessing critical
resources CR1 and CR2
 Assume:
• T1 has a higher priority than T2
• T2 starts first
 T1: Lock CR1, Lock CR2, Unlock CR2, Unlock CR1
 T2: Lock CR2, Lock CR1, Unlock CR1, Unlock CR2
(A trace of the resulting deadlock is sketched below.)
90
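The sketch below reproduces the fatal interleaving with desktop pthreads
for illustration (not an RTOS API): T2 locks CR2 first, T1 then locks CR1,
and each blocks waiting for the other's resource. The sleep() calls merely
force the bad interleaving that fixed priorities would produce on an RTOS.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t cr1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t cr2 = PTHREAD_MUTEX_INITIALIZER;

static void *t2_body(void *arg) {      /* lower priority task, starts first */
    pthread_mutex_lock(&cr2);          /* t=0: T2 holds CR2                 */
    sleep(2);
    pthread_mutex_lock(&cr1);          /* t=2: blocks, T1 holds CR1         */
    pthread_mutex_unlock(&cr1);
    pthread_mutex_unlock(&cr2);
    return arg;
}

static void *t1_body(void *arg) {      /* higher priority task */
    sleep(1);                          /* arrives after T2 locked CR2       */
    pthread_mutex_lock(&cr1);          /* t=1: T1 holds CR1                 */
    sleep(2);
    pthread_mutex_lock(&cr2);          /* t=3: blocks on CR2 -> deadlock    */
    pthread_mutex_unlock(&cr2);
    pthread_mutex_unlock(&cr1);
    return arg;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&b, NULL, t2_body, NULL);
    pthread_create(&a, NULL, t1_body, NULL);
    pthread_join(a, NULL);             /* never returns once deadlocked */
    pthread_join(b, NULL);
    printf("finished (no deadlock this run)\n");
    return 0;
}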
Chain blocking

A task needing to use a set of resources is said to


undergo chain blocking
• If each time it needs a resource, it
undergoes priority inversion
Example
• Assume a high priority task T1 needs
several resources
91
Chain blocking

[Figure: T2 executes holding CR1 and CR2 while T1 waits for CR1 (T1 blocked);
T1 then runs with CR1 until it needs CR2, blocking again while T2 holds CR2.
T1 suffers one priority inversion per resource, i.e., chain blocking.]
92
Highest Locker Protocol (HLP)

 Addresses the shortcomings of PIP
• However, it introduces a new complication.
 During the design of the system
• A ceiling priority value is assigned to
every resource
• The ceiling priority is equal to the highest
priority of all tasks needing that resource.

93
Ceiling Priority of a resource

 When a task acquires a resource
• Its priority value is raised to the ceiling
priority of that resource.

[Figure: resource R used by tasks T1, T2, T3; Ceil(R) = max-prio(T1, T2, T3).]
94
Highest Locker Protocol (HLP)

As soon as a task acquires a resource R:


• Its priority is raised to Ceil(R)
• Helps eliminate the problem of
 Unbounded priority inversion
 Deadlock, and
 Chain blocking

However, introduces inheritance blocking

95
Ceiling Priority of a resource

[Figure: Ceil(R) = max-prio(T1, T2, T3) = 2, with task priorities T1 = 5,
T2 = 2, T3 = 8 (a smaller number denotes a higher priority). Without the
ceiling rule, T1 holds R at its own priority 5.]
96
Ceiling Priority of a resource

[Figure: Ceil(R) = max-prio(T1, T2, T3) = 2. With HLP, T1 holds R at the
ceiling priority 2, so the intermediate priority tasks T4 (priority 4) and
T5 (priority 3) cannot preempt it before T2 gets its turn.]
97
Highest Locker Protocol (HLP)

 Theorem
• When HLP is used for resource sharing
 Once a task gets any one of the resources required
by it, it is not blocked any further
 Corollary 1:
• Under HLP, before a task is granted one
resource
 All the resources required by it must be free
 Corollary 2:
• A task cannot undergo chain blocking
under HLP
98
Shortcomings of HLP

 Inheritance blocking occurs
• When the priority of a lower priority task
holding a resource is raised to a high value
• Intermediate priority tasks not needing the
resource
 Cannot execute and undergo priority inversion
• This may lead to several intermediate
priority tasks missing their deadlines.
99
Priority Ceiling Protocol (PCP)

 Like HLP, each resource is assigned a ceiling
priority.
 An operating system variable denoting the highest
ceiling among all currently locked semaphores is
maintained.
• We will call it the Current System Ceiling
(CSC)

100
Task Dependency In Real Time Systems

 In practical situations:
• Some tasks have dependencies among
each other.
 Existing scheduling techniques need to be
suitably modified.

101
Table-Driven Algorithm

 Arrange the tasks in increasing order of their deadlines
 Do
 Scan the list from the rightmost end and
 Find a yet-to-be-scheduled task all of whose
successors have been scheduled
• Schedule it as late as possible
 While there are tasks yet to be scheduled
 Move all the tasks forward (earlier) as much as
possible (a sketch of this procedure follows below).

102
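As a concrete illustration, here is a minimal C sketch of the procedure
above, applied to the example that follows. The DAG (T1 → T2, T1 → T3,
T2 → T5, T3 → T4) is read from the figure two slides below, and the array
names are my own; this is a sketch under those assumptions, not the
textbook's implementation:

#include <stdio.h>

#define N 5

/* Example task set Ti = (ei, di). */
static const int E[N] = {2, 5, 6, 10, 7};
static const int D[N] = {8, 25, 24, 50, 48};
static const int prec[N][N] = {    /* prec[i][j]: Ti precedes Tj */
    {0, 1, 1, 0, 0},               /* T1 -> T2, T1 -> T3         */
    {0, 0, 0, 0, 1},               /* T2 -> T5                   */
    {0, 0, 0, 1, 0},               /* T3 -> T4                   */
    {0, 0, 0, 0, 0},
    {0, 0, 0, 0, 0},
};

int main(void) {
    int start[N], finish[N], done[N] = {0};

    /* Backward pass: repeatedly take the unscheduled task with the latest
       deadline whose successors are all placed; place it as late as possible. */
    for (int k = 0; k < N; k++) {
        int pick = -1;
        for (int i = 0; i < N; i++) {
            int ok = !done[i];
            for (int j = 0; ok && j < N; j++)
                if (prec[i][j] && !done[j]) ok = 0;
            if (ok && (pick < 0 || D[i] > D[pick])) pick = i;
        }
        int f = D[pick];                       /* latest allowed finish */
        for (int j = 0; j < N; j++)
            if (prec[pick][j] && start[j] < f) f = start[j];
        for (int moved = 1; moved; ) {         /* avoid placed intervals */
            moved = 0;
            for (int j = 0; j < N; j++)
                if (done[j] && f > start[j] && f - E[pick] < finish[j]) {
                    f = start[j];
                    moved = 1;
                }
        }
        finish[pick] = f;
        start[pick] = f - E[pick];
        done[pick] = 1;
    }

    /* Forward pass: move each task as early as possible, keeping the order
       of the backward schedule and the precedence constraints. */
    int ns[N], nf[N], used[N] = {0}, free_at = 0;
    for (int k = 0; k < N; k++) {
        int pick = -1;
        for (int i = 0; i < N; i++)
            if (!used[i] && (pick < 0 || start[i] < start[pick])) pick = i;
        int s = free_at;
        for (int j = 0; j < N; j++)
            if (prec[j][pick] && used[j] && nf[j] > s) s = nf[j];
        ns[pick] = s;
        nf[pick] = s + E[pick];
        free_at = nf[pick];
        used[pick] = 1;
    }

    for (int i = 0; i < N; i++)                /* matches the slides below */
        printf("T%d: late [%2d,%2d]  final [%2d,%2d]\n",
               i + 1, start[i], finish[i], ns[i], nf[i]);
    return 0;
}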
Example

 Determine a feasible schedule for the real-time task
set {T1, T2, …, T5}, where Ti = (ei, di):

• T1 = (2, 8)
• T2 = (5, 25)
• T3 = (6, 24)
• T4 = (10, 50)
• T5 = (7, 48)

103
Precedence Relationship

[Precedence DAG: T1 precedes T2 and T3; T2 precedes T5; T3 precedes T4.]

104
Solution

 Step 1: Arrange the tasks in ascending order
of their deadlines:

T1, T3, T2, T5, T4

 Step 2: Schedule the tasks as late as possible
without violating precedence constraints:

T1: [6, 8]  T3: [14, 20]  T2: [20, 25]  T5: [33, 40]  T4: [40, 50]

105
Solution

 Step 3: Move the tasks as early as possible
without altering the order of the schedule:

T1: [0, 2]  T3: [2, 8]  T2: [8, 13]  T5: [13, 20]  T4: [20, 30]

106
Clocks in a Distributed System

 In a distributed system, there is one clock at
each node

[Figure: four nodes (N1, …), each with its own local clock.]
Use of clocks in DS

 Determining timeouts
 Time stamping
Why time stamping is necessary

 Gives the receiver an idea about the age of a
message
 Also used for message ordering
Clocks in a Distributed System

 Clocks tend to diverge
• It is unlikely that two clocks would run
at the same speed
 This lack of synchrony is called clock
skew
• The skew increases with time
Clock Synchronization

 Goal: make all clocks of a system agree on the
same time value.
• The agreed time may be different from the
world time standard (UTC)
 A UTC signal can be obtained through
• GPS (Global Positioning System)
Types of Clock Synchronization

 Internal Clock Synchronization


• All clocks are synchronized with respect
to a clock internal to the system.
 External Clock Synchronization
• The clocks are synchronized with
respect to a clock external to the
system.
External Clock Synchronization

 In a distributed system, there is one clock at each node

[Figure: a Master Clock provides the external reference; the node clocks
N1, … synchronize to it.]
Advantage & Disadvantage

 Advantage
• Easy to implement
• Zero communication overhead
 Disadvantage
• Expensive to have a GPS receiver at
each node
Internal Clock Synchronization

 Centralized synchronization
• One node broadcasts its time; the other
clocks set their time accordingly
 Distributed synchronization
• The average time value of the different
clocks is computed and used.
Centralized Clock Synchronization

 One clock is designated as the master clock (also called
the time server)
• The other clocks (called slaves) are kept in sync
with the master

[Figure: the Master Clock synchronizes slave clocks SC1, SC2, …, SCn-1, SCn.]


Centralized Clock Synchronization

 The server broadcasts its time once every ΔT time
interval.
 Choosing the right value for ΔT is an important issue
• If ΔT is too small: high communication
overhead but good synchronization
• If ΔT is too large: clocks may drift too
far apart.
Maximum Drift

 Assume that the drift rate of each clock is
restricted to some constant ρ.
• Specified by the clock manufacturer
 Suppose clocks are resynchronized after every ΔT
interval
Maximum Drift

 The drift of any slave clock from its master
would be bounded by ρΔT.
 The maximum drift between any two clocks is
therefore limited to 2ρΔT.
Advantage & Disadvantage

 Advantage
• Not very hard to implement
• Moderate communication overhead
 Disadvantage
• Susceptible to single point failure
• Synchronization fails if the master clock
fails
Example

 Synchronize six distributed clocks using the centralized
synchronization scheme.
 Assume that ρ = 5 × 10^-5
 The maximum drift between any two clocks is to be
restricted to 1 ms.
Solution

 From 2ρΔT = 10^-3 s:
   ΔT = 10^-3 / (2 × 5 × 10^-5) = 10 sec
 Messages transmitted by the master per
resynchronization interval: 6 − 1 = 5
 Number of resynchronization intervals per hour:
   (60 × 60) / 10 = 360
 Number of messages transmitted per hour:
   360 × 5 = 1800
Distributed Clock Synchronization

 No master clock
 All clocks periodically exchange their clock
readings
• Each clock computes the average time and
sets its clock accordingly.

[Figure: clocks C1–C4 exchange readings with one another.]
Bad Clock

 In a distributed system
• Some clocks can be bad
 Bad clocks exhibit large drifts
• Readings whose drift exceeds the manufacturer-
specified tolerance can be discarded when the
average time is computed.
Real Time Applications
• Digital control, optimal control, command and control, signal
processing, tracking, real-time databases, and multimedia
• Reading Assignment: Select two real-time applications and
discuss their working principles in detail.
RTOS support for semaphores, queues, and events
 Semaphore: a signal between tasks/interrupts that does not carry any additional
data.
 The meaning of the signal is implied by the semaphore object, so you need one
semaphore for each purpose.
 The most common type of semaphore is a binary semaphore, that triggers
activation of a task.
 The typical design pattern is that a task contains a main loop with an RTOS call
to “take” the semaphore.
 If the semaphore is not yet signaled, the RTOS blocks the task from executing
further until some task or interrupt routine “gives” the semaphore, i.e., signals it.
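A minimal FreeRTOS-style sketch of this pattern; the task, ISR, and handle
names are hypothetical:

#include "FreeRTOS.h"
#include "semphr.h"
#include "task.h"

static SemaphoreHandle_t xEventSem;

/* Interrupt routine "gives" the semaphore to signal the event. */
void vSensorISR(void) {
    BaseType_t xWoken = pdFALSE;
    xSemaphoreGiveFromISR(xEventSem, &xWoken);
    portYIELD_FROM_ISR(xWoken);     /* request a switch if a task unblocked */
}

/* Main loop "takes" the semaphore; the RTOS blocks the task until signaled. */
void vHandlerTask(void *pvParameters) {
    for (;;) {
        if (xSemaphoreTake(xEventSem, portMAX_DELAY) == pdTRUE) {
            /* ...process one occurrence of the event... */
        }
    }
}

void vSetup(void) {
    xEventSem = xSemaphoreCreateBinary();
    xTaskCreate(vHandlerTask, "handler", configMINIMAL_STACK_SIZE,
                NULL, 2, NULL);
}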
RTOS support for semaphores, queues, and events Cont …
 Mutex: a binary semaphore for mutual exclusion between tasks, to protect a
critical section. Internally it works much the same way as a binary semaphore,
but it is used in a different way.
 It is “taken” before the critical section and “given” right after, i.e., in the same
task.
 A mutex typically stores the current “owner” task and may boost its scheduling
priority to avoid the priority inversion problem discussed earlier (see the
sketch below).
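A FreeRTOS-style sketch of the take/give pattern around a critical section;
the names and the shared counter are hypothetical:

#include "FreeRTOS.h"
#include "semphr.h"

static SemaphoreHandle_t xMutex;
static int iSharedCounter;

void vMutexInit(void) {
    xMutex = xSemaphoreCreateMutex();
}

void vIncrementShared(void) {
    /* "Take" before the critical section, "give" right after, in the
       same task. FreeRTOS mutexes apply priority inheritance to the owner. */
    if (xSemaphoreTake(xMutex, pdMS_TO_TICKS(100)) == pdTRUE) {
        iSharedCounter++;           /* critical section */
        xSemaphoreGive(xMutex);
    }
    /* else: timed out; handle the failure */
}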
RTOS support for semaphores, queues, and events Cont…
 Counting Semaphore: a semaphore that contains a counter with an upper bound.
This allows for keeping track of limited shared resources.
 Whenever a resource is to be allocated, an attempt to “take” the semaphore is
made and the counter is incremented if below the specified upper bound,
otherwise the attempted allocation blocks the task (possibly with a timeout) or
fails directly, depending on the parameters to the RTOS semaphore service.
 When the resource is to be released, a “give” operation is made which
decrements the counter.
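A FreeRTOS-style sketch for a hypothetical pool of three DMA channels. Note
that FreeRTOS's counting semaphore counts available resources, so "take"
decrements on allocation and "give" increments on release, the mirror image
of the allocated-count convention described above:

#include "FreeRTOS.h"
#include "semphr.h"

static SemaphoreHandle_t xDmaPool;

void vPoolInit(void) {
    /* maximum count 3, initially 3 channels available */
    xDmaPool = xSemaphoreCreateCounting(3, 3);
}

int iAllocChannel(void) {
    /* Blocks up to 10 ms if all channels are in use; returns 0 on timeout. */
    return xSemaphoreTake(xDmaPool, pdMS_TO_TICKS(10)) == pdTRUE;
}

void vFreeChannel(void) {
    xSemaphoreGive(xDmaPool);       /* one more channel available */
}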
RTOS support for semaphores, queues, and events Cont…

Queue: a FIFO buffer that allows for passing arbitrary messages to tasks.
Typically, each queue has just one specific receiver task and one or several
sender tasks.
Queues are often used as input for server-style tasks that provide multiple
services/commands.
A common design pattern in that case is to have a common data structure for
such messages, consisting of a command code and parameters, and use a
switch statement in the receiver task to handle the different message codes.
If using a union structure for the parameters, or even just a void pointer,
the parameters can be defined separately for each command code. (A sketch
of this pattern follows.)
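A FreeRTOS-style sketch of the command-code/union pattern just described;
the message layout and names are hypothetical:

#include "FreeRTOS.h"
#include "queue.h"

typedef enum { CMD_SET_SPEED, CMD_STOP } Command;

typedef struct {
    Command eCode;                 /* command code                    */
    union {
        int iSpeed;                /* parameters for CMD_SET_SPEED    */
    } params;                      /* defined per command code        */
} Message;

static QueueHandle_t xCmdQueue;

void vQueueInit(void) {
    xCmdQueue = xQueueCreate(8, sizeof(Message));
}

/* Server-style receiver task: one queue, multiple services/commands. */
void vServerTask(void *pvParameters) {
    Message xMsg;
    for (;;) {
        if (xQueueReceive(xCmdQueue, &xMsg, portMAX_DELAY) == pdTRUE) {
            switch (xMsg.eCode) {
            case CMD_SET_SPEED:
                /* ...apply xMsg.params.iSpeed... */
                break;
            case CMD_STOP:
                /* ...stop the device... */
                break;
            }
        }
    }
}

/* A sender posts a command, blocking up to 10 ticks if the queue is full. */
void vRequestSpeed(int iSpeed) {
    Message xMsg = { CMD_SET_SPEED, { .iSpeed = iSpeed } };
    xQueueSend(xCmdQueue, &xMsg, 10);
}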
RTOS support for semaphores, queues, and events Cont…
RTOS Features Related to Semaphores, Queues, and Events:
Task scheduling
Priority handling
Inter-task synchronization
Timeouts: many RTOSes allow blocking calls such as “take” and “receive”
to specify a timeout
Example RTOS Implementations:
FreeRTOS: A widely used open-source RTOS that provides support for semaphores,
queues, and events. It includes primitives like binary semaphores, mutexes, counting
semaphores, message queues, and event groups.
RTX (Keil): A real-time kernel that provides support for semaphores, queues, and
events. It has efficient memory management and scheduling features.
CMSIS-RTOS: Part of ARM’s CMSIS, which includes support for basic real-time
operating system functionalities, including semaphores, message queues, and event
flags.
END
