Chapter 3
Definition
• Start time si: the time at which a task begins its execution.
• Finishing time fi: the time at which a task completes its execution.
Task States
[Figure: task state diagram; activation places a task in READY, dispatching moves it to RUNNING, preemption returns it to READY, and termination ends its execution]
Ready Queue
[Figure: task τi in the ready queue, with its parameters Ti, Ci, Di and the instants ri, si, fi, di marked on the time axis]
• Ti inter-arrival time (period); applicable to periodic tasks
• ri request time (arrival time ai)
• si start time
• Ci worst-case execution time (WCET)
• di absolute deadline
• Di relative deadline (relative to the arrival time)
• fi finishing time
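To make the notation concrete, a task and its parameters can be held in a plain C struct. This is only an illustrative sketch; the field names simply mirror the symbols above and do not correspond to any particular RTOS's task control block.

```c
#include <stdint.h>

/* One real-time task instance, using the notation above. */
typedef struct {
    uint32_t T;   /* Ti: inter-arrival time (period), periodic tasks only */
    uint32_t C;   /* Ci: worst-case execution time (WCET)                 */
    uint32_t D;   /* Di: relative deadline                                */
    uint32_t r;   /* ri: request (arrival) time                           */
    uint32_t s;   /* si: start time                                       */
    uint32_t f;   /* fi: finishing time                                   */
    uint32_t d;   /* di: absolute deadline, d = r + D                     */
} rt_task_t;
```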
Other parameters
[Figure: task τi timeline showing the residual WCET ci(t) and the slack at time t]
• Lateness: Li = fi - di
• Residual WCET ci(t): the execution time still needed at time t; at arrival, ci(ri) = Ci
• Laxity (or slack): di - t - ci(t)
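These derived quantities translate directly into code. A minimal sketch, assuming times are kept as unsigned tick counts and the residual WCET ci(t) is passed in as c_rem:

```c
#include <stdint.h>

/* Lateness Li = fi - di (positive means the deadline was missed). */
int32_t lateness(uint32_t f, uint32_t d)
{
    return (int32_t)(f - d);
}

/* Laxity (slack) at time t: di - t - ci(t). */
int32_t laxity(uint32_t d, uint32_t t, uint32_t c_rem)
{
    return (int32_t)(d - t - c_rem);
}
```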
Other parameters
• Start-time jitter: the variation in the start time si across the different instances of task τi.
Real-time tasks
Periodic
• A periodic task repeats after a fixed time interval (its period).
Sporadic
• A sporadic task recurs at random instants, with a minimum separation between two consecutive instances.
Aperiodic
• Same as a sporadic task, except that the minimum separation between two instances can be 0.
Activation modes
[Figure: event-triggered activations at instants e1, e2, e3]
Periodic task model
• τi = (Ci, Ti, Di)
• First release: ri,1 = Φi (the phase); successive releases: ri,k+1 = ri,k + Ti, where Ti is the inter-arrival time.
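The recurrence ri,k+1 = ri,k + Ti gives every release time in closed form. A one-line helper, assuming the phase (first release) is known:

```c
#include <stdint.h>

/* k-th release of a periodic task (k >= 1): r_{i,k} = phase + (k - 1) * T. */
uint32_t release_time(uint32_t phase, uint32_t T, uint32_t k)
{
    return phase + (k - 1u) * T;
}
```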
Types of Constraints
Timing Constraints
• Activation, Completion, Jitter
Precedence Constraints
• They impose an ordering on the execution of tasks.
Resource Constraints
• They enforce synchronization in the access to mutually exclusive resources.
Timing Constraints
• Implicit constraints
Do not appear in the system specification but must be respected to meet the requirements.
• Example
Avoid obstacles while driving at speed V.
The speed has to be carefully calculated so that all obstacles are avoided.
Precedence Constraints
[Figure: precedence graph over tasks τ1-τ5 illustrating the predecessor and immediate-predecessor relations, e.g. τ1 is a predecessor of τ4 and τ4 is an immediate predecessor of τ5]
Resource Constraints
Scheduling Tasks
General Definition
Scheduling Terminology
Valid Schedule:
• At most one task is assigned to a
processor at a time.
• No task is scheduled before it is ready.
• Precedence and resource constraints of all tasks are satisfied.
Scheduling Terminology
Feasible Schedule:
• A feasible schedule is a valid schedule in which all tasks also meet their timing constraints.
Optimal Scheduler:
• An optimal scheduler can feasibly schedule any task set that can be feasibly scheduled by any other scheduler.
The General Scheduling Problem
[Figure: a scheduling algorithm takes the processors P, the resources R, and the task set as input and produces a feasible schedule σ]
Complexity
Scheduling algorithm taxonomy
Scheduling Tasks
Deferred Preemptive System
[Figure: Gantt charts of tasks τ1 and τ2 with non-preemptive regions R1 and R2 over the interval 0-28]
Static vs. Dynamic
Tasks (Rep.)
Scheduling Point
Task Scheduling On Uni-processor
Clock-driven scheduling
Popular Examples
• Table-driven scheduler
• Cyclic scheduler
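As a rough sketch of the table-driven idea, the code below replays a schedule table computed offline, one entry per (start offset, task) pair within a major cycle. The tasks task_a, task_b and the timer call wait_until_offset are hypothetical placeholders.

```c
#include <stddef.h>
#include <stdint.h>

typedef void (*task_fn)(void);

typedef struct {
    uint32_t offset;   /* start time within the major cycle, in ticks */
    task_fn  run;      /* task body dispatched at that offset         */
} sched_entry_t;

extern void task_a(void);                 /* hypothetical application tasks */
extern void task_b(void);
extern void wait_until_offset(uint32_t);  /* hypothetical timer wait        */

/* Schedule table prepared offline by the designer (one major cycle). */
static const sched_entry_t table[] = { {0u, task_a}, {10u, task_b}, {20u, task_a} };

void table_driven_loop(void)
{
    for (;;) {                                        /* repeat every major cycle    */
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
            wait_until_offset(table[i].offset);       /* idle until the entry's time */
            table[i].run();                           /* dispatch the task           */
        }
    }
}
```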
Table-driven scheduling
Disadvantage
Cyclic Schedulers
Major Cycle
Minor Cycle
Selecting an appropriate frame size (F)
Event-Driven Scheduling
Scheduling points:
• Defined by task completions and event arrivals.
Preemptive scheduler:
• On arrival of a higher-priority task, the running task may be preempted.
These are greedy schedulers:
• They never keep the processor idle if a task is ready.
Event-Driven Static Schedulers
Event-Driven Dynamic Schedulers
Earliest Deadline First (EDF)
Algorithm
• Each time a new ready task arrives:
• It is inserted into a queue of ready tasks, sorted by their absolute deadlines.
• If the newly arrived task ends up at the head of the queue, the currently executing task is preempted.
Earliest Deadline First (EDF)
Scheduling is complex
• Tasks have to be sorted according to their deadlines at each scheduling instant.
• The sorted list is maintained as a heap.
• Each update requires O(log n) time.
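A minimal sketch of the deadline-ordered ready queue described above, kept as an array-based binary min-heap so that each insertion costs O(log n). The task identifiers and the fixed queue size are illustrative assumptions.

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_READY 32

typedef struct {
    uint32_t d;        /* absolute deadline             */
    int      task_id;  /* illustrative task identifier  */
} ready_item_t;

static ready_item_t heap[MAX_READY];
static size_t heap_len = 0;

/* Insert a newly released task in O(log n); heap[0] always holds the earliest deadline. */
void edf_push(ready_item_t it)
{
    if (heap_len >= MAX_READY)
        return;                                          /* ready queue full */
    size_t i = heap_len++;
    heap[i] = it;
    while (i > 0 && heap[(i - 1) / 2].d > heap[i].d) {   /* sift up by deadline */
        ready_item_t tmp = heap[i];
        heap[i] = heap[(i - 1) / 2];
        heap[(i - 1) / 2] = tmp;
        i = (i - 1) / 2;
    }
}

/* Preempt the running task if a ready task has an earlier absolute deadline. */
int edf_should_preempt(uint32_t running_deadline)
{
    return heap_len > 0 && heap[0].d < running_deadline;
}
```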
Example
[Figure: EDF schedule of tasks T1, T2, T3 over the interval 0-24]
Accumulated Utilization
• Accumulated utilization: U = Σ(i=1..n) ci / pi
EDF Schedulability Check
• A set of periodic tasks is schedulable under EDF if and only if U = Σ(i=1..n) ei / pi ≤ 1.
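A small helper that computes the accumulated utilization and applies the check above. Arrays of WCETs and periods are assumed, and the U ≤ 1 test applies to independent, preemptive periodic tasks whose deadlines equal their periods.

```c
#include <stddef.h>
#include <stdint.h>

/* Accumulated utilization U = sum over i of ci / pi. */
double accumulated_utilization(const uint32_t c[], const uint32_t p[], size_t n)
{
    double u = 0.0;
    for (size_t i = 0; i < n; i++)
        u += (double)c[i] / (double)p[i];
    return u;
}

/* EDF schedulability check: the task set is feasible when U <= 1. */
int edf_schedulable(const uint32_t c[], const uint32_t p[], size_t n)
{
    return accumulated_utilization(c, p, n) <= 1.0;
}
```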
EDF is a dynamic algorithm
Implementation of EDF
EDF Properties
Minimum Laxity First (MLF)
Example
[Figure: MLF schedule of tasks T1, T2, T3 over the interval 0-24, with the laxities recomputed at each scheduling point, e.g. l(T1) = 33 - 4 - 6 = 23, l(T2) = 28 - 4 - 3 = 21, l(T3) = 29 - 5 - 10 = 14]
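A minimal sketch of the MLF rule used in the example: at each scheduling point the laxity l = d - t - c_rem is recomputed for every ready task and the task with the smallest laxity is chosen. The array layout and names are assumptions.

```c
#include <stddef.h>
#include <stdint.h>

/* Index of the ready task with minimum laxity l = d[i] - now - c_rem[i]. */
size_t mlf_pick(const uint32_t d[], const uint32_t c_rem[], size_t n, uint32_t now)
{
    size_t best = 0;
    for (size_t i = 1; i < n; i++) {
        int32_t li = (int32_t)(d[i] - now - c_rem[i]);
        int32_t lb = (int32_t)(d[best] - now - c_rem[best]);
        if (li < lb)
            best = i;           /* smaller laxity wins */
    }
    return best;
}
```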
Resource Sharing for Real-Time Tasks
Introduction
Unbounded Priority Inversion
[Figure: timeline of tasks T1-T6 locking and unlocking the resource RC, illustrating how intermediate-priority tasks can prolong the blocking of a high-priority task for an unbounded time]
Solution for Simple Priority Inversion
Priority Inheritance Protocols
[Figure: tasks Ti and Tj executing inside and outside critical regions (CR) under priority inheritance]
Deadlock
[Figure: T1 and T2 each hold one of the critical regions CR1 and CR2 and request the other, producing a deadlock]
Ceiling Priority of a resource
• Ceil(R) = max-prio(T1, T2, T3): the highest priority among the tasks T1, T2, T3 that use resource R.
Highest Locker Protocol (HLP)
Ceiling Priority of a resource (example)
[Figure: tasks T1, T2, T3 with priorities 5, 2, 8 use resource R, so Ceil(R) = max-prio(T1, T2, T3) = 2; a task holding R runs at priority 2]
Highest Locker Protocol (HLP)
Theorem
• When HLP is used for resource sharing, once a task gets any one of the resources it requires, it is not blocked any further.
Corollary 1:
• Under HLP, before a task is granted one resource, all the resources required by it must be free.
Corollary 2:
• A task cannot undergo chain blocking under HLP.
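A hedged sketch of the HLP rule: as soon as a task acquires a resource R it runs at Ceil(R), and its previous priority is restored on release. The kernel hooks get_task_priority and set_task_priority are hypothetical, and whether a numerically smaller or larger value means "higher priority" is left to the kernel's convention.

```c
typedef struct {
    int ceil_prio;    /* Ceil(R): highest priority among all tasks that use R */
    int saved_prio;   /* priority of the current holder before the boost      */
} hlp_resource_t;

extern int  get_task_priority(int task_id);         /* hypothetical kernel hooks */
extern void set_task_priority(int task_id, int p);

void hlp_lock(hlp_resource_t *r, int task_id)
{
    r->saved_prio = get_task_priority(task_id);
    set_task_priority(task_id, r->ceil_prio);   /* run at the ceiling while holding R */
    /* ... acquire the underlying lock on R ... */
}

void hlp_unlock(hlp_resource_t *r, int task_id)
{
    /* ... release the underlying lock on R ... */
    set_task_priority(task_id, r->saved_prio);  /* restore the original priority */
}
```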
Shortcomings of HLP
Task Dependency in Real-Time Systems
Table-Driven Algorithm
Example
[Figure: task precedence graph over T1-T5]
Solution
[Figure: table-driven schedules executing the tasks in the order T1, T3, T2, T5, T4 at precomputed start times]
Clocks in Distributed Systems
[Figure: nodes of a distributed system, each with its own local clock]
Use of clocks in DS
• Determining timeouts
• Time stamping
Why time stamping is necessary
[Figure: nodes synchronizing their clocks to an external reference (master) clock]
Advantage & Disadvantage
Advantage
• Easy to implement
• Zero communication overhead
Disadvantage
• Expensive to have a GPS receiver at
each node
Internal Clock Synchronization
Centralized synchronization
• A node broadcasts its time; the other clocks set their time to it.
Distributed synchronization
• The average of the different clock values is computed and used.
Centralized Clock Synchronization
[Figure: a master clock node broadcasts its time to the other nodes]
Advantage
• Not very hard to implement
• Moderate communication overhead
Disadvantage
• Susceptible to a single point of failure: synchronization fails if the master clock fails.
Distributed Clock Synchronization
• No master clock.
• All clocks periodically exchange their clock readings.
• Each clock computes the average time and sets its clock accordingly.
[Figure: clocks C1-C4 exchanging their readings]
Bad Clocks
In a distributed system
• Some clocks can be bad.
Bad clocks exhibit large drifts
• Readings whose drift exceeds the manufacturer-specified tolerance can be removed from the averaging, as sketched below.
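A minimal sketch of the averaging step with bad-clock rejection: readings that differ from the local clock by more than a tolerance are ignored before the average is taken. All names and the tolerance handling are illustrative assumptions.

```c
#include <stddef.h>
#include <stdint.h>

/* Average the exchanged readings, discarding those that deviate from the
 * local reading by more than `tolerance` (likely bad clocks with large drift). */
int64_t averaged_clock(int64_t local, const int64_t readings[], size_t n, int64_t tolerance)
{
    int64_t sum  = local;   /* include the local clock itself */
    int64_t kept = 1;
    for (size_t i = 0; i < n; i++) {
        int64_t diff = readings[i] - local;
        if (diff < 0)
            diff = -diff;
        if (diff <= tolerance) {
            sum += readings[i];
            kept++;
        }
    }
    return sum / kept;      /* value the local clock is set to */
}
```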
Real-Time Applications
• Digital control, optimal control, command and control, signal processing, tracking, real-time databases, and multimedia.
• Reading Assignment: Select two real-time applications and discuss their working principles in detail.
RTOS support for semaphores, queues, and events
Semaphore: a signal between tasks/interrupts that does not carry any additional
data.
The meaning of the signal is implied by the semaphore object, so you need one
semaphore for each purpose.
The most common type of semaphore is a binary semaphore, which triggers the
activation of a task.
The typical design pattern is that a task contains a main loop with an RTOS call
to “take” the semaphore.
If the semaphore is not yet signaled, the RTOS blocks the task from executing
further until some task or interrupt routine “gives” the semaphore, i.e., signals it.
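A minimal FreeRTOS-style sketch of this pattern; the task, ISR, and handle names are hypothetical, and the actual interrupt entry point depends on the port. A task blocks on a binary semaphore and an interrupt routine gives it to activate the task.

```c
#include "FreeRTOS.h"
#include "semphr.h"
#include "task.h"

static SemaphoreHandle_t xDataReady;   /* created with xSemaphoreCreateBinary() */

/* Task main loop: "take" the semaphore, blocking until it is signalled. */
void vSensorTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;) {
        if (xSemaphoreTake(xDataReady, portMAX_DELAY) == pdTRUE) {
            /* ... process the event signalled by the ISR ... */
        }
    }
}

/* Interrupt routine: "give" the semaphore to activate the task. */
void vSensorISR(void)
{
    BaseType_t xWoken = pdFALSE;
    xSemaphoreGiveFromISR(xDataReady, &xWoken);
    portYIELD_FROM_ISR(xWoken);   /* switch immediately if a higher-priority task woke */
}
```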
RTOS support for semaphores, queues, and events (Cont.)
Mutex: a binary semaphore for mutual exclusion between tasks, to protect a
critical section. Internally it works much the same way as a binary semaphore,
but it is used in a different way.
It is “taken” before the critical section and “given” right after, i.e., in the same
task.
A mutex typically stores the current “owner” task and may boost its scheduling
priority to avoid a problem called “priority inversion”, discussed below.
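A short FreeRTOS-style sketch of the take-before, give-after pattern for a critical section; the mutex handle, timeout, and shared resource are illustrative.

```c
#include "FreeRTOS.h"
#include "semphr.h"

static SemaphoreHandle_t xLogMutex;   /* created with xSemaphoreCreateMutex() */

void vWriteLog(const char *msg)
{
    /* Take the mutex before the critical section and give it right after,
     * in the same task; the kernel may boost the holder's priority. */
    if (xSemaphoreTake(xLogMutex, pdMS_TO_TICKS(10)) == pdTRUE) {
        /* ... write msg into the shared log buffer ... */
        (void)msg;
        xSemaphoreGive(xLogMutex);
    }
}
```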
RTOS support for semaphores, queues, and events (Cont.)
Counting Semaphore: a semaphore that contains a counter with an upper bound.
This allows for keeping track of limited shared resources.
Whenever a resource is to be allocated, an attempt to “take” the semaphore is
made and the counter is incremented if below the specified upper bound,
otherwise the attempted allocation blocks the task (possibly with a timeout) or
fails directly, depending on the parameters to the RTOS semaphore service.
When the resource is to be released, a “give” operation is made which
decrements the counter.
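A FreeRTOS-style sketch of the same idea. Note that FreeRTOS's counting semaphore tracks the number of free resources, so its internal counter moves in the opposite direction to the bookkeeping described above, but the take-on-allocate / give-on-release pattern is identical. The buffer-pool helpers are hypothetical.

```c
#include <stddef.h>
#include "FreeRTOS.h"
#include "semphr.h"

#define NUM_BUFFERS 4

/* Created with xSemaphoreCreateCounting(NUM_BUFFERS, NUM_BUFFERS). */
static SemaphoreHandle_t xBufferSem;

extern void *take_buffer_from_pool(void);      /* hypothetical pool helpers */
extern void  return_buffer_to_pool(void *buf);

void *AllocBuffer(void)
{
    /* Blocks until one of the NUM_BUFFERS buffers is free. */
    if (xSemaphoreTake(xBufferSem, portMAX_DELAY) == pdTRUE)
        return take_buffer_from_pool();
    return NULL;
}

void FreeBuffer(void *buf)
{
    return_buffer_to_pool(buf);
    xSemaphoreGive(xBufferSem);   /* mark one more buffer as available */
}
```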
RTOS support for semaphores, queues, and events (Cont.)
Queue: a FIFO buffer that allows for passing arbitrary messages to tasks.
Typically, each queue has just one specific receiver task and one or several
sender tasks.
Queues are often used as input for server-style tasks that provide multiple
services/commands.
A common design pattern in that case is to have a common data structure for
such messages, consisting of a command code and parameters, and to use a
switch statement in the receiver task to handle the different message codes.
If using a union structure for the parameters, or even just a void pointer, the
parameters can be defined separately for each command code.
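A FreeRTOS-style sketch of the server-task pattern just described: a message struct carries a command code plus a union of parameters, and the receiver dispatches on the code with a switch. The queue length, command codes, and handle name are illustrative assumptions.

```c
#include <stdint.h>
#include "FreeRTOS.h"
#include "queue.h"

typedef struct {
    uint8_t cmd;                                /* command code           */
    union { int32_t value; void *ptr; } param;  /* per-command parameters */
} Message_t;

static QueueHandle_t xCmdQueue;   /* created with xQueueCreate(8, sizeof(Message_t)) */

void vServerTask(void *pvParameters)
{
    Message_t msg;
    (void)pvParameters;
    for (;;) {
        if (xQueueReceive(xCmdQueue, &msg, portMAX_DELAY) == pdTRUE) {
            switch (msg.cmd) {                  /* dispatch on the command code */
            case 0: /* ... handle command 0 using msg.param.value ... */ break;
            case 1: /* ... handle command 1 using msg.param.ptr ...   */ break;
            default: break;
            }
        }
    }
}

/* A sender simply fills a Message_t and calls xQueueSend(xCmdQueue, &msg, 0). */
```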
RTOS support for semaphores, queues, and events (Cont.)
RTOS Features Related to Semaphores, Queues, and Events:
• Task scheduling
• Priority handling
• Inter-task synchronization
• Timeouts: many RTOSes allow blocking calls such as take and receive to specify a timeout.
Example RTOS Implementations:
• FreeRTOS: a widely used open-source RTOS that provides support for semaphores, queues, and events. It includes primitives like binary semaphores, mutexes, counting semaphores, message queues, and event groups.
• RTX (Keil): a real-time kernel that provides support for semaphores, queues, and events. It has efficient memory management and scheduling features.
• CMSIS-RTOS: part of ARM's CMSIS, which includes support for basic real-time operating system functionalities, including semaphores, message queues, and event flags.
END