6 Real-Time Scheduling
© Lothar Thiele
Computer Engineering and Networks Laboratory
Where we are …
6-2
Basic Terms and Models
6-3
Basic Terms
Real-time systems
Hard: A real-time task is said to be hard, if missing its deadline may cause
catastrophic consequences on the environment under control. Examples are
sensory data acquisition, detection of critical conditions, actuator servoing.
Soft: A real-time task is called soft, if meeting its deadline is desirable for
performance reasons, but missing its deadline does not cause serious damage to
the environment and does not jeopardize correct system behavior. Examples are
command interpreter of the user interface, displaying messages on the screen.
6-4
Schedule
Given a set of tasks J = {J1, J2, …}:
A schedule is an assignment of tasks to the processor such that each task is
executed until completion.
A schedule can be defined as an integer step function σ: ℝ → ℕ,
where σ(t) denotes the task which is executed at time t. If
σ(t) = 0, then the processor is called idle.
If σ(t) changes its value at some time, then the processor performs a context
switch.
Each interval in which σ(t) is constant is called a time slice.
A preemptive schedule is a schedule in which the running task can be arbitrarily
suspended at any time, to assign the CPU to another task according to a
predefined scheduling policy.
A preemptive schedule is a schedule in which the running task can be arbitrarily
suspended at any time, to assign the CPU to another task according to a
predefined scheduling policy.
6-5
Schedule and Timing
A schedule is said to be feasible, if all tasks can be completed according to a set
of specified constraints.
A set of tasks is said to be schedulable, if there exists at least one algorithm that
can produce a feasible schedule.
Arrival time ai or release time ri is the time at which a task becomes ready for
execution.
Computation time Ci is the time the processor needs to execute the task
without interruption.
Deadline d i is the time at which a task should be completed.
Start time si is the time at which a task starts its execution.
Finishing time f i is the time at which a task finishes its execution.
6-6
Schedule and Timing
6-7
Schedule and Timing
Periodic task τi : infinite sequence of identical activities, called instances or jobs,
that are regularly activated at a constant rate with period Ti . The activation
time of the first instance is called the phase Φi .
[figure: first two instances of a periodic task, with the relative deadline marked]
6-8
Example for Real-Time Model
[figure: timeline (0 … 25) of tasks J1 and J2 with release times r1, r2 and absolute deadlines d1, d2]
Computation times: C1 = 9, C2 = 12
Start times: s1 = 0, s2 = 6
Finishing times: f1 = 18, f2 = 28
Lateness: L1 = -4, L2 = 1
Tardiness: E1 = 0, E2 = 1
Laxity: X1 = 13, X2 = 11
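The metric values of this example can be checked with a few lines of code. A minimal sketch in Python; note that r2 = 4, d1 = 22, d2 = 27 are not given explicitly above but are inferred from the stated lateness and laxity values:

```python
# Timing metrics for the two-task example. The release times and absolute
# deadlines below are partly inferred from the stated lateness/laxity
# (r2, d1, d2 are assumptions consistent with L_i and X_i above).
r = [0, 4]    # release times r1, r2
C = [9, 12]   # computation times C1, C2
f = [18, 28]  # finishing times f1, f2
d = [22, 27]  # absolute deadlines d1, d2

lateness  = [fi - di for fi, di in zip(f, d)]              # L_i = f_i - d_i
tardiness = [max(0, li) for li in lateness]                # E_i = max(0, L_i)
laxity    = [di - ri - ci for di, ri, ci in zip(d, r, C)]  # X_i = d_i - r_i - C_i
```

Running this reproduces L = [-4, 1], E = [0, 1], X = [13, 11] as stated above.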
6-9
Precedence Constraints
Precedence relations between tasks can be described through an acyclic directed
graph G where tasks are represented by nodes and precedence relations by
arrows. G induces a partial order on the task set.
[figure: example precedence graph]
6 - 10
Precedence Constraints
Example for concurrent activation:
Image acquisition: acq1, acq2
Low-level image processing: edge1, edge2
Feature/contour extraction: shape
Pixel disparities: disp
Object size: H
Object recognition: rec
6 - 11
Classification of Scheduling Algorithms
With preemptive algorithms, the running task can be interrupted at any time to
assign the processor to another active task, according to a predefined
scheduling policy.
With a non-preemptive algorithm, a task, once started, is executed by the
processor until completion.
Static algorithms are those in which scheduling decisions are based on fixed
parameters, assigned to tasks before their activation.
Dynamic algorithms are those in which scheduling decisions are based on
dynamic parameters that may change during system execution.
6 - 12
Classification of Scheduling Algorithms
An algorithm is said to be optimal if it minimizes some given cost function defined
over the task set.
An algorithm is said to be heuristic if it tends toward but does not guarantee to
find the optimal schedule.
Acceptance Test: Whenever a task is added to the system, the runtime system
decides whether the whole task set can be scheduled without deadline violations.
6 - 13
Metrics to Compare Schedules
Average response time: t_r = (1/n) Σ_{i=1}^{n} (f_i − r_i)
Total completion time: t_c = max_i f_i − min_i r_i
Weighted sum of response times: t_w = ( Σ_{i=1}^{n} w_i (f_i − r_i) ) / ( Σ_{i=1}^{n} w_i )
Maximum lateness: L_max = max_i (f_i − d_i)
Number of late tasks: N_late = Σ_{i=1}^{n} miss(f_i), where
miss(f_i) = 0 if f_i ≤ d_i, and 1 otherwise
6 - 14
Metrics Example
[figure: timeline of tasks J1 and J2 with release times r1, r2 and absolute deadlines d1, d2]
6 - 15
Metrics and Scheduling Example
In schedule (a), the maximum lateness is minimized, but all tasks miss their deadlines.
In schedule (b), the maximum lateness is larger, but only one task misses its deadline.
6 - 16
Real-Time Scheduling of Aperiodic Tasks
6 - 17
Overview Aperiodic Task Scheduling
Scheduling of aperiodic tasks with real-time constraints:
Table with some known algorithms:
6 - 18
Earliest Deadline Due (EDD)
Jackson’s rule: Given a set of n tasks, processing them in order of non-decreasing
deadlines is optimal with respect to minimizing the maximum lateness.
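Jackson's rule translates directly into code. A minimal sketch in Python (task names and values are hypothetical; all tasks are assumed released at time 0, as in the EDD setting):

```python
def edd_schedule(tasks):
    """Jackson's rule: process tasks in order of non-decreasing deadline.

    tasks: list of (name, C, d) tuples with computation time C and
    absolute deadline d. Returns the execution order and the maximum
    lateness L_max = max_i (f_i - d_i) of the resulting schedule.
    """
    order = sorted(tasks, key=lambda t: t[2])  # non-decreasing deadlines
    t, lmax = 0, float("-inf")
    for name, c, d in order:
        t += c                    # each task runs to completion (no preemption)
        lmax = max(lmax, t - d)   # lateness of this task
    return [name for name, _, _ in order], lmax
```

For example, `edd_schedule([("J1", 1, 3), ("J2", 2, 2)])` runs J2 first and yields maximum lateness 0.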
6 - 19
Earliest Deadline Due (EDD)
Example 1:
6 - 20
Earliest Deadline Due (EDD)
Jackson’s rule: Given a set of n tasks. Processing in order of non-decreasing
deadlines is optimal with respect to minimizing the maximum lateness.
Proof concept:
6 - 21
Earliest Deadline Due (EDD)
Example 2:
6 - 22
Earliest Deadline First (EDF)
Horn’s rule: Given a set of n independent tasks with arbitrary arrival times, any
algorithm that at any instant executes a task with the earliest absolute deadline
among the ready tasks is optimal with respect to minimizing the maximum
lateness.
6 - 23
Earliest Deadline First (EDF)
Example:
6 - 24
Earliest Deadline First (EDF)
Horn’s rule: Given a set of n independent tasks with arbitrary arrival times, any
algorithm that at any instant executes the task with the earliest absolute deadline
among the ready tasks is optimal with respect to minimizing the maximum
lateness.
Concept of proof:
For each time interval [t, t+1) it is verified whether the currently running task is
the one with the earliest absolute deadline among the ready tasks. If this is not the
case, the task with the earliest absolute deadline is executed in this interval instead.
This operation cannot increase the maximum lateness.
6 - 25
Earliest Deadline First (EDF)
[figure: interchange argument: the time slice of the currently executing task is exchanged with a slice of the task having the earliest absolute deadline; the situation after the interchange is no worse]
6 - 26
Earliest Deadline First (EDF)
Acceptance test: let c_k(t) denote the remaining worst-case execution time of
task k at time t, with tasks ordered by non-decreasing deadlines.
Worst-case finishing time of task i: f_i = t + Σ_{k=1}^{i} c_k(t)
EDF guarantee condition: ∀ i = 1, …, n : t + Σ_{k=1}^{i} c_k(t) ≤ d_i
Algorithm: EDF_guarantee (J, Jnew)
{ J' = J ∪ {Jnew}; /* ordered by deadline */
t = current_time();
f0 = t;
for (each Ji ∈ J') {
fi = fi-1 + ci(t);
if (fi > di) return(INFEASIBLE);
}
return(FEASIBLE);
}
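The pseudocode above can be sketched in Python; the remaining execution times c_i(t) are passed in as a precomputed list, ordered by non-decreasing deadline:

```python
def edf_guarantee(remaining, deadlines, t):
    """EDF acceptance test. remaining[i] is the remaining worst-case
    execution time c_i(t) of task i, deadlines[i] its absolute deadline
    d_i; both lists must be ordered by non-decreasing deadline."""
    f = t                   # f_0 = t
    for c, d in zip(remaining, deadlines):
        f += c              # worst-case finishing time f_i = f_{i-1} + c_i(t)
        if f > d:
            return False    # INFEASIBLE: some task would miss its deadline
    return True             # FEASIBLE
```

For instance, `edf_guarantee([2, 3], [5, 10], 0)` accepts the set, while `edf_guarantee([3, 3], [2, 10], 0)` rejects it because the first task cannot meet its deadline.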
6 - 27
Earliest Deadline First (EDF*)
The problem of scheduling a set of n tasks with precedence constraints
(concurrent activation) can be solved in polynomial time complexity if tasks are
preemptable.
The EDF* algorithm determines a feasible schedule for tasks with precedence
constraints whenever one exists.
6 - 28
EDF*
6 - 29
EDF*
6 - 30
Earliest Deadline First (EDF*)
Modification of deadlines:
A task must finish its execution within its deadline.
A task must not finish its execution later than the maximum start time of its
successors.
If task b depends on task a (J_a → J_b), the modified deadlines are computed
backwards from the terminal nodes of the precedence graph:
d_a* = min( d_a , min{ d_b* − C_b : J_a → J_b } )
6 - 31
Earliest Deadline First (EDF*)
Modification of release times:
A task must not start its execution earlier than its release time.
A task must not start its execution earlier than the minimum finishing time of its
predecessors.
If task b depends on task a (J_a → J_b):
r_b* = max( r_b , max{ r_a* + C_a : J_a → J_b } )
6 - 32
Earliest Deadline First (EDF*)
Algorithm for modification of release times:
1. For any initial node of the precedence graph set r_i* = r_i.
2. Select a task j such that its release time has not been modified but the release times of
all immediate predecessors i have been modified. If no such task exists, exit.
3. Set r_j* = max( r_j , max{ r_i* + C_i : J_i → J_j } ).
4. Return to step 2.
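The four steps above can be sketched as follows; `preds` is a hypothetical adjacency representation of the (acyclic) precedence graph:

```python
def modify_release_times(r, C, preds):
    """EDF* release-time modification. r[i]: release time, C[i]: computation
    time, preds[i]: set of immediate predecessors of task i. Returns the
    modified release times r* for an acyclic precedence graph."""
    n = len(r)
    rs = {}                                   # rs[i] = r_i* once computed
    while len(rs) < n:
        for j in range(n):
            if j in rs:
                continue
            if all(i in rs for i in preds[j]):        # step 2: all preds done
                cand = [rs[i] + C[i] for i in preds[j]]
                rs[j] = max([r[j]] + cand)            # step 3: r_j* = max(r_j, max(r_i* + C_i))
    return [rs[j] for j in range(n)]
```

For a chain J0 → J1 → J2 with r = [0, 0, 0] and C = [2, 3, 1], the modified release times become [0, 2, 5].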
6 - 33
Earliest Deadline First (EDF*)
Proof concept:
Show that if there exists a feasible schedule for the modified task set under EDF
then the original task set is also schedulable. To this end, show that the original
task set meets the timing constraints as well. This can be done by using r_i* ≥ r_i and
d_i* ≤ d_i; we only made the constraints stricter.
Show that if there exists a schedule for the original task set, then also for the
modified one. We can show the following: If there exists no schedule for the
modified task set, then there is none for the original task set. This can be done by
showing that no feasible schedule was excluded by changing the deadlines and
release times.
In addition, show that the precedence relations in the original task set are not
violated. In particular, show that
a task cannot start before its predecessor and
a task cannot preempt its predecessor.
6 - 34
Real-Time Scheduling of Periodic Tasks
6 - 35
Overview
Table of some known preemptive scheduling algorithms for periodic tasks:
6 - 36
Model of Periodic Tasks
Examples: sensory data acquisition, low-level actuation, control loops, action
planning and system monitoring.
When an application consists of several concurrent periodic tasks with individual
timing constraints, the OS has to guarantee that each periodic instance is
regularly activated at its proper rate and is completed within its deadline.
Definitions:
Γ : denotes a set of periodic tasks
τ_i : denotes a periodic task
τ_{i,j} : denotes the jth instance of task i
r_{i,j}, s_{i,j}, f_{i,j}, d_{i,j} : denote the release time, start time, finishing time, and absolute
deadline of the jth instance of task i
Φ_i : denotes the phase of task i (release time of its first instance)
D_i : denotes the relative deadline of task i
T_i : denotes the period of task i
6 - 37
Model of Periodic Tasks
The following hypotheses are assumed on the tasks:
The instances of a periodic task are regularly activated at a constant rate. The
interval T_i between two consecutive activations is called the period. The release times
satisfy
r_{i,j} = Φ_i + (j − 1) T_i
Often, the relative deadline equals the period, D_i = T_i (implicit deadline), and
therefore
d_{i,j} = Φ_i + j T_i
6 - 38
Model of Periodic Tasks
The following hypotheses are assumed on the tasks (continued):
All periodic tasks are independent; that is, there are no precedence relations and
no resource constraints.
No task can suspend itself, for example on I/O operations.
All tasks are released as soon as they arrive.
All overheads in the OS kernel are assumed to be zero.
Example:
[figure: timeline of a periodic task τ_i showing its phase Φ_i, period T_i, relative deadline D_i, computation time C_i, and the release times r_{i,1}, r_{i,2}, start time s_{i,3}, and finishing time f_{i,3} of its instances]
6 - 39
Rate Monotonic Scheduling (RM)
Assumptions:
Task priorities are assigned to tasks before execution and do not change over time
(static priority assignment).
RM is intrinsically preemptive: the currently executing job is preempted by a job of
a task with higher priority.
Deadlines equal the periods Di Ti .
6 - 40
Periodic Tasks
Example: 2 tasks, deadlines = periods, utilization = 97%
6 - 41
Rate Monotonic Scheduling (RM)
Optimality: RM is optimal among all fixed-priority assignments in the sense that
no other fixed-priority algorithm can schedule a task set that cannot be
scheduled by RM.
The proof is done by considering several cases that may occur, but the main
ideas are as follows:
A critical instant for any task occurs whenever the task is released
simultaneously with all higher-priority tasks. The tasks' schedulability can easily
be checked at their critical instants: if all tasks are feasible at their critical
instants, then the task set is schedulable under any other release pattern.
Show that, given two periodic tasks, if the schedule is feasible by an arbitrary
priority assignment, then it is also feasible by RM.
Extend the result to a set of n periodic tasks.
6 - 42
Proof of Critical Instant
Definition: A critical instant of a task is the time at which the release of a job
will produce the largest response time.
Lemma: For any task, the critical instant occurs if a job is simultaneously
released with all higher-priority jobs.
[figure: τ_1 preempting τ_2; the response time of τ_2 is C_2 + 2C_1]
6 - 43
Proof of Critical Instant
Delay may increase if τ_1 starts earlier:
[figure: shifted releases of τ_1; the response time of τ_2 becomes C_2 + 3C_1]
Repeating the argument for all higher-priority tasks of some task τ_2:
the worst-case response time of a job occurs when it is released simultaneously
with all higher-priority jobs.
6 - 44
Proof of RM Optimality (2 Tasks)
We have two tasks τ_1, τ_2 with periods T_1 < T_2.
Define F = ⌊T_2/T_1⌋: the number of periods of τ_1 fully contained in T_2.
6 - 45
Proof of RM Optimality (2 Tasks)
Case B: Assume RM is used, i.e. prio(τ_1) is highest:
[figure: τ_1 and τ_2 released at the critical instant; F T_1 and T_2 marked on the time axis]
The schedule is feasible if
F C_1 + C_2 + min(T_2 − F T_1, C_1) ≤ T_2 and C_1 ≤ T_1 (B)
Given tasks τ_1 and τ_2 with T_1 < T_2: if the schedule is feasible by an
arbitrary fixed priority assignment, then it is also feasible by RM.
6 - 46
Admittance Test
6 - 51
Rate Monotonic Scheduling (RM)
Schedulability analysis: A set of n periodic tasks is schedulable with RM if
Σ_{i=1}^{n} C_i / T_i ≤ n (2^{1/n} − 1)
The term U = Σ_{i=1}^{n} C_i / T_i denotes the processor utilization.
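This sufficient test is easy to automate. A minimal sketch in Python:

```python
def rm_sufficient_test(C, T):
    """Liu/Layland sufficient (not necessary) test for RM with D_i = T_i:
    schedulable if sum(C_i / T_i) <= n * (2**(1/n) - 1)."""
    n = len(C)
    U = sum(c / t for c, t in zip(C, T))        # processor utilization
    return U <= n * (2 ** (1 / n) - 1)          # bound tends to ln 2 ~ 0.69
```

For example, `rm_sufficient_test([1, 1], [4, 8])` passes (U = 0.375 ≤ 0.828), while a set with U above the bound fails the test even though it may still be schedulable in a particular case.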
6 - 52
Proof of Utilization Bound (2 Tasks)
We have two tasks τ_1, τ_2 with periods T_1 < T_2.
Define F = ⌊T_2/T_1⌋: the number of periods of τ_1 fully contained in T_2.
Proof Concept: Compute upper bound on utilization U such that the task set is
still schedulable:
assign priorities according to RM;
compute upper bound Uup by increasing the computation time C2 to just
meet the deadline of 2; we will determine this limit of C2 using the results
of the RM optimality proof.
minimize upper bound with respect to other task parameters in order to
find the utilization below which the system is definitely schedulable.
6 - 53
Proof of Utilization Bound (2 Tasks)
As before:
[figure: schedule of τ_1 and τ_2 at the critical instant]
Schedulable if F C_1 + C_2 + min(T_2 − F T_1, C_1) ≤ T_2 and C_1 ≤ T_1
Utilization: the upper bound is obtained by increasing C_2 until the
schedulability constraint is tight, C_2 = T_2 − F C_1 − min(T_2 − F T_1, C_1),
which yields
U_up = C_1/T_1 + C_2/T_2 = C_1/T_1 + 1 − ( F C_1 + min(T_2 − F T_1, C_1) ) / T_2
6 - 54
Proof of Utilization Bound (2 Tasks)
6 - 55
Proof of Utilization Bound (2 Tasks)
Minimize the utilization bound w.r.t. C_1:
If C_1 < T_2 − F T_1, then U decreases with increasing C_1.
If T_2 − F T_1 < C_1, then U decreases with decreasing C_1.
Therefore, the minimum U is obtained with C_1 = T_2 − F T_1:
U = C_1/T_1 + 1 − (F + 1) C_1 / T_2 with C_1 = T_2 − F T_1
We now need to minimize w.r.t. G = T_2/T_1, where F = ⌊T_2/T_1⌋ and T_1 < T_2. As F is an
integer, we first suppose that it is independent of G = T_2/T_1. Then we obtain
U = 1 + (G − F)(G − F − 1) / G
6 - 56
Proof of Utilization Bound (2 Tasks)
Minimizing U with respect to G yields G = √(F(F + 1)); for F = 1 this gives
G = √2 and the bound U = 2(√2 − 1) ≈ 0.83.
It can easily be checked that all other integer values of F lead to a larger upper
bound on the utilization.
6 - 57
Deadline Monotonic Scheduling (DM)
Assumptions are as in rate monotonic scheduling, but deadlines may be smaller
than the period, i.e.
C_i ≤ D_i ≤ T_i
Algorithm: Each task is assigned a static priority. Tasks with smaller relative
deadlines have higher priorities. Jobs with higher priority preempt jobs with
lower priority.
Schedulability analysis: a set of periodic tasks is schedulable with DM if
Σ_{i=1}^{n} C_i / D_i ≤ n (2^{1/n} − 1)
This condition is sufficient but not necessary (in general).
6 - 58
Deadline Monotonic Scheduling (DM) - Example
U = 0.874; Σ_{i=1}^{n} C_i / D_i = 1.08 > n (2^{1/n} − 1) = 0.757
[figure: DM schedule of the four tasks τ_1 … τ_4; all deadlines are met even though the sufficient test fails]
6 - 59
Deadline Monotonic Scheduling (DM)
There is also a necessary and sufficient schedulability test which is computationally
more involved. It is based on the following observations:
The worst-case processor demand occurs when all tasks are released
simultaneously; that is, at their critical instants.
For each task i, the sum of its processing time and the interference imposed
by higher priority tasks must be less than or equal to Di .
A measure of the worst-case interference for task i can be computed as the
sum of the processing times of all higher-priority tasks released before some
time t, where tasks are ordered according to m < n ⟺ D_m < D_n :
I_i = Σ_{j=1}^{i−1} ⌈ t / T_j ⌉ C_j
6 - 60
Deadline Monotonic Scheduling (DM)
The longest response time R_i of a job of a periodic task i is computed, at the
critical instant, as the sum of its computation time and the interference due to
preemption by higher-priority tasks:
R_i = C_i + I_i
Hence, the schedulability test needs to compute the smallest R_i that satisfies
R_i = C_i + Σ_{j=1}^{i−1} ⌈ R_i / T_j ⌉ C_j
6 - 61
Deadline Monotonic Scheduling (DM)
The longest response times Ri of the periodic tasks i can be computed iteratively
by the following algorithm:
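A sketch of such an iteration in Python, starting from R_i^(0) = C_i and iterating R_i^(k+1) = C_i + Σ_{j<i} ⌈R_i^(k)/T_j⌉ C_j; tasks are assumed to be ordered by DM priority (smallest relative deadline first):

```python
import math

def dm_response_time(C, T, D):
    """Iterative response-time analysis. C, T, D: computation times,
    periods, relative deadlines, ordered by DM priority. Returns the
    list of worst-case response times R_i, or None as soon as some
    task's iteration exceeds its deadline (not schedulable)."""
    R = []
    for i in range(len(C)):
        r = C[i]                                   # step 0: R_i = C_i
        while True:
            r_new = C[i] + sum(math.ceil(r / T[j]) * C[j] for j in range(i))
            if r_new == r:
                break                              # fixed point reached
            r = r_new
            if r > D[i]:
                return None                        # deadline exceeded
        R.append(r)
    return R
```

Applied to the example on the next slide (C = [1, 1, 2, 1], T = [4, 5, 6, 11], D = [3, 4, 5, 10]), the iteration for task 4 passes through 1, 5, 6, 7, 9 and converges at R_4 = 10 ≤ D_4.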
6 - 62
DM Example
Example:
Task 1: C1 = 1; T1 = 4; D1 = 3
Task 2: C2 = 1; T2 = 5; D2 = 4
Task 3: C3 = 2; T3 = 6; D3 = 5
Task 4: C4 = 1; T4 = 11; D4 = 10
Algorithm for the schedulability test of task 4:
Step 0: R4 = 1
Step 1: R4 = 5
Step 2: R4 = 6
Step 3: R4 = 7
Step 4: R4 = 9
Step 5: R4 = 10
6 - 63
DM Example
U = 0.874; Σ_{i=1}^{n} C_i / D_i = 1.08 > n (2^{1/n} − 1) = 0.757
[figure: DM schedule of the four tasks τ_1 … τ_4; all deadlines are met even though the sufficient test fails]
6 - 64
EDF Scheduling (earliest deadline first)
Assumptions:
dynamic priority assignment
intrinsically preemptive
Optimality: No other algorithm can schedule a set of periodic tasks that cannot
be scheduled by EDF.
The proof is simple and follows that of the aperiodic case.
6 - 65
Periodic Tasks
Example: 2 tasks, deadlines = periods, utilization = 97%
6 - 66
EDF Scheduling
A necessary and sufficient schedulability test for D_i = T_i :
A set of periodic tasks is schedulable with EDF if and only if
U = Σ_{i=1}^{n} C_i / T_i ≤ 1
The term U denotes the average processor utilization.
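The exact test is a one-liner, which makes the contrast with the RM bound easy to see: a set with utilization between roughly 0.83 and 1 may fail the RM sufficient test yet is still feasible under EDF.

```python
def edf_schedulable(C, T):
    """Exact EDF test for implicit deadlines (D_i = T_i):
    schedulable if and only if U = sum(C_i / T_i) <= 1."""
    return sum(c / t for c, t in zip(C, T)) <= 1
```

For instance, `edf_schedulable([1, 1], [2, 2])` accepts a task set with U = 1, which would be rejected by the RM utilization bound.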
6 - 67
EDF Scheduling
If the utilization satisfies U > 1, then there is no valid schedule: the total
demand of computation time in the interval T = T_1 · T_2 · … · T_n is
Σ_{i=1}^{n} (C_i / T_i) · T = U · T > T,
so the demand exceeds the available processor time.
If U ≤ 1, we prove schedulability by contradiction: assume that a deadline is
missed at some time t_2. Then we will show that the utilization was larger than 1.
6 - 68
6 - 69
EDF Scheduling
If the deadline was missed at t_2, then define t_1 as a time before t_2 such that (a) the processor is
continuously busy in [t_1, t_2] and (b) the processor only executes tasks that have their arrival
time AND their deadline in [t_1, t_2].
Why does such a time t_1 exist? We find t_1 by starting at t_2 and going backwards in time,
always ensuring that the processor only executed tasks that have their deadline before or at t_2 :
Because of EDF, the processor is busy shortly before t_2, executing the task whose deadline is
missed at t_2.
Suppose we reach a time such that shortly before it the processor works on a task with deadline
after t_2, or is idle; then we have found t_1: we know that in [t_1, t_2] there is no execution of a task
with deadline after t_2.
But could a task that arrived before t_1 be executing in [t_1, t_2]?
If the processor is idle just before t_1, this is clearly not possible due to EDF (the processor is never
idle while there is a ready task).
If the processor is busy just before t_1, this is not possible either. Due to EDF, the processor always
works on the ready task with the earliest deadline; hence, once it starts a task with deadline after t_2,
all tasks with deadlines at or before t_2 must already be finished.
6 - 70
6 - 71
EDF Scheduling
Within the interval [t_1, t_2], the total computation time demanded by the periodic
tasks is bounded by
C_p(t_1, t_2) = Σ_{i=1}^{n} ⌊ (t_2 − t_1) / T_i ⌋ C_i ≤ Σ_{i=1}^{n} ( (t_2 − t_1) / T_i ) C_i = (t_2 − t_1) U
Since the deadline at t_2 is missed, we must have
t_2 − t_1 < C_p(t_1, t_2) ≤ (t_2 − t_1) U ⟹ U > 1
6 - 72
Periodic Task Scheduling
Example: 2 tasks, deadlines = periods, utilization = 97%
6 - 73
Real-Time Scheduling of Mixed Task Sets
6 - 74
Problem of Mixed Task Sets
In many applications, there are aperiodic as well as periodic tasks.
Periodic tasks: time-driven, execute critical control activities with hard timing
constraints aimed at guaranteeing regular activation rates.
Aperiodic tasks: event-driven, may have hard, soft, or non-real-time requirements
depending on the specific application.
Sporadic tasks: An offline guarantee for event-driven aperiodic tasks with critical
timing constraints can be given only by making proper assumptions on the
environment; that is, by assuming a maximum arrival rate for each critical event.
Aperiodic tasks characterized by a minimum interarrival time are called
sporadic.
6 - 75
Background Scheduling
Background scheduling is a simple solution for RM and EDF:
Processing of aperiodic tasks in the background, i.e. execute if there are no
pending periodic requests.
Periodic tasks are not affected.
The response time of aperiodic tasks may be prohibitively long, and there is no
possibility to assign them a higher priority.
Example:
6 - 76
Background Scheduling
Example (rate monotonic periodic schedule):
6 - 77
Rate-Monotonic Polling Server
Idea: Introduce an artificial periodic task whose purpose is to service aperiodic
requests as soon as possible (therefore, “server”).
Function of polling server (PS)
At regular intervals equal to Ts , a PS task is instantiated. When it has the highest
current priority, it serves any pending aperiodic requests within the limit of its
capacity Cs .
If no aperiodic requests are pending, PS suspends itself until the beginning of the
next period and the time originally allocated for aperiodic service is not preserved
for aperiodic execution.
Its priority (period!) can be chosen to match the response time requirement for
the aperiodic tasks.
Disadvantage: If an aperiodic request arrives just after the server has
suspended itself, it must wait until the beginning of the next polling period.
6 - 78
Rate-Monotonic Polling Server
Example:
Schedulability (the server is treated as an additional periodic task):
C_s / T_s + Σ_{i=1}^{n} C_i / T_i ≤ (n + 1) (2^{1/(n+1)} − 1)
6 - 80
Rate-Monotonic Polling Server
Guarantee the response time of aperiodic requests:
Assumption: An aperiodic task is finished before a new aperiodic request
arrives.
Computation time C_a, deadline D_a.
Sufficient schedulability test:
(1 + ⌈ C_a / C_s ⌉) · T_s ≤ D_a
If the server task has the highest priority, there is a necessary test also.
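A sketch of this test in Python; the ceiling reflects the worst case: a request of size C_a arriving just after the server suspended waits at most T_s and then needs ⌈C_a/C_s⌉ full server periods:

```python
import math

def ps_guarantee(Ca, Cs, Ts, Da):
    """Sufficient polling-server guarantee for an aperiodic request with
    computation time Ca and relative deadline Da, given server capacity
    Cs and period Ts: (1 + ceil(Ca/Cs)) * Ts <= Da."""
    return (1 + math.ceil(Ca / Cs)) * Ts <= Da
```

For example, with C_s = 2 and T_s = 5, a request with C_a = 2 and D_a = 10 is guaranteed, but one with C_a = 3 is not.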
6 - 81
EDF – Total Bandwidth Server
Total Bandwidth Server:
When the kth aperiodic request arrives at time t = r_k, it receives a deadline
d_k = max(r_k, d_{k−1}) + C_k / U_s
where C_k is the execution time of the request and U_s is the server utilization
factor (that is, its bandwidth). By definition, d_0 = 0.
Once a deadline is assigned, the request is inserted into the ready queue of
the system like any other periodic instance.
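The deadline-assignment rule can be sketched as follows (requests are given as hypothetical (r_k, C_k) pairs in arrival order):

```python
def tbs_deadlines(requests, Us):
    """Total Bandwidth Server deadline assignment:
    d_k = max(r_k, d_{k-1}) + C_k / Us, with d_0 = 0.
    requests: list of (r_k, C_k) tuples in arrival order."""
    d_prev, out = 0, []
    for r, c in requests:
        d = max(r, d_prev) + c / Us   # bandwidth Us is never exceeded
        out.append(d)
        d_prev = d
    return out
```

With U_s = 0.25, a request (r = 0, C = 1) gets deadline 4; a second request (r = 2, C = 1) gets deadline max(2, 4) + 4 = 8, so back-to-back requests never demand more than the server's bandwidth.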
6 - 82
6 - 83
EDF – Total Bandwidth Server
Example:
U_p = 0.75, U_s = 0.25, U_p + U_s = 1
6 - 84
EDF – Total Bandwidth Server
Schedulability test:
Given a set of n periodic tasks with processor utilization U_p and a total bandwidth
server with utilization U_s, the whole set is schedulable by EDF if and only if
U_p + U_s ≤ 1
Proof:
In each interval of time [t_1, t_2], if C_ape is the total execution time demanded by
aperiodic requests arrived at t_1 or later and served with deadlines less than or equal to
t_2, then
C_ape ≤ (t_2 − t_1) U_s
6 - 85
EDF – Total Bandwidth Server
If this lemma has been proven, the proof of the schedulability test follows closely that of the
periodic case.
Proof of lemma:
C_ape = Σ_{k=k_1}^{k_2} C_k
= U_s Σ_{k=k_1}^{k_2} ( d_k − max(r_k, d_{k−1}) )
≤ U_s ( d_{k_2} − max(r_{k_1}, d_{k_1−1}) )
≤ U_s (t_2 − t_1)
6 - 86