BITS ZG553: Real Time Systems
BITS Pilani
Pilani|Dubai|Goa|Hyderabad
Note: Students are requested not to rely on PPTs/recorded sessions as their only source of knowledge; explore sources within your own organization or on the web for any specific topic; attend classes regularly and get involved in discussions.
PLEASE DO NOT PRINT PPTs, Save the Environment!
Source PPT Courtesy: Some of the contents of this PPT are sourced from presentations of Prof K R Anupa / Prof B Mishra, BITS-Pilani WILP Division
Off-line vs On-line scheduling
Off-line scheduling: When the schedule is pre-computed and stored in advance, it is called off-line scheduling.
Example: Clock-driven scheduling / Table-driven Scheduling / Round-Robin
(Time-Slicing) / Weighted Round-Robin
It is possible only when the system parameters are known a priori
Advantages: Deterministic timing behaviour, lower complexity, very low scheduling overhead
On-line scheduling: When the scheduler makes each scheduling decision
without knowledge about the jobs that will be released in the future
Example: Priority driven scheduling
It is the only option when future workload is unpredictable
The price of this flexibility and adaptability is a reduced ability of the scheduler to come up with an optimal schedule that makes the best use of the system resources.
Priority Driven vs Clock Driven Approaches
Priority driven approaches have many advantages compared to clock driven
approach:
They do not need information such as release times and execution times in advance (in contrast with the clock-driven approach, where these parameters are required to be known a priori)
It is best suited for applications with varying time and resource requirements
Many well-known priority-driven algorithms use very simple priority assignments, reducing the overhead of maintaining multiple queues.
Despite all these advantages, Clock-driven approaches are used for hard
real-time systems, especially in safety-critical systems.
The major reason is that the timing behaviour of a priority-driven system is
nondeterministic when job parameters vary.
Consequently, it is difficult to validate that all jobs meet their deadlines in a priority-driven approach when job parameters vary.
Clock-driven Approach
Scheduling decisions are made at specific time instants, which are
chosen a priori before the system begins execution
Typically this type of scheduling is suitable for hard real-time
systems, where the parameters are fixed and known.
Scheduling decisions are computed off-line and stored for use at
run-time, thus scheduling overhead is minimal.
Generally a hardware timer is set to expire periodically.
After the system gets initialized, the scheduler selects and
schedules jobs which execute till the next scheduling decision is
made. Then the scheduler blocks itself waiting for the expiration of
the timer.
When the timer expires, the scheduler wakes up, does necessary
scheduling and sleeps again. This process repeats.
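To make this mechanism concrete, here is a minimal sketch (not part of the original slides) of a table-driven scheduler in Python. The schedule table, hyperperiod and job names are hypothetical, and a real system would use an RTOS timer interrupt rather than time.sleep.

import time

# Hypothetical pre-computed schedule table: (start time within the hyperperiod, job to run).
# In a real clock-driven system this table is produced off-line.
HYPERPERIOD = 4.0
SCHEDULE_TABLE = [(0.0, "J1"), (1.0, "J2"), (2.0, "J1"), (3.0, "J3")]

def run_job(name):
    print(f"running {name}")            # placeholder for the job's actual work

def cyclic_executive(num_cycles=1):
    """Loop over the stored table; sleep until each decision point, then dispatch."""
    start = time.monotonic()
    for cycle in range(num_cycles):
        base = cycle * HYPERPERIOD
        for offset, job in SCHEDULE_TABLE:
            # Block until the pre-computed decision time (a hardware timer in a real system).
            wake_at = start + base + offset
            time.sleep(max(0.0, wake_at - time.monotonic()))
            run_job(job)

if __name__ == "__main__":
    cyclic_executive()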
Round-robin Approach
Also known as time-sharing
Every job joins a FIFO (First-in-first-out) queue when it
becomes ready for execution
The entire time period is divided into several time-slices
The job at the head of the queue executes for one time-
slice.
If the job doesn’t complete by the end of the time-slice, it
gets pre-empted and placed at the end of the queue to
wait for its next turn.
If there are ‘n’ jobs ready for execution, each job gets
1/nth share of the processor.
Round-robin Approach - Example
(Figure: Round-robin execution of two tasks on a single processor – jobs J1,1 and J2,1 alternate on processor P1 over time.)
(Figure: Round-robin execution of two tasks on two processors.)
Weighted Round-robin Approach
This approach is a round robin approach with different
weights assigned to different jobs.
If a job has weight ‘wt’, then it will get ‘wt’ time slices
every round for execution.
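A minimal sketch of this idea (not part of the original slides); the job names, weights and remaining execution times below are hypothetical, and each time slice is one simulated unit.

from collections import deque

def weighted_round_robin(jobs):
    """jobs: dict name -> (weight, remaining_time). Returns the order of executed slices."""
    queue = deque(jobs.keys())                        # FIFO queue of ready jobs
    remaining = {n: r for n, (w, r) in jobs.items()}
    weight = {n: w for n, (w, r) in jobs.items()}
    trace = []
    while queue:
        job = queue.popleft()
        # A job with weight wt gets up to wt time slices in each round.
        slices = min(weight[job], remaining[job])
        trace.extend([job] * slices)
        remaining[job] -= slices
        if remaining[job] > 0:                        # not finished: go to the back of the queue
            queue.append(job)
    return trace

# Example: J1 has weight 2, J2 has weight 1.
print(weighted_round_robin({"J1": (2, 5), "J2": (1, 3)}))
# -> ['J1', 'J1', 'J2', 'J1', 'J1', 'J2', 'J1', 'J2']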
Priority Driven Approach
Also known as greedy scheduling, list scheduling and work-
conserving scheduling
Priorities are assigned to the jobs based on their criticality
Jobs ready for execution are placed in one or more queues
ordered by priorities of the jobs.
At any scheduling decision time, the jobs with the highest
priorities are scheduled and executed on the available
processors.
Most scheduling algorithms used in non-real-time systems are
priority driven.
FIFO (First In First Out)
LIFO (Last In First Out)
SETF (Shortest Execution Time First)
LETF (Longest Execution Time First)
For jobs of the same priority, round-robin scheduling is used
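A minimal sketch of a priority-driven dispatcher (not part of the original slides): ready jobs are kept in a priority queue and the highest-priority job is picked at every decision point. The job names and priorities are illustrative.

import heapq

# Each ready job is (priority, name); heapq pops the smallest value,
# so priority 1 is treated as the highest priority.
ready_queue = []

def release(priority, name):
    heapq.heappush(ready_queue, (priority, name))

def dispatch():
    """Pick the highest-priority ready job, or None if the queue is empty."""
    return heapq.heappop(ready_queue)[1] if ready_queue else None

release(2, "J2")
release(1, "J1")
release(3, "J3")
print(dispatch(), dispatch(), dispatch())   # -> J1 J2 J3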
Priority-driven Scheduling Example
Rules:
– each process has a fixed priority (1 highest);
– highest-priority ready process gets CPU;
– process continues until done.
Processes
– J1: priority 1, release time 15, execution time 10
– J2: priority 2, release time 0, execution time 30
– J3: priority 3, release time 18, execution time 20
Priority-driven Scheduling Example (contd.)
(Gantt chart: J2 is ready at t = 0, J1 at t = 15 and J3 at t = 18; the processor executes J2, then J1, then J2 again, then J3 over the interval 0 to 60.)
Assumptions
Tasks are independent
There are no aperiodic and sporadic tasks
Every job is ready for execution as soon as released
A job can be preempted any time
A job never suspends itself
Scheduling decisions are made immediately upon the job releases and
completions
Context switch overhead is negligibly small compared with the execution
times of the tasks
The number of priority levels is unlimited
The number of periodic tasks is fixed
Generally, the real-time algorithms of practical interest are fixed-priority algorithms.
Rate Monotonic (RM) Algorithm
Fixed priority algorithm
Shorter the period, higher the priority
Rate is the inverse of the period. Hence, the higher the rate, the higher the priority – hence the name ‘rate monotonic’.
T1 has the shortest period (i.e. 4), so it has the highest priority, followed by T2 and then T3. (The periods and execution times implied by the schedule below are T1 = (4, 1), T2 = (5, 2) and T3 = (20, 5).)
T1 gets scheduled every 4 time units, i.e. at times 0, 4, 8, 12, 16, 20, ...
T2 gets scheduled for 2 time slots starting at times 1, 5 and 10. When T2 is released again at time 15, it is given that slot, preempting T3. But at time 16, T1 is released and preempts T2. Once T1 is done, T2 resumes at time 17 and completes at 18.
T3 gets scheduled in the remaining slots 3 and 7. It also gets slot 9, since T1 is done and T2 has not yet been released at that time, but it is preempted in the next slot because T2 is released. It is scheduled again in slots 13 and 14; with this, T3 completes its execution for the period of 20.
Schedule (each slot is one time unit):
0: T1, 1: T2, 2: T2, 3: T3, 4: T1, 5: T2, 6: T2, 7: T3, 8: T1, 9: T3, 10: T2, 11: T2, 12: T1, 13: T3, 14: T3, 15: T2, 16: T1, 17: T2; slots 18–19 are idle, and T1 is released again at 20.
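The RM priority assignment itself is just a sort by period. A small sketch (not from the slides), using the task parameters inferred above:

# Tasks given as (name, period, execution_time); shorter period -> higher priority.
tasks = [("T1", 4, 1), ("T2", 5, 2), ("T3", 20, 5)]

rm_order = sorted(tasks, key=lambda t: t[1])        # sort by period
for prio, (name, period, exe) in enumerate(rm_order, start=1):
    print(f"priority {prio}: {name} (period {period}, execution time {exe})")
# -> priority 1: T1, priority 2: T2, priority 3: T3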
Rate Monotonic (RM) Algorithm (contd.)
(Figure: Gantt chart of the RM schedule, with a separate timeline per task, showing jobs J1,1–J1,6 of T1, the jobs of T2, and job J3,1 of T3 over the interval 0 to 21.)
Deadline Monotonic (DM) Algorithm
Fixed priority algorithm
Shorter the relative deadline, higher the priority
When the relative deadline of every task is proportional to its period, the schedules produced by the RM and DM algorithms are identical.
But when the relative deadlines are arbitrary, DM may produce a feasible schedule when RM fails.
Example:
T1 = (50, 50, 25, 100), T2 = (0, 60, 10, 20), T3 = (0, 125, 25, 50), where each task is given as (phase, period, execution time, relative deadline)
According to DM algorithm, T2 has highest priority because its relative deadline is 20.
Similarly T1 has lowest priority and T3 has priority in between.
(Figure: Gantt chart of the DM schedule, showing the jobs of T1 (J1,1–J1,4) and T3 (J3,1, J3,2) over the interval 0 to 225.)
EDF (Earliest Deadline First) Algorithm
Dynamic priority algorithm
Job with earliest (absolute) deadline has highest priority
(Figure: Gantt chart of an EDF schedule over the interval 0 to 11, showing the jobs J2,1, J2,2 and J2,3 of task T2.)
Example (Jobs with Precedence): J1 has execution time 3 and feasible interval (0, 6]; J2 has execution time 2 and feasible interval (5, 8]; J3 has execution time 3 and feasible interval (2, 8].
t = 0: J1 is released; there is no other job in the system, so it gets scheduled.
t = 2: J3 is released. Deadline of J1 = 6, deadline of J3 = 8, so J1 has higher priority and continues.
t = 3: J1 completes; J3 starts.
t = 5: J2 is released. Deadline of J2 = 8 and deadline of J3 = 8, so both have the same priority; let J3 continue.
t = 6: J3 is done; J2 gets scheduled.
t = 8: J2 is done.
Resulting schedule: J1 in (0, 3], J3 in (3, 6], J2 in (6, 8].
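A minimal sketch of this EDF example as a unit-step simulation (not from the slides); the tie at t = 5 is broken in favour of the already running job, as in the walk-through above.

# Jobs: name -> (release time, execution time, absolute deadline)
jobs = {"J1": (0, 3, 6), "J2": (5, 2, 8), "J3": (2, 3, 8)}

remaining = {name: e for name, (r, e, d) in jobs.items()}
schedule, running = [], None
for t in range(8):                                   # simulate the interval (0, 8]
    ready = [n for n, (r, e, d) in jobs.items() if r <= t and remaining[n] > 0]
    # EDF: earliest absolute deadline first; prefer the running job on ties.
    ready.sort(key=lambda n: (jobs[n][2], n != running))
    running = ready[0]
    remaining[running] -= 1
    schedule.append(running)

print(schedule)   # -> ['J1', 'J1', 'J1', 'J3', 'J3', 'J3', 'J2', 'J2']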
LST (Least-Slack-Time-First) Algorithm
A task-level dynamic-priority and job-level dynamic-priority algorithm
Job with smallest slack has highest priority
At time t, the slack of a job whose remaining execution time is x and whose deadline is d is d − t − x.
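For instance (a hypothetical job, used only to illustrate the formula), at time t = 2 a job with remaining execution time x = 3 and absolute deadline d = 10 has slack 10 − 2 − 3 = 5.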
(Figure: Gantt chart of an LST schedule over the interval 0 to 11, showing the jobs J2,1, J2,2 and J2,3 of task T2.)
Schedulable Utilization
Example for U > 1:
Consider EDF schedule for tasks: T1 = (2,1), T2 =(5,3)
Total utilization U = 1/2+ 3/5 = 0.5 + 0.6 = 1.1
J2,2 misses its deadline at time 10.
(Figure: EDF schedule of T1 and T2 over the interval 0 to 11.)
Relative Merits
Criterion to measure performance: Schedulable Utilization
The higher the schedulable utilization, the better the algorithm.
An algorithm whose schedulable utilization is 1 is an optimal algorithm.
Optimal dynamic-priority algorithms outperform fixed-priority algorithms in terms of schedulable utilization.
But the advantage of fixed-priority algorithms is predictability.
Schedulable Utilization of EDF Algorithm
Theorem:
A system T of independent, preemptable tasks with relative deadlines
equal to their respective periods can be feasibly scheduled on one
processor if and only if its total utilization is equal to or less than 1.
Schedulability Test of EDF Algorithm
Schedulability condition for the EDF algorithm:
\[ \sum_{k=1}^{n} \frac{e_k}{\min(D_k, p_k)} \le 1 \]
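A direct translation of this condition into code (a sketch, not part of the slides); each task is given as (execution time, period, relative deadline).

def edf_schedulable(tasks):
    """tasks: list of (e, p, D). True if sum(e_k / min(D_k, p_k)) <= 1."""
    return sum(e / min(D, p) for (e, p, D) in tasks) <= 1

# Example: relative deadlines equal to the periods.
print(edf_schedulable([(3, 8, 8), (3, 9, 9), (3, 15, 15)]))   # True (utilization ~0.91)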
Optimality of RM & DM Algorithms
Since these algorithms assign fixed priorities, they can’t be
optimal.
While RM algorithm is not optimal for arbitrary periods, it is
optimal in the special case when the periodic tasks in the
system are simply periodic.
A system of periodic tasks is simply periodic if, for every pair
of tasks Ti and Tk in the system with pi < pk, pk is an integer
multiple of pi.
In other words, for simply periodic tasks, the periods of all
tasks are integer multiples of the shortest period.
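For example, tasks with periods 2, 4 and 8 are simply periodic, whereas tasks with periods 2, 4 and 6 are not, since 6 is not an integer multiple of 4.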
Optimality of RM Algorithm
Theorem:
A system of simply periodic, independent, preemptable tasks whose relative deadlines are equal to their periods is schedulable on one processor according to the RM algorithm if and only if its total utilization is less than or equal to 1.
Optimality of DM Algorithm
Theorem:
A system of independent, preemptable periodic tasks that are in phase and have relative deadlines equal to or less than their respective periods can be feasibly scheduled on one processor according to the DM algorithm whenever it can be feasibly scheduled according to any fixed-priority algorithm.
Sufficient schedulability condition for RM algorithm
Theorem:
A system of ‘m’ independent, preemptable periodic tasks with relative
deadlines equal to their respective periods can be feasibly scheduled
on a processor according to the RM algorithm if its total utilization U is
less than or equal to
\[ U_{RM}(m) = m\,(2^{1/m} - 1) \]
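A small sketch of this test in code (not part of the slides); each task is given as (period, execution time).

def rm_utilization_bound(m):
    """Liu-Layland bound U_RM(m) = m * (2**(1/m) - 1)."""
    return m * (2 ** (1.0 / m) - 1)

def rm_sufficient_test(tasks):
    """tasks: list of (p, e). Sufficient (not necessary) RM test."""
    u = sum(e / p for (p, e) in tasks)
    return u <= rm_utilization_bound(len(tasks))

print(round(rm_utilization_bound(3), 2))                 # -> 0.78
print(rm_sufficient_test([(8, 3), (9, 3), (15, 3)]))     # -> False (the test is inconclusive)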
Lehoczky’s Schedulability Test for Fixed-Priority Algorithms
Used when Sufficient Schedulability Condition fails
Theorem
A set of periodic real-time tasks is RMA schedulable under any task phasing, if all the tasks
meet their respective first deadlines under zero phasing (i.e. when all tasks have phases equal
to 0).
Explanation
Let there be two tasks T1 = (30, 10) and T2 = (120, 60), given as (period, execution time), scheduled as per the RM algorithm.
Then T1 has higher priority than T2.
(Figure: RM schedule when T1 and T2 both have phase 0 – jobs J1,1, J1,2, J1,3 of T1 interleave with J2,1, and the first job of T2 finishes at 90.)
(Figure: RM schedule when T1 has phase 20 and T2 has phase 0 – the first job of T2 finishes at 80.)
From this example it is evident that the worst-case response time occurs for a lower-priority task when the phase of this task and the phases of all higher-priority tasks are 0.
Lehoczky’s Schedulability Test (Contd.)
As seen in the example, within the deadline of the first job of T2, i.e. 120, the higher-priority task T1 can be scheduled 120 / 30 = 4 times.
So T2 has to wait for 4 × 10 (the execution time of each job of T1) time units during the execution of its first job.
Hence, in the worst case, the amount of time a low-priority task Ti has to wait due to the higher-priority tasks (T1, T2, ..., Ti-1) in the system is
\[ \sum_{k=1}^{i-1} \left\lceil \frac{D_i}{p_k} \right\rceil e_k \]
Then, for all the tasks to be feasibly scheduled, this waiting time plus the task's own execution time should be less than or equal to the respective deadline, i.e.
\[ e_i + \sum_{k=1}^{i-1} \left\lceil \frac{D_i}{p_k} \right\rceil e_k \le D_i \]
This is Lehoczky's Schedulability Test.
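A minimal sketch of this test in code (not part of the slides); tasks must be listed in decreasing order of priority, each given as (period, execution time, relative deadline).

import math

def lehoczky_test(tasks):
    """Return True if e_i + sum(ceil(D_i / p_k) * e_k over higher-priority tasks) <= D_i
    holds for every task Ti, as in the inequality above."""
    for i, (p_i, e_i, d_i) in enumerate(tasks):
        demand = e_i + sum(math.ceil(d_i / p_k) * e_k
                           for (p_k, e_k, _) in tasks[:i])
        if demand > d_i:
            return False
    return True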
Floor and Ceiling Functions
Floor
⌊x⌋ is the largest integer not greater than x
Ceiling
⌈x⌉ is the smallest integer not less than x
Example:
  x      ⌊x⌋    ⌈x⌉
  2.4     2      3
  5.5     5      6
 -2.1    -3     -2
 -2      -2     -2
Example
Question:
Please check whether the following set of tasks can be scheduled by the EDF and RM algorithms:
T1 = (8, 3), T2 = (9, 3), T3 = (15, 3), given as (period, execution time)
Answer:
Utilization U = (3 / 8) + (3 / 9) + (3 / 15) = 0.375 + 0.333 + 0.2 = 0.9083
U < 1, so these tasks are schedulable by EDF algorithm.
Let us calculate the sufficient condition for RM schedulability:
\[ U_{RM}(3) = 3\,(2^{1/3} - 1) = 3 \times (1.26 - 1) = 0.78 \]
So U > U_RM(3); hence this test fails. But this doesn't mean that these tasks
can't be scheduled by the RM algorithm.
Let us perform Lehoczky’s test.
Example (contd.)
As per RMA, the priorities of these tasks are T1 > T2 > T3.
For T1, the execution time 3 is less than its deadline 8, so it is schedulable.
For T2, the time it will be in the system in the worst-case scenario is
\[ e_2 + \left\lceil \frac{D_2}{p_1} \right\rceil e_1 = 3 + \left\lceil \frac{9}{8} \right\rceil \times 3 = 3 + 2 \times 3 = 9 \]
Since 9 ≤ D2 = 9, T2 is schedulable.
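The same check can be run for all three tasks in a few lines (a sketch, not from the slides). The T3 computation, 3 + ⌈15/8⌉·3 + ⌈15/9⌉·3 = 3 + 6 + 6 = 15 ≤ 15, is not shown on these slides but follows the same formula.

import math

tasks = [(8, 3, 8), (9, 3, 9), (15, 3, 15)]   # (period, execution, deadline), highest priority first
for i, (p_i, e_i, d_i) in enumerate(tasks):
    demand = e_i + sum(math.ceil(d_i / p_k) * e_k for (p_k, e_k, _) in tasks[:i])
    print(f"T{i+1}: demand {demand} <= deadline {d_i}? {demand <= d_i}")
# -> T1: 3 <= 8, T2: 9 <= 9, T3: 15 <= 15 -> all pass the test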
Practical Factors
Nonpreemptability
Some jobs are by nature nonpreemptable, e.g. in disk scheduling.
When a low-priority job is scheduled and it happens to be
nonpreemptable, a high-priority job that arrives later (either from
the blocked state or because it gets released) has to wait.
The high-priority job has to wait until the nonpreemptable low-
priority job completes.
This will increase the response time of the high priority job.
Hence while considering whether the high priority jobs can meet
their deadline or not, we also need to consider the effect of low
priority nonpreemptable jobs on them.
Practical Factors
Self-suspension
A job may suspend itself during execution for various reasons, such as
waiting for an I/O operation or a remote procedure call to complete.
As a result O/S removes it from the ready queue and puts it in the
suspended queue.
The time spent during self suspension should also be considered
during timing analysis of the jobs.
Context Switches
Context switching is a common phenomenon in a priority-driven system.
Hence context switch time should also be taken into consideration
during the timing analysis.
Practical Factors
Limited-Priority Levels
In practical systems the number of priority levels is limited (e.g. in a token ring
network there are 8 priority levels; in RTOSes there are usually 256 levels)
Hence tasks (jobs) may have non-distinct priorities, which needs to be considered during
the analysis.
Tick Scheduling
In our analysis so far, we have assumed that the scheduler makes
scheduling decisions as and when jobs arrive (i.e. the scheduler is event-driven).
But practically, there will be a timer running and the scheduler will be waking up at
each timer tick.
So even if a job is ready, the scheduler may not notice it until the next timer
interrupt. This introduces a certain delay in completion of the job.
Also, a job that is ready but not yet noticed by the scheduler (and hence not yet
in the ready queue) has to be held in some other queue.
These factors should also be considered during the analysis.
Practical Factors
Varying Priority in Fixed-Priority Systems
In order to tackle the priority inversion problem, sometimes the priorities of lower-
priority jobs are raised. Such an operation will have an effect on the analysis.
Hierarchical Scheduling
This scheduling is used when there are multiple tasks/jobs with the same priority.
The tasks/jobs having the same priority are put into a cluster/subsystem.
Two common types of scheduling approaches are used:
Priority-driven/round-robin system: the clusters are scheduled in a priority-driven
manner, and the tasks/jobs within a cluster are scheduled in a round-robin
manner.
Fixed-time partitioning scheme: the clusters/subsystems are scheduled according
to a cyclic schedule, and the tasks/jobs in each subsystem are scheduled as per the
scheduling algorithm chosen for that subsystem.
Schedulability Test for Fixed-Priority Tasks with Short Response Times
Critical Instants
A critical instant of a task Ti is a time instant such that:
The job in Ti released at the instant has the maximum response time of all jobs
in Ti, if the response time of every job in Ti is equal or less than the relative
deadline Di of Ti
And
The response time of the job released at that instant is greater than Di if the
response time of some jobs in Ti exceeds Di.
In other words, a job of Ti released at a critical instant has the maximum
response time among all jobs of Ti.
Theorem
In a fixed-priority system, where every job completes before the next job in the
same task is released, a critical instant of any task Ti occurs when one of its jobs
Ji,c is released at the same time as a job in every higher-priority task, that is,
ri,c = rk,lk for some lk, for every k = 1, 2, …, i−1.
Time Demand Function
Suppose the release time t0 of the job is a critical instant of task Ti.
Then at time t0 + t, t > 0, the total processor time demand wi(t) of this job and
all the higher-priority jobs released in [t0, t0 + t] is given by
\[ w_i(t) = e_i + \sum_{k=1}^{i-1} \left\lceil \frac{t}{p_k} \right\rceil e_k, \quad 0 < t \le p_i \]
If wi(t) > t for all 0 < t ≤ Di , then the job can not complete by its deadline.
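A direct encoding of wi(t) (a sketch, not from the slides), with tasks listed from highest to lowest priority as (period, execution time):

import math

def time_demand(tasks, i, t):
    """tasks: list of (p, e) in decreasing priority order.
    Returns w_i(t) = e_i + sum_{k<i} ceil(t / p_k) * e_k for the i-th task (0-based)."""
    p_i, e_i = tasks[i]
    return e_i + sum(math.ceil(t / p_k) * e_k for (p_k, e_k) in tasks[:i])

# The four tasks used in the example that follows.
tasks = [(3, 1), (5, 1.5), (7, 1.25), (9, 0.5)]
print(time_demand(tasks, 3, 3))   # -> 4.25, matching w4(t) for 0 < t <= 3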
Time Demand Function – Example (contd.)
\[ w_4(t) = e_4 + \left\lceil \frac{t}{p_1} \right\rceil e_1 + \left\lceil \frac{t}{p_2} \right\rceil e_2 + \left\lceil \frac{t}{p_3} \right\rceil e_3, \quad 0 < t \le p_4 \]
\[ w_4(t) = 0.5 + \left\lceil \frac{t}{3} \right\rceil \cdot 1 + \left\lceil \frac{t}{5} \right\rceil \cdot 1.5 + \left\lceil \frac{t}{7} \right\rceil \cdot 1.25, \quad 0 < t \le 9 \]
\[ w_4(t) = \begin{cases} 4.25, & 0 < t \le 3 \\ 5.25, & 3 < t \le 5 \\ 6.75, & 5 < t \le 6 \\ 7.75, & 6 < t \le 7 \\ 9.0, & 7 < t \le 9 \end{cases} \]
Time Demand Function
(Figure: plot of the time-demand functions w1(t), w2(t), w3(t) and w4(t) against the supply line y = t. Where wi(t) > t, demand exceeds supply; where wi(t) < t, demand is less than supply. The curves meet y = t at t = 1 for w1, t = 2.5 for w2, t = 4.75 for w3 and t = 9 for w4.)
Please note that, for T2, the line y = t crosses w2(t) at 2.5 and at 3.5. The maximum response time is 2.5, the smallest value of t satisfying the equation; 3.5 cannot be considered as the maximum response time, because within one period of 5, a job of T2 completes at the earliest possible instant, which is 2.5.
Time Demand Function
4 Tasks: T1 = (3, 1), T2 = (5, 1.5), T3 = (7, 1.25), T4 = (9, 0.5) are scheduled based on RM algorithm.
(Figure: RM schedule of the four tasks over the interval 0 to 18; the annotations show job completion times 2.5, 4.75, 7.5, 8.75, 11.5, 17.5, 17.75 and response times such as 4.75, 1.75 and 3.75 for T3's jobs and 9 for T4's first job.)
Time Demand Analysis
The time-demand function is a staircase function, with steps at integer
multiples of the periods of the higher-priority tasks.
wi(t) is the demand of time and t is the supply of time.
The task is schedulable if, at some time instant within the inter-release interval of
two adjacent jobs of the task, the demand is less than or equal to the supply.
That is, the task is schedulable if wi(t) ≤ t at some time instant in that interval.
It means that wi(t) must meet or cross the straight line y(t) = t.
Since wi(t) is a staircase function, it is enough to check whether wi(t) ≤ t at the
time instants that are integer multiples of the periods of the higher-priority tasks,
or at the period (deadline) of the current task.
The first intersection of wi(t) and y(t) = t gives the maximum response time,
since this is the earliest time instant where demand equals supply.
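Putting these observations together, a sketch of the check for one task (not from the slides; it reuses the wi(t) encoding shown earlier and assumes Di = pi):

import math

def time_demand(tasks, i, t):
    p_i, e_i = tasks[i]
    return e_i + sum(math.ceil(t / p_k) * e_k for (p_k, e_k) in tasks[:i])

def tda_schedulable(tasks, i):
    """Check task i (0-based, tasks in decreasing priority order, D_i = p_i assumed):
    schedulable if w_i(t) <= t at some multiple of a higher-priority period (or at p_i)."""
    p_i = tasks[i][0]
    checkpoints = {p_i}
    for (p_k, _) in tasks[:i]:
        checkpoints.update(j * p_k for j in range(1, math.floor(p_i / p_k) + 1))
    return any(time_demand(tasks, i, t) <= t for t in sorted(checkpoints))

tasks = [(3, 1), (5, 1.5), (7, 1.25), (9, 0.5)]
print([tda_schedulable(tasks, i) for i in range(len(tasks))])   # -> [True, True, True, True]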
Time Demand Analysis
Time-Demand Analysis method proposed by Lehoczky:
For each task Ti, check whether wi(t) ≤ t at the time instants t = j·pk, for k = 1, 2, ..., i and j = 1, 2, ..., ⌊min(Di, pi)/pk⌋; Ti is schedulable if the inequality is satisfied for at least one of these instants.
For the four tasks of the example, the time-demand functions are:
\[ w_1(t) = 1.0, \quad 0 < t \le 3 \]
\[ w_2(t) = \begin{cases} 2.5, & 0 < t \le 3 \\ 3.5, & 3 < t \le 5 \end{cases} \]
\[ w_3(t) = \begin{cases} 3.75, & 0 < t \le 3 \\ 4.75, & 3 < t \le 5 \\ 6.25, & 5 < t \le 6 \\ 7.25, & 6 < t \le 7 \end{cases} \]
\[ w_4(t) = \begin{cases} 4.25, & 0 < t \le 3 \\ 5.25, & 3 < t \le 5 \\ 6.75, & 5 < t \le 6 \\ 7.75, & 6 < t \le 7 \\ 9.0, & 7 < t \le 9 \end{cases} \]
Time Demand Analysis – Example (contd.)
For T1:
w1(t) = 1.0 for 0 < t ≤ 3.
i = 1, so k and j do not exist (there is no higher-priority task). Hence T1 is schedulable.
For T2:
w2(t) = 2.5 for 0 < t ≤ 3, and 3.5 for 3 < t ≤ 5.
i = 2, so k = 1, 2.
For k = 1: j = 1, ..., ⌊min(5, 5)/3⌋ = 1. For j = 1, t = j·pk = 1 × 3 = 3, w2(3) = 2.5 < 3.
For k = 2: j = 1, ..., ⌊min(5, 5)/5⌋ = 1. For j = 1, t = j·pk = 1 × 5 = 5, w2(5) = 3.5 < 5.
Hence T2 is schedulable, since the inequality is satisfied in all the cases.
For T3:
w3(t) = 3.75 for 0 < t ≤ 3, 4.75 for 3 < t ≤ 5, 6.25 for 5 < t ≤ 6, and 7.25 for 6 < t ≤ 7.
i = 3, so k = 1, 2, 3.
For k = 1: j = 1, ..., ⌊min(7, 7)/3⌋ = 1, 2. For j = 1, t = 1 × 3 = 3, w3(3) = 3.75 > 3. For j = 2, t = 2 × 3 = 6, w3(6) = 6.25 > 6.
For k = 2: j = 1, ..., ⌊min(7, 7)/5⌋ = 1. For j = 1, t = 1 × 5 = 5, w3(5) = 4.75 < 5.
For k = 3: j = 1, ..., ⌊min(7, 7)/7⌋ = 1. For j = 1, t = 1 × 7 = 7, w3(7) = 7.25 > 7.
Hence T3 is schedulable, since the inequality is satisfied for t = 5.
Time Demand Analysis – Example (contd.)
For T4:
w4(t) = 4.25 for 0 < t ≤ 3, 5.25 for 3 < t ≤ 5, 6.75 for 5 < t ≤ 6, 7.75 for 6 < t ≤ 7, and 9.0 for 7 < t ≤ 9.
i = 4, so k = 1, 2, 3, 4.
For k = 1: j = 1, ..., ⌊min(9, 9)/3⌋ = 1, 2, 3. For j = 1, t = 1 × 3 = 3, w4(3) = 4.25 > 3. For j = 2, t = 2 × 3 = 6, w4(6) = 6.75 > 6. For j = 3, t = 3 × 3 = 9, w4(9) = 9.
For k = 2: j = 1, ..., ⌊min(9, 9)/5⌋ = 1. For j = 1, t = 1 × 5 = 5, w4(5) = 5.25 > 5.
For k = 3: j = 1, ..., ⌊min(9, 9)/7⌋ = 1. For j = 1, t = 1 × 7 = 7, w4(7) = 7.75 > 7.
For k = 4: j = 1, ..., ⌊min(9, 9)/9⌋ = 1. For j = 1, t = 1 × 9 = 9, w4(9) = 9.
Hence T4 is schedulable, since w4(9) = 9 ≤ 9, i.e. the inequality is satisfied for t = 9.
The time interval (t0, t] is called a level-∏i busy interval if, throughout this interval, the
processor is busy executing jobs with priority ∏i or higher, all the
jobs executed in the interval are released within the interval, and at the end of
the interval there is no backlog of jobs to be executed afterwards.
Thank You.
Any Questions?