BITS ZG553: Real Time Systems

[L6: Overview of RTS Schedulers, Performance, Optimality, Schedulability]

K G Krishna
WILP Division, BITS-Pilani, Hyderabad

L-6: Review of RTS Schedulers (Feasibility, Optimality & Schedulability)
[Ref: T1/C4]

Note: Students are requested not to rely on PPTs/recorded sessions as their only source of knowledge; explore sources within your own organization or on the web for any specific topic; attend classes regularly and take part in discussions.
PLEASE DO NOT PRINT PPTs, Save the Environment!

Source PPT Courtesy: Some of the contents of this PPT are sourced from presentations of Prof K R Anupa / Prof B Mishra, BITS-Pilani WILP Division
Off-line vs On-line scheduling
Off-line scheduling: When the schedule is pre-computed and stored, it is called off-line scheduling.
 Example: Clock-driven scheduling / Table-driven scheduling / Round-robin (time-slicing) / Weighted round-robin
 It is possible only when the system parameters are known a priori
 Advantages: deterministic timing behaviour, lower complexity, very low scheduling overhead
On-line scheduling: When the scheduler makes each scheduling decision without knowledge of the jobs that will be released in the future, it is called on-line scheduling.
 Example: Priority-driven scheduling
 It is the only option when the future workload is unpredictable
 The price of this flexibility and adaptability is a reduced ability of the scheduler to come up with an optimal schedule that makes the best use of the system resources

Priority Driven vs Clock Driven approaches
 Priority-driven approaches have many advantages compared to the clock-driven approach:
 They do not need the release times, execution times, etc. to be known in advance (in contrast with the clock-driven approach, where these parameters are required a priori)
 They are best suited for applications with varying time and resource requirements
 Many well-known priority-driven algorithms use very simple priority assignments, reducing the overhead of maintaining multiple queues.
 Despite all these advantages, clock-driven approaches are used for hard real-time systems, especially in safety-critical systems.
 The major reason is that the timing behaviour of a priority-driven system is nondeterministic when job parameters vary.
 Consequently, it is difficult to validate that all jobs meet their deadlines in a priority-driven approach when job parameters vary.

Clock-driven Approach
 Scheduling decisions are made at specific time instants, which are
chosen a priori before the system begins execution
 Typically this type of scheduling is suitable for hard real-time
systems, where the parameters are fixed and known.
 Scheduling decisions are computed off-line and stored for use at
run-time, thus scheduling overhead is minimal.
 Generally, a hardware timer is set to expire periodically.
 After the system gets initialized, the scheduler selects and
schedules jobs which execute till the next scheduling decision is
made. Then the scheduler blocks itself waiting for the expiration of
the timer.
 When the timer expires, the scheduler wakes up, does necessary
scheduling and sleeps again. This process repeats.
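
To make this mechanism concrete, here is a minimal Python sketch (not from the slides) of a table-driven cyclic executive; the frame length, schedule table and job functions are illustrative assumptions.

```python
# A minimal sketch of a table-driven (clock-driven) cyclic executive.
# The schedule table, frame length and job bodies are illustrative assumptions.
import time

def job_A(): pass   # placeholder workloads
def job_B(): pass

FRAME = 0.1                              # a timer "expires" every 100 ms
TABLE = [job_A, job_B, job_A, None]      # pre-computed off-line schedule (None = idle frame)

def cyclic_executive(n_hyperperiods=3):
    k = 0
    next_wakeup = time.monotonic()
    while k < n_hyperperiods * len(TABLE):
        job = TABLE[k % len(TABLE)]      # look up the pre-computed scheduling decision
        if job is not None:
            job()                        # run the job chosen for this frame
        k += 1
        next_wakeup += FRAME
        # "block until the timer expires", then repeat
        time.sleep(max(0.0, next_wakeup - time.monotonic()))

cyclic_executive()
```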
Round-robin Approach
 Also known as time-sharing
 Every job joins a FIFO (First-in-first-out) queue when it
becomes ready for execution
 The entire time period is divided into several time-slices
 The job at the head of the queue executes for one time-
slice.
 If the job doesn’t complete at the end of the time-slice, it gets pre-empted and placed at the end of the queue to wait for its next turn.
 If there are ‘n’ jobs ready for execution, each job gets
1/nth share of the processor.
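
A minimal Python sketch of this dispatching rule, with assumed job names and execution times:

```python
# A minimal sketch of round-robin dispatching with a FIFO ready queue.
from collections import deque

def round_robin(jobs, slice_len=1):
    """jobs: dict name -> remaining execution time. Returns the dispatch order."""
    queue = deque(jobs.keys())            # FIFO ready queue
    remaining = dict(jobs)
    timeline = []
    while queue:
        j = queue.popleft()               # the job at the head runs for one time slice
        run = min(slice_len, remaining[j])
        timeline.append((j, run))
        remaining[j] -= run
        if remaining[j] > 0:              # not finished: preempt and requeue at the tail
            queue.append(j)
    return timeline

print(round_robin({"J1": 3, "J2": 2}))
# [('J1', 1), ('J2', 1), ('J1', 1), ('J2', 1), ('J1', 1)]
```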

Round-robin Approach -
Example

(Figure: round-robin execution of two tasks on a single processor; the time slices alternate J1,1, J2,1, J1,2, J2,2, J1,3, J2,3 along the time axis.)

(Figure: round-robin execution of two tasks on two processors, P1 and P2.)
Weighted round-robin
Approach
 This approach is a round robin approach with different
weights assigned to different jobs.
 If a job has weight ‘wt’, then it will get ‘wt’ time slices
every round for execution.
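
A minimal Python sketch of this rule, with assumed job names, weights and a unit time slice:

```python
# A minimal sketch of weighted round-robin: a job with weight wt gets wt slices per round.
def weighted_round_robin(jobs, weights, slice_len=1):
    """jobs: dict name -> remaining time; weights: dict name -> slices per round."""
    remaining = dict(jobs)
    timeline = []
    while any(r > 0 for r in remaining.values()):
        for name in jobs:                          # one round over all jobs
            budget = weights[name] * slice_len     # 'wt' slices in this round
            run = min(budget, remaining[name])
            if run > 0:
                timeline.append((name, run))
                remaining[name] -= run
    return timeline

print(weighted_round_robin({"J1": 4, "J2": 4}, {"J1": 3, "J2": 1}))
# [('J1', 3), ('J2', 1), ('J1', 1), ('J2', 1), ('J2', 1), ('J2', 1)]
```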

Priority Driven Approach
 Also known as greedy scheduling, list scheduling and work-
conserving scheduling
 Priorities are assigned to the jobs based on their criticality
 Jobs ready for execution are placed in one or more queues
ordered by priorities of the jobs.
 At any scheduling decision time, the jobs with the highest
priorities are scheduled and executed on the available
processors.
 Most scheduling algorithms used in non-real-time systems are
priority driven.
 FIFO (First In First Out)
 LIFO (Last In First Out)
 SETF (Shortest Execution Time First)
 LETF (Longest Execution Time First)
 For jobs of the same priority, round-robin scheduling is used

Priority-driven Scheduling
Example
Rules:
– each process has a fixed priority (1 highest);
– highest-priority ready process gets CPU;
– process continues until done.
Processes
– J1: priority 1, release time 15, execution time 10
– J2: priority 2, release time 0, execution time 30
– J3: priority 3, release time 18, execution time 20

Priority-driven Scheduling
Example

J2 ready at t = 0, J1 ready at t = 15, J3 ready at t = 18

(Timeline: J2 runs from t = 0 to 15, J1 preempts and runs from 15 to 25, J2 resumes from 25 to 40, and J3 runs from 40 to 60.)
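
A minimal Python sketch (assuming unit-length time steps and preemptive dispatching, as the timeline above shows) that reproduces this schedule:

```python
# A minimal sketch of preemptive fixed-priority scheduling for the three example jobs.
jobs = {  # name: (priority, release time, execution time); 1 is the highest priority
    "J1": (1, 15, 10),
    "J2": (2, 0, 30),
    "J3": (3, 18, 20),
}

remaining = {name: e for name, (_, _, e) in jobs.items()}
schedule = []
t = 0
while any(r > 0 for r in remaining.values()):
    ready = [n for n, (_, rel, _) in jobs.items() if rel <= t and remaining[n] > 0]
    current = min(ready, key=lambda n: jobs[n][0]) if ready else None   # highest priority runs
    schedule.append((t, current))
    if current:
        remaining[current] -= 1
    t += 1

# Print contiguous segments: J2: [0, 15)  J1: [15, 25)  J2: [25, 40)  J3: [40, 60)
start = 0
for i in range(1, len(schedule) + 1):
    if i == len(schedule) or schedule[i][1] != schedule[start][1]:
        print(f"{schedule[start][1]}: [{schedule[start][0]}, {schedule[i-1][0] + 1})")
        start = i
```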

Assumptions
 Tasks are independent
 There are no aperiodic or sporadic tasks
 Every job is ready for execution as soon as it is released
 A job can be preempted at any time
 A job never suspends itself
 Scheduling decisions are made immediately upon job releases and completions
 Context-switch overhead is negligibly small compared with the execution times of the tasks
 The number of priority levels is unlimited
 The number of periodic tasks is fixed

When an application creates a new task,

 The application first requests the scheduler to add the new task by providing the scheduler with the relevant parameters of the task, including its period, execution time and relative deadline.
 Then the scheduler performs an acceptance test to check whether the new task can be feasibly scheduled together with all the existing tasks. If it is not feasible, the scheduler rejects the task.
Fixed Priority vs Dynamic Priority Algorithms
 A priority-driven scheduler is an on-line scheduler. It does not pre-compute the schedule.
 It assigns priorities to jobs after they are released and places them in appropriate queues.

 Fixed-priority algorithm: a fixed-priority algorithm assigns the same priority to all the jobs in each task.
 Dynamic-priority algorithm: a dynamic-priority algorithm assigns different priorities to the individual jobs in each task. So the priority of a task can change as the jobs of that task are released.

Generally, the real-time algorithms of practical interest are fixed-priority algorithms.

Rate Monotonic (RM)
Algorithm
 Fixed priority algorithm
 Shorter the period, higher the priority
 Rate is inverse of the period. Hence higher the rate, higher the priority. – so the name ‘rate
monotonic’

Example: 3 tasks T1 = (4, 1), T2 = (5, 2), T3 = (20, 5) are to be scheduled based on the RM algorithm.

 T1 has the shortest period (i.e. 4), so it has the highest priority, followed by T2 and then T3.
 T1 gets scheduled every 4 time units, i.e. at times 0, 4, 8, 12, 16, 20, ...
 T2 gets scheduled for 2 time slots starting at times 1, 5, 10 and 15. When T2 is released at time 15, it is given that slot, preempting T3. But at time 16, T1 is released and gets scheduled, preempting T2. Once T1 is done, T2 gets scheduled again at time 17 and completes at 18.
 T3 gets scheduled in the remaining slots 3 and 7. It gets slot 9, since T1 is done and T2 has not been released at that time. But it is preempted in the next slot because T2 is released at time 10. It is scheduled again in slots 13 and 14, with which T3 completes within its period of 20.

T1 T2 T2 T3 T1 T2 T2 T3 T1 T3 T2 T2 T1 T3 T3 T2 T1 T2 idle idle
0  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16 17 18   19   20
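
A minimal Python sketch (assuming unit-sized time slots and that every job finishes within its period) that rebuilds this RM schedule over one hyperperiod:

```python
# A minimal sketch that rebuilds the RM schedule of T1 = (4, 1), T2 = (5, 2), T3 = (20, 5).
import math

tasks = {"T1": (4, 1), "T2": (5, 2), "T3": (20, 5)}   # name: (period, execution time)
rm_order = sorted(tasks, key=lambda n: tasks[n][0])    # shorter period -> higher priority

hyperperiod = math.lcm(*(p for p, _ in tasks.values()))
remaining = {n: 0 for n in tasks}
slots = []
for t in range(hyperperiod):
    for n, (p, e) in tasks.items():
        if t % p == 0:                  # a new job of task n is released at time t
            remaining[n] = e            # (assumes the previous job already finished)
    running = next((n for n in rm_order if remaining[n] > 0), "idle")
    if running != "idle":
        remaining[running] -= 1
    slots.append(running)

print(" ".join(slots))
# T1 T2 T2 T3 T1 T2 T2 T3 T1 T3 T2 T2 T1 T3 T3 T2 T1 T2 idle idle
```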

Rate Monotonic (RM)
Algorithm

(Figure: a different representation of the same schedule; the jobs of T1 (J1,1 ... J1,6), T2 (J2,1 ...) and T3 (J3,1) are shown on separate time lines from 0 to 21.)

Deadline Monotonic (DM)
Algorithm
 Fixed priority algorithm
 The shorter the relative deadline, the higher the priority
 When the relative deadline of every task is proportional to its period, the schedules produced by the RM and DM algorithms are identical.
 But when the relative deadlines are arbitrary, DM may produce a feasible schedule where RM fails.

Example (each task given as (phase, period, execution time, relative deadline)):
T1 = (50, 50, 25, 100), T2 = (0, 60, 10, 20), T3 = (0, 125, 25, 50)
According to the DM algorithm, T2 has the highest priority because its relative deadline is 20.
Similarly, T1 has the lowest priority and T3's priority is in between.

(Figure: DM schedule of T1, T2 and T3 drawn on separate time lines from 0 to 225.)
EDF (Earliest Deadline First)
Algorithm
 Dynamic priority algorithm
 Job with earliest (absolute) deadline has highest priority

Example: Consider two tasks T1 = (2, 0.9) and T2 = (5, 2.3)

(Figure: EDF schedule of T1 and T2 from 0 to 11. At each decision instant the ready job with the earliest absolute deadline runs.)

 t = 0: deadlines J1,1 = 2, J2,1 = 5; J1,1 scheduled
 t = 2: deadlines J1,2 = 4, J2,1 = 5; J1,2 scheduled
 t = 4: deadlines J1,3 = 6, J2,1 = 5; J2,1 scheduled
 After J2,1 and J1,3 complete, J2,2 is the only job in the system
 t = 6: deadlines J1,4 = 8, J2,2 = 10; J1,4 scheduled
 t = 8: deadlines J1,5 = 10, J2,2 = 10; J1,5 scheduled
 t = 10: deadlines J1,6 = 12, J2,3 = 15; J1,6 scheduled
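
A minimal Python sketch of EDF dispatching at each release or completion instant for this task set; deadline ties are broken arbitrarily here, so the choice at t = 8 may differ from the figure:

```python
# A minimal sketch of EDF for periodic tasks with relative deadline equal to the period.
def edf_schedule(tasks, horizon):
    """tasks: dict name -> (period, execution time). Returns (start, end, job) segments."""
    jobs, segments, t = [], [], 0.0     # each job: [label, absolute deadline, remaining time]
    while t < horizon:
        for name, (p, e) in tasks.items():
            if t % p < 1e-9:                                # a job of this task is released at t
                jobs.append([f"{name}@{round(t, 3)}", t + p, e])
        next_release = min((t // p + 1) * p for p, _ in tasks.values())
        ready = [j for j in jobs if j[2] > 1e-9]
        if not ready:
            t = next_release
            continue
        job = min(ready, key=lambda j: j[1])                # earliest absolute deadline first
        run = min(job[2], next_release - t)                 # run until done or the next release
        segments.append((round(t, 3), round(t + run, 3), job[0]))
        job[2] -= run
        t = round(t + run, 9)                               # keep decimal instants exact
    return segments

for seg in edf_schedule({"T1": (2, 0.9), "T2": (5, 2.3)}, 10):
    print(seg)
```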
EDF (Earliest Deadline First)
Algorithm
 Job with the earliest (absolute) deadline has the highest priority (the process closest to its deadline has the highest priority)
 Priorities are assigned dynamically, since the deadlines of the jobs vary. So it is a dynamic-priority algorithm.

Example (Jobs with Precedence):
J1: execution time 3, feasible interval (0, 6]; J2: execution time 2, feasible interval (5, 8]; J3: execution time 3, feasible interval (2, 8]
(Execution times are mentioned before the feasible intervals)

 t = 0: J1 is released; there is no other job in the system, so it gets scheduled
 t = 2: J3 is released. Deadline of J1 = 6, deadline of J3 = 8, so J1 has higher priority and continues
 t = 3: J1 completes, J3 starts
 t = 5: J2 is released. Deadline of J2 = 8, deadline of J3 = 8, so both have the same priority; let J3 continue
 t = 6: J3 is done, J2 gets scheduled
 t = 8: J2 is done

Resulting schedule: J1 on [0, 3], J3 on [3, 6], J2 on [6, 8]
LST (Least-Slack-Time-First)
Algorithm
 Dynamic at both the task level and the job level
 The job with the smallest slack has the highest priority
 At time t, the slack of a job whose remaining execution time is x and whose deadline is d is
   slack = d - t - x
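
A minimal Python sketch of the slack computation and the LST selection rule, using assumed job names from the example that follows:

```python
# A minimal sketch of the slack formula and the LST pick rule.
def slack(t, remaining, deadline):
    """Slack of a job at time t: d - t - x."""
    return deadline - t - remaining

def lst_pick(t, ready_jobs):
    """ready_jobs: dict name -> (remaining execution time, absolute deadline).
    Returns the name of the ready job with the smallest slack."""
    return min(ready_jobs, key=lambda n: slack(t, *ready_jobs[n]))

# At t = 0 in the example below: slack(J1,1) = 2 - 0 - 0.9 = 1.1, slack(J2,1) = 5 - 0 - 2.3 = 2.7
print(lst_pick(0, {"J1,1": (0.9, 2), "J2,1": (2.3, 5)}))   # -> J1,1
```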

Example: Consider two tasks T1 = (2, 0.9) and T2 = (5, 2.3)

(Figure: LST schedule of T1 and T2 from 0 to 11. At each decision instant the ready job with the smallest slack runs.)

 t = 0: slack J1,1 = 1.1, J2,1 = 2.7; J1,1 scheduled
 t = 2: slack J1,2 = 1.1, J2,1 = 1.8; J1,2 scheduled
 t = 4: slack J1,3 = 1.1, J2,1 = 0.9; J2,1 scheduled
 After J2,1 and J1,3 complete, J2,2 is the only job in the system
 t = 6: slack J1,4 = 1.1, J2,2 = 2.7; J1,4 scheduled
 t = 8: slack J1,5 = 1.1, J2,2 = 1.8; J1,5 scheduled
 t = 10: slack J1,6 = 1.1, J2,3 = 2.7; J1,6 scheduled
LST (Least Slack Time First)
Algorithm (Example)
Jobs with Precedence
J1: execution time 3, feasible interval (0, 6]; J2: execution time 2, feasible interval (5, 8]; J3: execution time 3, feasible interval (2, 8]
(Execution times are mentioned before the feasible intervals)

 t = 0: J1 is released; there is no other job in the system, so it gets scheduled
 t = 2: J3 is released. Slack of J1 = 6 - 2 - (3 - 2) = 3, slack of J3 = 8 - 2 - 3 = 3, so both J1 and J3 have the same priority; let J1 continue
 t = 3: J1 completes, J3 starts
 t = 5: J2 is released. Slack of J2 = 8 - 5 - 2 = 1, slack of J3 = 8 - 5 - (3 - 2) = 2, so J2 has higher priority and preempts J3
 t = 7: J2 is done, J3 gets scheduled; slack of J3 = 8 - 7 - (3 - 2) = 0
 t = 8: J3 is done

Resulting schedule: J1 on [0, 3], J3 on [3, 5], J2 on [5, 7], J3 on [7, 8]


LRT (Latest Release Time)
Algorithm
 There is no advantage in finishing early in a hard real-time system.
 The LRT algorithm takes advantage of this fact.
 It treats release times as deadlines and deadlines as release times, and schedules the jobs backward, starting from the latest deadline, in a priority-driven manner.

Example (Jobs with Precedence):
J1: execution time 3, feasible interval (0, 6]; J2: execution time 2, feasible interval (5, 8]; J3: execution time 2, feasible interval (2, 7]
(Execution times are mentioned before the feasible intervals)

 The latest deadline is 8, so start scheduling backward from time 8.
 J2 should start at time 6 so that it completes at time 8 (= 6 + execution time 2).
 At time 7, J3 becomes eligible (its deadline is 7), so it gets scheduled at time 6 (after J2).
 J3 should start at time 4 so that it completes at time 6 (= 4 + execution time 2).
 At time 6, J1 becomes eligible, so it gets scheduled at time 4 (after J3).
 J1 should start at time 1 so that it completes at time 4 (= 1 + execution time 3).

Resulting schedule: J1 on [1, 4], J3 on [4, 6], J2 on [6, 8]
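
A minimal Python sketch of backward (LRT) scheduling for these three jobs; the run-length and tie-breaking details are illustrative assumptions:

```python
# A minimal sketch of LRT: schedule backward from the latest deadline, treating deadlines
# as "release times" and release times as "deadlines".
def lrt_schedule(jobs):
    """jobs: dict name -> (execution time, release time, deadline). Returns forward segments."""
    t = max(d for _, _, d in jobs.values())          # start backward from the latest deadline
    remaining = {n: e for n, (e, _, _) in jobs.items()}
    segments = []
    while any(r > 0 for r in remaining.values()):
        # backward-"ready": unfinished jobs whose deadline has already been reached (d >= t)
        ready = [n for n in jobs if remaining[n] > 0 and jobs[n][2] >= t]
        if not ready:
            t = max(jobs[n][2] for n in jobs if remaining[n] > 0)
            continue
        n = max(ready, key=lambda m: jobs[m][1])     # latest (forward) release time first
        # run backward until the job finishes or another job's deadline is crossed
        limit = max([jobs[m][2] for m in jobs if remaining[m] > 0 and jobs[m][2] < t],
                    default=0)
        run = min(remaining[n], t - limit)
        segments.append((t - run, t, n))
        remaining[n] -= run
        t -= run
    return sorted(segments)

print(lrt_schedule({"J1": (3, 0, 6), "J2": (2, 5, 8), "J3": (2, 2, 7)}))
# J1 runs on [1, 4], J3 on [4, 6], J2 on [6, 8] (J2 may be printed as two adjacent segments)
```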
Schedulable Utilization
A scheduling algorithm can feasibly schedule any set of periodic tasks on a processor if the total utilization of the tasks is equal to or less than the schedulable utilization of the algorithm.

No algorithm can schedule a set of tasks with a total utilization greater than 1.

Schedulable Utilization
Example for U > 1:
Consider EDF schedule for tasks: T1 = (2,1), T2 =(5,3)
Total utilization U = 1/2+ 3/5 = 0.5 + 0.6 = 1.1
J2,2 misses its deadline at 10.

(Figure: EDF schedule of T1 and T2 from 0 to 11.)

 t = 0: deadlines J1,1 = 2, J2,1 = 5; J1,1 scheduled
 t = 2: deadlines J1,2 = 4, J2,1 = 5; J1,2 scheduled
 t = 4: deadlines J1,3 = 6, J2,1 = 5; J2,1 scheduled
 t = 5: deadlines J1,3 = 6, J2,2 = 10; J1,3 scheduled
 t = 6: deadlines J1,4 = 8, J2,2 = 10; J1,4 scheduled
 t = 8: deadlines J1,5 = 10, J2,2 = 10; either can be scheduled
 t = 10: deadlines J1,6 = 12, J2,2 = 10; J2,2 scheduled, but its deadline has already passed
Relative Merits
 Criterion to measure performance: schedulable utilization
 The higher the schedulable utilization, the better the algorithm.
 An algorithm whose schedulable utilization is 1 is an optimal algorithm.
 Optimal dynamic-priority algorithms outperform fixed-priority algorithms in terms of schedulable utilization.
 But the advantage of fixed-priority algorithms is predictability.

Schedulable Utilization of EDF
Algorithm
Theorem:
A system T of independent, preemptable tasks with relative deadlines
equal to their respective periods can be feasibly scheduled on one
processor if and only if its total utilization is equal to or less than 1.

Schedulable utilization of the LST algorithm is also 1.

Schedulability Test of EDF Algorithm
Schedulability condition for the EDF algorithm:

$$\sum_{k=1}^{n} \frac{e_k}{\min(D_k, p_k)} \le 1$$

If the above condition is satisfied, the system is schedulable according to the EDF algorithm.
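
A minimal Python sketch of this condition (the density test); the task parameters in the usage line are the earlier EDF example with deadlines equal to periods:

```python
# A minimal sketch of the EDF schedulability condition above:
# the sum of e_k / min(D_k, p_k) over all tasks must not exceed 1.
def edf_schedulable(tasks):
    """tasks: list of (period p, execution time e, relative deadline D) tuples."""
    return sum(e / min(D, p) for p, e, D in tasks) <= 1

# T1 = (2, 0.9), T2 = (5, 2.3) with D = p:
print(edf_schedulable([(2, 0.9, 2), (5, 2.3, 5)]))   # True (0.45 + 0.46 = 0.91 <= 1)
```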

Optimality of RM & DM
Algorithms
 Since these algorithms assign fixed priorities, they cannot be optimal in general.
 While the RM algorithm is not optimal for arbitrary periods, it is optimal in the special case when the periodic tasks in the system are simply periodic.
 A system of periodic tasks is simply periodic if for every pair of tasks Ti and Tk in the system with pi < pk, pk is an integer multiple of pi.
 In other words, for simply periodic tasks, the periods of all tasks are integer multiples of the shortest period.

Optimality of RM Algorithm
Theorem:

A system of simply periodic, independent, preemptable tasks whose relative deadlines are equal to or larger than their periods is schedulable on one processor according to the RM algorithm if and only if its total utilization is equal to or less than 1.

The RM algorithm is optimal among all fixed-priority algorithms whenever the relative deadlines of the tasks are proportional to their periods.

Optimality of DM Algorithm
Theorem:

A system T of periodic, independent, preemptable tasks that are in phase and have relative deadlines equal to or less than their periods can be feasibly scheduled on one processor according to the DM algorithm whenever it can be feasibly scheduled according to any fixed-priority algorithm.

Sufficient schedulability
condition for RM algorithm
Theorem:
A system of m independent, preemptable periodic tasks with relative deadlines equal to their respective periods can be feasibly scheduled on a processor according to the RM algorithm if its total utilization U is less than or equal to

$$U_{RM}(m) = m\left(2^{1/m} - 1\right)$$

As the number of tasks approaches infinity, the schedulable bound

$$U_{RM} = \lim_{m \to \infty} m\left(2^{1/m} - 1\right) = \ln 2 \approx 0.69 = 69\%$$

U ≤ URM(m) is not a necessary condition; a system of tasks may nevertheless be schedulable even when its total utilization exceeds the schedulable bound URM(m).
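
A minimal Python sketch of this sufficient test; note that a False result is inconclusive, as the slide states:

```python
# A minimal sketch of the Liu & Layland sufficient test for RM: U <= m(2^(1/m) - 1).
def rm_utilization_bound(m):
    return m * (2 ** (1.0 / m) - 1)

def rm_sufficient_test(tasks):
    """tasks: list of (period, execution time). True means RM-schedulable for sure;
    False is inconclusive (a further test such as Lehoczky's is needed)."""
    u = sum(e / p for p, e in tasks)
    return u <= rm_utilization_bound(len(tasks))

print(rm_utilization_bound(3))                        # ~0.7798
print(rm_sufficient_test([(8, 3), (9, 3), (15, 3)]))  # False: U ~ 0.908 exceeds the bound
```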

Lehoczky’s Schedulability Test
for Fixed-Priority Algorithms
Used when the sufficient schedulability condition fails
Theorem
A set of periodic real-time tasks is RMA-schedulable under any task phasing if all the tasks meet their respective first deadlines under zero phasing (i.e. when all tasks have phases equal to 0).

Explanation
Let there be two tasks T1 = (30, 10) and T2 = (120, 60), scheduled as per the RM algorithm.
Then T1 has higher priority than T2.

(Figure: with zero phases for T1 and T2, the first job of T2 finishes at 90.)
(Figure: with T1 at phase 20 and T2 at phase 0, the first job of T2 finishes at 80.)

From this example it is evident that the worst-case response time of a lower-priority task occurs when the phase of this task and the phases of all higher-priority tasks are 0.
Lehoczky’s Schedulability Test
(Contd.)
As seen in the example, within the deadline of the first job of T2, i.e. 120, the higher-priority task T1 can be scheduled 120 / 30 = 4 times.
So T2 has to wait for 4 x 10 (execution time of each job of T1) time slots during the execution of its first job.

Hence, in the worst case, the amount of time a lower-priority task Ti has to wait due to the higher-priority tasks (T1, T2, ..., Ti-1) in the system is

$$\sum_{k=1}^{i-1} \left\lceil \frac{D_i}{p_k} \right\rceil e_k$$

So in the worst case, Ti will be in the system for

$$e_i + \sum_{k=1}^{i-1} \left\lceil \frac{D_i}{p_k} \right\rceil e_k$$

Then, for all the tasks to be feasibly scheduled, this time should be less than or equal to the respective deadline, i.e.

$$e_i + \sum_{k=1}^{i-1} \left\lceil \frac{D_i}{p_k} \right\rceil e_k \le D_i$$

This is Lehoczky's schedulability test.
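
A minimal Python sketch of the test as stated above, assuming RM priorities (shorter period first) and tasks given as (period, execution time, relative deadline) tuples:

```python
# A minimal sketch of the check above: for every task Ti, e_i plus
# ceil(D_i / p_k) * e_k over all higher-priority tasks must not exceed D_i.
from math import ceil

def lehoczky_test(tasks):
    """tasks: list of (period p, execution time e, relative deadline D), any order."""
    tasks = sorted(tasks)                      # RM priority: shorter period first
    for i, (p_i, e_i, d_i) in enumerate(tasks):
        demand = e_i + sum(ceil(d_i / p_k) * e_k for p_k, e_k, _ in tasks[:i])
        if demand > d_i:
            return False
    return True

# The worked example on the next slides: T1 = (8, 3), T2 = (9, 3), T3 = (15, 3), D = p
print(lehoczky_test([(8, 3, 8), (9, 3, 9), (15, 3, 15)]))   # True
```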
Floor and Ceiling Functions
Floor
floor(x) = ⌊x⌋ is the largest integer not greater than x
Ceiling
ceiling(x) = ⌈x⌉ is the smallest integer not less than x

Example:
   x      ⌊x⌋    ⌈x⌉
   2.4     2      3
   5.5     5      6
  -2.1    -3     -2
  -2      -2     -2

Example
Question:
Please check if following sets of tasks can be scheduled by EDF and
RM Algorithms.
T1 = (8,3), T2 = (9, 3), T3 = (15, 3)
Answer:
Utilization U = (3 / 8) + (3 / 9) + (3 / 15) = 0.375 + 0.333 + 0.2 = 0.908
U < 1, so these tasks are schedulable by the EDF algorithm.
Let us calculate the sufficient condition for RM schedulability:

$$U_{RM}(3) = 3\left(2^{1/3} - 1\right) \approx 3 \times (1.26 - 1) = 0.78$$

So U > URM(3); hence the sufficient test fails. But this does not mean that these tasks cannot be scheduled by the RM algorithm.
Let us perform Lehoczky's test.
Example (contd.)
As per RMA, the priorities of these tasks are T1 > T2 > T3.
For T1, the execution time 3 is less than its deadline 8, so it is schedulable.
For T2, the time it will be in the system in the worst case is

$$e_2 + \left\lceil \frac{D_2}{p_1} \right\rceil e_1 = 3 + \left\lceil \frac{9}{8} \right\rceil \times 3 = 3 + 2 \times 3 = 9$$

T2 is schedulable since this time is equal to its deadline.

For T3, the time it will be in the system in the worst case is

$$e_3 + \left\lceil \frac{D_3}{p_1} \right\rceil e_1 + \left\lceil \frac{D_3}{p_2} \right\rceil e_2 = 3 + \left\lceil \frac{15}{8} \right\rceil \times 3 + \left\lceil \frac{15}{9} \right\rceil \times 3 = 3 + 2 \times 3 + 2 \times 3 = 15$$

T3 is schedulable since this time is equal to its deadline.

Hence all 3 tasks pass Lehoczky's schedulability test.
Therefore all 3 tasks are schedulable by the RM algorithm.
Overload condition
 A system is said to be overloaded when the jobs offered to the scheduler cannot be feasibly scheduled even by a clairvoyant scheduler.
 When the system is not overloaded, an optimal on-line scheduling algorithm is one that always produces a feasible schedule of all offered jobs.
 No optimal on-line scheduling algorithm exists when some jobs are non-preemptable.
 During an overload, some jobs must be discarded in order to allow other jobs to complete on time.

Practical Factors

Nonpreemptability
 Some jobs are by nature nonpreemptable, e.g. disk scheduling.
 When a low-priority job is scheduled and it happens to be nonpreemptable, a high-priority job that arrives later (either from the blocked state or because it gets released) has to wait.
 The high-priority job has to wait until the nonpreemptable low-priority job completes.
 This increases the response time of the high-priority job.
 Hence, while considering whether the high-priority jobs can meet their deadlines or not, we also need to consider the effect of low-priority nonpreemptable jobs on them.

Practical Factors

Self-suspension
 A job may suspend itself during execution for various reasons, such as waiting for an I/O operation or a remote procedure call.
 As a result, the OS removes it from the ready queue and puts it in the suspended queue.
 The time spent in self-suspension should also be considered during the timing analysis of the jobs.

Context Switches
 Context switches are a usual phenomenon in a priority-driven system.
 Hence context-switch time should also be taken into consideration during the timing analysis.

Practical Factors
Limited-Priority Levels
 In practical systems the number of priority levels is limited (e.g. a token-ring network has 8 priority levels; RTOSes usually have 256 levels)
 Hence tasks (jobs) may have non-distinct priorities, which needs to be considered during the analysis.

Tick Scheduling
 In our analysis so far, we have assumed that the scheduler makes scheduling decisions as and when jobs arrive (i.e. the scheduler is event-driven).
 In practice, there is a timer running and the scheduler wakes up at each timer tick.
 So even if a job is ready, the scheduler may not notice it until the next timer interrupt. This introduces a certain delay in the completion of the job.
 Also, a job that is ready but not yet noticed must be held in a separate queue until it is moved to the ready queue.
 These factors should also be considered during the analysis.

Practical Factors
Varying Priority in Fixed-Priority Systems
 In order to tackle the priority-inversion problem, the priorities of lower-priority jobs are sometimes raised. Such an operation has an effect on the analysis.

Hierarchical Scheduling
 This scheduling is used when there are multiple tasks/jobs of the same priority.
 The tasks/jobs having the same priority are put into a cluster/subsystem.
 Two common types of scheduling approaches are used:
 Priority-driven/round-robin system: the clusters are scheduled in a priority-driven manner and the tasks/jobs within a cluster are scheduled in a round-robin manner.
 Fixed-time partitioning scheme: the clusters/subsystems are scheduled according to a cyclic schedule and the tasks/jobs in a subsystem are scheduled as per the scheduling algorithm chosen for that subsystem.

Schedulability Test for Fixed-Priority Tasks
with Short Response Times



Assumptions
We will confine our attention to the case where
o Priorities are fixed (e.g. RM Algorithm)
o Response times of the jobs are smaller than or equal to their respective
periods.

Critical Instants
A critical instant of a task Ti is a time instant such that:
 the job in Ti released at that instant has the maximum response time of all jobs in Ti, if the response time of every job in Ti is equal to or less than the relative deadline Di of Ti,
And
 the response time of the job released at that instant is greater than Di if the response time of some job in Ti exceeds Di.

In other words, a job in Ti released at a critical instant has the maximum response time.

Theorem

In a fixed-priority system where every job completes before the next job in the same task is released, a critical instant of any task Ti occurs when one of its jobs Ji,c is released at the same time as a job in every higher-priority task, that is, r_{i,c} = r_{k,l_k} for some l_k, for every k = 1, 2, ..., i-1.
Time Demand Function
Suppose the release time t0 of the job is a critical instant of task Ti.
Then at time t0 + t, t > 0, the total processor time demand wi(t) of this job and all the higher-priority jobs released in [t0, t] is given by

$$w_i(t) = e_i + \sum_{k=1}^{i-1} \left\lceil \frac{t}{p_k} \right\rceil e_k, \qquad 0 < t \le p_i$$

wi(t) is called the time-demand function of task Ti.

If wi(t) > t for all 0 < t ≤ Di, then the job cannot complete by its deadline.

The maximum possible response time Wi of all jobs in Ti is equal to the smallest value of t (because the job will try to complete at the earliest possible time) that satisfies the equation

$$t = e_i + \sum_{k=1}^{i-1} \left\lceil \frac{t}{p_k} \right\rceil e_k, \qquad 0 < t \le p_i$$
Time Demand Function -
Example
4 tasks T1 = (φ1, 3, 1), T2 = (φ2, 5, 1.5), T3 = (φ3, 7, 1.25), T4 = (φ4, 9, 0.5) are scheduled based on the RM algorithm.
So the priorities of these tasks are T1 > T2 > T3 > T4.

$$w_1(t) = e_1 = 1, \quad 0 < t \le 3$$

$$w_2(t) = e_2 + \left\lceil \frac{t}{p_1} \right\rceil e_1 = 1.5 + \left\lceil \frac{t}{3} \right\rceil \cdot 1, \quad 0 < t \le 5
\;=\; \begin{cases} 2.5, & 0 < t \le 3 \\ 3.5, & 3 < t \le 5 \end{cases}$$

$$w_3(t) = e_3 + \left\lceil \frac{t}{p_1} \right\rceil e_1 + \left\lceil \frac{t}{p_2} \right\rceil e_2 = 1.25 + \left\lceil \frac{t}{3} \right\rceil \cdot 1 + \left\lceil \frac{t}{5} \right\rceil \cdot 1.5, \quad 0 < t \le 7
\;=\; \begin{cases} 3.75, & 0 < t \le 3 \\ 4.75, & 3 < t \le 5 \\ 6.25, & 5 < t \le 6 \\ 7.25, & 6 < t \le 7 \end{cases}$$
Time Demand Function –
Example (contd.)

$$w_4(t) = e_4 + \left\lceil \frac{t}{p_1} \right\rceil e_1 + \left\lceil \frac{t}{p_2} \right\rceil e_2 + \left\lceil \frac{t}{p_3} \right\rceil e_3
= 0.5 + \left\lceil \frac{t}{3} \right\rceil \cdot 1 + \left\lceil \frac{t}{5} \right\rceil \cdot 1.5 + \left\lceil \frac{t}{7} \right\rceil \cdot 1.25, \quad 0 < t \le 9$$

$$w_4(t) = \begin{cases} 4.25, & 0 < t \le 3 \\ 5.25, & 3 < t \le 5 \\ 6.75, & 5 < t \le 6 \\ 7.75, & 6 < t \le 7 \\ 9.0, & 7 < t \le 9 \end{cases}$$

Time Demand Function
(Figure: plot of the time-demand functions w1(t), w2(t), w3(t) and w4(t) against the supply line y = t. Where wi(t) > t, demand exceeds supply; where wi(t) < t, supply exceeds demand. The curves first meet y = t at t = 1, 2.5, 4.75 and 9 respectively.)

Note: for T2, the line y = t crosses the time-demand function at 2.5 and 3.5. The maximum response time is 2.5, the smallest value of t satisfying the equation; 3.5 cannot be taken as the maximum response time, because within one period of 5 a job of T2 can complete at the earliest possible instant, which is 2.5.
Time Demand Function
4 tasks T1 = (3, 1), T2 = (5, 1.5), T3 = (7, 1.25), T4 = (9, 0.5) are scheduled based on the RM algorithm.

So the priorities of these tasks are: T1 > T2 > T3 > T4.

(Figure: Gantt chart of the RM schedule from 0 to 18 with the response time of each job shown as an arrow. The worst-case response times, which occur for the jobs released at time 0, are 1 for T1, 2.5 for T2, 4.75 for T3 and 9 for T4.)
Time Demand Analysis
 The time-demand function is a staircase function, with steps at integer multiples of the periods of the higher-priority tasks.
 wi(t) is the demand for processor time and t is the supply of processor time.
 The task is schedulable if, at some point of time during the inter-release time of two adjacent jobs of the task, the demand is less than or equal to the supply.
 This happens if wi(t) ≤ t at some point of time during that interval,
 which means that wi(t) must intersect the straight line y(t) = t.
 This can only happen if wi(t) ≤ t for some t equal to an integer multiple of the period of one of the higher-priority tasks, or to the period of the current task itself.
 The intersection of wi(t) and y(t) = t indicates the maximum response time, since this is the time instant where demand = supply.

Time Demand Analysis
Time Demand Analysis method proposed by Lehoczky
For each task Ti ,

1. Compute the time-demand function wi(t):

$$w_i(t) = e_i + \sum_{k=1}^{i-1} \left\lceil \frac{t}{p_k} \right\rceil e_k, \qquad 0 < t \le p_i$$

2. Check whether the inequality

$$w_i(t) \le t$$

is satisfied for the values of t that are equal to

$$t = j \, p_k, \qquad k = 1, 2, \ldots, i; \quad j = 1, 2, \ldots, \left\lfloor \frac{\min(p_i, D_i)}{p_k} \right\rfloor$$

If this inequality is satisfied at any of these instants, Ti is schedulable.
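
A minimal Python sketch of this two-step test, assuming tasks are listed in decreasing RM priority order as (period, execution time, relative deadline) tuples:

```python
# A minimal sketch of the two-step test above: compute w_i(t) and check w_i(t) <= t
# at t = j * p_k for k = 1..i and j = 1..floor(min(p_i, D_i) / p_k).
from math import ceil, floor

def tda_task_schedulable(i, tasks):
    """tasks: list of (period p, execution e, relative deadline D), index 0 = highest priority."""
    p_i, e_i, d_i = tasks[i]
    w = lambda t: e_i + sum(ceil(t / p_k) * e_k for p_k, e_k, _ in tasks[:i])
    if i == 0:
        return e_i <= d_i                      # no higher-priority interference
    checkpoints = {j * p_k
                   for p_k, _, _ in tasks[:i + 1]
                   for j in range(1, floor(min(p_i, d_i) / p_k) + 1)}
    return any(w(t) <= t for t in checkpoints)

tasks = [(3, 1, 3), (5, 1.5, 5), (7, 1.25, 7), (9, 0.5, 9)]
print([tda_task_schedulable(i, tasks) for i in range(len(tasks))])   # [True, True, True, True]
```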
Time Demand Analysis -
Example
Let us take the earlier example, i.e. T1 = (φ1, 3, 1), T2 = (φ2, 5, 1.5), T3 = (φ3, 7, 1.25), T4 = (φ4, 9, 0.5).
The time-demand functions for all 4 tasks have been calculated:

$$w_1(t) = 1.0, \quad 0 < t \le 3$$

$$w_2(t) = \begin{cases} 2.5, & 0 < t \le 3 \\ 3.5, & 3 < t \le 5 \end{cases}$$

$$w_3(t) = \begin{cases} 3.75, & 0 < t \le 3 \\ 4.75, & 3 < t \le 5 \\ 6.25, & 5 < t \le 6 \\ 7.25, & 6 < t \le 7 \end{cases}$$

$$w_4(t) = \begin{cases} 4.25, & 0 < t \le 3 \\ 5.25, & 3 < t \le 5 \\ 6.75, & 5 < t \le 6 \\ 7.75, & 6 < t \le 7 \\ 9.0, & 7 < t \le 9 \end{cases}$$
Time Demand Analysis –
Example (contd.)
For T1:
w1(t) = 1.0, 0 < t ≤ 3
i = 1, so k and j do not exist (there are no higher-priority tasks). Hence T1 is schedulable.

For T2: w2(t) = 2.5 for 0 < t ≤ 3, and 3.5 for 3 < t ≤ 5.
i = 2, so k = 1, 2
For k = 1: j = 1, ..., ⌊min(5, 5)/3⌋ = 1; t = j pk = 1 x 3 = 3, w2(3) = 2.5 < 3.
For k = 2: j = 1, ..., ⌊min(5, 5)/5⌋ = 1; t = j pk = 1 x 5 = 5, w2(5) = 3.5 < 5.
Hence T2 is schedulable, since the inequality is satisfied (in fact at every checked instant).

For T3: w3(t) = 3.75 for 0 < t ≤ 3, 4.75 for 3 < t ≤ 5, 6.25 for 5 < t ≤ 6, and 7.25 for 6 < t ≤ 7.
i = 3, so k = 1, 2, 3
For k = 1: j = 1, ..., ⌊min(7, 7)/3⌋ = 2.
  For j = 1: t = 1 x 3 = 3, w3(3) = 3.75 > 3.
  For j = 2: t = 2 x 3 = 6, w3(6) = 6.25 > 6.
For k = 2: j = 1, ..., ⌊min(7, 7)/5⌋ = 1; t = 1 x 5 = 5, w3(5) = 4.75 < 5.
For k = 3: j = 1, ..., ⌊min(7, 7)/7⌋ = 1; t = 1 x 7 = 7, w3(7) = 7.25 > 7.
Hence T3 is schedulable, since the inequality is satisfied for t = 5.
Time Demand Analysis –
Example (contd.)
For T4: w4(t) = 4.25 for 0 < t ≤ 3, 5.25 for 3 < t ≤ 5, 6.75 for 5 < t ≤ 6, 7.75 for 6 < t ≤ 7, and 9.0 for 7 < t ≤ 9.
i = 4, so k = 1, 2, 3, 4
For k = 1: j = 1, ..., ⌊min(9, 9)/3⌋ = 3.
  For j = 1: t = 1 x 3 = 3, w4(3) = 4.25 > 3.
  For j = 2: t = 2 x 3 = 6, w4(6) = 6.75 > 6.
  For j = 3: t = 3 x 3 = 9, w4(9) = 9.
For k = 2: j = 1, ..., ⌊min(9, 9)/5⌋ = 1; t = 1 x 5 = 5, w4(5) = 5.25 > 5.
For k = 3: j = 1, ..., ⌊min(9, 9)/7⌋ = 1; t = 1 x 7 = 7, w4(7) = 7.75 > 7.
For k = 4: j = 1, ..., ⌊min(9, 9)/9⌋ = 1; t = 1 x 9 = 9, w4(9) = 9.

T4 is schedulable, since the inequality w4(9) ≤ 9 is satisfied at t = 9.


Busy Intervals

The time interval (t0, t] is called a level-πi busy interval if, in this interval, the processor is busy all the time executing jobs with priorities πi or higher, all the jobs executed in the interval are released within the interval, and at the end of the interval there is no backlog of jobs to be executed afterwards.

Thank You.

Any Questions?
