RTS Unit 2 Notes
Clock-Driven Scheduling
Assumptions:
The following assumptions and notation are used to describe clock-driven scheduling. The key assumptions are that the system consists of a fixed number of periodic tasks and that the parameters of every job (release times, execution times, and deadlines) are known a priori.
Notations:
In clock-driven scheduling, each periodic task is described by a tuple of four parameters. The standard notation is Ti = (φi, pi, ei, Di), where Ti denotes a periodic task with phase φi, period pi, execution time ei, and relative deadline Di.
Example:
T2 = (10, 3, 6) ⇒ φ2 = 0, p2 = 10, e2 = 3, D2 = 6. When only three parameters are given, the phase defaults to 0 (and when the relative deadline is omitted, it defaults to the period).
The scheduler dispatches the jobs according to the static schedule and repeats the schedule every hyperperiod. The static schedule guarantees that each job completes by its deadline, so no job overrun can occur.
Example:
Four independent periodic tasks: T1 = (4, 1), T2 = (5, 1.8), T3 = (20, 1), T4 = (20, 2).
Utilization = 1/4 + 1.8/5 + 1/20 + 2/20 = 0.76
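The utilization and the hyperperiod of this task set can be checked mechanically. This is a minimal sketch; the helper names are ours, not from the notes, and integer periods are assumed:

```python
from math import gcd
from functools import reduce

def utilization(tasks):
    """Total utilization U = sum(e_i / p_i) over (period, exec_time) pairs."""
    return sum(e / p for p, e in tasks)

def hyperperiod(tasks):
    """Hyperperiod H = lcm of all (integer) periods."""
    return reduce(lambda a, b: a * b // gcd(a, b), (p for p, _ in tasks))

tasks = [(4, 1.0), (5, 1.8), (20, 1.0), (20, 2.0)]
print(utilization(tasks))   # ≈ 0.76, as computed above
print(hyperperiod(tasks))   # 20
```

Since the hyperperiod is 20, the static schedule needs to be stored for only 20 time units.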
In this schedule, T1 starts executing at time 0 and repeats every period. The intervals not used by any periodic task are called slack time. This is an advantage, since aperiodic jobs can be executed in these intervals.
Table-driven schedulers pre-compute which task runs when and store this schedule in a table at the time the system is designed. Rather than having the scheduler compute the schedule automatically, the application programmer can be given the freedom to select his own schedule for the set of tasks in the application and store it in a table (called the schedule table) that the scheduler uses at run time.
An example of a schedule table is shown in the following figure. This approach is difficult to implement for large and complex systems because the table becomes very large.
Task    Start time (ms)
T1      0
T2      3
T3      10
T4      12
T5      17
Fig: Table-Driven Scheduling
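A table-driven dispatcher can be sketched as follows. The table mirrors the one above; the major-cycle length of 20 ms and all helper names are illustrative assumptions, not from the notes:

```python
# Schedule table from the text: (start time in ms, task name).
SCHEDULE_TABLE = [(0, "T1"), (3, "T2"), (10, "T3"), (12, "T4"), (17, "T5")]
MAJOR_CYCLE = 20  # ms; assumed length after which the table repeats

def dispatch_sequence(n_cycles):
    """Yield (absolute_start_ms, task) pairs for n_cycles repetitions
    of the schedule table; a real dispatcher would set a timer for each
    start time instead of yielding."""
    for cycle in range(n_cycles):
        base = cycle * MAJOR_CYCLE
        for start, task in SCHEDULE_TABLE:
            yield base + start, task

for t, task in dispatch_sequence(2):
    print(t, task)
```

The dispatcher simply walks the table and repeats it each major cycle, which is why the table size (not the scheduling logic) is the limiting factor.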
2. Cyclic Schedule:
Cyclic schedules are very popular and extensively used in industry. They are simple, efficient, and easy to program. An example application where a cyclic schedule is used is a temperature controller, which periodically samples the temperature of a room and maintains it at a preset value. Such temperature controllers are embedded in typical computer-controlled air conditioners.
A cyclic scheduler repeats a pre-computed schedule. The pre-computed schedule needs to be stored only for one major
cycle.
The total scheduling time is divided into a number of intervals called frames, each of length f (the frame size). Scheduling decisions are made only at the beginning of each frame, and there is no preemption within a frame. The phase of each periodic task is a non-negative integer multiple of the frame size, so the first job of every task is released at the beginning of a frame, with φi = k·f for some integer k. This approach provides two major benefits:
o The scheduler can easily check for overruns and missed deadlines at the end of each frame.
o A periodic clock interrupt can be used rather than a programmable timer.
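The frame-based structure above can be illustrated with a simulated cyclic executive. This is a sketch under simplifying assumptions (simulated time, an overrun check expressed as time accounting); the names and the logging policy are ours:

```python
def cyclic_executive(table, frame_size, n_major_cycles, run_slice):
    """table[k] lists the job slices assigned to frame k of the major cycle.
    run_slice(name) returns the time that slice consumed.  At each frame
    boundary the executive checks whether the frame's slices overran the
    frame size, mimicking the end-of-frame overrun check."""
    log = []
    for cycle in range(n_major_cycles):
        for k in range(len(table)):
            used = 0.0
            for job in table[k]:
                used += run_slice(job)
            # overrun / deadline check at the frame boundary
            log.append((cycle, k, "OVERRUN" if used > frame_size else "ok"))
    return log

# Illustrative use: two frames of size 4, execution times looked up in a dict.
times = {"T1": 1.0, "T2": 1.8}
print(cyclic_executive([["T1", "T2"], ["T1"]], 4.0, 1, times.get))
# [(0, 0, 'ok'), (0, 1, 'ok')]
```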
Frame Size Constraints:
The frame size f must be chosen to satisfy three constraints:
1. To avoid preemption, jobs should start and complete execution within a single frame: f ≥ max(ei).
2. To keep the cyclic schedule table short, f should divide the hyperperiod; equivalently, f must divide the period pi of at least one task.
3. To allow the scheduler to check that jobs complete by their deadlines, there should be at least one frame boundary between the release time of every job and its deadline: 2f − gcd(pi, f) ≤ Di for every task Ti.
Example: given the tasks T1 = (4, 1.0), T2 = (5, 1.8), T3 = (20, 1.0), T4 = (20, 2.0):
Constraint 1 ⇒ f ≥ 2
Constraint 2 ⇒ f ∈ { 2, 4, 5, 10, 20 }
Constraint 3 ⇒ 2f − gcd(5, f) ≤ 5 for T2, and similarly for the other tasks
Only f = 2 satisfies all three constraints.
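The three constraints can be checked mechanically for candidate frame sizes. A small sketch, assuming integer periods and candidate frame sizes (function name is ours); it reproduces the result that only f = 2 works for this task set:

```python
from math import gcd

def valid_frame_sizes(tasks, candidates):
    """tasks: list of (period, exec_time, deadline) triples, integer periods.
    C1: f >= max e_i                  (each job fits in one frame)
    C2: f divides at least one period (so f divides the hyperperiod)
    C3: 2f - gcd(p_i, f) <= D_i       (a frame boundary falls between
                                       each job's release and deadline)"""
    e_max = max(e for _, e, _ in tasks)
    return [f for f in candidates
            if f >= e_max
            and any(p % f == 0 for p, _, _ in tasks)
            and all(2 * f - gcd(p, f) <= d for p, _, d in tasks)]

tasks = [(4, 1.0, 4), (5, 1.8, 5), (20, 1.0, 20), (20, 2.0, 20)]
print(valid_frame_sizes(tasks, range(1, 21)))  # [2]
```

For instance f = 4 fails constraint 3 for T2 (2·4 − gcd(5, 4) = 7 > 5), and f = 5 fails it for T1.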
Job Slices:
Sometimes a system cannot meet all three frame-size constraints simultaneously. In that situation we can often solve the problem by partitioning a job with a large execution time into sub-jobs with shorter execution times and deadlines; these sub-jobs are called job slices.
Condition 1 ⇒ f ≥ 5
Condition 2 ⇒ f ∈ { 2, 4, 5, 10, 20 }
These conditions cannot be satisfied together, so the task with the large execution time must be sliced. In this example, we can divide each job of (20, 5) into a chain of three slices with execution times 1, 3, and 1, i.e. (20, 1), (20, 3), and (20, 1). The cyclic schedule is then generated with these slices in place of the original task.
Sometimes jobs must be partitioned into more slices than the frame-size constraints require in order to yield a feasible schedule. To construct a cyclic schedule, we must make three kinds of design decisions: choosing a frame size, partitioning jobs into slices, and placing the slices in frames.
Slack Stealing:
A natural way to improve the response times of aperiodic jobs is to execute them ahead of the periodic jobs whenever possible. This approach is called slack stealing. Every periodic job slice must be scheduled in a frame that ends no later than its deadline; when an aperiodic job executes ahead of a slice of a periodic task, it consumes the slack in that frame. Slack stealing reduces the response time of aperiodic jobs but requires an accurate timer.
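The idea can be sketched for a single frame. This is a toy model with deliberately simplified bookkeeping (one precomputed slack value per frame); all names are illustrative:

```python
def run_frame(frame_slack, periodic_slices, aperiodic_queue):
    """Return the execution order for one frame.  Aperiodic work runs
    ahead of the frame's periodic slices only while slack remains, so the
    periodic slices still complete within the frame (and hence by their
    deadlines).  aperiodic_queue holds (name, remaining_time) pairs and
    is consumed in place."""
    order = []
    while aperiodic_queue and frame_slack > 0:
        name, remaining = aperiodic_queue[0]
        run = min(remaining, frame_slack)   # steal at most the frame's slack
        order.append((name, run))
        frame_slack -= run
        if run == remaining:
            aperiodic_queue.pop(0)          # aperiodic job finished
        else:
            aperiodic_queue[0] = (name, remaining - run)  # preempted
    order.extend(periodic_slices)           # then run the periodic slices
    return order

# One unit of slack: A1 runs first for 1 unit, then is preempted.
print(run_frame(1.0, [("T1", 1.0), ("T2", 2.0)], [("A1", 1.5)]))
```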
For example:
Three periodic tasks T1 (4, 1), T2 (5, 2), and T3 (8, 10, 1)
Three aperiodic jobs, given as (release time, execution time): A1 (4, 1.5), A2 (9.5, 0.5), and A3 (10.5, 2)
In the major cycle of the cyclic schedule of the periodic tasks, slack time is available at 3 to 4, 10 to 12, 15 to 16, and 18 to 20, where the aperiodic jobs can execute. The aperiodic jobs execute as follows.
When the aperiodic jobs are executed by the cyclic executive:
Job A1 executes in the idle period starting at 7, is preempted, resumes at 10, and completes at 10.5.
At time 10.5, A2 starts executing and finishes at 11. A3 then starts at 11 and is preempted at 12.
Job A3 resumes in the slack starting at 15 and completes at 16.
Here,
Response time of A1 = 6.5
Response time of A2 = 1.5
Response time of A3 = 5.5
Average response time = 4.5
With slack stealing:
Between 4 and 8 there is a slack of 1, so A1 executes at time 4 and is preempted at 5.
Between 8 and 12 there is a slack of 2, so A1 resumes and completes, A2 executes from 9.5 to 10, and A3 executes from 11 to 12 and is then preempted.
Between 12 and 16 there is a slack of 1, so A3 resumes at 12 and completes at 13.
Here, the aperiodic jobs complete earlier than they do under the cyclic executive alone, so their response times improve.

Scheduling Sporadic Jobs:
Sporadic jobs are handled in two steps:
1. Acceptance test
2. EDF scheduling of accepted jobs
1. Acceptance Test:
The main problem is to determine whether all sporadic jobs can complete in time. A common way to deal with this is to have the scheduler perform an acceptance test when each sporadic job is released. During the acceptance test, the scheduler checks whether the newly released sporadic job can be feasibly scheduled together with all the jobs currently in the system. If there is a sufficient amount of slack time in the frames before its deadline to complete the newly released sporadic job without causing any job in the system to complete too late, the scheduler accepts and schedules the job; otherwise it rejects it. That is:
If the total slack time in the frames before the job's deadline ≥ its execution time, and accepting it does not cause any job in the system to complete too late, then
accept the job
Else
reject the job
If several sporadic jobs are released at the same time, they are queued and tested one at a time in EDF order.
2. EDF Scheduling of Accepted Jobs:
EDF is the best-suited method for scheduling the accepted sporadic jobs. For this purpose, the scheduler maintains a queue of the accepted sporadic jobs sorted in non-decreasing order of their deadlines and inserts each newly accepted sporadic job into this queue in that order.
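The two steps can be sketched together. The slack bookkeeping is deliberately simplified to a single number per job, the numbers are illustrative (not those of the example below), and a real acceptance test must also verify that no previously accepted job is harmed:

```python
import heapq

accepted = []  # min-heap keyed on absolute deadline (EDF order)

def acceptance_test(release, deadline, exec_time, slack_before_deadline):
    """Accept the sporadic job only if the slack available in the frames
    before its deadline covers its execution time; on acceptance, insert
    the job into the EDF-ordered queue of accepted jobs."""
    if slack_before_deadline >= exec_time:
        heapq.heappush(accepted, (deadline, release, exec_time))
        return True
    return False

print(acceptance_test(0, 20, 4.5, 4.0))  # False: slack 4 < execution time 4.5
print(acceptance_test(1, 30, 4.0, 5.0))  # True: slack 5 >= 4
```

Because the queue is a min-heap on the deadline, popping it always yields the accepted job with the earliest deadline, which is exactly the EDF dispatch order.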
For example:
The frame size is 4, the gray rectangles are periodic tasks, and S1 – S4 are sporadic jobs with parameters (Di, Ci).
S1 is released during frame 1.
o It must be scheduled in frames 2, 3, and 4.
o Acceptance test at the beginning of frame 2: the slack time is 4, which is less than S1's execution time, so the job is rejected.
S2 is released during frame 2.
o It must be scheduled in frames 3 through 7.
o Acceptance test at the beginning of frame 3: the slack time is 5, which is sufficient, so the job is accepted.
o The first part (2 units) executes in the current frame.
S3 is released during frame 3.
o It must be scheduled in frames 4 and 5. S3 runs ahead of S2 because its deadline is earlier (EDF order).
o Acceptance test at the beginning of frame 4: the slack time is 2, which is enough for S3 and the remaining part of S2, so the job is accepted. The first part (1 unit) of S3 executes in the current frame, followed by the second part of S2.
S4 is released at time 14.
o Acceptance test at the beginning of frame 5: the slack time is 5 after accounting for the slack already committed to S2 and S3, which is insufficient for S4, so the job is rejected.
o The remaining portion of S3 completes in the current frame, followed by a part of S2.
o The remaining portions of S2 execute in the next two frames.
Handling overruns:
o Jobs are scheduled based on their maximum execution times, but failures might cause overruns.
o A robust system will handle an overrun by either: 1) killing the job and starting an error-recovery task; or 2) preempting the job and scheduling the remainder as an aperiodic job.
o The right choice depends on the usefulness of late results, dependencies between jobs, and so on.
Mode changes:
o A cyclic scheduler needs to know all parameters of the real-time jobs a priori.
o Switching between modes of operation implies reconfiguring the scheduler and bringing in the code/data for the new jobs.
o This can take a long time, so the reconfiguration job should be scheduled as an aperiodic or sporadic task to ensure that other deadlines are met during the mode change.
Multiple processors:
Scheduling as a Network-Flow Problem (the INF Algorithm):
The major component of the INF (iterative network-flow) algorithm is the network-flow graph. The constraints on when the jobs can be scheduled are represented by the network-flow graph of the system. This graph contains the following vertices and edges; the capacity of an edge is a nonnegative number associated with the edge.
1. There is a job vertex Ji for every job (or job slice) to be scheduled, and a frame vertex j for every frame in the major cycle.
2. There are two special vertices: a source and a sink.
3. There is a directed edge (Ji, j) from a job vertex Ji to a frame vertex j if the job Ji can be scheduled in the frame j, and the capacity of the edge is the frame size f.
4. There is a directed edge from the source vertex to every job vertex Ji, and the capacity of this edge is the execution time ei of the job.
5. There is a directed edge from every frame vertex to the sink, and the capacity of this edge is the frame size f.
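With this construction, the jobs can be feasibly placed in frames exactly when the maximum flow from source to sink equals the total execution time. A sketch using Edmonds-Karp max-flow; the function names and the tiny example are illustrative, and the model is simplified to a single processor:

```python
from collections import deque, defaultdict

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly find a shortest augmenting path by BFS
    in the residual graph cap (dict of dicts), and push flow along it."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)       # bottleneck capacity
        for u, v in path:                          # update residual graph
            cap[u][v] -= b
            cap[v][u] = cap[v].get(u, 0) + b
        flow += b

def feasible(jobs, n_frames, f):
    """jobs: {name: (exec_time, allowed_frames)}.  Build the graph of the
    text: source->job with capacity e_i, job->frame with capacity f,
    frame->sink with capacity f.  Feasible iff max flow == sum of e_i."""
    cap = defaultdict(dict)
    for name, (e, frames) in jobs.items():
        cap["src"][name] = e
        for k in frames:
            cap[name][("F", k)] = f
    for k in range(n_frames):
        cap[("F", k)]["sink"] = f
    demand = sum(e for e, _ in jobs.values())
    return max_flow(cap, "src", "sink") >= demand

# J2's 3 units can be split across frames 0 and 1, so this set is feasible.
print(feasible({"J1": (1, [0]), "J2": (3, [0, 1])}, 2, 2))  # True
```

Note that a flow may split a job's execution time across several frame edges; those pieces are exactly the job slices discussed earlier.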
Pros:
Conceptual simplicity.
Ability to consider complex dependencies, communication delays, and resource contention among jobs when constructing the static schedule, guaranteeing the absence of deadlocks and unpredictable delays.
The entire schedule is captured in a static table.
Different operating modes can be represented by different tables.
No concurrency control or synchronization mechanisms are needed.
If completion-time jitter requirements exist, they can be captured in the schedule.
o When the workload is mostly periodic and the schedule is cyclic, timing constraints can be checked and enforced at each frame boundary.
o The choice of frame size can minimize context switching and communication overhead.
o Relatively easy to validate, test, and certify.
Cons:
Inflexible.
Pre-compilation of knowledge into scheduling tables means that if anything changes materially, the tables have to be regenerated.
Best suited for systems that are rarely modified once built.
Other disadvantages:
o Release times of all jobs must be fixed.
o All possible combinations of periodic tasks that can execute at the same time must be known a priori, so
that the combined schedule can be pre-computed