DOT/FAA/AR-05/27
REAL-TIME SCHEDULING ANALYSIS
Final Report
November 2005
The Federal Aviation Administration Airport and Aircraft Safety R&D Division COTR was Charles Kilgore.
ABSTRACT
This project was concerned with scheduling analysis of real-time tasks. It consisted of two major tasks. The first task explored
and reported the industry approaches to scheduling real-time tasks and the tools they used in the verification of temporal
correctness. To carry out this task, a questionnaire was designed and sent to a number of industry people who are involved in
developing software for real-time systems. Their responses were analyzed and conclusions were drawn.
The second task consisted of developing scheduling algorithms and temporal verification tools for a model of periodic, real-time
tasks. An optimal scheduling algorithm, called Deadline-Monotonic-with-Limited-Priority-Levels, was developed for a system
with a single processor and a limited number of priority levels. A procedure to determine if a given set of periodic, real-time
tasks is feasible on one processor with m priority levels, where m is less than the number of tasks, was also developed.
Two heuristics for a multiprocessor system with a limited number of priority levels were given. Additionally, a conjecture on the
processor utilization bound, U(n), below which a set of unit-execution-time tasks is always schedulable was provided. While a
complete proof of the conjecture has not been accomplished, it has been demonstrated that it is valid for several special cases.
Key Words: Periodic, real-time tasks; Rate-monotonic and deadline-monotonic algorithms; Validation procedure; Real-time scheduling
This document is available to the public through the National Technical Information Service (NTIS), Springfield, Virginia 22161.
EXECUTIVE SUMMARY
Many real-time systems are embedded in sensors and actuators and function as digital
controllers. Typically, in these types of applications, signals arrive periodically at fixed intervals.
When the signals arrive, they must be processed before the arrival of the next batch of signals.
Real-time tasks can be classified as hard real-time or soft real-time. Hard real-time tasks are
those that require a strict adherence to deadline constraints, or else the consequence is disastrous.
By contrast, soft real-time tasks are those that do not require a strict adherence to deadline
constraints, but it is desirable to do so. Most real-time systems have a combination of both hard
and soft real-time tasks.
Hard real-time tasks add another dimension to verifying their correctness. Not only does their logical correctness need to be verified (i.e., the program does exactly what it is supposed to do), their temporal correctness must also be verified (i.e., all deadlines are met). A
hard real-time task must be both logically and temporally correct for it to be usable. Since a hard
real-time task is executed periodically during its entire operational time and since the period of
every task is very small compared to the duration of its operation, the schedule can be regarded
as an infinite schedule for all practical purposes. Verifying the temporal correctness of an
infinite schedule is a challenging problem since there are infinitely many deadlines to check.
One of the main goals of this project is to develop tools to solve this verification problem.
This project consists of two major jobs. The first job was to explore and report the industry
approaches to scheduling real-time tasks and the tools they use in the verification of temporal
correctness. A questionnaire was developed and sent to a number of industry representatives
who are involved in developing software for real-time systems. Based on their responses, some
conclusions were drawn, which are described in this report.
The second job consisted of developing scheduling algorithms and temporal verification tools for
a model of periodic, real-time tasks. An optimal scheduling algorithm, called Deadline-
Monotonic-with-Limited-Priority-Levels, was developed for a system with a single processor
and a limited number of priority levels. As a byproduct of the work on the Deadline-Monotonic-
with-Limited-Priority-Levels algorithm, a procedure to determine if a given set of periodic, real-
time tasks is feasible on one processor with m priority levels, where m is less than the number of
tasks, was also developed.
This report begins with a summary of the industry survey results, then the three approaches that
were used to schedule a real-time task system are discussed: (1) Clock-Driven, (2) Processor-
Sharing, and (3) Priority-Driven. It was reasoned that the Priority-Driven approach is far
superior to the Clock-Driven and Processor-Sharing approaches. The report then reviews the
literature on Priority-Driven scheduling algorithms, which can be divided into two categories:
Dynamic-Priority and Fixed-Priority. While Dynamic-Priority scheduling algorithms are more
effective than Fixed-Priority scheduling algorithms, they are rarely used in practice because of
the overhead involved. Therefore, the report concentrates on Fixed-Priority scheduling
algorithms.
The Deadline Monotonic algorithm is an optimal Fixed-Priority scheduling algorithm for one
processor. Unfortunately, the algorithm assumes that the number of priorities is the same as the
number of real-time tasks. In practice, one can only have a limited number of priorities, say m,
supported by a system. Under this scenario, the Deadline Monotonic algorithm fails to be
optimal, and as a result of the work to find an optimal scheduling algorithm, the Deadline-
Monotonic-with-Limited-Priority-Levels algorithm was developed, along with a procedure to
check if a given set of real-time tasks is feasible on one processor with m priority levels.
The same problem was explored for multiprocessor systems. It was demonstrated that finding an
optimal assignment is strongly nondeterministic polynomial (NP)-hard, which strongly suggests that no efficient algorithm exists for this problem. Motivated by the
computational complexity, several heuristics for solving this problem are suggested.
The minimum processor utilization, U(n), for a set of n unit-execution-time tasks, was also
studied. U(n) is the threshold for the total processor utilization of the n tasks, below which they
are always schedulable. It is conjectured that
U(n) = 1/(n-1) + 1/n + 1/(n+1) + ... + 1/(2n-3) + 1/(2n).
Some special cases of this conjecture are proven, but a complete proof has not been found.
Some other jobs were planned for this research effort (i.e., study of fault-tolerant issues, which
are concerned with schedulability analysis when there are time losses due to transient hardware
or software failures, and study of CPU scheduling coupled with I/O activities). However, due to
time constraints, significant progress was not made in these areas.
1. INTRODUCTION.
Many real-time systems are embedded in sensors and actuators and function as digital
controllers. An example of a digital controller, taken from Liu [1] is shown in figure 1-1. The
term plant in the figure refers to a controlled system (such as an engine, a brake, an aircraft, or a
patient), A/D refers to analog-to-digital converter, and D/A refers to digital-to-analog converter.
The state of the plant is monitored by sensors and can be changed by actuators. The real-time
(computing) system estimates from the sensor readings the state of the plant, y(t), at time t and
computes a controlled output, u(t), based on the difference between the current state and the
desired state (called reference input in the figure), r(t). This computation is called the control-
law computation in the figure. The output generated by the control-law computation activates
the actuators, which bring the plant closer to the desired state.
Figure 1-1. A Digital Controller (taken from Liu [1]; the figure shows the reference input r(t) and plant state y(t) sampled through A/D converters into the control-law computation, whose output uk is converted by a D/A converter into u(t))
In figure 1-1, r(t) and y(t) are sampled periodically every sampling period, T units of time.
Therefore, the control-law computation needs to be done periodically, every T units of time. The computation on each data sample must be completed within T units of time, or else the sample will be overwritten by the next one. Each computation is fairly deterministic in the sense that the
maximum execution time can be estimated fairly accurately.
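The requirement that each sample be processed within one sampling period can be sketched in code. This is our illustration, not a procedure from the report; the period T, the gain K, and the stand-in proportional control law are hypothetical values.

```python
import time

T = 0.01  # sampling period in seconds (hypothetical value)
K = 2.0   # hypothetical proportional gain for the stand-in control law

def control_step(y: float, r: float) -> float:
    """One control-law computation: output u from state y and reference r.
    A trivial proportional law stands in for a real control law."""
    return K * (r - y)

def run_loop(samples):
    """Process (y, r) samples; each computation must finish within T,
    or the sample would be overwritten by the next one."""
    outputs = []
    for y, r in samples:
        start = time.monotonic()
        u = control_step(y, r)
        if time.monotonic() - start > T:
            raise RuntimeError("deadline miss: sample overwritten")
        outputs.append(u)
    return outputs
```

In a real controller the deadline is enforced by the scheduler rather than checked after the fact; the point here is only that the per-sample computation has a hard completion time of T.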
A plant typically has more than one state variable; e.g., the rotation speed and temperature of an
engine. Therefore, it is controlled by multiple sensors and by multiple actuators. Because
different state variables may have different dynamics, the sampling periods may be different. As
an example, also taken from Liu [1], figure 1-2 shows the software structure of a flight
controller. The plant is a helicopter, which has three velocity components: forward, side-slip,
and altitude rates, which together are called collective in the figure. It also has three rotational
(angular) velocities, referred to as roll, pitch, and yaw. The system uses three sampling rates:
180, 90, and 30 Hz; i.e., the sampling periods are 1/180, 1/90, and 1/30 seconds, respectively.
• Validate sensor data and select data source; in the presence of failures, reconfigure the
system.
The above controller controls only flight dynamics. The control system on board an aircraft is
considerably more complex. It typically contains many other equally critical subsystems
(e.g., air inlet, fuel, hydraulic, and anti-ice controllers) and many noncritical subsystems
(e.g., compartment lighting and temperature controllers). So, in addition to the flight control-law
computations, the system also computes the control laws of these subsystems.
Controllers in a complex monitor and control system are typically organized hierarchically. One
or more digital controllers at the lowest level directly control the physical plant. Each output of a
higher-level controller is a reference input of one or more lower-level controllers. One or more
of the higher-level controllers interface with the operator(s).
Figure 1-3, also taken from Liu [1], shows the hierarchy of flight control, avionics, and air traffic
control (ATC) systems. The ATC system is at the highest level. It regulates the flow of flights
to each destination airport. It does so by assigning to each aircraft an arrival time at each
metering fix en route to the destination: The aircraft is supposed to arrive at the metering fix at
the assigned arrival time. At any time while in flight, the assigned arrival time to the next
metering fix is a reference input to the onboard flight management system. The flight
management system chooses a time-referenced flight path that brings the aircraft to the next
metering fix at the assigned arrival time. The cruise speed, turn radius, descent/ascent rates,
and so forth required to follow the chosen time-referenced flight path are the reference inputs to
the flight controller at the lowest level of the control hierarchy.
Figure 1-3. Hierarchy of Flight Control, Avionics, and ATC Systems (taken from Liu [1]; the figure shows commands and responses flowing through the operator-system interface, navigation, flight management with a state estimator over a virtual plant, and flight control with a state estimator and air data over the physical plant)
Real-time tasks can be classified as hard or soft. Hard real-time tasks are those that require a
strict adherence to deadline constraints, or else the consequence is disastrous. An example of a
hard real-time task is the flight controller shown in figure 1-2. By contrast, soft real-time tasks
are those that do not require a strict adherence to deadline constraints, but it is desirable to do so.
An example of a soft real-time task is the controller that controls the compartment lighting and
temperature in an aircraft. Most real-time systems have a combination of both hard and soft real-
time tasks.
Hard real-time tasks add another dimension to validating their correctness. Not only does their logical correctness need to be verified (i.e., the program does exactly what it is supposed to do), their temporal correctness must also be verified (i.e., all deadlines are met). A hard
real-time task must be both logically and temporally correct for it to be usable. Since a hard real-
time task is executed periodically during its entire operational time and since the period of every
task is very small compared to the duration of its operation, the schedule can be regarded as an
infinite schedule for all practical purposes. Verifying the temporal correctness of an infinite
schedule is a challenging problem since there are infinitely many deadlines to check. One of the
main goals of this project is to develop tools to help solve this verification problem. Since the
main concern of this report is hard real-time tasks, the term hard real-time task will simply be
called a real-time task throughout this report.
This project consisted of two major jobs. The first job was to explore and report the industry
approaches to scheduling real-time tasks and the tools they use in the verification of temporal
correctness. A questionnaire was developed and sent to a number of industry representatives
who are involved in developing software for real-time systems. Based on their responses, some
conclusions were drawn; these are described in this report.
The second job consisted of developing scheduling algorithms and temporal verification tools for
a model of periodic, real-time tasks. An optimal scheduling algorithm, called Deadline-
Monotonic-with-Limited-Priority-Levels (DM-LPL), was developed for a system with a single
processor and a limited number of priority levels. As a byproduct of the DM-LPL algorithm, a
procedure to determine if a given set of periodic, real-time tasks is feasible on one processor with
m priority levels, where m is less than the number of tasks, was also developed.
A periodic, real-time task, Ti, is characterized by the quadruple (si, ei, di, pi), where si is the initial
request time, ei is the execution time, di is the relative deadline, and pi is the period. In this
characterization, Ti makes an initial request at time si, and thereafter at times si + kpi, k = 1, 2, …
The k-th request requires ei units of execution time, and it must be completed no later than the
deadline si+(k-1)pi + di. A real-time task system consists of n periodic, real-time tasks, and is
denoted by TS = ({Ti}, {si}, {ei}, {di}, {pi}).
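As an illustrative sketch (the class and method names are ours, not the report's), the quadruple characterization maps directly to code:

```python
from dataclasses import dataclass

@dataclass
class PeriodicTask:
    """Periodic, real-time task Ti = (si, ei, di, pi)."""
    s: int  # initial request time
    e: int  # execution time
    d: int  # relative deadline
    p: int  # period

    def request_time(self, k: int) -> int:
        """Release time of the k-th request: s + (k-1)p, for k = 1, 2, ..."""
        return self.s + (k - 1) * self.p

    def deadline(self, k: int) -> int:
        """Absolute deadline of the k-th request: s + (k-1)p + d."""
        return self.s + (k - 1) * self.p + self.d

# Example: a task with s=0, e=1, d=4, p=5.
t = PeriodicTask(s=0, e=1, d=4, p=5)
# Its third request is released at time 10 and must finish by time 14.
print(t.request_time(3), t.deadline(3))
```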
A schedule S for a real-time task system TS is said to be valid if the deadline of each request of
each task is met. Since the schedule is infinite, checking if the schedule is valid is a nontrivial
problem. TS is feasible if there is a valid schedule for it. TS is schedulable by a particular
scheduling algorithm if the scheduling algorithm produces a valid schedule for it. A scheduling
algorithm is said to be optimal if every feasible task system is schedulable by the scheduling
algorithm.
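For the special case where every task starts at time 0 and has a relative deadline no larger than its period, validity under a preemptive fixed-priority schedule can be checked by simulating one hyperperiod (the least common multiple of the periods), after which the schedule repeats. The sketch below is our illustration of that idea, not a procedure from the report:

```python
from math import lcm

def schedulable_fixed_priority(tasks):
    """Simulate preemptive fixed-priority scheduling on one processor and
    check every deadline over one hyperperiod.

    tasks: list of (e, d, p) tuples, ordered highest priority first,
    assuming all initial request times s = 0 and d <= p.
    """
    horizon = lcm(*(p for _, _, p in tasks))
    remaining = [0] * len(tasks)      # unfinished work of the current request
    deadline = [0] * len(tasks)       # absolute deadline of the current request
    for t in range(horizon):
        for i, (e, d, p) in enumerate(tasks):
            if t % p == 0:            # a new request is released
                if remaining[i] > 0:  # previous request never finished
                    return False
                remaining[i] = e
                deadline[i] = t + d
        # run the highest-priority ready task for one time unit
        for i in range(len(tasks)):
            if remaining[i] > 0:
                remaining[i] -= 1
                break
        # any request with work left at its deadline has missed it
        for i in range(len(tasks)):
            if remaining[i] > 0 and t + 1 >= deadline[i]:
                return False
    return all(r == 0 for r in remaining)
```

For example, the two-task set [(1, 4, 4), (2, 6, 6)] is schedulable, while [(3, 3, 3), (1, 3, 3)] (total utilization above 1) is not.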
1.2 REPORT OVERVIEW.
This report begins with a summary of the industry survey results. Three approaches that have
been used to schedule a real-time task system are (1) Clock-Driven, (2) Processor-Sharing, and
(3) Priority-Driven. It was reasoned that the Priority-Driven approach was far superior to the
Clock-Driven and Processor-Sharing approaches. The report then reviews the literature on
Priority-Driven scheduling algorithms, which can be divided into two categories: Dynamic-
Priority and Fixed-Priority. While Dynamic-Priority scheduling algorithms are more effective
than Fixed-Priority scheduling algorithms, they are rarely used in practice because of the
overhead involved. Therefore, the report concentrates on Fixed-Priority scheduling algorithms.
The remaining portions of the report focus on a specific scheduling model. Leung and Whitehead
have shown that the Deadline Monotonic algorithm is an optimal Fixed-Priority scheduling
algorithm for one processor. Unfortunately, the algorithm assumes that the number of priorities
is the same as the number of real-time tasks. In practice, one can only have a limited number of
priorities, say m, supported by a system. Under this scenario, the Deadline Monotonic (DM)
algorithm fails to be optimal, and the optimal scheduling algorithm DM-LPL was developed,
along with a separate procedure to check if a given set of real-time tasks is feasible on one
processor with m priority levels.
The same problem is explored for multiprocessor systems. It is demonstrated that finding an
optimal assignment is strongly nondeterministic polynomial (NP)-hard, which strongly suggests that no efficient algorithm exists for this problem. A problem Q is NP-hard if all
problems in the NP-class are reducible to Q. Motivated by the computational complexity,
several heuristics for solving this problem are suggested.
The minimum processor utilization, U(n), for a set of n unit-execution-time tasks is also studied.
U(n) is the threshold for the total processor utilization of the n tasks, below which they are
always schedulable. It is conjectured that
U(n) = 1/(n-1) + 1/n + 1/(n+1) + ... + 1/(2n-3) + 1/(2n).
Some special cases of this conjecture are proven, but time was not available to perform the
complete proof.
Some other tasks were planned for this research effort (i.e., study of fault-tolerant issues, which
are concerned with schedulability analysis when there are time losses due to transient hardware
or software failures, and study of central processing unit (CPU) scheduling coupled with
input/output (I/O) activities). However, due to time constraints, significant progress was not
made in these areas. These remain topics to be considered in future research.
There are a number of potential uses for this report. Since this research task falls more into the
category of basic research than many of the Federal Aviation Administration (FAA) Software
and Digital System Safety Project research and development initiatives, some explanation of
how various readers may use the report is provided. The intended audience for this report is
certification authorities, industry representatives, and researchers. A brief summary of how each
audience can use this report is listed below.
• Industry Representative. The industry can benefit from this entire report but should
realize that section 4 and appendix B are at a research stage. The proofs will require
verification by a qualified independent entity before they can be implemented in an actual
aviation project. It is also likely that the industry would need to develop tools to help
implement the algorithms of this report into a usable format.
• Researchers. As mentioned before and discussed throughout this report, this research
effort is really the beginning of what needs to be done before implementing the
algorithms. Researchers will likely benefit most from section 4 and appendix B, and will
likely want to build upon these in additional work. Section 5 provides specific
information about where the future research needs to go.
2. SURVEY OF INDUSTRY APPROACHES.
Of the 15 respondents, the majority (12) work for avionics or engine control developers
or aircraft or engine manufacturers. The majority either verify/test real-time scheduling
performance, perform RTSA on aviation system projects, or develop real-time operating systems
(RTOS) that support RTSA. On the whole, the respondents had the appropriate background to
answer this questionnaire. The responses and questions are summarized below.
Question A.2.1 asked, “What type of events are typically used to trigger time-critical functions
of your real-time system (e.g., interrupts, data message queries, data refresh rates, (pilot) user
input, change of state, certain conditions, display refresh rates, etc.)?” Interrupts were mentioned
by ten respondents as the main events that are typically used to trigger the time-critical functions
of their real-time systems. This fits well with the DM algorithm, which is essentially interrupt-
driven.
Question A.2.2 asked, “What are typical performance requirements that your real-time system
must meet?” The majority of the respondents mentioned that critical tasks must meet hard real-
time deadline constraints. The response times mentioned are from a few milliseconds to
hundreds of milliseconds. This justifies the study of scheduling analysis of hard real-time tasks,
which is the main topic in this project.
Question A.2.3 asked, “Where are your performance requirements for time-critical functions
typically defined (e.g., system requirements or interface control documents, software
requirements document)?” System requirements, interface requirements, and software
requirements are the most popular responses. It appears that performance requirements for time-
critical functions are typically defined in those documents.
Question A.2.4 asked, “How do you distinguish time-critical functions from other functions in
your application?” The majority of the respondents answered that time-critical functions are explicitly stated in the requirements.
Question A.2.5 asked, “Do your time-critical functions have dependencies on hardware or shared
hardware devices (central processing unit, memory, data buses, I/O ports, queues, etc.) with
other software functions of your application or other applications resident in the system? If yes,
please explain.” The answers were mixed. Some said that there are no dependencies, while others said that there are. The results are inconclusive.
Question A.2.6 asked, “What are some mechanisms that your application (software and
hardware) uses to ensure that “time-critical triggers” get handled at the appropriate priority and
gain the relevant resources to ensure that your performance requirements are achieved?” Priority
levels assigned to interrupts were mentioned by several people as the mechanism used to
schedule time-critical tasks. Some people mentioned that the computers they use have only one
priority level, such as the PowerPC. This fits well with the model that the system has a limited
number (m) of priority levels (as proposed for this research project). In this case, m = 1.
Question A.2.7 asked, “What type of reviews, analyses and testing does your team use to ensure
that time-critical functions will satisfy their performance requirements, especially in worst-case
condition scenarios?” The majority of respondents mentioned that they can obtain worst-case
execution time by analyzing the code. As for validation, most respondents use an emulator or some ad hoc approach to testing. This is risky because an emulator can only show that the system will work most of the time, not that it will work all of the time. There is a definite need for a formal validation procedure that guarantees the system will work all the time.
Question A.3.1 asked, “What approaches to message passing have your projects utilized?” The
answers are so different that it was difficult to draw any conclusions. It seems that the message-passing mechanism is a function of the hardware and operating systems used by the organization. This explains the diverse answers.
Question A.3.2 asked, “Do your messages communicate with each other? If yes, please explain
how.” The majority answered that messages do not communicate with each other.
Question A.4.1 asked, “What type of processors have you used for your systems?” The majority
of respondents currently use Intel processors; however, PowerPC seems to be gaining
momentum. A small number of respondents use Motorola or TI chips. By far, the largest number use the Intel family of processors.
Question A.4.2 asked, “Have you found any peculiarities with any of the processors that affect
the real-time scheduling analysis? If yes, please explain the peculiarities and how they were
addressed.” The answers to this question varied significantly. Some pointed out that the lack of
multiple interrupt priorities in PowerPC makes it difficult to schedule real-time tasks. This
confirms the hypothesis of this research effort that more priority levels make scheduling easier.
Some mentioned that the cache memory makes it difficult to analyze the worst-case running
time, since the execution time depends on the hit ratio of the cache. Some mentioned that the
pipeline processor also makes it difficult to estimate the worst-case running time.
Question A.4.3 asked, “Do your systems use a single processor or multiple processors? If
multiple processors, how is the system functionality distributed and handled across processors?”
Nine respondents said that they use a single processor while six respondents said that they use
multiple processors. One respondent mentioned that they use both single and multiple
processors. It seems that they are about evenly divided, with the single processor having a slight
edge.
Question A.5.1 asked, “What scheduling algorithms/approaches have you used to schedule your
system tasks at run time? Please match the algorithm (e.g., preemptive priority, round robin,
etc.) with the system type (e.g., display, communication, navigation, etc.).” The majority
responded that they use pre-emptive priority scheduling algorithm, of which the DM algorithm is
a member. One respondent mentioned that they use Rate Monotonic (RM) Analysis.
Question A.5.2 asked, “If you used priority scheduling, how many priority levels were
assigned? How was priority inversion avoided? How did the number of priority levels compare
to the number of processes?” The majority of the respondents said that they used 3 to 20 levels.
One can conclude that the number of priority levels is relatively small, compared to the number
of real-time tasks in the system.
Question A.5.3 asked, “What kind of scheduling problems have you encountered in multitasking
systems and how were they addressed?” A fair number of respondents did not comment on this
question. Therefore, it was not possible to draw any valid conclusions.
Question A.5.4 asked, “Have you used real-time operating systems to support your schedule
guarantees? If yes, what kind of operating systems have you used and what kind of scheduling
challenges have you encountered?” Most respondents said that they did not use real-time
operating systems to support their schedule guarantees. For the few who said they did, they used
in-house proprietary systems. It seems that there is a learning curve here. If the tools are made
available to them free of charge, they may in fact use these tools in the future.
Question A.5.5 asked, “Do you verify what data gets dumped, due to priority settings and
functions getting preempted? If yes, how does it affect your system?” Most respondents replied
No or N/A. Therefore, there was insufficient data to draw any conclusions.
Question A.5.6 asked, “Do you use tools to assist in the real-time scheduling analysis? If yes,
what kind of tools? How are the outputs of these tools verified?” Most respondents replied No
or that they use emulators and simulators. This can create problems since these are not rigorous
and formal analyses.
Question A.5.7 asked, “What trends in commercial aviation systems do you think will challenge
the current scheduling approaches (i.e., may lead to the need for new scheduling algorithms)?”
Some said that multiple thread real-time deadline scheduling analysis will be the future trends
that challenge the current scheduling approaches. Some said that the desire to reuse, the desire to
inherit confidence from reuse, and the desire to use nondevelopmental items will be the major
challenges to the current scheduling approaches. These comments point to the importance of a
theory of scheduling on multiple processors.
Question A.6.1 asked, “After system development, do you verify that deadlines are met and
scheduling analysis assumptions are correct? If yes, please explain how.” The majority of the
respondents said that they verified that deadlines are met and scheduling analysis assumptions
are correct after system development.
Question A.6.2 asked, “In what areas of timing verification or validation have you encountered
problems and how were they addressed?” The answers were so diverse that it was difficult to
draw any conclusions. It seems that the problems encountered are highly dependent on the specific application and the hardware or software used in the company.
Question A.7.1 asked, “Does your testing allow for faults? If yes, please explain.” Most
respondents said that their testing allows for faults. This is mostly handled by injecting faults into
the system and checking to see how the system responds to the faults. Responses indicated that
no worst-case analysis is done; i.e., it is mostly done in an ad hoc manner.
Question A.8.1 asked, “In your opinion, what are the major issues regarding RTSA and its
verification?” Responses varied significantly and are summarized below.
• The operating systems are so general that they are of little use in dealing with real-time
systems.
From the responses of the questionnaire, the following conclusions were drawn:
• There is a need for scheduling analysis and verification in the avionics industry.
• The current practice relies on ad hoc methods. Tools are seldom used, either because they are expensive or unavailable, or because the operating systems are general purpose and not usable for real-time systems.
• Software developers do test for fault tolerance, but the main method used is by means of
fault injection which is rather ad hoc.
It was concluded that developing defined approaches and algorithms for scheduling, deadline
verification, and fault tolerance will significantly help the avionics industry. Furthermore, these
theories should be implemented into a software tool suite that can be made available to anyone
who desires to use it. As more and more people use these tools (which may need to be qualified), future systems will be less error-prone and easier to maintain and modify.
3. TUTORIAL ON DIFFERENT SCHEDULING APPROACHES.
Whether a set of real-time tasks can meet all their deadlines depends on the characteristics of the
tasks (e.g., periods and execution times) and the scheduling algorithms used. Scheduling
algorithms can be classified as pre-emptive and non-pre-emptive. In non-pre-emptive
scheduling, a task once started must be executed to completion without any interruptions. By
contrast, pre-emptive scheduling permits suspension of a task before it completes, to allow for
execution of another more critical task. The suspended task can resume execution later on from
the point of suspension. While pre-emptive scheduling incurs more system overhead (e.g.,
context switching time due to pre-emptions) than non-pre-emptive scheduling, it has the
advantage that processor utilization (the percentage of time that the processor is executing tasks) is significantly higher than under non-pre-emptive scheduling. For this reason, most of the scheduling
algorithms presented in the literature are pre-emptive scheduling algorithms.
There are three major approaches in designing pre-emptive scheduling algorithms for real-time
tasks: Clock-Driven, Processor-Sharing, and Priority-Driven. Each approach is described
below.
The Clock-Driven approach is the oldest method used to schedule real-time tasks. In this
method, a schedule is handcrafted and stored in memory before the system is put in operation.
At run time, tasks are scheduled according to the scheduling table. After the scheduler
dispatches a task, it will set the hardware timer to generate an interrupt at the next task switching
time. The scheduler will then go to sleep until the timer expires. This process is repeated
throughout the whole operation.
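A minimal sketch of such a table-driven dispatcher follows; the table entries, task names, and major-cycle length are hypothetical values of ours, chosen only to illustrate the mechanism:

```python
# Hypothetical precomputed schedule: (start_time, task_name) entries within
# one major cycle. At run time the dispatcher consults the table and sleeps
# (via a hardware timer interrupt) until the next task-switching time.
SCHEDULE_TABLE = [(0, "A"), (3, "B"), (5, "A"), (8, "idle")]
MAJOR_CYCLE = 10

def dispatch(now: int) -> str:
    """Return the task that should run at time `now` (mod the major cycle)."""
    t = now % MAJOR_CYCLE
    current = SCHEDULE_TABLE[0][1]
    for start, name in SCHEDULE_TABLE:
        if start <= t:
            current = name
        else:
            break
    return current
```

Note how the disadvantages listed below follow from this structure: the table occupies memory, any change in task parameters means recomputing the whole table, and nothing in the dispatcher adapts to run-time events.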
The Clock-Driven approach has several disadvantages that render it undesirable to use: (1) it
requires a fair amount of memory to store the scheduling table; (2) a slight change in task
parameters (e.g., execution time and period) requires a complete change of the scheduling table,
which can be very time-consuming; and (3) this approach is not adaptive to any change at run
time. For example, if a system fault occurs or a task runs for less (or more) time than predicted,
it is not clear how the scheduling decisions can be adapted to respond to the change.
In the Processor-Sharing approach, the processor is shared among all ready tasks, each task
receiving a fraction of the processor's capacity; in practice, this is approximated by rotating a
small time slice among the ready tasks. The time slice has to be made very small to obtain a
close approximation of processor sharing. But when the time slice is very small, a significant
amount of time will be spent in context switching. This is the major drawback of the
Processor-Sharing approach.
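The trade-off can be quantified: if every time slice of length q is followed by a context switch costing s, the fraction of processor time lost to switching is s/(q + s). A small illustration (the microsecond figures are hypothetical):

```python
def switching_overhead(quantum_us, switch_us):
    """Fraction of processor time lost to context switching when every
    time slice of length quantum_us is followed by a switch of switch_us."""
    return switch_us / (quantum_us + switch_us)

# Shrinking the time slice toward the switch cost drives the loss toward 50%;
# a large time slice keeps it negligible but approximates sharing poorly.
for q in (1000, 100, 10):
    print(q, round(switching_overhead(q, switch_us=10), 3))
```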
In the Priority-Driven approach, each task is assigned a priority. At run time, the ready task that
has the highest priority will receive the processor for execution. Priorities can be assigned at run
time (Dynamic-Priority) or fixed at the beginning before the operation starts (Fixed-Priority).
Fixed-Priority scheduling algorithms incur far less system overhead (context switching time)
than Dynamic-Priority scheduling algorithms, since the scheduler does not need to determine the
priority of a task at run time. Furthermore, Fixed-Priority scheduling algorithms can be
implemented at the hardware level by attaching the priority of a task to the hardware interrupt
level. On the other hand, processor utilization under Fixed-Priority scheduling algorithms is
usually not as high as Dynamic-Priority scheduling algorithms. It is known that Fixed-Priority
scheduling algorithms may yield a processor utilization as low as 70 percent, while Dynamic-
Priority scheduling algorithms may yield a processor utilization as high as 100 percent.
The two most well known Fixed-Priority scheduling algorithms are the Rate-Monotonic (RM)
and Deadline-Monotonic (DM) algorithms [3 and 5]. RM assigns the highest priority to the task
with the smallest period (or equivalently, the highest request rate), while DM assigns the highest
priority to the task with the smallest relative deadline. It should be noted that DM and RM are
identical if the relative deadline of each task is identical to its period. Leung and Whitehead [5]
have shown that DM is optimal for one processor, in the sense that any set of tasks that can be
feasibly scheduled by any Fixed-Priority scheduling algorithm can also be feasibly scheduled by
DM. Liu and Layland [3] have shown that RM is optimal when the relative deadline of each task
coincides with its period; it fails to be optimal if the deadline of some task is not identical to its
period. Both DM and RM fail to be optimal for two or more processors [6]. At the present time,
no scheduling algorithm is known to be optimal for two or more processors.
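The two priority assignments can be sketched as simple sorts over the task parameters; the task set below is invented for illustration, and a lower number denotes a higher priority. When each task's relative deadline equals its period, the two assignments coincide.

```python
def rm_priorities(tasks):
    """Rate-Monotonic: the smaller the period, the higher the priority."""
    order = sorted(tasks, key=lambda t: t["period"])
    return {t["name"]: level for level, t in enumerate(order)}

def dm_priorities(tasks):
    """Deadline-Monotonic: the smaller the relative deadline, the higher the priority."""
    order = sorted(tasks, key=lambda t: t["deadline"])
    return {t["name"]: level for level, t in enumerate(order)}

# Invented task set; periods and relative deadlines in arbitrary time units.
tasks = [
    {"name": "T1", "period": 50, "deadline": 40},
    {"name": "T2", "period": 30, "deadline": 30},
    {"name": "T3", "period": 80, "deadline": 20},
]
print(rm_priorities(tasks))  # T2 highest: smallest period
print(dm_priorities(tasks))  # T3 highest: smallest relative deadline
```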
The following document is specifically recommended for further reading on the scheduling
approaches; see also references 1-6.
• A.M. Van Tilborg and G.M. Koob, eds., Foundations of Real-Time Computing:
Scheduling and Resource Management, Kluwer, Boston, 1991.
4. DESCRIPTION OF SCHEDULING MODEL.
The research topics in this project were planned to be (1) priority assignment, (2) multiprocessor
scheduling, (3) fault tolerant issue, and (4) I/O activities. However, because of time limitations,
only the first two topics were studied. In this report, the scheduling model for the first two topics
is defined.
A periodic, real-time task, Ti, is characterized by a quadruple (si, ei, di, pi), where si is the
initial request time, ei is the execution time, di is the relative deadline, and pi is the period. In this
characterization, Ti makes an initial request at time si, and thereafter at times si + kpi, k = 1, 2, . . .
The kth request requires ei units of execution time and it must be completed no later than the
deadline si + (k-1)pi + di. A real-time task system consists of n periodic, real-time tasks, and is
denoted by TS = ({ Ti } , { si } , { ei } , { di } , { pi }).
A schedule S for a real-time task system TS is said to be valid if the deadline of each request of
each task is met. Since the schedule is infinite, checking if the schedule is valid is a non-trivial
problem. TS is said to be feasible if there is a valid schedule for it. TS is said to be schedulable
by a particular scheduling algorithm if the scheduling algorithm produces a valid schedule for it.
A scheduling algorithm is said to be optimal if every feasible task system is schedulable by the
scheduling algorithm.
With respect to the above model, there are several important questions whose answers are
essential in validating the temporal correctness of a real-time task system. First, how does one
determine if a real-time task system is feasible? Second, how does one determine if a real-time
task system is schedulable by a particular scheduling algorithm? Third, what are the optimal
scheduling algorithms? By definition, a real-time task system is feasible if and only if it is
schedulable by an optimal scheduling algorithm. Thus, these three questions are interrelated.
There are several important assumptions associated with this model. First, ei is assumed to be
the maximum execution time required by Ti. At run time, it is assumed that Ti never requires
more than ei units of execution time at each request, although it could use less time. Second, it is
assumed that context switching time is negligible. If this is not a valid assumption, ei must be
adjusted to account for the time loss due to context switching. Third, the minimum time lapse
between two consecutive requests of Ti is pi. At run time, the time lapse between two
consecutive requests is at least pi; it could be more than pi, but not less. Fourth, the relative
deadline of each request is di. At run time, the relative deadline of each request can be longer
than di but not shorter. These assumptions must be strictly adhered to in order for the theory to
work.
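Given these assumptions, the schedulability question for a Fixed-Priority algorithm on one processor can be answered with the classical response-time iteration from the literature. The sketch below is not the validation procedure developed in this project; it adds the simplifying assumptions that every si = 0 and every di <= pi, and the task values are invented for illustration.

```python
import math

def worst_case_response(tasks, i):
    """Classical response-time iteration for the task at index i.
    tasks is ordered highest priority first; each entry is (e, d, p):
    execution time, relative deadline, period. Assumes s_i = 0 and d <= p.
    Returns None if the iteration passes the deadline (task unschedulable)."""
    e_i, d_i, _ = tasks[i]
    r = e_i
    while True:
        # Interference from all higher-priority tasks released in [0, r).
        r_next = e_i + sum(math.ceil(r / p) * e for e, _, p in tasks[:i])
        if r_next > d_i:
            return None
        if r_next == r:
            return r
        r = r_next

def schedulable(tasks):
    """True if every task meets its deadline under the given priority order."""
    return all(worst_case_response(tasks, i) is not None
               for i in range(len(tasks)))

# Invented task set (e, d, p), already in DM priority order.
tasks = [(1, 4, 4), (2, 6, 6), (3, 13, 13)]
print([worst_case_response(tasks, i) for i in range(3)])  # [1, 3, 10]
print(schedulable(tasks))                                 # True
```

The iteration converges because the response time can only grow, and it is cut off as soon as it exceeds the relative deadline.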
5. RESULTS AND FUTURE WORK.
5.1 RESULTS.
This project consisted of two major jobs. The first job was to explore and report the industry
approaches to scheduling real-time tasks and the tools they use in the verification of temporal
correctness. A questionnaire was developed and sent to a number of industry representatives
who were involved in developing software for real-time systems.
From the responses to the questionnaire, the following conclusions can be drawn:
• There is a need for scheduling analysis and verification in the avionics industry.
• The current practice relies on ad hoc methods. Tools are seldom used, either because they
are expensive and not available or because the operating systems are general purpose and
not usable for real-time systems.
• Software developers do test for fault tolerance, but the main method used is fault
injection, which is rather ad hoc.
It was concluded that developing defined approaches and algorithms for scheduling, deadline
verification, and fault tolerance will significantly help the avionics industry. Furthermore, these
theories should be implemented into a software tool suite that can be made available to anyone
who desires to use it. As more and more people use these tools (which may need to be
qualified), future systems will be less error-prone and easier to maintain and modify.
The second job consisted of developing scheduling algorithms and temporal verification tools for
a model of periodic, real-time tasks.
The report began with a discussion of the three approaches that have been used to schedule a
real-time task system: (1) Clock-Driven, (2) Processor-Sharing, and (3) Priority-Driven. It was
reasoned that the Priority-Driven approach is far superior to the Clock-Driven and Processor-
Sharing approaches. The report then reviewed the literature on Priority-Driven scheduling
algorithms, which can be divided into two categories: Dynamic-Priority and Fixed-Priority.
While Dynamic-Priority scheduling algorithms are more effective than Fixed-Priority scheduling
algorithms, they are rarely used in practice because of the overhead involved. Therefore, the
report concentrated on Fixed-Priority scheduling algorithms.
For multiprocessors, the problem of finding the minimum number of processors with m priority
levels to schedule a set of tasks was shown to be NP-hard. Two heuristics were provided, FF and
FFDU, for this problem. The special case where each task’s execution time is one unit
was also considered. Under this model, an attempt was made to develop a utilization threshold,
U(n), below which a set of n tasks is always schedulable. For unlimited priority levels, it was
conjectured that U(n) = (1/(n-1) + 1/n + ... + 1/(2n-3)) + 1/(2n), which is better than the
bound of n(2^(1/n) - 1) given by Liu and Layland [3]. This conjecture was proved for two
special cases.
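The conjectured bound can be compared numerically with the Liu and Layland bound; the snippet below simply evaluates the two formulas (the conjectured bound applies to the unit-execution-time model discussed here, and the sample values of n are chosen arbitrarily):

```python
def liu_layland_bound(n):
    """Classical Rate-Monotonic utilization bound n(2^(1/n) - 1) [3]."""
    return n * (2 ** (1 / n) - 1)

def conjectured_bound(n):
    """Conjectured bound for n unit-execution-time tasks:
    (1/(n-1) + 1/n + ... + 1/(2n-3)) + 1/(2n)."""
    return sum(1 / i for i in range(n - 1, 2 * n - 2)) + 1 / (2 * n)

# The conjectured bound dominates the classical one at these task counts.
for n in (3, 5, 10):
    print(n, round(liu_layland_bound(n), 4), round(conjectured_bound(n), 4))
```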
− The special case when d1 and d2 are arbitrary has been proved. It remains to be
shown that the conjecture is valid when d3, d4, …, dn-1 are also arbitrary.
• Perform a study of fault-tolerant issues, which are concerned with schedulability analysis
when there are time losses due to transient hardware/software failures.
− A major issue in this area is to characterize the worst-case scenario due to time
losses.
• Perform a study of central processing unit (CPU) scheduling coupled with input/output
(I/O) activities.
− The main issue in this area is to couple pre-emptive scheduling (CPU scheduling)
with non-pre-emptive scheduling (I/O activities).
6. REFERENCES.
3. Liu, C.L., and Layland, J.W., “Scheduling Algorithms for Multiprogramming in a Hard
Real-Time Environment,” J. of ACM, Vol. 20, 1973, pp. 46-61.
5. Leung, J.Y-T. and Whitehead, J., “On the Complexity of Fixed-Priority Scheduling of
Periodic, Real-Time Tasks,” Performance Evaluation, Vol. 2, 1982, pp. 237-250.
6. Dhall, S.K. and Liu, C.L., “On a Real-Time Scheduling Problem,” Operations Research,
Vol. 26, 1978, pp. 127-140.
APPENDIX A—INDUSTRY SURVEY AND RESPONSES
Of the 16 survey responses received, with many individuals responsible for more than one role,
the breakdown was as follows:
5 I am a Designated Engineering Representative (DER) who approves compliance to
DO-178B
What type of events are typically used to trigger time-critical functions of your real-time systems
(e.g., interrupts, data message queries, data refresh rates, (pilot) user input, change of state,
certain conditions, display refresh rates, etc.)?
1. Combination of fixed schedule (i.e., clock time) plus certain conditions (for example,
tracked signal weakness switches mode from tracking to acquisition)
2. Data inputs, data refresh rates, data queries, display refresh rates
5. Interrupts
10. Table driven scheduler based upon statically built table by an external, qualified tool.
The table contains information regarding I/O timing (when to send data and when to read
data), as well as process scheduling information
13. Typically we use interrupts. In some instances it is an external input, such as incoming
data, from another device that causes an interrupt to be generated.
14. Incoming data messages (interrupt or polled), timer expiration, user input (Key press,
toggle switch, etc), sensor state change (interrupts or polled).
15. Interrupts are used for the initiation of the real-time processing.
What are typical performance requirements that your real-time system must meet?
1. There are hardware and software deadlines. For the software, three independent bits need
to be assembled and delivered every 0.6 microseconds.
3. The clock ticks at 1 kHz. All scheduling operations are driven by the ‘ticking clock’.
User ‘frame’ times are typically 40-80 Hz
4. Servo loop closure without overshoot or significant lag. Service data bus and access
information from the data bus according to schedule. Determine fault state based on
successive out of range input for time frame.
6. Sample rates in excess of 50 kHz for DSP applications; multiple 8-kHz sample-rate
audio channels.
7. Level A assurance that all functions are completed before the next interrupt.
8. Max/min limits on input/output events, gain/phase margins on control loops, iteration
rates and transport delay times on selected functions
9. Throughput margins of 50% were required by government contract. There were similar
hardware reserve margin requirements on memory and I/O
11. All aircraft I/O includes transmission and jitter requirements. Typical data rates range
from 1 Hz to 80 Hz.
13. The hardware/computer shall operate without adverse effect on the engine or aircraft,
such as loss of engine thrust, adverse increase or decrease of engine thrust, or an in-
flight shutdown.
14. As a military application, our performance requirements are mainly based on our 1553
bus rates. Thus we have certain message rates that our system must maintain, for
example rates of 50 Hz or 25 Hz. In addition, we may interface to an external device that
requires data to be transferred to it within a few milliseconds after the data has changed.
15. Digital signal processing incoming data rate (radio IF): 200 ksam
Radio push-to-talk to RF on: < 50 ms; user input to display update: < 100 ms
Sensor change to display update: < 100 ms
16. Performance requirements that are most critical include the time from when the system
detects an engine threshold exceedance (such as overspeed, underspeed, or excessive
engine temperature) until the fuel flow is terminated to shut the engine down. This is
usually on the order of a few milliseconds.
Where are your performance requirements for time-critical functions typically defined?
(e.g., system requirements or interface control documents, software requirements document)
3. System requirements
8. Ours were specified by government contract- typically some version of the TADSTAND
10. System requirements and interface control documents. On occasion, lower level
performance requirements may appear in the software requirements document. Hardware
documents also contain performance requirements as they pertain to the support of the
I/O and O/S.
12. Interface control document, computer hardware operation requirements and software
requirement documents
13. Critical time with max latencies would be defined in our System and Sub-System
Specification (i.e., our system requirements) document. 1553 message rates would be
defined in our Interface Control Documents and some timing would be documented in
our Software Requirements.
14. System requirements for input-to-output timings and user interface performance;
software requirements for inter-process communications timing.
15. These are typically defined in the software requirements document- with some being
defined in the system requirements document.
How do you distinguish time-critical functions from other functions in your application?
1. All requirements must be met eventually. Some deadlines are obviously easy to meet;
others will require some effort and/or have some risk associated with them. Perhaps it
would be helpful to imagine all requirements having a deadline to be filled in, and if a
recipient of an item (data, for example) really doesn’t care by when something is
accomplished, they should say so. NASA/JPL sometimes uses a time constraint network,
where timing is stated with respect to other events, rather than with respect to time
deadlines. For example, some aspects of a flight might not be required to be delivered
until the flight arrives at the gate.
2. By the time element of the requirement (i.e., must respond within 200 ms, or must do x
after y +/- z seconds).
6. Badly
7. I believe your question refers to arbitration done by the Executive. Levels of priority
were established and the Executive routine carried them out. Priorities were established
during the software design phase following analysis of the requirements.
8. Through system design analysis or from similarity to previous systems, we can usually
distinguish time-critical functions.
9. From an O/S perspective, all functions are time critical. That is, they have specific
timing constraints placed upon them that must be honored. The determination of who is
“more time critical” than others is a system architecture exercise.
10. We don’t
11. We don’t
14. All functions in the application are time-driven and are treated as time-critical- with the
exception of the low-level transmission and reception of serial data. This serial
communication is interrupt-driven and will be serviced when data is available from the
host computer to which the electronic control unit is communicating or when new data
needs to be sent to the host computer. The processing and responding to complete
messages in the system is also time-driven (a determination is made for whether a
completed message has been received that needs to be acted on). Consequently,
distinguishing the time-critical functions from those that are not time-critical is done by
dedicating separate memory resources to the serial communication.
1. Multiple software entities can be incomplete at one time; knowledge about which ones
are incomplete is used in deciding which entities get resource use assigned to them.
2. Sometimes, might have multi-processor system that must meet data bus timing protocol.
Might be a hardware event that triggers the start of a time critical function.
3. The application has a dependency on the RTOS. The RTOS has a dependency on the
system and applications, e.g., cache usage, memory configurations (coherent or
non-coherent, etc.).
4. No
5. Yes. A system clock (counter/timer) is used for all software related timing functions.
The RTS owns the system clock. A hardware watchdog is also used, which must be
updated on a periodic basis.
7. No
8. It depends; hardware limitations that were known were part of the design. These
dependencies were usually noted in the System and/or Software Requirements
Specification. They were occasionally “discovered” during the Requirements Analysis
phase, at great embarrassment.
9. Yes. Generally the interrupt servicing functions must share hardware devices and
memory with other applications.
10. Typically the answer is yes, when looking at I/O hardware devices. However, our
hardware designers use queues, state machines, and other methods to abstract as much
timing-critical “stuff” away from the software as possible.
11. No
12. Yes. There is memory loader software that is activated only when certain hardware and
software conditions are set.
13. Yes, we must receive data from one subsystem and, within a specified amount of time,
pass the information on to another subsystem, in some cases using the input data to
calculate the data that is to be output.
14. Time-critical functions typically share CPU, memory, and other resources with other
functions.
15. Yes, the time-critical functions do have hardware dependencies on I/O devices.
What are some mechanisms that your application (software and hardware) uses to ensure that
“time-critical triggers” get handled at the appropriate priority and gain the relevant resources to
ensure that your performance requirements are achieved?
1. A look-up table is provided (all at once) with input describing the tasks currently
incomplete. The content of the look-up table tells which software task is to receive the
processing resources next.
2. We don’t use an RTOS, so we write software to achieve desired responses (when possible).
3. Interrupt locking is used, OS/kernel locking, semaphores and priority inheritance, work
queues for work deferrals, etc.
4. High Interrupt priority, also time critical code has hard vectors to interrupt service
routines that cannot be modified by a run time operation. We are concerned about
software maintenance activities adding new priority sets with higher priorities for less
important processes. We document heavily the priority rationale.
6. They are given the highest priority level within the RTOS. Or else they are handled
outside of the RTOS with independent interrupts
7. By restricting to the one interrupt, and insisting on completion of time critical tasks.
8. Careful Integration testing and thorough V&V was our preferred path. Demonstration,
analysis, whatever was appropriate.
10. All resources are defined statically at build time by a qualified tool. Processes are
assigned specific resources. This ensures the resources will be ready when needed, and
permits the O/S to detect any resource usage violations (i.e., partitioning violations).
12. Our software uses a watchdog monitor and certain CPU interrupt handlers.
13. As we use a PowerPC with only one external interrupt the “time critical” issues have to
be designed into the system. There is NO way to prioritize interrupts in our system so the
top level design must take this issue into account. Software must ensure all interrupt
handlers are as quick as possible.
14. Hardware timers, prioritized hardware and software interrupts, pre-emptable and
prioritized tasks.
No (or very low iteration count) loops in round-robin systems.
15. There are no priority mechanisms; an interrupt occurs at a pre-defined interval and this
initiates real-time processing. The software verifies that the processing from the previous
interval has completed. If it has not, the first time the anomaly occurs, it is simply
recorded and processing for the next interval is initiated directly after the completion of
the tasks from the previous interval. If two such overruns occur in a row, then the
application simply executes an infinite loop, which will cause the engine to shut down.
In addition, there is a hardware watchdog discrete output that needs to get strobed each
interval. If the watchdog is NOT strobed during each interval, then the hardware will
shut the engine down independently of our software.
What type of reviews, analyses and testing does your team use to ensure that time-critical
functions will satisfy their performance requirements, especially in worst-case condition
scenarios?
4. Reviews are requirements based reviews. Analysis includes calculation of worst case
safety margin for timing, for memory usage, and for stack usage. Verification tests use
emulators that track the percentage of time spent in different areas of the software, so we
can get a good estimate of how much timing margin remains in the testing stage.
5. Unit tests are run on all related software modules. Integration tests are run to verify
functionality in the system. Built-in-tests check functionality each time the unit is
powered up.
7. Two aspects – one at the requirement level, one at the implementation level.
8. Requirements are peer reviewed for ‘within our product’ issues and reviewed with data
source / sink suppliers for external stuff. Code modules are all path tested and execution
time monitored. Each module is assigned its longest run time. The total of these times
for all modules that execute between timer interrupts are summed and must be less than
the minimum interrupt time.
9. For interrupts, endurance testing with appropriate monitors allows us to determine the
effective worst-case timing as well as the average conditions.
10. Automated test cases are developed that exercise the limits of the system. A “rogue
partition” is used to stress the partitioning aspects of the O/S.
12. All of them. Software requirements/design review, test readiness review, code/design
review, unit tests, integration tests, software/hardware system tests
13. The systems team uses MatrixX to output equations. These equations are then modeled
and simulated using MATLAB, which also produces Bode plots. The software team
utilizes a tool (WindView) to verify that the tasks are being scheduled and pre-empted
as required.
14. End to end testing of LRU inputs and outputs. Test cases designed to assure full loading
on data inputs
15. The worst-case interval time is measured and recorded. The fact that the system is so
time-deterministic combined with the above handling of overrun conditions assures that
while we are executing the engine control software, we will be meeting our time-critical
functions.
2. Typically we’ve used dual-port RAM for multiple-processor systems; we don’t use
multi-threaded executives.
3. A message-passing library is provided in the RTOS, priority- and FIFO-based.
Semaphores of various types (binary, counting, and mutex) are also used.
4. We pass messages in one of two ways. The preferred method is to place the data packet
in general purpose registers, and call the service routine with the knowledge that the
service routine will look for specific data in specific registers. An alternate way is to pass
a pointer to the data specific to the message to the service routine.
5. Event flags; rendezvous; data passed through shared memory, and through shared
buffers owned by the RTS.
6. RTOS based messages
7. Don’t do it
8. Unfamiliar with the term; don’t understand the question. Our communications
software was internally developed code to do data handling within the processor system,
formatted to one of the popular protocols such as ARINC 429, RS-232, RS-422,
MIL-STD-1553, etc.
9. Shared memory accesses using semaphores, double buffering, and interrupt blocking
have all been used
11. None
12. ARINC
13. Information passed internally on the same processor is often passed via message queues.
Information passed to other processors is passed in dual-port memory.
15. Not via message-passing mechanisms; the major components in the system
communicate largely via memory interfaces.
Do your messages communicate with each other? If yes, please explain how:
1. Processing entities communicate with each other according to the following paradigm
“Fast but dumb vs. smart but slow.” If the messages are simple, the sender
communicates directly with the recipient. If the message is not simple, it goes to a
processing unit with more processing capability, which can figure out what’s to be done
and inform all other interested processing units.
2. No
3. Tasks communicate with interrupt routines and other tasks. Messages are just the
carriers.
4. No
5. No
6. No. They only pass information to other tasks.
7. Still not clear how to answer- we have serial digital communications and traditional
analog /discrete I/O and follow a popular industry standard. Handshaking, parity etc. are
used as required.
8. No
9. No
11. No, our messages internal to our system tend to go one direction
12. Tasks and processes communicate with each other using messages; messages don’t
communicate with each other.
13. N/A
1. Custom CPUs for the fast but dumb, and commercial CPUs for smart but slow.
2. 8051, 68332, TMS34020, and we are eyeing PowerPC-style processors for future
products
7. TI9900 (I’m a really OLD guy), Intel 80C186, 80C386EX, 68HC11, Motorola 380020
and 380040(memory is hazy- probably wrong), TI 320C30 DSP (as I said, I’m an old
guy)
8. Intel 80186, Motorola 68HC11, Motorola 68HC16, Motorola 683XX and PowerPC403
9. Intel x86 family (386, 486, Pentium I, II, III)
12. Currently we are using the PowerPC for our controlling processor. Previously we used
the Intel i960MC.
Have you found any peculiarities with any of the processors that affect the real-time scheduling
analysis? If yes, please explain the peculiarities and how they were addressed:
3. We have found that the uncertainty in the number of instruction cycles to vector to an
interrupt service routine has led us to utilize external hardware clocks for timing in
certain cases.
5. No
6. Slightly off topic – a long time ago a Texas Instruments processor which claimed an
asynchronous reset capability was shown to have a synchronous window when it did not
reset. We changed processors.
7. We were doing the work during a time when simulation tools didn’t exist, and when they
did- it was usually too rich for our pocketbook. The general approach was to “let it fly”
for many times the expected mission time and fully exercise all modes of the application
code and pray we had timing margins by looking at what was felt to be a “worst case”
data moving scenario. I’m certain those tools have developed and are more affordable
today.
8. Different processors have different mechanisms for assigning interrupt priorities and for
interrupt masking
12. The limitation of the PowerPC (not a peculiarity), as I see it, is that it only allows for one
external interrupt into the processor. As we rely on multiple interrupts, our system
hardware has to design external interrupt controls to handle the multiple interrupts – but it
is impossible to prioritize the interrupts since the PowerPC accepts only one external
interrupt. The Intel i960MC handled multiple interrupts that could easily be prioritized.
The interrupt limitation is addressed at design time by attempting to ensure that the interrupts
will be spaced out, and then the software works to ensure the handlers are as quick as possible.
14. No
Do your systems use a single processor or multiple processors? If multiple processors, how is
the system functionality distributed and handled across processors?
1. Multiple. Minimize the amount of communication that must flow between processors.
Try to have a predominant direction for communication: A with respect to B is mainly A
sends to B.
3. Our current work uses a single processor. Many years ago a multiprocessor system was
used with shared memory.
4. Single
7. Redundant single
8. Our applications were generally single processor. We did do some multiple processor
work, but it was usually with some form of shared dual port registers or RAM. We felt
interaction needed to be minimized to mitigate control problems where one unit might
“jam” another one. Unambiguous synchronization was occasionally a problem.
10. Multiple processors; however, each is treated as an independent processor. That is, they
work as federated LRUs interconnected by an avionics bus.
11. Single
12. It is a dual channel operation with each channel having the same software. Each channel
is executing during operation with one channel being in control and active while the other
channel is in the stand-by mode.
13. Our system contains multiple processors but each processor handles a separate function.
One processor is the controller, another processor handles displays, another processor
communicates to recording devices, etc. Information between the processors is
communicated through dual-port shared memory.
14. Multiple. Display/keyboard, communications I/O, sensor I/O, high-level radio functions,
low-level radio functions, DSP. A serial data link is typically used between processors.
A.5 SCHEDULING.
What scheduling algorithms/approaches have you used to schedule your system tasks at run
time? Please match the algorithm (e.g., preemptive priority, round robin, etc.) with the system
type (e.g., display, communication, navigation, etc.)
2. Do not use real time schedulers
3. Round robin with time-outs: overheat detection systems
11. Our systems utilize very simple interrupt-based schedulers for efficiency and safety
reasons. There are no priorities; processes run when invoked; completion is checked
using Boolean flags.
12. Foreground/background with minor-frame watchdog monitor: engine control
If you used priority scheduling, how many priority levels were assigned? How was priority
inversion avoided? How did the number of priority levels compare to the number of processes?
1. Each process had its own priority level, that is, we really used process identification
directly, rather than a classification according to priority. Furthermore, depending upon
the situation at hand, different processes were chosen for resource allocation. That is, in
one situation where tasks A, B and C were all ready to run, process B would be chosen,
but in another situation, process C would be chosen. There was a look-up table which
was provided with an “address” constructed of the Booleans for each runnable task, true
for each of the tasks that were ready to run, and other situation bits of interest. The
content of the look-up table was the identification of the task that was to be run in that
situation. Each task had at most a very small critical section. On completion of each
task, the situation is detected and supplied as input to the look-up table. Each task is very
short.
2. N/A
3. The temperature system was event-driven. Each task has a unique priority. Five-task system.
4. Three priority levels; priority inversion was eliminated by using priority registers that
were initialized and subsequently left alone. There are six processes.
5. On one particular project: 12 levels. Priority inversion was avoided by careful software
design. Some tasks share the same priority number (round-robin effect). Total tasks: 15.
7. I recall there were at least three levels, possibly as many as five; our applications were
tiny, less than 100K lines of HOL code.
8. 10-20 priority levels are typical. Priority inversion is avoided by limiting the amount of
resource sharing between widely disparate priority tasks.
9. N/A
10. N/A
11. Ten priority levels are assigned. Each rate group is assigned its own priority level.
Message queues are not deleted and non-blocking queues are defined.
12. Four tasks at different priority levels. By design, high-priority tasks do not wait on
low-priority tasks.
13. N/A
What kind of scheduling problems have you encountered in multitasking systems and how were
they addressed?
1. One (more instructions needed in a time interval) was solved by increasing the clock
speed on the processor. One (couldn’t figure out which task to run next by examining
successive status bits, because status bits changed so quickly) was solved by latching
status into an address register for a look-up table. One (priority inversion) was addressed
by aggressively pruning the content of critical sections.
2. N/A
3. Working with duration time for periodic delays caused us to change the implementation
to work with absolute time for delays (delay until). The problem was solved in the OS by
incrementing time in a non-preemptable block.
4. N/A
5. Not enough bandwidth for the processor (this is a common problem). Task rates are
adjusted, and task workloads are shifted.
6. None
7. Crashes were painfully examined with support from debuggers and a logic analyzer.
Most troubles were traced to improper manipulation of the memory register stack rather
than some undocumented hardware feature.
8. I’ve encountered “deadly embrace” problems which were addressed by a re-design of the
affected tasks.
9. Don’t know
10. Minor frame overruns, which cause a watchdog interrupt and thus a software reset.
11. We have seen task overruns. When this occurs, sometimes a code problem is found and
fixed. If indeed more is scheduled than can be completed in the allocated time, all
functions in the task are evaluated to determine which functions can be done at a slower
rate or divided up slightly differently.
12. No comment
13. N/A
Have you used real-time operating systems to support your schedule guarantees? If yes, what
kind of operating systems have you used and what kind of scheduling challenges have you
encountered?
1. NO
4. Not really, we typically use C language asserts to detect their occurrence and then design
them out.
5. NA
6. I left the field before using a commercial RTOS.
9. An operating system using Ada or assembly. The challenge is keeping from causing a
minor frame overrun.
10. VxWorks 5.4. The main challenge is interfacing to our custom hardware.
Do you verify what data gets dumped, due to priority settings and functions getting preempted?
If yes, how does it affect your system?
1. N/A
2. We have verified context switching during our work on certification of the VxWorks/Cert
OS
3. Preempted functions, because of priority settings, are only delayed. In our designs,
preemption generally happens when we are trying to transmit on an ARINC 429 bus
while simultaneously receiving data. This means functions that come up while the
transmitter is on will have to wait until it is off to complete. We guarantee that this will
happen in less than 200 ms.
4. Yes. If the problem is severe enough then the system is declared invalid and
functionality is removed
5. No
6. N/A
8. Yes. The impact varies. In some cases the loss of data is acceptable and in other cases it
is absolutely critical that no data is lost.
9. Don’t know
10. N/A
11. No
12. Buffer sizes and interrupt/task priorities are selected to assure that no data are lost
13. N/A
A.5.6 SCHEDULING QUESTION F.
Do you use tools to assist in the real-time scheduling analysis? If yes, what kind of tools? How
are the outputs of these tools verified?
3. None yet
5. Real-time timing and analysis tools, assisted by debugger tools. Outputs are verified by
timing measurements and analysis of data.
8. No
10. Scope and ARINC bus analyzer. Outputs are verified by monitoring certain variables
in software.
12. NO
A.5.7 SCHEDULING QUESTION G.
What trends in commercial aviation systems do you think will challenge the current scheduling
approaches (i.e., may lead to the need for new scheduling algorithms)?
1. Desire to reuse, desire to inherit confidence from re-use, desire to use non-developmental
items
2. Don’t know
4. A virtual explosion in the use of very high speed data buses will be very challenging.
9. Don’t know.
12. The move to DO-178B will challenge our existing approach; thus, we are looking into
off-the-shelf RTOSs that handle these challenges. It seems that data protection and proof
of code coverage will be bigger challenges than the scheduling algorithms.
A.6 TIMING.
After system development, do you verify that deadlines are met and scheduling analysis
assumptions are correct? If yes, please explain how.
Of the 15 survey respondents, the answers were as follows:
1. Yes. Test all the paths through the logic, which we did, using test inputs and a logic
analyzer. Also, in a communication system, the achievable bit error rate vs. signal-to-
noise ratio is calculable. We could control the signal and the noise, and we used end-to-
end bit error rate testing over very long periods of time, and could observe performance
to be that predicted by theory.
4. Yes, we verify at the board level using functional tests, we incorporate flight tests with
parameter evaluation, and we measured during software verification and validation.
5. Yes. Real-time checks are continually made in the software. Status codes are stored in
NVRAM for after the fact viewing.
6. All deadlines specified as system or software requirements are verified. Less formal
analysis is not always verified.
7. As mentioned above, I’m an old guy, and we were rather primitive in the 1970s and 1980s.
9. Yes, via a combination of system/function testing (based upon system requirements) and
also standard software testing.
11. Yes. Per our software requirements, we review the timing analysis data.
12. Yes, several methods are used for verification. While running special system scripts, an
oscilloscope is used to verify the timing of certain events. 1553 bus captures are done to
ensure specific data is sent out at the required time. Inputs are stimulated, and the latency
to the associated output is measured to verify it is within tolerance. In addition, inspections
are done to verify that the code performs certain operations at a specified time, if the
previous methods cannot be used to verify this.
13. Yes. Apply data at maximum rate to LRU inputs, check for timely and appropriate
response at LRU outputs. Error messages emitted on buffer overflow
14. Yes. The critical timing functions are verified by externally recording the time from the
input initiation to the required action. This recording is done external to the electronic
control unit by inputting stimuli to the unit and measuring the action from the system.
A.6.2 TIMING QUESTION B.
In what areas of timing verification or validation have you encountered problems and how were
they addressed?
1. Timing verification for us was much easier using test equipment than by using software
self-monitoring tools. Writing to a logic-analyzer accessible output register at known
points in the software can be helpful.
2. The biggest problems have been in the area of system response times following application
of power. The most common means of addressing the problem has been to shed tasks that
would otherwise be performed before system responses are supplied (e.g., BIT).
3. Another area is in time-based data processing systems, where an uneven workload has
resulted in some main-loop overruns; the solution has been to reallocate tasks to even out
the processing.
6. If an abnormality occurs, there is never enough data to help in analysis. If the problem is
very intermittent then special modified software may be needed to analyze the problem.
8. I think you’re asking a question that may be a no-brainer because we tended to be rather
conservative in our designs. It just wasn’t part of our culture to press the envelope since
we lacked adequate simulation tools that should be part of a designer’s kit in handling
timing issues.
10. Timing problems typically appear because of shortcomings in the original requirements.
11. Instrumentation requires access to the address and data buses; this is not possible in
production (sealed box) hardware.
12. Cannot test every possible failure mode or condition. It gets even harder to test multiple
failure modes/conditions.
13. Verifying the rate of data exchange between processors has been challenging. Special
“debug” code is left in a build to be used for verification
14. Unknown
15. None
A.7 FAULT-TOLERANT.
1. Situation bits in the “what to do next” look-up table included error conditions having
been signaled. Error detection and error handlers appear in the system at several levels.
We controlled the communication input, as well as the signal-to-noise ratio, and we could
inject faults in many places to test responses to faults.
3. N/A
4. Our testing incorporates fault injection at times to evaluate failure mechanism monitors.
For example, we turn off the watchdog toggle and evaluate the impact it has on timing.
We also write patches that allow delay loops to extend to infinity while doing nothing.
This helps us evaluate the effectiveness of our monitors and our protection schemes.
6. C language asserts are set up in the code to catch timing faults that occur during run-time.
These cause a pre-defined software fault that can be detected and traced external to the
system.
7. External to the processor system – yes. See above ‘all paths’ comments
8. Our faults were externally inserted through the I/O, but generally we ignored this area by
creating an environment where program execution was assumed by a Watchdog Timer
that was updated at some regular interval from the Executive. Being a UAV had its
advantages.
9. Yes. In some cases, things such as task slippage are completely acceptable
11. Yes. We try to accommodate it by using other sensor data; if the condition is adverse,
we switch to the stand-by channel.
12. No
13. Unknown
14. No
In your opinion, what are the major issues regarding RTSA and its verification?
1. The greatest difficulty I have experienced is from sources of requirements who believe
what they want can be had sufficiently instantaneously that they do not examine their
needs, and are therefore not able to establish what their requirements are. The second
largest source of difficulty is from software teams that do not want to have to meet real
time requirements, and tend to claim that whatever the task, their code and their favorite
operating system is bound to be good enough. This gets reflected to the team manager
who cannot force the programmers to use the appropriate tools.
3. The OS, the hardware, the device drivers, are trying to abstract the performance of the
underlying application so that it appears to run on a virtual processing system. This
virtual processing system is trying to optimize performance by ‘globalizing’ information,
e.g. cache (instruction and data), pipelining, speculative instruction scheduling, shared
resources (e.g. memory). At the same time the OS is trying its best to keep throughput
high using buffering, interrupt driven drivers, asynchronous peripherals, bus snooping
etc. This makes time determinism very hard.
4. The major issue is not the first implementation of the real time system. It is the
modifications and changes that occur later as the system evolves, picks up more features
and functions. Regression testing is supposed to discover all those added anomalies, but
it doesn’t always.
5. The tools are very expensive and not always available; nor is training for the tools
always available.
6. Poor early analysis and design. Poor institutionalization of worst case throughput
analysis
8. It tends to be somewhat intuitive and has the potential for being sub-optimal
it does that). Consequently, the resulting documentation for an OS consists of very
detailed software requirements, without any real system requirements. DO-178B and
the FAA/JAA seem to have a difficult time with this reality. I have attended SOI audits
where much discussion was spent on the requirements because they were so detailed and
didn't trace up to high-level requirements. This is a fact of life when dealing with
platform/foundation/utility software.
11. I think the issues for the future mainly have to do with the verification, not so much the
actual scheduling: better non-intrusive tools with more visibility into the scheduling
aspects. Possibly the processors themselves need built-in hooks/functionality that can be
tapped into.
12. No comment
What other real-time scheduling experience or issues would you like to share?
2. None
4. None
5. None
6. None
7. None
II: 6 yes, 8 no, 1 maybe.
III: If you said “yes” to II, what is your area of interest?
1. ATC systems
2. We certified Ada RTSs, VxWorks/Cert, and a BSP, and we continue certification work
with a partitioned Integrated Modular Architecture. We are particularly interested in solving
the time-analysis problem. Right now we push the problem back to the user of the RTOS, but
I expect we will be asked many more questions about the contribution that the OS makes
to the timing of the application.
3. We have thought about this a lot; while we can produce a lot of data, we don’t know
what format it should take to be useful to the user.
5. Software development.
6. Verification and certification
8. I think COTS and partitioned operating systems are getting more and more attention;
I’d like to learn more about these.
APPENDIX B—THE RESEARCH PROJECT DETAILS WITH ASSUMPTIONS
In this appendix, the known results are described in sections B.1 and B.2. The new results
obtained in this research effort are described in sections B.3 through B.6.
EDF can be implemented by maintaining a queue of active tasks (i.e., tasks that have made a
request but have not yet finished execution) arranged in ascending order of the deadlines of the
tasks' current requests. Whenever the processor becomes free for assignment, the task at the head
of the queue is assigned to the processor. When a new request arrives, its deadline is compared
with the deadline of the task that is currently executing; if the deadline of the newly arrived
request is earlier, the new request receives the processor, and the task that was executing is
preempted and put back in the queue. Because of the queue operations involved, EDF is usually
implemented in software. This makes it less appealing than Fixed-Priority scheduling algorithms,
since its context-switching time is higher. On the other hand, EDF yields a higher processor
utilization than Fixed-Priority scheduling algorithms.
Table B-1 shows a real-time TS whose EDF schedule is shown in figure B-1.

    Ti    si    ei    di    pi
    T1    0     2     6     15
    T2    1     1     3     5
    T3    0     1     2     3

Figure B-1 (the EDF schedule over time slots 0 through 16, idle slots omitted):
    T3 T2 T1 T3 T1 T3 T2 T3 T2 T3 T3 …
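The queue discipline described above can be sketched as a small unit-time-step simulation. The sketch below is illustrative only (the function name and the task-tuple encoding are not from the report); run on the table B-1 task set, it checks whether every deadline falling within a chosen horizon is met.

```python
import heapq

def edf_simulate(tasks, horizon):
    """Unit-time-step EDF simulation of periodic tasks.

    tasks: list of (s, e, d, p) tuples: integer initial request time,
    execution time, relative deadline, and period.
    Returns True if no deadline falling within the horizon is missed.
    """
    heap = []                                  # [absolute_deadline, task_id, remaining]
    release = [s for (s, e, d, p) in tasks]    # next release time of each task
    for t in range(horizon):
        for i, (s, e, d, p) in enumerate(tasks):
            if t == release[i]:                # task i issues a new request
                heapq.heappush(heap, [t + d, i, e])
                release[i] = t + p
        if heap:
            if heap[0][0] <= t:                # earliest deadline already passed
                return False
            heap[0][2] -= 1                    # run the earliest-deadline request
            if heap[0][2] == 0:                # request finished
                heapq.heappop(heap)
    # any unfinished request due at or before the horizon is a miss
    return all(dl > horizon for dl, _i, _rem in heap)
```

On the table B-1 task set, `edf_simulate([(0, 2, 6, 15), (1, 1, 3, 5), (0, 1, 2, 3)], 31)` reports a valid schedule, consistent with figure B-1 (the horizon 31 is s + 2P for that task set).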
The question of determining if a real-time task system (TS) is schedulable on one processor by
EDF is considered. This is tantamount to determining if the schedule produced by EDF is valid.
As the next theorem [B-3] shows, some simple cases of this question can be determined
efficiently.
Theorem 1: Let TS be a real-time TS in which di = pi for each 1 ≤ i ≤ n. TS is schedulable by
EDF on one processor if, and only if, ∑i=1..n ei/pi ≤ 1.

Theorem 1 gives a simple test for schedulability in the special case where di = pi for each
1 ≤ i ≤ n: the TS is schedulable if, and only if, ∑i=1..n ei/pi ≤ 1.
There is no simple test for schedulability when di is not equal to pi for some 1 ≤ i ≤ n. If
∑i=1..n ei/pi > 1, one can safely say that the TS is not schedulable; on the other hand, if
∑i=1..n ei/pi ≤ 1, one cannot conclude that it is schedulable. Similarly, if ∑i=1..n ei/di ≤ 1, one
can safely say that it is schedulable; on the other hand, if ∑i=1..n ei/di > 1, one cannot conclude
that it is not schedulable.
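These two quick tests are simple enough to sketch directly. The helper below is an illustrative sketch (the function name and return strings are not from the report); it uses exact rational arithmetic and reports "inconclusive" when neither test decides.

```python
from fractions import Fraction

def utilization_test(tasks):
    """Apply the two quick tests above to a TS.

    tasks: list of (e, d, p) tuples. Returns 'not schedulable' when
    the necessary test fails, 'schedulable' when the sufficient test
    passes, and 'inconclusive' otherwise.
    """
    util = sum(Fraction(e, p) for (e, d, p) in tasks)     # sum of ei/pi
    density = sum(Fraction(e, d) for (e, d, p) in tasks)  # sum of ei/di
    if util > 1:
        return 'not schedulable'    # utilization above 1: cannot be scheduled
    if density <= 1:
        return 'schedulable'        # density at most 1: always schedulable
    return 'inconclusive'           # neither quick test is decisive
```

For the table B-1 task set the result is "inconclusive" (utilization 2/3, but density 7/6), which is why the exhaustive check over a bounded interval, described next, is needed.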
While there is no simple test for the general case, one can still test schedulability by checking if
the deadline of each request of each task in the EDF schedule is met. For this method to work,
one needs to establish an a priori time-bound for which one needs to construct the EDF schedule.
If the initial request times of all tasks are identical, an obvious time-bound would be P = least
common multiple of { p1 ,..., pn } . This is because all tasks simultaneously make a request at their
initial request time, and then again simultaneously make a request at P time units later. The EDF
schedule will be cyclic with a cycle length equal to P. Thus, if the EDF schedule is valid for a
period of P time units, it will be valid for any length of time.
On the other hand, if the initial request times of the tasks are not identical, it is not clear that such
a time-bound necessarily exists. Leung and Merrill [B-3] showed that such a time-bound indeed
exists and is given by s + 2 P , where s = max{s1 ,..., sn } . In the following discussions,
min{s1 ,..., sn } = 0 is assumed.
Let S be the EDF schedule of the real-time TS. Define the configuration of S at time t, denoted
by CS(t), to be the n-tuple (e1,t, ..., en,t), where ei,t is the amount of time for which Ti has executed
since its last request up until time t; ei,t is undefined if t < si. Leung and Merrill [B-3]
proved the following theorem.
Theorem 2: Let S be the EDF schedule of a real-time TS. TS is schedulable by EDF on one
processor if, and only if, (1) all deadlines in the interval [0, t2] are met in S, where t2 = s + 2 P ,
and (2) CS (t1 ) = CS (t2 ) , where t1 = t2 − P .
Note that the running time of the above algorithm is an exponential function of the input
parameters, p1 ,..., pn . One wonders whether there are more efficient algorithms, e.g., algorithms
whose running time is a polynomial function of the input parameters. Unfortunately, Leung and
Merrill [B-3] showed that it is unlikely that such an algorithm could be found, as the next
theorem shows.
NP-complete problems are a class of notorious computational problems. This class of problems
has the property that if any problem in the class has a polynomial-time algorithm, then every
problem in the class would have a polynomial-time algorithm. At the present time, none of the
NP-complete problems can be solved in polynomial time. Since this class contains many
notorious problems (such as the traveling salesman problem and the Hamiltonian path problem),
which have been studied for more than a century, it is widely conjectured that none of the
NP-complete problems can be solved in polynomial time. The reader is referred to the excellent
book by Garey and Johnson [B-4] for a discussion of the concept and implications of
NP-completeness and NP-hardness (which will come up again in theorems 4, 7, 10, and 11).
Leung [B-5] showed that EDF is not optimal for m > 1 processors. At the present time, no
simple algorithm is known to be optimal for more than one processor. Lawler and Martel [B-6]
used the idea of network flow to construct a valid schedule on m processors if the TS is indeed
feasible on m processors. At run time, the scheduler must schedule tasks according to the
scheduling table. This is essentially the Clock-Driven approach, which, as discussed in
section 3, is not desirable.
There are two general approaches in scheduling on m > 1 processors: the global approach and
the partition approach. In the global approach, the m processors are treated as a pool, and an
active task is assigned to an available processor if there is one. Otherwise, it will be put into a
waiting queue until a processor becomes available. The waiting queue is ordered by the
priorities of the tasks. By contrast, the partition approach partitions the set of tasks into m
groups, with each group of tasks assigned to a processor. Tasks assigned to a processor can only
be executed by that processor. The partition approach is generally preferred over the global
approach because of the ease in processor management and the availability of optimal algorithms
for one processor. The following discussions are restricted to the partition approach only.
The main goal in multiprocessor scheduling is to partition a set of real-time tasks into the
smallest number of groups such that each group is schedulable by EDF on one processor.
Unfortunately, this problem is also NP-hard, as shown by Leung and Whitehead [B-7]. (Note:
An NP-hard problem is at least as hard as an NP-complete problem, and possibly harder. At the
present time, there are no known polynomial-time algorithms to solve either an NP-complete or
an NP-hard problem. It is widely conjectured that none of these can be solved in polynomial
time.)
Theorem 4: The problem of partitioning a set of real-time tasks into the smallest number of
groups such that each group is schedulable by EDF on one processor is NP-hard.
Theorem 4 suggests that it is extremely unlikely that this problem can be solved in polynomial time. Thus,
there is a need to develop fast heuristics that will yield near-optimal solutions. In the literature,
there seems to be a pronounced absence of fast heuristics for the general case. This is probably
due to the fact that it is time-consuming to check if a set of tasks is schedulable on one processor
by EDF. However, for the special case where each task has its relative deadline identical to its
period, there is a simple test for schedulability: a set of n tasks is schedulable on one processor
by EDF if, and only if, ∑i=1..n ei/pi ≤ 1. One would expect that fast heuristics exist for this
special case.
As it turns out, this special case can be modeled as a bin packing problem. In the bin packing
problem, an infinite collection of unit-capacity bins and a list of pieces with sizes between 0 and
1 are given. The goal is to pack the pieces into a minimum number of bins so that no bin
contains pieces with sizes totaling more than 1. The bin packing problem is known to be NP-
hard, but there are numerous effective heuristics for it; see Coffman, Garey, and Johnson [B-8]
for a survey. One can model the task partition problem as a bin packing problem as follows.
Each processor is viewed as a unit-capacity bin. The tasks are treated as pieces with sizes given
by ei/pi. The problem of partitioning the tasks into a minimum number of groups so that each
group is schedulable by EDF is equivalent to packing the pieces into a minimum number of bins
so that no bin contains pieces with sizes totaling more than 1.
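As a concrete illustration of this modeling, a First-Fit packing of tasks by utilization can be sketched as follows (the function name is illustrative; First-Fit is one of the classical bin packing heuristics of the kind surveyed in [B-8]):

```python
from fractions import Fraction

def first_fit_partition(tasks):
    """First-Fit heuristic for the special case di = pi.

    tasks: list of (e, p) tuples. Each processor is a unit-capacity
    bin, and a task's piece size is its utilization e/p. Returns a
    list of groups, each EDF-schedulable on one processor.
    """
    bins = []                          # each bin: [load, assigned_tasks]
    for (e, p) in tasks:
        size = Fraction(e, p)
        for b in bins:
            if b[0] + size <= 1:       # task fits in the first bin that can hold it
                b[0] += size
                b[1].append((e, p))
                break
        else:
            bins.append([size, [(e, p)]])   # no open bin fits: open a new one
    return [b[1] for b in bins]
```

For example, three tasks of utilization 1/2 each pack into two bins: the first two fill one processor exactly, and the third opens a second.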
As mentioned in section 3, the Deadline Monotonic (DM) algorithm is optimal on one processor
with respect to the Fixed-Priority scheduling discipline [B-7]. This means that any real-time TS
schedulable on one processor by any Fixed-Priority scheduling algorithm is also schedulable by
DM. It also means that a real-time TS is feasible on one processor with respect to Fixed-Priority
scheduling discipline if, and only if, it is schedulable by DM. Since DM is optimal on one
processor, there is no compelling reason to consider other Fixed-Priority scheduling algorithms.
For this reason, the study will be restricted to DM only.
DM assigns the highest priority to the task with the smallest relative deadline and the lowest
priority to the task with the largest relative deadline. When the relative deadline of each task is
identical to its period, DM coincides with the Rate-Monotonic (RM) algorithm of Liu and
Layland [B-2]. DM and RM can be implemented by attaching the priority of the task to the
hardware interrupt level; i.e., the task with the highest priority is assigned to the highest level
interrupt. Scheduling is implemented by hardware interrupt, and context switching is done in the
interrupt handling routine. Thus, the overhead involved in scheduling can be kept to a minimum.
    Ti    si    ei    di    pi
    T1    0     1     2     2
    T2    1     1     4     4
    T3    0     1     8     8

The DM schedule over time slots 0 through 23 (idle slots omitted):
    T1 T2 T1 T3 T1 T2 T1 T1 T2 T1 T3 T1 T2 T1 T1 T2 T1 T3 T1 T2 …
Theorem 5: The schedule produced by DM is valid if the deadline of the first request of each
task is met when all tasks make their first request simultaneously.
So far, the study had been assuming that tasks make requests periodically. In some situations,
tasks may make requests at random times, but it is guaranteed that two consecutive requests of
the same task, say Ti, are separated by a minimum time interval, say pi. These kinds of tasks are
called sporadic tasks, while the tasks defined earlier are called periodic tasks. Note that DM is
also optimal, and Theorem 5 is also applicable, for sporadic tasks.
Liu and Layland [B-2] gave a sufficiency test for a real-time TS to be schedulable by DM (or
RM). Their sufficiency test assumes that the relative deadline of each task is identical to its
period. (Under this assumption, DM and RM are identical.) As the following theorem shows,
this gives a test to check schedulability faster than the above method.
Theorem 6: Let TS be a real-time TS consisting of n real-time tasks, where each task’s relative
deadline is identical to its period. TS is schedulable by DM (or RM) if ∑i=1..n ei/pi ≤ n(2^(1/n) − 1).
A simple test for schedulability consists of comparing the sum of the utilization factors (ei/pi) of
the n tasks against the value n(2^(1/n) − 1). If the sum of the utilization factors is less than or equal
to n(2^(1/n) − 1), then the real-time TS is schedulable by DM (or RM). Otherwise, the test is
inconclusive; the TS may or may not be schedulable. The function n(2^(1/n) − 1) is a decreasing
function of n; as n approaches infinity, it approaches ln 2 = 0.69....
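The bound and the resulting sufficient test are easy to compute. The sketch below is illustrative (the function names are not from the report); note that failing the test proves nothing.

```python
def liu_layland_bound(n):
    """The utilization bound n(2^(1/n) - 1) of theorem 6."""
    return n * (2 ** (1.0 / n) - 1)

def rm_sufficient_test(tasks):
    """Sufficient (but not necessary) DM/RM schedulability test for
    tasks whose relative deadline equals their period.
    tasks: list of (e, p) tuples."""
    total_utilization = sum(e / p for (e, p) in tasks)
    return total_utilization <= liu_layland_bound(len(tasks))
```

For one task the bound is 1; for two, about 0.828; and it decreases toward ln 2 ≈ 0.693 as n grows.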
Leung and Whitehead [B-7] showed that DM is not optimal for m > 1 processors. At the present
time, no simple algorithm is known to be optimal for more than one processor. Researchers have
considered the partition approach in scheduling on multiprocessor systems. Unfortunately,
Leung and Whitehead [B-7] showed the following negative result.
Theorem 7: The problem of partitioning a set of real-time tasks into a minimum number of
groups so that each group is schedulable by DM on one processor is NP-hard.
Theorem 7 shows that it is extremely unlikely that a fast algorithm exists to schedule a set of
real-time tasks on a minimum number of processors. Motivated by this computational
complexity, Dhall and Liu [B-9] considered two heuristics, the Rate-Monotonic-Next-Fit
(RMNF) and the Rate-Monotonic-First-Fit (RMFF) algorithms, both of which are adapted from
bin packing heuristics. They showed that the worst-case bounds for RMNF and RMFF are 2.67
and 4·2^(1/3)/(1 + 2^(1/3)), respectively. Specifically, they showed the following theorem.
Theorem 8: Let NRMNF and NRMFF be the numbers of processors required to feasibly schedule a
set of tasks by the RMNF and RMFF algorithms, respectively, and let NOPT be the minimum
number required. Then, as NOPT approaches infinity, 2.4 ≤ NRMNF/NOPT ≤ 2.67 and
2 ≤ NRMFF/NOPT ≤ 4·2^(1/3)/(1 + 2^(1/3)).
In section B.2, it was assumed that the system has as many priority levels as the number of real-
time tasks; consequently, each task can be assigned a distinct priority. In practice, the number of
priority levels in a computer system is very limited and is far exceeded by the number of real-
time tasks. Thus, several tasks need to be mapped to the same priority level. In the remainder of
this section, it is assumed that there are m priority levels into which n tasks are mapped,
where m < n. When tasks with the same priority make a request, it is assumed that they are
scheduled in a first-come, first-served (FCFS) manner. This is usually how the hardware
services interrupts.
Two important questions naturally arise. First, how does one determine if a given priority
assignment produces a valid schedule? Note that several tasks at the same priority level may
simultaneously make a request, and it is not clear how to characterize the worst-case scenario.
Second, what is an optimal priority assignment? Here, an optimal priority assignment means that
it can always schedule any set of tasks that can be feasibly scheduled on m priority levels.
Pertaining to the first question, theorem 5 (stated in section B.2) is still applicable to this model,
but one more condition needs to be satisfied as well. The additional condition is that tasks at the
same priority level are serviced in the reverse order of their relative deadlines; i.e., the task with
the largest relative deadline is serviced first and the task with the smallest relative deadline is
serviced last. This is because tasks at the same priority level may make requests in the reverse
order of their relative deadlines. Since they are scheduled in a FCFS manner, this represents the
worst-case scenario. Thus, the following theorem holds.
Theorem 9: The schedule produced by a given priority assignment is valid if the deadline of the
first request of each task is met when all tasks make their first request simultaneously, with the
stipulation that tasks at the same priority level are serviced in the reverse order of their relative
deadlines (i.e., the task with the largest relative deadline is serviced first).
An alternative method is to carry out time-demand analysis, as described in reference B-10. Let
G denote a priority assignment, and let Gi denote the set of tasks having priority i, 1 ≤ i ≤ m . By
convention, assume that priority 1 is the highest priority and priority m is the lowest. Suppose
there are k tasks in Gi. Use Gi,j to denote the j-th task in the set. Without loss of generality,
assume that the relative deadline of Gi,j is less than or equal to that of Gi,j+1, for each 1 ≤ j < k.
For a given task Ta ∈ Gi, suppose it makes a request at time 0, along with all tasks at an equal or
higher priority than Ta. Then, the total time demand wa(t) of this request of Ta, together with all
tasks at an equal or higher priority than Ta, in the time interval [0, t) is:

    wa(t) = ∑{Tl ∈ Gi} el + ∑{j < i} ∑{Tl ∈ Gj} ⌈t/pl⌉ el,   for 0 < t ≤ da.
Task Ta can meet its deadline if, and only if, there is a time instant t, 0 < t ≤ d a , such that
wa (t ) ≤ t . Thus, all one needs to do is to check those time instants, t, which are integral
multiples of the periods of some tasks belonging to Gj, 1 ≤ j < i , to see if there is a time instant t
such that wa (t ) ≤ t .
It is easy to see that for the task set Gi, if Gi,1 can meet its deadline, then all other tasks in Gi can
also meet their deadlines. Therefore, to show that the schedule produced by a given priority
assignment is valid, all one needs to show is that Gi,1 can meet its deadline using the time-
demand analysis, for all i = 1, 2, ..., m.
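The time-demand check for Gi,1 can be sketched directly from the formula above. In this sketch (function names illustrative), the candidate instants are the multiples of the higher-priority periods in (0, da], plus da itself, which is included here as a conservative assumption:

```python
import math

def time_demand_ok(groups, i):
    """Check, via the time-demand analysis above, whether G_{i,1} (the
    task with the smallest relative deadline at priority level i) meets
    its deadline. groups: list of priority levels (index 0 = highest),
    each a list of (e, d, p) tuples.
    """
    level = groups[i]
    d_a = min(d for (e, d, p) in level)       # deadline of G_{i,1}
    higher = [task for g in groups[:i] for task in g]
    points = {d_a}                            # candidate instants in (0, d_a]
    for (e, d, p) in higher:
        points.update(range(p, d_a + 1, p))   # multiples of higher-priority periods
    for t in sorted(points):
        demand = sum(e for (e, d, p) in level)                      # same level
        demand += sum(math.ceil(t / p) * e for (e, d, p) in higher) # higher levels
        if demand <= t:                       # demand met by time t
            return True
    return False

def assignment_valid(groups):
    """A priority assignment is valid if G_{i,1} meets its deadline
    for every level i."""
    return all(time_demand_ok(groups, i) for i in range(len(groups)))
```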
The running time of the above procedure is a polynomial function of m, n and max{di } . It is a
pseudo-polynomial time algorithm. A pseudo-polynomial time algorithm is one that runs in
polynomial time with respect to the size of the input, provided that all integer parameters are
represented in unary (base 1) notation. (Note: normally, integer parameters are represented in
binary (base 2) notation. The effect of representing integers in unary is to inflate the size of the
input so that an exponential-time algorithm looks polynomial.)
Again, pertaining to the second question, recall that DM assigns the highest priority to the task
with the smallest relative deadline and that DM is optimal when there are as many system
priority levels as the number of tasks. One can adapt DM to systems with limited priority levels
as follows. Assign the highest priority to the task with the smallest relative deadline. Keep on
assigning the same priority to the task with the next smallest relative deadline until it is infeasible
(according to the time-demand analysis given above), at which point assign the next priority
level to the task. This assignment is called the Deadline-Monotonic-with-Limited-Priority-Level
(DM-LPL) assignment.
Shown below is algorithm DM-LPL. It tries to assign m priority levels to n tasks, where m < n.
If the resulting assignment is not valid, it will output Not schedulable. In the algorithm, Gi
denotes the set of tasks with priority level i, 1 ≤ i ≤ m ; it is assumed that priority level 1 is the
highest priority and priority level m is the lowest.
Sort the tasks in ascending order of their relative deadlines; i.e., d1 ≤ d2 ≤ ... ≤ dn.
Let Gi = ∅ for all i, 1 ≤ i ≤ m.
i = 1.
For j = 1 to n
    Gi = Gi ∪ {Tj}.
    Use time-demand analysis to check if Gi,1 can meet its deadline.
    If Gi,1 cannot meet its deadline, then
        Gi = Gi − {Tj}.
        If i + 1 > m, then output "Not Schedulable" and exit,
        else
            Gi+1 = Gi+1 ∪ {Tj}.
            Use time-demand analysis to check if Gi+1,1 can meet its deadline.
            If Gi+1,1 cannot meet its deadline,
                then output "Not Schedulable" and exit.
            Else i = i + 1.
Output G1, G2, ..., Gm.
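The steps above can be sketched as a Python function; the (e, p, d) task representation and the inner feasibility test (the time-demand check described earlier) are illustrative assumptions, not the report's notation.

```python
import math

def dm_lpl(tasks, m):
    """DM-LPL priority assignment (a sketch).  tasks: list of (e, p, d)
    tuples; m: number of priority levels.  Returns groups G_1..G_m
    (0-indexed, index 0 highest) or None when 'Not Schedulable'."""
    def group_ok(groups, i):
        # time-demand check for G_{i,1}, as in the report
        d0 = groups[i][0][2]
        same = sum(e for e, p, d in groups[i])
        higher = [(e, p) for g in groups[:i] for e, p, d in g]
        return any(same + sum(math.ceil(t / p) * e for e, p in higher) <= t
                   for t in range(1, d0 + 1))

    tasks = sorted(tasks, key=lambda t: t[2])   # ascending relative deadline
    groups, i = [[] for _ in range(m)], 0
    for task in tasks:
        groups[i].append(task)
        if not group_ok(groups, i):
            groups[i].pop()
            if i + 1 >= m:
                return None                     # Not Schedulable
            i += 1
            groups[i].append(task)
            if not group_ok(groups, i):
                return None
    return groups

print(dm_lpl([(1, 5, 5), (2, 6, 6), (3, 9, 9)], 2))
```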
DM-LPL will now be proved to be optimal in the sense that any task system TS that is feasible on one processor with m system priority levels is also schedulable by DM-LPL. Before the theorem is proved, Lemma 1 is established. Lemma 1 shows that if a TS is feasible, there is always a valid priority assignment such that tasks with small relative deadlines have equal or higher priority than tasks with large relative deadlines.
Lemma 1: If TS is feasible on one processor with m system priority levels, then there is a valid
priority assignment such that tasks with small relative deadlines have equal or higher priority
than tasks with large relative deadlines. In other words, there is no task with a large relative
deadline having a higher priority than a task with a small relative deadline.
Proof. Since TS is feasible, there must be a valid priority assignment, say G, for the set of tasks.
If G satisfies the property of the lemma, then the lemma is proven. Otherwise, there must be two
tasks, Ta and Tb with d a < db such that Tb ∈ Gk and Ta ∈ Gk +1 . Create a new priority assignment
G ' such that Tb has the same priority as Ta and G ' is still valid. Repeating this argument will
finally prove Lemma 1.
G′ = {G′1, G′2, ..., G′m} is defined as: G′k = Gk − {Tb}, G′k+1 = Gk+1 ∪ {Tb}, and G′i = Gi for all other i, i ≠ k, k + 1. G′ will be shown to be a valid priority assignment next.
Note that the only task whose priority changes under G′ is Tb; the priorities of all other tasks remain the same as before. Since Tb is assigned a lower priority in G′, it is clear that all tasks having equal or higher priority than Tb in G can still meet their deadlines. Similarly, all tasks having lower priority than Ta in G can also meet their deadlines, since the priority change of Tb does not affect their operation. Therefore, it remains to prove that the tasks in G′k+1 can meet their deadlines under
the new priority assignment. To prove this, it is sufficient to show that the task Gk' +1,1 can still
meet its deadline. Note that Gk' +1,1 is the same task as Gk +1,1 . Let this task be Tc.
Since Tc can meet its deadline under G, there must be a time instant t1, 0 < t1 ≤ dc, such that the following inequality holds:

$$
w_c(t_1) = \sum_{T_i \in G_{k+1}} e_i + \sum_{j<k+1} \sum_{T_i \in G_j} \left\lceil \frac{t_1}{p_i} \right\rceil e_i \le t_1
$$

Under G′, the time demand of Tc is

$$
w'_c(t) = \sum_{T_i \in G'_{k+1}} e_i + \sum_{j<k+1} \sum_{T_i \in G'_j} \left\lceil \frac{t}{p_i} \right\rceil e_i, \qquad 0 < t \le d_c
$$
By assumption, dc ≤ da < db ≤ pb. Moreover, ⌈t1/pb⌉ = 1, since t1 ≤ dc ≤ pb. Thus, the time demand of Tc at t1 under G′ is

$$
\begin{aligned}
w'_c(t_1) &= \sum_{T_i \in G'_{k+1}} e_i + \sum_{j<k+1} \sum_{T_i \in G'_j} \left\lceil \frac{t_1}{p_i} \right\rceil e_i \\
&= \sum_{T_i \in G_{k+1}} e_i + e_b + \sum_{j<k+1} \sum_{T_i \in G'_j} \left\lceil \frac{t_1}{p_i} \right\rceil e_i \\
&= \sum_{T_i \in G_{k+1}} e_i + \left\lceil \frac{t_1}{p_b} \right\rceil e_b + \sum_{j<k+1} \sum_{T_i \in G'_j} \left\lceil \frac{t_1}{p_i} \right\rceil e_i \\
&= \sum_{T_i \in G_{k+1}} e_i + \sum_{j<k+1} \sum_{T_i \in G_j} \left\lceil \frac{t_1}{p_i} \right\rceil e_i \\
&= w_c(t_1) \\
&\le t_1 \le d_c.
\end{aligned}
$$
But this means that Tc can meet its deadline under G ' .
Theorem: DM-LPL is optimal; that is, any task system TS that is feasible on one processor with m system priority levels is schedulable by DM-LPL.

Proof. Let TS be a feasible task system on one processor and let G be a valid priority assignment for TS. By Lemma 1, it can be assumed that in G, tasks with small relative deadlines have equal or higher priority than tasks with large relative deadlines. If G is identical to the priority assignment obtained by DM-LPL, then the theorem is proved. Otherwise, there must be a task Tb in Gk+1 that can be feasibly assigned to Gk. Without loss of generality, assume that Tb is Gk+1,1. Let Gk,1 be Ta. Since Tb can be feasibly assigned to Gk, there must be a time instant t1, 0 < t1 ≤ da, such that wa(t1) + eb ≤ t1. Construct a new priority assignment G′ = {G′1, G′2, ..., G′m} such that G′k = Gk ∪ {Tb}, G′k+1 = Gk+1 − {Tb}, and G′i = Gi for all other i ≠ k, k + 1.
One can claim that G ' is also a valid priority assignment. It is easy to see that all tasks in Gi' ,
i ≠ k , k +1 , can still meet their deadlines, since Gi' = Gi for i ≠ k , k + 1 . Suppose Gk' +1,1 is Tc. If it
can be proved that both Ta and Tc can meet their deadlines under G ' , then G ' is also valid.
By assumption, Tb can be feasibly assigned to Gk. Therefore, under the new assignment G ' , Ta
can still meet its deadline. Since Tb can meet its deadline under G, there must be a time instant
t1, 0 < t1 ≤ db , such that
$$
\begin{aligned}
w_b(t_1) &= \sum_{T_i \in G_{k+1}} e_i + \sum_{j<k+1} \sum_{T_i \in G_j} \left\lceil \frac{t_1}{p_i} \right\rceil e_i \\
&= \sum_{T_i \in G'_{k+1}} e_i + e_b + \sum_{j<k+1} \sum_{T_i \in G_j} \left\lceil \frac{t_1}{p_i} \right\rceil e_i \\
&\le t_1
\end{aligned}
$$

Let Tc be G′k+1,1, so that db ≤ dc. Under G′, the only additional higher-priority interference on Tc comes from Tb, and since t1 ≤ db ≤ pb, ⌈t1/pb⌉ = 1. Hence w′c(t1) = wb(t1) ≤ t1, so Tc can meet its deadline under G′, and G′ is a valid priority assignment. Repeating this argument eventually transforms G into the priority assignment produced by DM-LPL, which proves the theorem.
Theorem 6 shows that the processor utilization is at least 0.69 when the computing system has
infinite priority levels. When the computing system has limited priority levels, however, the
processor utilization can be quite poor. Consider n tasks with the following characteristics. All
n tasks have their relative deadlines identical to their periods. Task 1 has its execution time
much smaller than its period; i.e., e1 << p1 . Task 2 has its execution time equal to the period of
the first task (i.e., e2 = p1 ), but is much smaller than its own period (i.e., e2 << p2 ). Similarly,
task 3 has its execution time equal to the period of the second task (i.e., e3 = p2 ), but is much
smaller than its own period (i.e., e3 << p3 ). The remaining tasks have execution times and
periods following the same pattern. Since ei + ei+1 > pi for all 1 ≤ i < n, no two tasks can be assigned the same priority level. Therefore, ⌈n/m⌉ processors are needed, where each processor has exactly m priority levels. Notice that the processor utilization of each processor is very low.
On the other hand, if the computing system has infinite priority levels, all n tasks can be
scheduled on one processor by DM.
The above example reveals that the processor utilization can be quite low for some pathological
situations. It is unlikely that this kind of situation occurs frequently in practice. The above
example also suggests that more processors are necessary when computing systems with limited
priority levels are used, compared with systems that have infinite priority levels. In this
subsection, the problem of scheduling tasks on multiprocessors with limited priority levels is
considered. The goal is to find the minimum number of processors necessary to schedule n tasks; an algorithm that always uses the minimum number of processors is called an optimal algorithm.
In a search for an optimal algorithm, a natural candidate is the greedy algorithm, which works as
follows.
• Starting with the first task, schedule as many tasks as possible by the DM-LPL algorithm
onto the current processor, until a task that cannot be feasibly scheduled by DM-LPL is
encountered, at which point schedule the task onto a new processor (which now becomes
the current processor).
• The above process is repeated until all tasks have been scheduled.
One wonders if the above greedy algorithm is optimal. Unfortunately, the answer is negative.
Consider the following six tasks, all of which have their relative deadlines identical to their
periods and all have their initial requests made at time 0. In the following, each task Ti is
represented by an ordered pair (ei , di ) : T1 = (1, 5), T2 = (2, 6), T3 = (3, 9), T4 = (5, 10), T5 =
(6, 16), and T6 = (1, 20). Suppose each processor has two priority levels. The greedy algorithm
yields three processors: for P1, G1 = {T1, T2} and G2 = {T3}; for P2, G1 = {T4} and G2 = {T5}; and for P3, G1 = {T6}. However, these six tasks can be scheduled on two processors: for P1, G1 = {T1, T3} and G2 = {T5}; for P2, G1 = {T2} and G2 = {T4, T6}.
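Both packings can be checked mechanically with the time-demand test. The following sketch (illustrative (e, p, d) representation, not the report's notation) verifies that every priority level of both the greedy three-processor result and the hand-built two-processor packing passes the test:

```python
import math

def level_ok(groups, i):
    """Time-demand check for the smallest-deadline task of level i (sketch)."""
    d0 = groups[i][0][2]
    same = sum(e for e, p, d in groups[i])
    higher = [(e, p) for g in groups[:i] for e, p, d in g]
    return any(same + sum(math.ceil(t / p) * e for e, p in higher) <= t
               for t in range(1, d0 + 1))

def processor_ok(groups):
    """Every nonempty priority level on one processor passes the check."""
    return all(level_ok(groups, i) for i in range(len(groups)) if groups[i])

# Tasks (e, p, d) from the text; relative deadline equals period.
T1, T2, T3 = (1, 5, 5), (2, 6, 6), (3, 9, 9)
T4, T5, T6 = (5, 10, 10), (6, 16, 16), (1, 20, 20)

# Greedy result from the text: three processors, two priority levels each.
greedy = [[[T1, T2], [T3]], [[T4], [T5]], [[T6]]]
# Better hand-built packing from the text: two processors.
packed = [[[T1, T3], [T5]], [[T2], [T4, T6]]]

assert all(processor_ok(p) for p in greedy)
assert all(processor_ok(p) for p in packed)
```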
As it turns out, the problem of finding the minimum number of processors is NP-hard. By a
simple modification of the proof in reference B-7, the following theorem can be proved.
Theorem 11: The problem of finding the minimum number of processors with m priority levels
to schedule a set of tasks is NP-hard.
Theorem 11 suggests that there is no efficient algorithm to schedule a set of tasks on the
minimum number of processors. Motivated by the computational complexity of the problem,
two heuristics, called First-Fit (FF) and First-Fit-Decreasing-Utilization (FFDU), respectively,
are considered. The FF algorithm sorts the tasks in ascending order of their relative deadlines, while the FFDU algorithm sorts the tasks in ascending order of their utilization factors (ei/pi). Both algorithms try to schedule the next task onto the lowest-indexed processor by the DM-LPL algorithm.
Algorithm FF:

k = 1 (k is the number of processors in use).
Li = 1 for each processor i (Li is the lowest priority level in use on processor i).
For j = 1 to n
    Assign task Tj as follows.
    i = 1 (i is the processor index).
    While Tj has not been assigned
        Use time-demand analysis to test if Tj can be assigned to G^i_Li (priority group Li on processor i).
        If Tj can be assigned to G^i_Li
            Assign Tj to G^i_Li.
        Else if Li + 1 ≤ m and Tj can be assigned to G^i_{Li+1}
            Li = Li + 1.
            Assign Tj to G^i_Li.
        Else if i < k
            i = i + 1.
        Else
            k = k + 1, i = k.
Output k.
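The FF bookkeeping (k open processors, lowest level Li in use on each) can be sketched as follows; the (e, p, d) representation and the list-of-levels processor structure are illustrative assumptions.

```python
import math

def first_fit(tasks, m):
    """First-Fit with DM-LPL on each processor (a sketch).  tasks: list of
    (e, p, d) tuples; m: priority levels per processor.  Returns the list
    of processors, each a list of priority groups."""
    def level_ok(groups, i):
        d0 = groups[i][0][2]
        same = sum(e for e, p, d in groups[i])
        higher = [(e, p) for g in groups[:i] for e, p, d in g]
        return any(same + sum(math.ceil(t / p) * e for e, p in higher) <= t
                   for t in range(1, d0 + 1))

    procs = []                                  # each processor: list of groups
    for task in sorted(tasks, key=lambda t: t[2]):
        placed = False
        for groups in procs:
            L = len(groups) - 1                 # lowest level in use (Li)
            groups[L].append(task)
            if level_ok(groups, L):
                placed = True
                break
            groups[L].pop()
            if L + 1 < m:                       # try opening level Li + 1
                groups.append([task])
                if level_ok(groups, L + 1):
                    placed = True
                    break
                groups.pop()
        if not placed:
            procs.append([[task]])              # open a new processor
    return procs

six = [(1, 5, 5), (2, 6, 6), (3, 9, 9), (5, 10, 10), (6, 16, 16), (1, 20, 20)]
print(len(first_fit(six, 2)))  # 3
```

On the six-task counterexample above, FF behaves like the greedy algorithm and also uses three processors.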
Algorithm FFDU is the same as algorithm FF, except that the tasks are first sorted in ascending order of their utilization factors (ei/pi) rather than their relative deadlines.

The worst-case bounds for FF and FFDU are not known yet.
B.5 UNIT-EXECUTION-TIME TASK SYSTEMS: UNLIMITED PRIORITY LEVELS.
In the previous section, it was shown that processor utilization can be quite low when the computing system has limited priority levels. The situation could be better if each task has identical execution time, say 1 unit. Unit-execution-time task systems are interesting in their own right, since they model bus communication in which each packet of information takes 1 unit of time to send; here, the communication bus plays the role of the processor. In this section, the case where the computing system has unlimited priority levels is considered.
A set of n real-time tasks, where each task has 1 unit of execution time (i.e., ei = 1 for each
1 ≤ i ≤ n ) and the relative deadline of each task is identical to its period (i.e., di = pi for each
1 ≤ i ≤ n ), is considered. Thus, each task is completely characterized by its relative deadline (or
equivalently, its period), which is assumed to be an integer. Assume that the tasks have been
sorted in ascending order of their relative deadlines; i.e., d1 ≤ d 2 ≤ ... ≤ d n .
The goal is to find a utilization bound for n tasks, U(n), such that if the n tasks have total
utilization less than or equal to U(n), then they are always schedulable on a single processor by
DM. If the task set has total utilization larger than U(n), then it may or may not be schedulable
by DM, depending on the relationships among their relative deadlines. It is conjectured that:
A full proof of the conjecture has not been obtained. The two special cases described in appendices B.5.1 and B.5.2 have been proved.
Definition 1: A task set is said to perfectly utilize the processor with respect to a priority
assignment if it is schedulable, but a decrease in the largest period will make the task set
unschedulable under the same priority assignment.
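For unit-execution-time tasks with di = pi released synchronously, deadline-monotonic schedulability can be tested with the time-demand criterion, and Definition 1 can then be checked directly: the set is schedulable, but decreasing the largest period by one makes it unschedulable. A sketch (the instance used is the n = 5, d1 = 3 row of table B-6; function names are illustrative):

```python
import math

def dm_feasible(ds):
    """Time-demand test for unit-execution tasks with d_i = p_i and
    deadline-monotonic priorities (ds sorted ascending)."""
    return all(
        any(1 + sum(math.ceil(t / d) for d in ds[:i]) <= t
            for t in range(1, ds[i] + 1))
        for i in range(len(ds)))

def perfectly_utilizes(ds):
    """Definition 1: schedulable, but not after decreasing the largest period."""
    smaller = ds[:-1] + [ds[-1] - 1]
    return dm_feasible(ds) and not dm_feasible(smaller)

print(perfectly_utilizes([3, 5, 7, 8, 12]))  # True
```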
Given the number of tasks n and T1(1, d1), where d1 ≤ n − 1, construct T2(1, d2), T3(1, d3), …, Tn(1, dn) as follows. For 2 ≤ i ≤ n − 2, let di be the maximum integer such that

$$
\begin{cases}
d_i = \left\lceil \dfrac{d_i}{d_1} \right\rceil + (n + i - 4) \\[6pt]
t \le \left\lceil \dfrac{t}{d_1} \right\rceil + (n + i - 4) & \text{for } t < d_i
\end{cases}
\tag{B-1}
$$

To define dn−1 and dn, define two variables, t1 and t2, which will also be referred to throughout the remainder of this report.
Let t1 be the maximum integer such that

$$
\begin{cases}
t_1 = \left\lceil \dfrac{t_1}{d_1} \right\rceil + 2n - 5 \\[6pt]
t \le \left\lceil \dfrac{t}{d_1} \right\rceil + 2n - 5 & \text{for } t < t_1
\end{cases}
\tag{B-2}
$$

Let t2 be the integer such that

$$
\begin{cases}
t_2 = \left\lceil \dfrac{t_2}{d_1} \right\rceil + 2n - 3 \\[6pt]
t < \left\lceil \dfrac{t}{d_1} \right\rceil + 2n - 3 & \text{for } t < t_2
\end{cases}
\tag{B-3}
$$
For convenience, the constructed instance will be called the perfect instance for n and d1. The
constructed perfect instance will be shown to have several properties, and then the instance will
be shown to perfectly utilize the processor.
$$
d_2 = \frac{(n-1)d_1 - r}{d_1 - 1} \quad \text{for some } r \text{ such that } 1 \le r \le d_1 - 1
\tag{B-4}
$$

Proof: It has already been shown that d2 is not a multiple of d1. So, assume that d2 = k d1 + r for some integers k and r such that 1 ≤ r ≤ d1 − 1. According to the construction, d2 satisfies equation B-1. Therefore, d2 = k + 1 + n − 2 = k d1 + r, which implies that d2 = ((n − 1)d1 − r)/(d1 − 1). Throughout this section, when r is mentioned, by default it means r as defined above.
If ri ≤ d1 − 2, then ⌈di/d1⌉ = ⌈(di + 1)/d1⌉. It is easy to show that di + 1 satisfies equation B-1 for i + 1, and it is indeed the maximum integer that satisfies equation B-1. Therefore, di+1 = di + 1 and ri+1 = ri + 1.

If ri = d1 − 1, then ⌈(di + 2)/d1⌉ = ⌈di/d1⌉ + 1. Similarly, one can show that di + 2 is the maximum integer that satisfies equation B-1 for i + 1. Therefore, di+1 = di + 2 and ri+1 = 1.

In summary, for 2 ≤ i ≤ n − 3, di+1 = di + 1 if ri ≤ d1 − 2, and di+1 = di + 2 if ri = d1 − 1.
4. Either t1 = 2d2 − 1, t1 = 2d2 − 2, or t1 = 2d2 − 3; and either t2 = 2d2 or t2 = 2d2 + 1.

For t1, there are three cases depending on r:

• r = 1. Then

$$
2d_2 - 3 = 2\left(\left\lceil \tfrac{d_2}{d_1}\right\rceil + n - 2\right) - 3 = \left(\left\lceil \tfrac{2d_2-3}{d_1}\right\rceil + 2\right) + 2n - 7 = \left\lceil \tfrac{2d_2-3}{d_1}\right\rceil + 2n - 5
$$

One can easily show that t ≤ ⌈t/d1⌉ + 2n − 5 for t < 2d2 − 3 and t > ⌈t/d1⌉ + 2n − 5 for t > 2d2 − 3. Therefore, by the definition of t1, t1 = 2d2 − 3.

• 1 < r < (d1 + 2)/2. Then

$$
2d_2 - 2 = 2\left(\left\lceil \tfrac{d_2}{d_1}\right\rceil + n - 2\right) - 2 = \left(\left\lceil \tfrac{2d_2-2}{d_1}\right\rceil + 1\right) + 2n - 6 = \left\lceil \tfrac{2d_2-2}{d_1}\right\rceil + 2n - 5
$$

By the same argument, t1 = 2d2 − 2.

• r > (d1 + 1)/2. Then

$$
2d_2 - 1 = 2\left(\left\lceil \tfrac{d_2}{d_1}\right\rceil + n - 2\right) - 1 = \left\lceil \tfrac{2d_2-1}{d_1}\right\rceil + 2n - 5
$$

By the same argument, t1 = 2d2 − 1.

For t2, there are two cases:

• r ≤ d1/2. Then

$$
2d_2 = 2\left(\left\lceil \tfrac{d_2}{d_1}\right\rceil + n - 2\right) = \left(\left\lceil \tfrac{2d_2}{d_1}\right\rceil + 1\right) + 2(n - 2) = \left\lceil \tfrac{2d_2}{d_1}\right\rceil + 2n - 3
$$

• r > d1/2. Then

$$
2d_2 + 1 = 2\left(\left\lceil \tfrac{d_2}{d_1}\right\rceil + n - 2\right) + 1 = \left\lceil \tfrac{2d_2}{d_1}\right\rceil + 2n - 3 = \left\lceil \tfrac{2d_2+1}{d_1}\right\rceil + 2n - 3
$$

Using the same argument as for t1 and by the definition of t2, t2 = 2d2 when r ≤ d1/2 and t2 = 2d2 + 1 when r > d1/2.
TABLE B-3. RELATIONSHIP BETWEEN dn−1, dn, AND d2 WHEN d1 IS EVEN

| r | t1 | t2 | dn−1 | dn |
|---|----|----|------|----|
| 2 ≤ r ≤ d1/2 | 2d2 − 2 | 2d2 | 2d2 − 1 | 2d2 − 1 |
| (d1 + 2)/2 ≤ r ≤ d1 − 1 | 2d2 − 1 | 2d2 + 1 | 2d2 − 1 | 2d2 + 2 |

TABLE B-4. RELATIONSHIP BETWEEN dn−1, dn, AND d2 WHEN d1 IS ODD

| r | t1 | t2 | dn−1 | dn |
|---|----|----|------|----|
| r = 1 | 2d2 − 3 | 2d2 | 2d2 − 3 | 2d2 |
| 2 ≤ r ≤ (d1 − 1)/2 | 2d2 − 2 | 2d2 | 2d2 − 1 | 2d2 − 1 |
| r = (d1 + 1)/2 | 2d2 − 2 | 2d2 + 1 | 2d2 − 2 | 2d2 + 2 |
| (d1 + 3)/2 ≤ r ≤ d1 − 1 | 2d2 − 1 | 2d2 + 1 | 2d2 − 1 | 2d2 + 2 |
Proof: There are three cases depending on the relationship between dn and d2.

(i) dn = dn−1 = 2d2 − 1. From tables B-3 and B-4, dn = dn−1 happens when 2 ≤ r ≤ d1/2 if d1 is even, or when 2 ≤ r ≤ (d1 − 1)/2 if d1 is odd. In either case, dn = 2d2 − 1 = 2kd1 + 2r − 1 and 3 ≤ 2r − 1 ≤ d1 − 1. Therefore, rn = 2r − 1 ≥ 3.
6. The perfect instance T1, T2, …, Tn constructed as above perfectly utilizes the processor.

Proof: Some characterizations of the deadline-monotonic schedule of the perfect instance will be given first. The claim follows directly from these characterizations and the definition of perfect utilization.
a. There is no idle time before t1 .
It is enough to prove that at any time t < t1 , the number of requests R from T1,
T2, … , Tn-1 is greater than or equal to t. There are two cases depending on t.
• t = di.

If i = 1, at time t = d1, R = n − 1. Since d1 ≤ n − 1, t ≤ R. For 2 ≤ i ≤ n − 2, R = ⌈di/d1⌉ + 2(i − 2) + (n − i) = ⌈di/d1⌉ + (n + i − 4). By construction, t = di = R.

• di < t < di+1.
b. All requests from T2, … , Tn-1 during the period [0, t1) have been executed at t1 .
From the construction, dn−2 < t1 ≤ dn−1 < 2d2 < 2di for 3 < i < n. So, at time t1, T2, …, Tn−2 all have made exactly two requests, and Tn−1 has made one request. Therefore, the total number of requests made by T1, T2, …, Tn−1 is

$$
R = \left\lceil \tfrac{t_1}{d_1} \right\rceil + 2(n - 3) + 1 = \left\lceil \tfrac{t_1}{d_1} \right\rceil + 2n - 5,
$$

which is equal to t1. Therefore, all requests from T2, …, Tn−1 have been executed.
c. None of T3, …, Tn−2 will make any request during the period [t1, dn−1].

First, di < t1 for 3 ≤ i ≤ n − 2, so T3, …, Tn−2 have all made their second requests before t1. On the other hand, t1 < dn−1 < 2di. Therefore, T3, …, Tn−2 will not make their third requests from t1 until dn−1.
d. There is no idle time during the period [t1, dn-1] and there is no request from
T1 , T2 ,..., Tn −1 at dn-1. Therefore, the first request from Tn will be executed at dn-1
and finished just before its deadline dn.
By (c), T3,…,Tn-2 will not make any request during the period [t1, dn-1), neither
will these tasks make any request at dn-1. Therefore, it remains to show that the
requests from T1, T2, and Tn-1 will occupy the period [t1, dn-1) and will not make
any requests at dn-1.
It has been shown that t1 is not a multiple of d1 and t1 < 2d2. So, neither T1 nor T2
will make a request at t1 . For Tn-1, there are two cases depending on the
relationship between t1 and dn-1:
• dn−1 = dn = t1 + 1. Then, t1 = dn−1 − 1 < dn−1. Therefore, Tn−1 will not make a request at t1. Hence, the first request of Tn will be executed at t1 and the claim is true.
− t1 = 2d 2 − 3 and t2 = 2d 2 .
− t1 = 2d 2 − 1 and t2 = 2d 2 + 1 .
Therefore, at times 2d2 –1 and 2d2, the processor will execute the
requests from Tn-1 and T2, respectively.
So, T1 will make a request at 2d2 -1, and it will not make a request
at 2d2 and 2d2 + 1.
Claim 1: Given n, the perfect instance gets the minimum utilization when d1 = n-1.
Proof: For n ≤ 5, the claim can be proved by just enumerating all task sets. Tables B-5 and B-6
show the perfect instances for n = 4 and n = 5, respectively. For n = 3, the claim is obviously
true because there is only one possible value of d1 , which is 2. Therefore, assume that n ≥ 6 in
the following.
TABLE B-5. PERFECT INSTANCES FOR n = 4

| d1 | d2 | d3 | d4 | U |
|----|----|----|----|---|
| 2 | 5 | 7 | 10 | 0.942857 |
| 3 | 4 | 5 | 8 | 0.908333 |

TABLE B-6. PERFECT INSTANCES FOR n = 5

| d1 | d2 | d3 | d4 | d5 | U |
|----|----|----|----|----|---|
| 2 | 7 | 9 | 11 | 14 | 0.916306 |
| 3 | 5 | 7 | 8 | 12 | 0.884524 |
| 4 | 5 | 6 | 7 | 10 | 0.859524 |
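Since every task has unit execution time and di = pi, the U column of these tables is just the sum of reciprocal deadlines, and the d1 = n − 1 row is indeed the minimum in each table. A small sketch using exact rational arithmetic:

```python
from fractions import Fraction

def utilization(deadlines):
    """Total utilization of unit-execution-time tasks with d_i = p_i."""
    return sum(Fraction(1, d) for d in deadlines)

# Rows of tables B-5 (n = 4) and B-6 (n = 5).
rows_n4 = [(2, 5, 7, 10), (3, 4, 5, 8)]
rows_n5 = [(2, 7, 9, 11, 14), (3, 5, 7, 8, 12), (4, 5, 6, 7, 10)]

assert min(utilization(r) for r in rows_n4) == utilization((3, 4, 5, 8))
assert min(utilization(r) for r in rows_n5) == utilization((4, 5, 6, 7, 10))
assert abs(float(utilization((2, 5, 7, 10))) - 0.942857) < 1e-6
```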
For each n and d1 ≤ n − 1, let the corresponding d2 = ((n − 1)d1 − r)/(d1 − 1) for some 1 ≤ r ≤ d1 − 1. Let

$$
U(d_1) = \frac{1}{d_1} + \frac{1}{d_2} + \left(\sum_{i=3}^{n-2} \frac{1}{d_i}\right) + \frac{1}{d_{n-1}} + \frac{1}{d_n}
= \frac{1}{d_1} + \frac{1}{d_2} + \left(\sum_{i=3}^{n-2} \frac{1}{d_2 + i - 2 + \left\lfloor \frac{i+r-3}{d_1-1} \right\rfloor}\right) + \frac{1}{d_{n-1}} + \frac{1}{d_n}
\tag{B-6}
$$
It will be proved that U(d1) attains its minimum when d1 = n − 1 by showing that (1) the minimum of U(d1) occurs when d1 > ⌈n/2⌉ and (2) U(d1 + 1) < U(d1) for ⌈n/2⌉ ≤ d1 ≤ n − 2.
$$
\frac{1}{d_{n-1}} + \frac{1}{d_n} \ge \frac{1}{2d_2 - 1} + \frac{1}{2d_2 + 2}.
$$

Applying the above inequality to equation B-6 and removing the floor function,

$$
\begin{aligned}
U(d_1) &\ge \frac{1}{d_1} + \frac{1}{d_2} + \left(\sum_{i=3}^{n-2} \frac{1}{d_2 + i - 2 + \frac{i+r-3}{d_1-1}}\right) + \frac{1}{2d_2-1} + \frac{1}{2d_2+2} \\
&= \frac{1}{d_1} + \frac{1}{d_2} + \left(\sum_{i=3}^{n-2} \frac{d_1-1}{(n+i-3)d_1 - 1}\right) + \frac{1}{2d_2-1} + \frac{1}{2d_2+2}
\end{aligned}
$$

Define

$$
f(d_1) = \frac{1}{d_1} + \frac{1}{d_2} + \left(\sum_{i=3}^{n-2} \frac{d_1-1}{(n+i-3)d_1 - 1}\right) + \frac{1}{2d_2-1} + \frac{1}{2d_2+2}
\tag{B-7}
$$
Next, it will be proved that f(d1) is a decreasing function and that U(n − 1) < f(⌈n/2⌉).
a. f(d1) decreases with respect to d1.

$$
\begin{aligned}
f'(d_1) &= -\frac{1}{d_1^2} + \frac{n-1-r}{\left[(n-1)d_1 - r\right]^2} + \sum_{i=3}^{n-2} \frac{n+i-4}{\left[(n+i-3)d_1 - 1\right]^2} \\
&\qquad + \frac{2n-2r-2}{\left[(2n-3)d_1 - 2r + 1\right]^2} + \frac{n-r-1}{2\left(n d_1 - r - 1\right)^2} \\
&< -\frac{1}{d_1^2} + \frac{n-2}{\left[(n-1)d_1 - d_1\right]^2} + \sum_{i=3}^{n-2} \frac{n+i-4}{\left[(n+i-3)d_1 - d_1\right]^2} \\
&\qquad + \frac{2n-4}{\left[(2n-3)d_1 - 2d_1\right]^2} + \frac{n-2}{2\left(n d_1 - d_1 - d_1\right)^2} \\
&= \frac{1}{d_1^2}\left(-1 + \frac{n-2}{\left[(n-1)-1\right]^2} + \sum_{i=3}^{n-2} \frac{n+i-4}{\left[(n+i-3)-1\right]^2} + \frac{2n-4}{(2n-5)^2} + \frac{n-2}{2(n-2)^2}\right) \\
&= \frac{1}{d_1^2}\left(-1 + \frac{1}{n-2} + \sum_{i=3}^{n-2} \frac{1}{n+i-4} + \frac{2n-4}{(2n-5)^2} + \frac{1}{2(n-2)}\right) \\
&< 0
\end{aligned}
$$
b. U(n − 1) < f(⌈n/2⌉) ≤ U(⌈n/2⌉).

$$
U(n-1) = \frac{1}{n-1} + \frac{1}{n} + \left(\sum_{i=3}^{n-2} \frac{1}{n+i-2}\right) + \frac{1}{2n-3} + \frac{1}{2n}
\tag{B-8}
$$
• n is even. Then d1 = ⌈n/2⌉ = n/2 and, by equation B-1, d2 = n + 1. Applying these to equation B-7,

$$
\begin{aligned}
f(\lceil \tfrac{n}{2} \rceil) = f(\tfrac{n}{2})
&= \frac{1}{n/2} + \frac{1}{n+1} + \left(\sum_{i=3}^{n-2} \frac{\frac{n}{2} - 1}{(n+i-3)\frac{n}{2} - 1}\right) + \frac{1}{2n+1} + \frac{1}{2n+4} \\
&= \frac{2}{n} + \frac{1}{n+1} + \left(\sum_{i=3}^{n-2} \frac{n-2}{(n+i-3)\,n - 2}\right) + \frac{1}{2n+1} + \frac{1}{2n+4}
\end{aligned}
\tag{B-9}
$$

Let

$$
\begin{aligned}
S_1 &= \left(\frac{2}{n} + \frac{1}{n+1} + \frac{1}{2n+1} + \frac{1}{2n+4}\right) - \left(\frac{1}{n-1} + \frac{1}{n} + \frac{1}{2n-3} + \frac{1}{2n}\right) \\
&= \frac{5n^3 + 13n^2 + 8n + 1}{n(n+1)(n+2)(2n+1)} - \frac{3n-4}{(n-1)(2n-3)} \\
&= \frac{4n^5 - 12n^4 - 27n^3 + 23n^2 + 27n + 3}{n(n+1)(n+2)(2n+1)(n-1)(2n-3)}
\end{aligned}
$$

For 3 ≤ i ≤ n − 2, let

$$
S_i = \frac{1}{n+i-2} - \frac{n-2}{(n+i-3)\,n - 2}.
$$

Then

$$
\begin{aligned}
f(\tfrac{n}{2}) - U(n-1) &= S_1 - \sum_{i=3}^{n-2} S_i \\
&= \frac{4n^5 - 12n^4 - 27n^3 + 23n^2 + 27n + 3}{n(n+1)(n+2)(2n+1)(n-1)(2n-3)} - \sum_{i=3}^{n-2} S_i \\
&> \frac{4n^5 - 12n^4 - 27n^3 + 23n^2 + 27n + 3}{n(n+1)(n+2)(2n+1)(n-1)(2n-3)} - (n-4)\cdot\frac{n}{(n+1)(n^2-2)} \\
&= \frac{1}{n+1}\cdot\frac{4n^6 - 20n^5 - 18n^4 + 95n^3 - 19n^2 - 54n - 6}{n(n-1)(n+2)(2n+1)(2n-3)(n^2-2)} \\
&> 0
\end{aligned}
$$
• n is odd. Then d1 = ⌈n/2⌉ = (n + 1)/2. By equation B-1, d2 = n and r = (n − 1)/2. Applying these to equation B-7,

$$
\begin{aligned}
f(\lceil \tfrac{n}{2} \rceil)
&= \frac{2}{n+1} + \frac{1}{n} + \left(\sum_{i=3}^{n-2} \frac{\frac{n+1}{2} - 1}{(n+i-3)\frac{n+1}{2} - 1}\right) + \frac{1}{2n-1} + \frac{1}{2n+2} \\
&= \frac{2}{n+1} + \frac{1}{n} + \frac{n-1}{n(n+1) - 2} + \left(\sum_{i=4}^{n-2} \frac{n-1}{(n+i-3)(n+1) - 2}\right) + \frac{1}{2n-1} + \frac{1}{2n+2} \\
&= \frac{5}{2(n+1)} + \frac{1}{n} + \frac{1}{n+2} + \left(\sum_{i=4}^{n-2} \frac{n-1}{(n+i-3)(n+1) - 2}\right) + \frac{1}{2n-1}
\end{aligned}
\tag{B-10}
$$

Let

$$
\begin{aligned}
S_1 &= \left(\frac{3}{2(n+1)} + \frac{1}{n+2} + \frac{1}{2n-1}\right) - \left(\frac{1}{n-1} + \frac{1}{2n-3} + \frac{1}{2n}\right) \\
&= \frac{12n^2 + 17n - 4}{2(n+1)(n+2)(2n-1)} - \frac{8n^2 - 13n + 3}{2n(n-1)(2n-3)} \\
&= \frac{8n^5 - 40n^4 - 6n^3 + 85n^2 - 41n + 6}{2n(n+1)(n+2)(2n-1)(n-1)(2n-3)}
\end{aligned}
$$

For i ≥ 4, let

$$
S_i = \frac{1}{n+i-2} - \frac{n-1}{(n+i-3)(n+1) - 2}.
$$

Si is a decreasing function with respect to i. When i = 4,

$$
S_4 = \frac{1}{n+2} - \frac{n-1}{(n+1)^2 - 2} = \frac{n+1}{(n+2)\left[(n+1)^2 - 2\right]}.
$$

Then

$$
\begin{aligned}
f(\lceil \tfrac{n}{2} \rceil) - U(n-1) &= S_1 - \sum_{i=4}^{n-2} S_i \\
&> S_1 - (n-5)\,S_4 \\
&= S_1 - \frac{(n-5)(n+1)}{(n+2)(n^2 + 2n - 1)} \\
&= \frac{1}{n+2} \cdot \frac{24n^6 - 116n^5 + 9n^4 + 195n^3 - 105n^2 + 23n - 6}{2n(n+1)(n-1)(2n-1)(2n-3)(n^2 + 2n - 1)} \\
&> 0
\end{aligned}
$$
It has been shown that U(d1) ≥ f(d1) for any d1 and that f(d1) is decreasing. Therefore, U(d1) ≥ f(d1) ≥ f(⌈n/2⌉) for any d1 ≤ ⌈n/2⌉. It was then proved that U(n − 1) < f(⌈n/2⌉). Combining these facts, the minimum of U must occur when d1 > ⌈n/2⌉.
2. U(d1 + 1) < U(d1) for n/2 < d1 ≤ n − 2.
Let d′1 = d1 + 1, and let r′, d′n−1, and d′n be defined correspondingly to r, dn−1, and dn. Based on tables B-3 and B-4, the relationships between d1, r, dn−1, dn and r′, d′n−1, d′n are shown in tables B-7 and B-8. The tables are built as follows. First, because the case d1 > n/2 is being considered, by equation B-1, d2 = n and r = n − d1. When d1 is increased by 1, d′1 = d1 + 1, d′2 = d2, and r′ = r − 1. From tables B-3 and B-4, the expressions for dn−1 and dn change only when r crosses certain special values, so only these special values of r and r′ need be examined. Given r and d1, depending on the parity of d1, check either table B-3 or table B-4 to find the corresponding dn−1 and dn for r. If dn−1 and dn are obtained from table B-3, then d′n−1 and d′n are obtained from table B-4, and vice versa, looking up the entries according to the relationship between r′ and d′1 instead of r and d1. U(d1) is expressed by equation B-6, and U(d1 + 1) can be expressed as:

$$
U(d_1+1) = \frac{1}{d_1+1} + \frac{1}{d_2} + \left(\sum_{i=3}^{n-2} \frac{1}{n+i-2+\left\lfloor \frac{i+(n-d_1-1)-3}{d_1+1-1} \right\rfloor}\right) + \frac{1}{d'_{n-1}} + \frac{1}{d'_n}
\tag{B-11}
$$
Let

$$
H = \left(\frac{1}{d'_{n-1}} + \frac{1}{d'_n}\right) - \left(\frac{1}{d_{n-1}} + \frac{1}{d_n}\right).
$$

Let

$$
k_i = \left\lfloor \frac{i + n - d_1 - 3}{d_1 - 1} \right\rfloor \quad \text{and} \quad k'_i = \left\lfloor \frac{i + n - d_1 - 4}{d_1} \right\rfloor.
$$

Let

$$
g(i) = \frac{1}{n+i-2+k'_i} - \frac{1}{n+i-2+k_i}.
$$
a. n/2 < d1 ≤ (2n − 6)/3. Then

$$
k_i = \begin{cases} 0 & \text{if } 3 \le i < 2d_1 - n + 2 \\ 1 & \text{if } 2d_1 - n + 2 \le i < 3d_1 - n + 1 \\ 2 & \text{if } 3d_1 - n + 1 \le i \le n - 2 \end{cases}
\qquad
k'_i = \begin{cases} 0 & \text{if } 3 \le i < 2d_1 - n + 4 \\ 1 & \text{if } 2d_1 - n + 4 \le i < 3d_1 - n + 4 \\ 2 & \text{if } 3d_1 - n + 4 \le i \le n - 2 \end{cases}
$$
$$
g(i) = \begin{cases}
\dfrac{1}{2d_1} - \dfrac{1}{2d_1+1} & \text{if } i = 2d_1 - n + 2 \\[4pt]
\dfrac{1}{2d_1+1} - \dfrac{1}{2d_1+2} & \text{if } i = 2d_1 - n + 3 \\[4pt]
\dfrac{1}{3d_1} - \dfrac{1}{3d_1+1} & \text{if } i = 3d_1 - n + 1 \\[4pt]
\dfrac{1}{3d_1+1} - \dfrac{1}{3d_1+2} & \text{if } i = 3d_1 - n + 2 \\[4pt]
\dfrac{1}{3d_1+2} - \dfrac{1}{3d_1+3} & \text{if } i = 3d_1 - n + 3 \\[4pt]
0 & \text{for all other } i
\end{cases}
$$
For n/2 < d1 ≤ (2n − 6)/3, r = n − d1 ≥ (n + 6)/3 > (d1 + 3)/2. From table B-8, dn = d′n and dn−1 = d′n−1 = 2n − 1. Therefore, H = 0, and

$$
\begin{aligned}
U(d_1) - U(d_1+1) &= \frac{1}{d_1(d_1+1)} - \sum_{i=3}^{n-2} g(i) - H \\
&= \frac{1}{d_1(d_1+1)} - \left(\frac{1}{2d_1} - \frac{1}{2d_1+2} + \frac{1}{3d_1} - \frac{1}{3d_1+3}\right) - 0 \\
&> 0
\end{aligned}
$$
TABLE B-7. THE CHANGES OF dn−1 AND dn WHEN d1 IS EVEN AND IS INCREASED BY 1

| r | dn−1 | dn | r′ | d′n−1 | d′n |
|---|------|----|----|-------|-----|
| 2 | 2n − 1 | 2n − 1 | 1 | 2n − 3 | 2n |
| (d1 + 2)/2 | 2n − 1 | 2n + 2 | ((d1 + 1) − 1)/2 | 2n − 1 | 2n − 1 |
| (d1 + 4)/2 | 2n − 1 | 2n + 2 | ((d1 + 1) + 1)/2 | 2n − 2 | 2n + 2 |
TABLE B-8. THE CHANGES OF dn−1 AND dn WHEN d1 IS ODD AND IS INCREASED BY 1

| r | dn−1 | dn | r′ | d′n−1 | d′n |
|---|------|----|----|-------|-----|
| 2 | 2n − 1 | 2n − 1 | 1 | 2n − 3 | 2n |
| (d1 + 1)/2 | 2n − 2 | 2n + 2 | ((d1 + 1) − 2)/2 | 2n − 1 | 2n − 1 |
| (d1 + 3)/2 | 2n − 1 | 2n + 2 | (d1 + 1)/2 | 2n − 1 | 2n − 1 |
b. (2n − 6)/3 < d1 ≤ (2n − 3)/3. Then

$$
k_i = \begin{cases} 0 & \text{if } 3 \le i < 2d_1 - n + 2 \\ 1 & \text{if } 2d_1 - n + 2 \le i < 3d_1 - n + 1 \\ 2 & \text{if } 3d_1 - n + 1 \le i \le n - 2 \end{cases}
\qquad
k'_i = \begin{cases} 0 & \text{if } 3 \le i < 2d_1 - n + 4 \\ 1 & \text{if } 2d_1 - n + 4 \le i \le n - 2 \end{cases}
$$

• d1 = (2n − 3)/3. Then

$$
g(i) = \begin{cases}
\dfrac{1}{2d_1} - \dfrac{1}{2d_1+1} & \text{if } i = 2d_1 - n + 2 \\[4pt]
\dfrac{1}{2d_1+1} - \dfrac{1}{2d_1+2} & \text{if } i = 2d_1 - n + 3 \\[4pt]
\dfrac{1}{3d_1} - \dfrac{1}{3d_1+1} & \text{if } i = 3d_1 - n + 1 = n - 2 \\[4pt]
0 & \text{for all other } i
\end{cases}
$$

In this case, r = (n + 3)/3 = (d1 + 3)/2, dn−1 = 2n − 1, dn = 2n + 2, and d′n−1 = d′n = 2n − 1. Therefore,

$$
H = \frac{2}{2n-1} - \left(\frac{1}{2n-1} + \frac{1}{2n+2}\right) = \frac{3}{(2n-1)(2n+2)}.
$$
$$
\begin{aligned}
U(d_1) - U(d_1+1) &= \frac{1}{d_1(d_1+1)} - \sum_{i=3}^{n-2} g(i) - H \\
&= \frac{1}{d_1(d_1+1)} - \left(\frac{1}{2d_1} - \frac{1}{2d_1+2} + \frac{1}{3d_1} - \frac{1}{3d_1+1}\right) - H \\
&= \frac{1}{2d_1(d_1+1)} - \frac{1}{3d_1(3d_1+1)} - H \\
&= \frac{1}{2\cdot\frac{2n-3}{3}\left(\frac{2n-3}{3}+1\right)} - \frac{1}{(2n-3)(2n-2)} - \frac{3}{(2n-1)(2n+2)} \\
&= \frac{1}{2}\left(\frac{7n-9}{2n(2n-3)(n-1)} - \frac{3}{(2n-1)(n+1)}\right) \\
&= \frac{2n^3 + 19n^2 - 34n + 9}{4n(2n-3)(n-1)(2n-1)(n+1)} \\
&> 0
\end{aligned}
$$
• d1 = (2n − 4)/3. Then

$$
g(i) = \begin{cases}
\dfrac{1}{2d_1} - \dfrac{1}{2d_1+1} & \text{if } i = 2d_1 - n + 2 \\[4pt]
\dfrac{1}{2d_1+1} - \dfrac{1}{2d_1+2} & \text{if } i = 2d_1 - n + 3 \\[4pt]
\dfrac{1}{3d_1} - \dfrac{1}{3d_1+1} & \text{if } i = 3d_1 - n + 1 = n - 3 \\[4pt]
\dfrac{1}{3d_1+1} - \dfrac{1}{3d_1+2} & \text{if } i = 3d_1 - n + 2 = n - 2 \\[4pt]
0 & \text{for all other } i
\end{cases}
$$

In this case, dn−1 = 2n − 1, dn = 2n + 2, d′n−1 = 2n − 2, and d′n = 2n + 2. Therefore,

$$
H = \left(\frac{1}{2n-2} + \frac{1}{2n+2}\right) - \left(\frac{1}{2n-1} + \frac{1}{2n+2}\right) = \frac{1}{(2n-1)(2n-2)}.
$$
$$
\begin{aligned}
U(d_1) - U(d_1+1) &= \frac{1}{d_1(d_1+1)} - \sum_{i=3}^{n-2} g(i) - H \\
&= \frac{1}{d_1(d_1+1)} - \left(\frac{1}{2d_1} - \frac{1}{2d_1+2} + \frac{1}{3d_1} - \frac{1}{3d_1+2}\right) - H \\
&= \frac{1}{2d_1(d_1+1)} - \frac{2}{3d_1(3d_1+2)} - H \\
&= \frac{9}{2(2n-4)(2n-1)} - \frac{1}{(2n-4)(n-1)} - \frac{1}{2(2n-1)(n-1)} \\
&= \frac{1}{2}\left[\frac{5n-7}{2(n-2)(2n-1)(n-1)} - \frac{1}{(2n-1)(n-1)}\right] \\
&= \frac{3}{4(2n-1)(n-2)} \\
&> 0
\end{aligned}
$$
• d1 = (2n − 5)/3. Then

$$
g(i) = \begin{cases}
\dfrac{1}{2d_1} - \dfrac{1}{2d_1+1} & \text{if } i = 2d_1 - n + 2 \\[4pt]
\dfrac{1}{2d_1+1} - \dfrac{1}{2d_1+2} & \text{if } i = 2d_1 - n + 3 \\[4pt]
\dfrac{1}{3d_1} - \dfrac{1}{3d_1+1} & \text{if } i = 3d_1 - n + 1 = n - 4 \\[4pt]
\dfrac{1}{3d_1+1} - \dfrac{1}{3d_1+2} & \text{if } i = 3d_1 - n + 2 = n - 3 \\[4pt]
\dfrac{1}{3d_1+2} - \dfrac{1}{3d_1+3} & \text{if } i = 3d_1 - n + 3 = n - 2 \\[4pt]
0 & \text{for all other } i
\end{cases}
$$

In this case, r = (n + 5)/3 > (d1 + 3)/2 and r′ = (n + 2)/3 = (d′1 + 2)/2. Note that 2n − 5 is an odd number and d1 is an integer, so d1 = (2n − 5)/3 must also be odd. Checking the corresponding entries of table B-8, dn = d′n = 2n + 2 and dn−1 = d′n−1 = 2n − 1. Therefore, H = 0.
$$
\begin{aligned}
U(d_1) - U(d_1+1) &= \frac{1}{d_1(d_1+1)} - \sum_{i=3}^{n-2} g(i) - H \\
&= \frac{1}{d_1(d_1+1)} - \left(\frac{1}{2d_1} - \frac{1}{2d_1+2} + \frac{1}{3d_1} - \frac{1}{3d_1+3}\right) - 0 \\
&= \frac{1}{6d_1(d_1+1)} \\
&> 0
\end{aligned}
$$
c. (2n − 3)/3 < d1 ≤ n − 2. Then

$$
k_i = \begin{cases} 0 & \text{if } 3 \le i < 2d_1 - n + 2 \\ 1 & \text{if } 2d_1 - n + 2 \le i \le n - 2 \end{cases}
\qquad
k'_i = \begin{cases} 0 & \text{if } 3 \le i < 2d_1 - n + 4 \\ 1 & \text{if } 2d_1 - n + 4 \le i \le n - 2 \end{cases}
$$

$$
g(i) = \begin{cases}
\dfrac{1}{2d_1} - \dfrac{1}{2d_1+1} & \text{if } i = 2d_1 - n + 2 \\[4pt]
\dfrac{1}{2d_1+1} - \dfrac{1}{2d_1+2} & \text{if } i = 2d_1 - n + 3 \\[4pt]
0 & \text{for all other } i
\end{cases}
$$
$$
\begin{aligned}
U(d_1) - U(d_1+1) &= \frac{1}{d_1(d_1+1)} - \sum_{i=3}^{n-2} g(i) - H \\
&= \frac{1}{d_1(d_1+1)} - \left(\frac{1}{2d_1} - \frac{1}{2d_1+2}\right) - H \\
&= \frac{1}{2d_1(d_1+1)} - H
\end{aligned}
\tag{B-12}
$$
• d1 = (2n − 2)/3. Then r = (d1 + 2)/2. Because 2n − 2 is an even number and d1 is an integer, d1 = (2n − 2)/3 must be even. From table B-7, d′n = d′n−1 = 2n − 1, dn−1 = 2n − 1, and dn = 2n + 2. Hence, H = 1/(2n − 1) − 1/(2n + 2). Applying this to equation B-12 gives:

$$
\begin{aligned}
U(d_1) - U(d_1+1) &= \frac{1}{2\cdot\frac{2n-2}{3}\left(\frac{2n-2}{3}+1\right)} - \left(\frac{1}{2n-1} - \frac{1}{2n+2}\right) \\
&= \frac{6n^2 + 15n - 3}{4(n+1)(n-1)(2n-1)(2n+1)} \\
&> 0
\end{aligned}
$$
• d1 = (2n − 1)/3. Then r = (d1 + 1)/2. Because 2n − 1 is an odd number and d1 is an integer, d1 = (2n − 1)/3 must be odd. From table B-8, d′n = d′n−1 = 2n − 1, dn−1 = 2n − 2, and dn = 2n + 2. Hence, H = 2/(2n − 1) − (1/(2n − 2) + 1/(2n + 2)). Applying this to equation B-12 gives:

$$
\begin{aligned}
U(d_1) - U(d_1+1) &= \frac{1}{2\cdot\frac{2n-1}{3}\left(\frac{2n-1}{3}+1\right)} - \left[\frac{2}{2n-1} - \left(\frac{1}{2n-2} + \frac{1}{2n+2}\right)\right] \\
&= \frac{5n-1}{4(2n-1)(n+1)(n-1)} \\
&> 0
\end{aligned}
$$
• 2n/3 < d1 < n − 2. Then dn = d′n and dn−1 = d′n−1, so H = 0 and

$$
U(d_1) - U(d_1+1) = \frac{1}{2d_1(d_1+1)} - 0 > 0.
$$

• d1 = n − 2. Then r = 2 and r′ = 1, so that dn−1 = dn = 2n − 1, d′n−1 = 2n − 3, and d′n = 2n. Hence, H = (1/(2n − 3) + 1/(2n)) − 2/(2n − 1), and

$$
\begin{aligned}
U(d_1) - U(d_1+1) &= \frac{1}{2(n-2)(n-1)} - \left[\left(\frac{1}{2n-3} + \frac{1}{2n}\right) - \frac{2}{2n-1}\right] \\
&= \frac{2n^3 - 5n^2 + 8n - 6}{2n(n-2)(n-1)(2n-1)(2n-3)} \\
&> 0
\end{aligned}
$$
Claim 2: Given the number of tasks n and 2 ≤ d1 ≤ n − 2 , the Perfect Instance for n and d1
constructed above yields the minimum utilization of n tasks that perfectly utilize the processor
among all those task sets that have the same d1 and d 2 as the Perfect Instance.
Proof: For n ≤ 9, the claim can be proved by enumeration. So, assume that n ≥ 10. Then d2 ≥ n
≥10. Let the perfect instance for d1 and n be TS = {T1 (1, d1 ), T2 (1, d 2 ),..., Tn (1, d n )} . It will be
proved by contradiction that TS has the minimum utilization among all task set instances that
have the same d1 and d 2 as TS. Suppose there is another task set
TS ' = {T1' (1, d1' ), T2' (1, d 2' ),..., Tn' (1, d n' )} that perfectly utilizes the processor and d1' = d1 and
d 2' = d 2 , but has a smaller utilization. Then there must be some task Tj, 3 ≤ j ≤ n, such that
d j < d 'j . Let Ti be such a task that has the minimum deadline. It will be shown that
i ∉ {3,..., n − 2} , i ≠ n − 1 , and i ≠ n case by case.
1. i ∉ {3, ..., n − 2}.

Proof. Suppose, on the contrary, that there is an i, 3 ≤ i ≤ n − 2, such that d′i > di. Using time-demand analysis, it can be proved that d′i = d′i+1 = … = d′n = di + 1 < 2d2 = 2d′2. Then

$$
U' - U \ge \sum_{j=i}^{n} \frac{1}{d'_j} - \sum_{j=i}^{n} \frac{1}{d_j} = \frac{n-i+1}{d_i+1} - \sum_{j=i}^{n} \frac{1}{d_j}
> \frac{n-i+1}{d_i+1} - \left(\sum_{j=i}^{n-2} \frac{1}{d_i + j - i} + \frac{1}{d_{n-1}} + \frac{1}{d_n}\right)
$$

• i = n − 2. Then U′ − U = 3/(dn−2 + 1) − (1/dn−2 + 1/dn−1 + 1/dn). By the construction of the di, either

$$
U' - U > \frac{3}{d_{n-2}+1} - \left(\frac{1}{d_{n-2}} + \frac{1}{d_{n-2}+2} + \frac{1}{d_{n-2}+2}\right) > 0
$$

or

$$
U' - U > \frac{3}{d_{n-2}+1} - \left(\frac{1}{d_{n-2}} + \frac{1}{d_{n-2}+1} + \frac{1}{d_{n-2}+4}\right) > 0.
$$

• For all other i,

$$
U' - U = \frac{n-i+1}{d_i+1} - \sum_{j=i}^{n} \frac{1}{d_j} > \frac{n-i+1}{d_i+1} - \sum_{j=i}^{n} \frac{1}{d_i + j - i} > 0.
$$
2. i ≠ n − 1.

• dn−1 = t1.

For 2 ≤ j ≤ n − 2, dj < t1 < 2d2 ≤ 2d′j. So T′j makes exactly two requests before time t1. The total number of requests from T′1, T′2, …, T′n−1 before t1 is R = ⌈t1/d1⌉ + 2(n − 3) + 1. By equation B-2, R = t1. None of T′1, T′2, …, T′i−1, T′i+1, …, T′n−2 will make their third requests at time t1. Because d′n−1 > dn−1 = t1, T′n−1 will not make a request at time t1. Thus, the first request of T′n will be executed. This means that d′n = d′n−1 = dn−1 + 1 ≤ 2d2.

For 3 ≤ j ≤ n − 2, by assumption, d′j ≤ dj. Therefore,

$$
U' - U \ge \frac{1}{d'_{n-1}} + \frac{1}{d'_n} - \left(\frac{1}{d_{n-1}} + \frac{1}{d_n}\right) = \frac{2}{d_{n-1}+1} - \left(\frac{1}{d_{n-1}} + \frac{1}{d_n}\right)
$$

By tables B-3 and B-4, one can find that for each pair of dn−1 and dn such that dn−1 = t1, the right-hand side of the above inequality is positive; i.e., U′ − U > 0.

• dn−1 = dn = t1 + 1.

It will be shown that in this case it is impossible that d′n−1 > dn−1. By tables B-3 and B-4, t1 = 2d2 − 2 < 2d′j, where 2 ≤ j ≤ n − 2. So at time t1, task T′j makes exactly two requests. By equation B-2, all requests from T′1, …, T′n−1 will be executed before t1, and no request is released at t1 by task T′j, 2 ≤ j ≤ n − 2, because t1 < 2d′j. It has been proved that t1 is not a multiple of d′1 = d1, so T′1 will not make a request at t1. If d′n−1 > dn−1, T′n−1 will not make its second request at t1. In order for the task set to perfectly utilize the processor, the first request of T′n must be executed at t1. Therefore, d′n = d′n−1 = dn−1, which contradicts the assumption that d′n−1 > dn−1.
3. i ≠ n.
Suppose i = n , i.e., Tn is the only task such that di < d 'i . There are two cases to consider
depending on the relationship between d n and d 2 .
• d n ≤ 2d 2 .
• dn = 2d2 + 2 .
It is known that d '3 ≥ d '2 = d 2 . To prove the claim, there are two cases to
consider, depending on whether d '3 > d 2 or d '3 = d 2 . If d ′3 > d 2 , then T ′3 will
not make its third request before d n = 2d 2 + 2 . So, using the same argument for
the case d n ≤ 2d 2 , it can be shown that d ′n = d n , which contradicts the assumption
that dn < d′n.
Consider now the case d ′3 = d 2 . The proof is quite involved. The idea is to show
that the task set TS′ can be obtained from TS by a series of modifications of di and dn for i ≥ 3. Each step can be described as follows: starting from i = 3, if d′i < di, then decrease di to d′i and, if necessary, increase dn so that the modified task set continues to perfectly utilize the processor. Let the modified
denote the maximum index i such that d 'i < di . By assumption, S ≤ n − 1 . After
S steps, the instance TS ′ is obtained. Let U denote the total utilization of the
perfect instance. In the following, it will be proved that U < U ′ , which is a
contradiction. For convenience let U 2 = U and d n (2) = 2d 2 + 2 .
The proof is by induction on i, the index of the task besides Tn that is changed at each step.
Induction hypotheses: There are three hypotheses:
1. For 3 ≤ i ≤ S − 1, either dn(i) = dn(i − 1) + 1 or dn(i) = dn(i − 1) + 2; 2d′i < dn(i − 1) ≤ 2di.
2. For 3 ≤ i ≤ S − 1, dn(i) ≤ 3d2, and

   dn(i) = 2d2 + i + ⌈(i + 2r − d1 − 2)/(d1 − 1)⌉ = 2d2 + i + ⌈(i − (2d1 − 2r))/(d1 − 1)⌉     (B-13)

3. U < Ui for 3 ≤ i ≤ S.
Base case: i = 3 .
a. r = d1 − 1 .
b. r < d1 − 1 .
is 1 more than that of T3 in the original instance. In order to make the new instance perfectly utilize the processor, dn needs to be increased by at least 1. It is enough that dn be increased by exactly 1. Because dn = 2d2 + 2 is not a multiple of d1, T1 will not make any request at time dn. The period of any other task Tj, 4 ≤ j ≤ n − 1, is not changed. So T′j makes exactly two requests before time dn + 2, which is the same as in the original task set. Therefore, dn(3) = dn + 1 = 2d2 + 3.
dn(3) = 2d2 + 3 + ⌈(3 + 2r − d1 − 2)/(d1 − 1)⌉ = 2d2 + 3 + ⌈(3 − (2d1 − 2r))/(d1 − 1)⌉
3. U < U3 .
Since only the deadlines of T3 and Tn are modified, the difference between
the total utilization of the new instance U 3 and that of the original instance
U is
U3 − U = 1/d′3 + 1/dn(3) − ( 1/d3 + 1/dn ) = 1/d2 + 1/dn(3) − ( 1/d3 + 1/(2d2 + 2) ).
Induction step: Suppose the hypotheses are true for 3, 4, …, i − 1, where i ≤ S. Now there is the task set TSi−1. One needs to change di to d′i and change dn(i − 1) to dn(i) if necessary. The hypotheses need to be proved true for i.
So there is no need to increase dn(i − 1), which means that dn(i) = dn(i − 1). Therefore, Ui−1 < Ui. By induction, U < Ui−1. Thus, U < Ui. Furthermore, for all i < j ≤ n − 1, d′i ≤ d′j. Therefore, 2d′j ≥ dn(i). So the decrease of dj to d′j will not make any change to dn(j), i.e., dn(j) = dn(i). However, this can only increase the total utilization. Therefore, it must be true that dj = d′j. This means that TS′ = TSi and U < Ui = U′, which is a contradiction.
Now the remaining parts of the hypotheses will be proved. There are two cases to consider, i ≤ n − 2 and i = n − 1.
1. i ≤ n−2.
dn(i) = 2d2 + i + ⌈(i + 2r − d1 − 2)/(d1 − 1)⌉ = 2d2 + i + ⌈(i − (2d1 − 2r))/(d1 − 1)⌉
c. U < Ui .
2. i = n −1 .
a. d n (n − 2) < 3d 2 .
b. d n (n − 2) = 3d 2 .
In this case, if dn−1 is changed to d′n−1, it is no longer true that one only needs to increase dn(n − 2) by 1 or 2 as before, because after dn(n − 2) is increased by 1 or 2, T′2 and T′3 will make their fourth requests. It may happen that while these requests are being executed, T′4, … also make requests during this period. So there is no chance to execute the first request of T′n before finishing other requests with higher priority.

In order to prove the hypotheses, the simple calculation used before does not suffice. One needs to compare U and Un−1 term by term. d′i can be estimated based on the induction.
d′i ≤ ⌈( dn(i − 1) − 1 )/2⌉
    = ⌈( 2d2 + (i − 1) + ⌈(i − 1 − (2d1 − 2r))/(d1 − 1)⌉ − 1 )/2⌉
    ≤ d2 + ⌈( i − 2 + ⌈(i − 1 − (2d1 − 2r))/(d1 − 1)⌉ )/2⌉
    ≤ d2 + ( i − 2 + ⌈(i − (2d1 − 2r))/(d1 − 1)⌉ )/2
di = d2 + i − 2 + ⌈(i + r − 3)/(d1 − 1)⌉ = d2 + i − 2 + ⌈(i + r − 2 − (d1 − 1))/(d1 − 1)⌉
   = d2 + i − 2 + ⌈(i − (d1 − r + 1))/(d1 − 1)⌉
   ≤ ( (n + i − 3)d1 − 1 )/(d1 − 1)

⌈(i − (d1 − r + 1))/(d1 − 1)⌉ − 1 ≤ ⌈(i − (2d1 − 2r))/(d1 − 1)⌉ ≤ ⌈(i − (d1 − r + 1))/(d1 − 1)⌉
d′i ≤ di − ( i − 2 + ⌈(i − (d1 − r + 1))/(d1 − 1)⌉ )/2
    ≤ di − ( i − 2 + (i − (d1 − r + 1))/(d1 − 1) )/2
    ≤ ( (2n + i − 3)d1 − r − 3 )/(2(d1 − 1))
1/d′i − 1/di = (di − d′i)/(di d′i)
  ≥ ( i − 2 + (i − (d1 − r + 1))/(d1 − 1) ) / (2 di d′i)
  ≥ [ ( (i − 2)(d1 − 1) + (i − d1 + r − 1) )/(2(d1 − 1)) ] / [ ( ((n + i − 3)d1 − 1)/(d1 − 1) ) · ( ((2n + i − 3)d1 − r − 3)/(2(d1 − 1)) ) ]
  ≥ (d1 − 1)( (i − 3)d1 + r + 3 ) / ( [(n + i − 3)d1 − 1][(2n + i − 3)d1 − r − 3] )
  ≥ (d1 − 1)(i − 3)d1 / ( [(n + i − 3)d1][(2n + i − 3)d1] )
  ≥ ( (d1 − 1)/d1 ) · (i − 3)/((n + i − 3)(2n + i − 3))
Un−1 − U = Σi=1..n ( 1/d′i − 1/di ) = Σi=3..n ( 1/d′i − 1/di )
  > Σi=3..n−2 ( 1/d′i − 1/di ) + ( 1/d′n−1 − 1/dn−1 ) + ( 0 − 1/dn )
  > Σi=3..n−2 ( 1/d′i − 1/di ) + ( 2/(3d2) − 1/(2d2 − 2) ) + ( 0 − 1/(2d2 + 2) )
  = Σi=3..n−2 ( 1/d′i − 1/di ) − (d2² + 2)/(3d2(d2² − 1))
  = Σi=3..n−2 ( 1/d′i − 1/di ) − ( 1/(3d2) + 1/(d2(d2² − 1)) )
  > Σi=3..n−2 ( 1/d′i − 1/di ) − ( 1/(3d2) + 1/(d2(n² − 1)) )
  > Σi=3..n−2 ( 1/d′i − 1/di ) − ( (d1 − 1)/(3[(n − 1)d1 − r]) + (d1 − 1)/([(n − 1)d1 − r](n² − 1)) )
  > Σi=3..n−2 ( 1/d′i − 1/di ) − [ (d1 − 1)/((3n − 6)d1) + (d1 − 1)/((n − 2)(n² − 1)d1) ]
  > ( (d1 − 1)/d1 ) [ ( Σi=3..n−2 (i − 3)/((n + i − 3)(2n + i − 3)) ) − ( 1/(3n − 6) + 1/((n − 2)(n² − 1)) ) ]
  > 0 when n ≥ 10
B.5.2 PERFECT INSTANCE WHEN d1 AND d2 ARE ARBITRARY.
In the following, an instance of n tasks for given d1 and d2 will be constructed. For convenience,
the constructed instance will be called the perfect instance for n, d1, and d2. It will first be shown
that the instance perfectly utilizes the processor. It is then proved that the instance constructed
for d1 = n - 1 has the minimum total utilization among all task instances that have n tasks and
perfectly utilize the processor.
Given the number of tasks n, T1(1, d1), and T2(1, d2), where d1 ≤ n − 1, d2 ≤ n, and d1 ≤ d2, construct T3(1, d3), …, Tn(1, dn) as follows:
1. For 3 ≤ i ≤ n − 2, let di be the largest integer satisfying

   di = ⌈di/d1⌉ + ⌈di/d2⌉ + (n + i − 6)
   t ≤ ⌈t/d1⌉ + ⌈t/d2⌉ + (n + i − 6) for t ≤ di     (B-14)
2. In order to define dn−1 and dn, first define two values t1 and t2:

   t1 = ⌈t1/d1⌉ + ⌈t1/d2⌉ + (2n − 7)
   t ≤ ⌈t/d1⌉ + ⌈t/d2⌉ + (2n − 7) for t ≤ t1     (B-15)

   t2 = ⌈t2/d1⌉ + ⌈t2/d2⌉ + (2n − 5)
   t < ⌈t/d1⌉ + ⌈t/d2⌉ + (2n − 5) for t < t2     (B-16)
If t2 = kd3 + 1 for some integer k, then dn = t2 + 1 and dn−1 = t1. Otherwise, t2 ≠ kd3 + 1 for any k: if t2 = t1 + 2, let dn−1 = dn = t1 + 1 = t2 − 1; else t2 ≥ t1 + 3, and let dn−1 = t1 and dn = t2. It will first be proved that the constructed perfect instance has several properties. It will then be proved that this instance perfectly utilizes the processor.
Proof. Consider first the case 3 ≤ i ≤ n − 2. Let di = ki1 d1 + ri1 = ki2 d2 + ri2, where ki1 and ki2 are integers, 0 ≤ ri1 ≤ d1 − 1, and 0 ≤ ri2 ≤ d2 − 1. It will be proved that both ri1 and ri2 are greater than 0; if not, there are two cases to consider.
a. Either ri1 = 0 or ri2 = 0. Suppose that ri1 = 0; then ⌈(di + 1)/d1⌉ = ⌈di/d1⌉ + 1 and ⌈(di + 1)/d2⌉ = ⌈di/d2⌉. Therefore, equation B-14 also holds for i when t = di + 1. But by definition, di is the largest integer that satisfies equation B-14. This is a contradiction. The same conclusion can be reached for the case ri2 = 0.
b. ri1 = ri2 = 0. Because 2 ≤ d1 ≤ d2, ⌈(di + 2)/d1⌉ = ⌈di/d1⌉ + 1 and ⌈(di + 2)/d2⌉ = ⌈di/d2⌉ + 1. Therefore, equation B-14 also holds when t = di + 2; again, a contradiction.
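This property is easy to check numerically. A small sketch, assuming the same fixed-point reading of equation B-14 as elsewhere in this appendix (the helper names are illustrative, not from the report):

```python
def ceil_div(a, b):
    return -(-a // b)

def deadline(i, n, d1, d2):
    # Largest t with t = ceil(t/d1) + ceil(t/d2) + (n + i - 6) whose whole
    # prefix satisfies t' <= ceil(t'/d1) + ceil(t'/d2) + (n + i - 6)  (B-14).
    t, best = 1, None
    while t <= ceil_div(t, d1) + ceil_div(t, d2) + n + i - 6:
        if t == ceil_div(t, d1) + ceil_div(t, d2) + n + i - 6:
            best = t
        t += 1
    return best

# r_i1 > 0 and r_i2 > 0: no d_i (3 <= i <= n-2) is a multiple of d1 or d2.
n, d1, d2 = 10, 6, 7
for i in range(3, n - 1):
    di = deadline(i, n, d1, d2)
    assert di % d1 != 0 and di % d2 != 0
```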
Proof. Let di = ki1 d1 + ri1 = ki2 d2 + ri2, where 1 ≤ ri1 ≤ d1 − 1 and 1 ≤ ri2 ≤ d2 − 1. Similarly, let di+1 = k(i+1)1 d1 + r(i+1)1 = k(i+1)2 d2 + r(i+1)2, where 1 ≤ r(i+1)1 ≤ d1 − 1 and 1 ≤ r(i+1)2 ≤ d2 − 1. There are three cases to consider:
If ri1 ≤ d1 − 2 and ri2 ≤ d2 − 2, then ⌈(di + 1)/d1⌉ = ⌈di/d1⌉ and ⌈(di + 1)/d2⌉ = ⌈di/d2⌉. Let t = di + 1. It is easy to show that equation B-14 holds for i + 1, but for any t′ > t, equation B-14 does not hold. Therefore, di+1 = di + 1.

Suppose that ri1 = d1 − 1 and ri2 ≤ d2 − 3; then ⌈(di + 2)/d1⌉ = ⌈di/d1⌉ + 1 and ⌈(di + 2)/d2⌉ = ⌈di/d2⌉. Let t = di + 2. One can show that equation B-14 holds for i + 1, but for any t′ > t, equation B-14 does not hold. Therefore, di+1 = di + 2 in this case. For the case ri1 ≤ d1 − 3 and ri2 = d2 − 1, the proof is the same.
Using the same argument as above, one can prove that di+1 = di + 3.
• r1 = r2 = 1. See table B-9.

TABLE B-9. VALUE OF t1 WHEN r1 = r2 = 1

           d2 = 3     d2 = 4     d2 = 5, 6, or 7   d2 ≥ 8
  d1 = 2   2d3 − 15   2d3 − 11   2d3 − 9           2d3 − 7
  d1 = 3   2d3 − 9    2d3 − 7    2d3 − 6           2d3 − 6
  d1 ≥ 4   -          2d3 − 7    2d3 − 5           2d3 − 5

• r1 = 1 and r2 = 2. See table B-10.

TABLE B-10. VALUE OF t1 WHEN r1 = 1 AND r2 = 2

           d2 = 3     d2 = 4    d2 ≥ 5
  d1 = 2   2d3 − 11   2d3 − 9   2d3 − 7
  d1 = 3   2d3 − 6    2d3 − 6   2d3 − 6
  d1 ≥ 4   -          2d3 − 5   2d3 − 5
• r1 = 1 and 3 ≤ r2 ≤ d2/2. See table B-11.

TABLE B-11. VALUE OF t1 WHEN r1 = 1 AND 3 ≤ r2 ≤ d2/2
• r1 = 1 and r2 > d2/2. If d2 = 5 and r2 = 3, then t1 = 2d3 − 7; otherwise, see table B-12.

TABLE B-12. VALUE OF t1 WHEN r1 = 1 AND r2 > d2/2

           r2 = (d2 + 1)/2, (d2 + 2)/2, or (d2 + 3)/2   r2 = (d2 + 4)/2   r2 ≥ (d2 + 5)/2
  d1 = 2   2d3 − 5                                      2d3 − 5           2d3 − 3
  d1 ≥ 3   2d3 − 4                                      2d3 − 3           2d3 − 3
• 2 ≤ r1 ≤ d1/2 and r2 = 1.
• 2 ≤ r1 ≤ d1/2 and 2 ≤ r2 ≤ d2/2.
• 2 ≤ r1 ≤ d1/2 and r2 > d2/2, or r1 > d1/2 and 2 ≤ r2 ≤ d2/2. See tables B-13 and B-14, respectively.

TABLE B-13. VALUE OF t1 WHEN 2 ≤ r1 ≤ d1/2 AND r2 > d2/2

  r2 = (d2 + 1)/2   r2 = (d2 + 2)/2   r2 ≥ (d2 + 3)/2
  2d3 − 3           2d3 − 3           2d3 − 2

TABLE B-14. VALUE OF t1 WHEN r1 > d1/2 AND 2 ≤ r2 ≤ d2/2

  r1 = (d1 + 1)/2   r1 = (d1 + 2)/2   r1 ≥ (d1 + 3)/2
  2d3 − 3           2d3 − 3           2d3 − 2
• r1 > d1/2 and r2 = 1. See table B-15.

TABLE B-15. VALUE OF t1 WHEN r1 > d1/2 AND r2 = 1

  r1 = (d1 + 1)/2 and d1 = 3   r1 = (d1 + 1)/2 and d1 > 3   r1 = (d1 + 2)/2 or (d1 + 3)/2   r1 ≥ (d1 + 4)/2
  2d3 − 5                      2d3 − 4                      2d3 − 4                         2d3 − 3
• r1 > d1/2 and r2 > d2/2. See table B-16.

TABLE B-16. VALUE OF t1 WHEN r1 > d1/2 AND r2 > d2/2

                    r2 = (d2 + 1)/2   r2 = (d2 + 2)/2   r2 ≥ (d2 + 3)/2
  r1 = (d1 + 1)/2   2d3 − 3           2d3 − 3           2d3 − 2
  r1 = (d1 + 2)/2   2d3 − 3           2d3 − 3           2d3 − 1
  r1 ≥ (d1 + 3)/2   2d3 − 2           2d3 − 1           2d3 − 1
To check the correctness of the values of t1 for the different cases, it is sufficient to show that in each case

t1 = ⌈t1/d1⌉ + ⌈t1/d2⌉ + (2n − 7)
                d2 = 3    d2 = 4    d2 ≥ 5
  d1 = 2        2d3 − 8   2d3 − 4   2d3 − 4
  d1 = 3        2d3 − 5   2d3 − 3   2d3 − 3
  4 ≤ d1 ≤ d2   -         2d3 − 3   2d3 − 3
• r1 = 1 and 2 ≤ r2 ≤ d2/2. See table B-18.

TABLE B-18. VALUE OF t2 WHEN r1 = 1 AND 2 ≤ r2 ≤ d2/2

           r2 = 2    r2 ≥ 3
  d1 = 2   2d3 − 4   2d3 − 2
  d1 ≥ 3   2d3 − 2   2d3 − 2
• r1 = 1 and r2 > d2/2. See table B-19.

TABLE B-19. VALUE OF t2 WHEN r1 = 1 AND r2 > d2/2

  r2 = (d2 + 1)/2   r2 = (d2 + 2)/2   r2 = (d2 + 3)/2   r2 ≥ (d2 + 4)/2
  2d3 − 2           2d3 − 2           2d3               2d3
• 2 ≤ r1 ≤ d1/2 and r2 = 1.
  t2 = 2d3 − 2.
• 2 ≤ r1 ≤ d1/2 and 2 ≤ r2 ≤ d2/2.
  t2 = 2d3 − 1.
• 2 ≤ r1 ≤ d1/2 and r2 > d2/2. See table B-20.

TABLE B-20. VALUE OF t2 WHEN 2 ≤ r1 ≤ d1/2 AND r2 > d2/2

  r2 = (d2 + 1)/2   r2 = (d2 + 2)/2   r2 ≥ (d2 + 3)/2
  2d3 − 1           2d3               2d3
• r1 > d1/2 and 2 ≤ r2 ≤ d2/2. See table B-21.

TABLE B-21. VALUE OF t2 WHEN r1 > d1/2 AND 2 ≤ r2 ≤ d2/2

  r1 = (d1 + 1)/2   r1 = (d1 + 2)/2   r1 ≥ (d1 + 3)/2
  2d3 − 1           2d3               2d3
• r1 > d1/2 and r2 = 1. See table B-22.

TABLE B-22. VALUE OF t2 WHEN r1 > d1/2 AND r2 = 1

  r1 = (d1 + 1)/2 or (d1 + 2)/2   r1 = (d1 + 3)/2   r1 ≥ (d1 + 4)/2
  2d3 − 2                         2d3               2d3
• r1 > d1/2 and r2 > d2/2. See table B-23.

TABLE B-23. VALUE OF t2 WHEN r1 > d1/2 AND r2 > d2/2

                    r2 = (d2 + 1)/2   r2 = (d2 + 2)/2   r2 ≥ (d2 + 3)/2
  r1 = (d1 + 1)/2   2d3 − 1           2d3 + 1           2d3 + 1
  r1 = (d1 + 2)/2   2d3 + 1           2d3 + 1           2d3 + 1
  r1 ≥ (d1 + 3)/2   2d3 + 1           2d3 + 1           2d3 + 1
Claim 7: Based on t1 and t2, dn−1 and dn are constructed as shown in tables B-24 to B-31.

TABLE B-24. VALUES OF dn−1 AND dn WHEN r1 = r2 = 1

                d2 = 3            d2 = 4            d2 = 5 or 6       d2 ≥ 7
  d1 = 2        dn−1 = 2d3 − 15   dn−1 = 2d3 − 11   dn−1 = 2d3 − 9    dn−1 = 2d3 − 7
                dn = 2d3 − 8      dn = 2d3 − 4      dn = 2d3 − 4      dn = 2d3 − 4
  d1 = 3        dn−1 = 2d3 − 9    dn−1 = 2d3 − 7    dn−1 = 2d3 − 6    dn−1 = 2d3 − 6
                dn = 2d3 − 5      dn = 2d3 − 3      dn = 2d3 − 3      dn = 2d3 − 3
  4 ≤ d1 ≤ d2   -                 dn−1 = 2d3 − 7    dn−1 = 2d3 − 4    dn−1 = 2d3 − 4
                                  dn = 2d3 − 3      dn = 2d3 − 4      dn = 2d3 − 4

TABLE B-25. VALUES OF dn−1 AND dn WHEN r1 = 1 AND r2 = 2

           d2 = 3            d2 = 4            d2 ≥ 5
  d1 = 2   dn−1 = 2d3 − 11   dn−1 = 2d3 − 9    dn−1 = 2d3 − 7
           dn = 2d3 − 4      dn = 2d3 − 4      dn = 2d3 − 4
  d1 = 3   dn−1 = 2d3 − 6    dn−1 = 2d3 − 6    dn−1 = 2d3 − 6
           dn = 2d3 − 2      dn = 2d3 − 2      dn = 2d3 − 2
  d1 ≥ 4   -                 dn−1 = 2d3 − 5    dn−1 = 2d3 − 5
                             dn = 2d3 − 2      dn = 2d3 − 2
• r1 = 1 and 3 ≤ r2 ≤ d2/2. See table B-26.

TABLE B-26. VALUES OF dn−1 AND dn WHEN r1 = 1 AND 3 ≤ r2 ≤ d2/2

  d1 = 2 and r2 = 3   d1 = 2 and r2 ≥ 4   d1 ≥ 3
  dn−1 = 2d3 − 7      dn−1 = 2d3 − 5      dn−1 = 2d3 − 3
  dn = 2d3 − 2        dn = 2d3 − 2        dn = 2d3 − 3
• r1 = 1 and r2 > d2/2.
  If d2 = 5 and r2 = 3, then dn−1 = 2d3 − 7 and dn = 2d3 − 2; otherwise, see table B-27.

TABLE B-27. VALUES OF dn−1 AND dn WHEN r1 = 1 AND r2 > d2/2

           r2 = (d2 + 1)/2 or (d2 + 2)/2   r2 = (d2 + 3)/2   r2 = (d2 + 4)/2   r2 ≥ (d2 + 5)/2
  d1 = 2   dn−1 = 2d3 − 5                  dn−1 = 2d3 − 5    dn−1 = 2d3 − 5    dn−1 = 2d3 − 3
           dn = 2d3 − 2                    dn = 2d3          dn = 2d3          dn = 2d3
  d1 ≥ 3   dn−1 = 2d3 − 3                  dn−1 = 2d3 − 4    dn−1 = 2d3 − 3    dn−1 = 2d3 − 3
           dn = 2d3 − 3                    dn = 2d3          dn = 2d3          dn = 2d3
• 2 ≤ r1 ≤ d1/2 and r2 = 1.
  dn−1 = dn = t1 + 1 = 2d3 − 2.
• 2 ≤ r1 ≤ d1/2 and r2 > d2/2. See table B-28.

TABLE B-28. VALUES OF dn−1 AND dn WHEN 2 ≤ r1 ≤ d1/2 AND r2 > d2/2

  r2 = (d2 + 1)/2   r2 = (d2 + 2)/2   r2 ≥ (d2 + 3)/2
  dn−1 = 2d3 − 2    dn−1 = 2d3 − 3    dn−1 = 2d3 − 1
  dn = 2d3 − 2      dn = 2d3          dn = 2d3 − 1
• r1 > d1/2 and 2 ≤ r2 ≤ d2/2. See table B-29.

TABLE B-29. VALUES OF dn−1 AND dn WHEN r1 > d1/2 AND 2 ≤ r2 ≤ d2/2

  r1 = (d1 + 1)/2   r1 = (d1 + 2)/2   r1 ≥ (d1 + 3)/2
  dn−1 = 2d3 − 2    dn−1 = 2d3 − 3    dn−1 = 2d3 − 1
  dn = 2d3 − 2      dn = 2d3          dn = 2d3 − 1
• r1 > d1/2 and r2 = 1. See table B-30.

TABLE B-30. VALUES OF dn−1 AND dn WHEN r1 > d1/2 AND r2 = 1

  r1 = (d1 + 1)/2, d1 = 3   r1 = (d1 + 1)/2, d1 > 3   r1 = (d1 + 2)/2   r1 = (d1 + 3)/2   r1 ≥ (d1 + 4)/2
  dn−1 = 2d3 − 5            dn−1 = 2d3 − 3            dn−1 = 2d3 − 3    dn−1 = 2d3 − 4    dn−1 = 2d3 − 3
  dn = 2d3                  dn = 2d3 − 3              dn = 2d3          dn = 2d3          dn = 2d3
• r1 > d1/2 and r2 > d2/2. See table B-31.

TABLE B-31. VALUES OF dn−1 AND dn WHEN r1 > d1/2 AND r2 > d2/2

                    r2 = (d2 + 1)/2    r2 = (d2 + 2)/2   r2 ≥ (d2 + 3)/2
  r1 = (d1 + 1)/2   dn−1 = 2d3 − 2     dn−1 = 2d3 − 3    dn−1 = 2d3 − 2
                    dn = 2d3 − 2       dn = 2d3 + 2      dn = 2d3 + 2
  r1 = (d1 + 2)/2   dn−1 = 2d3 − 3     dn−1 = 2d3 − 3    dn−1 = 2d3 − 1
                    dn = 2d3 + 2       dn = 2d3 + 2      dn = 2d3 + 2
  r1 ≥ (d1 + 3)/2   dn−1 = 2d3 − 2     dn−1 = 2d3 − 1    dn−1 = 2d3 − 1
                    dn = 2d3 + 2       dn = 2d3 + 2      dn = 2d3 + 2
• dn − 1 = t2, i.e., dn = t2 + 1.
  By construction of dn, this case happens when t2 = kd3 + 1 for some k. From the tables in claim 7, one can see that this case happens only when r1 > d1/2 and r2 > d2/2, where t2 = 2d3 + 1. Let d3 = k1 d1 + r1. Then t2 = 2k1 d1 + 2r1 + 1. Since d1/2 < r1 ≤ d1 − 1, d1 + 1 ≤ 2r1 + 1 ≤ 2d1 − 1 < 2d1. Therefore, (2k1 + 1)d1 < t2 < (2k1 + 2)d1, which implies that t2 cannot be a multiple of d1. Similarly, one can prove that t2 is not a multiple of d2.
• dn − 1 = t2 − 1, i.e., dn = t2.
  Suppose that t2 − 1 is a multiple of d1 but not of d2; then ⌈(t2 − 1)/d1⌉ = ⌈t2/d1⌉ − 1 and ⌈(t2 − 1)/d2⌉ = ⌈t2/d2⌉. Hence, equation B-16 also holds for t2 − 1. However, by definition, t2 is the minimum integer that satisfies equation B-16. This is a contradiction. Using a similar argument, one can prove that dn − 1 cannot be a multiple of d2.
• dn − 1 = t2 − 2, i.e., dn = t2 − 1.
  In this case, dn = dn−1, which means that t2 = t1 + 2. By equations B-15 and B-16, one must have ⌈(t2 − 2)/d1⌉ = ⌈t2/d1⌉ and ⌈(t2 − 2)/d2⌉ = ⌈t2/d2⌉. Thus, dn − 1 = t2 − 2 cannot be a multiple of d1 or d2.
Theorem 12: The perfect instance T1, T2, . . . , Tn constructed above perfectly utilizes the
processor.
Below are some lemmas about the DM schedule for the perfect instance. The theorem follows
directly from these lemmas and the definition of perfect utilization.
Lemma 2: For the DM schedule of the perfect instance, there is no idle time before t1.
Proof. It is enough to prove that at any time t < t1, the number of requests R from T1, . . . , Tn-2,
Tn-1 is greater than or equal to t. There are two cases depending on t.
• t = di for some i.
  Then t ≤ dj for j ≥ i, so Ti, …, Tn−1 each makes exactly one request before t. On the other hand, dk < t ≤ 2dk for 3 ≤ k ≤ i − 1, so T3, …, Ti−1 each makes exactly two requests. Therefore, the total number of requests before time t from T1, …, Tn−1 is R = ⌈t/d1⌉ + ⌈t/d2⌉ + n + i − 6. By construction, t = di = R.
• di < t < di+1 for some i.
  Then t < dj for j ≥ i + 1, so Ti+1, …, Tn−1 each makes exactly one request before t. On the other hand, dj < t ≤ 2dj for 3 ≤ j ≤ i, so T3, …, Ti each makes exactly two requests. Therefore, at time t, the total number of requests from T1, …, Tn−1 is R = ⌈t/d1⌉ + ⌈t/d2⌉ + n + (i + 1) − 6. By the construction of di+1, t ≤ R.
Lemma 3: All requests from T1, …, Tn−1 released during the period [0, t1) are executed by t1.

Proof. One can show that the number of requests from T1, …, Tn−1 during the period [0, t1) is equal to t1.
Lemma 4: None of T4, …, Tn−2 will make a request during the period [t1, dn − 1].

Proof. Because dj < t1 < dn − 1 ≤ 2d3 + 1 < 2dj for 4 ≤ j ≤ n − 2, Tj makes exactly two requests before t1 and will not make its third request before dn − 1.
Lemma 5: There is no idle time during the period [t1, dn − 1), and there is no request from T1, T2, T3, or Tn−1 at dn − 1. Therefore, the first request from Tn will be executed at dn − 1.

Proof. Let t1 ≤ t ≤ dn − 1. To prove that there is no idle time from t1 to dn − 1, it is sufficient to show that at t the number of requests from all tasks except Tn is greater than or equal to t. Consider first dn = t2 ≤ 2d3. At t the numbers of requests from T1 and T2 are ⌈t/d1⌉ and ⌈t/d2⌉, respectively. Each task Tj, 3 ≤ j ≤ n − 1, has two requests, so T3, …, Tn−1 together contribute 2n − 6 requests. The total number of requests by T1, …, Tn−1 is R = ⌈t/d1⌉ + ⌈t/d2⌉ + (2n − 6). By the definition of t2, t < ⌈t/d1⌉ + ⌈t/d2⌉ + (2n − 5) = R + 1, i.e., t ≤ R.

It will now be shown that no request is released at dn − 1. It has already been shown that dn − 1 is not a multiple of d1 or d2, so neither T1 nor T2 makes a request at dn − 1. Since either d3 < dn − 1 < 2d3 or dn − 1 = 2d3 + 1, T3 will not make its third or fourth request at dn − 1. By construction, either dn − 1 = dn−1 − 1 or dn−1 < dn − 1 < 2dn−1. So Tn−1 will not make its second or third request at time dn − 1.

Combining lemmas 2, 3, and 4, the processor is busy from time 0 to dn − 1, all requests from T1, …, Tn−1 have been executed before time dn − 1, and no request is released at time dn − 1. So the first request of Tn will be executed at time dn − 1. By definition, the perfect instance perfectly utilizes the processor. Thus, theorem 12 is proved.
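Lemmas 2 through 5 can be checked by simulating the DM schedule slot by slot. The sketch below assumes unit-execution-time tasks with period equal to deadline; the instance shown, for n = 10, d1 = 6, and d2 = 7, was computed from equations B-14 to B-16 and is used only as an illustration:

```python
def first_idle_slot(deadlines, horizon):
    # DM schedule of unit-execution-time tasks T_i = (1, d_i); task i releases
    # a job at 0, d_i, 2*d_i, ... and a smaller d_i means a higher priority.
    # Returns the first idle slot in [0, horizon), or None if the processor
    # stays busy the whole time.
    pending = [0] * len(deadlines)      # unfinished unit jobs per task
    for t in range(horizon):
        for i, d in enumerate(deadlines):
            if t % d == 0:
                pending[i] += 1         # new request released at time t
        for i in range(len(deadlines)): # tasks are listed by deadline,
            if pending[i]:              # so this picks the best priority
                pending[i] -= 1         # run one unit of that job
                break
        else:
            return t
    return None

# Perfect instance for n = 10, d1 = 6, d2 = 7 (here d_n = 24):
instance = [6, 7, 11, 13, 15, 16, 17, 19, 20, 24]
print(first_idle_slot(instance, 24))    # None: busy throughout [0, d_n)
```

In this run the processor is never idle before dn, and Tn's single unit executes in the last slot, finishing exactly at its deadline, which is what perfect utilization requires.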
Lemma 6: Given n and d2 ≥ d1 > (n + 1)/2, let the perfect instance for n, d1, and d2 be TS = {T1(1, d1), T2(1, d2), T3(1, d3), …, Tn(1, dn)}; then d3 = n + 1.
Claim 9: Given n and d2 ≥ d1 > (n + 1)/2, let the perfect instance for n, d1, and d2 be TS = {T1(1, d1), T2(1, d2), T3(1, d3), …, Tn(1, dn)}, with total utilization U(d2). Let TS′ = {T1(1, d1), T2(1, d′2 = d2 + 1), T′3(1, d′3), …, T′n(1, d′n)} be the perfect instance for n, d1, and d2 + 1, with total utilization U(d2 + 1). Then U(d2) ≥ U(d2 + 1).
The proof is by showing that U(d2) − U(d2 + 1) = Σi=1..n ( 1/di − 1/d′i ) > 0 for every possible d1 and d2. The cases are listed in table B-32.
TABLE B-32. CASES OF d1 AND d2 (TABLE ENTRIES ARE LEMMA NUMBERS)

  d1 \ d2               [(n+2)/2, (2n−4)/3]  (2n−3)/3  (2n−2)/3  (2n−1)/3  2n/3  (2n+1)/3  (2n+2)/3  [(2n+3)/3, n−3]  n−2  n−1
  [(n+2)/2, (2n−4)/3]   7                    8         9         10        11    11        11        12               12   15
  (2n−3)/3              *                    16        **        **        17    **        **        12               12   15
  (2n−2)/3              *                    *         18        **        **    17        **        12               12   19
  (2n−1)/3              *                    *         *         20        **    **        21        13               14   19
  2n/3                  *                    *         *         *         20    **        **        13               22   19
  (2n+1)/3              *                    *         *         *         *     20        **        13               23   19
  (2n+2)/3              *                    *         *         *         *     *         20        24               23   19
  [(2n+3)/3, n−3]       *                    *         *         *         *     *         *         25               23   19
  n−2                   *                    *         *         *         *     *         *         *                26   19
  n−1                   *                    *         *         *         *     *         *         *                *    27
From Lemma 6, it follows that d3 = d′3 = n + 1, r1 = n + 1 − d1, r2 = n + 1 − d2, and r′2 = r2 − 1. Let

S1 = Σi=3..n−2 ( 1/d′i − 1/di ) and S2 = ( 1/d′n−1 + 1/d′n ) − ( 1/dn−1 + 1/dn ).

Then U(d2) − U(d2 + 1) = 1/(d2(d2 + 1)) − S1 − S2.
The values of di, 4 ≤ i ≤ n, depend on the values of d1 and d2. Since S1 and S 2 depend on the
values of di, their values depend on d1 and d2 as well. The claim will be proved for each case.
For all cases, the idea is the same: From d1 and d2, one can obtain r1 , r2 and r′2. Applying claim
7, it is easy to calculate S2. To calculate S1, one needs to find di and d′i for 4 ≤ i ≤ n-2.
Specifically, one needs to express di and d′i in terms of d3. As shown in claim 4, di+1 is usually greater than di by 1; thus di = d3 + i − 3. But it has been proved that di cannot be a multiple of d1 or d2. Therefore, this formula needs corrections to jump over those multiples of d1 or d2. Initially, di is increased by 1 to get di+1, so di = d3 + i − 3. At some point di = 2d1 − 1; to jump over 2d1, the formula becomes di = d3 + i − 2. After that, di is again increased by 1 to get di+1, until at another point di = 2d2 − 1 or di = 3d1 − 1, where the formula changes again. These points (the indices i of the tasks, 3 ≤ i ≤ n − 2) at which the formula is corrected vary with d1 and d2.
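The correction procedure just described amounts to a simple rule: advance the deadline by 1, jumping over multiples of d1 and d2. A sketch of that rule, under the assumption that consecutive deadlines differ only by such jumps (the function name is illustrative):

```python
def next_deadline(prev, d1, d2):
    # Advance by 1 from the previous deadline, jumping over any value that
    # is a multiple of d1 or d2 (no d_i may be such a multiple).
    t = prev + 1
    while t % d1 == 0 or t % d2 == 0:
        t += 1
    return t

# For n = 10, d1 = 6, d2 = 7: starting from d3 = 11 this reproduces
# d4..d8 = 13, 15, 16, 17, 19 of the perfect instance.
d1, d2, d = 6, 7, 11
chain = []
for _ in range(5):
    d = next_deadline(d, d1, d2)
    chain.append(d)
print(chain)   # [13, 15, 16, 17, 19]
```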
Lemma 7: U(d2) ≥ U(d2 + 1) when (n + 2)/2 < d1 ≤ d2 ≤ 2(n − 2)/3.
Proof. (d1 + 6)/2 < r1 ≤ d1 − 1, (d2 + 6)/2 < r2 ≤ d2 − 1, and (d2 + 3)/2 < r′2 ≤ d2 − 3.

By table B-31 in claim 7, d′n−1 = dn−1 and d′n = dn. Therefore, S2 = 0.

To calculate S1, first consider the case d1 < d2; here (2d2 + 2)/3 = 2d′2/3 < d1 < d2.
di =  d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
      d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 2d2 − n
      d3 + i − 1   if 2d2 − n + 1 ≤ i ≤ 3d1 − n − 1
      d3 + i       if 3d1 − n ≤ i ≤ 3d2 − n − 2
      d3 + i + 1   if 3d2 − n − 1 ≤ i ≤ n − 2

d′i = d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
      d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 2d2 − n + 2
      d3 + i − 1   if 2d2 − n + 3 ≤ i ≤ 3d1 − n − 1
      d3 + i       if 3d1 − n ≤ i ≤ 3d2 − n + 1
      d3 + i + 1   if 3d2 − n + 2 ≤ i ≤ n − 2
Based on the above expressions, the cases when di and d′i take different values are listed in
table B-33.
S1 = ( 1/(2d2) + 1/(3d2) ) − ( 1/(2d2 + 2) + 1/(3d2 + 3) ) = 5/(6d2(d2 + 1))

U(d2) − U(d2 + 1) = 1/(d2(d2 + 1)) − 5/(6d2(d2 + 1)) − 0 > 0
di =  d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
      d3 + i − 1   if 2d1 − n + 2 ≤ i ≤ 3d1 − n − 1
      d3 + i + 1   if 3d1 − n ≤ i ≤ n − 2

d′i = d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
      d3 + i − 2   if i = 2d1 − n + 2
      d3 + i − 1   if 2d1 − n + 3 ≤ i ≤ 3d1 − n − 1
      d3 + i       if i = 3d1 − n or 3d1 − n + 1
      d3 + i + 1   if 3d1 − n + 2 ≤ i ≤ n − 2
Based on the above expressions, the cases when di and d′i take different values are listed in
table B-34.
S1 = ( 1/(2d1 + 1) + 1/(3d1 + 1) ) − ( 1/(2d1 + 2) + 1/(3d1 + 3) )

U(d2) − U(d2 + 1) = 1/(d2(d2 + 1)) − [ ( 1/(2d1 + 1) + 1/(3d1 + 1) ) − ( 1/(2d1 + 2) + 1/(3d1 + 3) ) ] − 0 > 0.
Proof. r1 ≥ (d1 + 6)/2, r2 = (n + 6)/3 = (d2 + 5)/2, d′2 = d2 + 1 = 2n/3, and r′2 = (n + 3)/3 = (d′2 + 2)/2. By table B-31 in claim 7, d′n−1 = dn−1 and d′n = dn = 2d3 + 2 = 2(n + 2). Therefore, S2 = 0.
di =  d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
      d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 2d2 − n
      d3 + i − 1   if 2d2 − n + 1 ≤ i ≤ 3d1 − n − 1
      d3 + i       if 3d1 − n ≤ i ≤ 3d2 − n − 2
      d3 + i + 1   if i = n − 4, n − 3, or n − 2

d′i = d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
      d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 2d2 − n + 2
      d3 + i − 1   if 2d2 − n + 3 ≤ i ≤ 3d1 − n − 1
      d3 + i       if 3d1 − n ≤ i ≤ n − 2
Based on the above expressions, the cases when di and d′i take different values are listed in
table B-35.
  i     2d2 − n + 1   2d2 − n + 2   n − 4    n − 3    n − 2
  di    2d2 + 1       2d2 + 2       2n − 2   2n − 1   2n
  d′i   2d2           2d2 + 1       2n − 3   2n − 2   2n − 1

S1 = ( 1/(2d2) + 1/(2n − 3) ) − ( 1/(2d2 + 2) + 1/(2n) )

U(d2) − U(d2 + 1) = 1/(d2(d2 + 1)) − [ ( 1/(2d2) + 1/(2n − 3) ) − ( 1/(2d2 + 2) + 1/(2n) ) ] − 0 > 0.
Proof. r1 ≥ (d1 + 6)/2, r2 = (n + 5)/3 = (d2 + 4)/2, d′2 = d2 + 1 = (2n + 1)/3, and r′2 = (n + 2)/3 = (d′2 + 1)/2. By table B-31 in claim 7, dn−1 = 2d3 − 1 = 2n + 1, dn = 2d3 + 2 = 2n + 4, d′n−1 = 2d3 − 2 = 2n, and d′n = 2d3 + 2 = 2n + 4. Therefore,

S2 = 1/(2n(2n + 1))
di =  d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
      d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 2d2 − n
      d3 + i − 1   if 2d2 − n + 1 ≤ i ≤ 3d1 − n − 1
      d3 + i       if 3d1 − n ≤ i ≤ 3d2 − n − 2 = n − 4
      d3 + i + 1   if i = n − 3 or n − 2

d′i = d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
      d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 2d2 − n + 2
      d3 + i − 1   if 2d2 − n + 3 ≤ i ≤ 3d1 − n − 1
      d3 + i       if 3d1 − n ≤ i ≤ n − 2
Based on the above expressions, the cases when di and d′i take different values are listed in
table B-36.
  i     2d2 − n + 1   2d2 − n + 2   n − 3    n − 2
  di    2d2 + 1       2d2 + 2       2n − 1   2n
  d′i   2d2           2d2 + 1       2n − 2   2n − 1

S1 = 1/(2d2(d2 + 1)) + 1/(2n(n − 1))

U(d2) − U(d2 + 1) = 1/(d2(d2 + 1)) − ( 1/(2d2(d2 + 1)) + 1/(2n(n − 1)) ) − 1/(2n(2n + 1)) > 0.
Proof. r1 ≥ (d1 + 6)/2, r2 = (n + 4)/3 = (d2 + 3)/2, d′2 = d2 + 1 = (2n + 2)/3, and r′2 = (n + 1)/3 = d′2/2. By tables B-29 and B-31 in claim 7, dn−1 = 2d3 − 1 = 2n + 1, dn = 2d3 + 2 = 2n + 4, and d′n−1 = d′n = 2d3 − 1 = 2n + 1. Therefore,

S2 = 3/((2n + 1)(2n + 4))
di =  d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
      d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 2d2 − n
      d3 + i − 1   if 2d2 − n + 1 ≤ i ≤ 3d1 − n − 1
      d3 + i       if 3d1 − n ≤ i ≤ 3d2 − n − 2 = n − 3
      d3 + i + 1   if i = n − 2

d′i = d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
      d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 2d2 − n + 2
      d3 + i − 1   if 2d2 − n + 3 ≤ i ≤ 3d1 − n − 1
      d3 + i       if 3d1 − n ≤ i ≤ n − 2
Based on the above expressions, the cases when di and d′i take different values are listed in
table B-37.
  i     2d2 − n + 1   2d2 − n + 2   n − 2
  di    2d2 + 1       2d2 + 2       2n
  d′i   2d2           2d2 + 1       2n − 1

S1 = 1/(2d2(d2 + 1)) + 1/(2n(2n − 1))

U(d2) − U(d2 + 1) = 1/(d2(d2 + 1)) − 1/(2d2(d2 + 1)) − 1/(2n(2n − 1)) − 3/((2n + 1)(2n + 4)) > 0.
di =  d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
      d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 2d2 − n
      d3 + i − 1   if 2d2 − n + 1 ≤ i ≤ 3d1 − n − 1
      d3 + i       if 3d1 − n ≤ i ≤ n − 2

d′i = d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
      d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 2d2 − n + 2
      d3 + i − 1   if 2d2 − n + 3 ≤ i ≤ 3d1 − n − 1
      d3 + i       if 3d1 − n ≤ i ≤ n − 2
Based on the above expressions, the cases when di and d′i take different values are listed in
table B-38.
  i     2d2 − n + 1   2d2 − n + 2
  di    2d2 + 1       2d2 + 2
  d′i   2d2           2d2 + 1
S1 = 1/(2d2(d2 + 1))
1. d2 = 2n/3.

   r1 ≥ (d1 + 6)/2, r2 = (n + 3)/3 = (d2 + 2)/2, d′2 = d2 + 1 = (2n + 3)/3, and r′2 = n/3 = (d′2 − 1)/2. By tables B-29 and B-31 in claim 7, dn−1 = 2d3 − 1 = 2n + 1, dn = 2d3 + 2 = 2n + 4, and d′n−1 = d′n = 2d3 − 1 = 2n + 1. Therefore,

   S2 = 3/((2n + 1)(2n + 4))

   U(d2) − U(d2 + 1) = 1/(d2(d2 + 1)) − 1/(2d2(d2 + 1)) − 3/((2n + 1)(2n + 4)) > 0.
2. d2 = (2n + 1)/3.

   r1 ≥ (d1 + 6)/2, r2 = (n + 2)/3 = (d2 + 1)/2, d′2 = d2 + 1 = (2n + 4)/3, and r′2 = (n − 1)/3 = (d′2 − 2)/2. By tables B-29 and B-31 in claim 7, dn−1 = 2d3 − 2 = 2n, dn = 2d3 + 2 = 2n + 4, and d′n−1 = d′n = 2d3 − 1 = 2n + 1. Therefore,

   S2 = 2/(2n + 1) − 1/(2n + 4) − 1/(2n)

   U(d2) − U(d2 + 1) = 1/(d2(d2 + 1)) − 1/(2d2(d2 + 1)) − ( 2/(2n + 1) − 1/(2n + 4) − 1/(2n) ) > 0.
3. d2 = (2n + 2)/3.

   r1 ≥ (d1 + 6)/2, r2 = (n + 1)/3 = d2/2, d′2 = d2 + 1 = (2n + 5)/3, and r′2 = (n − 2)/3 = (d′2 − 3)/2. By table B-29 in claim 7, dn−1 = dn = 2d3 − 1 = 2n + 1 and d′n−1 = d′n = 2d3 − 1 = 2n + 1. Therefore, S2 = 0.

   U(d2) − U(d2 + 1) = 1/(d2(d2 + 1)) − 1/(2d2(d2 + 1)) − 0 > 0.
Proof. r1 ≥ (n + 2)/3 ≥ (d1 + 4)/2, 4 ≤ r2 ≤ n/3 < d2/2, and 3 ≤ r′2 = r2 − 1 < d2/2. By table B-29 in claim 7, dn = d′n and dn−1 = d′n−1. Therefore, S2 = 0.
1. d1 ≤ (2d2 − 2)/3.

   di =  d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
         d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 3d1 − n
         d3 + i − 1   if 3d1 − n + 1 ≤ i ≤ 2d2 − n − 1
         d3 + i       if 2d2 − n ≤ i ≤ n − 2

   d′i = d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
         d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 3d1 − n
         d3 + i − 1   if 3d1 − n + 1 ≤ i ≤ 2d2 − n + 1
         d3 + i       if 2d2 − n + 2 ≤ i ≤ n − 2

   S1 = 1/(2d2) − 1/(2d2 + 2) = 1/(2d2(d2 + 1))

   U(d2) − U(d2 + 1) = 1/(d2(d2 + 1)) − 1/(2d2(d2 + 1)) − 0 > 0.
2. d1 = (2d2 − 1)/3.

   di =  d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
         d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 3d1 − n = 2d2 − n − 1
         d3 + i       if 2d2 − n ≤ i ≤ n − 2

   d′i = d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
         d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 3d1 − n = 2d2 − n − 1
         d3 + i − 1   if i = 2d2 − n or 2d2 − n + 1
         d3 + i       if 2d2 − n + 2 ≤ i ≤ n − 2

   S1 = 1/(2d2(d2 + 1))

   U(d2) − U(d2 + 1) = 1/(d2(d2 + 1)) − 1/(2d2(d2 + 1)) − 0 > 0.
3. d1 = 2d2/3.

   di =  d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
         d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 2d2 − n
         d3 + i       if 2d2 − n + 1 ≤ i ≤ n − 2

   d′i = d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
         d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 3d1 − n = 2d2 − n
         d3 + i − 1   if i = 2d2 − n + 1
         d3 + i       if 2d2 − n + 2 ≤ i ≤ n − 2

   S1 = 1/(2(2d2 + 1)(d2 + 1))

   U(d2) − U(d2 + 1) = 1/(d2(d2 + 1)) − 1/(2(2d2 + 1)(d2 + 1)) − 0 > 0.
4. d1 = (2d2 + 1)/3.

   di =  d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
         d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 2d2 − n
         d3 + i       if 2d2 − n + 1 ≤ i ≤ n − 2

   d′i = d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
         d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 2d2 − n + 1
         d3 + i       if 2d2 − n + 2 ≤ i ≤ n − 2

   S1 = 1/(2d2(d2 + 1))

   U(d2) − U(d2 + 1) = 1/(d2(d2 + 1)) − 1/(2d2(d2 + 1)) − 0 > 0.
5. d1 = (2d2 + 2)/3.

   di =  d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
         d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 2d2 − n
         d3 + i − 1   if i = 2d2 − n + 1
         d3 + i       if 2d2 − n + 2 ≤ i ≤ n − 2

   If d2 = n − 2,

   d′i = d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
         d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 2d2 − n + 2 = n − 2

   Otherwise,

   d′i = d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
         d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 2d2 − n + 2
         d3 + i       if 2d2 − n + 3 ≤ i ≤ n − 2

   S1 = 1/(2d2) − 1/(2d2 + 3) = 3/(2d2(2d2 + 3))

   U(d2) − U(d2 + 1) = 1/(d2(d2 + 1)) − 3/(2d2(2d2 + 3)) − 0 > 0.
6. d1 = (2d2 + 3)/3.

   di =  d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
         d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 2d2 − n
         d3 + i − 1   if i = 2d2 − n + 1 or 2d2 − n + 2
         d3 + i       if 2d2 − n + 3 ≤ i ≤ n − 2

   d′i = d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
         d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 2d2 − n + 2
         d3 + i       if 2d2 − n + 3 ≤ i ≤ n − 2

   If i = 2d2 − n + 1, then di = n + 1 + 2d2 − n + 1 − 1 = 2d2 + 1 and d′i = di − 1 = 2d2.

   S1 = 1/(2d2) − 1/(2d2 + 2) = 1/(2d2(d2 + 1))

   U(d2) − U(d2 + 1) = 1/(d2(d2 + 1)) − 1/(2d2(d2 + 1)) − 0 > 0
7. d1 ≥ (2d2 + 4)/3.

   di =  d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
         d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 2d2 − n
         d3 + i − 1   if 2d2 − n + 1 ≤ i ≤ 3d1 − n − 1
         d3 + i       if 3d1 − n ≤ i ≤ n − 2

   d′i = d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
         d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 2d2 − n + 2
         d3 + i − 1   if 2d2 − n + 3 ≤ i ≤ 3d1 − n − 1
         d3 + i       if 3d1 − n ≤ i ≤ n − 2

   S1 = 1/(2d2) − 1/(2d2 + 2)

   U(d2) − U(d2 + 1) = 1/(d2(d2 + 1)) − ( 1/(2d2) − 1/(2d2 + 2) ) − 0 > 0.
Proof. r1 ≥ (n + 2)/3 > d1/2, 4 ≤ r2 ≤ n/3 < d2/2, and 3 ≤ r′2 = r2 − 1 < d′2/2. By table B-29 in claim 7, dn = d′n and dn−1 = d′n−1. Therefore, S2 = 0.
di =  d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
      d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 2d2 − n
      d3 + i − 1   if 2d2 − n + 1 ≤ i ≤ n − 2

d′i = d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
      d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 2d2 − n + 2
      d3 + i − 1   if 2d2 − n + 3 ≤ i ≤ n − 2

S1 = 1/(2d2) − 1/(2d2 + 2)

U(d2) − U(d2 + 1) = 1/(d2(d2 + 1)) − ( 1/(2d2) − 1/(2d2 + 2) ) − 0 > 0.
Proof. r1 ≥ (d1 + 3)/2, r2 = 3, d′2 = d2 + 1 = n − 1, and r′2 = 2. By claim 7, dn = dn−1 = 2d3 − 1 = 2n + 1 and d′n = d′n−1 = 2d3 − 1 = 2n + 1. Therefore, S2 = 0.
di =  d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
      d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 2d2 − n = n − 4
      d3 + i − 1   if i = n − 3 or n − 2

d′i = d3 + i − 3   if 3 ≤ i ≤ 2d1 − n + 1
      d3 + i − 2   if 2d1 − n + 2 ≤ i ≤ 2d2 − n + 2 = n − 2
Based on the above expressions, the cases when di and d′i take different values are listed in
table B-39.
  i     n − 3    n − 2
  di    2n − 3   2n − 2
  d′i   2n − 4   2n − 3

S1 = 1/(2(n − 1)(n − 2))

U(d2) − U(d2 + 1) = 1/(d2(d2 + 1)) − 1/(2(n − 1)(n − 2)) − 0 > 0.
Proof. r1 ≥ (d1 + 4)/2, r2 = 2, and r′2 = 1. By table B-29 in claim 7, dn−1 = dn = 2d3 − 1 = 2n + 1, d′n−1 = 2d3 − 3 = 2n − 1, and d′n = 2d3 = 2n + 2. Therefore,

S2 = ( 1/(2n − 1) + 1/(2n + 2) ) − 2/(2n + 1) = (4n + 1)/((2n − 1)(2n + 2)) − 2/(2n + 1)
1. d1 ≤ 2n−4
3 .
⎧d3 + i − 3 if 3 ≤ i ≤ 2d1 − n + 1
⎪d + i − 2 if 2d1 − n + 2 ≤ i ≤ 3d1 − n
⎪
di = ⎨ 3
⎪d3 + i − 1 if 3d1 − n + 1 ≤ i ≤ 2d 2 − n − 1 = n − 3
⎪⎩d3 + i if i = n − 2
\[
d_i'=\begin{cases}
d_3+i-3 & \text{if } 3\le i\le 2d_1-n+1\\
d_3+i-2 & \text{if } 2d_1-n+2\le i\le 3d_1-n\\
d_3+i-1 & \text{if } 3d_1-n+1\le i\le n-2
\end{cases}
\]
\[
S_1=\frac{1}{2(n-1)(2n-1)}.
\]
\[
U(d_2)-U(d_2+1)=\frac{1}{d_2(d_2+1)}-\frac{1}{2(n-1)(2n-1)}-\Bigl(\frac{4n+1}{(2n-1)(2n+2)}-\frac{2}{2n+1}\Bigr)>0.
\]
2. d1 = (2n−3)/3.
\[
d_i=\begin{cases}
d_3+i-3 & \text{if } 3\le i\le 2d_1-n+1\\
d_3+i-2 & \text{if } 2d_1-n+2\le i\le 3d_1-n=n-3\\
d_3+i & \text{if } i=n-2
\end{cases}
\]
\[
d_i'=\begin{cases}
d_3+i-3 & \text{if } 3\le i\le 2d_1-n+1\\
d_3+i-2 & \text{if } 2d_1-n+2\le i\le 3d_1-n=n-3\\
d_3+i-1 & \text{if } i=n-2
\end{cases}
\]
\[
U(d_2)-U(d_2+1)=\frac{1}{d_2(d_2+1)}-\frac{1}{2(n-1)(2n-1)}-\Bigl(\frac{4n+1}{(2n-1)(2n+2)}-\frac{2}{2n+1}\Bigr)>0.
\]
Proof. r1 = r2 = (n+6)/3 = (d2+5)/2, d′2 = d2 + 1 = 2n/3, and r′2 = (n+3)/3 = (d′2+2)/2. By claim 7, dn-1 = 2d3 − 1 = 2n + 1, dn = 2d3 + 2 = 2n + 4, d′n-1 = 2d3 − 1 = 2n + 1, and d′n = 2d3 + 2 = 2n + 4. Therefore, S2 = 0.
\[
d_i=\begin{cases}
d_3+i-3 & \text{if } i\le 2d_2-n+1\\
d_3+i-1 & \text{if } 2d_2-n+2\le i\le 3d_2-n-1\\
d_3+i+1 & \text{if } i=n-3 \text{ or } n-2
\end{cases}
\]
\[
d_i'=\begin{cases}
d_3+i-3 & \text{if } i\le 2d_2-n+1\\
d_3+i-2 & \text{if } i=2d_2-n+2\\
d_3+i-1 & \text{if } 2d_2-n+3\le i\le 3d_2-n-1=n-4\\
d_3+i & \text{if } i=n-3 \text{ or } n-2
\end{cases}
\]
Based on the above expressions, the cases when di and d′i take different values are listed in
table B-40.
i 2d2 – n + 2 n -3 n -2
di 2d2 + 2 2n-1 2n
d′i 2d2+1 2n-2 2n-1
\[
S_1=\Bigl(\frac{1}{2d_2+1}+\frac{1}{2n-2}\Bigr)-\Bigl(\frac{1}{2d_2+2}+\frac{1}{2n}\Bigr)=\frac{1}{(2d_2+1)(2d_2+2)}+\frac{1}{2n(n-1)}.
\]
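Written column by column from table B-40 (S1 being the sum of 1/d′i − 1/di over the differing columns), the sum telescopes; this worked expansion is added only to make the cancellation explicit:

```latex
S_1=\Bigl(\frac{1}{2d_2+1}-\frac{1}{2d_2+2}\Bigr)
   +\Bigl(\frac{1}{2n-2}-\frac{1}{2n-1}\Bigr)
   +\Bigl(\frac{1}{2n-1}-\frac{1}{2n}\Bigr)
  =\frac{1}{(2d_2+1)(2d_2+2)}+\frac{1}{2n(n-1)}
```

The two 1/(2n−1) contributions cancel, and the last step uses 1/(2n−2) − 1/(2n) = 2/((2n−2)(2n)) = 1/(2n(n−1)).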
\[
U(d_2)-U(d_2+1)=\frac{1}{d_2(d_2+1)}-\Bigl(\frac{1}{(2d_2+1)(2d_2+2)}+\frac{1}{2n(n-1)}\Bigr)-0>0.
\]
Lemma 17: U(d2) ≥ U(d2+1) when d1 = (2n−3)/3 or (2n−2)/3 and d2 = d1 + 1.
\[
d_i=\begin{cases}
d_3+i-3 & \text{if } 3\le i\le 2d_1-n+1\\
d_3+i-2 & \text{if } i=2d_1-n+2\\
d_3+i-1 & \text{if } 2d_1-n+3=2d_2-n+1\le i\le 3d_1-n-1=n-4\\
d_3+i & \text{if } i=n-3 \text{ or } n-2
\end{cases}
\]
\[
d_i'=\begin{cases}
d_3+i-3 & \text{if } 3\le i\le 2d_1-n+1\\
d_3+i-2 & \text{if } 2d_1-n+2\le i\le 2d_2-n+2\\
d_3+i-1 & \text{if } 2d_2-n+3\le i\le 3d_1-n-1=n-4\\
d_3+i & \text{if } i=n-3 \text{ or } n-2
\end{cases}
\]
Based on the above expressions, the cases when di and d′i take different values are listed in
table B-41.
i 2d2 – n + 1 2d2 – n + 2
di 2d2 + 1 2d2 + 2
d′i 2d2 2d2+ 1
\[
S_1=\Bigl(\frac{1}{2d_2}+\frac{1}{2d_2+1}\Bigr)-\Bigl(\frac{1}{2d_2+1}+\frac{1}{2d_2+2}\Bigr)=\frac{1}{2d_2(d_2+1)}.
\]
1. d1 = (2n−3)/3 and d2 = 2n/3.
r1 = (n+6)/3 = (d1+5)/2, r2 = (n+3)/3 = (d2+2)/2, d′2 = d2 + 1 = (2n+3)/3, and r′2 = n/3 = (d′2−1)/2. By tables B-31 and B-29 in claim 7, dn-1 = 2d3 − 1 = 2n + 1, dn = 2d3 + 2 = 2n + 4, and d′n-1 = d′n = 2d3 − 1 = 2n + 1. Therefore,
\[
S_2=\frac{2}{2n+1}-\Bigl(\frac{1}{2n+1}+\frac{1}{2n+4}\Bigr)=\frac{3}{2(2n+1)(n+2)}
\]
\[
U(d_2)-U(d_2+1)=\frac{1}{d_2(d_2+1)}-\frac{1}{2d_2(d_2+1)}-\frac{3}{(2n+1)(2n+4)}>0.
\]
2. d1 = (2n−2)/3 and d2 = (2n+1)/3.
r1 = (n+5)/3 = (d1+4)/2, r2 = (n+2)/3 = (d2+1)/2, d′2 = d2 + 1 = (2n+4)/3, and r′2 = (n−1)/3 = (d′2−2)/2. By table B-31 in claim 7, dn-1 = 2d3 − 1 = 2n + 1, dn = 2d3 + 2 = 2n + 4, d′n-1 = 2d3 − 2 = 2n, and d′n = 2d3 + 2 = 2n + 4. Therefore,
\[
S_2=\Bigl(\frac{1}{2n}+\frac{1}{2n+4}\Bigr)-\Bigl(\frac{1}{2n+1}+\frac{1}{2n+4}\Bigr)=\frac{1}{2n(2n+1)}
\]
\[
U(d_2)-U(d_2+1)=\frac{1}{d_2(d_2+1)}-\frac{1}{2d_2(d_2+1)}-\frac{1}{2n(2n+1)}>0.
\]
Proof. r1 = r2 = (n+5)/3 = (d2+4)/2, d′2 = d2 + 1 = (2n+1)/3, and r′2 = (n+2)/3 = (d′2+1)/2. By table B-31 in claim 7, dn-1 = 2d3 − 1 = 2n + 1, dn = 2d3 + 2 = 2n + 4, d′n-1 = 2d3 − 2 = 2n, and d′n = 2d3 + 2 = 2n + 4. Therefore,
\[
S_2=\Bigl(\frac{1}{2n}+\frac{1}{2n+4}\Bigr)-\Bigl(\frac{1}{2n+1}+\frac{1}{2n+4}\Bigr)=\frac{1}{2n(2n+1)}
\]
\[
d_i=\begin{cases}
d_3+i-3 & \text{if } i\le 2d_2-n+1\\
d_3+i-1 & \text{if } 2d_2-n+2\le i\le 3d_2-n-1\\
d_3+i+1 & \text{if } i=n-2
\end{cases}
\]
\[
d_i'=\begin{cases}
d_3+i-3 & \text{if } i\le 2d_2-n+1\\
d_3+i-2 & \text{if } i=2d_2-n+2\\
d_3+i-1 & \text{if } 2d_2-n+3\le i\le 3d_2-n-1=n-3\\
d_3+i & \text{if } i=n-2
\end{cases}
\]
Based on the above expressions, the cases when di and d′i take different values are listed in
table B-42.
i 2d2 – n + 2 n-2
di 2d2 + 2 2n
d′i 2d2+ 1 2n -1
\[
S_1=\Bigl(\frac{1}{2d_2+1}+\frac{1}{2n-1}\Bigr)-\Bigl(\frac{1}{2d_2+2}+\frac{1}{2n}\Bigr)=\frac{1}{(2d_2+1)(2d_2+2)}+\frac{1}{2n(2n-1)}
\]
\[
U(d_2)-U(d_2+1)=\frac{1}{d_2(d_2+1)}-\Bigl(\frac{1}{(2d_2+1)(2d_2+2)}+\frac{1}{2n(2n-1)}\Bigr)-\frac{1}{2n(2n+1)}>0.
\]
Proof.
\[
d_i=\begin{cases}
d_3+i-3 & \text{if } 3\le i\le 2d_1-n+1\\
d_3+i-2 & \text{if } 2d_1-n+2\le i\le n-2
\end{cases}
\]
\[
d_i'=\begin{cases}
d_3+i-3 & \text{if } 3\le i\le 2d_1-n+1\\
d_3+i-2 & \text{if } 2d_1-n+2\le i\le n-2
\end{cases}
\]
S1 = 0
1. d1 = (2n−2)/3. r1 = (d1+4)/2, r2 = 2, and r′2 = 1. By tables B-29 and B-30 in claim 7, one can get table B-43.
\[
S_2=\frac{4n+1}{(2n-1)(2n+2)}-\frac{2}{2n+1}
\]
\[
U(d_2)-U(d_2+1)=\frac{1}{d_2(d_2+1)}-\Bigl(\frac{4n+1}{(2n-1)(2n+2)}-\frac{2}{2n+1}\Bigr)-0>0.
\]
2. d1 = (2n−1)/3. r1 = (d1+3)/2, r2 = 2, and r′2 = 1. By tables B-29 and B-30 in claim 7, one can get table B-44.
\[
S_2=\frac{n}{(n-1)(n+1)}-\frac{2}{2n+1}
\]
\[
U(d_2)-U(d_2+1)=\frac{1}{d_2(d_2+1)}-\Bigl(\frac{n}{(n-1)(n+1)}-\frac{2}{2n+1}\Bigr)>0.
\]
3. d1 = 2n/3. r1 = (d1+2)/2, r2 = 2, and r′2 = 1. By tables B-29 and B-30 in claim 7, one can get table B-45.
\[
S_2=\frac{3}{(2n-1)(2n+2)}
\]
\[
U(d_2)-U(d_2+1)=\frac{1}{d_2(d_2+1)}-\frac{3}{(2n-1)(2n+2)}-0>0.
\]
4. d1 = (2n+1)/3. r1 = (d1+1)/2, r2 = 2, and r′2 = 1. By tables B-29 and B-30 in claim 7, one can get table B-46.
\[
S_2=\frac{1}{n(2n-1)}
\]
\[
U(d_2)-U(d_2+1)=\frac{1}{d_2(d_2+1)}-\frac{1}{n(2n-1)}>0.
\]
5. (2n+2)/3 ≤ d1 ≤ n − 2. 3 ≤ r1 ≤ d1/2, r2 = 2, and r′2 = 1. By tables B-29 and B-30 in claim 7, one can get table B-47.
TABLE B-47. dn-1, dn, d′n-1, AND d′n WHEN (2n+2)/3 ≤ d1 ≤ n−2 AND d2 = n−1
\[
S_2=\frac{1}{n(2n-1)}
\]
\[
U(d_2)-U(d_2+1)=\frac{1}{d_2(d_2+1)}-\frac{1}{n(2n-1)}>0.
\]
\[
d_i=\begin{cases}
d_3+i-3 & \text{if } i\le 2d_2-n+1\\
d_3+i-1 & \text{if } 2d_2-n+2\le i\le 3d_2-n-1=n-2
\end{cases}
\]
\[
d_i'=\begin{cases}
d_3+i-3 & \text{if } i\le 2d_2-n+1\\
d_3+i-2 & \text{if } i=2d_2-n+2\\
d_3+i-1 & \text{if } 2d_2-n+3\le i\le 3d_2-n-1=n-2
\end{cases}
\]
\[
S_1=\frac{1}{2d_2+1}-\frac{1}{2d_2+2}=\frac{1}{(2d_2+1)(2d_2+2)}
\]
1. d1 = d2 = (2n−1)/3.
r1 = r2 = (n+4)/3 = (d2+3)/2, d′2 = d2 + 1 = (2n+2)/3, and r′2 = (n+1)/3 = d′2/2. By tables B-29 and B-30 in claim 7, dn-1 = 2d3 − 1 = 2n + 1, dn = 2d3 + 2 = 2n + 4, and d′n-1 = d′n = 2d3 − 1 = 2n + 1. Therefore,
\[
S_2=\frac{2}{2n+1}-\Bigl(\frac{1}{2n+1}+\frac{1}{2n+4}\Bigr)=\frac{3}{(2n+4)(2n+1)}
\]
\[
U(d_2)-U(d_2+1)=\frac{1}{d_2(d_2+1)}-\frac{1}{(2d_2+1)(2d_2+2)}-\frac{3}{(2n+4)(2n+1)}>0.
\]
2. d1 = d2 = 2n/3.
r1 = r2 = (n+3)/3 = (d2+2)/2, d′2 = d2 + 1 = (2n+3)/3, and r′2 = n/3 = (d′2−1)/2. By claim 7, dn-1 = 2d3 − 3 = 2n − 1, dn = 2d3 + 2 = 2n + 4, d′n-1 = 2d3 − 3 = 2n − 1, and d′n = 2d3 = 2n + 2. Therefore,
\[
S_2=\Bigl(\frac{1}{2n-1}+\frac{1}{2n+2}\Bigr)-\Bigl(\frac{1}{2n-1}+\frac{1}{2n+4}\Bigr)=\frac{1}{2(n+2)(n+1)}.
\]
\[
U(d_2)-U(d_2+1)=\frac{1}{d_2(d_2+1)}-\frac{1}{(2d_2+1)(2d_2+2)}-\frac{1}{2(n+2)(n+1)}>0.
\]
3. d1 = d2 = (2n+1)/3.
r1 = r2 = (n+2)/3 = (d2+1)/2, d′2 = d2 + 1 = (2n+4)/3, and r′2 = (n−1)/3 = (d′2−2)/2. By claim 7, dn-1 = dn = 2d3 − 2 = 2n, and d′n-1 = d′n = 2d3 − 2 = 2n. Therefore, S2 = 0.
\[
U(d_2)-U(d_2+1)=\frac{1}{d_2(d_2+1)}-\frac{1}{(2d_2+1)(2d_2+2)}-0>0.
\]
4. d1 = d2 = (2n+2)/3.
r1 = r2 = (n+1)/3 = d2/2, d′2 = d2 + 1 = (2n+5)/3, and r′2 = (n−2)/3 = (d′2−3)/2. By claim 7, dn-1 = dn = 2d3 − 2 = 2n, and d′n-1 = d′n = 2d3 − 2 = 2n. Therefore, S2 = 0.
\[
U(d_2)-U(d_2+1)=\frac{1}{d_2(d_2+1)}-\frac{1}{(2d_2+1)(2d_2+2)}-0>0.
\]
Proof. r1 = (n+4)/3 = (d1+3)/2, r2 = (n+1)/3 = d2/2, d′2 = d2 + 1 = (2n+5)/3, and r′2 = (n−2)/3 = (d′2−3)/2. By claim 7, dn-1 = dn = 2d3 − 1 = 2n + 1, and d′n-1 = d′n = 2d3 − 1 = 2n + 1. Therefore, S2 = 0.
\[
d_i=\begin{cases}
d_3+i-3 & \text{if } 3\le i\le 2d_1-n+1\\
d_3+i-2 & \text{if } i=2d_1-n+2\\
d_3+i-1 & \text{if } 2d_1-n+3=2d_2-n+1\le i\le 3d_1-n-1=n-2
\end{cases}
\]
\[
d_i'=\begin{cases}
d_3+i-3 & \text{if } 3\le i\le 2d_1-n+1\\
d_3+i-2 & \text{if } 2d_1-n+2\le i\le 2d_2-n+2\\
d_3+i-1 & \text{if } 2d_2-n+3\le i\le 3d_1-n-1=n-2
\end{cases}
\]
Based on the above expressions, the cases when di and d′i take different values are listed in table B-48.
i 2d2 – n + 1 2d2 – n + 2
di 2d2 + 1 2d2 + 2
d′i 2d2 2d2+1
\[
S_1=\Bigl(\frac{1}{2d_2}+\frac{1}{2d_2+1}\Bigr)-\Bigl(\frac{1}{2d_2+1}+\frac{1}{2d_2+2}\Bigr)=\frac{1}{2d_2(d_2+1)}.
\]
\[
U(d_2)-U(d_2+1)=\frac{1}{d_2(d_2+1)}-\frac{1}{2d_2(d_2+1)}-0>0.
\]
Proof. r1 = (d1+2)/2, r2 = 3, d′2 = d2 + 1 = n − 1, and r′2 = 2. By table B-29 in claim 7, dn-1 = 2d3 − 3 = 2n − 1, dn = 2d3 = 2n + 2, d′n-1 = 2d3 − 3 = 2n − 1, and d′n = 2d3 = 2n + 2. Therefore, S2 = 0.
\[
d_i=\begin{cases}
d_3+i-3 & \text{if } 3\le i\le 2d_1-n+1\\
d_3+i-2 & \text{if } 2d_1-n+2\le i\le 2d_2-n=n-4\\
d_3+i-1 & \text{if } i=n-3 \text{ or } n-2
\end{cases}
\]
\[
d_i'=\begin{cases}
d_3+i-3 & \text{if } 3\le i\le 2d_1-n+1\\
d_3+i-2 & \text{if } 2d_1-n+2\le i\le 2d_2-n+2=n-2
\end{cases}
\]
Based on the above expressions, the cases when di and d′i take different values are listed in table B-49.
i n -3 n -2
di 2n -3 2n -2
d′i 2n -4 2n -3
\[
S_1=\frac{1}{2(n-1)(n-2)}
\]
\[
U(d_2)-U(d_2+1)=\frac{1}{d_2(d_2+1)}-\frac{1}{2(n-1)(n-2)}-0>0.
\]
Lemma 23: U(d2) ≥ U(d2+1) when (2n+1)/3 ≤ d1 ≤ n−3 and d2 = n−2.
Proof. 4 ≤ r1 ≤ (d1+1)/2, r2 = 3, d′2 = d2 + 1 = n − 1, and r′2 = 2. By table B-29 in claim 7, dn-1 = dn = 2d3 − 2 = 2n and d′n-1 = d′n = 2d3 − 2 = 2n. Therefore, S2 = 0.
\[
d_i=\begin{cases}
d_3+i-3 & \text{if } 3\le i\le 2d_1-n+1\\
d_3+i-2 & \text{if } 2d_1-n+2\le i\le 2d_2-n=n-4\\
d_3+i-1 & \text{if } i=n-3 \text{ or } n-2
\end{cases}
\]
\[
d_i'=\begin{cases}
d_3+i-3 & \text{if } 3\le i\le 2d_1-n+1\\
d_3+i-2 & \text{if } 2d_1-n+2\le i\le 2d_2-n+2=n-2
\end{cases}
\]
Based on the above expressions, the cases when di and d′i take different values are listed in table B-50.
i n -3 n -2
di 2n -3 2n -2
d′i 2n -4 2n -3
\[
S_1=\frac{1}{2(n-1)(n-2)}
\]
\[
U(d_2)-U(d_2+1)=\frac{1}{d_2(d_2+1)}-\frac{1}{2(n-1)(n-2)}-0>0.
\]
Proof. It can be shown that r1 = (n+1)/3 = d1/2, r2 < d2/2, and r′2 < d′2/2. By claim 7, dn-1 = dn = 2d3 − 2 = 2n and d′n-1 = d′n = 2d3 − 2 = 2n. Therefore, S2 = 0.
\[
d_i=\begin{cases}
d_3+i-3 & \text{if } 3\le i\le 2d_1-n+1\\
d_3+i-2 & \text{if } 2d_1-n+2\le i\le 2d_2-n\\
d_3+i-1 & \text{if } 2d_2-n+1\le i\le n-2
\end{cases}
\]
\[
d_i'=\begin{cases}
d_3+i-3 & \text{if } 3\le i\le 2d_1-n+1\\
d_3+i-2 & \text{if } 2d_1-n+2\le i\le 2d_2-n+2\\
d_3+i-1 & \text{if } 2d_2-n+3\le i\le n-2
\end{cases}
\]
Based on the above expressions, the cases when di and d′i take different values are listed in table B-51.
i 2d2 – n + 1 2d2 – n + 2
di 2d2 + 1 2d2 + 2
d′i 2d2 2d2+1
\[
S_1=\frac{1}{2d_2(d_2+1)}
\]
\[
U(d_2)-U(d_2+1)=\frac{1}{d_2(d_2+1)}-\frac{1}{2d_2(d_2+1)}-0>0.
\]
\[
d_i=\begin{cases}
d_3+i-3 & \text{if } 3\le i\le 2d_1-n+1\\
d_3+i-2 & \text{if } 2d_1-n+2\le i\le 2d_2-n\\
d_3+i-1 & \text{if } 2d_2-n+1\le i\le n-2
\end{cases}
\]
\[
d_i'=\begin{cases}
d_3+i-3 & \text{if } 3\le i\le 2d_1-n+1\\
d_3+i-2 & \text{if } 2d_1-n+2\le i\le 2d_2+2-n\\
d_3+i-1 & \text{if } 2d_2-n+3\le i\le n-2
\end{cases}
\]
\[
S_1=\frac{1}{2d_2}-\frac{1}{2d_2+2}
\]
\[
U(d_2)-U(d_2+1)=\frac{1}{d_2(d_2+1)}-\Bigl(\frac{1}{2d_2}-\frac{1}{2d_2+2}\Bigr)-0>0.
\]
Now, consider the case where d1 = d2.
\[
d_i=\begin{cases}
d_3+i-3 & \text{if } 3\le i\le 2d_1-n+1\\
d_3+i-1 & \text{if } 2d_1-n+2\le i\le n-2
\end{cases}
\]
\[
d_i'=\begin{cases}
d_3+i-3 & \text{if } 3\le i\le 2d_1-n+1\\
d_3+i-2 & \text{if } i=2d_1-n+2\\
d_3+i-1 & \text{if } 2d_1-n+3\le i\le n-2
\end{cases}
\]
\[
S_1=\frac{1}{2d_1+1}-\frac{1}{2d_1+2}=\frac{1}{(2d_2+1)(2d_2+2)}
\]
\[
U(d_2)-U(d_2+1)=\frac{1}{d_2(d_2+1)}-\frac{1}{(2d_2+1)(2d_2+2)}-0>0.
\]
Proof. r1 = r2 = 3 and r′2 = 2. By claim 7, dn = dn-1 = 2d3-2 = 2n and d′n = d′n-1 = 2d3 -2 = 2n.
Therefore, S2 = 0.
\[
d_i=\begin{cases}
d_3+i-3 & \text{if } 3\le i\le 2d_1-n+1=n-3\\
d_3+i-1 & \text{if } i=n-2
\end{cases}
\]
\[
d_i'=\begin{cases}
d_3+i-3 & \text{if } 3\le i\le 2d_1-n+1=n-3\\
d_3+i-2 & \text{if } i=n-2
\end{cases}
\]
\[
S_1=\frac{1}{2(n-1)(2n-3)}.
\]
\[
U(d_2)-U(d_2+1)=\frac{1}{d_2(d_2+1)}-\frac{1}{2(n-1)(2n-3)}-0>0.
\]
Lemma 27: U(d2) ≥ U(d2+1) when d1 = d2 = n-1.
Proof. r1 = 3, r2 = 2 and r′2 =1. The values of dn-1, dn , d′n-1, and d′n are listed in table B-52.
\[
S_2=\frac{1}{2n(2n+1)}.
\]
\[
U(d_2)-U(d_2+1)=\frac{1}{d_2(d_2+1)}-0-\frac{1}{2n(2n+1)}>0.
\]
The plan was to study the same problem as in section B.5 under the assumption that each processor has m priority levels. Unfortunately, because of limited time, no significant progress was made. It was, however, conjectured that the same threshold holds for computing systems with m priority levels, where m < n.
APPENDIX C—THE IMPLEMENTATION OF THE ALGORITHM DM-LPL
/******************
greedy-single.h
*********************/
#ifndef _GREEDY_SINGLE_H_
#define _GREEDY_SINGLE_H_
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <limits.h>
/******************
greedy-single.c
Function:
int greedy_single(int num_task, int num_priority, Task *task_sys)
input:
num_task: number of tasks
num_priority: number of priority levels
task_sys: pointer to the tasks array
output: The priority assignment to tasks
Algorithm:
1. sort the tasks in non-decreasing order of deadline
2. starting with priority level 1 and the first task, repeat for each task:
2-1: try to assign the current task to the current priority level
2-2: if 2-1 fails:
if there is no task assigned to the current priority level,
then the task system is not schedulable; return '-1'
else if there are priority levels available,
increase the current priority level by '1'
else
there are not enough priority levels; return '0'
2-3: if 2-1 succeeds, let the next task be the current task.
******************/
#include "greedy-single.h"
/********************************************************/
int i;
int return_val;
int cur_priority;
int cur_exe_sum;
int cur_first_task;
/**************************************/
/** Sort according to deadline ***/
/**************************************/
/**************************************/
/********** assign priority in a greedy way **********/
/*********************************************/
cur_priority = 1;
/* the task with the smallest deadline in the current level */
cur_first_task = 0 ;
task_sys[cur_first_task].priority = cur_priority;
/* assign the first task with highest priority */
cur_exe_sum = task_sys[cur_first_task].exe ;
/* the total execution time of the tasks in the current priority level */
if(task_sys[i].priority == INFINITY){
/* task i can't be assigned to the current level */
if(i== cur_first_task)
break; /* i is the first task in the current level. So the tasks are not schedulable. */
if(cur_priority < num_priority) { /* try to assign to the next level*/
cur_priority = cur_priority +1;
cur_first_task = i;
cur_exe_sum = 0;
}
else /* not enough priority level */
break;
}
else{ /* assign the next task */
i=i+1;
}
}
return (return_val);
/* comparator for sorting tasks by non-decreasing deadline */
int compare_deadline(const void *task1, const void *task2)
{
return (((Task *)task1)->deadline - ((Task * )task2)->deadline);
}
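The listing above is fragmentary: the Task structure, the sort call, the loop header, and the schedulability test were lost. The sketch below reimplements the same greedy level-assignment loop in self-contained form. The Task fields, the names greedy_single_sketch and by_deadline, and especially the admission test (a task joins the current level only if the level's total execution time fits within the deadline of the level's first task) are simplifying assumptions for illustration, not the report's exact procedure.

```c
#include <stdlib.h>

/* Hypothetical task record; the report's actual Task layout is not shown. */
typedef struct {
    int exe;      /* worst-case execution time */
    int deadline; /* relative deadline */
    int priority; /* assigned level; 0 means unassigned */
} Task;

/* Sort key: non-decreasing deadline (step 1 of the algorithm). */
static int by_deadline(const void *a, const void *b)
{
    return ((const Task *)a)->deadline - ((const Task *)b)->deadline;
}

/*
 * Greedy level assignment (steps 2-1 to 2-3 of the listing above).
 * Placeholder admission test: the total execution time of a level must
 * not exceed the deadline of the level's first (smallest-deadline) task.
 * Returns 1 on success, 0 if priority levels run out, -1 if unschedulable.
 */
static int greedy_single_sketch(int num_task, int num_priority, Task *task_sys)
{
    int i = 0;
    int cur_priority = 1;
    int cur_first_task = 0; /* task with the smallest deadline in the level */
    int cur_exe_sum = 0;    /* total execution time in the current level */

    qsort(task_sys, (size_t)num_task, sizeof(Task), by_deadline);

    while (i < num_task) {
        if (cur_exe_sum + task_sys[i].exe <= task_sys[cur_first_task].deadline) {
            task_sys[i].priority = cur_priority;  /* 2-1 succeeds */
            cur_exe_sum += task_sys[i].exe;
            i = i + 1;                            /* 2-3: next task */
        } else if (i == cur_first_task) {
            return -1;  /* the task misses its deadline even alone */
        } else if (cur_priority < num_priority) {
            cur_priority = cur_priority + 1;      /* open the next level */
            cur_first_task = i;
            cur_exe_sum = 0;
        } else {
            return 0;   /* not enough priority levels */
        }
    }
    return 1;
}
```

For example, four tasks with (exe, deadline) pairs (1,2), (1,2), (1,5), (2,6) fit into two levels under this test: the first two share level 1, the last two share level 2.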