PLC-Automata: A New Class of Implementable Real-Time Automata

H. Dierks
Abstract
We introduce PLC-automata as a new class of automata which are tailored to deal with real-time properties of programmable logic controllers (PLCs). These devices are often used in industrial practice to solve control problems. Nevertheless, PLC-automata are not restricted to PLCs, but can be seen as a model for all polling systems. A semantics in an appropriate real-time temporal logic (duration calculus) is given, and an implementation schema that fits the semantics is presented in a programming language for PLCs. A case study is used to demonstrate the suitability of this approach. We define several parallel composition operators, and present an alternative semantics in terms of timed automata for which model checkers are available.

Keywords: Real time; Specification; Formal methods; Duration calculus; PLC
1. Introduction
In this paper we propose a language to specify real-time systems that fits both the needs of computer scientists and those of programmers of such systems. Formal specification and verification of real-time systems that are used in practice depend on the communication between the scientist who models the behaviour of the system by formal methods and the programmer who works with it in practice.

This language, which we call "PLC-automata", is motivated by the experience we gained in the UniForM project [21] with an industrial partner. The aim of the project is the development of real-time systems in a workbench using combinations of formal methods. We present a formal semantics that allows formal reasoning and proofs of correctness, using the duration calculus [33] as semantic basis. We also give an implementation of such systems on a particular kind of hardware called programmable logic controllers (PLCs).

(This research was supported by the German Ministry for Education and Research (BMBF) as part of the project UniForM under Grant No. FKZ 01 IS 521 B3 and by the Leibniz Programme of the German Research Council (DFG) under Grant Ol 98/1-1. E-mail address: [email protected] (H. Dierks).)
These PLCs are very often used in practice to implement real-time systems. The reason is that they provide both an automatic polling mechanism and convenient methods to deal with time by explicit timers in their programming languages. Nevertheless, every computer system can be used to implement the proposed language if comparable handling of time and explicit polling are added.

Furthermore, the language can be regarded as a definition of a small but implementable subset of timed automata [2]. See Section 10 for details, where this formalism is used to define an operational semantics for PLC-automata. The main difference between both approaches is the polling concept. Another difference is that PLC-automata react to inputs asynchronously, while timed automata react to inputs synchronously.

Programmable logic controllers (PLCs) are often used in industry to solve real-time problems like railway crossings, traffic control, or production cells. Due to this special application background PLCs have features that make the design of time- and safety-critical systems easier:
• PLCs have input and output channels to which sensors and actuators, respectively, can be connected.
• They behave in a cyclic manner, where every cycle consists of the following phases:
  ◦ Polling all inputs and storing the read values.
  ◦ Computing the new values for the outputs.
  ◦ Updating all outputs.
  The repeated execution of this cycle is managed by the operating system; the only part the programmer has to provide is the computing phase. Thus, PLCs are polling machines realising the typical method of solving time-critical problems in practice (a schematic sketch of this cycle is given after this list).
• Depending on the program and on the number of inputs and outputs there is an upper time bound for a cycle that can be used to calculate the reaction time.
• Convenient standardised libraries are provided to simplify the handling of time.
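To make the cyclic operation concrete, the following Python sketch (our own illustration, not part of the IEC standard) models the scan cycle of such a polling controller; the names read_inputs, compute and write_outputs are hypothetical placeholders, and cycles bounds the simulation of what is in reality an endless loop managed by the operating system.

    # Schematic model of a PLC scan cycle (illustration only).
    def scan_cycle(read_inputs, compute, write_outputs, state):
        inputs = read_inputs()                     # phase 1: poll all inputs and latch them
        state, outputs = compute(state, inputs)    # phase 2: compute the new output values
        write_outputs(outputs)                     # phase 3: update all outputs
        return state

    def run(read_inputs, compute, write_outputs, state, cycles):
        # The operating system repeats this cycle forever; here it is bounded for simulation.
        for _ in range(cycles):
            state = scan_cycle(read_inputs, compute, write_outputs, state)
        return state

The upper time bound for one execution of scan_cycle corresponds to the cycle bound mentioned above.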
Although these characteristics are quite useful, PLC-programmers have to face the
following problem: If an input signal does not hold for at least the maximum amount
of time needed for a cycle, one cannot be sure that the PLC will ever read this signal.
This problem can be solved either by
• changing the sensors used in the setting or by
• using PLCs that are fast enough.
The decision in which way the problem should be solved depends on availability and
costs of both faster PLCs and sensors that assure longer lasting signals.
Another important feature of PLCs is that they can be coupled: the output of one PLC can be the input of another PLC. In fact, their operating systems do not differentiate between a sensor's input and a PLC's input, nor between an output to actuators and an output to other PLCs. Thus, the programmer is again obliged to consider how long an output signal from one PLC will be held and how long it must be held to make sure that it has been noticed by the other PLC. In physically distributed applications several buses are in use to connect PLCs. They introduce delays which have to be considered in the parallel composition of PLCs.
Note that these considerations are an advantage of using PLCs. They oblige the
programmer to check both the sensor and cycle time, which makes the assumptions
concerning the hardware explicit.
In this section we propose a formalism which is designed for both the needs of computer scientists and those of engineers programming PLCs. These engineers, often electrical engineers by training, are used to developing PLC programs in assembler-like languages or languages that are closely related to circuit diagrams.

In the UniForM project [21] we found that automaton-like pictures can serve as a common basis for computer scientists and engineers, because the engineers gave them a semantics suitable for PLCs in an intuitive way. This motivated us to formalise these pictures and to define a formal semantics for them in a suitable temporal logic. On the one hand, this allows formal reasoning; on the other hand, it respects the behaviour of PLCs and the intuitive semantics given by the programmers.

In a railway case study of the UniForM project we are dealing with problems like the following one:

Example 1. Consider a train detecting sensor that signals "tr" (train) if a train is approaching and "no tr" (no train) if not. Unfortunately, the sensor can stutter for up to 4 s after a train has passed the sensor. Assume that the temporal distance between two subsequent trains is at least 6 s. Develop a system that filters the stuttering.

Thus, counting the trains that have passed the sensor simply means counting the changes from "N" to "T". The stuttering is filtered by ignoring the input for 5 s. Note that we have to assume an upper bound for the cycle time in order to detect subsequent trains correctly. A semantics of these automata should enable us to calculate this upper bound. This sort of automaton and the informal description of its behaviour is a result of our discussions with industrial experts.
A more sophisticated problem is:

Example 2. Consider the train detecting sensor given in Example 1 again, but now let it be equipped with a watchdog that detects failures of the device by the signal "Error". The task is now to filter the stuttering and to recognise such errors as soon as possible.

An "Error" signal may occur shortly after a change from "N" to "T". In this case the automaton would not change to "X" immediately because it is required to stay for at least 5 s in "T".

To solve this problem we introduce, for each state of the automaton, a set of inputs that receives a special treatment: the informal meaning of a state equipped with a delay time t and a set A of inputs is that inputs contained in A are ignored for the first t seconds after entering this state. Inputs outside A are never ignored, i.e. they force the automaton to react immediately. Fig. 4 shows how this extension can be used to solve Example 2. It behaves in the same way as the automaton of Fig. 1 provided that no "Error" signal occurs. If an "Error" occurs, the automaton of Fig. 4 changes to state "X", regardless of which state it was in before and how long it had been there.
In summary, we define an automaton-like structure extended by some components. The additional components are needed to model PLC behaviour and to enrich the language for dealing with real-time aspects. The cycle time ε represents the upper bound for a cycle of a PLC and enables us to model this cycle in the semantics. The functions S_t and S_e attach to each state of A a delay time and a set of inputs. We want the automaton to remain in state q for at least S_t(q) seconds ("t" stands for "time delay") provided that only inputs in S_e(q) are read ("e" stands for "expected inputs"). E.g. in Fig. 4 we want the system to hold output "T" for at least 5 s provided that only inputs in {no tr, tr} are read. In other words, inputs in S_e(q) are ignored for the first S_t(q) seconds.

An equivalent description is that the state q is held for S_t(q) seconds, but if during this period an input in Σ ∖ S_e(q) is read, the automaton will react within one cycle. Note that an input lasting only very shortly need not be noticed. That means that an input can either hold long enough and be read (e.g. the second "no tr" in Fig. 2) or hold only briefly and not be read (e.g. the first "tr" in Fig. 2).
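To illustrate the components just introduced, the following Python sketch (our own reconstruction, not taken from the paper) represents a PLC-automaton as a record and instantiates the train filter with an error state in the spirit of Fig. 4; the concrete transition function and the cycle bound of 0.2 s are assumptions made only for this example.

    from dataclasses import dataclass
    from typing import Callable, Dict, FrozenSet

    @dataclass(frozen=True)
    class PLCAutomaton:
        states: FrozenSet[str]
        inputs: FrozenSet[str]
        delta: Callable[[str, str], str]    # transition function
        q0: str                             # initial state
        epsilon: float                      # upper bound of a cycle (seconds)
        S_e: Dict[str, FrozenSet[str]]      # inputs ignored during the delay of a state
        S_t: Dict[str, float]               # delay time of a state (seconds)
        omega: Dict[str, str]               # output attached to each state

    def delta(q: str, a: str) -> str:
        # plausible reading of Fig. 4: Error always leads to X and X is never left;
        # otherwise tr leads to T and no_tr leads to N
        if q == "X" or a == "Error":
            return "X"
        return "T" if a == "tr" else "N"

    train_filter = PLCAutomaton(
        states=frozenset({"N", "T", "X"}),
        inputs=frozenset({"no_tr", "tr", "Error"}),
        delta=delta,
        q0="N",
        epsilon=0.2,                        # assumed cycle bound
        S_e={"N": frozenset(), "T": frozenset({"no_tr", "tr"}), "X": frozenset()},
        S_t={"N": 0.0, "T": 5.0, "X": 0.0},
        omega={"N": "N", "T": "T", "X": "X"},
    )

Here omega also fixes the output alphabet; in state T all expected inputs are ignored for the first 5 s, while an Error forces an immediate reaction.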
PLC-automata look similar to timed automata [2], but the details are different. In our approach we deal with reaction times; this is made precise in the semantics defined in Section 5. In Section 10 we will give an alternative timed automaton semantics for PLC-automata, and in this semantics we have to make the asynchronous behaviour of PLC-automata and the reaction times explicit, because timed automata represent a synchronous approach without reaction times.

In this paper we use the duration calculus (abbreviated DC), a dense-time interval temporal logic developed by Zhou Chaochen and others [33, 27, 16], as the predicate language to describe properties of real-time systems. This choice is mostly motivated by our previous experience and acquired fluency in this logic, but also by the convenience with which the interval and continuous time aspects of DC allow us to express and reason about reaction times of components and durations of states.
4.1. Motivation

We consider the gas burner case study of the ProCoS project to illustrate the usage of DC as a high-level specification language. The gas burner [29] is triggered by a thermostat; it can directly control a gas valve and monitor the flame (Fig. 5).

This physical system is modelled by three Boolean observables: "hr" (heat request) represents the state of the thermostat, "fl" (flame) represents the presence of a flame at the gas valve, and "gas" represents the state of the gas valve. One of the top-level requirements is that
in every period of at most 30 s, gas must not leak for more than 4 s.

This is expressed by the following DC formula:

    ℓ ≤ 30 ⇒ ∫(gas ∧ ¬fl) ≤ 4        (1)

Here the ∫-operator accumulates all durations of leaks (modelled by the assertion gas ∧ ¬fl) over a given interval. Hence, (1) can be read as follows: for every interval of length at most 30 s (ℓ ≤ 30) the sum of all leak durations within that interval is at most 4 s (∫(gas ∧ ¬fl) ≤ 4).
The ∫-operator is the main advantage of DC because it allows us to reason about the sum of specific durations, which is not possible in other real-time logics like TCTL for timed automata [1]. With this operator it is not difficult to specify properties like quasi-fairness or mutual exclusion of processes. For example, quasi-fairness of two processes P1 and P2, which want to enter critical sections cs1 and cs2, is specified by

    ℓ ≥ 10 ⇒ | ∫(P1 = cs1) − ∫(P2 = cs2) | ≤ ℓ/10.

This formula forbids that one process occupies its critical section substantially longer (i.e. by more than 10%) than the other one during an interval of at least 10 s. It says that in every interval of length at least 10 s (ℓ ≥ 10) the distance (| … − … |) between the occupation times of P1 and P2 (∫(Pi = csi), i = 1, 2) is at most ten percent of the interval length (≤ ℓ/10).

Mutual exclusion is simply specified by

    ∫(P1 = cs1 ∧ P2 = cs2) = 0,

which means that the accumulated duration of simultaneous occupation is 0. Note that this formula allows P1 and P2 to be in their critical sections simultaneously at only finitely many points in every interval. Due to the integration, finitely many points do not play a role for the validity of a DC formula.
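To see how the ∫-operator can be evaluated on a concrete run, the following Python sketch (our own illustration, not part of DC) accumulates the leak duration of a piecewise-constant trace and checks requirement (1) on sampled windows of length 30 s; the trace data are hypothetical.

    # A trace is a list of phases (start, end, gas, flame) with constant values.
    trace = [(0.0, 10.0, False, False),    # hypothetical data: burner off
             (10.0, 12.0, True, False),    # gas on, no flame yet: leaking
             (12.0, 40.0, True, True),     # burning
             (40.0, 41.0, True, False)]    # flame failure: leaking again

    def leak_duration(trace, b, e):
        # accumulated duration of (gas and not flame) within the interval [b, e]
        total = 0.0
        for (s, t, gas, flame) in trace:
            if gas and not flame:
                total += max(0.0, min(t, e) - max(s, b))
        return total

    def requirement_1(trace, end_time, step=0.1):
        # approximate check of (1): every window of length 30 s contains at most 4 s of leak;
        # sampling the window starts approximates the universal quantification over intervals
        b = 0.0
        while b <= end_time:
            if leak_duration(trace, b, min(b + 30.0, end_time)) > 4.0:
                return False
            b += step
        return True

Since every interval of length at most 30 s is contained in some window of length 30 s, checking the longer windows suffices for the accumulated duration.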
4.2. Syntax
Formally, the syntax of duration calculus distinguishes terms, duration terms and duration formulae. Terms have a certain type and are built from time-dependent observables obs like gas or fl, rigid variables x representing time-independent quantities, and are closed under typed operators op:

    θ ::= obs | x | op(θ₁, …, θₙ)

where θ₁, …, θₙ are terms of appropriate types. Note that the operators op include nullary operators, i.e. constants. Terms of Boolean type are called state assertions. We use S, P and occasionally Q for a typical state assertion.

Duration terms are of type real, but their values depend on a given time interval. The simplest duration term is the symbol ℓ denoting the length of the given interval. The name duration calculus stems from the fact that for each state assertion S there is a duration term ∫S measuring the duration of S, i.e. the accumulated time S holds in the given interval. Formally,

    dt ::= ℓ | ∫S | op_real(dt₁, …, dtₙ)
4.3. Semantics
The semantics of duration calculus is based on an interpretation I that assigns a fixed meaning to each observable, rigid variable and operator symbol of the language. To an observable obs the interpretation I assigns a function

    obs_I : Time → D_obs

with Time = ℝ≥0. This induces inductively the semantics of terms and hence of state assertions. For a state assertion S it is a function

    S_I : Time → Bool

where Bool is identified with the set {0, 1}.

The semantics of a duration term dt is denoted by I(dt) and yields a real value depending on a given time interval [b, e] ⊆ Time. In particular, ℓ denotes the length of [b, e] and ∫S the duration of the state assertion S in [b, e] as given by the integral. Formally,

    I(ℓ)[b, e] = e − b,
    I(∫S)[b, e] = ∫_b^e S_I(t) dt.
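As a small worked example with hypothetical values: if a state assertion S holds exactly on [1, 3] and on [5, 6], then for the interval [0, 10] the duration terms evaluate to

    I(ℓ)[0, 10] = 10 − 0 = 10,
    I(∫S)[0, 10] = ∫_0^10 S_I(t) dt = (3 − 1) + (6 − 5) = 3.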
where we use the convention that ⌈q₀⌉ is an abbreviation of ⌈S_A = q₀⌉. Next, we want to describe the behaviour of the automaton in a state q. The cyclic behaviour of PLCs has to be reflected in the semantics to achieve a realistic modelling. One question the semantics should answer is: when a state q is entered, what kind of input can influence the behaviour of the PLC? The answer to this question is:
• only the inputs after entering q, and
• only the inputs during the last cycle time ε.
This is expressed by the following predicates, where A ranges over all sets of inputs with ∅ ≠ A ⊆ Σ. In the formulae we use A as an abbreviation for I_A ∈ A and δ(q, A) as an abbreviation for S_A ∈ {δ(q, a) | a ∈ A}, respectively:
Statement (3) formalises the fact that after a change of the automaton's state to q, only the set of inputs A that is valid after the change can have an effect on the behaviour in the future. Statement (4) represents the formalisation of the cyclic behaviour of PLCs. A PLC reacts only to inputs that occurred during the last cycle. Preceding inputs are forgotten and cannot influence the behaviour of the PLC-automaton anymore.

The quantification over all nonempty subsets of the input alphabet was motivated by the behaviour of the PLCs. The more we know about the inputs during the last cycle, the more we know about the actions of the PLC. For example, it is necessary that an input a is held for at least ε seconds to assure that the PLC can only react to this input. This is directly reflected in the semantics as well. If there is an interval of length ε, predicate (4) can be applied to this interval with A = {a}. Consequently, after this interval only transitions with label a are allowed.

Fig. 8.
Fig. 8 exhibits a possible history of the PLC-automaton in Fig. 4. The application of statement (3) needs two time points. The first point is a change to a state q of the automaton. In Fig. 8 the system changes from T to N at time t0. The second time point is later than the first and requires the state to be constant between both time points. Predicate (3) assures that after the second time point there is an interval in which the state is either q or a state δ(q, a) where a is an input that was valid between both time points.

Hence, the application of (3) to the history of Fig. 8 yields the following results: at t1 the state can remain in N only. After t2, t3, and t4 one knows that only changes to T are possible. At t5 and t6 no change is forbidden by (3).

For the application of (4) two time points are needed, too. The distance between both points has to be ε seconds and the state has to be constant in between. Due to (4) we know again that after the second time point the state is either q or a state δ(q, a) where a is an input that was valid between both time points.

The application of (4) to Fig. 8 now gives us the information that
• after t2 and t3 the state remains N or changes to T,
• after t4 the state is not allowed to change, and
• after t5 and t6 changes to T are forbidden.
Note that (4) cannot be applied in such a way that we gain information at t1 because the state changed less than ε seconds before t1.

For states without a stability requirement we expect a change to δ(q, a) in at most 2ε seconds. For states with a stability requirement we expect this behaviour after the required period of time. This leads us to additional statements in the semantics:
Fig. 9.
Statement (5) says that the automaton reacts in less than 2ε seconds to inputs which force a change if no stability is required for q. Note that less than ε seconds are needed to finish the current cycle and ε seconds are needed to react to this input in the worst case. Formula (6) states this behaviour after S_t(q) seconds: if S_t(q) seconds have elapsed, the automaton reacts to inputs which force a change in less than 2ε seconds. In case we know that the automaton has just changed the state, we want to be able to exploit the information that within the next ε seconds another reaction to the inputs in A has to occur. This is formalised by (7).

In Fig. 9 a history is given where these predicates can be used to get information about the behaviour. Statement (5) requires an interval where the state q is constant and no delay requirement is given, i.e. S_t(q) = 0. If within this interval only inputs were valid which cause a state change, then (5) implies that the length of the interval is shorter than 2ε. In the figure we can apply this formula to the interval [t0, t1] and get the information that it cannot be longer than 2ε.

For the application of (6) we need an interval where the state is q only (with S_t(q) > 0) and the length of the interval is longer than S_t(q) seconds. Thus, we can apply this formula to the interval [t1, t3] because the state is T only and the length is longer than 5 s. Since only inputs which force a state change were valid within [t2, t3], we know by (6) that t3 − t2 < 2ε has to hold.

For (7), an interval of length ε is required where the state is q (with S_t(q) = 0) and the state has just changed before the beginning of the interval. Then we know that the state must change just after the interval again if there was no input a during the interval with q = δ(q, a). The interval [t3, t4] fulfils these requirements. Hence, we know that after t4 the state has to change. Note that there is no restriction given by (7) on which succeeding states are allowed. We apply (3) to get more information.
Fig. 10.
However, we have to take into account the cyclic behaviour of the hardware again. In particular, we should require that if q is left during the stability phase, then there has to be an input not contained in S_e(q) at most ε seconds ago:

    S_t(q) > 0 ⇒ ( ⌈¬q⌉ ; ⌈q⌉ ; ⌈q ∧ A⌉ −(≤ S_t(q))→ ⌈q ∨ δ(q, A ∖ S_e(q))⌉ )        (9)
Fig. 10 presents a history of the PLC-automaton in Fig. 4 where (8) and (9) are applicable. To apply these formulae we need a change of the state into q with S_t(q) > 0. This happens at t0 where the automaton enters T with S_t(T) = 5.

Statement (8) is applicable to all time points t′ less than 5 s later than t0 where the state was constant within [t0, t′]. The result of the application is that the state remains in q after t′ or changes to a state δ(q, a) after t′ where a is an input that has held somewhere between t0 and t′ and is not contained in S_e(q). Thus, we can apply (8) to all intervals of the form [t0, ti] with i ∈ {1, …, 7}. For 1 ≤ i ≤ 4 we get the information that no change of the state after ti is allowed. If 5 ≤ i ≤ 7, only a change to X is allowed after ti.

Formula (9) is applicable to all time points t′ less than 5 s later than t0 and more than ε later than t0. It requires the state to be constant between t0 and t′. The result of the application is that the state remains in q after t′ or changes to a state δ(q, a) after t′ where a is an input that has held somewhere between t′ − ε and t′ and is not contained in S_e(q). Hence, we cannot apply (9) to the interval [t0, t1] since the difference between t0 and t1 is less than ε seconds. For i ∈ {2, …, 7} the difference is big enough and the application yields the following: after t2, t3, t4, and t6 the state has to remain in T after the corresponding time point, and after t5 and t7 the state is allowed to remain in T or to switch to X after the corresponding time point.

Fig. 11.
Furthermore, we know that the automaton reacts according to the input if there is a set A that is valid for the last 2ε seconds and disjoint from S_e(q):

Note that in contrast to (6), predicate (10) does not require that the delay time has elapsed. We demonstrate the meaning of these formulae by the interpretations given in Fig. 11. The application of (10) requires an interval where a state q has held and no input contained in S_e(q) or with a loop from q to q was valid. Then the interval has to be shorter than 2ε. In the left diagram of Fig. 11 there is one interval where (10) is applicable: [t2, t3]. Hence, we can derive t3 − t2 < 2ε.

To apply (11) one needs a time point t where the state changes to a state q with S_t(q) > 0 and where the state remains in q for the next ε seconds and no input in S_e(q) is valid in that phase. Then the state has to leave q at t + ε. We can apply this to the right diagram of Fig. 11, because at t4 the state changes to T, and in the following ε seconds this state is held and the input is never no tr or tr. Hence, the state must leave T at t5 = t4 + ε. Note that (11) is not applicable to the interval [t0, t1] because an input in S_e(T) was valid during that interval.

Formulae (3), (7)–(9), and (11) require a change from ⌈¬q⌉ to ⌈q⌉ to restrict the possible behaviour. But for the initial state there is no change, and therefore these assertions are not applicable in this case. This can be expressed by five corresponding assertions suitable for the initial state; these are given in Appendix A.

Finally, the relation between the observables S_A and O_A is established by a further formula which says that for each interval I ⊆ Time the observable O_A equals ω(S_A(t)) at every time point t ∈ I, except for single points.
In this section we describe how PLC-automata can easily be implemented on PLCs. To this end we use the standardised language "ST" (structured text [19, 24, 20]), which provides all the usual basic constructs of imperative languages and which is used in practice for programming PLCs. We illustrate its usage by means of an example. Let A = (Q, Σ, δ, q0, ε, S_e, S_t, Ω, ω) be a PLC-automaton. Without loss of generality, we assume Q = {1, …, n}, Σ = {1, …, m}, and q0 = 1. Then its behaviour can be implemented by the ST program in Fig. 12.

For all three cases it is shown what the PLC has to do. If S_t(q) = 0 for a state q, it just has to poll the input and act accordingly (* state = i *). Otherwise it has to call the timer with the corresponding time value S_t(q) (* state = k *). Setting the parameter IN to TRUE makes the timer start running for PT seconds if it has not started already. The next statement reads the output Q of the timer. The latter is TRUE iff the time since the start of the timer has not yet exceeded PT. By negating this output and storing the result in time_up we obtain a flag that is true iff the time is up. Thus, the first two statements for the case state = k in the listing start the timer if needed and record whether the stability time is over or not. Now the PLC has to check the input. If it is an input that is not in S_e(k) (* u ∉ S_e(k) *), the PLC changes the variable state accordingly and, in the case of a state change, stops the timer by calling it with IN set to FALSE. Otherwise (* v ∈ S_e(k) *) it does the same provided that the delay time is over. Finally, the output is computed. This ST program is executed once in each cycle of the PLC, so it is the body of an implicit loop-forever statement.
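For readers less familiar with ST, the schema of Fig. 12 can be mirrored by the following Python sketch of a single cycle body (our own approximation, reusing the PLCAutomaton record sketched earlier; the timer is modelled by remembering its start time instead of by the standard timer block).

    def cycle_body(aut, mem, now, polled_input):
        # one execution of the program body; mem keeps state and timer data between cycles,
        # e.g. mem = {"state": aut.q0, "timer_start": None}
        q = mem["state"]
        if aut.S_t[q] == 0:                          # no stability required: react immediately
            mem["state"] = aut.delta(q, polled_input)
        else:                                        # stability required
            if mem["timer_start"] is None:           # start the timer if it is not running yet
                mem["timer_start"] = now
            time_up = (now - mem["timer_start"]) >= aut.S_t[q]
            if polled_input not in aut.S_e[q] or time_up:
                new_state = aut.delta(q, polled_input)
                if new_state != q:
                    mem["timer_start"] = None        # stop the timer on a state change
                mem["state"] = new_state
        return aut.omega[mem["state"]]               # finally, compute the output

As in the ST program, an expected input is ignored while the delay of the current state is still running, whereas any other input leads to a reaction within the same cycle.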
This section demonstrates one major advantage of using PLC-automata for specifying controllers: the semantics given in Section 5 allows formal reasoning in DC, leading to flexible theorems. Just to give an idea of what can be formally established, we state the following theorem and demonstrate its usefulness.

The theorem provides information on how long it takes at most to reach a certain set of states. Often it is necessary that the controller enters a set of states Π′ provided that a special set A of inputs holds. E.g. we built the PLC-automaton in Fig. 4 such that the set {X} of states is entered provided that the set {Error} of inputs holds. Usually, the controller is therefore specified in such a way that it will reach Π′ after several transitions. Provided that Π′ ⊇ Δⁿ(Π, A) for some n ∈ ℕ₀, the theorem below estimates the delay until Π′ is reached in the worst case. That means that the theorem will give us an upper time bound for the PLC-automaton in Fig. 4 to reach state "X" when reading input "Error", because {X} = Δ¹({T, N, X}, {Error}).
CASE state OF
  ...
  i: (* state = i, no stability required *)
     state := δ(i, input);
     (* end of state = i *)
  ...
  k: (* state = k, stability required *)
     timer(IN := TRUE, PT := t#S_t(k));
     time_up := NOT timer.Q;
     CASE input OF
       ...
       u: (* u ∉ S_e(k) *)
          state := δ(k, u);
          IF state <> k THEN
            timer(IN := FALSE, PT := t#S_t(k));
          END_IF;
       ...
       v: (* v ∈ S_e(k) *)
          IF time_up THEN
            state := δ(k, v);
            IF state <> k THEN
              timer(IN := FALSE, PT := t#S_t(k));
            END_IF;
          END_IF;
       ...
     END_CASE;
     (* end of state = k *)
  ...
END_CASE;
output := ω(state);

Fig. 12. The ST-program.
We denote by Δⁿ(Π, A) the set of states that can be reached by n transitions with an input in the set A, starting in a state contained in Π. This is inductively defined by

    Δ⁰(Π, A) := Π,
    Δⁿ⁺¹(Π, A) := {δ(q, a) | q ∈ Δⁿ(Π, A), a ∈ A}    for all n ∈ ℕ₀,

with

    c_n := ε + max { Σ_{i=1}^{k} s(π_i, A) | k ≤ n ∧ ∃π₁, …, π_k ∈ Π ∖ Δⁿ(Π, A): ∀1 ≤ j < k: π_{j+1} ∈ δ(π_j, A) }        (14)

where

    s(π, A) := S_t(π) + 2ε    if S_t(π) > 0 ∧ A ∩ S_e(π) ≠ ∅,
    s(π, A) := ε              otherwise.                                (15)
The proof of this theorem can be found in Appendix B. We can apply Theorem 4 to the automaton in Fig. 4 and get, for example, the assertions below:

    ⌈{N, T} ∧ no tr⌉ −(5+3ε)→ ⌈N⌉
    ⌈{N, T, X} ∧ Error⌉ −(2ε)→ ⌈X⌉
    ⌈T ∧ tr⌉ −(ε)→ ⌈T⌉

The first assertion can be obtained from Theorem 4 with Π = {N, T}, A = {no tr}, and n = 1. It states that if the PLC-automaton is not in state X and reads only no tr inputs, then the system will be in state N within at most 5 + 3ε seconds. The second assertion is a result of the theorem with Π = {N, T, X}, A = {Error}, and n = 1. It says that the automaton will switch to state X whenever the input Error has held for 2ε seconds. The last assertion uses Π = {T}, A = {tr}, and n = 0. It assures that the PLC-automaton remains in T if during the last ε seconds only the input tr has held.
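Under the reconstruction of (14) and (15) given above, the bound c_n can be computed mechanically. The following Python sketch (our own, reusing the hypothetical train_filter object from the earlier sketch with ε = 0.2) computes Δⁿ(Π, A) and c_n and reproduces the three bounds just stated.

    def Delta(aut, Pi, A, n):
        # states reachable from Pi by exactly n transitions under inputs in A
        states = set(Pi)
        for _ in range(n):
            states = {aut.delta(q, a) for q in states for a in A}
        return states

    def s(aut, q, A):
        # clause (15) as reconstructed above
        if aut.S_t[q] > 0 and set(A) & aut.S_e[q]:
            return aut.S_t[q] + 2 * aut.epsilon
        return aut.epsilon

    def c(aut, Pi, A, n):
        # clause (14): epsilon plus the longest chain of per-state delay bounds
        outside = set(Pi) - Delta(aut, Pi, A, n)
        best = 0.0
        def chains(last, length, acc):
            nonlocal best
            best = max(best, acc)
            if length == n:
                return
            for q in outside:
                if last is None or q in {aut.delta(last, a) for a in A}:
                    chains(q, length + 1, acc + s(aut, q, A))
        chains(None, 0, 0.0)
        return aut.epsilon + best

    print(c(train_filter, {"N", "T"}, {"no_tr"}, 1))        # 5.6  =  5 + 3*0.2
    print(c(train_filter, {"N", "T", "X"}, {"Error"}, 1))   # 0.4  =  2*0.2
    print(c(train_filter, {"T"}, {"tr"}, 0))                # 0.2  =  0.2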
8. A case study
The following case study illustrates how quickly and efficiently real-time systems can be specified and implemented by PLC-automata in comparison with the conventional ProCoS style. To this end we choose the gas burner case study of the ProCoS project.

Table 1

Requirement     Is refined by semantic clause
(16)            (2)
(17)–(18)       (3) with A = Σ
(19)            (8) with A = Σ
(20)            (6) with A = Σ and assuming that 2ε ≤ …
(21)            (3) with A = {¬hr} resp. A = {hr ∧ fl}
(22)–(23)       (5) with A = {hr}, {¬hr}, or A = {¬fl}
The remaining question is how fast the PLC has to cycle in the worst case. And this problem normally corresponds to the question: "How much money do we have to spend on the hardware in order to guarantee that the upper time bound is not violated?"

In [9, 11] the interested reader can find an algorithm that generalises this result: it synthesises a PLC-automaton from a specification given in terms of Implementables. The algorithm works provided that the specification does not contain contradictory constraints. Moreover, in [9, 11] it is shown that the algorithm produces correct results in the sense that the semantics of the synthesised PLC-automaton refines the given specification.

In the previous sections we introduced a formalism that reflects the intuition and daily practice of engineers who implement real-time controllers. In this section we define different parallel composition operators for PLC-automata, motivated by the different manifestations of parallelism for PLCs. Roughly speaking, the parallel composition of PLC-automata can be represented by the conjunction of the semantics of the individual PLC-automata. But there are three phenomena which can change the character of the parallel composition of PLC-automata (cf. Fig. 14):
Transmission: Depending on the transmission medium between two PLCs the behaviour of both can vary. The medium can introduce transmission delays or errors. To get a provably correct system one has to model the actual transmission medium in a semantically adequate manner.

Pipelining: The input of one PLC-automaton can be the output of another PLC-automaton. If both automata are implemented on the same PLC, it is possible to describe the allowed behaviour in more detail.
Synchronisation: Suppose two PLC-automata implemented on the same PLC share
an input. Depending on the construction of both automata it may be possible that a
certain combination of states is not reachable due to the synchronisation on the shared
input. The semantics of the parallel composition should be strong enough to establish
such behaviour.
9.1. Transmission
We present a uniform approach to model the transmission of information between different PLCs. Basically, transmission between two PLC-automata is a relation between the output observable of the first automaton and the input observable of the second one. We describe this relation by DC formulae speaking about both observables.

Suppose that the output of PLC-automaton A is the input of PLC-automaton B via the medium m. We denote this connection by A −m→ B. The semantics of A −m→ B is defined as follows:

    [[A −m→ B]] := [[A]]_DC ∧ [[B]]_DC ∧ [[m]]_{O_A, I_B}.

Informally speaking, the possible outputs of sm(t) at a time point t0 ∈ Time are the inputs that were valid during ]max(0, t0 − t), t0[. We use A −t→ B as an abbreviation for A −sm(t)→ B.
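The informal reading of sm(t) can be made concrete by a small Python sketch (ours, not the paper's DC clause): given the recorded output phases of A, it returns the set of values that B may read at time t0, namely all outputs of A that were valid at some point of ]max(0, t0 − t), t0[.

    def admissible_inputs(output_phases, t, t0):
        # output_phases: list of (start, end, value) with the output of A constant on [start, end)
        lo = max(0.0, t0 - t)
        hi = t0
        return {v for (s, e, v) in output_phases if s < hi and e > lo}   # phases overlapping ]lo, hi[

    # hypothetical output history of A and a transmission parameter of t = 0.5 s:
    phases = [(0.0, 3.0, "id"), (3.0, 7.0, "ig")]
    print(admissible_inputs(phases, 0.5, 3.2))    # {'id', 'ig'}: both values were recently valid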
9.2. Pipelining
Unlike transmission, pipelining assumes that we consider two PLC-automata that are implemented on the same PLC. In principle, pipelining could be modelled as an "internal transmission" of data between the automata in the same way as in Section 9.1, but we would lose information that results from the common implementation.

In the pipelining case we know that the result computed by the first automaton during a cycle is used in the same cycle by the second automaton as input. That means every output of the first automaton will be read by the second one. If both automata change state in the same cycle, the external observer will notice these changes simultaneously. To model this we have to use more than the two observables of the transmission case. Hence, we are not able to express pipelining as a special case of transmission.

Pipelining from a PLC-automaton A to an automaton B requires the source code of both automata to be organised as follows:
• The declaration part has to contain uniquely named variables for both automata.
• The body of A has to precede the body of B.
• The output alphabet of A has to be a subset of the input alphabet of B.
• In the body of B the input variable has to be replaced by the name of the output variable of A.

Because of the similarity of pipelining and transmission we use A ≫ B to denote pipelining from automaton A to B. The semantics of A ≫ B is given by

    [[A ≫ B]] := [[A]]_DC ∧ [[B]]_DC ∧ pipe(A, B),

where pipe(A, B) is the conjunction of the following formulae, ranging over all states q ∈ Q_A, all q′ ∈ Q_B and all sets A with ∅ ≠ A ⊆ Σ_A. Read ε as min(ε_A, ε_B).
    ⌈¬(q ∧ q′)⌉ ; ⌈q ∧ q′ ∧ A⌉
      ⟶ ⌈ q ∧ q′
            ∨ ⋁_{a ∈ A} δ_A(q, a) ∧ δ_B(q′, ω_A(δ_A(q, a)))                                        (23a)
            ∨ ⋁_{a ∈ A ∩ S_{e,A}(q), S_{t,A}(q) > 0} q ∧ δ_B(q′, ω_A(q))                            (23b)
            ∨ ⋁_{a ∈ A, ω_A(δ_A(q, a)) ∈ S_{e,B}(q′), S_{t,B}(q′) > 0} δ_A(q, a) ∧ q′ ⌉             (23c)

    ⌈q ∧ q′ ∧ A⌉ −(ε)→
        ⌈ q ∧ q′
            ∨ ⋁_{a ∈ A} δ_A(q, a) ∧ δ_B(q′, ω_A(δ_A(q, a)))                                        (24a)
            ∨ ⋁_{a ∈ A ∩ S_{e,A}(q), S_{t,A}(q) > 0} q ∧ δ_B(q′, ω_A(q))                            (24b)
            ∨ ⋁_{a ∈ A, ω_A(δ_A(q, a)) ∈ S_{e,B}(q′), S_{t,B}(q′) > 0} δ_A(q, a) ∧ q′ ⌉             (24c)
This pair is similar to (3) and (4), but these formulae restrict the progress of both automata involved. In (23a) and (24a) we allow simultaneous steps. In (23b) and (24b) we allow a change of the second automaton without a change of the first one, provided that an input is read for which a delay is valid. Clauses (23c) and (24c) allow steps of the first component without a change of the second one, provided that the new state of the first one is to be delayed.

Note that the formulae above allow nonsimultaneous steps even if the delay times have elapsed. Hence, we need further formulae to disallow this kind of behaviour.
Suppose S_{t,A}(q) > 0:

    ⌈q⌉^{S_{t,A}(q)+2ε} ∧ ( true ; ⌈q′ ∧ A⌉  ∨  ⌈¬q′⌉ ; ⌈q′ ∧ A⌉ )
      ⟶ ⌈ q ∧ q′ ∨ ⋁_{a ∈ A} δ_A(q, a) ∧ δ_B(q′, ω_A(δ_A(q, a)))                                   (25a)
            ∨ ⋁_{a ∈ A, ω_A(δ_A(q, a)) ∈ S_{e,B}(q′), S_{t,B}(q′) > 0} δ_A(q, a) ∧ q′ ⌉             (25b)

Note that the right-hand side of this formula consists of the combinations of states given in (23a) and (23c). Similarly, for the case S_{t,B}(q′) > 0:

    ⌈q′⌉^{S_{t,B}(q′)+2ε} ∧ ( true ; ⌈q ∧ A⌉  ∨  ⌈¬q⌉ ; ⌈q ∧ A⌉ )
      ⟶ ⌈ q ∧ q′ ∨ ⋁_{a ∈ A} δ_A(q, a) ∧ δ_B(q′, ω_A(δ_A(q, a)))                                   (26a)
            ∨ ⋁_{a ∈ A ∩ S_{e,A}(q), S_{t,A}(q) > 0} q ∧ δ_B(q′, ω_A(q)) ⌉                           (26b)
9.3. Synchronisation
In the case of automata that are implemented on the same PLC and share the input, we can benefit from the knowledge that during each cycle the same input is read by both automata. Thus it is often the case that certain combinations of states are not reachable in such a synchronised system. We enhance our semantics by predicates that allow us to establish such phenomena.

To this end, we define the semantics of two PLC-automata A and B which share the input observable I = I_A = I_B (in symbols: A ∥_I B, or simply A ∥ B if I is clear) in a similar way as before:

    [[A ∥_I B]] := [[A]]_DC ∧ [[B]]_DC ∧ syn(A, B),

where syn(A, B) is the conjunction of the following formulae, ranging over all states q ∈ Q_A, all q′ ∈ Q_B and all nonempty sets A that are subsets of the range of I.
Again, read ε as min(ε_A, ε_B). Note that these formulae are defined in analogy to the formulae for the semantics of pipelining:
    ⌈¬(q ∧ q′)⌉ ; ⌈q ∧ q′ ∧ A⌉
      ⟶ ⌈ q ∧ q′ ∨ ⋁_{a ∈ A} δ_A(q, a) ∧ δ_B(q′, a)
            ∨ ⋁_{a ∈ A ∩ S_{e,A}(q), S_{t,A}(q) > 0} q ∧ δ_B(q′, a)
            ∨ ⋁_{a ∈ A ∩ S_{e,B}(q′), S_{t,B}(q′) > 0} δ_A(q, a) ∧ q′ ⌉                             (27)

    ⌈q ∧ q′ ∧ A⌉ −(ε)→
        ⌈ q ∧ q′ ∨ ⋁_{a ∈ A} δ_A(q, a) ∧ δ_B(q′, a)
            ∨ ⋁_{a ∈ A ∩ S_{e,A}(q), S_{t,A}(q) > 0} q ∧ δ_B(q′, a)
            ∨ ⋁_{a ∈ A ∩ S_{e,B}(q′), S_{t,B}(q′) > 0} δ_A(q, a) ∧ q′ ⌉
    ⌈¬(v1 ∧ i1)⌉ ; ⌈v1 ∧ i1⌉ ⟶ ⌈ (v1 ∧ i1) ∨ (v2 ∧ i2) [ig] ∨ (v2 ∧ i1) [bn] ⌉
    ⌈¬(v2 ∧ i2)⌉ ; ⌈v2 ∧ i2⌉ ⟶ ⌈ (v2 ∧ i2) ∨ (v1 ∧ i1) [id, pg] ∨ (v2 ∧ i1) [bn] ⌉
    ⌈¬(v2 ∧ i1)⌉ ; ⌈v2 ∧ i1⌉ ⟶ ⌈ (v2 ∧ i1) ∨ (v1 ∧ i1) [id, pg] ∨ (v2 ∧ i2) [ig] ⌉

(The bracketed labels indicate the inputs under which the respective successor combination is reached.)

These formulae prove that the synchronised system will never enter the combination v1 ∧ i2, which corresponds, due to (12), to ignition without gas. We start in v1 ∧ i1.
The first formula says that the system can only switch to v2 ∧ i1 or v2 ∧ i2. Similarly, the following formulae assure the exclusion of a change to the critical state.

If we implement A and B on the same PLC it is reasonable to implement A first, i.e. A ≫ B. If we analyse pipe(A, B) in the same way as syn(B, C) before, we can find the following properties for the whole system:

From these formulae we can conclude with some simple DC arguments that v2 holds iff A is in m3 or m4. Due to the definition of the output of B we know that gas holds iff v2 is true. Hence, from the observer's point of view the PLC-automaton of Fig. 13 and A ≫ B are equivalent.
Consider now a system in which A and B are implemented on distinct PLCs. We assume that the transmission medium is the standard medium sm(t) with an arbitrary t > 0. We are interested in the delay between the reactions of A and B that is introduced by the transmission. Assume that the cycle time of B is ε_B. By the semantics of A −t→ B we know that the following holds:

    ⌈O_A ∈ {id, pg}⌉ −(t)→ ⌈I_B ∈ {id, pg}⌉
    ⌈O_A ∈ {ig, bn}⌉ −(t)→ ⌈I_B ∈ {ig, bn}⌉

By Theorem 4, applied to B with all states and both sets of inputs above, we get the following assertions:

    ⌈S_B ∈ {v1, v2} ∧ I_B ∈ {id, pg}⌉ −(2ε_B)→ ⌈v1⌉
    ⌈S_B ∈ {v1, v2} ∧ I_B ∈ {ig, bn}⌉ −(2ε_B)→ ⌈v2⌉

Note that S_B ∈ {v1, v2} is equivalent to true. Because ⌈P1⌉ −(t1)→ ⌈P2⌉ and ⌈P2⌉ −(t2)→ ⌈P3⌉ imply ⌈P1⌉ −(t1+t2)→ ⌈P3⌉, we can summarise these to

    ⌈O_A ∈ {id, pg}⌉ −(t+2ε_B)→ ⌈v1⌉
    ⌈O_A ∈ {ig, bn}⌉ −(t+2ε_B)→ ⌈v2⌉

That means that the worst-case delay of the gas signal is t + 2ε_B.
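As a purely hypothetical numerical instantiation (the values are not taken from the case study): for a transmission parameter of t = 0.1 s and a cycle bound ε_B = 0.05 s, the bound evaluates to

    t + 2ε_B = 0.1 s + 2 · 0.05 s = 0.2 s.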
(Note that "locations" refers to the timed automaton and "states" to the PLC-automaton.)
to react has not occurred yet. "2" denotes that polling and testing have happened and the system has decided not to react to the input. "3" denotes that polling and testing have happened and the system has decided to react to the input. The second component of the locations denotes the latest input event, while the third component contains the latest polled input. The last component represents the current state of the PLC-automaton.

There are three clocks in use: clock x measures how long the latest input has been valid, clock y measures how long the current state has been valid, and clock z measures the time of the current cycle. Transitions that change the first component of the locations are labelled with poll, test, and tick. The remaining transitions (28) are labelled with inputs and are not restricted in any way. They change the second component, which represents the latest input event. The third component describes the input which is polled by the system. The polling has to happen some positive amount of time after the beginning of the cycle. To this end the clock z is used. This clock denotes the elapsed time of the current cycle. Hence, the polling transition (29) is labelled with the condition z > 0. Furthermore, it is not allowed to poll an input at the same time point where it becomes valid. Hence, we introduced the clock x, denoting the time since the latest input became valid, and restricted the poll event with x > 0. Otherwise the system could react to an input that was valid only for a single point of time. After the polling the testing has to occur (30)–(32). These transitions reflect the decision of the system whether to react to the polled input b or not. It depends on the definitions of S_e and S_t and on the value of the clock y, which denotes how long the current state q has been valid. The system can only decide to ignore the input when b ∈ S_e(q) and S_t(q) > 0 are true (30) and moreover the delay time has not elapsed: y ≤ S_t(q). Finally, the tick events finish the cycle (33)–(35). Depending on the previous decision by the test event, the state may change or not. All necessary clocks are reset. Due to the invariant z ≤ ε for all locations we know that a cycle consisting of a poll, a test, and a tick event has to happen within ε seconds, because only the tick event resets z.
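The guards just described can be stated compactly; the following Python fragment (our own reading of the prose, reusing the PLCAutomaton record from the earlier sketch) captures the decision taken by the test transition and the location invariant, where y and z denote the current values of the respective clocks.

    def may_ignore(aut, q, b, y):
        # the test transition may choose the non-reacting branch only if the polled input b
        # is expected in q, q carries a delay, and the delay has not elapsed yet
        return b in aut.S_e[q] and aut.S_t[q] > 0 and y <= aut.S_t[q]

    def must_react(aut, q, b, y):
        # otherwise the test transition has to choose the reacting branch
        return not may_ignore(aut, q, b, y)

    def cycle_invariant(aut, z):
        # invariant z <= epsilon of every location: a poll-test-tick round ends within one cycle
        return z <= aut.epsilon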
Tool support is indispensable for the development of correct software. In [31, 32] the reader can find the description of a tool supporting software development with PLC-automata. It allows the user to edit PLC-automata with hierarchical extensions as defined in [15], i.e. PLC-automata with mechanisms to structure a design in a way similar to Statecharts [17, 18]. With this tool the user can build networks of PLC-automata, simulate these networks, perform some static timing analysis, and translate them into input for the model checkers Uppaal [4] and Kronos [7]. Furthermore, we are looking for systematic ways to develop PLC-automata from specifications. A first result of this research is presented in [9, 11], where a synthesis algorithm is presented for the subset of DC formulae called Implementables (cf. Section 8).

We have also carried out comprehensive case studies to evaluate our approach: academic case studies like the gas burner (Section 8), the Production Cell [23, 22], and the
Acknowledgements
I would like to thank E.-R. Olderog and all other members of the “semantics group”
in Oldenburg for detailed comments and various discussions on the subject of this
paper. Furthermore, I would like to thank H. Becker for enhancing the readability of
this paper.
Proof. The proof is by contradiction. We use the following properties, which are easy to prove with δ(Π, A) ⊆ Π:

Due to the finite variability of the state observable we can split the second interval into finitely many subintervals where only one state in Π occurs.

    ⇒ ∃m ∈ ℕ₀, π₀, …, π_m ∈ Π: ∀0 ≤ i < m: π_i ≠ π_{i+1}
        ∧ true ; (⌈A⌉^{c_n} ∧ ⌈π₀⌉ ; … ; ⌈π_m⌉) ; ⌈¬Δⁿ(Π, A)⌉ ; true

    ⇒ ∃m ∈ ℕ₀, π₀, …, π_m ∈ Π: ∀0 ≤ i < m: π_i ≠ π_{i+1}
        ∧ true ; (⌈A⌉^{c_n} ∧ ⌈π₀⌉ ; … ; ⌈π_m⌉) ; ⌈¬Δⁿ(Π, A)⌉ ; true
        ∧ ∀i ∈ {2, …, m}: π_i ∈ Δ^{i−1}(π₁, A)

If m > 0 we can apply the same argument to get π_m ∉ Δⁿ(Π, A): assume that π_m ∈ Δⁿ(Π, A) holds. From (3) we can conclude that after the ⌈π_m⌉-phase only states in the set δ(π_m, A) ∪ {π_m} are allowed. With the assumption and δ(Π, A) ⊆ Π this set is contained in Δⁿ(Π, A), which contradicts the requirement that after the ⌈π_m⌉-phase a ⌈¬Δⁿ(Π, A)⌉-phase follows. In the case of m = 0 we can use (4) to get the same result, because from (14) we know that c_n ≥ ε. Hence, we get

    ⇒ ∃m ∈ ℕ₀, π₀, …, π_m ∈ Π: ∀0 ≤ i < m: π_i ≠ π_{i+1}
        ∧ true ; (⌈A⌉^{c_n} ∧ ⌈π₀⌉ ; … ; ⌈π_m⌉) ; ⌈¬Δⁿ(Π, A)⌉ ; true
        ∧ π_m ∉ Δⁿ(Π, A) ∧ ∀i ∈ {2, …, m}: π_i ∈ Δ^{i−1}(π₁, A)

Because of δ(Π, A) ⊆ Π it is not possible that π_i ∈ δ(π_i, A) holds for any i ∈ {1, …, m}; otherwise this would contradict π_m ∉ Δⁿ(Π, A). Furthermore, m ≤ n must hold: if m > n we have π_m ∈ Δ^{m−1}(π₁, A), which implies π_m ∈ Δⁿ(Π, A).
We are now able to derive upper time bounds for the ⌈π_i⌉-intervals with i ≥ 1. If S_t(π_i) = 0 or S_t(π_i) > 0 ∧ A ∩ S_e(π_i) = ∅ we can use (7) and (11), respectively, which give us the bound of ε seconds. In the case of S_t(π_i) > 0 ∧ A ∩ S_e(π_i) ≠ ∅ we get the bound S_t(π_i) + 2ε by (6). Thus we have the following:

Let us now consider two cases: either the length of the ⌈π₀⌉-interval is less than ε or not. The first case is a contradiction, because c_n is the length of the ⌈A⌉-interval and the accumulated upper time bounds for the ⌈π_i⌉-intervals require a length less than

    ε + Σ_{i=1}^{m} s(π_i, A).

Due to the definition of c_n both cannot hold. In the second case we know from (4) that π₁ ∈ δ(π₀, A). Hence, m = n is not possible due to π_m ∈ Δ^{m−1}(π₁, A); thus π_m ∈ Δ^m(π₀, A) but π_m ∉ Δⁿ(Π, A).

    ⇒ false
       ∨ ∃m ∈ {0, …, n−1}, π₀, …, π_m ∈ Π: ∀0 ≤ i < m: π_i ≠ π_{i+1}
         ∧ true ; (⌈A⌉^{c_n} ∧ ⌈π₀⌉^{≥ε} ; ⌈π₁⌉^{≤ s(π₁, A)} ; … ; ⌈π_m⌉^{≤ s(π_m, A)}) ; ⌈¬Δⁿ(Π, A)⌉ ; true
         ∧ π_m ∉ Δⁿ(Π, A) ∧ ∀i ∈ {1, …, m}: π_i ∈ Δ^i(π₀, A)
         ∧ ∀i ∈ {0, …, m}: π_i ∉ δ(π_i, A)

We now derive upper time bounds for the ⌈π₀⌉-interval. If S_t(π₀) = 0 or S_t(π₀) > 0 ∧ A ∩ S_e(π₀) = ∅ we can use (5) and (10), respectively, giving us the bound of less than 2ε seconds. In the case of S_t(π₀) > 0 ∧ A ∩ S_e(π₀) ≠ ∅ we get the bound of less than S_t(π₀) + 2ε by (6), which can be weakened to less than s(π₀, A) + ε. Thus we have the following:

This is a contradiction as in the previous case, because the accumulated upper time bounds for the ⌈π_i⌉-intervals require a length less than

    ε + Σ_{i=0}^{m} s(π_i, A).

    ⇒ false
References
[1] R. Alur, C. Courcoubetis, D. Dill, Model-checking for real-time systems, 5th Annu. IEEE Symp. on
Logic in Computer Science, IEEE Press, New York, 1990, pp. 414 – 425.
[2] R. Alur, D.L. Dill, A theory of timed automata, Theoret. Comput. Sci. 126 (1994) 183–235.
[3] R. Alur, T. Henzinger, E. Sontag (Eds.), in: Hybrid Systems III, Lecture Notes in Computer Science,
vol. 1066, Springer, Berlin, 1996.
[4] J. Bengtsson, K.G. Larsen, F. Larsson, P. Pettersson, Wang Yi, Uppaal – a tool suite for automatic verification of real-time systems, in: R. Alur, T. Henzinger, E. Sontag (Eds.), Hybrid Systems III, Lecture Notes in Computer Science, vol. 1066, Springer, Berlin, 1996, pp. 232–243.
[5] D. Bosscher, I. Polak, F. Vaandrager, Verication of an audio control protocol, in: H. Langmaack,
W.-P. de Roever, J. Vytopil (Eds.), Formal Techniques in Real-Time and Fault-Tolerant Systems,
Lecture Notes in Computer Science, vol. 863, Springer, Berlin, 1994, pp. 170–192.
[6] J. Bowen, C.A.R. Hoare, H. Langmaack, E.-R. Olderog, A.P. Ravn, ProCoS II: A ProCoS II Project
Final Report, Chapter 3 Number 59 in Bulletin of the EATCS, European Association for Theoretical
Computer Science, June 1996, pp. 76 –99.
[7] C. Daws, A. Olivero, S. Tripakis, S. Yovine, The tool Kronos, in: R. Alur, T. Henzinger, E. Sontag (Eds.), Hybrid Systems III, Lecture Notes in Computer Science, vol. 1066, Springer, Berlin, 1996, pp. 208–219.
[8] H. Dierks, PLC-Automata: a new class of implementable real-time automata, in: M. Bertran, T. Rus
(Eds.), ARTS’97, Lecture Notes in Computer Science, Mallorca, Spain, vol. 1231, Springer, Berlin,
May 1997, pp. 111–125.
[9] H. Dierks, Synthesising controllers from real-time specifications, in: 10th Internat. Symp. on System Synthesis, IEEE Computer Society, New York, September 1997, pp. 126–133, short version of [11].
[10] H. Dierks, Comparing model-checking and logical reasoning for real-time systems, in: Workshop Proc. of the ESSLLI'98, 1998, pp. 13–22.
[11] H. Dierks, Synthesizing controllers from real-time specifications, IEEE Trans. Comput.-Aided Design Integrated Circuits Systems 18 (1) (1999) 33–43.
[12] H. Dierks, C. Dietz, Graphical specification and reasoning: case study generalized railroad crossing, in: J. Fitzgerald, C.B. Jones, P. Lucas (Eds.), FME'97, Lecture Notes in Computer Science, vol. 1313, Graz, Austria, Springer, Berlin, September 1997, pp. 20–39.
[13] H. Dierks, A. Fehnker, A. Mader, F.W. Vaandrager, Operational and logical semantics for polling
real-time systems, in: Ravn, Rischel (Eds.), Formal Techniques in Real-Time and Fault-Tolerant
Systems, Lecture Notes in Computer Science, vol. 1486, Lyngby, Denmark, Springer, Berlin, September
1998. pp. 29 – 40, short version of [14].
[14] H. Dierks, A. Fehnker, A. Mader, F.W. Vaandrager, Operational and logical semantics for polling real-time systems, Technical Report CSI-R9813, Computer Science Institute Nijmegen, Faculty of Mathematics and Informatics, Catholic University of Nijmegen, April 1998, full paper of [13].
[15] H. Dierks, J. Tapken, Tool-supported hierarchical design of distributed real-time systems, in: Proc. 10th
EuroMicro Workshop on Real Time Systems, IEEE Computer Society, June 1998, pp. 222–229.
[16] M.R. Hansen, Zhou Chaochen, Duration calculus: logical foundations, Formal Aspects Comput. 9 (1997)
283–330.
[17] D. Harel, Statecharts: a visual formalism for complex systems, Sci. Comput. Programming 8 (1987)
231–274.
[18] D. Harel, On visual formalisms, Comm. ACM 31 (5) (May 1988) 514–530.
[19] IEC International Standard 1131-3, Programmable Controllers, Part 3, Programming Languages, 1993.
[20] K.-H. John, M. Tiegelkamp, SPS-Programmierung mit IEC 1131-3, Springer, Berlin, 1995 (in German).
[21] B. Krieg-Brückner, J. Peleska, E.-R. Olderog, D. Balzer, A. Baer, UniForM – universal formal methods workbench, in: U. Grote, G. Wolf (Eds.), Statusseminar des BMBF Softwaretechnologie, BMBF, Berlin, March 1996, pp. 357–378.
[22] C. Lewerentz (Ed.), Formal Development of Reactive Systems: Case Study “Production Cell”, Lecture
Notes in Computer Science, vol. 891, Springer, Berlin, 1995.
[23] C. Lewerentz, T. Lindner (Eds.), Case Study “Production Cell”, Forschungszentrum Informatik,
Karlsruhe, 1994.
[24] R.W. Lewis, Programming industrial control systems using IEC 1131-3, The Institution of Electrical
Engineers, 1995.
[25] O. Maler, S. Yovine, Hardware timing verification using Kronos, in: Proc. 7th Conf. on Computer-based Systems and Software Engineering, IEEE Press, New York, 1996.
[26] M. Müller-Olm, Modular Compiler Verification, Lecture Notes in Computer Science, vol. 1283, Springer, Berlin, 1997.
[27] A.P. Ravn, Design of embedded real-time computing systems, Technical Report 1995-170, Technical
University of Denmark, 1995.
[28] A.P. Ravn, H. Rischel (Eds.), Formal Techniques in Real-Time and Fault-Tolerant Systems, Lecture
Notes in Computer Science, vol. 1486, Lyngby, Denmark, Springer, Berlin, September 1998.
[29] A.P. Ravn, H. Rischel, K.M. Hansen, Specifying and verifying requirements of real-time systems, IEEE
Trans. Software Eng. 19 (1993) 41–55.
[30] M. Schenke, Development of correct real-time systems by refinement, Habilitation Thesis, University of Oldenburg, April 1997.
[31] J. Tapken, Interactive and compilative simulation of PLC-Automata, in: W. Hahn, A. Lehmann (Eds.),
ESS’97, SCS, October 1997, pp. 552–556.
[32] J. Tapken, H. Dierks, MOBY/PLC – graphical development of PLC-Automata, in: A.P. Ravn, H. Rischel (Eds.), Formal Techniques in Real-Time and Fault-Tolerant Systems, Lecture Notes in Computer Science, vol. 1486, Lyngby, Denmark, Springer, Berlin, September 1998, pp. 311–314.
[33] Zhou Chaochen, C.A.R. Hoare, A.P. Ravn, A calculus of durations, Inform. Process. Lett. 40 (5) (1991) 269–276.