Comparison Between Networks
Introduction
timization and controller design for the devices connected to the network must be analyzed.
The performance metrics of network systems that impact the requirements of control
systems include access delay, transmission time, response time, message delay, message collisions (percentage of collision), message throughput (percentage of packets discarded), packet
size, network utilization, and determinism boundaries. For control systems, candidate control networks generally must meet two main criteria: bounded time delay and guaranteed
transmission; that is, a message should be transmitted successfully within a bounded time
delay. Messages from a sensor to an actuator, for example, that are lost or excessively
delayed may degrade system performance or even make the system unstable. Several
protocols have been proposed to meet these requirements for control systems. They include
Ethernet (IEEE 802.3:CSMA/CD), Token Bus (IEEE 802.4), Token Ring (IEEE 802.5), and
CAN (CSMA/AMP). Control networks are typically based on one of two medium access protocols, CAN (used by Smart Distributed System (SDS), DeviceNet [9], and CAN Kingdom)
and the Token Ring or Bus (used by Process Field Bus (PROFIBUS) [10], Manufacturing
Automation Protocol (MAP) [11], ControlNet [12], and Fiber Distributed Data Interface
(FDDI) [13]).
In this article, we consider how each of these types of control networks could be used as a
communication backbone for a networked control system connecting sensors, actuators, and
controllers. A detailed discussion of the medium access control sublayer protocol for each
network is provided. For each protocol, we study the key parameters of the corresponding
network when used in a control situation, including network utilization, magnitude of the
expected time delay, and characteristics of time delays. Simulation results are presented
for several different scenarios, and the advantages and disadvantages of each network are
summarized.
Ethernet (CSMA/CD)
Ethernet uses the Carrier Sense Multiple Access with Collision Detection (CSMA/CD)
mechanism for resolving contention on the communication medium. The CSMA/CD protocol
is specified in the IEEE 802.3 network standard and is described briefly as follows [15], [16],
[17]. When a node wants to transmit, it listens to the network. If the network is busy, it
waits until the network is idle; otherwise it transmits immediately. If two or more nodes
listen to the idle network and decide to transmit simultaneously, the messages of these
transmitting nodes collide and the messages are corrupted. While transmitting, a node
must also listen to detect a message collision. On detecting a collision between two or more
messages, a transmitting node stops transmitting and waits a random length of time to
retry its transmission. This random time is determined by the standard Binary Exponential
Backoff (BEB) algorithm. The time before trying again is randomly chosen between 0 and
(2^i − 1) slot times, where i denotes the ith collision event detected by the node and one slot
time is the minimum time needed for a round-trip transmission. However, after 10 collisions
have been reached, the interval is fixed at a maximum of 1023 slots. After 16 collisions,
the node stops attempting to transmit and reports failure back to the node microprocessor.
Further recovery may be attempted in higher layers [17].
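As a concrete illustration, the backoff rule above can be sketched in a few lines (a simplified model, not driver code; the function name and structure are ours):

```python
import random

def beb_backoff_slots(collision_count: int) -> int:
    """Number of slot times to wait after the collision_count-th collision,
    per the Binary Exponential Backoff rule described above."""
    if collision_count > 16:
        # After 16 collisions the node gives up and reports the failure.
        raise RuntimeError("transmission aborted after 16 collisions")
    # The window grows as [0, 2^i - 1] but is frozen at 1023 slots
    # once 10 collisions have been reached.
    max_slots = 2 ** min(collision_count, 10) - 1
    return random.randint(0, max_slots)
```

For a 10-Mbps network each slot is 51.2 μs, so the waiting time is the returned slot count times 51.2 μs.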
The Ethernet frame format is shown in Fig. 1 [17], [18]. The total overhead is 26 bytes.
The Data Packet Frame size is between 46 and 1500 bytes. There is a nonzero minimum
data size requirement because the standard states that valid frames must be at least 64
bytes long, from Destination Address to Checksum (72 bytes including Preamble and Start
of Delimiter). If the data portion of a frame is less than 46 bytes, the Pad eld is used to ll
out the frame to the minimum size. There are two reasons for this minimum size limitation.
First, it makes it easier to distinguish valid frames from "garbage." When a transceiver
detects a collision, it truncates the current frame, which means that stray bits and pieces
of frames frequently appear on the cable. Second, it prevents a node from completing the
transmission of a short frame before the first bit has reached the far end of the cable, where it
may collide with another frame. For a 10-Mbps Ethernet with a maximum length of 2500
m and four repeaters, the minimum allowed frame time or slot time is 51.2 μs, which is the
time required to transmit 64 bytes at 10 Mbps [17].
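The minimum-size rule can be made concrete with a small helper (a sketch; the constants follow the frame description above, and the function name is ours):

```python
ETH_OVERHEAD_BYTES = 26    # preamble, delimiters, addresses, length, checksum
ETH_MIN_DATA_BYTES = 46    # data plus pad must reach this floor
ETH_MAX_DATA_BYTES = 1500

def ethernet_frame_bytes(data_bytes: int) -> int:
    """Total bytes on the wire for one frame, including any Pad bytes."""
    if not 0 <= data_bytes <= ETH_MAX_DATA_BYTES:
        raise ValueError("data does not fit in a single frame")
    pad = max(0, ETH_MIN_DATA_BYTES - data_bytes)
    return ETH_OVERHEAD_BYTES + data_bytes + pad
```

Even an 8-byte control message therefore occupies 72 bytes on the wire, which is the source of the small-message inefficiency noted in the disadvantages.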
Advantages: Because of low medium access overhead, Ethernet uses a simple algorithm
for operation of the network and has almost no delay at low network loads [11]. No communication bandwidth is used to gain access to the network compared with the token bus
or token ring protocol. Ethernet used as a control network commonly uses the 10 Mbps
standard (e.g., Modbus/TCP); high-speed (100 Mbps or even 1 Gbps) Ethernet is mainly
Disadvantages: Ethernet is a nondeterministic protocol and does not support any message
prioritization. At high network loads, message collisions are a major problem because they
greatly affect data throughput and time delay, which may be unbounded [11], [20]. The
Ethernet capture effect in the standard BEB algorithm, in which a node transmits
packets exclusively for a prolonged time despite other nodes waiting for medium access,
causes unfairness and results in substantial performance degradation [21]. Based on the
BEB algorithm, a message may be discarded after a series of collisions; therefore, end-to-end
communication is not guaranteed. Because of the required minimum valid frame size,
Ethernet uses a large message size to transmit a small amount of data.
Several solutions have been proposed for using Ethernet in control applications. For
example, every message could be time-stamped before it is sent. This requires clock synchronization, however, which is not easy to accomplish, especially with a network of this type
[22]. Various schemes based on deterministic retransmission delays for the collided packets
of a CSMA/CD protocol result in an upper-bounded delay for all the transmitted packets.
However, this is achieved at the expense of inferior performance to CSMA/CD at low to
moderate channel utilization in terms of delay throughput [14]. Other solutions also try to
prioritize CSMA/CD (e.g., LonWorks) to improve the response time of critical packets [23].
Using switched Ethernet by subdividing the network architecture is another way to increase
its efficiency [17].
regenerate the token if the token holder stops transmitting and does not pass the token to its
successor. Nodes can also be added dynamically to the bus and can request to be dropped
from the logical ring.
The message frame format of ControlNet is shown in Fig. 2 [12]. The total overhead is
7 bytes, including preamble, start delimiter, source MAC ID, cyclic redundancy check (or
CRC), and end delimiter. The Data Packet Frame, namely Lpacket or Link Packet Frame,
may include several Lpackets that contain the size, control, tag, data, and an individual
destination address, with a total frame size between 0 and 510 bytes. The size field specifies
the number of byte pairs (from 3 to 255) contained in an individual Lpacket. Each Lpacket
must include the size, control, tag, and link data fields.
The ControlNet protocol adopts an implicit token-passing mechanism and assigns a
unique MAC ID (from 1 to 99) to each node. As in general token-passing buses, the node
with the token can send data; however, there is no real token passing around the network.
Instead, each node monitors the source MAC ID of each message frame received. At the
end of a message frame, each node sets an "implicit token register" to the received source
MAC ID + 1. If the implicit token register is equal to the node's own MAC ID, that node
may now transmit messages. All nodes have the same value in their implicit token registers,
preventing collisions on the medium. If a node has no data to send, it just sends a message
with an empty Lpacket field, called a null frame.
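The register update can be sketched as follows (a simplified model; the wrap-around from the highest configured MAC ID back to the lowest is our assumption about how the rotation closes the logical ring):

```python
def next_token_holder(source_mac_id: int, configured_ids: set) -> int:
    """Given the source MAC ID of the frame just heard, return the MAC ID
    of the node whose implicit token register now matches its own ID."""
    candidate = source_mac_id + 1
    highest = max(configured_ids)
    while candidate not in configured_ids:
        # Skip unassigned IDs; wrap past the highest ID back to the lowest.
        candidate = candidate + 1 if candidate < highest else min(configured_ids)
    return candidate
```

For nodes {1, 3, 7}, a frame from node 3 passes the implicit token to node 7, and a frame from node 7 passes it back to node 1.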
The length of a cycle, called the Network Update Time (NUT) in ControlNet or Token
Rotation Time (TRT) in general, is divided into three major parts: scheduled, unscheduled,
and guardband, as shown in Fig. 3. During the scheduled part of a NUT, each node can
transmit time-critical/scheduled data by obtaining the implicit token from 0 to S. During
the unscheduled part of a NUT, each node from 0 to U shares the opportunity to transmit
non-time-critical data in a round-robin fashion until the allocated unscheduled duration
expires. When the guardband time is reached, all nodes stop transmitting, and only the
node with the lowest MAC ID, called the "moderator," can transmit a maintenance message,
called the "moderator frame," which accomplishes the synchronization of all timers inside
each node and publishes critical link parameters such as NUT, node time, S, U, etc.
If the moderator frame is not heard for two consecutive NUTs, the node with the lowest
MAC ID will begin transmitting the moderator frame in the guardband of the third NUT.
Moreover, if a moderator node notices that another node has a lower MAC ID than its own,
it immediately cancels its moderator role.
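The NUT structure can be sketched as a schedule builder (an abstraction: we count unscheduled transmissions in slots rather than measuring time against the guardband, and all names are ours):

```python
def nut_schedule(s_max: int, u_max: int, start: int, slots: int):
    """Transmission order for one NUT: every node 0..s_max sends once in the
    scheduled part (a null frame if idle), nodes 0..u_max then round-robin
    through `slots` unscheduled transmissions starting at `start`, and the
    moderator (lowest MAC ID, here node 0) closes the guardband.
    Returns the order and where the next NUT's unscheduled rotation resumes."""
    order = [("scheduled", n) for n in range(s_max + 1)]
    node = start
    for _ in range(slots):
        order.append(("unscheduled", node))
        node = (node + 1) % (u_max + 1)
    order.append(("moderator", 0))
    return order, node
```

Carrying the resume point into the next NUT is what gives the unscheduled nodes their round-robin share of the leftover bandwidth.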
Advantages: The token bus protocol is a deterministic protocol that provides excellent
throughput and efficiency at high network loads [11], [14]. During network operation, the
token bus can dynamically add nodes to or remove nodes from the network. This contrasts
with the token ring case, where the nodes physically form a ring and cannot be added or removed
dynamically [11]. Scheduled and unscheduled segments in each NUT cycle make ControlNet
suitable for both time-critical and non-time-critical messages.
Disadvantages: Although the token bus protocol is efficient and deterministic at high network
loads, at low channel traffic its performance cannot match that of contention protocols.
In general, when there are many nodes in one logical ring, a large percentage of the network
time is used in passing the token between nodes when data traffic is light [14].
The CAN protocol supports two message frame formats: standard CAN (version 2.0A, 11-bit
identifier) and extended CAN (version 2.0B, 29-bit identifier) [9], [19].
The frame format of DeviceNet is shown in Fig. 4 [9]. The total overhead is 47 bits, which
includes the start of frame (SOF), arbitration (11-bit identifier), control, CRC, acknowledgment
(ACK), end of frame (EOF), and intermission (INT) fields. The size of the Data Field is
between 0 and 8 bytes. The DeviceNet protocol uses the arbitration field to provide source
and destination addressing as well as message prioritization.
Advantages: CAN is a deterministic protocol optimized for short messages. The message
priority is specified in the arbitration field (11-bit identifier). Higher priority messages always
gain access to the medium during arbitration. Therefore, the time delay of transmission of
higher priority messages can be guaranteed.
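The arbitration outcome can be sketched as follows (a simplified model of the wired-AND bus, where a dominant `0` overrides a recessive `1`; identifiers are assumed unique):

```python
def arbitration_winner(identifiers):
    """Return the 11-bit identifier that wins CAN arbitration.

    Identifiers are sent most-significant bit first; a node that writes a
    recessive '1' while the bus carries a dominant '0' withdraws.  The
    survivor is the numerically smallest identifier, i.e. highest priority."""
    contenders = list(identifiers)
    for bit in range(10, -1, -1):                    # bit 10 (MSB) down to bit 0
        bus = min((i >> bit) & 1 for i in contenders)  # wired-AND of all drivers
        contenders = [i for i in contenders if (i >> bit) & 1 == bus]
    return contenders[0]
```

Because the comparison is bitwise from the most significant bit, the lowest identifier always survives, which is exactly why low identifiers mean high priority.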
Disadvantages: The major disadvantage of CAN compared with the other networks is
the slow data rate (maximum of 500 Kbps). Thus the throughput is limited compared with
other control networks. The bit synchronization requirement of the CAN protocol also limits
the maximum length of a DeviceNet network. CAN is also not suitable for transmission of
messages of large data sizes, although it does support fragmentation of data that is more
than 8 bytes.
The discontinuities seen in Figs. 5 and 6 are caused by data fragmentation (i.e., the
maximum size limitation per message). The maximum data sizes are 1500, 504, and 8 bytes
for Ethernet, ControlNet, and DeviceNet, respectively. The flat portion of the Ethernet plots
for small data sizes is due to the minimum data size requirement (46 bytes).
Tdelay = Tpre + Twait + Ttx + Tpost,   (1)

where Tpre and Tpost denote the preprocessing and postprocessing times, Twait = Tqueue + Tblock
is the waiting time, and Ttx is the transmission time.
Because the preprocessing and postprocessing times are typically constant compared
with the waiting time and the transmission time, and are a limitation of computer processing
parameters rather than network physical and protocol parameters, we will ignore them in
the following discussion. The queueing time, Tqueue, is the time a message waits in the buer
at the source node while previous messages in the queue are sent. It depends on the blocking
time of previous messages in queue, the periodicity of messages, and the processing load.
Although Tqueue is dicult to analyze, we include it in our simulations. In some control
applications, old messages are discarded, eectively setting Tqueue to zero; however, this
strategy may require a nonstandard network protocol. In the following subsections, we will
analyze the blocking time, frame time, and propagation time for each of the three candidate
control networks.
Blocking Time
The blocking time, which is the time a message must wait once a node is ready to send it,
depends on the network protocol and is a major factor in the determinism and performance
of a control network. It includes waiting time while other nodes are sending messages and
the time needed to resend the message if a collision occurs.
Ethernet Blocking Time
We first consider the blocking time for Ethernet, which includes time taken by collisions
with other messages and the subsequent time waiting to be retransmitted. The BEB algorithm described earlier indicates a probabilistic waiting time. An exact analysis of expected
blocking time delay for Ethernet is very difficult [24]. At a high level, the expected blocking
time can be described by the following equation:
E{Tblock} = Σ_{k=1}^{16} E{Tk} + Tresid,   (2)
where Tresid denotes the residual time seen by node i until the network is idle, and
E{Tk} is the expected time of the kth collision. E{Tk} depends on the number of backlogged
and unbacklogged nodes as well as the message arrival rate at each node. At the 16th collision,
the node discards the message and reports an error message to the higher-level processing
units [17]. It can be seen that Tblock is not deterministic and may be unbounded due to the
discarding of messages.
ControlNet Blocking Time
In ControlNet, if a node wants to send a message, it must wait to receive the token from
the logically previous node. Therefore, the blocking time, Tblock , can be expressed by the
transmission time and token rotation time of previous nodes. The general formula for Tblock
can be described by the following equation:
Tblock = Tresid + Σ_{j∈Nnoqueue} Ttoken + Σ_{j∈Nqueue} [min(Ttx(j,nj), Tnode) + Ttoken] + Tguard,   (3)

where Tresid is the residual time needed by the current node to finish transmitting, Nnoqueue
and Nqueue denote the sets of nodes without and with messages in their queues,
respectively, and Tguard is the time spent on the guardband period, as defined earlier. For
example, if node 10 is waiting for the token, node 4 is holding the token and sending messages,
and nodes 6, 7, and 8 have messages in their queues, then Nnoqueue = {5, 9} and Nqueue =
{4, 6, 7, 8}. Let nj denote the number of messages queued in the jth node and let Tnode
be the maximum possible time (i.e., token holding time) assigned to each node to fully
utilize the network channel; for example, in ControlNet Tnode = 827.2 μs, which is a function
of the maximum data size, overhead frame size, and other network parameters. Ttoken is
the token passing time, which depends on the time needed to transmit a token and the
propagation time from node i−1 to node i. ControlNet uses an implicit token, and Ttoken
is simply the sum of Tframe with zero data size and Tprop. If a new message is queued for
sending at a node while that node is holding the token, then Tblock = Ttx(j,nj), where j is
the node number. In the worst case, if there are N master nodes on the bus and each one
has multiple messages to send, which means each node uses the maximum token holding
time, then Tblock = Σ_{i∈Nnode\{j}} min(Ttx(i,ni), Tnode), where the min function is used because,
even if it has more messages to send, a node cannot hold the token longer than Tnode (i.e.,
Ttx(j,nj) ≤ Tnode). ControlNet is a deterministic network because the maximum time delay
is bounded and can be characterized by (3). If the periods of each node and message are
known, we can explicitly describe the sets Nnoqueue and Nqueue and nj. Hence, Tblock in (3)
can be determined explicitly.
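A numerical sketch of (3), with all times in μs (the function and argument names are ours; one token pass is charged per predecessor node):

```python
T_NODE_US = 827.2   # maximum token holding time per node in ControlNet

def controlnet_blocking_time(t_resid, t_token, n_idle, queued_tx_times,
                             t_guard=0.0):
    """Blocking time per (3): residual time of the current token holder,
    one token pass for each idle predecessor, and (capped) transmission
    plus a token pass for each predecessor with queued messages."""
    t = t_resid + n_idle * t_token + t_guard
    for t_tx in queued_tx_times:
        t += min(t_tx, T_NODE_US) + t_token   # a node may not exceed Tnode
    return t
```

In the example above, nodes 5 and 9 contribute only token passes (n_idle = 2), while the queued transmission times of nodes 6, 7, and 8 enter the sum and node 4's remainder enters as t_resid.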
DeviceNet Blocking Time
The blocking time, Tblock, in DeviceNet can be described by the following equation [25]:

Tblock^(k) = Tresid + Σ_{j∈Nhp} ⌈(Tblock^(k−1) + Tbit) / Tperi^(j)⌉ Ttx^(j),   (4)

where Tresid is the residual time needed by the current node to finish transmitting, Nhp is the set
of nodes with higher priority than the waiting node, Tperi^(j) is the period of the jth message,
and ⌈x⌉ denotes the smallest integer that is greater than x. The summation denotes
the time needed to send all the higher priority messages. For a low-priority node, while it
is waiting for the channel to become available, it is possible for other high-priority nodes to
be queued, in which case the low-priority node loses the arbitration again. This situation
accumulates the total blocking time. The worst-case Tresid under a low traffic load is:

Tresid = max_{j∈Nnode} Ttx(j),   (5)
where Nnode is the set of nodes on the network. However, because of the priority-arbitration
mechanism, low-priority node/message transmission may not be deterministic or bounded
under high loading.
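Recursion (4) can be iterated to a fixed point (a sketch; the iteration cap and names are ours, and all times must share one unit):

```python
import math

def devicenet_blocking_time(t_resid, t_bit, higher_prio, max_iter=1000):
    """Fixed-point iteration of (4).  `higher_prio` lists (t_tx, t_period)
    pairs for every message with higher priority than the waiting one."""
    t_block = t_resid
    for _ in range(max_iter):
        t_next = t_resid + sum(
            math.ceil((t_block + t_bit) / t_period) * t_tx
            for t_tx, t_period in higher_prio)
        if t_next == t_block:
            return t_block          # no further higher-priority arrivals fit
        t_block = t_next
    return float("inf")             # grows without bound: not schedulable
```

Failure to converge is exactly the nondeterminism noted above: under high load, a low-priority message keeps losing arbitration to newly queued higher-priority traffic.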
Frame Time
The frame time, Tframe, depends on the size of the data, the overhead, any padding,
and the bit time. Let Ndata be the size of the data in bytes, Novhd be the number of
bytes used as overhead, Npad be the number of bytes used to pad the remaining part of the
frame to meet the minimum frame size requirement, and Nstuff be the number of bytes used
in a stuffing mechanism (on some protocols). The frame time can then be expressed by the
following equation:

Tframe = 8 (Ndata + Novhd + Npad + Nstuff) Tbit,   (6)

where Tbit is the time needed to transmit one bit. The values Ndata, Novhd, Npad, and Nstuff
can be explicitly described for the Ethernet, ControlNet, and DeviceNet protocols [24].
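Equation (6) in executable form, with the bit time derived from the data rate (the function name and argument names are ours):

```python
def frame_time_us(n_data, n_ovhd, n_pad, n_stuff, bit_rate_bps):
    """Frame time per (6): eight bit times for every byte on the wire."""
    t_bit_us = 1e6 / bit_rate_bps
    return 8 * (n_data + n_ovhd + n_pad + n_stuff) * t_bit_us
```

For Ethernet carrying 8 data bytes (26 bytes of overhead, 38 bytes of pad) at 10 Mbps, this gives 57.6 μs per frame, the same as a full 46-byte data field with no pad.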
Propagation Time
The propagation time, Tprop, depends on the signal transmission speed and the distance
between the source and destination nodes. For the worst case, the propagation delays from
one end of the network cable to the other for these three control networks are: Tprop = 25.6
μs for Ethernet (2500 m), Tprop = 10 μs for ControlNet (1000 m), and Tprop = 1 μs for
DeviceNet (100 m). The length in parentheses represents the typical maximum cable length
used. The propagation delay is not easily characterized because the distance between the
source and destination nodes is not constant among different transmissions. For comparison,
we will assume that the propagation times of these three network types are the same, say,
Tprop = 1 μs (100 m). Note that Tprop in DeviceNet is generally less than one bit time
because DeviceNet is a bit-synchronized network. Hence, the maximum cable length is used
to guarantee bit synchronization among nodes.
Case Studies
In this section, we define critical network parameters and then study two cases of
networked control systems: a control network system with 10 nodes, each with 8 bytes of
data to send every period, and an SAE vehicle example with 53 nodes [25]. Matlab is used
to simulate the MAC sublayer protocols of the three control networks. Network parameters
such as the number of nodes, the message periods, and message sizes can be specified in
the simulation model. In our study, these network parameters are constant. The simulation
program records the time delay history of each message and calculates network performance
statistics such as the average time delay seen by messages on the network, the efficiency and
utilization of the network, and the number of messages that remain unsent at the end of the
simulation run.
(The bit-stuffing mechanism in DeviceNet is as follows: if more than 5 bits in a row are `1', then a `0' is added, and
vice versa. Ethernet and ControlNet use Manchester biphase encoding and, therefore, do not require bit stuffing.
Matlab is technical computing software developed by The MathWorks, Inc.)
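The DeviceNet bit-stuffing mechanism mentioned above can be sketched as follows (a simplified model that inserts a complementary bit after each run of five identical bits):

```python
def stuff_bits(bits):
    """Return `bits` with a complementary bit inserted after every run of
    five identical bits, as in the CAN/DeviceNet stuffing rule."""
    out, run_bit, run_len = [], None, 0
    for b in bits:
        out.append(b)
        run_len = run_len + 1 if b == run_bit else 1
        run_bit = b
        if run_len == 5:
            out.append(1 - b)          # stuffed bit restarts the run count
            run_bit, run_len = 1 - b, 1
    return out
```

The stuffed bits are what make Nstuff in (6) frame-dependent for DeviceNet, since the count varies with the data pattern.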
For control network operation, the message connection type must be specified. Practically, there are three types of message connections: strobe, poll, and change of state
(COS)/cyclic. In a strobe connection, the master device broadcasts a strobed message to a
group of devices and these devices respond with their current condition. In this case, all
devices are considered to sample new information at the same time. The time delay between sampling at the source device and receiving at the destination device is the sum of the
transmission time and the waiting time at the source node. In a poll connection, the master
sends individual messages to the polled devices and requests update information from them.
Devices only respond with new signals after they have received a poll message. COS/cyclic
devices send out messages either when their status is changed (COS) or periodically (cyclic).
Although COS/cyclic seems most appropriate from the traditional control systems point of
view, strobe and poll are commonly used in industrial control networks [9].
Based on these different types of message connections, we consider the following three
releasing policies. The first policy, which we call the "zero releasing policy," assumes every
node tries to send its first message at t = 0 and sends a new message every period. This
type of situation occurs when a system powers up and there has been no prescheduling of
messages or when there is a strobe request from the master. The second policy, namely, the
"random releasing policy," assumes a random start time for each node; each node still sends
a new message every period. The possible situation for this releasing policy is the COS
or cyclic messaging where no pre-scheduling is done. In the third policy, called the "scheduled
releasing policy," the start-sending time is scheduled to occur (to the extent possible) when
the network is available to the node; this occurs in a polled connection.
In addition to varying the release policy, we also change the period of each node to
demonstrate the effect of traffic load on the network. For each releasing policy and period,
we calculate the average time delays of these ten nodes and the efficiency and utilization
of the three different control networks; we also record the number of unsent and failed or
discarded messages of each network. Further, we examine the effect of node numbering on
the network performance. We then compare the simulation results to the analytic results
described above. For ControlNet and DeviceNet, the maximum time delay can be explicitly
determined. For Ethernet, the expected value of the time delay can be computed using the
BEB algorithm once the releasing policy is known.
For a given running time, say Tmax = 10 sec, we can calculate the average time delays
from each node for each network.
Tdelay^avg = (1/N) Σ_{i∈Nnode} [ (Σ_{j=1}^{M(i)} Tdelay(i,j)) / M(i) ],   (7)

where N is the total number of nodes, Nnode is the set of nodes, and M(i) is the number of
messages requested at node i. We assume all messages are periodic; thus the total number
of messages is equal to the total running time divided by the period of the messages (i.e.,
M(i) = ⌊Tmax / Tperi^(i)⌋, where ⌊x⌋ denotes the largest integer less than x). The average delay
can be computed for the entire network, as shown, or for the ith node.
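Equation (7) in executable form (a sketch; the input maps each node to its list of measured message delays):

```python
def average_delay(delays_per_node):
    """Tdelay^avg per (7): the mean over nodes of each node's mean delay."""
    node_means = [sum(d) / len(d) for d in delays_per_node.values()]
    return sum(node_means) / len(node_means)
```

Note that each node is weighted equally regardless of how many messages it sent, which matches the per-node averaging inside the brackets of (7).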
Network Efficiency

We will define the efficiency of a network, Pe, as the ratio of the total transmission
time to the total time used to send messages, including queueing time, blocking time, and so on;
that is,

Pe = (Σ_{i∈Nnode} Σ_{j=1}^{M(i)} Ttx(i,j)) / Tdelay^sum,   (8)

where Tdelay^sum denotes the sum of the time delays of all messages. Therefore, Pe → 1
denotes that all the time delay is due to the transmission delay and the
network performance is good. On the other hand, Pe → 0 means that most of the time
delay is due to message contention or collision.
Network Utilization

The utilization of a network, Putil, is defined as the ratio of the total time used to
transmit data to the total running time; that is,

Putil = (Σ_{i∈Nnode} Σ_{j=1}^{M(i)} [Ttx(i,j) + Tretx(i,j)]) / Tmax,   (9)

where Tretx(i,j) is the time taken to retransmit the (i,j)th message. Putil describes the percentage
of effective bandwidth used by the nodes or, conversely, the utilization of the network. If
Putil → 0, there is sufficient bandwidth left on the network for other purposes. If Putil → 1,
the network is saturated, and we have to redesign the network layout or reassign the traffic
load. Note that for Ethernet, under high loading conditions, Putil can approach 1. However,
effective data communication can approach zero (i.e., Pe → 0) because Putil is dominated
by Tretx.
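Definitions (8) and (9) can be computed together from per-message records (a sketch; each record holds the transmission, retransmission, and total delay times of one message, all in consistent units):

```python
def efficiency_and_utilization(records, t_max):
    """Return (Pe, Putil) per (8) and (9) for a list of
    (t_tx, t_retx, t_delay) tuples, one per message (i, j)."""
    total_tx = sum(r[0] for r in records)
    total_retx = sum(r[1] for r in records)
    total_delay = sum(r[2] for r in records)
    pe = total_tx / total_delay              # -> 1 when delay is pure transmission
    putil = (total_tx + total_retx) / t_max  # fraction of run time on the wire
    return pe, putil
```

The retransmission term appears only in Putil, which is how a saturated Ethernet can show Putil near 1 while Pe collapses toward 0.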
Control systems require information to be transmitted successfully and in a timely manner. If the information transmission on a network results in lost messages, the system
performance may deteriorate or the system may even become unstable. Hence, it is very important to evaluate a network protocol according to the number or the possibility of unsent messages.
larger than the availability of network), the utilization approaches 100% and the time delay
increases (it will become unbounded as the simulation time goes to infinity). Therefore, the
network becomes unstable.
The average time delays for the three releasing policies are shown in Fig. 8. The zero
releasing policy has the longest average delays in every network because all nodes experience
contention when trying to send messages. Although the Ethernet data rate is much faster
than DeviceNet, the delays due to collisions and the large required message size combine to
increase the average time delay for Ethernet in this case. For a typical random releasing
policy, average time delays are reduced because not all nodes try to send messages (or
experience network contention) at the same time, although some contention still exists. The
scheduled releasing policy makes the best use of each individual network; the average time
delay of these releasing policies is only the transmission time. When the networks are not
saturated, the average time delay is just equal to the frame time (i.e., the time taken to
transmit a message frame). In this case, all three networks maintain a constant time delay.
For the two deterministic control networks (i.e., DeviceNet and ControlNet), if the network is not saturated (i.e., Putil < 100%), there are no messages queued at the buffer.
However, for Ethernet, there are always some messages queued at the buffer, and, moreover,
some messages are discarded due to the BEB algorithm. We also notice that the average time
delay of each message is not constant, even though the network is not saturated. The discarded messages and nonconstant time delay may make Ethernet unsuitable in this loading
situation for control applications.
Table 2 compares the performance of the three releasing policies for a message period of
5000 μs. The network efficiency is low for the zero and random releasing policies, but can
reach 100% in the scheduled releasing policy if the traffic load is not saturated. With a
message period of 5000 μs, none of the networks are saturated, and the network utilization
of ControlNet and DeviceNet is the same for the three releasing policies. However, for
Ethernet, the network utilization in the zero and random releasing policies is different from
that in the scheduled releasing policy because, when messages collide, there are multiple
transmissions for one message.
Again using the 5000 μs period, we also demonstrate the time delay history of the messages sent by three different nodes on the network (nodes 1, 5, and 10 in Fig. 9). DeviceNet
is the only network that can guarantee constant time delay for all three releasing policies;
this is due to the priority arbitration mechanism and the periodicity of messages. Hence,
the qualitative performance of DeviceNet is independent of releasing policy in this case (i.e.,
when the network is not saturated). As the traffic load increases (high frequency of messages), only higher priority nodes can gain access to the network and maintain bounded
transmission time delays, but low-priority messages cannot access the network and remain
unsent. Using the scheduled and random releasing policies, the Ethernet message collision
probability is low at low message rates. Hence, Ethernet generally has a constant time delay
in low traffic loads. However, when collisions occur, as in the zero or random releasing policy,
the time delay is not constant and is dicult to predict, as shown in Figs. 9(a) and (b).
Although ControlNet only exhibits a constant time delay under scheduled releasing policy
conditions, it always guarantees a bounded time delay when the traffic load is not saturated,
no matter what the releasing policy, as shown in Fig. 9.
The delays experienced by all the messages on the network are combined in Fig. 10.
Again, with a message period of 5000 μs, none of the networks are saturated. In Ethernet,
shown in Fig. 10(a), the zero and random releasing policies demonstrate the nondeterministic
property of time delay in Ethernet, even though the traffic load is not saturated. Fig. 10(b)
shows that message time delay of ControlNet is bounded for all releasing policies; we can
estimate the lower and upper bounds based on the formulae derived in the timing analysis
section. Due to the asynchronicity between the message period and the token rotation period,
these time delays exhibit a linear trend with respect to the message number. The simulation
results for DeviceNet, shown in Fig. 10(c), demonstrate that every node in DeviceNet has a
constant time delay which depends only on the message number. The estimated mean time
delay (1091) for Ethernet in Fig. 10(a) is computed for the case of the zero releasing policy
from (2) and the variance is taken as twice the standard deviation. This estimated value
is close to the simulated value (1081) given in Table 2. The maximum and minimum time
delays for ControlNet and DeviceNet are computed from (3), (4), and (6).
Table 4 illustrates three cases based on three different node numbering assignments: the
node numbering in case I is ordered based on the functional groups, and the other two cases
are ordered by message periods (case II is decreasing and case III is increasing). Note that
the total transmission times for all 53 messages are 3052.8, 1366.4, and 8470 μs in Ethernet,
ControlNet, and DeviceNet, respectively.
The comparison of case I simulation results for the three control networks is shown
in Fig. 11. For the zero releasing policy, Ethernet has the largest message time delay
and standard deviation among all 53 nodes. This is due to the high probability of message
collision with the zero releasing policy. For DeviceNet, as expected, there is a linear trend for
message time delay over the message node numbers. However, for ControlNet, the average
message time delay is nearly the same over all 53 nodes. This is because the medium access
control in ControlNet is based on the token rotation mechanism, and every node gains access
to the network medium at a deterministic period. Fig. 11 also shows lower and more consistent
variability of time delays for ControlNet and DeviceNet compared to that of Ethernet.
For the random releasing policy, ControlNet still demonstrates a constant time delay
over all 53 nodes, but Ethernet and DeviceNet do not. However, for DeviceNet, nodes with
small node numbers still have a shorter message time delay than nodes with large node
numbers. The average message time delays of the random releasing policy are shorter than
those of the zero releasing policy, as shown in Table 4. Ethernet still has high variability of
time delay compared to ControlNet and DeviceNet.
For the scheduled releasing policy, Ethernet demonstrates outstanding results because
of its high transmission speed and zero possibility of message collisions. Because the total
transmission time of all 53 messages is 8470 μs in DeviceNet, the last few nodes with short
message periods (e.g., nodes 39–53) have higher variability of message time delays, especially
nodes 39, 42, 43, and 49, which have large data sizes and short message periods. The results
for ControlNet, however, are similar for all releasing policies. ControlNet is less sensitive to
the releasing policies, and for multiperiod systems it is very difficult to coordinate the system
parameters to maintain the minimum time delay, as we did in the 10-node simulation.
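The idea behind the scheduled releasing policy can be sketched as follows; the offsets and frame times here are illustrative assumptions, not the exact schedule used in the simulations:

```python
# Sketch of a scheduled releasing policy: stagger the release times so that
# no two transmissions ever overlap, eliminating contention entirely.

def schedule_offsets(frame_times_us):
    """Release node k at the cumulative transmission time of nodes 1..k-1."""
    offsets, t = [], 0.0
    for ft in frame_times_us:
        offsets.append(t)
        t += ft
    return offsets

def feasible(frame_times_us, period_us):
    # The schedule fits iff one full round of transmissions fits in the period.
    return sum(frame_times_us) <= period_us

frames = [57.6] * 10  # ten minimum-size Ethernet frames (10-node case)
offs = schedule_offsets(frames)
print(feasible(frames, 5000.0), offs[:3])  # True [0.0, 57.6, 115.2]
```

With such a collision-free schedule, each Ethernet message experiences only its own transmission time, which matches the roughly 58 μs scheduled-policy delay reported in Table 2.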
In this SAE example setting, shown in Table 3, there are six different message periods.
To study the effect of node numbering, we regroup these nodes by their message periods.
We first let those nodes with shorter message periods have lower node numbers and find
that the nodes within one group have similar performance in the ControlNet and DeviceNet
systems, but that performance varies between node groups. For Ethernet, however, there is
not much difference in performance between node groups. Note that the simulation results
within one group are similar to the 10-node example we studied in the first part of this
section.
Fig. 12 shows the time delay characteristics of two releasing policies in DeviceNet for
three cases of node numbering. Because of the priority mechanism on node numbers, the
time delay profile exhibits a linear trend over node numbers. However, there is little variance
for the case ordered by increasing message period under the zero releasing policy, because
the high-frequency nodes are high-priority nodes and have little difficulty accessing the
network medium. For the scheduled releasing policy, both the average time delay and the
variance are smaller than those under the zero releasing policy. The last few nodes of the
decreasing-message-period numbering case have large time delay variance under both
releasing policies because they are low-priority nodes with high-frequency messages and
always lose the contention battle when one arises. The gap between nodes 33 and 34 is due
to the fact that the total transmission time of nodes 1 to 33 is more than 5000 μs, so node
34 always loses the contention to those nodes with higher priority. A similar situation occurs
between nodes 51 and 52.
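This starvation effect can be illustrated with a simple sketch; the frame times below are hypothetical stand-ins rather than the actual Case II values:

```python
# Illustration of the gap: with priority-based access, a node is starved in
# a given period whenever the total transmission time of all higher-priority
# traffic released in that period exceeds the period itself.

def first_starved_node(frame_times_us, period_us):
    """1-based index of the first node whose higher-priority load exceeds
    the period, or None if every node can be served within one period."""
    load = 0.0
    for i, ft in enumerate(frame_times_us, start=1):
        if load > period_us:
            return i
        load += ft  # this node's frame adds to the load seen by later nodes
    return None

# 40 higher-priority frames of 150 us each against a 5000-us period:
# node 35 is the first starved (its 34 predecessors occupy 5100 us).
print(first_starved_node([150.0] * 40, 5000.0))  # 35
```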
Because of its high data rate, Ethernet can be used for aperiodic, non-time-critical, and
large-data-size communication, such as communication between workstations or machine
cells. For intra-workstation cell communication among controllers, sensors, and actuators,
deterministic network systems are generally more suitable for meeting the characteristics
and requirements of control systems. For control systems with short and/or prioritized
messages, DeviceNet demonstrates better performance. The scheduled and unscheduled
messaging capabilities of ControlNet make it suitable for both time-critical and
non-time-critical messages. ControlNet is also suitable for large-data-size message
transmission.
Our future efforts will focus on controller design for networked control systems, which
can differ significantly from the design of traditional centralized control systems. This work
will include conducting experimental studies of control networks for control applications. We
plan to use this analysis, along with the performance evaluation of candidate control networks
presented here, as the basis for future message scheduling and control algorithm designs for
networked control systems.
Acknowledgments
The authors wish to acknowledge the contributions of John Korsakas and the support of
the National Science Foundation Engineering Research Center for Reconfigurable Machining
Systems at the University of Michigan under grant EEC95-92125, the Open DeviceNet
Vendor Association, and ControlNet International.
References
[1] L.H. Eccles, "A smart sensor bus for data acquisition," Sensors, vol. 15, no. 3, pp. 28–36, 1998.
[2] S. Biegacki and D. VanGompel, "The application of DeviceNet in process control," ISA Transactions, vol. 35, no. 2, pp. 169–176, 1996.
[3] L.D. Gibson, "Autonomous control with peer-to-peer I/O networks," Sensors, vol. 12, no. 9, pp. 83–90, Sep. 1995.
[4] A. Ray, "Introduction to networking for integrated control systems," IEEE Control Systems Magazine, vol. 9, no. 1, pp. 76–79, Jan. 1989.
[5] G. Schickhuber and O. McCarthy, "Distributed fieldbus and control network systems," Computing & Control Engineering, vol. 8, no. 1, pp. 21–32, Feb. 1997.
[6] D. Song, T. Divoux, and F. Lepage, "Design of the distributed architecture of a machine-tool using FIP fieldbus," in Proceedings of the IEEE Int'l Conf. on Application-Specific Systems, Architectures, and Processors, Los Alamitos, CA, pp. 250–260, 1996.
[7] R.S. Raji, "Smart networks for control," IEEE Spectrum, vol. 31, no. 6, pp. 49–55, June 1994.
[8] Y. Koren, Z.J. Pasek, A.G. Ulsoy, and U. Benchetrit, "Real-time open control architectures and system performance," CIRP Annals – Manufacturing Technology, vol. 45, no. 1, pp. 377–380, 1996.
[9] DeviceNet Specifications, Boca Raton, FL, Open DeviceNet Vendors Association, 2.0 edition, 1997.
[10] G. Cena, C. Demartini, and A. Valenzano, "On the performances of two popular fieldbuses," in Proceedings of the IEEE International Workshop on Factory Communication Systems, Barcelona, Spain, pp. 177–186, Oct. 1997.
[11] J.D. Wheelis, "Process control communications: Token Bus, CSMA/CD, or Token Ring?" ISA Transactions, vol. 32, no. 2, pp. 193–198, July 1993.
[12] ControlNet Specifications, Boca Raton, FL, ControlNet International, 1.03 edition, 1997.
[13] S. Saad-Bouzefrane and F. Cottet, "A performance analysis of distributed hard real-time applications," in Proceedings of the IEEE International Workshop on Factory Communication Systems, Barcelona, Spain, pp. 167–176, Oct. 1997.
[14] S.A. Koubias and G.D. Papadopoulos, "Modern fieldbus communication architectures for real-time industrial applications," Computers in Industry, vol. 26, no. 3, pp. 243–252, Aug. 1995.
[15] D. Bertsekas and R. Gallager, Data Networks, Englewood Cliffs, NJ, Prentice-Hall, second edition, 1992.
[16] B.J. Casey, "Implementing Ethernet in the industrial environment," in IEEE Industry Applications Society Annual Meeting, Seattle, WA, vol. 2, pp. 1469–1477, Oct. 1990.
[17] A.S. Tanenbaum, Computer Networks, Upper Saddle River, NJ, Prentice-Hall, third edition, 1996.
[18] H. Bartlett and J. Harvey, "The modeling and simulation of a pick and place computer-integrated manufacturing (CIM) cell," Computers in Industry, vol. 26, no. 3, pp. 253–260, Aug. 1995.
[19] G. Paula, "Building a better fieldbus," Mechanical Engineering, pp. 90–92, June 1997.
[20] V.K. Khanna and S. Singh, "An improved 'piggyback Ethernet' protocol and its analysis," Computer Networks and ISDN Systems, vol. 26, no. 11, pp. 1437–1446, Aug. 1994.
[21] K.K. Ramakrishnan and H. Yang, "The Ethernet capture effect: Analysis and solution," in Proceedings of the 19th Conference on Local Computer Networks, Minneapolis, MN, pp. 228–240, Oct. 1994.
[22] J. Eidson and W. Cole, "Ethernet rules closed-loop system," InTech, pp. 39–42, June 1998.
[23] J.R. Moyne, N. Najafi, D. Judd, and A. Stock, "Analysis of sensor/actuator bus interoperability standard alternatives for semiconductor manufacturing," in Sensors Expo Conference Proceedings, Sep. 1994.
[24] F.-L. Lian, J.R. Moyne, and D.M. Tilbury, "Performance evaluation of control networks: Ethernet, ControlNet, and DeviceNet," Technical Report UM-MEAM-99-02, http://www.eecs.umich.edu/~impact, Feb. 1999.
[25] K. Tindell, A. Burns, and A.J. Wellings, "Calculating Controller Area Network (CAN) message response times," Control Engineering Practice, vol. 3, no. 8, pp. 1163–1169, Aug. 1995.
[26] F.-L. Lian, J.R. Moyne, and D.M. Tilbury, "Control performance study of a networked machining cell," to appear in Proceedings of the American Control Conference, Chicago, IL, June 2000.
Nomenclature
Ndata : size of the message data (bytes)
Novhd : size of the message overhead (bytes)
Npad : size of the message pad (bytes)
Nstuff : number of stuff bits
Pe : data coding efficiency
Putil : network utilization
Tblock : blocking time due to network traffic
Tbit : bit time
Tdcode : decoding time at the destination node
Tdcomp : computation time at the destination node
Tdelay : total message time delay
Tframe : frame transmission time
Tguard : guardband time
Tmax : maximum time delay
Tnode : time allotted to one node in a token rotation
Tperi : message period
Tretx : retransmission (backoff) time
Tpost : postprocessing time at the destination node
Tpre : preprocessing time at the source node
Tprop : propagation time
Tqueue : queueing time at the source node
Tresid : residual time
Tscode : encoding time at the source node
Tscomp : computation time at the source node
Ttx : transmission time
Ttoken : token rotation time
Twait : waiting time at the source node
Nhp : number of higher-priority messages
Nnode : number of nodes
Nnoqueue : number of messages without queueing delay
Nqueue : number of queued messages
Tdest : time at which the message is used at the destination node
Tsrc : time at which the message is generated at the source node
Table 1.
Table 2.
Simulation results for the three releasing policies with a message period of 5000 μs (10-node
case).

                           Zero    Random    Scheduled
Average time delay (μs)
  Ethernet                 1081      172         58
  ControlNet                241      151         32
  DeviceNet                1221      620        222
Efficiency (%)
  Ethernet                  5.3     33.0        100
  ControlNet               13.3     21.1        100
  DeviceNet                18.2     35.8        100
Utilization (%)
  Ethernet                 34.5     16.4       11.5
  ControlNet                6.4      6.4        6.4
  DeviceNet                44.4     44.4       44.4
Table 3.
The data size (bytes) and message period (ms) of the 53 nodes. Not all nodes have the same
period, but each node has its own constant period of messages that need to be sent. These 53
nodes can be grouped into six groups by their message periods: 5 ms (8 nodes), 10 ms (2
nodes), 20 ms (1 node), 50 ms (30 nodes), 100 ms (6 nodes), and 1000 ms (6 nodes).

Node  Size (byte)  Period (ms) | Node  Size  Period | Node  Size  Period
  1       8           100      |  21     2    1000  |  41     1     50
  2       8           100      |  22     3      50  |  42     8      5
  3       8          1000      |  23     1      50  |  43     8      5
  4       8           100      |  24     1      50  |  44     1     50
  5       8          1000      |  25     1      50  |  45     1     50
  6       8           100      |  26     1      50  |  46     1     50
  7       8             5      |  27     1      50  |  47     1     50
  8       8             5      |  28     1      50  |  48     1     50
  9       8             5      |  29     8      10  |  49     8      5
 10       8           100      |  30     8      10  |  50     2     50
 11       8             5      |  31     2      50  |  51     8     50
 12       8           100      |  32     8       5  |  52     1     50
 13       1          1000      |  33     1    1000  |  53     8     50
 14       4            50      |  34     8      50  |
 15       1            50      |  35     1      50  |
 16       1            50      |  36     2    1000  |
 17       2            50      |  37     1      50  |
 18       1            20      |  38     1      50  |
 19       1            50      |  39     7      50  |
 20       3            50      |  40     1      50  |
Table 4.
Simulation results for the SAE example under three node numbering assignments: Case I
(subsystem ordering), Case II (decreasing message period ordering), and Case III
(increasing message period ordering).

                              Case I                Case II               Case III
                        Zero   Rand.  Sch'd   Zero   Rand.  Sch'd   Zero   Rand.  Sch'd
Average time delay (μs)
  Ethernet              3268    122     58    3377    103     58    3362     80     58
  ControlNet             558    417    439     555    406    458     562    407    444
  DeviceNet             2176    428    453    2021    305    568    2202    253    386
Efficiency (%)
  Ethernet               1.7   47.2  100.0     1.8   55.8  100.0     1.8   71.8  100.0
  ControlNet             5.6    7.5    7.1     5.6    7.7    6.8     5.6    7.7    7.0
  DeviceNet              9.1   46.2   43.7     9.8   64.8   34.8     9.0   78.1   51.2
Utilization (%)
  Ethernet              47.6   16.6   14.5    47.5   16.3   14.5    47.5   15.4   14.5
  ControlNet             7.9    7.9    7.9     7.9    7.9    7.9     7.9    7.9    7.9
  DeviceNet             49.7   49.7   49.7    49.7   49.7   49.7    49.7   49.7   49.7
[Figure: Ethernet frame format — Preamble, Start Delimiter, Destination Address, Source
Address, Data Length, Data (0–1500 bytes), Pad (0–46 bytes), and Checksum; the
data-plus-pad field is 46–1500 bytes, with 22 bytes of overhead plus a 4-byte checksum.]
[Figure: ControlNet frame format — Preamble, Start Delimiter, Source MAC ID, LPackets
(0–510 bytes), CRC, and End Delimiter, with 3 + 4 bytes of overhead. Each LPacket carries
a Size byte, a Control byte, a Tag, and Data (0–506 bytes).]
[Figure: ControlNet medium access timing — each network update interval contains a
scheduled portion, an unscheduled portion, and a guardband, with the implicit token
rotating through the node numbers in each portion.]
[Figure: DeviceNet (CAN) message frame — Bus Idle, SOF, Arbitration Field (11-bit
identifier and RTR bit), Control Field (r1, r0, and DLC), Data Field, CRC Field (15 bits
plus delimiter), ACK (slot plus delimiter), EOF, Int, and Bus Idle.]
[Figure: log–log plot of transmission time (μs) vs. data size (bytes) for Ethernet,
ControlNet, and DeviceNet.]
Figure 5. A comparison of the transmission time vs. the data size for the three networks.
[Figure: plot of data coding efficiency vs. data size (bytes) for Ethernet, ControlNet, and
DeviceNet.]
Figure 6. A comparison of the data coding eciency vs. the data size for each network.
[Figure: timing diagram — at the source node, a task initializes, computes (Tscomp), and
encodes (Tscode); the message enters the queue (Tqueue) and is blocked (Tblock) until its
first bit is sent; the frame occupies the network channel for Tframe plus the propagation
time Tprop; at the destination node, the message is decoded (Tdcode) and processed
(Tdcomp) before the task ends. These intervals group into Tpre, Twait, Ttx, and Tpost
between Tsrc and Tdest.]
Figure 7. A timing diagram showing time spent sending a message from a source node to a destination
node.
Tdelay = Tdest − Tsrc = Tpre + Twait + Ttx + Tpost. Delays occur at the source node due to computation
and coding of the message, queueing at the source node, and blocking due to network traffic. Once
the message is sent, there is a propagation time delay (due to the physical length of the network)
and a transmission time delay (due to the message size). At the destination node, there are again
decoding and computation delays before the data can be used.
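The additive structure of this decomposition can be written out as a minimal sketch; the component values below are made-up placeholders, and the sub-groupings are taken from the timing diagram:

```python
# Minimal sketch of the delay decomposition in Figure 7.  Only the additive
# structure (Tdelay = Tpre + Twait + Ttx + Tpost) comes from the figure;
# the numbers are illustrative placeholders.

delay_components_us = {
    "T_pre":  40.0,   # source computation (T_scomp) + encoding (T_scode)
    "T_wait": 120.0,  # queueing (T_queue) + blocking (T_block)
    "T_tx":   57.6,   # frame transmission (T_frame) + propagation (T_prop)
    "T_post": 35.0,   # destination decoding (T_dcode) + computation (T_dcomp)
}

T_delay = sum(delay_components_us.values())  # = T_dest - T_src
print(round(T_delay, 1))  # 252.6
```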
[Figure: three panels, one per releasing policy (zero, random, and scheduled), showing
average time delay vs. message period (s) on log–log axes for Ethernet, ControlNet, and
DeviceNet.]
Figure 8. A comparison of the average time delay vs. message period (10-node case).
[Figure: panels (a), (b), and (c), one per releasing policy, showing the time delay of
messages 1–100 of nodes 1, 5, and 10 (Tperi = 5000 μs) on logarithmic axes for Ethernet,
ControlNet, and DeviceNet.]
Figure 9. The time delay history of nodes 1, 5, and 10 with a period of 5000 μs and using the three releasing
policies (10-node case).
[Figure: time delay (μs) vs. message number (1–100) of each node (period = 5000 μs)
under the three releasing policies, shown separately for (a) Ethernet, (b) ControlNet, and
(c) DeviceNet.]
Figure 10. Message time delay associated with three releasing policies (10-node case).
The estimated mean, maximum, and minimum values are computed from the network analysis for
the zero and scheduled releasing policies.
[Figure: three panels (zero, random, and scheduled releasing policies) showing message
time delay vs. node number (1–53) for Ethernet, ControlNet, and DeviceNet.]
Figure 11. Statistical demonstration of message time delay on three control networks for the SAE example:
node numbering by subsystem.
Note: The symbols "circle," "cross," and "diamond" denote the message time delay means of
Ethernet, ControlNet, and DeviceNet, respectively; the dash-dot, dashed, and solid lines represent
the amount of message time delay variability, computed from the simulation results as twice the
standard deviation of the time delay.
[Figure: message time delay vs. node number for DeviceNet under the zero and scheduled
releasing policies, for Case I, Case II (ordered by decreasing message period), and Case III
(ordered by increasing message period).]
Figure 12. Statistical demonstration of message time delay on DeviceNet for the SAE example.
Note: The symbol "diamond" denotes the message time delay means; the solid lines represent the
amount of message time delay variability, computed from the simulation results as twice the
standard deviation of the time delay.