2005-IEEE-Gateway-based Multicast Protocol - A Novel Multicast Protocol For Mobile Ad Hoc Networks
Abstract: Multicasting in mobile ad hoc networks (MANETs) poses several challenging problems.
Though a number of multicast protocols for MANETs exist in the literature, there are many areas
where improvements are desirable and possible. A new multicast protocol, called the gateway-based
multicast protocol (GBMP), is proposed for MANETs; it seeks to improve upon the
existing protocols in terms of the speed and the cost of the multicast tree repair mechanism, the
transmission efficiency and the amount of control overhead. GBMP achieves these improvements
by using a number of novel features, such as both global and local maintenance of group-shared
trees, a bidirectional multicast tree repair mechanism and the suppression of unnecessary
acknowledgments. In addition, we have introduced a new metric called weighted occurrence
of consecutive packet loss to measure the discontinuity in data packet delivery. Extensive
simulation study shows that GBMP outperforms the more established ODMRP and ADMR
protocols in a number of important performance metrics under different traffic patterns and source
node counts.
Before sending the attach acknowledgment (AAck), a node waits for a random delay according to its priority. During the wait, if it does not receive any AAck meant for the initiator of the AReq, the node sends out the AAck; otherwise, it does not send any. The node sending the AAck is called the attach point node (APN). The receiver that initiated the AReq responds to the first AAck it receives by sending an attach notification (ANtf) towards the APN. The intermediate node on the path towards the APN, if it is not already an FG node, becomes one for this multicast group; otherwise, it leaves PRM if it is in this mode. Then the intermediate node forwards the ANtf to the APN. When the APN receives the ANtf, if it is in PRM, it leaves this mode. Thus, the receiver resumes receiving data packets from a new path.
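The priority-based suppression of AAcks can be sketched as follows. This is a minimal illustration only: the delay constant, timer mechanism and packet format are illustrative assumptions, not part of GBMP's specification.

```python
import random
import threading

# Minimal sketch of the priority-based AAck suppression described above.
# The delay constant, timer mechanism and packet format are assumptions.
AACK_SLOT = 0.01  # seconds of extra delay per priority level (assumed)

class AAckResponder:
    def __init__(self, node_id, priority, send_fn):
        self.node_id = node_id
        self.priority = priority          # lower value = higher priority (assumed)
        self.send_fn = send_fn            # callback that transmits a packet
        self.answered_initiators = set()  # AReq initiators already served by some AAck

    def on_areq(self, initiator):
        """On receiving an AReq, wait a priority-weighted random delay,
        then send an AAck only if no other AAck for this initiator was heard."""
        delay = random.uniform(0, AACK_SLOT * (self.priority + 1))
        threading.Timer(delay, self._maybe_send_aack, args=(initiator,)).start()

    def on_overheard_aack(self, initiator):
        """Another node answered first: suppress our own AAck."""
        self.answered_initiators.add(initiator)

    def _maybe_send_aack(self, initiator):
        if initiator not in self.answered_initiators:
            # This node becomes the attach point node (APN) for the receiver.
            self.send_fn({"type": "AACK", "initiator": initiator, "apn": self.node_id})
```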
3.5.3 Co-operation between UNILR and RIAP: The bidirectional link repair is achieved by the co-operation between UNILR and RIAP. The following rules are applied for this purpose (a code sketch of both triggers follows the list):

(a) When an upstream FG node fails to receive MAX_TIMES_NO_PSV_ACK (set to three) consecutive PsvAcks from one of its downstream nodes (one that is not in PRM) that it is responsible for, and the downstream node's status is 'FG node as well as receiver' or 'FG only', the upstream node initiates the UNILR for this downstream node; otherwise, UNILR is not initiated. (The loss of three consecutive packets is considered to be the result of a link breakage, so the UNILR is initiated after three consecutive passive acknowledgments are lost.)

(b) When an 'FG node as well as receiver' fails to receive six data packets, it initiates the RIAP. (Requiring the loss of three more data packets for an 'FG node as well as receiver' separates the initiation of RIAP from that of UNILR.)
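Rules (a) and (b) amount to two threshold-based triggers. The sketch below illustrates them; the thresholds (three PsvAcks, six data packets) come from the rules above, while the data structures and the initiate_* callbacks are illustrative assumptions.

```python
from dataclasses import dataclass

# Hedged sketch of trigger rules (a) and (b). The thresholds are from the text;
# the data structures and the initiate_* callbacks are assumptions.
MAX_TIMES_NO_PSV_ACK = 3   # rule (a): consecutive missing PsvAcks before UNILR
RIAP_DATA_LOSS_LIMIT = 6   # rule (b): consecutive missing data packets before RIAP

@dataclass
class DownstreamState:
    status: str              # 'FG_AND_RECEIVER', 'FG_ONLY' or 'RECEIVER_ONLY'
    in_prm: bool = False
    missed_psvacks: int = 0

def on_missing_psvack(down: DownstreamState, initiate_unilr) -> None:
    """Rule (a): run at the upstream FG node for one of its downstream nodes."""
    down.missed_psvacks += 1
    if (down.missed_psvacks >= MAX_TIMES_NO_PSV_ACK
            and not down.in_prm
            and down.status in ("FG_AND_RECEIVER", "FG_ONLY")):
        initiate_unilr(down)      # three consecutive losses => assume link breakage
        down.missed_psvacks = 0

def on_missing_data_packet(missed_so_far: int, initiate_riap) -> int:
    """Rule (b): run at an 'FG node as well as receiver'; returns the updated count."""
    missed_so_far += 1
    if missed_so_far >= RIAP_DATA_LOSS_LIMIT:
        initiate_riap()           # three more losses than rule (a) separates UNILR and RIAP
        return 0
    return missed_so_far
```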
4 Performance evaluation

In this Section, we compare the performances of ODMRP, ADMR and GBMP through extensive simulations.

4.1 Simulation environment
We evaluate the performance of our proposed protocol GBMP, and compare it with ODMRP and ADMR, using GloMoSim [21]. Both ODMRP and ADMR are mature protocols and have their respective IETF drafts. ODMRP has to flood the join query frequently to refresh the multicast architecture in time to cope with node mobility; also, the lifetime of an FG node should not be too long, otherwise the bandwidth efficiency decreases. Thus, we choose 3 s for the join query interval and 4 s for the lifetime of an FG node. For ADMR, a source sends a data packet every 30 s using network flooding. For GBMP, the GJQ is sent every 60 s and the LJQ every 9 s. These long refresh intervals for the multicast architecture are chosen for ADMR and GBMP because these two protocols employ link repair mechanisms.

The radio transmission range is 250 m. The propagation path-loss model is free space. IEEE 802.11 is used as the MAC-layer protocol. The channel capacity is assumed to be 2 Mbit/s. Each simulation runs for 1000 s. In all the simulation runs there is one multicast group, and all multicast members retain their membership throughout the simulation. The sources start transmitting CBR traffic at the fifth second of the simulation time and stop at the 1000th second. There are 50 mobile nodes in an area of 1000 m × 1000 m. From these nodes, 20 are randomly chosen to be multicast group members. For simplicity, the multicast sources are randomly selected from the 20 group members, and these sources are also receivers. The mobility model is random waypoint. All the nodes move at the same speed, which varies from one simulation to another. The pause time is assumed to be 5 s.
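For reference, the simulation settings described above can be collected as follows. The key names are ours (they are not GloMoSim configuration identifiers); the values are those stated in the text.

```python
# The simulation settings described above, gathered into a plain Python dict.
SIM_PARAMS = {
    "num_nodes": 50,
    "area_m": (1000, 1000),
    "radio_range_m": 250,
    "propagation": "free space",
    "mac": "IEEE 802.11",
    "channel_capacity_bps": 2_000_000,
    "sim_time_s": 1000,
    "multicast_groups": 1,
    "group_members": 20,
    "mobility_model": "random waypoint",
    "pause_time_s": 5,
    "odmrp": {"join_query_interval_s": 3, "fg_lifetime_s": 4},
    "admr": {"flooded_data_interval_s": 30},
    "gbmp": {"gjq_interval_s": 60, "ljq_interval_s": 9},
}
```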
IEE Proc.-Commun., Vol. 152, No. 6, December 2005 815
Two different traffic patterns are defined. For traffic pattern 1 (referred to as TP-1), constant bit rate data packets, 256 bytes long, are sent at an interval of 250 ms; for traffic pattern 2 (TP-2), each source sends constant bit rate data packets, 256 bytes long, at an interval of 100 ms. The two traffic patterns do not represent any particular application, but are defined to test the protocols' capability of delivering data packets under different traffic intensities.
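For reference, the per-source offered load implied by the two traffic patterns follows directly from the packet size and the sending interval; the bit rates below are simple arithmetic, not figures quoted elsewhere in the paper.

```python
# Per-source offered load implied by TP-1 and TP-2. The packet size and
# sending intervals are from the text; the bit rates are derived arithmetic.
PACKET_BYTES = 256

def offered_load_bps(interval_s):
    return PACKET_BYTES * 8 / interval_s

print(offered_load_bps(0.250))  # TP-1: 8192.0 bit/s (~8.2 kbit/s) per source
print(offered_load_bps(0.100))  # TP-2: 20480.0 bit/s (~20.5 kbit/s) per source
```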
4.2 Metrics
The following metrics are used to compare the performance of the three protocols:

Packet delivery ratio (PDR): the percentage of data packets that is delivered to (or received by) all the receivers, which is defined as

\[
\mathrm{PDR} = \frac{\sum_{i=1}^{N_R} (\text{number of data packets actually delivered to receiver } i)}{\sum_{i=1}^{N_R} (\text{number of data packets that should be delivered to receiver } i)} \times 100\%
\]

where $N_R$ is the number of receivers. PDR demonstrates a protocol's ability to deliver data packets and is directly related to the performance of the upper layers.

Number of data packets delivered per data packet transmitted (transmission efficiency, TE): indicates the multicast efficiency of the protocol, which is defined as

\[
\mathrm{TE} = \frac{\sum_{i=1}^{N_R} (\text{number of data packets actually delivered to receiver } i)}{\sum_{j=1}^{N_{NW}} (\text{number of data packets transmitted by node } j)}
\]

where $N_{NW}$ is the number of nodes in the network.

Control overhead (CO): the number of control packets transmitted per data packet delivered to the receivers, which is defined as

\[
\mathrm{CO} = \frac{\sum_{j=1}^{N_{NW}} (\text{number of control packets transmitted by node } j)}{\sum_{i=1}^{N_R} (\text{number of data packets actually delivered to receiver } i)}
\]

Control byte overhead (CBO): the number of control bytes transmitted per data packet delivered to the receivers, which is defined as

\[
\mathrm{CBO} = \frac{\sum_{j=1}^{N_{NW}} (\text{length in bytes of the control packets transmitted by node } j)}{\sum_{i=1}^{N_R} (\text{number of data packets actually delivered to receiver } i)}
\]

Normalised packet overhead (NPO): the total number of all the data and control packets transmitted per data packet delivered to the receivers, which is defined as

\[
\mathrm{NPO} = \frac{\sum_{j=1}^{N_{NW}} (\text{number of data packets transmitted by node } j) + \sum_{j=1}^{N_{NW}} (\text{number of control packets transmitted by node } j)}{\sum_{i=1}^{N_R} (\text{number of data packets actually delivered to receiver } i)}
\]
Consecutive packet loss: the distribution of the number of data packets lost consecutively at the receivers. We define the weighted occurrence of consecutive packet loss (WOCPL) to be

\[
\mathrm{WOCPL} = \frac{\sum_{i=4}^{N} i^2\,\mathrm{OCPL}_i}{\sum_{i=1}^{N} i\,\mathrm{OCPL}_i}
\]

where $\mathrm{OCPL}_i$ stands for the number of occurrences of a loss of $i$ consecutive data packets. We let $N = 30$ in our simulation; we do not record losses of more than 30 consecutive packets, as such a loss implies that partitioning has occurred, which has nothing to do with the performance of the protocols. The denominator represents the total number of data packets lost due to link breakages, while the numerator is the weighted number of data packets lost (normally, the loss of a small number of data packets is not a serious problem for real-time traffic; hence this number is calculated from four to $N$). WOCPL measures the discontinuity in the data packet delivery. The smaller the value of WOCPL for a protocol, the less the protocol suffers from consecutive packet loss, and the more suitable it is for real-time traffic.
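The metrics above map directly onto per-receiver and per-node counters collected during a simulation run. The following sketch shows one way such counters could be turned into the Section 4.2 metrics; the variable names are illustrative, and the formulas follow the definitions above (with N = 30 for WOCPL).

```python
# One way the Section 4.2 metrics could be computed from per-receiver and
# per-node counters gathered during a run. Names are illustrative.
def pdr(delivered, expected):
    """Packet delivery ratio (%); one list entry per receiver."""
    return 100.0 * sum(delivered) / sum(expected)

def te(delivered, data_tx):
    """Transmission efficiency; data_tx has one entry per network node."""
    return sum(delivered) / sum(data_tx)

def co(ctrl_tx, delivered):
    """Control packets transmitted per data packet delivered."""
    return sum(ctrl_tx) / sum(delivered)

def cbo(ctrl_bytes_tx, delivered):
    """Control bytes transmitted per data packet delivered."""
    return sum(ctrl_bytes_tx) / sum(delivered)

def npo(data_tx, ctrl_tx, delivered):
    """All packets (data + control) transmitted per data packet delivered."""
    return (sum(data_tx) + sum(ctrl_tx)) / sum(delivered)

def tally_ocpl(received_flags):
    """Count occurrences of i consecutive losses from one receiver's
    per-packet received/lost sequence (True = received)."""
    ocpl, run = {}, 0
    for ok in received_flags:
        if ok:
            if run:
                ocpl[run] = ocpl.get(run, 0) + 1
            run = 0
        else:
            run += 1
    if run:
        ocpl[run] = ocpl.get(run, 0) + 1
    return ocpl

def wocpl(ocpl, n=30):
    """Weighted occurrence of consecutive packet loss (runs longer than n not recorded)."""
    num = sum(i * i * ocpl.get(i, 0) for i in range(4, n + 1))
    den = sum(i * ocpl.get(i, 0) for i in range(1, n + 1))
    return num / den if den else 0.0
```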
4.3 Simulation results

4.3.1 Impact of mobility and traffic load

4.3.1.1 Single source case: Figures 5 and 6 show the performance of the three protocols under the two traffic patterns when there is one source in the network. In most of the cases, GBMP has the highest packet delivery ratio with the lowest control overheads and the lowest normalised packet overhead among the three, while exhibiting slightly lower transmission efficiency than that of ADMR.
[Figures 5 and 6: performance of ODMRP, GBMP and ADMR against node speed (m/s) for the single-source case under TP-1 and TP-2, respectively; panels (a) PDR, (b) TE, (c) CO, (d) CBO, (e) NPO, (f) WOCPL.]
GBMP and ADMR have a higher packet delivery ratio (PDR) than ODMRP in most of the mobility scenarios. The reasons are as follows. First, for the single-source case, the network topology is actually a source-based tree. For GBMP and ADMR, because they employ loosely structured multicast forwarding trees, the number of sources in the network has very little effect on the topology; in ODMRP, however, there is not enough redundancy in the topology when there is only one source. Secondly, ODMRP requires the source to flood the join query frequently in order to cope with link breakages and suboptimal routes, and it has no mechanism to repair broken links. GBMP and ADMR, on the other hand, have their own mechanisms for link breakage repair, so broken links can be repaired promptly and the packet loss therefore decreases. Thirdly, ODMRP's frequent sending of broadcast packets (join queries) increases collisions, which decreases the number of data packets received. With the help of their link repair mechanisms, both GBMP and ADMR need not refresh the multicast architecture as often as ODMRP does, leading to a higher packet delivery ratio.

Note that the PDR of ADMR is lower than that of GBMP. This is due to ADMR's pruning mechanism, in which a node N prunes itself from the multicast tree if it has no downstream nodes. With the pruning mechanism, ADMR is able to achieve high transmission efficiency (see Figs. 5b and 6b) as fewer data packets are forwarded. Nevertheless, in a mobile network, it is possible that after some time node N is required to forward data packets again for its 'new' downstream nodes. In ADMR, the only way for node N to rejoin the multicast forwarding tree is to rely on its downstream nodes' local or global repair mechanism. In contrast, GBMP uses the PRM at the FG nodes to reduce data packet forwarding while remaining on the tree; GBMP also sends heartbeat packets, so that newly arriving downstream nodes can be quickly discovered and node N returns to normal forwarding status earlier. In this manner, although the transmission efficiency of GBMP is slightly lower than that of ADMR, the packet delivery ratio is improved and the control overhead is reduced. PRM thus makes a good trade-off between transmission efficiency and packet delivery ratio.
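A rough sketch of the heartbeat behaviour described above is given below; the state machine, message format and interval are illustrative assumptions rather than GBMP's exact specification.

```python
# Illustrative sketch (not the paper's exact state machine) of how an FG node
# in PRM could use heartbeats to discover newly arriving downstream nodes and
# return to normal forwarding. Interval and message names are assumptions.
HEARTBEAT_INTERVAL_S = 3.0

class FGNode:
    def __init__(self, broadcast):
        self.broadcast = broadcast   # callback that transmits a packet
        self.in_prm = False          # reduced-forwarding mode flag

    def enter_prm(self):
        """Stay on the tree but stop forwarding data while no one needs it."""
        self.in_prm = True

    def tick(self):
        """Called every HEARTBEAT_INTERVAL_S; in PRM, advertise presence."""
        if self.in_prm:
            self.broadcast({"type": "HEARTBEAT"})

    def on_downstream_interest(self, node_id):
        """A downstream node answered the heartbeat: resume normal forwarding."""
        if self.in_prm:
            self.in_prm = False
```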
In the traffic pattern TP-1, the control overheads and normalised packet overheads of GBMP and ADMR are less than those of ODMRP (Figs. 5c and 5e). The reason is that the flooding intervals of GBMP and ADMR (used to optimise the multicast architecture) are longer than that of ODMRP. Although the local repair mechanisms of GBMP and ADMR generate extra overheads, the benefits outweigh the overheads. Nevertheless, in the heavier traffic pattern TP-2 (Figs. 6c and 6e), the control overheads of ADMR are higher than those of ODMRP, and the normalised packet overheads of ADMR and GBMP approach that of ODMRP as node speed increases. The reasons for this behaviour in TP-2 are as follows. Under the heavier traffic load, all three protocols suffer from frequent data packet loss, and the link repair procedures are therefore initiated more frequently in GBMP and ADMR. GBMP employs only a local repair mechanism, so its control overheads increase moderately and remain less than those of ODMRP and ADMR. ADMR, however, utilises a global repair mechanism once the local repair fails, and as a result its control overheads increase, especially at high speeds. Nevertheless, owing to the high transmission efficiency of GBMP and ADMR, their NPOs are still less than that of ODMRP.

In TP-2, we note that ADMR has higher control overheads than GBMP. This is mainly due to ADMR's global repair mechanism. Recall that whenever a link breakage occurs, ADMR lets the node immediately downstream of the broken link, say node N, initiate a local repair. The nodes on the subtree 'below' node N start a timer to monitor the result of the local repair. When the timer expires and there is still no data packet coming from their upstream nodes, the receivers on this subtree initiate their own global repair. When the global repairs are triggered, the control overheads incurred are proportional to the number of receivers on the subtree and the number of nodes in the network. In GBMP, however, whenever a link breakage that cannot be repaired by UNILR occurs, the affected receivers initiate the RIAP independently. Both the UNILR and the RIAP are restricted to local areas; the overheads are therefore only proportional to the number of nodes in the nearby area (within two hops) and the number of receivers affected by the link breakage. Consequently, the control overheads of GBMP are less than those of ADMR.
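The receiver-side fallback described above (arm a timer when upstream data stops and escalate to a global repair if it expires) can be sketched as follows; the timeout value and the class and method names are illustrative assumptions, not ADMR's actual parameters.

```python
import threading

# Sketch of a receiver-side fallback for ADMR-style repair: when a subtree
# loses its upstream data flow, each receiver arms a timer and escalates to a
# (network-wide) global repair if data does not resume in time.
LOCAL_REPAIR_WAIT_S = 2.0  # assumed timeout

class RepairMonitor:
    def __init__(self, start_global_repair):
        self.start_global_repair = start_global_repair
        self.timer = None

    def on_upstream_silence(self):
        """Called when the receiver notices that data packets have stopped."""
        if self.timer is None:
            self.timer = threading.Timer(LOCAL_REPAIR_WAIT_S, self._expired)
            self.timer.start()

    def on_data_packet(self):
        """Data resumed: the local repair (or a new path) succeeded."""
        if self.timer is not None:
            self.timer.cancel()
            self.timer = None

    def _expired(self):
        # Local repair did not restore delivery in time; fall back to a
        # global repair, whose cost grows with the number of such receivers.
        self.timer = None
        self.start_global_repair()
```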
The control byte overhead (CBO) performance varies between the two traffic patterns. For TP-1, GBMP has the least CBO (Fig. 5d), owing to its reduced control overheads. Under TP-2, however, GBMP's CBO is a little higher than ODMRP's (Fig. 6d). This is because, in heavy traffic, the probability of losing data packets increases due to increased collisions, and GBMP therefore sends more control packets. Considering that, in practice, not all the nodes move at the same high speed (whereas the nodes in our simulations do), there will normally be fewer control packets, and consequently a lower CBO.

4.3.1.2 Multiple source case – three sources: Figures 7 and 8 demonstrate the performance of the three protocols under the two traffic patterns when there are three sources in the network.
[Figures 7 and 8: performance of ODMRP, GBMP and ADMR against node speed (m/s) for the three-source case under TP-1 and TP-2, respectively; panels (a) PDR, (b) TE, (c) CO, (d) CBO, (e) NPO, (f) WOCPL.]
Under TP-1, the packet delivery ratio is the highest in ODMRP, while under TP-2 it is the highest in GBMP. Under a relatively light traffic load (i.e. TP-1), the advantage of the mesh used in ODMRP is evident: with the provision of moderate redundancy, ODMRP is able to cope with mobility well. For GBMP and ADMR, the topology redundancy is less than that of ODMRP, which leads to a slightly lower packet delivery ratio than ODMRP. However, ODMRP's high PDR is achieved at the cost of lower transmission efficiency and more overheads. Moreover, under the higher traffic load (i.e. TP-2), GBMP gives the best PDR performance due to its effective local repair mechanisms. The degradation in PDR for ODMRP under heavy traffic load is because of the increased collisions, which occur due to more packets being forwarded in the mesh.

With the increase of node mobility, especially above 30 m/s, GBMP's transmission efficiency is slightly better than that of ADMR (Figs. 7b and 8b), which shows that the PRM performs well under heavy traffic load at high speeds.

Under TP-1, both GBMP's and ADMR's control overheads and normalised packet overheads are much less than ODMRP's. However, under TP-2, the control overhead and control byte overhead of GBMP are slightly higher than those of ODMRP at speeds above 30 m/s, which should be acceptable, considering that
(1) in our simulation runs, node speeds are much higher than those likely to be found in practical scenarios; and
(2) GBMP yields a higher packet delivery ratio, higher transmission efficiency and lower normalised packet overheads.

Figures 5f and 6f show the WOCPL for the single-source case under traffic patterns TP-1 and TP-2, respectively; Figs. 7f and 8f demonstrate the WOCPL for the three-source case. In all four scenarios, GBMP delivers the best WOCPL performance at all the speeds. GBMP's UNILR and RIAP promptly repair the broken links, and the instances of consecutive data packet loss are therefore reduced.

The performance of ADMR is not as good as that of GBMP. ADMR's repair mechanism relies on the operation of a single node (say node N) for the local repair, and after the local repair fails, affected receivers initiate their own global repairs. A delay exists between the initiation of the local repair and the global repair, during which receivers may continue to suffer from packet loss. Furthermore, even when the local repair succeeds, because of the mobility in the network the subtree 'below' node N may have other link breakages. Consequently, the timers used by the receivers on this subtree to monitor the result of the local repair may expire before all the broken links are repaired. Therefore, some of the receivers may still initiate global repairs independently. This process delays the resumption of data packet delivery without significantly reducing control overheads.

ODMRP has the most severe discontinuity in receiving data packets because, instead of utilising a repair mechanism, all the sources flood the network every three seconds, resulting in more collisions, which worsens the continuity of data packet delivery.
5 Conclusions