
Received: 19 December 2019 | Revised: 10 September 2020 | Accepted: 28 September 2020 | IET Communications

DOI: 10.1049/cmu2.12061

ORIGINAL RESEARCH PAPER

Study on performance of AQM schemes over TCP variants in different network environments

Salman Muhammad(1), Touseef Javed Chaudhery(2), Youngtae Noh(1)

(1) Department of Electrical and Computer Engineering, Inha University, South Korea
(2) Department of Computer Science, Air University, Pakistan

Correspondence: Youngtae Noh, Department of Electrical and Computer Engineering, Inha University, 100 Inharo, Nam-gu, 22212, Republic of Korea. Email: [email protected]

Funding information: Inha University

Abstract
Increasing the size of memory in network devices leads to the problem of a persistently full buffer (a.k.a. bufferbloat). The objective of this study is to compare the recently introduced Controlled Delay (CoDel) scheme with a traditional active queue management method, the Random Early Detection (RED) algorithm, over TCP variants. To explore the potential of CoDel over RED, TCP variants have been assessed in three settings: variable congestion and fixed payload (VCFP), variable payload and fixed congestion (VPFC), and high congestion and high payload (HCHP). We assessed the CoDel and RED schemes for active queue management (AQM) using three performance metrics: link utilization, drop rate, and queuing delay. The analytical results show that CoDel outperformed RED in most aspects over variants of TCP because of its auto-tuning and auto-adjustment features. However, RED outperformed CoDel in a few cases. In the VCFP setting, RED recorded a lower drop rate over all TCP variants. Moreover, in the VPFC setting, RED with a payload of 500–1000 bytes performed better in terms of drop rate. Finally, in the HCHP setting, there were two cases where RED, over TCP NewReno and Vegas, performed well in terms of drop rate.

This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
© 2020 The Authors. IET Communications published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.

IET Commun. 2021;15:93–111. wileyonlinelibrary.com/iet-com

1 INTRODUCTION

The progressive development of computer applications has continued at pace in the last few years. Applications available nowadays have more features embedded into them than their predecessors. Approximately 90% of Internet traffic is estimated to be serviced by the Transmission Control Protocol (TCP) [1]. Moreover, the use of social media, streaming, and file transfer services has witnessed remarkable spikes that have elevated the chances of Internet congestion. Vendors in various sectors of the Internet are attempting to place excess buffers in network devices to prevent packet drops and increase link utilisation [2]. However, the large buffers in network devices conflict with the nature of the TCP. From a technical aspect, the TCP increases its transmission rate continuously until the buffer is filled. As soon as a packet drop is sensed (which is regarded as a signal of congestion), the TCP cuts its sending rate. Consequently, large buffers, primarily designed to prevent packet drops, create excessive queuing latency.

The queuing latency caused by large buffers is referred to as bufferbloat [3], and it is critical for frequently used TCP-based applications, such as the file transfer protocol (e.g. Dropbox, Google Drive) and the hypertext transfer protocol (HTTP), because large queues lead to long round-trip times (RTTs) and lower throughput. Increasing the memory size in network devices poses a variety of challenges. For instance, the large buffers that cable and ADSL Internet service providers (ISPs) use to shape network traffic (traffic shaping is used for bandwidth management and is common in packet-switched networks; to cope with the required traffic profile, either all or parts of the datagrams are delayed) create long queues that can delay packets for several hundred milliseconds [4]. In network devices, the use of large buffers leads to an increase in user-perceived latency [5-7]. The delays experienced due to buffers in 3G/4G networks have also been reported in [8]. Similarly, existing cellular networks use large buffers at the base stations to tackle bursty traffic. However, such buffers lead to a problem known as long flow completion time, which affects the Quality of Experience (QoE), as revealed in [9].
There is no prior communication infrastructure in multi-hop wireless networks, and information is exchanged via a chain of relay nodes. The buffers in the relay nodes lead to queuing latency, which degrades the overall performance [10]. The bufferbloat problem is a matter of concern in wireless LANs too, as stated in [11-13].

To eliminate bufferbloat, we cannot completely omit the use of buffers in network devices, because this would lead to a drastic increase in the number of dropped packets even with minor congestion, and thus the link would suffer from underutilisation. Rather than buffer management, a better approach is to manage the queues actively (i.e. by discriminating bad queues from good ones). Active queue management (AQM) is an effective way to mitigate bufferbloat. Although bufferbloat and the problems generated by it have been known for over three decades [19], AQM has not been widely deployed for a long time owing to the complexity of parameter settings over changing network links.

Fortunately, there are several ways to combat the bufferbloat problem. BBR TCP [14] is a transport layer solution proposed by Google LLC (Google) which measures the delivery rate and round-trip time of a connection. Based on these measurements it creates a model with the recent maximum bandwidth and minimum round-trip delay. BBR leverages this model to maximise the amount of data it can allow in flight at any time in the network. Another recently introduced transport layer solution is C2TCP [20]. C2TCP runs on top of conventional throughput-based TCP. This protocol flexibly satisfies the strict delay requirements of a variety of applications whilst maintaining the maximum possible throughput. C2TCP meets various target delays without any need for profiling the network state, channel prediction, or sophisticated rate adjustment. There are also numerous solutions available at the network layer, such as RED [21], CoDel [22], PIE [15], BLUE [38] and COBALT [35].

The Random Early Detection (RED) scheme is an early AQM scheme. The RED algorithm sets its probability of packet drop based on upper and lower bounds to maintain an average queue length. If the average queue length is smaller than the lower threshold, the incoming packets are enqueued without being dropped. On the contrary, if the average queue length starts to exceed the upper threshold, the incoming packet is randomly marked (or dropped), which ultimately sends a congestion signal to the transport layer and reduces the throughput. As long as the average queue length is between the upper and lower thresholds, each arriving packet is marked with a probability p, where p is a function of the average queue length. If p is large enough, the packet is dropped; otherwise it is enqueued.

As mentioned above, RED was introduced in the 1990s, but its deployment has been challenging due to the complexity of its configuration over dynamic network links. The first weakness of RED is the fluctuation of its average queue length with the level of congestion, as reported in [16-18]. That is, on a lightly congested link the average queue length is near the lower threshold, whereas at higher congestion the average queue length is greater than or equal to the upper threshold. Hence the average queuing delay is not predictable beforehand. Another weakness of RED is related to the throughput, which is sensitive to the parameter settings and the traffic load. In other words, the performance of RED deteriorates when the average queue length exceeds the upper threshold, consequently causing an increased drop rate and decreased throughput. In the literature there are several variants of RED available, such as Adaptive RED (ARED) [64], Learning-Automata-Like RED (LALRED) [23], RED-Exponential (RED_E) [24], Nonlinear RED (NRED) [25], and Effective RED (ERED) [26]. However, all the variants have similar functionality and sense congestion based on the queue length.

To overcome the RED algorithm's drawbacks, some auto-tuned AQMs were recently introduced, of which the Controlled Delay (CoDel) scheme is among the most robust. The authors have claimed that CoDel is parameterless, controls delay regardless of link rates, traffic loads, and round-trip delays, and adapts to changing link rates [27]. CoDel estimates the sojourn time (the time taken by a packet from the enqueuing to the dequeuing process) of each packet and compares it with a defined threshold. If the sojourn time is greater than the threshold, the dropping flag switches to high. If the dropping flag remains high for a pre-defined interval, the packet is dropped and the interval is updated according to the control law defined in CoDel; otherwise, the dropping flag is reset and the packet is forwarded to the network. When a packet is dropped by CoDel, the TCP senses the packet drop and adjusts its congestion window.

In any AQM, the common objective is to sense congestion, either in the form of queue length (e.g. RED) or queue delay (e.g. CoDel), and inform the TCP of it. The TCP in turn adjusts its congestion window, and the sending rate is thus reduced. There are different types of TCPs, where each has a specific purpose. In addition, each TCP has a different approach to controlling its congestion window. An AQM is therefore highly influenced by the choice of TCP used and the network environment provided. In this study, the influence of three parameters on the performance of the AQM over variants of TCP is investigated: drop rate, link utilisation, and the delay experienced by every packet from ingress to egress. We use three different scenarios to conduct a detailed evaluation of CoDel with varying levels of congestion and changing payload sizes over various TCP variants. The scenarios are (i) variable congestion and fixed payload size (VCFP), (ii) variable payload size and fixed congestion (VPFC), and (iii) high congestion and high payload size (HCHP). The traditional RED AQM is used as a benchmark to compare with CoDel.

The remainder of this paper is structured as follows. Section 2 gives some related work on congestion control using AQM schemes. Section 3 presents our contribution. Section 4 details the network topology, and provides a brief review of each TCP variant and the applied AQM. Section 5 describes the evaluation, including the simulation setup, performance metrics, and a discussion of the results. Section 6 concludes this paper.
2 RELATED WORK

Augustu et al. [28] have worked on a cross-comparison between TCP and AQM algorithms. They considered two congestion levels (i.e. 16 and 64 flows) to test the capabilities of several TCPs over AQM variants. The simulation is based on evaluating TCP performance metrics, such as goodput and RTT, and a network metric, that is, queue occupancy. However, this work lacks other dynamic changes in the simulation environment, such as a variable packet size, which is very common in mixed Internet flows. They also did not investigate the actual packet sojourn time in the network layer, which precisely depicts the packet delay in a bufferbloat. Lastly, they claim that at high congestion the general performance trend of different AQMs does not change significantly when moving from one TCP variant to another. However, this is not correct, and we have proven it in our result section (Figure 20).

In another branch of study [29], a testbed experiment was conducted on some AQM schemes such as CoDel, PIE and Adaptive RED using various congestion levels (4, 16 and 64 flows) subject to a change in target delay (from 1 ms to 30 ms). CoDel shows stable behaviour in terms of RTT and goodput compared to the other two AQMs; however, the assessment is limited to TCP metrics and does not cover the network layer. Similarly, another testbed experiment in [33] evaluates the TCP metrics (RTT and throughput) for several AQMs including CoDel and PIE. The main focus of their work is to use different buffer sizes and mitigate the bufferbloat problem by leveraging the CoDel algorithm and the byte queue limit (BQL) solution. Grazia et al. [34] have surveyed a cross-comparison of popular TCP and AQM variants. They investigated TCP performance in terms of goodput, RTT, and fairness. However, they did not consider any of the network layer performance metrics.

In [30] the load transient taking place at the edge of the network has been investigated over AQM schemes including CoDel, PIE and an aggressive version of RED known as HRED [31]. The authors considered variable congestion flows to assess the queuing delay observed at the bottleneck routers; however, this study lacks a detailed evaluation of other metrics such as link utilisation and drop rate.

The authors in [32] conducted a testbed experiment and evaluated the AQMs in terms of packet drop, latency and throughput. Nonetheless, their result evaluation is very shallow, and they did not map the TCP behaviour to each AQM. Ye et al. [39] have applied some tweaks to the existing CoDel scheme by introducing adaptive tuning of the interval (ACoDel-IT) and adaptive tuning of both target and interval (ACoDel-TIT). They tested the aforementioned algorithms in five scenarios with different congestion levels and link capacities. The authors assessed several performance metrics, such as queuing delay, packet drop and link utilisation, over TCP NewReno. However, ACoDel-IT and ACoDel-TIT have not been evaluated over different TCPs. Similarly, COBALT [35] (a combination of CoDel and BLUE) is another recently introduced AQM and has been tested in different traffic scenarios, such as light TCP traffic (5 TCP flows), heavy TCP traffic (50 TCP flows) and mixed TCP and UDP traffic (5 TCP and 2 UDP flows). Its performance is compared with CoDel in terms of queue delay and queue occupancy. Yet it has not been examined over different TCP variants to further expose its behaviour.

Kennedy et al. [36] have demonstrated the robustness of AQMs in some dynamic settings like varying link capacities, propagation delay, and varying load in a wireless environment. They evaluated four AQMs, namely RED, Fixed-Parameter Proportional Integral (PI), Model Predictive Control (MPC), and the Self-Tuning Regulator (STR). Their assessment is solely related to the queue length and drop probabilities at the network layer.

The study in [37] considered three types of TCPs, namely TCP Illinois, TCP Westwood, and TCP Vegas, and two AQMs, that is, CoDel and DropTail. They investigated the RTT, throughput, and fairness for all the TCPs by varying the target delays of CoDel.

Table 1 summarises the aforementioned works on the bufferbloat problem using varieties of AQM schemes.

3 CONTRIBUTION

To evaluate the performance of any AQM scheme it is important to test it over different TCPs and over different congestion levels. In previous studies the performance of AQM algorithms was solely evaluated at the transport layer [28, 29, 33]. Some authors, such as [30, 32, 35, 39], have assessed network layer metrics; however, they either used fewer metrics or conducted their experiments on a single TCP, which is not sufficient to explore the full capabilities of the investigated AQM schemes. In this paper we conduct a detailed evaluation of the popular AQMs CoDel and RED at the network layer, considering all the relevant performance metrics over different TCPs. This work will enable researchers to see the behaviour of CoDel and RED in more depth from the network perspective, over various TCP types and in different congestion settings. Our main contributions are summarised as follows.

∙ We use different traffic scenarios: (A) variable congestion and fixed payload (VCFP), (B) variable payload and fixed congestion (VPFC) and (C) high congestion and high payload (HCHP). We exploit such varying levels of congestion and changing payload sizes to evaluate three performance metrics of the AQM algorithms in the network layer: drop rate, link utilisation and queuing delay.
∙ We show how the TCP's behaviour (congestion window) reflects on the AQM's performance (link utilisation, drop rate and queuing delay). Each AQM is assessed over six different types of TCPs.
∙ As an impact of both AQM schemes on drop rate, link utilisation and queuing delay, we show their influence on fairness among the TCP flows and on the number of packets retransmitted by each flow.
TABLE 1 Overview of recent work

Paper | AQM variants considered | TCP variants considered | Transport layer metrics | Network layer metrics
Augustu et al. [28] | DropTail, ARED, PIE, CoDel, GREEN and PINK | Yeah, Westwood, Cubic, Hybla, Illinois, Vegas, NewReno and HighSpeed | RTT and goodput | Queue occupancy
Khademi et al. [29] | CoDel, PIE and ARED | SACK | RTT and goodput | N/A
Jarvinen et al. [30] | CoDel, PIEupdate, PIEthresh, HRED, HRED (aggressive), SFQ-CoDel | SACK1 | N/A | Queuing delay and delay spike duration
Vyakaranal et al. [32] | CoDel, PIE and RED | Reno, Vegas and BIC | Packet drop, latency and throughput | N/A
Cardozo et al. [33] | CoDel, PIE, FQ-CoDel and BQL scheme | Reno | RTT and throughput | N/A
Grazia et al. [34] | DropTail, ARED, CoDel, PIE, GREEN, PINK | Yeah, Westwood, Cubic, Hybla, Compound, Vegas, NewReno and HighSpeed | RTT, goodput and fairness | N/A
Ye et al. [39] | ACoDel-IT, ACoDel-TIT, CoDel, PIE and ARED | NewReno | RTT | Queueing delay, link utilisation and packet drops
Palmei et al. [35] | CoDel and COBALT | Reno | RTT and goodput | Queue delay and queue occupancy
Kennedy et al. [36] | RED, PI, STR and MTP | Standard TCP | TCP window size | Queue length and drop probability
Dzivhani et al. [37] | CoDel and DropTail | Westwood, Illinois, Vegas | RTT, throughput and fairness | N/A

4 TOPOLOGY AND TRAFFIC

In this study, a standard dumbbell topology is used, as shown in Figure 3. It enables us to observe the impact of congestion produced by multiple flows on a bottleneck link router. The network traffic used follows the File Transfer Protocol (FTP) [40]. There are n FTP pairs, where each connection between the i-th FTP server (FSi) and FTP client (FCi) pair is established over the TCP. These connections use various TCPs, and are investigated over different levels of congestion, from low to medium and high. Three, 10, and 14 FTP sources are used to represent the low, medium, and high levels of congestion, respectively. Furthermore, each TCP with different payload sizes is assessed at a high and fixed level of congestion, and we also consider a case with high congestion and a high payload. The routers are configured with the CoDel (or RED) algorithm, and its performance is investigated in terms of packet sojourn time, link utilisation, average persistent queue, and drop rate over each TCP variant.

4.1 Transmission control protocol (TCP)

Applications that require reliable transportation of data use the TCP, as it offers peer connectivity at the transport layer and handles handshakes between connections. It encapsulates the incoming data bytes into packets and transmits them by assigning them sequence numbers. An acknowledgment is sent to the sender if the sequence number is received; otherwise, the packet is re-transmitted upon a pre-defined timeout [41]. The TCP is also responsible for balancing load in the congested network.

There are different types of TCPs, and each is designated for a specific purpose. Some TCPs function over wireless networks, whereas others are intended for high-bandwidth-delay-product (BDP) networks, congested networks, or fair flow rates. In addition, a TCP intended for high-speed performance never performs better if it is used for a wireless network. In this study, we consider six variants of the TCP from different domains: TCP NewReno [42], TCP Vegas [43], Compound TCP [44], TCP SACK [45], TCP Cubic [46], and TCP Westwood [47]. These TCPs all fall into one of three categories: (A) congestion collapse, (B) lossy or wireless, and (C) high-speed TCPs. TCP NewReno (reactive/loss based), TCP SACK (loss based) and TCP Vegas (proactive/delay based) are congestion collapse-based TCPs, whereas Compound TCP (cTCP) (loss and delay based) and TCP Cubic (loss based) are high-speed TCPs. TCP Westwood (loss based with bandwidth estimation) is a wireless TCP. (Reactive: the obtainable bandwidth of the connection is discovered based on network losses. Loss based: network congestion is detected by packet loss. Proactive: incipient congestion is found by observing variations in the throughput. Delay based: the obtainable bandwidth in a network is estimated by packet delay.)

Whenever a packet is dropped by an AQM, it is taken as a congestion signal by the TCP, which instantly reacts to it by reducing its congestion window, which in turn reduces the transmission rate. The congestion window is controlled differently by each variant of the TCP. Because CoDel senses congestion based on queue delay, unlike RED, which detects it based on queue length, it is important to investigate CoDel's performance over different variants of TCP versus that of RED at different levels of congestion and packet sizes. This can give us a clear idea of whether (a) queue delay-based algorithms are better than queue length-based algorithms, and (b) if so, whether this holds for any given TCP or only for specific ones. We have therefore chosen TCPs of different features that can adequately test the capabilities of CoDel. A brief description of the investigated TCP variants is provided in Table 2.
TABLE 2 TCP variants along with their description

TCP variant | Base | Predecessor | Intended feature
TCP NewReno | Loss based | TCP Reno | Congestion collapse
TCP SACK | Loss based | TCP NewReno | Congestion collapse
TCP Vegas | Delay based | TCP Reno | Congestion collapse
Compound TCP | Delay-loss based | Vegas and HS-TCP | High speed
CUBIC-TCP | Loss based | BIC and H-TCP | High speed
TCP Westwood | Loss based with bandwidth estimation | TCP Reno | Wireless

4.1.1 TCP NewReno

The Reno algorithm works well when the loss of a single packet occurs in a single window of data. However, when multiple packets are lost in the same window, it faces two prominent problems. First, Reno frequently takes a timeout, as described in [48]. Second, multiple fast retransmissions and window reductions occur, as described in [49]. When an out-of-order segment arrives at the receiver, a duplicate acknowledgment (dup-ACK) is instantly sent to the sender. The sender can receive a dup-ACK for any of several reasons, such as network failure, reordering of a data segment by the network, replication of a data segment, or replication of an acknowledgment. Upon the receipt of three dup-ACKs, the sender assumes that a data segment has been lost. The TCP therefore retransmits the missing segment without waiting for the retransmission timer's expiry. This phenomenon is referred to as fast retransmit or fast recovery.

In TCP NewReno, the aforementioned problems associated with TCP Reno have been solved. Unlike Reno, TCP NewReno is not pulled out of the fast recovery operation when partial ACKs (which identify outstanding packets at the beginning of the fast recovery period) are received; these are treated as an indication of multiple packet losses in a row from a single window, which can then be retransmitted intelligently. TCP NewReno stays in the fast recovery mode until all outstanding data have been acknowledged.

4.1.2 TCP SACK

TCP SACK uses a nearly identical congestion control algorithm to that used by Reno. The SACK option follows the format discussed in [50] and does not affect the original congestion control algorithm when added to the TCP. Furthermore, TCP SACK uses the same recovery method as Reno and is identically robust against out-of-order packets. The difference between the implementation of TCP SACK and that of TCP Reno is noticeable when multiple packets are dropped from a single window of data. The TCP SACK mechanism goes into the fast recovery operation when its sender receives dup-ACKs. In the same way as Reno, it reduces its congestion window to half its size and retransmits the packet. TCP SACK introduces a new variable, pipe, during fast recovery to estimate the number of outstanding packets in the network. The pipe variable is treated as follows: when either an old packet is retransmitted or a new one is sent, it is increased by one. On the contrary, a dup-ACK received by the sender (SACK option field) states that out-of-order data have been delivered at the receiver, and pipe is reduced by one.

Any missing packet at the receiver forms a list, and when the sender gets an opportunity to send, it retransmits the next packet from the list. If the list is empty (no missing packet), a new packet is sent. TCP SACK leaves the fast recovery operation when it receives a recovery notification, which means that all outstanding packets have been acknowledged during fast recovery.

4.1.3 TCP Vegas

TCP Vegas is a modified version of Reno that is more aggressive because it triggers its fast retransmission immediately upon receiving the first dup-ACK (rather than waiting for three dup-ACKs, as in the case of Reno). For every segment sent, TCP Vegas measures its RTT alongside (and the duration of the timeout is computed). If a dup-ACK is received, TCP Vegas checks for the expiry of its timeout period and retransmits the segment if it has expired. Furthermore, TCP Vegas integrates a bandwidth estimation feature that intelligently estimates the bandwidth by using the difference between the actual "a" and the expected "e" rates of flow. If the values of e and a are very close, the congestion window (cwnd) is increased in size because the network has the capacity to allow a to attain e. cwnd is reduced if a is significantly lower than e, and this condition is regarded as an indication of incipient congestion. TCP Vegas thus sets its window size based on the bandwidth estimation. The normalised difference 'Diff' of a particular segment in TCP Vegas is computed as follows [51]:

Diff = (e − a) BaseRTT,    (1)

where BaseRTT is the minimum round-trip time (the RTT of a segment in an uncongested state, as described in [52]; the expected flow rate is computed from BaseRTT). During congestion avoidance, TCP Vegas controls its window in a linear fashion. It defines two thresholds, Alpha and Beta, that are compared with the normalised difference Diff and consequently control the congestion window as follows.

ALGORITHM 1 Congestion window adjustment in TCP Vegas
1: if Diff < Alpha then
2:   increase the congestion window (CWND) linearly in the next RTT, that is, CWND++
3: else if Diff > Beta then
4:   reduce the CWND linearly in the next RTT, that is, CWND--
5: else
6:   do not change CWND
7: end if
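A compact sketch of the Vegas decision in Equation (1) and Algorithm 1 is given below. It is an illustration for this article, not the ns-2 Vegas agent; the Alpha/Beta values are typical defaults assumed for the example, not values stated in this paper:

```python
# Sketch of TCP Vegas' congestion-avoidance decision (Equation (1) + Algorithm 1).
# Expected rate e = cwnd / BaseRTT, actual rate a = cwnd / RTT, Diff = (e - a) * BaseRTT.

def vegas_adjust(cwnd, base_rtt, rtt, alpha=2.0, beta=4.0):
    """Return the congestion window (in segments) for the next RTT."""
    expected = cwnd / base_rtt              # rate achievable with no queuing
    actual = cwnd / rtt                     # rate currently achieved
    diff = (expected - actual) * base_rtt   # Equation (1): backlog kept in the queue
    if diff < alpha:
        return cwnd + 1                     # path under-used: grow linearly
    elif diff > beta:
        return cwnd - 1                     # incipient congestion: shrink linearly
    return cwnd                             # otherwise leave the window unchanged

if __name__ == "__main__":
    # With 100 ms BaseRTT and 120 ms measured RTT, a 20-segment window keeps about
    # 3.3 segments queued, which lies between alpha and beta, so cwnd stays at 20.
    print(vegas_adjust(20, base_rtt=0.100, rtt=0.120))
```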
4.1.4 Compound TCP

Compound TCP (cTCP) is widely deployed in older Microsoft operating systems, such as Windows XP, Windows Server 2003, Windows Vista, and Windows 7. Reno is a congestion collapse-based TCP model that suffers from the problem of underutilisation if used in high-BDP (bandwidth-delay product) networks. Because Reno requires a long time to stretch its window to such a value (high BDP), cTCP uses the synergic approach of combining both loss-based and delay-based congestion avoidance models (and thus it is referred to as compound TCP) to solve the problem faced by Reno. It has two special features: (1) when a network is sensed to have been underutilised, it aggressively increases its window size to achieve the desired throughput; (2) once the link is full and a bottleneck queue has formed, further window increases can cause the problem of TCP unfairness, and therefore a delay-based approach (like Vegas) is used to reduce the sending rate. The sending window win is therefore controlled by the loss-based and delay-based components and is computed as follows:

win = min(cwnd + dwnd, awnd),    (2)

where dwnd is the delay-based component derived from TCP Vegas, which renders cTCP more scalable in high-BDP networks, awnd is the advertised window from the receiver, and cwnd is the loss-based component that is nearly identical to the conventional congestion window. cwnd is increased on the arrival of an ACK and is calculated as follows:

cwnd = cwnd + 1 / (cwnd + dwnd).    (3)

When a new connection is started up, cTCP mirrors the slow-start behaviour of Reno. As discussed in [53], an exponential increase can work well even in a fast and long-distance network. The delay-based component dwnd is set to zero at the start of a connection but works effectively during the congestion phase.

4.1.5 CUBIC transmission control protocol (CUBIC-TCP)

Compared with the conventional TCP [54], CUBIC-TCP leverages a cubic function to substitute the linear window-increase function. This not only improves scalability, but also increases stability in long-distance and fast networks.

Fast and long-distance networks usually suffer from the problem of low utilisation [55] when using a standard TCP (Reno). Congestion is followed by a slow increase in the size of the congestion window, which is inadequate for a large bandwidth-delay product (BDP). This issue is associated with the standard TCP and is applicable to Reno-style TCP standards [56], such as SACK [57], TFRC [58], and SCTP [59].

In CUBIC-TCP, the congestion control algorithm of the conventional TCP is modified to tackle the aforementioned problem. CUBIC-TCP inherits its window-increase function from its predecessor, BIC-TCP [60]. The window is increased aggressively until it is close to the saturation point. However, because of its proximity to the saturation point, the increase in window size then slows further. CUBIC-TCP uses the following window growth function:

W(t)_cubic = C (t − t_p)^3 + W_max,    (4)

where W_max is the previously known size of the window (right before congestion occurs and the window reduces in size in fast recovery), C is a constant that expands the window aggressively in high-BDP networks, t is the time that has elapsed after the reduction in the size of the congestion window in response to dup-ACKs (or ECN-Echo ACKs), and t_p is the period taken by Equation (4) to grow the window size back to W_max (if no loss event occurs), which can be computed as follows:

t_p = [ W_max (1 − β_cubic) / C ]^(1/3),    (5)

where β_cubic is the multiplicative decrease factor of CUBIC, that is, as a packet loss is sensed, the given window size is reduced to

W(0)_cubic = W_max β_cubic.    (6)

Depending on the given size of the window, Cubic operates in three modes. First, it operates in a TCP-friendly region if the size of the window is smaller than the cwnd that a standard TCP achieves in time t after the previous loss event has occurred. Second, Cubic operates in the concave region if the given window size is smaller than W_max (in congestion avoidance, when an ACK is received, cwnd is increased by (W(t + RTT)_cubic − cwnd)/cwnd for each ACK). Finally, Cubic runs in the convex region if the given window size is larger than W_max (the convex profile guarantees that cwnd grows gradually at the start and then slowly increases its growth rate). These features render Cubic stable and scalable, and provide RTT fairness (the same as in BIC-TCP). It is also friendly to the standard TCP.

CUBIC-TCP has been tested and deployed over the Internet, and has shown significant improvements compared with other types of TCP.
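The growth function in Equations (4)-(6) can be sketched directly; the paper does not state the constants C and β_cubic, so the commonly cited Linux-style defaults (0.4 and 0.7) are assumed here purely for illustration:

```python
# Sketch of the CUBIC window growth of Equations (4)-(6) (illustrative constants).

def cubic_window(t, w_max, c=0.4, beta_cubic=0.7):
    """Window size t seconds after the last reduction (Equation (4))."""
    t_p = ((w_max * (1.0 - beta_cubic)) / c) ** (1.0 / 3.0)   # Equation (5)
    return c * (t - t_p) ** 3 + w_max                          # Equation (4)

if __name__ == "__main__":
    w_max = 100.0                       # window size just before the loss event
    w0 = w_max * 0.7                    # Equation (6): window right after the reduction
    print("start of epoch:", round(cubic_window(0.0, w_max), 1), "==", w0)
    for t in (2.0, 4.0, 6.0):           # concave growth, plateau near w_max, then convex
        print(t, round(cubic_window(t, w_max), 1))
```

Note that at t = 0 the function evaluates exactly to W_max * β_cubic, which is how Equation (6) and Equation (4) fit together.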
4.1.6 TCP Westwood

In a wired network, packets are lost due to congestion at the router. However, in a wireless medium, channel impairments (fading radio or noisy channels) are expected to cause such losses. TCP Reno cannot discriminate wireless loss from congestion loss. Consequently, it severely reduces the size of its congestion window while reacting to wireless loss, and this reduces the transmission rate. TCP Westwood (TCPw) is a modified version of TCP Reno in which the rate of acknowledgment reception is monitored by the TCPw sender and, from this, the data packet rate attained by the connection is estimated. When the sender detects packet loss, the TCPw sender sets appropriate thresholds for the slow start (ssthresh) and congestion window (cwnd) by using bandwidth estimation techniques; that is, the available bandwidth is probed during the congestion avoidance phase. If n dup-ACKs are received, the maximum capacity of the network has been reached. Therefore, ssthresh is assigned the available pipe size, cwnd is set equal to ssthresh, and the congestion avoidance phase starts probing once again for a new, attainable bandwidth. Thus, cwnd is reduced to the estimated bandwidth rather than its window size being halved (as in Reno). TCPw prevents excessively conservative reductions in ssthresh and cwnd and thus guarantees faster recovery.

4.2 Applied AQM schemes

In this study, the RED and CoDel schemes were applied to the routers, and their performance metrics were compared over TCP variants in different network environments.

4.2.1 Random early detection (RED)

This algorithm detects network congestion by measuring queue length. RED uses an exponentially weighted sliding mean to calculate the average queue length AvgQ:

AvgQ_{t+1} = (1 − ω) AvgQ_t + ω Q_inst,    (7)

where ω is the weight, AvgQ_t is the average queue length at time t, and Q_inst is the instantaneous queue length. Based on Equation (7), the probability of packet drop is computed as follows:

P_drop = 0                                            if AvgQ < T_min,
P_drop = MaxP (AvgQ − T_min) / (T_max − T_min)        if T_min < AvgQ < T_max,    (8)
P_drop = 1                                            if AvgQ > T_max,

where T_max and T_min represent the maximum and minimum thresholds, respectively, and MaxP denotes the maximum probability of dropping a packet. If AvgQ is between T_max and T_min, the probability of packet drop P_D (a modified form of Equation (8)) can be computed as

P_D = P_drop / (1 − C P_drop),    (9)

where C is a packet counter that counts the number of incoming packets in the queue since the previous packet was dropped. If the operations are globally synchronous, they can be handled efficiently by the RED algorithm while controlling transient congestion [62]. The main drawback of RED is the dependence of its performance on parameter tuning [63, 64]. AvgQ is also highly sensitive to the level of congestion.
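Equations (7)-(9) translate directly into the following sketch; the class name, threshold and weight values are illustrative placeholders chosen for the example, not the configuration used in the simulations:

```python
import random

# Sketch of RED's congestion estimate and drop decision (Equations (7)-(9)).

class RedSketch:
    def __init__(self, t_min=5.0, t_max=15.0, max_p=0.1, w=0.002):
        self.t_min, self.t_max, self.max_p, self.w = t_min, t_max, max_p, w
        self.avg_q = 0.0      # AvgQ in Equation (7)
        self.count = 0        # C in Equation (9): packets seen since the last drop

    def on_enqueue(self, inst_q):
        # Equation (7): exponentially weighted average of the instantaneous queue.
        self.avg_q = (1.0 - self.w) * self.avg_q + self.w * inst_q
        # Equation (8): base drop probability from the averaged queue length.
        if self.avg_q < self.t_min:
            p_drop = 0.0
        elif self.avg_q > self.t_max:
            p_drop = 1.0
        else:
            p_drop = self.max_p * (self.avg_q - self.t_min) / (self.t_max - self.t_min)
        # Equation (9): spread drops out over the packets since the previous drop.
        p_d = p_drop / max(1.0 - self.count * p_drop, 1e-9)
        if random.random() < p_d:
            self.count = 0
            return "drop"
        self.count += 1
        return "enqueue"

if __name__ == "__main__":
    red = RedSketch()
    decisions = [red.on_enqueue(inst_q=20) for _ in range(2000)]
    print(decisions.count("drop"), "drops out of 2000 packets")
```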
4.2.2 CoDel

CoDel is a recently proposed approach to mitigate bufferbloat. Each packet's arrival time is recorded at the enqueuing process, and at dequeuing the time difference, referred to as the sojourn time St, is calculated to determine the queuing delay. Based on St, certain actions are taken. First, St is compared with a threshold value (known as the target delay Td). If St is greater than Td, the algorithm goes into the dropping state Ds and sets the next drop interval Di, as shown in Figure 1. Before reaching Di, if the sojourn time falls below Td, it exits Ds and the packet is simply forwarded.

FIGURE 1 Dropping state

The control law for the next drop interval Di is defined as

Di = Now + Interval / sqrt(N),    (10)

where N is the drop count. If N increases, Di declines and, consequently, more packets are dropped, thereby emptying the buffer. The purpose of Interval is to allow enough time for the sender to react to a packet drop, that is, to reduce its sending rate so that St falls below Td. The interval considered for the sender should be at least the average RTT over the Internet. By default, Interval is set to 100 ms and the target delay to 5% of Interval [22]. The dropping state is cleared if the sojourn time falls below the target delay value during the dropping state. The pseudocode for CoDel [61] is given in Algorithm 2, while the process is shown in the flowchart in Figure 2.

ALGORITHM 2 Pseudocode for CoDel
1: for each enqueuing packet (pkt)
2:   St ← Now − Et
3:   if St ≥ Td then
4:     Ds ← 1
5:   end if
6:   while Ds = 1 do
7:     if first_above_time = 0 then
8:       first_above_time ← 1
9:       Di ← Now + Interval / sqrt(N)
10:    else if Now > Dropnext then
11:      N ← N + 1
12:      Drop(pkt)
13:    else if St ≤ Td then
14:      first_above_time ← 0
15:      Ds ← 0
16:      Dequeue(pkt)
17:    end if
18: end while

FIGURE 2 CoDel's flowchart
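The control law of Equation (10) and Algorithm 2 can be condensed into the following sketch. It mirrors the published CoDel logic only loosely (the real dequeue-side state machine in [22, 61] carries more state than shown here), and the 100 ms/5 ms values are the defaults quoted above:

```python
import math

# Condensed sketch of CoDel's dropping decision (Equation (10) and Algorithm 2).
# Times are in seconds; a real implementation runs on the dequeue path of a router.

class CoDelSketch:
    TARGET = 0.005        # Td: 5% of Interval (5 ms)
    INTERVAL = 0.100      # Interval: at least a typical Internet RTT (100 ms)

    def __init__(self):
        self.dropping = False   # Ds
        self.drop_next = 0.0    # Di
        self.count = 0          # N, the drop count in Equation (10)

    def on_dequeue(self, now, enqueue_time):
        sojourn = now - enqueue_time            # St = Now - Et
        if sojourn < self.TARGET:
            self.dropping = False               # good queue: leave the dropping state
            self.count = 0
            return "forward"
        if not self.dropping:
            self.dropping = True                # first packet above the target delay
            self.count = 1
            self.drop_next = now + self.INTERVAL / math.sqrt(self.count)
            return "forward"
        if now >= self.drop_next:
            self.count += 1                     # Equation (10): shrink the next interval
            self.drop_next = now + self.INTERVAL / math.sqrt(self.count)
            return "drop"
        return "forward"

if __name__ == "__main__":
    codel, actions = CoDelSketch(), []
    for i in range(100):
        now = i * 0.01                          # one dequeue every 10 ms
        actions.append(codel.on_dequeue(now, enqueue_time=now - 0.02))  # 20 ms sojourn
    print(actions.count("drop"), "drops out of 100 dequeues")
```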
5 EVALUATION

To evaluate the performance of CoDel and compare it with that of RED over different TCPs in different network environments, we configured the ns-2 simulator [65] with three settings. First, TCP variants were used at different levels of congestion with CoDel and RED as AQMs. Second, the AQMs were tested over different payloads while keeping the level of congestion fixed. Third, high congestion with large packet sizes was used to assess the performance of both AQMs in extreme cases. We considered six types of TCPs: TCP NewReno, Compound TCP, TCP Westwood, TCP SACK, TCP Vegas, and TCP Cubic. Section 5.1 overviews the baseline model, factors, and parameters. Section 5.2 illustrates the performance metrics used to assess the behaviour of CoDel over TCP variants and different levels of congestion. Section 5.3 compares the results of all TCPs under the VCFP, VPFC, and HCHP scenarios.

5.1 Simulation environment

We installed Network Simulator 2 (a.k.a. ns-2, version 2.36) on the Ubuntu 16.04 operating system. The script for our topology was written in the OTcl programming language, and the AWK language was used for data extraction. The data acquired from ns-2 were further processed in MATLAB 2019b for analysis and plotting purposes. We used a computer with an Intel(R) Core(TM) i7 CPU for the simulation.

Figure 3 shows the network model with a dumbbell topology. This scenario illustrates several FTP peers that act as network traffic generators. The FTP client FC sends data to the FTP server FS via two routers on the link. FC and FS are linked to the routers through a 15-Mbps bandwidth link with a 15-ms propagation delay. The routers form a bottleneck link at a speed of 3 Mbps with zero propagation delay. As is clear in this topology, the router–router link does not feature a propagation delay: because our focus is to measure the sojourn time, adding extra jitter or delay could lead to erratic values. It is therefore considered to be the best scenario to test the performance of the investigated AQMs under ideal conditions.

FIGURE 3 Network topology in the baseline simulation scenario

In the VCFP experiment, several levels of congestion were used, that is, low, medium, and high. In the low level of congestion, there were as few as three FTP pairs, whereas in the medium and high levels of congestion, there were six and 14 FTP pairs, respectively. The connection between FCn (the subscript n denotes the number of nodes) and FSn is established by the TCP.

For the experimental setup of VPFC, the dumbbell topology was used with the maximum and fixed level of congestion, and packets of different sizes were transmitted by FCn. The rest of the setup for this experiment was the same as that of the first one. In the final experiment, that is, HCHP, high congestion and a high payload size were considered. All experiments were repeated for each TCP variant. The buffer size was fixed to 1,000 packets, the default size for CoDel. The trials showed the impacts of different levels of congestion, payload sizes, and TCP variants on the performance of the CoDel algorithm. The TCPs used in these experiments were NewReno, cTCP, TCPw, SACK, Cubic, and Vegas. The variants were added to ns-2 version 2.36. Table 3 lists the simulation parameters and setup for the experiments.

TABLE 3 Simulation parameters

No. | Parameter | Value
1 | TCP variants | NewReno, SACK, Compound, Westwood, Vegas and Cubic
2 | Link capacity | 15 Mbps for node–router, 3 Mbps for router–router (bottleneck link)
3 | Link delay | 15 ms node to router, 0 ms router to router
4 | Buffer size | 1000 packets
5 | Traffic type | FTP
6 | Queuing algorithm | CoDel and RED
7 | Packet size | 540 bytes, 1040 bytes, and 1460 bytes
8 | Simulation time | 100 s

5.2 Performance metrics

The following performance metrics were used to evaluate the CoDel algorithm over TCP variants. In the following equations, n represents the number of recurrences and t refers to the simulation time in seconds.

5.2.1 Link utilisation

This is the ratio of the rate of packets dequeued by an AQM to the available link capacity, that is,

LU = [ (Σ_n B_n / Δt) η / BW ] × 100%,    (11)

where LU denotes the link utilisation of the bottleneck link, Σ_n B_n is the total number of bytes dequeued by the AQM algorithm, η is bits per byte, and BW is the capacity of the bandwidth at the bottleneck. For LU = 100% the pipe is completely full, and it is empty when LU is zero.
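Equation (11) amounts to the following computation over a per-window byte count taken from the simulator's trace; the function name and sample numbers are ours, not part of ns-2:

```python
# Sketch of the link-utilisation metric of Equation (11).
# dequeued_bytes: total bytes dequeued by the AQM during the measurement window.

def link_utilisation(dequeued_bytes, window_seconds, bottleneck_bps, bits_per_byte=8):
    """Return LU in percent for one measurement window."""
    dequeue_rate_bps = (dequeued_bytes / window_seconds) * bits_per_byte
    return 100.0 * dequeue_rate_bps / bottleneck_bps

if __name__ == "__main__":
    # 3 Mbps bottleneck (as in Table 3); 33,750,000 bytes dequeued over 100 s
    # corresponds to 2.7 Mbps, i.e. 90% utilisation.
    print(link_utilisation(33_750_000, 100.0, 3_000_000))
```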
5.2.2 Drop rate

The drop rate is the ratio of the number of packets dropped by an AQM to the total number of enqueued packets. This parameter tests the robustness of an AQM. An ideal AQM drops no (or very few) packets even in a high congestion state, whilst maintaining a reasonable packet queue in the buffer. In practice, however, when congestion increases, the AQM's dropping state is triggered, which results in more packets being dropped. A dropped packet is interpreted by the TCP as a congestion signal, which results in a slowing of its sending rate. The role of an AQM is to retain the good queues in the buffer and avoid excessive queuing delay (by draining the bad queues). Note that this drop rate is strictly related to the network layer; it does not take into account the packets dropped by the link or channel at the physical layer (which are collectively accounted for by the retransmitted packets of the TCP).

5.2.3 Queue length

This shows the dynamic queue behaviour of CoDel. It is the average size of the standing queue during the entire period of the simulation.

5.2.4 Queuing delay

This metric shows the delay experienced by a packet in the queue from ingress to egress. That is, queuing delay is the time taken by a packet as it goes into a buffer, stays in the queue and then dequeues from it. The sojourn time (or queuing delay) is handled differently by different AQMs. For instance, the RED algorithm cares about maintaining the queue length in the buffer, and does not concern itself with the delay experienced by a packet from enqueue to dequeue. On the other hand, CoDel is concerned more about the queuing delay: it restricts packets from staying in the buffer beyond the interval of a certain target delay, and a packet that does not meet this requirement is dropped (for dropping, CoDel allows a certain margin of interval; if the packet dequeues within that interval it is not dropped, otherwise it is dropped). Note that the sojourn time is strictly related to the network layer and does not take into account the propagation delays of the link or channel. By contrast, the round-trip latency can be obtained from the round-trip time (RTT) of the TCP, which is the sum of the queuing delay and the propagation delays. To narrow down our analysis (to network buffers), we use the queuing delay to rigorously capture the packet's delay from ingress to egress. The queuing delay is expressed in milliseconds.

5.2.5 Flow utilisation

This illustrates the utilisation of each TCP flow:

FU = [ f(N) (Σ_n B_n^(f) / Δt) η ] / BW,    (12)

where Σ_n B_n^(f) is the total number of bytes in each flow f = {1, …, N} during the transmission period, f(N) is the number of TCP flows (levels of congestion) used in the simulation, η is bits per byte, and BW is the link capacity of the bottleneck bandwidth. We characterised the difference among flows with the coefficient of variance, which is the ratio of the standard deviation to the mean of the flows (in any TCP variant).

5.2.6 Retransmission packets

This metric is the ratio of the total number of retransmitted packets to the total number of sent packets for the entire simulation period. It is related to the TCP connection: if any packet is dropped, either in the network layer or in the physical layer, it is retransmitted accordingly.
5.3 Results and discussion

In this section, we use the aforementioned three scenarios to describe the performance of CoDel as compared with RED using various TCP variants. We focus on link utilisation, the delay experienced by each packet from enqueuing to dequeuing (packet sojourn time), and the drop rate incurred by the AQM used.

5.3.1 Variable congestion and fixed payload (VCFP) setting

Average queuing delay: Our first simulation evaluates the performance of CoDel versus RED at different levels of congestion with different types of TCPs. The average queuing delay is shown in Figure 4, which demonstrates that CoDel consistently experiences a lower queuing delay than RED at all levels of congestion and with any variant of TCP. Moreover, the gap between RED and CoDel is small at a low level of congestion. However, as the level of congestion grows in terms of the number of flows, RED's queuing delay increases.

FIGURE 4 Comparison in terms of average queuing delay (VCFP) between CoDel and RED

On the contrary, CoDel's queuing delay decayed with an increase in the level of congestion. In the RED algorithm, the minimum queue length threshold T_min was fixed at five packets, which means that at any time it held five packets and none was dropped irrespective of the delay. Because RED controls queue occupancy in terms of packets rather than the delay experienced by them, it monitored the average queue length of the incoming traffic. As this exceeded the upper threshold, packets were dropped to dissipate the standing queues, and if the queue length remained between the upper and lower thresholds, packets were dropped randomly based on Equation (9).

Conversely, the CoDel algorithm holds no fixed number of packets, and directly controls packet delay rather than queue length. Moreover, CoDel's controller uses the stochastic gradient-like control law of Equation (10). The control loop of CoDel starts by dropping a few packets if the level of congestion is small. However, the drop rate increases with the level of congestion to ensure that bad queues are drained and only good ones remain in the buffer.

Figure 5 illustrates packet dropping at different levels of congestion for TCP Cubic, which clearly shows that when congestion was high (14 flows) the drop rate was large and drained the persistent queue (as shown in Figure 6). Consequently, the reduced queues experienced shorter delays. For instance, CoDel with Cubic experienced considerably longer delay with six flows, but as the number of flows increased to 10, the drop rate increased (and the queuing delay decreased, as shown in Figure 4). Because of the higher drop rate, a large number of queues were dissipated (this increase in drop rate had adverse effects on link utilisation that are discussed in the following section).

FIGURE 5 CoDel over TCP Cubic: packets dropped at different levels of congestion

FIGURE 6 CoDel versus RED: comparison of average queuing length (VCFP)

Figure 6 shows the average queue length: at low congestion (one to three FTP sources), there was a small difference in queue size between RED and CoDel except in Vegas (which has a built-in bandwidth estimation feature). But as congestion increased, the number of standing queues of RED drastically increased whereas that of CoDel decreased (owing to the controller's stochastic gradient learning procedure). TCP Cubic did not perform well compared with the other variants. As we considered a congestion collapse network (with a 3 Mbps bottleneck link speed and 0 ms propagation delay) rather than a high-BDP network, its performance did not surpass that of any Reno-style TCP (Cubic is intended only for high-speed networks; if it is used in congestion collapse networks, it delivers poorer performance than the Reno-style TCPs, because they are designed for congestion collapse networks). However, as the level of congestion increased, its delay experience became steady (with RED) or declined (with CoDel).

CoDel and RED performed better with TCP Vegas because it is a congestion collapse-based TCP and possesses the bandwidth estimation feature, which enables it to learn about the available bandwidth. The AQMs (CoDel and RED) subsequently drained the bad queues. Likewise, TCP SACK also performed well because it is a Reno-style TCP and has a built-in SACK option that intelligently reports the dup-ACKs; as a result, the transmission rate is well controlled at higher congestion. TCP NewReno is a loss-based TCP that has no prior knowledge of bandwidth, and can respond only to packets dropped by an AQM scheme. Its implementation performed considerably better with CoDel than with RED. cTCP is used for high-speed networks but uses the congestion avoidance algorithm of Reno, which makes it TCP friendly. It delivered average performance when used in congestion collapse networks; it performs much better if used in a high-speed network.
30
cubic-CoDel
cubic-CoDel
20 cubic-RED
9 cubic-RED
10
8 0
10 20 30 40 50 60 70 80 90
CWND (number of Packets)

30 Newreno-CoDel
7 20 Newreno-RED

10

CWND (number of Packets)


6
0
30 10 20 30 40 50 60 70 80 90
5 SACK-CoDel
20 SACK-RED
4 10
0
3 10 20 30 40 50 60 70 80 90
30 Vegas-CoDel
20 Vegas-RED
2
10
1 0
30 10 20 30 40 50 60 70 80 90
cTCP-CoDel
20 cTCP-RED
40 41 42 43 44 45 46 47 48 49
10
Simulation Interval (40-50 sec)
0
30 10 20 30 40 50 60 70 80 90
TCPw-CoDel
FIGURE 8 Cubic: Comparison of cwnd between CoDel and RED at 20 TCPw-RED
VCFP 10
0
10 20 30 40 50 60 70 80 90
Sumulation Time (Seconds)
FIGURE 9 CoDel versus RED: Comparison over TCP variants (three flows) in terms of cwnd

However, in the case of RED, TCP Cubic performed well at high congestion levels (10–14 flows), and its utilisation curve oscillated around 85–90%.
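Equation (11), referenced above, is defined in the earlier part of the paper and is not reproduced here. For orientation only, the sketch below shows the usual way such a utilisation figure is obtained from a trace: delivered bits divided by the capacity–time product of the bottleneck link. The trace format and function name are illustrative assumptions, not the post-processing scripts used in our simulations.

```python
def link_utilisation(deliveries, capacity_bps, t_start, t_end):
    """Fraction of bottleneck capacity used in [t_start, t_end).

    deliveries: iterable of (timestamp_s, packet_size_bytes) for packets that
    left the bottleneck link; capacity_bps: link speed in bits per second.
    """
    delivered_bits = sum(8 * size for ts, size in deliveries
                         if t_start <= ts < t_end)
    return delivered_bits / (capacity_bps * (t_end - t_start))

# Example: 1000 packets of 1000 bytes observed over a 5 s window on a 3 Mbps link
trace = [(0.004 * i, 1000) for i in range(1000)]
print(round(link_utilisation(trace, 3_000_000, 0.0, 5.0), 3))  # 0.533
```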
In Figure 11, the average drop rate of CoDel with Cubic is much higher than that of RED. If packets were frequently dropped as a result of long queues building up, Wmax shown in Equation (4) could not reach the maximum operating limit (the concave or convex region), and the congestion window had to be reduced frequently, as shown in Equation (6). On the contrary, the RED algorithm dropped fewer packets on average and utilised more of the network link. Thus, Cubic's window increased to a greater extent with RED than with CoDel. This is shown in a time snapshot of Cubic's cwnd analysis for CoDel versus RED in Figure 8.
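Equations (4) and (6) appear in the earlier part of the paper and are not repeated here. As a point of reference, the sketch below implements the standard CUBIC window-growth function of RFC 8312 [46], in which the window is a cubic function of the time elapsed since the last loss event and Wmax is the window size at that loss; frequent drops keep the window in the concave region below Wmax. This is an illustrative rendering, not the ns-2 code used in the experiments.

```python
def cubic_window(t, w_max, c=0.4, beta=0.7):
    """Standard CUBIC growth, W(t) = C*(t - K)^3 + Wmax (RFC 8312).

    t     : seconds since the last congestion event
    w_max : congestion window (segments) just before that event
    c     : CUBIC scaling constant; beta: multiplicative decrease factor
    """
    k = (w_max * (1 - beta) / c) ** (1.0 / 3.0)  # time needed to climb back to Wmax
    return c * (t - k) ** 3 + w_max

# Right after a drop the window restarts at beta*Wmax and grows concavely towards
# Wmax; only if no further drop occurs for about K seconds does it turn convex.
for t in (0.0, 1.0, 2.0, 3.0, 4.0):
    print(t, round(cubic_window(t, w_max=30.0), 1))
```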
Figure 8 shows the congestion window of a single flow, chosen randomly from among the 14 flows at higher levels of congestion. The behaviours of the flows were similar. For clarity, a snapshot was taken at the centre of the run (simulation time from 40 to 50 s). In all flows, Cubic's cwnd with CoDel had a smaller value than with RED for the following reasons: (1) CoDel ensured that the sojourn time was shorter than the target delay, and as soon as this was crossed, a packet was dropped. (2) The greater the number of packet drops, the more frequent the congestion signals sent to Cubic's sender and, consequently, the smaller the value of cwnd; that is, the sending rate was reduced according to Equation (6). RED dropped packets based on queue length, was not as aggressive as CoDel, and thus dropped fewer packets. As a result, RED with Cubic operated in the concave region, which resulted in sending more packets.
Because TCP NewReno and other Reno-style TCPs (such as SACK) are designed for congested networks, they fully utilised the link. The overall performance of CoDel with Vegas, cTCP, and TCPw was better than with RED. In Figure 7, most TCPs exhibit unusual behaviour with three FTP sources, where the utilisation plots declined for all TCP variants except NewReno (with both RED and CoDel) and cTCP (with CoDel). They then increased linearly (except cTCP with CoDel, which declined at six FTP sources).
This behaviour can be explained by analysing the values of cwnd for the three and six FTP sources shown in Figures 9 and 10, respectively. A single flow is considered (picked randomly) from both cases, and the cwnd value is averaged over every 5 s of a 100 s simulation interval. At a lower level of congestion (three to six FTP sources), the difference was not distinct in the analysis of the value of cwnd of Cubic with CoDel and RED. The link utilisations of Cubic with both AQMs (Figure 7) at three and six FTP sources had a minute difference (Cubic with CoDel had a slightly higher utilisation than RED). However, NewReno, which is more suitable for our topology, recorded higher network utilisation with both AQMs. This is also evident from the value of cwnd, where CoDel exhibited a more consistent behaviour than RED (i.e. transmitted more packets than RED). Our third TCP variant, SACK, performed better with CoDel at all levels of congestion. Its behaviour was prominent in terms of the size of the congestion window, where the value of cwnd of SACK with CoDel reached its peak more frequently than with RED (with three FTPs).
However, with six FTPs we did not observe a prominent difference (because at six flows, SACK recorded similar utilisations for both AQMs). Vegas controlled its window based on the fluctuation in RTT. While there was no propagation delay in the bottleneck link, the overall end-to-end RTT was influenced by queuing delay at the routers. CoDel created far shorter queue delays than RED, as shown in Figure 4. Vegas with CoDel at three FTP sources exhibited consistent performance, but RED recorded peaks more often (which impacted utilisation, where Vegas with RED had slightly higher utilisation than CoDel at three FTPs). However, as the traffic increased to six FTPs, the value of cwnd of Vegas with CoDel increased more than that of RED.
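Vegas's dependence on queuing delay follows from its congestion-avoidance rule [43, 52]: once per RTT it compares the expected and actual sending rates and adjusts cwnd by one segment. The sketch below is a simplified illustration of that rule with assumed alpha/beta thresholds; it is not the simulator's Vegas implementation.

```python
def vegas_update(cwnd, base_rtt, current_rtt, alpha=1.0, beta=3.0):
    """One Vegas congestion-avoidance step (cwnd in segments, RTTs in seconds)."""
    expected = cwnd / base_rtt               # rate if there were no queuing delay
    actual = cwnd / current_rtt              # rate actually observed this RTT
    diff = (expected - actual) * base_rtt    # ~ segments sitting in router queues
    if diff < alpha:
        return cwnd + 1                      # little queuing: probe for bandwidth
    if diff > beta:
        return cwnd - 1                      # queue building up: back off
    return cwnd                              # within the target band: hold

# Short CoDel queues keep current_rtt close to base_rtt, so diff stays below
# alpha and Vegas keeps growing; longer RED queues inflate current_rtt instead.
print(vegas_update(20, base_rtt=0.050, current_rtt=0.052))  # 21
```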
FIGURE 10 CoDel versus RED: Comparison over TCP variants (six flows) in terms of cwnd

FIGURE 11 CoDel versus RED: Comparison in terms of drop rate (VCFP)
cTCP controlled cwnd based on a delay-based component (as did Vegas) and had the same function for cwnd as the standard TCP. Therefore, cTCP performed much better with CoDel than with RED because the former induced a shorter delay (as shown in Figure 4). The function of Vegas benefited from this shorter delay and set its throughput accordingly (using the HTCP window increase function). Westwood is designed for wireless and lossy scenarios, whereas in the case considered here a packet loss occurred only when there was congestion. TCPw increased its window size to a greater extent with CoDel than with RED at three FTPs. However, with six FTPs we did not observe any prominent difference in its window size.
Drop rate: Figure 11 presents the drop rate of CoDel versus RED at different levels of congestion. During the simulation, the average drop rate of RED was lower than that of CoDel over all TCP variants except NewReno at higher congestion, where RED and CoDel exhibited the same behaviour. In Figure 7, as NewReno attempts to utilise the maximum capacity of the available link with both the RED and CoDel algorithms and aggressively increases its window size, its drop rate increases to a greater extent than with the other TCP types. Moreover, TCP Cubic dropped more packets with CoDel than with RED. The packet-dropping behaviour of TCP Cubic was clear from CoDel's control law (shown in Figure 5), which drained long queues. CoDel learned from the level of congestion and adapted its queue length. Figure 4 shows that if a packet sojourn time exceeded the target delay, it was dropped. This mechanism enabled CoDel to maintain a short queuing delay, but also led to an increased average drop rate.

FIGURE 12 Comparison of average queuing delay of CoDel and RED at high congestion and different packet sizes

5.3.2 Variable payload and fixed congestion (VPFC) setting

Average queuing delay: Figure 12 shows the average queuing delay for a fixed (high) level of congestion and different payloads over the TCP variants. CoDel delivered much better performance than RED when the packet size was small (i.e. 500 bytes). As packet size increased, CoDel's average queuing delay increased, and the gap in performance between CoDel and RED decreased for all TCP variants. For instance, CoDel with TCP NewReno at a payload of 500 bytes experienced a delay of 5.9 ms, and RED with the same connection and payload exhibited a delay of 13 ms. CoDel therefore experienced a delay 7.9 ms shorter (a difference of 75.13%) than that of RED.
FIGURE 13 CoDel versus RED: Comparison in terms of link utilisation at different packet sizes

FIGURE 15 CDF of the queuing delays for CoDel and RED over TCP variants considered at HCHP
FIGURE 14 CoDel versus RED: Comparison in terms of drop rate with different packet sizes
With a packet size of 1460 bytes, CoDel experienced a delay 2.9 ms shorter than that of RED (a difference of 24.9%).
Link utilisation: Figure 13 shows the link utilisation for VPFC over the TCP variants. The behaviour of RED with all TCP variants was stable at different payloads, and it utilised between 86% and 93% of the link. However, the link utilisation of CoDel oscillated strongly with changes in payload and TCP variant. This is because CoDel is delay based and adapted to the increase in congestion, whereas RED monitors queue length and thus dropped packets when the queue length exceeded a defined threshold.
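The contrast drawn above, CoDel reacting to per-packet sojourn time and RED reacting to (averaged) queue length, can be summarised as two drop decisions. The sketch below is a simplified illustration with the usual default parameters (5 ms target and 100 ms interval for CoDel; min/max thresholds and max_p for RED); it is not the ns-2 queue code used in the simulations.

```python
from math import sqrt

TARGET, INTERVAL = 0.005, 0.100   # CoDel defaults: 5 ms target, 100 ms interval

def codel_should_drop(sojourn, now, state):
    """Drop once the sojourn time has stayed above TARGET for a full INTERVAL;
    further drops are scheduled at INTERVAL/sqrt(count) (CoDel's control law)."""
    if sojourn < TARGET:
        state.update(first_above=None, count=0)
        return False
    if state.get("first_above") is None:
        state["first_above"] = now + INTERVAL
        return False
    if now >= state["first_above"]:
        state["count"] = state.get("count", 0) + 1
        state["first_above"] = now + INTERVAL / sqrt(state["count"])
        return True
    return False

def red_drop_probability(avg_q, min_th=5, max_th=15, max_p=0.1):
    """Classic RED: no drops below min_th, linear ramp to max_p at max_th."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

q_state = {}
print(codel_should_drop(0.012, now=0.00, state=q_state),
      codel_should_drop(0.012, now=0.15, state=q_state))  # False True
print(red_drop_probability(10))                           # 0.05
```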
Drop rate: Figure 14 illustrates the behaviour of CoDel compared with RED over TCP variants with VPFC. It is evident that with packet sizes ranging from 500 to 1000 bytes, RED's average packet drop rate was lower than that of CoDel. However, at 1460 bytes, CoDel performed better with NewReno, Vegas, cTCP, and TCPw. An increase in the byte size of a packet led to an increase in queue length. CoDel's performance was consistent and adaptive regardless of the payload size. However, RED increased its average drop rate when the queue length became longer than the threshold. The drop rate of TCP Cubic was lower with RED for all packet sizes. CoDel and RED exhibited similar behaviours with TCP SACK at higher congestion and larger packet sizes.

5.3.3 High congestion and high payload (HCHP) setting

Cumulative distribution function (CDF) of the queuing delay: Figure 15 quantifies the long-term queuing behaviours of CoDel and RED over TCP variants at HCHP as a cumulative distribution function of queuing delay. Each TCP variant underwent queue delays at different times. The queuing delays of the two AQMs were very different: CoDel over the TCP variants had a queuing delay shorter than 25 ms for most packets, whereas for RED this was approximately 40 ms. It is clear from Figure 15 that CoDel delivered exceptional performance in terms of queuing delay compared with RED over all TCP variants. As an example, CoDel with TCP SACK had 99% of its packets dequeued with a queuing delay of 22.11 ms, whereas this delay was 40.68 ms for RED.
Link utilisation: Figure 16 shows the behaviour of CoDel compared with RED in terms of link utilisation at HCHP over different TCP variants. It was averaged every 5 s for a simulation interval of 100 s. It is clear that CoDel with high congestion and a large payload size tended to utilise more of the available link. However, RED remained in the underutilised region, that is, below 95% (except for TCP NewReno, with which it utilised approximately 97% of the link capacity).
Burst drop rate: We show the burst drop rates of the AQMs in Figure 17. The burst drop rate of CoDel was stable and fluctuated less than that of RED. CoDel delivered smooth performance for all TCP variants except NewReno, whereas RED fluctuated owing to frequent and simultaneous packet drops.
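The CDF figures quoted above (e.g. 99% of SACK packets dequeued within 22.11 ms under CoDel versus 40.68 ms under RED) are obtained from the per-packet queuing delays recorded at the bottleneck. A minimal sketch of building such an empirical CDF and reading off a percentile is shown below; the trace format is an assumption, not our measurement scripts.

```python
def empirical_cdf(delays):
    """Sorted delays and their cumulative probabilities."""
    xs = sorted(delays)
    n = len(xs)
    return xs, [(i + 1) / n for i in range(n)]

def percentile(delays, q):
    """Delay below which a fraction q (0 < q <= 1) of packets were dequeued."""
    xs = sorted(delays)
    idx = max(0, int(round(q * len(xs))) - 1)
    return xs[idx]

# delays collected as (dequeue_time - enqueue_time) per packet, in seconds
sample = [0.012, 0.018, 0.020, 0.022, 0.025, 0.041]
print(percentile(sample, 0.99))  # 0.041
```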
FIGURE 16 CoDel versus RED: Comparison in terms of link utilisation at HCHP

FIGURE 17 CoDel versus RED: Comparison in terms of burst drop rate at HCHP

FIGURE 18 CoDel versus RED: Fairness in bandwidth sharing among multiple flows for all TCP variants (high congestion: 14 flows; large packet size: 1460 bytes)

FIGURE 19 CoDel versus RED: Average retransmitted packets for all TCP variants (high congestion: 14 flows; large packet size: 1460 bytes)
The overall packet drops for CoDel were greater than those of RED. However, for most TCP variants, packet drops with CoDel did not occur frequently; an example is NewReno. The burst drop rates for the other TCP variants were similar for both AQMs. The burst drop rate in Figure 17 is the average number of packet drops every 5 s. The simulation was repeated with every TCP variant for CoDel and RED. Therefore, each figure summarises 120 drop-rate samples and several thousand per-packet drops in the enqueued samples.
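As defined above, the burst drop rate plotted in Figure 17 is the number of packet drops per 5-s interval of the 30–60 s window, averaged over the repeated runs. A sketch of that binning, assuming a list of drop-event timestamps extracted from the trace, is given below; normalising each bin by the packets enqueued in it would give the normalised rates shown in the figure.

```python
def burst_drops(drop_times, t_start=30.0, t_end=60.0, bin_s=5.0):
    """Number of drops in each bin_s-second interval of [t_start, t_end)."""
    n_bins = int((t_end - t_start) // bin_s)
    bins = [0] * n_bins
    for t in drop_times:
        if t_start <= t < t_end:
            bins[int((t - t_start) // bin_s)] += 1
    return bins

print(burst_drops([31.0, 32.5, 36.0, 36.2, 51.9]))  # [2, 2, 0, 0, 1, 0]
```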
Sensitivity analysis: Figure 20 shows the overall analysis of the investigated AQMs over TCP variants at HCHP. The three performance metrics (i.e. queuing delay, drop rate, and link utilisation) are plotted as box plots representing the first (lower part) and third (upper part) quantiles of each metric.
The first and third quantiles for the normalised utilisation of CoDel were better than those for RED (for all TCP variants). Moreover, the third quantile of the queuing delay in CoDel (almost 12–14 ms) was lower than that of RED (approximately 20 ms) over all TCP variants. However, the first quantiles of both AQMs were similar. A prominent difference was observed in their average drop rates. For instance, RED with the Cubic and TCPw TCPs had a lower drop rate in the first quantile but a higher drop rate in the third quantile than when CoDel was used. However, RED with the cTCP and SACK TCPs had higher drop rates in both the first and third quantiles. The drop rates in the first and third quantiles of RED with NewReno and Vegas were lower than those for CoDel. NewReno with both AQMs had high link utilisation at a higher drop rate, which is as expected from the previous analysis. In sum, with respect to utilisation11 and queuing delay, CoDel outperformed RED, but was inferior to it in terms of drop rate.

11 The link utilisation of CoDel was mostly above 95% (except with Cubic), whereas that for RED was between 90% and 95% (except the first quantile of Cubic).
FIGURE 20 CoDel versus RED: Overall performance at high congestion (14 flows) and large packet size (1460 bytes). Link utilisation, drop rate and queuing delay were measured over TCP variants
TABLE 4 Flow utilisation of TCP variants over CoDel

Jain’s
TCP Variant Flow 1 Flow 2 Flow 3 Flow 4 Flow 5 Flow 6 Flow 7 Flow 8 Flow 9 Flow 10 Flow 11 Flow 12 Flow 13 Flow 14 Fairness

TCP Cubic 0.993 1.003 1.138 0.946 0.875 1.003 0.923 0.884 0.763 1.043 0.946 0.963 0.958 0.873 0.9919
TCP NewReno 1.311 1.285 1.287 0.892 0.521 1.421 1.197 0.880 1.028 1.433 1.034 1.032 1.058 1.380 0.9543
TCP Vegas 1.192 1.016 1.101 1.015 1.088 1.019 1.027 1.078 0.950 1.075 0.977 0.987 0.991 1.004 0.9966
TCP SACK 1.042 1.109 0.981 1.105 0.986 1.069 1.129 0.994 1.102 1.098 1.033 0.925 1.071 0.977 0.9966
cTCP 1.161 1.063 1.117 1.057 1.116 1.005 0.916 1.101 1.034 1.131 1.044 1.004 0.964 0.818 0.9926
TCPw 1.086 1.022 0.854 1.131 1.044 1.076 1.013 1.021 1.143 1.031 1.006 0.948 1.075 1.068 0.9954

Flow utilisation and retransmission packets: These metrics are related to the performance of the TCP, and reflect the utilisation of each flow and the fairness of bandwidth sharing among flows. Packet retransmission occurs when there is either a missing packet or a packet that has taken longer than usual to acknowledge (timeout).
Figure 18 illustrates fairness among TCP flows using Jain's Fairness Index for the high-congestion (14 flows) and large-payload (1460 bytes) scenario. Cubic, NewReno, and cTCP with RED are fairer than their CoDel counterparts; however, the remaining TCP variants show more fairness with CoDel than with RED. The detailed quantitative analysis is given in Tables 4 and 5, respectively, where each column shows flow utilisation (as discussed in Section 5.2.5).
Figure 19 shows the average packet retransmissions over the simulation time (100 s) under high congestion (14 flows) with a large packet size (1460 bytes). CoDel dropped more packets than RED; consequently, RED retransmitted fewer packets with any TCP variant. This is detailed in Tables 6 and 7, where each column shows the number of retransmitted packets per TCP flow.
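Jain's Fairness Index used for Figure 18 and Tables 4 and 5 is J = (sum of x_i)^2 / (n * sum of x_i^2) over the per-flow utilisations x_i; J = 1 means perfectly equal sharing. The sketch below computes it and, applied to the TCP Cubic row of Table 4, reproduces the tabulated value of about 0.992.

```python
def jain_fairness(xs):
    """Jain's index: 1.0 for perfectly equal shares, 1/n in the worst case."""
    n = len(xs)
    return sum(xs) ** 2 / (n * sum(x * x for x in xs))

cubic_codel = [0.993, 1.003, 1.138, 0.946, 0.875, 1.003, 0.923,
               0.884, 0.763, 1.043, 0.946, 0.963, 0.958, 0.873]
print(round(jain_fairness(cubic_codel), 4))  # 0.9919 (Table 4, TCP Cubic)
```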

TABLE 5 Flow utilisation of TCP variants over RED

Jain’s
TCP Variant Flow 1 Flow 2 Flow 3 Flow 4 Flow 5 Flow 6 Flow 7 Flow 8 Flow 9 Flow 10 Flow 11 Flow 12 Flow 13 Flow 14 Fairness

TCP Cubic 1.078 1.015 1.062 1.051 0.968 1.072 1.024 1.100 1.061 0.947 1.019 0.993 1.114 1.064 0.9980
TCP NewReno 1.360 1.449 1.196 1.104 0.888 1.189 1.023 1.205 1.424 1.158 0.856 0.543 1.208 0.927 0.9563
TCP Vegas 1.188 0.954 0.993 1.014 0.989 1.023 1.062 0.933 1.064 1.061 0.977 0.978 1.096 0.959 0.9959
TCP SACK 1.198 1.118 1.066 0.967 1.070 1.226 1.157 1.036 1.059 1.085 0.975 0.947 1.030 1.094 0.9945
cTCP 1.010 1.066 1.132 0.994 1.130 1.085 0.905 1.104 1.052 0.952 1.012 1.144 1.029 1.025 0.9958
TCPw 1.275 1.032 0.952 1.052 1.059 0.787 1.140 0.988 1.035 1.063 1.176 1.081 0.951 1.013 0.9890
TABLE 6 Retransmission packets of TCP variants over CoDel

TCP Variant Rtx 1 Rtx 2 Rtx 3 Rtx 4 Rtx 5 Rtx 6 Rtx 7 Rtx 8 Rtx 9 Rtx 10 Rtx 11 Rtx 12 Rtx 13 Rtx 14 Mean(Rtx)

TCP Cubic 0.1223 0.1208 0.1267 0.1321 0.1218 0.1431 0.1409 0.1412 0.123 0.1389 0.1424 0.1332 0.1424 0.1168 0.1318
TCP NewReno 0.1181 0.1117 0.1207 0.1281 0.1303 0.1195 0.1433 0.1278 0.1406 0.1437 0.1154 0.14 0.1397 0.1273 0.1290
TCP Vegas 0.0974 0.0953 0.1024 0.1139 0.1088 0.0944 0.1144 0.1149 0.1228 0.1136 0.1053 0.1066 0.1121 0.1194 0.1087
TCP SACK 0.1331 0.1321 0.1301 0.1222 0.1305 0.1366 0.1456 0.132 0.1354 0.1352 0.1519 0.1292 0.1345 0.143 0.1351
cTCP 0.126 0.1282 0.1243 0.1401 0.1323 0.1302 0.1221 0.1458 0.1432 0.1437 0.1289 0.1288 0.1259 0.1293 0.1321
TCPw 0.1251 0.1315 0.1378 0.1264 0.1262 0.1283 0.145 0.1464 0.1374 0.1342 0.1442 0.127 0.1294 0.1449 0.1346

TABLE 7 Retransmission packets of TCP variants over RED

TCP Variant Rtx 1 Rtx 2 Rtx 3 Rtx 4 Rtx 5 Rtx 6 Rtx 7 Rtx 8 Rtx 9 Rtx 10 Rtx 11 Rtx 12 Rtx 13 Rtx 14 Mean(Rtx)

TCP Cubic 0.0954 0.0997 0.1259 0.1133 0.1135 0.1154 0.1039 0.1099 0.105 0.1073 0.1155 0.1176 0.1117 0.1121 0.1104
TCP NewReno 0.1161 0.1132 0.1135 0.1187 0.1295 0.1031 0.1263 0.1158 0.1047 0.135 0.1261 0.1224 0.14 0.1128 0.1198
TCP Vegas 0.0893 0.108 0.1035 0.0885 0.0856 0.0991 0.1065 0.0995 0.097 0.0964 0.0949 0.1019 0.0953 0.1065 0.0980
TCP SACK 0.0984 0.1192 0.1072 0.1139 0.121 0.1121 0.1229 0.1177 0.1199 0.1165 0.1198 0.1125 0.1151 0.1233 0.1157
cTCP 0.113 0.1073 0.1175 0.1356 0.1073 0.1203 0.1142 0.1064 0.1159 0.1079 0.1118 0.1099 0.1154 0.113 0.1140
TCPw 0.1135 0.1118 0.1153 0.1168 0.1087 0.1014 0.1251 0.1203 0.1143 0.1223 0.1161 0.1292 0.1154 0.1068 0.1155

6 CONCLUSION

In this paper, we explored queuing delay, link utilisation, and drop rate in two AQM schemes, CoDel and RED, over six TCP variants (i.e. TCP Cubic, TCP NewReno, TCP SACK, TCP Vegas, Compound TCP, and TCP Westwood). We chose a variety of TCPs intended for different applications. We used three scenarios to evaluate the performance of CoDel as compared with RED: variable congestion and fixed payload (VCFP), variable payload and fixed congestion (VPFC), and high congestion and high payload (HCHP). We first examined the AQMs in terms of the performance metrics, where CoDel outperformed RED in terms of average queuing delay and normalised link utilisation (except in VCFP, where RED over Cubic TCP had better utilisation at a high level of congestion). However, in terms of drop rate, RED dominated the CoDel scheme in several cases. RED in VCFP had a lower drop rate than CoDel; similarly, in VPFC, RED's drop rate was smaller than that of CoDel at low and medium congestion; moreover, in HCHP, its drop rate was lower for some TCPs, such as NewReno and Vegas.
We also evaluated the performance of the TCPs using CoDel and RED in terms of fairness among flows and retransmission of packets per flow. We used Jain's Fairness Index for per-flow utilisation to show fairness in bandwidth sharing among multiple flows. NewReno, Cubic, and cTCP showed better fairness with RED, while the other TCPs showed this with CoDel. We also determined the average retransmitted packets of all flows over the TCP variants with HCHP, which provided an important comparison for each flow. The retransmission of packets in all considered TCPs was lower in the RED scheme than in CoDel.

7 DISCUSSION AND FUTURE WORKS

Given that AQM can mitigate the bufferbloat problem, we are also interested in including other AQM schemes, such as PIE and the variants of CoDel and PIE, that is, FQ-CoDel [67] and FQ-PIE [66], to conduct a more detailed analysis. The PIE scheme is a counterpart of CoDel; it is lightweight and can control the queuing delay very effectively. We have given some examples of it in our GitHub repository12 that readers can further examine over the TCP variants to see its behaviour under various congestion levels.
In the future, we intend to use diverse traffic types, such as web traffic (PackMime [68]) and video traffic like DASH [69], to further assess the impact of bufferbloat and the AQM schemes' performance.
In addition, TCP-BBR is a good candidate for mitigating the bufferbloat problem and can be used with the aforementioned AQM schemes. It would be interesting to tackle the bufferbloat problem at both layer three and layer four, where considerable performance improvement is expected. We have put our code in the GitHub repository for readers to use in their future work.

12 https://github.com/salmanpolito/Bufferbloat-AQM-performance-over-TCP-variants-

ACKNOWLEDGEMENT
This work was supported by a research grant from Inha University.

ORCID
Salman Muhammad https://orcid.org/0000-0003-4754-805X
Touseef Javed Chaudhery https://orcid.org/0000-0003-1443-3571
REFERENCES
1. Lan, K.C., Heidemann, J.: A measurement study of correlations of internet flow characteristics. Comp. Netw. 1(16), 46–62 (2006)
2. Alfredsson, S., et al.: Impact of TCP congestion control on bufferbloat in cellular networks. In: 2013 IEEE 14th International Symposium on A World of Wireless, Mobile and Multimedia Networks (WoWMoM), vol. 6, pp. 1–7. IEEE, Piscataway (2013)
3. Gettys, J., Nichols, K.: Bufferbloat: Dark buffers in the internet. Queue 11(29), 40–54 (2011)
4. Chirichella, C., Rossi, D.: To the Moon and back: Are internet bufferbloat delays really that large? In: 2013 Proceedings IEEE INFOCOM, vol. 4, pp. 3297–3302. IEEE, Piscataway (2013)
5. Dischinger, M., et al.: Characterizing residential broadband networks. In: Proceedings of the 7th ACM SIGCOMM Conference on Internet Measurement, vol. 10, pp. 43–56. ACM, New York (2007)
6. Sundaresan, S., et al.: Broadband internet performance: A view from the gateway. ACM SIGCOMM Comp. Commun. Rev. 8(15), 134–45 (2011)
7. Kreibich, C., et al.: Illuminating the edge network. In: Proceedings of the 10th ACM SIGCOMM Conference on Internet Measurement, vol. 11, pp. 246–259. ACM, New York (2010)
8. Jiang, H., et al.: Tackling bufferbloat in 3G/4G networks. In: Proceedings of the 2012 Internet Measurement Conference, vol. 11, pp. 329–342. ACM, New York (2012)
9. Dong, P., et al.: Receiver-side TCP countermeasure in cellular networks. Sensors 19(12), 2791 (2019)
10. Jude, M., et al.: Throughput stability and flow fairness enhancement of TCP traffic in multi-hop wireless networks. Wirel. Netw. 26, 4689–4704 (2020)
11. Showail, A., et al.: An empirical evaluation of bufferbloat in IEEE 802.11n wireless networks. In: 2014 IEEE Wireless Communications and Networking Conference (WCNC), vol. 4, pp. 3088–3093. IEEE, Piscataway (2014)
12. Ferlin-Oliveira, S., et al.: Tackling the challenge of bufferbloat in multi-path transport over heterogeneous wireless networks. In: 2014 IEEE 22nd International Symposium of Quality of Service (IWQoS), vol. 5, pp. 123–128. IEEE, Piscataway (2014)
13. Hien, D.T., et al.: A software defined networking approach for guaranteeing delay in Wi-Fi networks. In: Proceedings of the Tenth International Symposium on Information and Communication Technology, vol. 12, pp. 191–196. ACM, New York (2019)
14. Cardwell, N., et al.: BBR: Congestion-based congestion control. Queue 10(1), 20–53 (2016)
15. Pan, R., et al.: PIE: A lightweight control scheme to address the bufferbloat problem. In: 2013 IEEE 14th International Conference on High Performance Switching and Routing (HPSR), vol. 7, pp. 148–155. IEEE, Piscataway (2013)
16. May, M., et al.: Reasons not to deploy RED. In: 1999 Seventh International Workshop on Quality of Service, vol. 5, pp. 260–262. IEEE, Piscataway (1999)
17. Misra, V., et al.: Fluid-based analysis of a network of AQM routers supporting TCP flows with an application to RED. In: Proceedings of the Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication, vol. 8, pp. 151–160. ACM, New York (2000)
18. Ott, T.J., et al.: SRED: Stabilized RED. In: Proceedings of IEEE INFOCOM'99: Conference on Computer Communications—Eighteenth Annual Joint Conference of the IEEE Computer and Communications Societies, vol. 3, pp. 1346–1355. IEEE, Piscataway (1999)
19. Nagle, J.: On packet switches with infinite storage. IEEE Trans. Commun. 4, 435–438 (1987)
20. Abbasloo, S., et al.: C2TCP: A flexible cellular TCP to meet stringent delay requirements. IEEE J. Sel. Areas Commun. 37(4), 918–932 (2019)
21. Floyd, S., Jacobson, V.: Random early detection gateways for congestion avoidance. IEEE/ACM Trans. Netw. 8, 397–413 (1993)
22. Nichols, K., et al.: Controlled delay active queue management. RFC 8289 1, 1–25 (2018)
23. Misra, S., et al.: Random early detection for congestion avoidance in wired networks: A discretized pursuit learning-automata-like solution. IEEE Trans. Syst. Man Cybern. B (Cybernetics) 40(1), 66–76 (2010)
24. Abdel-Jaber, H.: An exponential active queue management method based on random early detection. J. Comp. Netw. Commun. 2020, 1–11 (2020)
25. Zhou, K., Li, K.L.V.: Nonlinear RED: A simple yet efficient active queue management scheme. Comp. Netw. 50(18), 3784–3794 (2006)
26. Abbasov, B., Korukoglu, S.: Effective RED: An algorithm to improve RED's performance by reducing packet loss rate. J. Netw. Comp. Appl. 32(3), 703–709 (2009)
27. Nichols, K., Jacobson, V.: Controlling queue delay. Commun. ACM 7(1), 42–50 (2012)
28. Grazia, C.A., et al.: A cross-comparison between TCP and AQM algorithms: Which is the best couple for congestion control? In: 2017 14th International Conference on Telecommunications (ConTEL), vol. 6, pp. 75–82. IEEE, Piscataway (2017)
29. Khademi, N., et al.: The new AQM kids on the block: An experimental evaluation of CoDel and PIE. In: 2014 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), vol. 4, pp. 85–90. IEEE, Piscataway (2014)
30. Järvinen, I., Kojo, M.: Evaluating CoDel, PIE, and HRED AQM techniques with load transients. In: 39th Annual IEEE Conference on Local Computer Networks, vol. 9, pp. 159–167. IEEE, Piscataway (2014)
31. Hamadneh, N., et al.: HRED, an active queue management algorithm for TCP congestion control. Recent Pat. Comp. Sci. 12(3), 212–217 (2019)
32. Vyakaranal, S.B., Jayalaxmi, G.N.: Performance evaluation of TCP using AQM schemes for congestion control. In: 2018 Second International Conference on Advances in Electronics, Computers and Communications (ICAECC), pp. 1–6. IEEE, Piscataway (2018)
33. Vyakaranal, S.B., Naragund, J.G.: Performance evaluation of TCP using AQM schemes for congestion control. In: 2018 Second International Conference on Advances in Electronics, Computers and Communications (ICAECC), vol. 2, pp. 1–6. IEEE, Piscataway (2018)
34. Grazia, C., et al.: Transmission control protocol and active queue management together against congestion: Cross-comparison through simulations. SIMULATION 95(10), 979–993 (2019)
35. Palmei, J., et al.: Design and evaluation of COBALT queue discipline. In: 2019 IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN), vol. 7, pp. 1–6. IEEE, Piscataway (2019)
36. Okokpujie, K., et al.: Comparative analysis of the performance of various active queue management techniques to varying wireless network conditions. Int. J. Electr. Comp. Eng. 9(1), 359–368 (2019)
37. Dzivhani, M., Khmaies, O.: Performance evaluation of TCP congestion control algorithms for wired networks using NS-3 simulator. In: IEEE AFRICON, pp. 1–7. IEEE, Piscataway (2019)
38. Feng, W.C., et al.: The BLUE active queue management algorithms. IEEE/ACM Trans. Netw. 11(7), 513–528 (2002)
39. Ye, J., Leung, K.C.: Adaptive and stable delay control for combating bufferbloat: Theory and algorithms. IEEE Syst. J. 7(31), 1285–1296 (2019)
40. Postel, J., Reynolds, J.: File transfer protocol. RFC 765 10, 1–69 (1985)
41. Postel, J.: Transmission control protocol. RFC 793 9, 1–89 (1981)
42. Parvez, N., et al.: An analytic throughput model for TCP NewReno. IEEE/ACM Trans. Netw. 11(3), 448–461 (2009)
43. Brakmo, L.S., Peterson, L.L.: TCP Vegas: End to end congestion avoidance on a global internet. IEEE J. Sel. Areas Commun. 10, 1465–1480 (1995)
44. Song, K.T., et al.: Compound TCP: A scalable and TCP-friendly congestion control for high-speed networks. PFLDnet 2006 2, 1–8 (2006)
45. Mathis, M., et al.: TCP selective acknowledgment options. RFC 2018 10(1), 1–12 (1996)
46. Xu, L., et al.: Cubic for fast long-distance networks. RFC 8312 2, 1–18 (2018)
47. Mascolo, S., et al.: TCP Westwood: Bandwidth estimation for enhanced transport over wireless links. In: Proceedings of the 7th Annual International Conference on Mobile Computing and Networking, vol. 7, pp. 287–297. ACM, New York (2001)
48. Hoe, J.C.: Improving the start-up behavior of a congestion control scheme for TCP. ACM SIGCOMM Comp. Commun. Rev. 8(28), 270–280 (1996)
49. Floyd, S.: TCP and successive fast retransmits. Technical report, 5(22), 1–4 (1995)
50. Mathis, M., et al.: TCP selective acknowledgment options. RFC 2018 10(1), 1–12 (1996)
51. Mo, J., et al.: Analysis and comparison of TCP Reno and Vegas. In: Proceedings of IEEE INFOCOM'99: Conference on Computer Communications—Eighteenth Annual Joint Conference of the IEEE Computer and Communications Societies, vol. 3, pp. 1556–1563. IEEE, Piscataway (1999)
52. Brakmo, L.S., et al.: TCP Vegas: New techniques for congestion detection and avoidance. In: Proceedings of the Conference on Communications Architectures, Protocols and Applications, vol. 10, pp. 24–35. ACM, New York (1994)
53. Floyd, S.: HighSpeed TCP for large congestion windows. RFC 3649 12, 1–34 (2003)
54. Postel, J.: Transmission control protocol—DARPA internet program protocol specification. RFC 793 9, 1–85 (1981)
55. Kelly, T.: Scalable TCP: Improving performance in highspeed wide area networks. ACM SIGCOMM Comp. Commun. Rev. 4(1), 83–91 (2003)
56. Floyd, S., et al.: The NewReno modification to TCP's fast recovery algorithm. RFC 3782 4, 1–19 (2004)
57. Blanton, E., et al.: A conservative selective acknowledgment (SACK)-based loss recovery algorithm for TCP. RFC 3517 4(1), 1–13 (2003)
58. Handley, M., et al.: TCP friendly rate control (TFRC): Protocol specification. RFC 3448 1(1), 1–24 (2003)
59. Stewart, R., et al.: Stream control transmission protocol. RFC 4960 9, 1–152 (2007)
60. Xu, L., et al.: Binary increase congestion control (BIC) for fast long-distance networks. In: IEEE INFOCOM 2004, vol. 3, pp. 2514–2524. IEEE, Piscataway (2004)
61. Chaudhery, T.J.: Performance evaluation of CoDel queue mechanism and TFRC transport protocol when using VoIP flows. In: 2017 International Conference on Frontiers of Information Technology (FIT), vol. 12, pp. 1–6. IEEE, Piscataway (2018)
62. Hashem, E.S.: Analysis of random drop for gateway congestion control. Technical Report, pp. 1–108. Lab for Computer Science, Massachusetts Institute of Technology, Cambridge (1989)
63. Feng, W.C., et al.: A self-configuring RED gateway. In: Proceedings of IEEE INFOCOM'99: Conference on Computer Communications—Eighteenth Annual Joint Conference of the IEEE Computer and Communications Societies, vol. 3, pp. 1320–1328. IEEE, Piscataway (1999)
64. Floyd, S., et al.: Adaptive RED: An algorithm for increasing the robustness of RED's active queue management, pp. 1–12. International Computer Science Institute, Berkeley (2001)
65. Issariyakul, T., Hossain, E.: Introduction to Network Simulator 2 (NS2), pp. 1–18. Springer, Boston (2009)
66. Ramakrishnan, G., et al.: FQ-PIE queue discipline in the Linux kernel: Design, implementation and challenges. In: 2019 IEEE 44th LCN Symposium on Emerging Topics in Networking (LCN Symposium), vol. 10, pp. 117–124. IEEE, Piscataway (2019)
67. Rao, V.P., et al.: Analysis of sfqCoDel for active queue management. In: The Fifth International Conference on the Applications of Digital Information and Web Technologies (ICADIWT 2014), vol. 2, pp. 262–267. IEEE, Piscataway (2014)
68. Cao, J., et al.: PackMime: An internet traffic generator. In: National Institute of Statistical Sciences Affiliates Workshop on Modeling and Analysis of Network Data, vol. 3. National Institute of Statistical Sciences, Bath (2001)
69. Khan, K., Goodridge, W.: B-DASH: Broadcast-based dynamic adaptive streaming over HTTP. Int. J. Auton. Adapt. Commun. Syst. 12, 50–74 (2019)

How to cite this article: Muhammad S, TJ Chaudhery, Y Noh. Study on performance of AQM schemes over TCP variants in different network environments. IET Commun. 2021;15:93–111. https://doi.org/10.1049/cmu2.12061