Paper 2
gestion Notification (ECN) can perform poorly in environments with multiple bottlenecks,
further reinforcing the preference for end-to-end approaches.
The widespread adoption of end-to-end CCAs is supported by extensive research,
standardization efforts, and real-world deployments. For instance, a study in [8] found
that the top 20,000 global websites predominantly rely on end-to-end TCP CCAs. A
notable milestone is the standardization of TCP CUBIC [9,10] by the Internet Engineering
Task Force (IETF) [11], reflecting years of iterative improvements, research, and practical
implementations. Recent studies continue to highlight the relevance of end-to-end CCAs
for emerging use cases such as 5G networks and data centers [7,12]. Even newer congestion
control algorithms leveraging Machine Learning (ML), as demonstrated in [13–15], operate
within the end-to-end paradigm. The enduring preference for end-to-end TCP CCAs is
attributed to their inherent scalability, simplicity, ease of implementation, robustness, and
the historical momentum they have gained through widespread use [3,4,6].
However, these algorithms face notable challenges due to the decentralized and
distributed nature of the Internet [6]. In such environments, senders have limited visibility
into real-time network conditions and lack direct coordination with competing flows.
Consequently, end-to-end CCAs must infer congestion and available bandwidth implicitly,
often leading to delayed and inaccurate congestion signals, sub-optimal responses, and
stability issues. Despite numerous enhancements, many end-to-end algorithms still fall
short of achieving optimal performance [16,17], highlighting the need for ongoing research
and the exploration of alternative approaches [18].
1.1. Issues
Congestion typically manifests in two forms: packet loss or queueing delay. Packet
loss occurs when bottleneck bandwidth (BtlBW) and network buffer capacity are exceeded,
causing packets to be dropped. Queueing delay arises when packets accumulate in network
buffers, resulting in longer waiting times. Therefore, most end-to-end TCP congestion
control mechanisms rely on either packet loss (Loss-Based algorithms) or queueing delay
(Delay-Based algorithms) as congestion indicators. Loss-Based algorithms react to
congestion after it occurs, responding only when packet loss is detected, unless there is
network support [4]. In networks with large buffers, Loss-Based algorithms are prone to
bufferbloat, high queueing delays, and a high packet loss ratio (PLR) [2,19].
In contrast, Delay-Based algorithms aim to detect and mitigate congestion proactively
by responding early to signs of growing queue lengths [17,20]. Despite this advantage,
Delay-Based algorithms suffer from measurement errors, detection delays, and model
inaccuracies [2,18,21,22].
A specific subclass of Delay-Based algorithms is Rate-Based algorithms, which directly
compute the sending rate based on measurements of propagation delay (PropDelay) and
available bandwidth estimates. However, like other Delay-Based approaches, they are
vulnerable to measurement inaccuracies and detection delays, leading to over-utilization
or under-utilization. For example, challenges discussed in [16,23] for TCP Bottleneck
Bandwidth and Round-trip propagation time (BBR) [24] include bias against shorter round-
trip time (RTT) and degradation when RTT variability is high.
Another issue with existing algorithms is their reliance on successive constraint satis-
faction (SCS) heuristics to adjust sending rates in response to congestion. These heuristics,
often based more on intuition than formal mathematical rationale, focus on finding feasible
solutions rather than optimal ones. In Loss-Based algorithms, SCS heuristics can lead to
high-amplitude oscillations, reducing throughput and network utilization [2]. Although
Delay-Based algorithms integrate mathematical models for optimality [17,20], they still
exhibit oscillatory behavior. Additionally, due to their sensitivity to network measurements,
Electronics 2025, 14, 263 3 of 32
they often display abrupt and jerky adjustments, particularly under dynamic network
conditions [7,22].
Empirical studies in [12,25] have demonstrated that even widely deployed algorithms,
such as TCP CUBIC and TCP Bottleneck Bandwidth and Round-trip propagation time
(BBR), are not immune to significant oscillations. While these algorithms generally achieve
high network utilization, their pronounced oscillatory behavior can degrade overall perfor-
mance, especially in environments with fluctuating traffic or variable RTTs.
1.2. Contribution
This article introduces a novel Delay-Based congestion control approach based on
Little’s Law [26]. The main contributions are as follows:
• A novel Delay-Based congestion control approach grounded in queueing theory and
Little’s Law.
• Development and implementation of an algorithm based on the proposed approach.
• Performance evaluation and comparison with widely used algorithms, TCP CUBIC as
in [9] and TCP BBR version 1 [24] (both as implemented in ns-3.41).
The proposed approach avoids reliance on heuristic methods by continuously solving
a closed-form optimality equation derived from Little’s Law [26,27]. This equation takes
the form of a differential equation, capturing the rate of change in delay and data-in-flight
with respect to the sending rate. By using this predictive approach, the algorithm mitigates
oscillations and improves steady-state performance.
A notable advantage of this approach is that it eliminates the need for direct bandwidth
measurements. Instead, the algorithm operates by setting a target RTT and adjusting the
sending rate using the derived optimality equation. To the best of the authors’ knowledge, this
approach is novel, with no prior comparable work beyond preliminary discussions in [27].
2. Related Work
2.1. Threads on End-to-End TCP CCAs
A comprehensive examination of the evolution and classification of end-to-end TCP
CCAs is presented in several key studies [18,28–30]. These works explore the progression of
TCP congestion control in both wired and wireless networks, highlighting ongoing efforts
to enhance performance, reliability, and adaptability. Works in [28,29] focus on preserving
TCP’s host-to-host architecture, offering insights into how foundational principles can be
maintained while achieving performance improvements. Meanwhile, surveys in [18,30]
identify pressing research challenges, particularly the need for congestion control algo-
rithms that can adapt to dynamic and heterogeneous network environments. These studies
also suggest potential directions for future research, especially in the context of emerging
technologies like 5G and beyond.
The existing literature categorizes TCP CCAs into three primary types: Loss-Based,
Delay-Based, and Hybrid algorithms.
• Loss-Based Algorithms are reactive, responding to congestion only after it manifests
as packet loss.
• Delay-Based Algorithms are proactive, aiming to detect congestion early by monitor-
ing queue growth.
• Hybrid Algorithms (also known as Loss-Delay-Based) primarily rely on packet loss as
the congestion trigger but use delay information to fine-tune rate adjustments. Delay-
Based algorithms augmented with Loss-Based techniques, however, are typically not
classified as hybrids.
Despite their different approaches, both Loss-Based and Delay-Based algorithms suffer
from a common limitation: they struggle to eliminate oscillations and achieve steady-state
accuracy under dynamic network conditions [1,12].
As noted in [8,31], there are currently two widely deployed CCAs, TCP CUBIC [9,10],
which is a Loss-Based algorithm, and TCP BBR [24], which is a Delay-Based algorithm.
These two algorithms serve as benchmarks for modern high-speed networks and are
frequently used as comparative references in studies such as in [12]. Their widespread
adoption reflects the balance between throughput optimization (TCP CUBIC) and latency
minimization (TCP BBR), making them critical points of reference for ongoing research in
congestion control.
network utilization with congestion costs, identifying a point of diminishing returns where
increasing throughput is no longer justified by the additional congestion incurred.
Together, the insights from Kleinrock and Stidham offer valuable guidance for design-
ing optimal congestion control mechanisms. In this article, these principles are used to
define the optimal operating range for data-in-flight as between 1 × BDP and 2 × BDP.
Similarly, the optimal RTT range is set between 1 × baseRTT (baseRTT is the lowest mea-
sured RTT) and 2 × baseRTT, ensuring efficient performance while avoiding excessive
queuing delays.
3. Background Concepts
3.1. End-to-End TCP Congestion Control
A network consists of multiple nodes and paths that facilitate data transfer between
senders and receivers. The network’s behavior is defined by several key characteristics:
BtlBW, PropDelay, and buffer size, as illustrated in Figure 1 (see also [14]).
• BtlBW: The smallest bandwidth along a network path, which constrains the maximum
achievable throughput.
• PropDelay: The one-way time taken for a packet to travel from the sender to the
receiver when there is no congestion. It is determined by the physical distance,
the transmission medium’s speed and the processing delay. This delay reflects the
minimum achievable time, unaffected by queuing.
• Buffer size: The capacity of network devices to temporarily store packets waiting for
transmission, helping absorb transient congestion.
The BDP defines the amount of in-flight data that can traverse the network without
requiring buffering. An end-to-end TCP CCA operates on the sender side, adjusting the
sending rate to prevent network congestion based on inferred conditions, as shown
in Figure 1. In TCP, each end-point in a pair operates as a sender and a receiver at the
same time. However, each independently manages its outgoing flow, applying congestion
control without visibility into the other’s behavior.
[Figure 1: Sender-side congestion control over a path characterized by BtlBW, PropDelay, and buffer size. The sender comprises (A) Congestion Detection (feedback: ACKs, ECN), (B) Sending Rate Computation (cwnd), and (C) Data Transmission, with ACKs providing timestamps and self-clocking.]
The congestion control block at the sender includes three main components:
• A. Congestion Detection: Identifies congestion through duplicate Acknowledgements
(ACKs), timeouts, ECN, or delay measurements.
• B. Sending Rate Computation: Adjusts the sending rate based on feedback from
congestion detection.
• C. Data Transmission: Sends data according to the computed rate, regulated by the
congestion window size (cwnd), with the sending rate approximated by cwnd/RTT.
The receiver sends ACKs to provide feedback about received packets. These ACKs
are also used for self-clocking, triggering the next data transmission and ensuring reliable,
ordered delivery. In TCP congestion control, various parameters, such as RTT, bandwidth
estimates, and data-in-flight, are derived from the timing of received ACKs.
[Figure 2: The network modelled as a queueing system: N senders with sending rates x1, . . . , xN produce an aggregate arrival rate λ, served at rate µ = BtlBW with buffer size B.]
In this queueing model, data packets arrive at an arrival rate λ and are served at the
network bandwidth capacity µ. The goal of congestion control is to regulate the sending
rates of multiple flows x1 , x2 , . . . , x N to match the network capacity and prevent buffer
overflow or excessive queuing delays [41].
Key performance metrics used in queueing theory and congestion control [17,41,45–47]
include the following:
• Occupancy L: The number of items in the system, including those waiting and being
served. It corresponds to data-in-flight in TCP.
• Arrival rate λ: The total rate at which items arrive in the queue, equivalent to the sum
of the sending rates.
• Throughput: The rate at which data are delivered over a network link.
• Goodput: The rate of useful data delivery, excluding retransmissions [19]. It is a reliable
indicator of effective utilization, reflecting the network’s productive performance.
• Response time R: In queueing theory, the duration from the moment an item enters the
system until its processing completes and a response is received. In TCP, it corresponds
to the RTT.
• Utilization ρ = λ/µ: The ratio of arrival rate λ to the network bandwidth capacity µ
in a queueing system.
• Fairness: Equitable distribution of resources.
In both literature and practice, these metrics are commonly expressed as average
values (e.g., average occupancy, average arrival rate). For brevity, this article omits the term
‘average’ unless its inclusion is necessary for clarity or emphasis.
Fairness is commonly quantified using Jain’s fairness index,

J = (x1 + x2 + · · · + xN)² / (N · (x1² + x2² + · · · + xN²)),

where xi is the throughput of flow i, and N is the total number of flows. A value of J = 1
indicates perfect fairness, while J = 1/N signifies the poorest fairness. In practice, a
fairness index of 0.8 or higher is considered acceptable [49].
Achieving fairness in network congestion control is critical to ensuring equitable
bandwidth distribution among competing data flows. Fairness prevents any single flow
from monopolizing network resources, thereby improving overall network efficiency and
user satisfaction. This article focuses specifically on intra-fairness, which refers to fairness
among flows using the same congestion control algorithm.
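The fairness index above is straightforward to compute; a minimal sketch follows (the throughput values are invented for illustration):

```python
def jain_fairness_index(throughputs):
    """Jain's fairness index: J = (sum x_i)^2 / (N * sum x_i^2).
    J = 1 means perfectly equal shares; J = 1/N means one flow takes all."""
    n = len(throughputs)
    total = sum(throughputs)
    sum_sq = sum(x * x for x in throughputs)
    return total * total / (n * sum_sq)

print(jain_fairness_index([20, 20, 20, 20, 20]))   # 1.0: perfect fairness
print(jain_fairness_index([100, 0, 0, 0, 0]))      # 0.2 = 1/N: poorest fairness
print(jain_fairness_index([30, 25, 20, 15, 10]))   # ~0.89: acceptable in practice
```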
AIMD [50] is a foundational mechanism in TCP congestion control, extensively ana-
lyzed for its fairness properties [51,52]. AIMD achieves fairness by gradually converging
to an equitable allocation of resources during its multiplicative decrease phase, which
reduces the sending rate in response to congestion signals. This reduction allows other
flows to increase their share of bandwidth, fostering balanced resource distribution. Studies
in [51,52] identify two primary factors influencing fairness in AIMD-based algorithms:
• Frequency of the Decrease Phase: Higher frequencies accelerate convergence
to fairness as congestion signals are processed more often, though they may
introduce instability.
• Amplitude of Oscillations: Smaller oscillations lead to higher bandwidth utilization
but slower convergence to fairness.
These trade-offs highlight AIMD’s ability to balance fairness with resource efficiency,
though it requires careful tuning of its increase and decrease parameters.
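The convergence property described above can be demonstrated with a toy two-flow AIMD model (a simplified sketch in the spirit of the Chiu–Jain analysis; the capacity and parameters are made up, not taken from the paper’s simulations):

```python
# Toy AIMD dynamics for two flows sharing one bottleneck. Each round, every
# flow adds 'a' packets to its window; when the aggregate exceeds capacity,
# both flows halve (multiplicative decrease with factor b = 0.5).

def aimd_two_flows(w1, w2, capacity, rounds, a=1.0, b=0.5):
    for _ in range(rounds):
        if w1 + w2 > capacity:          # shared congestion signal
            w1, w2 = w1 * b, w2 * b     # multiplicative decrease
        else:
            w1, w2 = w1 + a, w2 + a     # additive increase
    return w1, w2

# Even from a very unfair start, the windows converge toward equal shares,
# because each multiplicative decrease halves the gap between the flows
# while additive increase leaves the gap unchanged.
w1, w2 = aimd_two_flows(95.0, 5.0, capacity=100.0, rounds=500)
print(f"w1 = {w1:.2f}, w2 = {w2:.2f}")  # nearly equal
```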
Research by [53] has shown that algorithms incorporating nonlinear increase strategies,
such as Multiplicative Increase, Multiplicative Decrease (MIMD), can converge to fairness
faster than AIMD under certain conditions. For instance, TCP CUBIC [10], which employs
a cubic window growth function, achieves high throughput while maintaining good
convergence to fairness due to its aggressive probing mechanism.
TCP BBR (Version 1) adopts a fundamentally different approach by adjusting its sending
rate based on measurements of available bandwidth and the minRTT. Unlike traditional loss-
based approaches, BBR does not reduce its sending rate in response to packet loss. Instead, it
periodically reduces the sending rate to probe for minRTT. This periodic reduction creates
opportunities for competing flows to claim available bandwidth, enabling fairness.
Simulation studies [54,55] indicate that multiple TCP BBR flows with similar minRTTs
can achieve intra-fairness. However, sustained unfairness may arise due to mismeasure-
ments of available bandwidth or significant RTT variations. The focus on TCP BBR’s
intra-fairness is particularly relevant to this article, as the proposed mechanism also lever-
ages a minRTT probing strategy. Understanding TCP BBR’s strengths and limitations in
achieving fairness provides valuable insights for improving the design and evaluation of
the proposed approach.
dL(λ)/dλ = λ · dR(λ)/dλ + R(λ),  (3)

R′(λ) = (L′(λ) − R(λ)) / λ.  (4)
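Since (3) and (4) follow from differentiating Little’s Law L(λ) = λR(λ), they can be sanity-checked numerically. The sketch below uses an M/M/1-style delay curve R(λ) = 1/(µ − λ) purely as an illustrative choice; the identities hold for any differentiable R(λ).

```python
# Numeric sanity check of Equations (3)-(4) with an illustrative delay curve
# R(lam) = 1/(mu - lam), so L(lam) = lam * R(lam) = lam / (mu - lam).

mu = 10.0    # service rate (e.g., BtlBW in packets/s); illustrative value
lam = 6.0    # arrival rate; illustrative value
h = 1e-6     # central finite-difference step

R = lambda x: 1.0 / (mu - x)
L = lambda x: x * R(x)

dL = (L(lam + h) - L(lam - h)) / (2 * h)   # L'(lam)
dR = (R(lam + h) - R(lam - h)) / (2 * h)   # R'(lam)

# Eq. (3): dL/dlam = lam * dR/dlam + R(lam)
print(abs(dL - (lam * dR + R(lam))) < 1e-6)   # True
# Eq. (4): R'(lam) = (L'(lam) - R(lam)) / lam
print(abs(dR - (dL - R(lam)) / lam) < 1e-6)   # True
```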
When R′(λ) → 0, the system operates near an optimal point, implying that

L′(λ) → R(λ).  (5)

A key corollary from (5) is that optimal performance is achieved when L′(λ) approaches
a target response time Rtarget. If the lowest observed RTT (either minRTT or baseRTT) is
denoted as Rmin, then

Rtarget = α · Rmin,  1 < α < 2,  (6)

where α trades off utilization against queueing delay. Approximating the derivative L′(λ)
with finite differences over consecutive measurements yields
(Lk+1 − Lk) / (λk+1 − λk) − Rk+1 = 0,  (7)
where k and k + 1 denote consecutive time steps. In the context of TCP congestion control,
the sending rate is estimated as W/R, where W is the cwnd and R is RTT. Substituting this
approximation into Equation (7), we obtain the following:
(Lk+1 − Lk) / (Wk+1/Rk+1 − Wk/Rk) − Rk+1 = 0.  (8)
A key insight of this formulation is that it eliminates the need to calculate the BDP
explicitly, allowing L and W to remain in natural units (e.g., packets), avoiding additional
bandwidth estimation overhead.
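In words, Eq. (8) says that at the optimum, the extra data-in-flight gained per unit increase in sending rate equals the RTT. A small helper evaluating the left-hand side of (8) makes this concrete (all values are illustrative):

```python
# Residual of the discrete optimality condition, Eq. (8):
#   (L_{k+1} - L_k) / (W_{k+1}/R_{k+1} - W_k/R_k) - R_{k+1} = 0.
# A controller can drive this residual toward zero. Inputs are illustrative:
# L and W in packets, R in seconds.

def optimality_residual(L_k, L_k1, W_k, W_k1, R_k, R_k1):
    rate_change = W_k1 / R_k1 - W_k / R_k   # change in sending rate (pkts/s)
    if rate_change == 0:
        raise ValueError("sending rate unchanged; residual undefined")
    return (L_k1 - L_k) / rate_change - R_k1

# At the optimum, the rate rises by 25 pkt/s, L rises by 1 packet, and the
# RTT is 0.04 s: 1/25 - 0.04 = 0, so the residual vanishes.
print(optimality_residual(L_k=345, L_k1=346, W_k=345, W_k1=346, R_k=0.04, R_k1=0.04))
```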
In TCP congestion control, the cwnd W is the primary control variable. The values of L
(data-in-flight) and R (RTT) respond dynamically to changes in W, with their measurements
directly obtained from the system once a particular W is applied. Notably, R is a responsive
variable that reflects the influence of L; as L increases, queueing delays can accumulate,
resulting in higher RTTs.
To optimize the system, two algebraic equations are employed: one predicts the future
value of L, while the other computes the optimal W based on that prediction. These
equations are solved iteratively using a boundary value approach. The system reaches an
optimal operating point when Lpredict = L and R = Rtarget.
This prediction model ensures convergence to the target RTT, Rtarget , based on the
following observations:
• If R < Rtarget : The predicted value Lpredict increases, encouraging a higher cwnd.
• If R > Rtarget : Lpredict decreases, prompting a reduction in cwnd.
• If R = Rtarget : Lpredict = L, indicating a stable state.
This mechanism actively regulates the evolution of L toward a steady state, prevent-
ing oscillations.
Computing the Next Optimal Value of W:
Once Lpredict is known, the next optimal cwnd, denoted Wnew , can be computed.
Suppose that during this iteration, the cwnd changes from W to Wnew while L evolves to
Lpredict in response. Assuming the RTT remains at Rmin (the minimal observed RTT) until
queueing begins, the collocation points are given by the following:
(L, Rmin, W) at instance k,
(Lpredict, Rmin, Wnew) at an instance between k and k + 1, and
(Lpredict, R, Wnew) at instance k + 1.

Substituting into Equation (8) and assuming that the change from Rmin to R is just
before the end of the cycle, we obtain the following:

(Lpredict − L) / (Wnew/Rmin − W/Rmin) − R = 0.  (11)
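Eq. (11) is linear in Wnew and can be solved in closed form: Wnew = W + (Rmin/R)(Lpredict − L). The sketch below illustrates the resulting update; the Lpredict values and the minimum-cwnd floor are hypothetical (in the algorithm, Lpredict comes from the prediction step that steers RTT toward Rtarget).

```python
# Closed-form solution of Eq. (11) for the next cwnd:
#   (Lpredict - L) / (Wnew/Rmin - W/Rmin) - R = 0
#   =>  Wnew = W + (Rmin / R) * (Lpredict - L)
# Lpredict is a stand-in value here; min_cwnd is a hypothetical safety floor.

def next_cwnd(W, L, L_predict, R, R_min, min_cwnd=4.0):
    W_new = W + (R_min / R) * (L_predict - L)
    return max(W_new, min_cwnd)   # never shrink below a minimum cwnd

# RTT above target -> the prediction step lowers Lpredict -> cwnd decreases:
print(next_cwnd(W=500.0, L=500.0, L_predict=420.0, R=0.06, R_min=0.04))  # ~446.67
# RTT near its minimum -> Lpredict above L -> cwnd increases:
print(next_cwnd(W=300.0, L=300.0, L_predict=345.0, R=0.041, R_min=0.04))
```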
[Figure 3: Block diagram of the proposed mechanism, with blocks A, B, and C at the sender operating over a path characterized by BtlBW, PropDelay, and buffer size, with ACKs providing timestamps, feedback, and self-clocking.]

Figure 3. A block diagram for the proposed end-to-end TCP congestion control mechanism.

Block A: This block gathers the current values of key network metrics:
• Congestion Window W
• Data-in-Flight L
• RTT R
• minRTT or baseRTT (Rmin)
Block B: Computes essential targets and predictions using the numerical framework:
• Rtarget: Target RTT computed from Rmin.
• Lpredict: Future data-in-flight based on current conditions.
• Wnew: Updated congestion window.
Block C: Uses Wnew to regulate the data sending rate, ensuring that network resources are
used efficiently while avoiding congestion.

The mechanism was implemented using ns-3.41 to evaluate both basic and practical
network conditions (see Appendix B). Two variants, TCP QtCol and TCP QtColFair, are
provided to explore the effectiveness of the proposed scheme under different scenarios.
These implementations are available on GitHub along with visualization and computational
tools using Python [58].

4.3.1. Basic Implementation

The TcpQtCol class, shown in Figure 4, extends the TcpNewReno class in ns-3; its
operation is given by the pseudocode in Algorithm 1. This variant assumes constant
network conditions without considering fairness or varying PropDelay. It uses baseRTT
instead of minRTT to define Rmin.

[Figure 4: TcpQtCol extends TcpNewReno with the following members:
m_cWnd : uint32_t ▷ previous or measured cwnd in packets
m_dataInFlight : uint32_t ▷ measured dataInFlight in packets
m_minCwndAllowed : uint32_t ▷ minimum cwnd allowed
m_baseRtt : Time ▷ lowest RTT observed throughout the connection
m_lastRtt : Time ▷ latest RTT measured
m_rttTarget : Time ▷ target RTT
m_rttTargetAlpha : double ▷ multiplier of m_minRTT to determine m_rttTarget
m_cntRtt : uint32_t ▷ RTTs count, to allow more than one RTT measurement
m_begSndNxt : SequenceNumber32 ▷ sequence number for next send
IncreaseWindow(...) ▷ calls ComputeCwnd(...) to update cwnd if conditions allow
PktsAcked(...) ▷ measures RTT on every ACK and updates m_baseRtt with the lowest RTT so far
ComputeCwnd(...) ▷ computes cwnd using the developed computational framework]

Figure 4. TCP QtCol UML class diagram for basic implementation.
[UML class diagram for the practical implementation: TcpQtColFair extends TcpNewReno with the following members:
m_cWnd : uint32_t ▷ previous or measured cwnd in packets
m_priorCwnd : uint32_t ▷ last cwnd before probeRtt
m_dataInFlight : uint32_t ▷ measured dataInFlight in packets
m_minCwndAllowed : uint32_t ▷ minimum cwnd allowed
m_minRtt : Time ▷ lowest RTT since last probe or update
m_lastRtt : Time ▷ latest RTT measured
m_minRttFilterLen : Time ▷ duration before minRTT is refreshed
m_minRttStamp : Time ▷ last minRTT refresh
m_rttTarget : Time ▷ target RTT
m_minRttExpired : bool ▷ flag for minRTT refresh
m_rttTargetAlpha : double ▷ multiplier
m_probeMinRttStamp : Time ▷ minRTT probe start
m_probeRttDuration : Time ▷ minRTT probe duration
m_probeRtt : bool ▷ RTT probing flag
m_probeRttRecover : bool ▷ flag to use InFlight value before probeRtt
m_cntRtt : uint32_t ▷ RTTs count
m_begSndNxt : SequenceNumber32 ▷ next sequence number after RTT
IncreaseWindow(...) ▷ calls ComputeCwnd(...) to update cwnd if conditions allow
PktsAcked(...) ▷ measures RTT on every ACK and calls UpdateMinRtt(...) to update minRTT
UpdateMinRtt(...) ▷ updates minRTT
ComputeCwnd(...) ▷ computes cwnd using the developed computational framework]
The TCP maximum segment size (MaxSegSize) was set to 1448 bytes, based on Eth-
ernet’s 1500-byte Maximum Transmission Unit (MTU), subtracting 52 bytes for TCP and
IP headers [59,60]. The BtlBW and PropDelay were configured at 100 Mbps and 20 ms,
respectively, resulting in a BDP of approximately 345 packets. The network buffer size was
set to 1725 packets (5 × BDP), ensuring sufficient capacity to test the system’s ability to
avoid queueing delays and packet loss.
Access links were assigned 1 Gbps bandwidth and 0.1 ms PropDelay to ensure the
bottleneck link remained the primary constraint. In certain simulations, the buffer size
was varied between 1 × BDP and 10 × BDP, allowing for the exploration of different
congestion scenarios. For a buffer size of 10 × BDP, the maximum possible cwnd was
11 × BDP = 3795 packets. To prevent buffer constraints from limiting throughput, the
SndBufSize and RcvBufSize were set well above 3795 packets.
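The sizing arithmetic above can be reproduced directly; the helper script below is only an illustration, with all constants taken from the text:

```python
# Reproduction of the simulation sizing arithmetic described in this section.

MTU = 1500                      # bytes, Ethernet
HEADERS = 52                    # bytes, TCP + IP headers
MAX_SEG_SIZE = MTU - HEADERS    # 1448 bytes

btlbw_bps = 100e6               # bottleneck bandwidth, 100 Mbps
prop_delay_s = 0.020            # one-way PropDelay, 20 ms
base_rtt_s = 2 * prop_delay_s   # 40 ms round trip

# BDP in packets: bits in flight over one RTT divided by bits per packet.
bdp_packets = btlbw_bps * base_rtt_s / (MAX_SEG_SIZE * 8)
print(f"BDP ~ {bdp_packets:.0f} packets")                                  # ~345
print(f"buffer (5 x BDP) = {5 * round(bdp_packets)} packets")              # 1725
print(f"max cwnd at 10 x BDP buffer = {11 * round(bdp_packets)} packets")  # 3795
```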
Simulations tested TCP QtCol and TCP QtColFair with α values of 1.2 and 1.5, repre-
senting small and large values, respectively. Scenarios included
Figure 7. Data-in-flight for TCP CUBIC and TCP BBR in a single-flow scenario. Values within the
range of BDP and 2 × BDP are considered acceptable.
Figure 8. RTT or response time for TCP CUBIC and TCP BBR in a single-flow scenario. Values within
the range of baseRTT and 2 × baseRTT are considered acceptable.
In contrast, TCP BBR keeps data-in-flight near the BDP (minRTT × BtlBW), indepen-
dent of buffer size. However, bandwidth and RTT probing mechanisms cause oscillations
between 1 × BDP and 2 × BDP (345 to 690 packets) and RTT between baseRTT and
2 × baseRTT (0.04 to 0.08 s).
TCP QtCol performs similarly to TCP BBR in controlling data-in-flight and RTT, as
shown in Figures 9 and 10, but with dampened oscillations. For α values of 1.2 and 1.5,
data-in-flight converges to 400 and 500 packets, respectively (below 2 × BDP), and RTT
stabilizes at 48 ms and 60 ms. TCP QtCol achieves equilibrium faster than TCP BBR and
maintains stable performance without oscillations under steady conditions.
Figure 9. Data-in-flight for TCP QtCol in a single-flow scenario. Values within the range of BDP and
2 × BDP are considered acceptable. The line labeled BDP = µ × baseRTT represents the convergence
value of data-in-flight when the RTT equals the baseRTT. The lines labeled 1.2 × BDP and 1.5 × BDP
represent the expected convergence values of data-in-flight when the target RTT is 1.2 × baseRTT
and 1.5 × baseRTT, respectively.
Figure 10. RTT or response time for TCP QtCol in a single-flow scenario. Values within the
range of baseRTT and 2 × baseRTT are considered acceptable. The lines labeled 1.2 × baseRTT
and 1.5 × baseRTT represent the expected convergence values of RTT when the target RTT is
1.2 × baseRTT and 1.5 × baseRTT, respectively.
disturbances. TCP QtColFair quickly dampens oscillations that may arise from external
factors, outperforming both TCP CUBIC and TCP BBR.
Figure 11. Data-in-flight for TCP QtColFair in a single-flow scenario. Values within the range of BDP
and 2 × BDP are considered acceptable. The line labeled BDP = µ × baseRTT represents the convergence
value of data-in-flight when the RTT equals the baseRTT. The lines labeled 1.2 × BDP and 1.5 × BDP
represent the expected convergence values of data-in-flight when the target RTT is 1.2 × baseRTT
and 1.5 × baseRTT, respectively.
Figure 12. RTT or response time for TCP QtColFair in a single-flow scenario. Values within the
range of baseRTT and 2 × baseRTT are considered acceptable. The lines labeled 1.2 × baseRTT
and 1.5 × baseRTT represent the expected convergence values of RTT when the target RTT is
1.2 × baseRTT and 1.5 × baseRTT, respectively.
Figure 13. Data-in-flight box-plot for TCP QtCol and TCP QtColFair compared with TCP CU-
BIC and TCP BBR in a single-flow scenario. Values within the range of BDP and 2 × BDP are
considered acceptable.
Figure 14. RTT box-plot for TCP QtCol and TCP QtColFair compared with TCP CUBIC and
TCP BBR in a single-flow scenario. Values within the range of baseRTT and 2 × baseRTT are
considered acceptable.
In summary, TCP QtCol and TCP QtColFair provide better RTT control, oscillation
elimination, and optimization of data-in-flight compared to TCP CUBIC and TCP BBR,
making them more effective at avoiding delays and ensuring smooth network performance.
Figure 15. RTT for TCP CUBIC as BtlBW changes. Values within the range of baseRTT and
2 × baseRTT are considered acceptable. The lines labeled 1.2 × baseRTT and 1.5 × baseRTT represent
the expected median RTT when the target RTT is 1.2 × baseRTT and 1.5 × baseRTT, respectively.
Figure 18. Data-in-flight for TCP QtColFair compared with TCP CUBIC and TCP BBR in multiple-flow
scenario. Values within the range of BDP and 2 × BDP are considered acceptable.
Figure 19. RTT for TCP QtColFair compared with TCP CUBIC and TCP BBR in multiple-flow
scenarios. Values within the range of baseRTT and 2 × baseRTT are considered acceptable.
Figure 20. Effective network utilization and goodput for the algorithms in multiple-flow scenarios.
Figure 22. Convergence to fair sharing of available bandwidth by five TCP QtColFair flows.
Figure 24. Goodput and effective network utilization as network buffer size decreases below
one BDP.
Figure 26. Goodput and effective network utilization as network buffer size increases above one BDP.
Figure 28. Average RTT comparison as network buffer becomes deeper. Values within the range of
baseRTT and 2 × baseRTT are considered acceptable.
6. Conclusions
This article introduced TCP QtColFair, an innovative end-to-end TCP CCA designed
to avoid queueing delays while optimizing data-in-flight in alignment with Kleinrock’s
optimality principle. As a delay-based algorithm, TCP QtColFair differentiates itself from
existing approaches through the following key innovations:
• Explicit Target RTT Specification: The target RTT is defined as α × minRTT, where α is
fine-tuned to balance network utilization and congestion avoidance effectively.
• Damping Mechanism Based on Harmonic Motion: A novel damping framework solves
an optimality equation to regulate the sending rate and data-in-flight smoothly over
time, ensuring stability and efficiency.
• Bandwidth Independence: Unlike other Delay-Based algorithms, TCP QtColFair does not
rely on bandwidth estimation, minimizing the impact of measurement inaccuracies
and enhancing robustness.
The performance of TCP QtColFair was evaluated against TCP CUBIC (loss-based)
and TCP BBR (delay-based) in multi-flow scenarios. Key findings include the following:
• TCP CUBIC: Exhibited large oscillations, with data-in-flight exceeding the bottleneck
BDP and RTT well above minRTT, leading to inefficient queue utilization.
• TCP BBR: Achieved Kleinrock’s optimality on average but introduced oscillations due
to its bandwidth and RTT probing mechanisms.
• TCP QtColFair: Consistently maintained data-in-flight at approximately α × BDP and
RTT near α × minRTT, outperforming TCP BBR by avoiding queueing delays more
effectively, particularly with smaller α values (e.g., α = 1.2).
Additionally, TCP QtColFair demonstrated superior stability, eliminating oscillations
over time in undisturbed conditions, unlike TCP BBR’s bandwidth probing and TCP CU-
BIC’s inherent oscillatory behavior. It achieved excellent goodput, reaching 96% utilization
with a 5 × BDP buffer size, outperforming TCP BBR (94%) and TCP CUBIC (93%). In
multi-flow scenarios, all algorithms exhibited fairness scores exceeding 0.9.
Future Work and Enhancements
Despite its strong performance, further improvements and research directions can
refine TCP QtColFair and explore its potential in broader contexts.
Improvements needed:
• Handling Packet Losses: The current response to packet losses is akin to TCP NewReno,
which can be overly aggressive. Future versions will enhance loss-handling mecha-
nisms, particularly for networks with shallow buffers or high loss rates.
• Improved RTT Refresh Mechanism: A more robust change-point detection method
is required to adjust minRTT dynamically, especially in multi-flow scenarios with
significant RTT fluctuations. Preliminary studies indicate that TCP BBR also struggles
in such conditions.
Further Research Directions:
• Comparative Analysis: Conduct detailed evaluations against newer versions of
TCP BBR (e.g., BBRv2 and BBRv3) to benchmark performance under diverse
network conditions.
• Inter-Fairness and RTT Fairness: Investigate fairness across different algorithms (inter-
fairness) and among flows with varying round-trip times (RTT fairness).
• Learning-Based Enhancements: Explore the integration of the proposed mechanisms
with machine learning algorithms. Machine learning can analyze global historical
patterns to predict network behavior, while the proposed mechanism adapts in real
time to dynamic network conditions.
• Optimal Control Applications: Study the application of classical optimal control theory
to refine congestion control strategies and optimize system performance.
Further simulations and evaluations:
• Topology-Based Analysis: Evaluate performance in complex simulation environments,
such as parking lot and randomized topologies, and in emerging architectures like
5G networks.
• Stochastic Network Scenarios: Assess performance under stochastic network conditions,
incorporating random variations in bandwidth, delay, and packet loss rates.
• Real-World Network Testing: Validate performance in live network environments to
ensure real-world feasibility and robustness.
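For the stochastic-scenario bullet above, one simple way to generate such conditions is to draw per-run parameters from randomized distributions. The distributions, nominal values, and bounds below are illustrative assumptions only, not the evaluation setup used in this paper:

```python
import random

def sample_scenario(rng, base_bw_mbps=10.0, base_delay_ms=20.0):
    """Draw one randomized scenario around nominal bandwidth and delay;
    the loss rate is drawn from a light-tailed distribution and capped
    at 5%. All distributions and bounds are illustrative."""
    return {
        "bw_mbps": max(1.0, rng.gauss(base_bw_mbps, 0.2 * base_bw_mbps)),
        "delay_ms": max(1.0, rng.gauss(base_delay_ms, 0.25 * base_delay_ms)),
        "loss_rate": min(0.05, rng.expovariate(1000.0)),
    }

rng = random.Random(42)  # fixed seed so a scenario set is reproducible
scenarios = [sample_scenario(rng) for _ in range(100)]
```

Each sampled dictionary can then parameterize one simulation run, so aggregate results reflect performance over a distribution of conditions rather than a single operating point.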
Funding: This research received no external funding. The APC was funded by the SENTECH Chair in
Broadband Wireless Multimedia Communications, University of Pretoria (https://www.up.ac.za/sentech-chair-in-broadband-wireless-multimedia-communication, accessed on 4 November 2024).
Data Availability Statement: The original data presented in this study are openly available at
https://github.com/dumisa/TowardsOptimalTcp (accessed on 2 November 2024).
Acknowledgments: The author would like to thank Sipho Khumalo, Moshe Masota, Mfanasibili
Ngwenya and Phindile Ngwenya for their support, reviews and comments.
Conflicts of Interest: Author D.W.N. was employed by the company SENTECH SOC Limited.
The remaining authors declare that the research was conducted in the absence of any commercial or
financial relationships that could be construed as a potential conflict of interest.
Abbreviations
The following abbreviations and definitions are used in this manuscript:
ACK, ACKs Acknowledgement, acknowledgements.
AIMD Additive-Increase, Multiplicative-Decrease.
baseRTT Lowest RTT measured over the entire TCP connection.
BDP Bandwidth-delay product, usually refers to the network BDP given by minRTT × BtlBW. Also used as a unit of measure, e.g., for buffer size, data-in-flight, cwnd, etc.
BtlBW Bottleneck bandwidth. Equivalent to µ.
CCA Congestion control algorithm.
cwnd Congestion window. Depending on context, may refer to the cwnd size or value.
ECN Explicit congestion notification.
IP Internet Protocol.
MIMD Multiplicative-Increase, Multiplicative-Decrease.
minRTT Lowest RTT observed over a specific time window.
ML Machine Learning.
MTU Maximum Transmission Unit.
PropDelay One-way propagation delay.
RTT Round-trip time. Equivalent to response time in a queueing system. While in real life RTT is not exactly twice the one-way delay, this article presumes RTT = 2 × latency.
SCS Successive constraint satisfaction.
[Box-plot legend fragment: outliers are data points lying beyond 1.5 × IQR from the quartiles, marked individually; they represent rare or extreme values that deviate from the general distribution (less than 0.8% of values in a normal distribution). No outliers are plotted in the referenced example.]
References
1. Yuan, Y. Research on TCP Congestion Control Strategy Based on Proximal Policy Optimization. In Proceedings of the 2023 IEEE
11th Joint International Information Technology and Artificial Intelligence Conference (ITAIC), Chongqing, China, 8–10 December
2023; Volume 11, pp. 1934–1938.
2. Verma, L.P.; Sharma, V.K.; Kumar, M.; Kanellopoulos, D. A novel Delay-based Adaptive Congestion Control TCP variant. Comput.
Electr. Eng. 2022, 101, 108076. [CrossRef]
3. Varma, S. End-to-End versus Hop-by-Hop Congestion Control. In Internet Congestion Control; Romer, B., Ed.; Elsevier: Amsterdam,
The Netherlands, 2015; p. 32.
4. Baker, F.; Fairhurst, G. IETF Recommendations Regarding Active Queue Management; RFC 7567; IETF: Wilmington, DC, USA, 2015.
5. Papadimitriou, D.; Zahariadis, T.; Martinez-Julia, P.; Papafili, I.; Morreale, V.; Torelli, F.; Sales, B.; Demeester, P. Design Principles
for the Future Internet Architecture. In The Future Internet: Future Internet Assembly 2012: From Promises to Reality; Lecture Notes
in Computer Science; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7281, pp. 55–67.
6. Papadimitriou, D.; Welzl, M.; Scharf, M.; Briscoe, B. Open Research Issues in Internet Congestion Control; RFC 6077; IETF: Wilmington,
DC, USA, 2011; pp. 1–51.
7. Lu, Y.; Ma, X.; Cui, C. DCCS: A dual congestion control signals based TCP for datacenter networks. Comput. Netw. 2024, 247,
110457. [CrossRef]
8. Mishra, A.; Sun, X.; Jain, A.; Pande, S.; Joshi, R.; Leong, B. The Great Internet TCP Congestion Control Census. In SIGMETRICS
’20, Proceedings of the ACM SIGMETRICS/International Conference on Measurement and Modeling of Computer Systems, Boston, MA,
USA, 8–12 June 2020; Association for Computing Machinery: New York, NY, USA, 2020; Volume 48, pp. 59–60.
9. Xu, L.; Ha, S.; Rhee, I.; Goel, V.; Eggert, L. CUBIC for Fast and Long-Distance Networks; RFC 9438; IETF: Wilmington, DC, USA, 2023.
10. Ha, S.; Rhee, I.; Xu, L. CUBIC: A New TCP-Friendly High-Speed TCP Variant. ACM SIGOPS Oper. Syst. Rev. 2008, 42, 64–74.
[CrossRef]
11. Internet Engineering Task Force. IETF Homepage. Available online: https://www.ietf.org (accessed on 4 November 2024).
12. Alramli, O.I.; Hanapi, Z.M.; Othman, M.; Ahmad, I.; Samian, N. RTTV-TCP: Adaptive congestion control algorithm based on RTT
variations for mmWave networks. Ad Hoc Netw. 2024, 164, 103611. [CrossRef]
13. Shrestha, S.K.; Pokhrel, S.R.; Kua, J. On the Fairness of Internet Congestion Control over WiFi with Deep Reinforcement Learning.
Future Internet 2024, 16, 330. [CrossRef]
14. Naqvi, A.H.; Hilman, H.M.; Anggorojati, B. Implementability improvement of deep reinforcement learning based congestion
control in cellular network. Comput. Netw. 2023, 233, 109874. [CrossRef]
15. Diel, G.; Miers, C.C.; Pillon, M.A.; Koslovski, G.P. RSCAT: Towards zero touch congestion control based on actor-critic reinforce-
ment learning and software-defined networking. J. Netw. Comput. Appl. 2023, 215, 103639. [CrossRef]
16. Ma, S.; Jiang, J.; Wang, W.; Li, B. Fairness of Congestion-Based Congestion Control: Experimental Evaluation and Analysis. arXiv
2017, arXiv:1706.09115.
17. Kleinrock, L. Internet congestion control using the power metric: Keep the pipe just full, but no fuller. Ad Hoc Netw. 2018, 80,
142–157. [CrossRef]
18. Al-Saadi, R.; Armitage, G.; But, J.; Branch, P. A Survey of Delay-Based and Hybrid TCP Congestion Control Algorithms. IEEE
Commun. Surv. Tutor. 2019, 21, 3609–3638. [CrossRef]
19. Zheng, S.; Liu, J.; Yan, X.; Xing, Z.; Di, X.; Qi, H. BBR-R: Improving BBR performance in multi-flow competition scenarios. Comput.
Netw. 2024, 254, 110816. [CrossRef]
20. Jain, R. A Delay-Based approach for congestion avoidance in interconnected heterogeneous computer networks. ACM SIGCOMM
Comput. Commun. Rev. 1989, 19, 56–71. [CrossRef]
21. Rodríguez-Pérez, M.; Herrería-Alonso, S.; Fernández-Veiga, M.; López-García, C. Common Problems in Delay-Based Congestion
Control Algorithms: A Gallery of Solutions. Eur. Trans. Telecommun. 2011, 22, 168–178. [CrossRef]
22. Mittal, R.; Lam, V.T.; Dukkipati, N.; Blem, E.; Wassel, H.; Ghobadi, M.; Vahdat, A.; Wang, Y.; Wetherall, D.; Zats, D. TIMELY:
RTT-based Congestion Control for the Datacenter. Comput. Commun. Rev. 2015, 45, 537–550. [CrossRef]
23. Cao, Y.; Jain, A.; Sharma, K.; Balasubramanian, A.; Gandhi, A. When to use and when not to use BBR: An empirical analysis and
evaluation study. In IMC ’19, Proceedings of the ACM Internet Measurement Conference, Amsterdam, The Netherlands, 21–23 October
2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 130–136.
24. Cardwell, N.; Cheng, Y.; Gunn, S.C.; Yeganeh, S.H.; Jacobson, V. BBR: Congestion-Based Congestion Control. Commun. ACM
2017, 60, 58–66. [CrossRef]
25. Liao, X.; Tian, H.; Zeng, C.; Wan, X.; Chen, K. Astraea: Towards Fair and Efficient Learning-based Congestion Control. In EuroSys
’24, Proceedings of the 2024 European Conference on Computer Systems, Athens, Greece, 22–25 April 2024; Association for Computing
Machinery: New York, NY, USA, 2024; pp. 99–114.
26. Little, J.D.C. Little’s law as viewed on its 50th anniversary. Oper. Res. 2011, 59, 536–549. [CrossRef]
27. Ngwenya, D.; Hlophe, M.C.; Maharaj, B.T. Towards Optimal End-to-end TCP Congestion Control Using Queueing-Based
Dynamical Systems Theory. TechRxiv. 13 May 2024. Available online: https://www.techrxiv.org/doi/full/10.36227/techrxiv.171560562.26289531/v1 (accessed on 4 November 2024).
28. Lar, S.; Liao, X. An initiative for a classified bibliography on TCP/IP congestion control. J. Netw. Comput. Appl. 2013, 36, 126–133.
[CrossRef]
29. Afanasyev, A.; Tilley, N.; Reiher, P.; Kleinrock, L. Host-to-host congestion control for TCP. IEEE Commun. Surv. Tutor. 2010, 12,
304–342. [CrossRef]
30. Lorincz, J.; Klarin, Z.; Ožegović, J. A Comprehensive Overview of TCP Congestion Control in 5G Networks: Research Challenges
and Future Perspectives. Sensors 2021, 21, 4510. [CrossRef]
31. Bruhn, P.; Kühlewind, M.; Muehleisen, M. Performance and improvements of TCP CUBIC in low-delay cellular networks. Comput.
Netw. 2023, 224, 109609. [CrossRef]
32. Jacobson, V. Congestion Avoidance and Control. ACM SIGCOMM Comput. Commun. Rev. 1988, 18, 314–329. [CrossRef]
33. Henderson, T.; Floyd, S.; Gurtov, A.; Nishida, Y. The NewReno Modification to TCP’s Fast Recovery Algorithm; RFC 6582; IETF:
Wilmington, DC, USA, 2012.
34. Allman, M.; Paxson, V.; Blanton, E. TCP Congestion Control; RFC 5681; IETF: Wilmington, DC, USA, 2009.
35. Arun, V.; Balakrishnan, H. Copa: Practical Delay-Based Congestion Control for the Internet. In Proceedings of the 15th USENIX
Symposium on Networked Systems Design and Implementation (NSDI 18), Renton, WA, USA, 9–11 April 2018.
36. Tafa, Z.; Milutinovic, V. The Emerging Internet Congestion Control Paradigms. In Proceedings of the 2022 11th Mediterranean
Conference on Embedded Computing (MECO), Budva, Montenegro, 7–10 June 2022; pp. 7–10.
37. Boryło, P.; Biernacka, E.; Domżał, J.; Kądziołka, B.; Kantor, M.; Rusek, K.; Skała, M.; Wajda, K.; Wojcik, R.; Zabek, W. A tutorial on
reinforcement learning in selected aspects of communications and networking. Comput. Commun. 2023, 208, 89–110. [CrossRef]
38. Jay, N.; Rotman, N.H.; Godfrey, P.B.; Schapira, M.; Tamar, A. A deep reinforcement learning perspective on internet congestion
control. In Proceedings of the 36th International Conference on Machine Learning (ICML 2019), Long Beach, CA, USA, 9–15 June
2019; Volume 97, pp. 5390–5399.
39. Zhang, L.; Cui, Y.; Wang, M.; Member, G.S.; Zhu, K. DeepCC: Bridging the Gap Between Congestion Control and Applications
via Multiobjective Optimization. IEEE/ACM Trans. Netw. 2022, 30, 2274–2288. [CrossRef]
40. Piotrowska, A. Performance Evaluation of TCP BBRv3 in Networks with Multiple Round Trip Times. Appl. Sci. 2024, 14, 5053.
[CrossRef]
41. Stidham, S.J. Optimal Design of Queueing Systems, 1st ed.; Chapman and Hall/CRC: New York, NY, USA, 2009.
42. Stidham, S.J. Optimal Control of Admission to a Queueing System. IEEE Trans. Automat. Control 1985, 30, 705–713. [CrossRef]
43. Brakmo, L.S.; Peterson, L.L. TCP Vegas: End to End Congestion Avoidance on a Global Internet. IEEE J. Sel. Areas Commun. 1995,
13, 1465–1480. [CrossRef]
44. Varma, S. Analytic Modeling of Congestion Control. In Internet Congestion Control; Romer, B., Ed.; Elsevier: Amsterdam, The
Netherlands, 2015; pp. 49–83.
45. Harchol-Balter, M. Queueing Theory Terminology. In Performance Modeling and Design of Computer Systems: Queueing Theory in
Action; Cambridge University Press: Cambridge, UK, 2013; Chapter 2, pp. 13–26.
46. Dordal, P.L. An Introduction to Computer Networks; Loyola University Chicago: Chicago, IL, USA, 2018. Available online:
https://intronetworks.cs.luc.edu (accessed on 4 November 2024).
47. Jain, R.; Chiu, D.; Hawe, W. A Quantitative Measure of Fairness and Discrimination for Resource Allocation in Shared Computer
Systems. arXiv 1984. Available online: https://arxiv.org/abs/cs/9809099 (accessed on 4 November 2024).
48. Dzivhani, M.; Ngwenya, D.; Masonta, M.; Ouahada, K. TCP congestion control macroscopic behaviour for combinations of source
and router algorithms. In Proceedings of the 2018 IEEE 7th International Conference on Adaptive Science & Technology (ICAST),
Accra, Ghana, 22–24 August 2018.
49. Geist, M.; Jaeger, B. Overview of TCP Congestion Control Algorithms. In Proceedings of the Seminar Innovative Internet Technologies
and Mobile Communications (IITM); Chair of Network Architectures and Services: Garching, Germany, 2019; pp. 11–15. [CrossRef]
50. Chiu, D.; Jain, R. Analysis of the increase and decrease algorithms for congestion avoidance in computer networks. Comput. Netw.
ISDN Syst. 1989, 17, 1–14. [CrossRef]
51. Lahanas, A.; Tsaoussidis, V. Exploiting the efficiency and fairness potential of AIMD-based congestion avoidance and control.
Comput. Netw. 2003, 43, 227–245. [CrossRef]
52. Kelly, F. Fairness and stability of end-to-end congestion control. Eur. J. Control 2003, 9, 159–176. [CrossRef]
53. Gorinsky, S.; Georg, M.; Podlesny, M.; Jechlitschek, C. A Theory of Load Adjustments and its Implications for Congestion Control.
J. Internet Eng. 2007, 1, 82–93.
54. Hock, M.; Bless, R.; Zitterbart, M. Experimental evaluation of BBR congestion control. In Proceedings of the 2017 IEEE 25th
International Conference on Network Protocols (ICNP), Toronto, ON, Canada, 10–13 October 2017.
55. Pan, W.; Tan, H.; Li, X.; Xu, J.; Li, X. Improvement of BBRv2 Congestion Control Algorithm Based on Flow-aware ECN. Secur.
Commun. Netw. 2022, 2022, 1218245. [CrossRef]
56. Biral, F.; Bertolazzi, E.; Bosetti, P. Notes on numerical methods for solving optimal control problems. IEEJ J. Ind. Appl. 2016, 5,
154–166. [CrossRef]
57. Rao, A.V. A Survey of Numerical Methods for Optimal Control. Adv. Astronaut. Sci. 2009, 135, 497–528.
58. Ngwenya, D. TCP QtCol and TCP QtColFair Implementation and Simulation [Source Code]. 2024. Available online: https:
//github.com/dumisa/TowardsOptimalTcp (accessed on 4 November 2024).
59. Borman, D. TCP Options and Maximum Segment Size (MSS); RFC 6691; IETF: Wilmington, DC, USA, 2012.
60. Borman, D.; Braden, R.T.; Jacobson, V.; Scheffenegger, R. TCP Extensions for High Performance; RFC 7323; IETF: Wilmington, DC,
USA, 2014.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.