Classical TCP Congestion Control

Classic TCP congestion control includes TCP Tahoe and TCP Reno, developed to manage network congestion and prevent throughput collapse. Key mechanisms include Slow Start, Congestion Avoidance, and AIMD, which ensure efficient data transmission while responding to network conditions. TCP Reno introduces enhancements like Fast Retransmit and Fast Recovery to improve loss handling and throughput compared to Tahoe.

Overview of Classic TCP Congestion Control

• Definition and Historical Context: Classic TCP congestion control encompasses the original
algorithms—TCP Tahoe (1988) and TCP Reno (1990)—developed to manage network
congestion. Introduced by Van Jacobson, these responded to the 1986 Internet congestion
collapse, where uncontrolled retransmissions reduced throughput to kilobytes per second.
They form the bedrock of TCP’s reliability over unreliable networks.

• Core Components: The system hinges on the congestion window (cwnd) to limit in-flight
data, the slow start threshold (ssthresh) to toggle between growth phases, and implicit
feedback (timeouts or duplicate ACKs) to detect congestion. It’s end-to-end, requiring no
router intervention, which ensured scalability in the early Internet.

• Objectives: The goals are to prevent congestion collapse (where retransmissions overwhelm
capacity), maximize throughput, minimize delay, and ensure fairness among flows. This is
achieved via Additive Increase, Multiplicative Decrease (AIMD), a cornerstone of classic TCP.

• Evolution Trigger: Before Tahoe, TCP lacked congestion awareness, treating all losses as
errors to retransmit. The 1980s saw networks grind to a halt as senders flooded links,
necessitating these controls to stabilize the growing Internet.

TCP Tahoe: The First Classic Implementation

• Slow Start Mechanism:

o Purpose and Rationale: Slow Start rapidly probes available bandwidth at connection
start or post-severe loss, avoiding the inefficiency of a fixed low rate. It assumes the
network’s capacity is unknown initially.

o Process: cwnd starts at 1 MSS (e.g., 512 bytes in 1988, later 1460 bytes). Each ACK
increases cwnd by 1 MSS, doubling it per RTT (exponential growth). This mimics a
binary search for capacity.

o Detailed Example: MSS = 1460 bytes, RTT = 100 ms; cwnd evolves as 1 → 2 → 4 → 8 MSS over 3 RTTs (300 ms), reaching 11.7 KB, or about 117 KB/s. By RTT 5, cwnd = 32 MSS (about 467 KB/s).

o Exit Conditions: Slow Start stops when cwnd reaches ssthresh (e.g., 64 KB, a high initial guess) or when loss occurs; in Tahoe, duplicate ACKs are handled the same way as a timeout.

o Technical Detail: Rate = cwnd / RTT doubles per RTT, e.g., 14.6 KB/s → 29.2 KB/s →
58.4 KB/s, limited by the bottleneck link.
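
A minimal sketch of the per-ACK growth described above, assuming one ACK per delivered segment, no loss, and the constants from the example (variable names are illustrative, not from any particular TCP implementation):

```python
MSS = 1460              # bytes per segment
RTT = 0.1               # seconds
cwnd = 1 * MSS          # congestion window starts at one segment
ssthresh = 64 * 1024    # high initial guess, as in the notes above

rtt_count = 0
while cwnd < ssthresh:                        # exits to Congestion Avoidance at ssthresh (or on loss)
    for _ in range(cwnd // MSS):              # one ACK per in-flight segment
        cwnd = min(cwnd + MSS, ssthresh)      # +1 MSS per ACK -> cwnd doubles each RTT
    rtt_count += 1
    print(f"RTT {rtt_count}: cwnd = {cwnd // MSS} MSS, ~{cwnd / RTT / 1000:.0f} KB/s")
```

Running it reproduces the 1 → 2 → 4 → 8 → 16 → 32 MSS progression before cwnd is capped at ssthresh.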

• Congestion Avoidance Mechanism:

o Purpose and Design: After Slow Start, Congestion Avoidance cautiously increases the
rate to avoid overshooting the network’s capacity, transitioning from exponential to
linear growth.

o Process: cwnd grows by 1 MSS per RTT. For each ACK, cwnd += MSS × (MSS / cwnd),
roughly 1 MSS per full window’s ACKs, ensuring gradual probing.
o Detailed Example: cwnd = 10 MSS (14.6 KB); after 1 RTT (10 ACKs), cwnd ≈ 11 MSS
(16 KB). After 5 RTTs, cwnd = 15 MSS (21.9 KB, 219 KB/s).

o Trigger: Entered when cwnd ≥ ssthresh, typically after Slow Start or loss recovery.

o Technical Detail: Rate increases by MSS / RTT per RTT (e.g., +14.6 KB/s every
100 ms), a slow climb to prevent sudden overload.
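
The per-ACK update rule quoted above can be checked with a tiny sketch (one ACK per segment; names are illustrative). The fractional increments add up to roughly one MSS per RTT:

```python
MSS = 1460

def ca_on_ack(cwnd: float) -> float:
    # Additive increase: each ACK grows cwnd by MSS * (MSS / cwnd),
    # so a full window of ACKs adds about one MSS in total.
    return cwnd + MSS * MSS / cwnd

cwnd = 10 * MSS                     # 14.6 KB, as in the example above
for _ in range(10):                 # one RTT's worth of ACKs for 10 segments
    cwnd = ca_on_ack(cwnd)
print(f"after 1 RTT: {cwnd / MSS:.2f} MSS")   # ~10.95 MSS, i.e. roughly +1 MSS
```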

• Loss Handling (Timeout-Based):

o Purpose and Assumption: Treats packet loss as a congestion signal, assuming buffers
overflowed rather than data corrupted (valid for wired networks then).

o Process: On timeout, ssthresh is set to max(cwnd / 2, 2 MSS), cwnd resets to 1 MSS, and Slow Start restarts. This aggressive backoff clears the network.

o Detailed Example: cwnd = 16 MSS (23.4 KB), timeout → ssthresh = 8 MSS, cwnd = 1
MSS. Next RTT: cwnd = 2 MSS (2.9 KB, 29.2 KB/s).

o Drawback and Impact: No distinction between a single loss and multiple losses: the full reset to 1 MSS slows recovery, e.g., dropping from 234 KB/s to 14.6 KB/s.

o Technical Detail: Timeout (RTO) = SRTT + 4 × RTTVAR (smoothed RTT plus four times the RTT variance), typically 1-2 seconds, far longer than an RTT.
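
A sketch of the timeout reaction and the RTO estimator mentioned above, assuming the standard smoothing gains of 1/8 and 1/4; real stacks additionally clamp RTO to a minimum (commonly 1 second), and the helper names here are illustrative:

```python
MSS = 1460
ALPHA, BETA = 1 / 8, 1 / 4          # standard SRTT / RTTVAR smoothing gains

def update_rto(srtt, rttvar, sample):
    # RTO = SRTT + 4 * RTTVAR; the variance is updated with the old SRTT first.
    rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
    srtt = (1 - ALPHA) * srtt + ALPHA * sample
    return srtt, rttvar, srtt + 4 * rttvar

def on_timeout(cwnd):
    # Tahoe: remember half the window as ssthresh (floor of 2 MSS),
    # collapse cwnd to one segment, and re-enter Slow Start.
    ssthresh = max(cwnd // 2, 2 * MSS)
    return 1 * MSS, ssthresh

srtt, rttvar, rto = update_rto(srtt=0.1, rttvar=0.05, sample=0.12)
cwnd, ssthresh = on_timeout(16 * MSS)        # -> cwnd = 1 MSS, ssthresh = 8 MSS
```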

• AIMD in Tahoe:

o Principle and Logic: Additive Increase (+1 MSS/RTT) during Congestion Avoidance
ensures gradual growth; Multiplicative Decrease (cwnd /= 2) on loss provides rapid
relief.

o Fairness Mechanism: Two flows (e.g., cwnd = 10 vs. 20 MSS, capacity = 25 MSS)
adjust toward equality: loss → 5 vs. 10, then 6 vs. 11 over RTTs.

o Stability Benefit: Large decreases prevent persistent overload; small increases avoid
oscillation, stabilizing shared links.

o Historical Role: AIMD in Tahoe proved congestion could be managed without router
changes, a breakthrough for the decentralized Internet.
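
The fairness argument can be made concrete with a toy two-flow AIMD loop (a hypothetical fluid model, not a packet-level simulation): both flows add 1 MSS per RTT and halve whenever their combined windows exceed the link capacity, so the gap between them shrinks by half on every loss event:

```python
capacity = 25            # bottleneck capacity in MSS
w1, w2 = 10.0, 20.0      # two competing flows with unequal windows (MSS)

for rtt in range(40):
    w1 += 1              # additive increase: +1 MSS per RTT for each flow
    w2 += 1
    if w1 + w2 > capacity:   # shared buffer overflows: both flows see a loss
        w1 /= 2              # multiplicative decrease
        w2 /= 2

print(f"w1 = {w1:.1f} MSS, w2 = {w2:.1f} MSS")   # windows end up nearly equal
```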

• Implementation Simplicity:

o Design Choice: Tahoe uses only timeouts for loss detection, avoiding complex ACK
analysis. This suited 1980s hardware with limited processing.

o Trade-off: Simplicity sacrifices efficiency—e.g., a single loss resets cwnd fully, even if
the network could handle more data post-drop.

TCP Reno: The Refined Classic Implementation

• Slow Start Mechanism:

o Same as Tahoe: Rapid bandwidth probing via exponential cwnd growth remains
unchanged from Tahoe, preserving its core logic.
o Process: cwnd starts at 1 MSS, doubles per RTT with each ACK adding 1 MSS. Initial
ssthresh is high (e.g., 64 KB).

o Detailed Example: RTT 1: cwnd = 1 → 2 MSS (2.9 KB); RTT 3: cwnd = 4 → 8 MSS (11.7
KB); RTT 5: cwnd = 16 → 32 MSS (46.7 KB).

o Exit Conditions: Loss (via 3 dup-ACKs or timeout) or cwnd ≥ ssthresh triggers the
next phase.

o Note: Classic Reno starts cwnd at 1 MSS; later standards (e.g., RFC 2581) allow a larger initial window, but not in 1990.

• Congestion Avoidance Mechanism:

o Same as Tahoe: Linear growth after Slow Start or recovery, maintaining Tahoe's cautious approach.

o Process: cwnd += 1 MSS per RTT, with per-ACK increments of MSS × (MSS / cwnd).

o Detailed Example: cwnd = 20 MSS (29.2 KB); RTT 1: 20 → 21 MSS (30.7 KB); RTT 5:
25 MSS (36.5 KB, 365 KB/s).

o Trigger: cwnd ≥ ssthresh, ensuring a smooth transition from rapid probing to steady
growth.

o Technical Detail: Rate grows by 14.6 KB/s per 100 ms RTT, balancing efficiency and
stability.

• Fast Retransmit Mechanism:

o Purpose and Innovation: Detects single packet loss quickly, avoiding timeout
delays—a key Reno upgrade over Tahoe.

o Process: The receiver sends a duplicate ACK for the last in-order packet on each out-of-order arrival. The sender retransmits after 3 dup-ACKs.

o Detailed Example: Send packets 1-5, 2 lost; ACK 1, dup-ACK 1 (for 3), dup-ACK 1 (for 4), dup-ACK 1 (for 5) → retransmit 2 in ~100 ms.

o Technical Detail: 3 dup-ACKs imply that subsequent packets reached the receiver, suggesting mild congestion, not collapse.

o Impact: Recovery drops from seconds (timeout) to 1-2 RTTs, e.g., 200ms vs. 2s.
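
A minimal sender-side sketch of the dup-ACK counting that triggers Fast Retransmit; the threshold of three comes from the notes above, while the class and field names are illustrative:

```python
DUP_THRESHOLD = 3

class FastRetransmitSender:
    def __init__(self):
        self.last_ack = 0        # highest cumulative ACK seen so far
        self.dup_acks = 0        # duplicates of that ACK

    def on_ack(self, ack_no):
        if ack_no == self.last_ack:                # duplicate ACK
            self.dup_acks += 1
            if self.dup_acks == DUP_THRESHOLD:     # 3 dup-ACKs: assume one segment was lost
                print(f"fast retransmit: resend data starting at {ack_no}")
        else:                                      # new data acknowledged
            self.last_ack = ack_no
            self.dup_acks = 0

sender = FastRetransmitSender()
for ack in (1461, 1461, 1461, 1461):   # ACK for segment 1, then dup-ACKs as 3, 4, 5 arrive (2 lost)
    sender.on_ack(ack)                 # the retransmission fires on the third duplicate
```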

• Fast Recovery Mechanism:

o Purpose and Refinement: Mitigates loss impact without resetting to Slow Start,
improving throughput over Tahoe.

o Process: On 3 dup-ACKs, ssthresh = cwnd / 2 and cwnd = ssthresh + 3 MSS. Each extra dup-ACK adds 1 MSS. A new ACK sets cwnd = ssthresh.

o Detailed Example: cwnd = 20 MSS, loss at packet 15; 3 dup-ACKs → ssthresh = 10 MSS, cwnd = 13 MSS; 5 more dup-ACKs → cwnd = 18 MSS; new ACK → cwnd = 10 MSS.

o Technical Detail: cwnd inflation (+3 MSS, then +1 per dup-ACK) keeps the pipe full,
reflecting delivered packets.
o Rate Impact: The rate drops from 292 KB/s to 146 KB/s, not to 14.6 KB/s as in Tahoe, halving the loss penalty.
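
A sketch of the window arithmetic during Fast Recovery, reproducing the numbers from the example above (sequence tracking and the actual retransmission are omitted; names are illustrative):

```python
MSS = 1460

class FastRecovery:
    def __init__(self, cwnd_mss):
        self.cwnd = cwnd_mss * MSS
        self.ssthresh = None

    def on_triple_dup_ack(self):
        self.ssthresh = self.cwnd // 2       # multiplicative decrease on the loss signal
        self.cwnd = self.ssthresh + 3 * MSS  # inflate by the three dup-ACKed segments

    def on_extra_dup_ack(self):
        self.cwnd += MSS                     # each further dup-ACK = one packet left the network

    def on_new_ack(self):
        self.cwnd = self.ssthresh            # deflate and resume Congestion Avoidance

fr = FastRecovery(20)                        # cwnd = 20 MSS, as above
fr.on_triple_dup_ack()                       # ssthresh = 10 MSS, cwnd = 13 MSS
for _ in range(5):
    fr.on_extra_dup_ack()                    # 5 more dup-ACKs -> cwnd = 18 MSS
fr.on_new_ack()                              # new ACK -> cwnd = 10 MSS
```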

• Timeout Fallback:

o Purpose and Continuity: Handles severe congestion or multiple losses when Fast
mechanisms fail, reverting to Tahoe’s logic.

o Process: ssthresh = cwnd / 2, then cwnd = 1 MSS, and Slow Start restarts.

o Detailed Example: cwnd = 16 MSS, timeout → ssthresh = 8 MSS, cwnd = 1 MSS; next
RTT: cwnd = 2 MSS.

o Technical Detail: Used when enough dup-ACKs don't arrive (e.g., multiple losses leave too few following segments to generate three dup-ACKs), indicating major disruption.

o Drawback: Still aggressive, but less frequent due to Fast Retransmit/Recovery.

• AIMD in Reno:

o Principle and Enhancement: Additive Increase (+1 MSS/RTT); Multiplicative Decrease (cwnd /= 2), softened by Fast Recovery.

o Fairness Dynamics: Retains AIMD's convergence toward equal shares while losing far less throughput per loss event: the affected flow drops to half its window (e.g., 20 → 10 MSS), not to 1 MSS as in Tahoe.

o Efficiency Gain: Reduces downtime—e.g., a single loss cuts rate by 50% briefly vs.
90%+ in Tahoe.

o Historical Impact: Reno’s refinements made TCP viable for the 1990s Internet boom,
handling growing traffic.

• Implementation Complexity:

o Design Shift: Adds Fast Retransmit/Recovery, requiring ACK parsing and state
tracking, more complex than Tahoe’s timeout-only approach.

o Trade-off: Gains efficiency at the cost of sender-side logic, still lightweight for 1990s
systems.

Detailed Example Walkthrough (TCP Reno)

• Setup: MSS = 1460 bytes, RTT = 100ms, initial ssthresh = 32 MSS (~46 KB).

• Slow Start Phase:

o RTT 1: cwnd = 1 → 2 MSS (2.9 KB, 29.2 KB/s).

o RTT 3: cwnd = 4 → 8 MSS (11.7 KB, 117 KB/s).

o RTT 5: cwnd = 16 → 32 MSS (46.7 KB, 467 KB/s).

• Congestion Avoidance Phase:

o RTT 6: cwnd = 32 → 33 MSS (48.2 KB, 482 KB/s).

o RTT 8: cwnd = 34 → 35 MSS (51.1 KB, 511 KB/s).

• Loss and Recovery:

o Loss at cwnd = 35 MSS; 3 dup-ACKs → ssthresh = 17 MSS, cwnd = 20 MSS (17 + 3).

o 10 more dup-ACKs → cwnd = 30 MSS (43.8 KB).

o New ACK → cwnd = 17 MSS (24.8 KB, 248 KB/s).

• Post-Recovery:

o RTT 9: cwnd = 17 → 18 MSS (26.3 KB, 263 KB/s).
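
The whole walkthrough can be replayed with a short sketch that strings these phases together (an RTT-granularity model under the same assumptions as above; it is not a packet-level simulator):

```python
MSS = 1460                      # bytes; cwnd and ssthresh below are tracked in MSS units

cwnd, ssthresh = 1, 32          # initial values from the setup above
for rtt in range(1, 9):                     # RTTs 1-8, up to the loss
    if cwnd < ssthresh:
        cwnd = min(cwnd * 2, ssthresh)      # Slow Start: double per RTT, capped at ssthresh
    else:
        cwnd += 1                           # Congestion Avoidance: +1 MSS per RTT
    print(f"RTT {rtt}: cwnd = {cwnd} MSS")

# Loss at cwnd = 35 MSS, detected via 3 dup-ACKs -> Fast Retransmit / Fast Recovery
ssthresh = cwnd // 2            # 17 MSS
cwnd = ssthresh + 3             # 20 MSS (window inflation)
cwnd += 10                      # 10 more dup-ACKs -> 30 MSS
cwnd = ssthresh                 # new ACK deflates to 17 MSS

cwnd += 1                       # RTT 9, back in Congestion Avoidance -> 18 MSS
print(f"RTT 9: cwnd = {cwnd} MSS")
```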

Technical Insights and Limitations

• Throughput Calculation: Throughput ≈ (MSS / RTT) × √(2 / p). E.g., MSS = 1460 B, RTT = 100 ms: p = 0.01 gives ≈1.65 Mbps and p = 0.04 gives ≈0.83 Mbps, so quadrupling the loss rate halves throughput (see the sketch after this list).

• Fairness Behavior: AIMD converges flows (e.g., 10 vs. 20 MSS → 12 vs. 18 MSS over cycles),
but slowly with many flows.

• Tahoe Limitations: Full reset on loss (e.g., 16 MSS to 1 MSS) wastes capacity, poor for
frequent drops.

• Reno Limitations: Multiple losses in one window trigger timeout, negating Fast Recovery
benefits.

• Delay Blindness: Ignores RTT spikes, reacting only to loss, missing early congestion cues.

• Wireless Misstep: Loss from noise (not congestion) cuts cwnd unnecessarily, a flaw in loss-
based design.
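
As noted in the throughput bullet above, the loss-based model can be evaluated directly; a quick check of the quoted formula with this section's MSS and RTT (the function name is illustrative):

```python
from math import sqrt

def reno_throughput_bps(mss_bytes, rtt_s, loss_rate):
    # Steady-state loss-based estimate: throughput ~ (MSS / RTT) * sqrt(2 / p)
    return (mss_bytes * 8 / rtt_s) * sqrt(2 / loss_rate)

for p in (0.01, 0.04):
    print(f"p = {p}: ~{reno_throughput_bps(1460, 0.1, p) / 1e6:.2f} Mbps")
# p = 0.01 -> ~1.65 Mbps; p = 0.04 -> ~0.83 Mbps (4x the loss rate halves throughput)
```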
