Classical TCP Congestion Control
• Definition and Historical Context: Classic TCP congestion control encompasses the original
algorithms—TCP Tahoe (1988) and TCP Reno (1990)—developed to manage network
congestion. Introduced by Van Jacobson, these responded to the 1986 Internet congestion
collapse, where uncontrolled retransmissions cut useful throughput by roughly a factor of a thousand (Jacobson measured drops from 32 Kbps to about 40 bps on one path).
They form the bedrock of TCP’s reliability over unreliable networks.
• Core Components: The system hinges on the congestion window (cwnd) to limit in-flight
data, the slow start threshold (ssthresh) to toggle between growth phases, and implicit
feedback (timeouts or duplicate ACKs) to detect congestion. It’s end-to-end, requiring no
router intervention, which ensured scalability in the early Internet. (A minimal state sketch
follows this list.)
• Objectives: The goals are to prevent congestion collapse (where retransmissions overwhelm
capacity), maximize throughput, minimize delay, and ensure fairness among flows. This is
achieved via Additive Increase, Multiplicative Decrease (AIMD), a cornerstone of classic TCP.
• Evolution Trigger: Before Tahoe, TCP lacked congestion awareness, retransmitting every
lost packet at full speed as if it were a transmission error. The 1980s saw networks grind to
a halt as senders flooded links, necessitating these controls to stabilize the growing Internet.
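To make these pieces concrete, here is a minimal sketch in Python (illustrative only, not drawn from any real stack); the 1460-byte MSS and 64 KB initial ssthresh are assumptions taken from the examples later in this section:

```python
MSS = 1460  # bytes per segment (assumed; early implementations used ~512)

class SenderState:
    """Illustrative sender-side congestion state, not a real TCP stack."""
    def __init__(self):
        self.cwnd = 1 * MSS        # congestion window: bytes allowed in flight
        self.ssthresh = 64 * 1024  # slow start threshold: growth-phase boundary
        self.dup_acks = 0          # implicit feedback: duplicate-ACK counter

    def in_slow_start(self):
        # Below ssthresh: exponential probing; at or above: linear growth.
        return self.cwnd < self.ssthresh
```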
• Slow Start in Tahoe:
o Purpose and Rationale: Slow Start rapidly probes for available bandwidth at connection
start or after a severe loss, avoiding the inefficiency of a fixed low rate. It assumes the
network’s capacity is initially unknown.
o Process: cwnd starts at 1 MSS (e.g., 512 bytes in 1988, later 1460 bytes). Each ACK
increases cwnd by 1 MSS, doubling it per RTT (exponential growth). This mimics a
binary search for capacity.
o Exit Conditions: Stops when cwnd reaches ssthresh (e.g., 64 KB, a deliberately high initial
guess) or loss occurs (a timeout, or 3 duplicate ACKs, which Tahoe handles the same way
as a timeout).
o Technical Detail: Rate = cwnd / RTT doubles per RTT; with MSS = 1460 bytes and RTT =
100 ms, e.g., 14.6 KB/s → 29.2 KB/s → 58.4 KB/s, limited by the bottleneck link.
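The doubling is easy to verify with a short sketch, assuming MSS = 1460 bytes and RTT = 100 ms as in the figures above:

```python
MSS = 1460   # bytes (assumed)
RTT = 0.1    # seconds (assumed: 100 ms)

cwnd = 1 * MSS
print(f"start: rate = {cwnd / RTT / 1000:.1f} KB/s")
for rtt in range(3):
    for _ in range(cwnd // MSS):   # one ACK arrives per segment in flight
        cwnd += MSS                # +1 MSS per ACK => cwnd doubles each RTT
    print(f"after RTT {rtt + 1}: cwnd = {cwnd // MSS} MSS, "
          f"rate = {cwnd / RTT / 1000:.1f} KB/s")
# prints 14.6 -> 29.2 -> 58.4 -> 116.8 KB/s, until ssthresh or loss intervenes
```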
• Congestion Avoidance in Tahoe:
o Purpose and Design: After Slow Start, Congestion Avoidance cautiously increases the
rate to avoid overshooting the network’s capacity, transitioning from exponential to
linear growth.
o Process: cwnd grows by 1 MSS per RTT. For each ACK, cwnd += MSS × (MSS / cwnd),
roughly 1 MSS per full window’s ACKs, ensuring gradual probing.
o Detailed Example: cwnd = 10 MSS (14.6 KB); after 1 RTT (10 ACKs), cwnd ≈ 11 MSS
(16.1 KB). After 5 RTTs, cwnd = 15 MSS (21.9 KB, 219 KB/s).
o Trigger: Entered when cwnd ≥ ssthresh, typically after Slow Start or loss recovery.
o Technical Detail: Rate increases by MSS / RTT per RTT (e.g., +14.6 KB/s every 100 ms
RTT), a slow climb that prevents sudden overload.
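The per-ACK update can be checked numerically; a sketch under the same MSS = 1460 bytes assumption:

```python
MSS = 1460.0  # bytes (assumed); kept as a float for fractional increments

cwnd = 10 * MSS
for _ in range(10):              # one RTT's worth of ACKs at cwnd = 10 MSS
    cwnd += MSS * (MSS / cwnd)   # per-ACK increment in Congestion Avoidance
print(f"cwnd = {cwnd / MSS:.2f} MSS")  # ~10.96 MSS: about +1 MSS per RTT
```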
• Loss Reaction in Tahoe:
o Purpose and Assumption: Treats packet loss as a congestion signal, assuming buffers
overflowed rather than data being corrupted (a valid assumption for the wired networks
of the time).
o Detailed Example: cwnd = 16 MSS (23.4 KB), timeout → ssthresh = 8 MSS (half the
window), cwnd = 1 MSS. Next RTT: cwnd = 2 MSS (2.9 KB, 29.2 KB/s).
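A sketch of the reaction, where the max(…, 2 MSS) floor on ssthresh is a common safeguard assumed here rather than stated above:

```python
MSS = 1460

def on_loss_tahoe(cwnd):
    # Tahoe treats a timeout and 3 dup-ACKs identically: remember half the
    # window that overflowed the path, then restart probing from 1 MSS.
    ssthresh = max(cwnd // 2, 2 * MSS)
    return 1 * MSS, ssthresh

cwnd, ssthresh = on_loss_tahoe(16 * MSS)
print(cwnd // MSS, ssthresh // MSS)  # -> 1 8, matching the example above
```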
• AIMD in Tahoe:
o Principle and Logic: Additive Increase (+1 MSS/RTT) during Congestion Avoidance
ensures gradual growth; Multiplicative Decrease (cwnd /= 2) on loss provides rapid
relief.
o Fairness Mechanism: Two flows (e.g., cwnd = 10 vs. 20 MSS, capacity = 25 MSS)
adjust toward equality: loss → 5 vs. 10, then 6 vs. 11 over successive RTTs (a toy
simulation follows this block).
o Stability Benefit: Large decreases prevent persistent overload; small increases avoid
oscillation, stabilizing shared links.
o Historical Role: AIMD in Tahoe proved congestion could be managed without router
changes, a breakthrough for the decentralized Internet.
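The two-flow convergence is easy to reproduce with a toy AIMD simulation (windows in MSS units; slow start and queuing are ignored, and both flows are assumed to halve whenever the link is over capacity):

```python
def aimd_two_flows(a, b, capacity, rtts):
    history = []
    for _ in range(rtts):
        if a + b > capacity:       # shared loss: multiplicative decrease
            a, b = a // 2, b // 2
        else:                      # additive increase: +1 MSS per RTT each
            a, b = a + 1, b + 1
        history.append((a, b))
    return history

# Two flows at 10 vs. 20 MSS on a 25-MSS bottleneck: the gap shrinks
# (10 -> 5 -> 3 -> 1 MSS) because halving costs the larger flow more.
print(aimd_two_flows(10, 20, 25, 30)[-1])   # -> (12, 13), near-equal shares
```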
• Implementation Simplicity:
o Design Choice: Tahoe reacts to every loss signal the same way (a full reset), so it needs
no per-ACK recovery bookkeeping. This suited 1980s hardware with limited processing power.
o Trade-off: Simplicity sacrifices efficiency—e.g., a single loss resets cwnd fully, even if
the network could handle more data post-drop.
TCP Reno Enhancements
• Slow Start in Reno:
o Same as Tahoe: Rapid bandwidth probing via exponential cwnd growth remains
unchanged from Tahoe, preserving its core logic.
o Process: cwnd starts at 1 MSS, doubles per RTT with each ACK adding 1 MSS. Initial
ssthresh is high (e.g., 64 KB).
o Detailed Example: RTT 1: cwnd = 1 → 2 MSS (2.9 KB); RTT 3: cwnd = 4 → 8 MSS (11.7
KB); RTT 5: cwnd = 16 → 32 MSS (46.7 KB).
o Exit Conditions: Loss (via 3 dup-ACKs or timeout) or cwnd ≥ ssthresh triggers the
next phase.
o Note: Classic Reno starts at 1 MSS; later standards (e.g., RFC 2581) permit larger
initial windows, but those did not exist in 1990.
• Congestion Avoidance in Reno:
o Process: cwnd += 1 MSS per RTT, with per-ACK increments of MSS × (MSS / cwnd),
exactly as in Tahoe.
o Detailed Example: cwnd = 20 MSS (29.2 KB); RTT 1: 20 → 21 MSS (30.7 KB); RTT 5:
25 MSS (36.5 KB, 365 KB/s).
o Trigger: cwnd ≥ ssthresh, ensuring a smooth transition from rapid probing to steady
growth.
o Technical Detail: Rate grows by 14.6 KB/s per 100 ms RTT, balancing efficiency and
stability.
• Fast Retransmit:
o Purpose and Innovation: Detects a single packet loss quickly, avoiding timeout
delays, a key Reno upgrade over Tahoe.
o Process: The receiver resends an ACK for the last in-order segment whenever an out-of-
order segment arrives. The sender retransmits after 3 such duplicate ACKs.
o Detailed Example: Send segments 1–5 with segment 2 lost; the receiver ACKs 1, then
sends dup-ACK 1 as each of 3, 4, and 5 arrives → the sender retransmits 2 in ~100 ms.
o Impact: Recovery drops from seconds (timeout) to 1-2 RTTs, e.g., 200ms vs. 2s.
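A sketch of the duplicate-ACK counting; real stacks track considerably more state, and the names here are illustrative:

```python
def retransmit(seq):
    # Resend the first unACKed segment (here, segment 2).
    print(f"fast retransmit triggered by duplicate ACKs for {seq}")

def on_ack(ack_seq, state):
    if ack_seq == state["last_ack"]:
        state["dup_acks"] += 1
        if state["dup_acks"] == 3:        # 3rd duplicate: assume the segment
            retransmit(ack_seq)           # was lost; resend without a timeout
    else:                                 # new data ACKed: reset the counter
        state["last_ack"], state["dup_acks"] = ack_seq, 0

state = {"last_ack": 0, "dup_acks": 0}
for ack in [1, 1, 1, 1]:   # ACK for 1, then dups as segments 3, 4, 5 arrive
    on_ack(ack, state)     # fires the retransmit on the 3rd duplicate
```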
• Fast Recovery:
o Purpose and Refinement: Mitigates loss impact without resetting to Slow Start,
improving throughput over Tahoe.
o Technical Detail: cwnd inflation (+3 MSS, then +1 per dup-ACK) keeps the pipe full,
reflecting delivered packets.
o Rate Impact: With cwnd = 20 MSS, the rate drops from 292 KB/s to 146 KB/s, not to
14.6 KB/s as in Tahoe, halving the loss penalty.
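A sketch of the window arithmetic around a recovery episode, again assuming MSS = 1460 bytes and the common 2-MSS floor on ssthresh:

```python
MSS = 1460

def enter_fast_recovery(cwnd):
    ssthresh = max(cwnd // 2, 2 * MSS)   # halve, rather than reset to 1 MSS
    return ssthresh + 3 * MSS, ssthresh  # inflate by the 3 segments that
                                         # triggered the duplicate ACKs

def on_extra_dup_ack(cwnd):
    return cwnd + MSS        # each further dup = one more segment delivered

def on_new_ack(ssthresh):
    return ssthresh          # deflate; resume Congestion Avoidance here

cwnd, ssthresh = enter_fast_recovery(20 * MSS)
print(cwnd // MSS, ssthresh // MSS)      # -> 13 10: cwnd = 10 + 3 MSS
```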
• Timeout Fallback:
o Purpose and Continuity: Handles severe congestion or multiple losses when Fast
mechanisms fail, reverting to Tahoe’s logic.
o Detailed Example: cwnd = 16 MSS, timeout → ssthresh = 8 MSS, cwnd = 1 MSS; next
RTT: cwnd = 2 MSS.
o Technical Detail: Used when dup-ACKs don’t arrive (e.g., multiple losses leave too few
out-of-order arrivals to generate them), indicating major disruption.
• AIMD in Reno:
o Fairness Dynamics: Converges toward fairness with less wasted capacity than Tahoe:
both flows halve on loss (10 vs. 20 MSS → 5 vs. 10), but resume climbing from the
halved window rather than from 1 MSS.
o Efficiency Gain: Reduces downtime—e.g., a single loss cuts rate by 50% briefly vs.
90%+ in Tahoe.
o Historical Impact: Reno’s refinements made TCP viable for the 1990s Internet boom,
handling growing traffic.
• Implementation Complexity:
o Design Shift: Adds Fast Retransmit/Recovery, requiring ACK parsing and state
tracking, more complex than Tahoe’s timeout-only approach.
o Trade-off: Gains efficiency at the cost of sender-side logic, still lightweight for 1990s
systems.
Worked Example (Reno)
• Setup: MSS = 1460 bytes, RTT = 100 ms, initial ssthresh = 32 MSS (~46.7 KB).
o Loss at cwnd = 35 MSS; 3 dup-ACKs → ssthresh = 17 MSS, cwnd = 20 MSS (17 + 3).
• Post-Recovery:
o On the next new ACK, cwnd deflates to ssthresh = 17 MSS and Congestion Avoidance
resumes at +1 MSS per RTT (18 MSS after 1 RTT, 22 MSS after 5).
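The same arithmetic as code, in MSS units (a check of the numbers above, not an implementation):

```python
cwnd = 35                 # MSS, window when the 3rd dup-ACK arrives
ssthresh = cwnd // 2      # 17 MSS
cwnd = ssthresh + 3       # 20 MSS while recovery is in progress
print(ssthresh, cwnd)     # -> 17 20
# On the next new ACK, cwnd deflates to ssthresh (17 MSS) and Congestion
# Avoidance resumes at +1 MSS per RTT: 18 MSS after one RTT, 22 after five.
```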
• Throughput Calculation: The Mathis et al. approximation gives Throughput ≈ (MSS / RTT)
× √(3 / (2p)) ≈ 1.22 × MSS / (RTT × √p). E.g., MSS = 1460 B, RTT = 100 ms: p = 0.01 →
≈1.4 Mbps; p = 0.04 → ≈0.7 Mbps (quadrupling the loss rate halves throughput).
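Plugging the numbers into the approximation, under the assumed MSS = 1460 bytes and RTT = 100 ms:

```python
from math import sqrt

MSS_BITS = 1460 * 8    # assumed MSS, converted to bits
RTT = 0.1              # assumed RTT in seconds
for p in (0.01, 0.04):
    tput = (MSS_BITS / RTT) * sqrt(3 / (2 * p))   # Mathis et al. approximation
    print(f"p = {p}: {tput / 1e6:.2f} Mbps")
# p = 0.01 -> ~1.43 Mbps; p = 0.04 -> ~0.72 Mbps (4x loss halves throughput)
```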
• Fairness Behavior: AIMD converges flows (e.g., 10 vs. 20 MSS → 12 vs. 18 MSS over cycles),
but slowly with many flows.
• Tahoe Limitations: Full reset on loss (e.g., 16 MSS to 1 MSS) wastes capacity, poor for
frequent drops.
• Reno Limitations: Multiple losses in one window trigger timeout, negating Fast Recovery
benefits.
• Delay Blindness: Ignores RTT spikes, reacting only to loss, missing early congestion cues.
• Wireless Misstep: Loss from noise (not congestion) cuts cwnd unnecessarily, a flaw in loss-
based design.