Congestion Control in TCP
When a packet arrives at the incoming interface of a router, it undergoes three steps before departing:
1. The packet is put at the end of the input queue while waiting to be checked.
2. The processing module of the router removes the packet from the input queue once it reaches the front of the
queue and uses its routing table and the destination address to find the route.
3. The packet is put in the appropriate output queue and waits its turn to be sent.
We need to be aware of two issues. First, if the rate of packet arrival is higher than the packet processing
rate, the input queues become longer and longer. Second, if the packet departure rate is less than the packet
processing rate, the output queues become longer and longer.
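As a rough illustration of the first issue, the short sketch below (the arrival and processing rates are assumed values, not from the text) shows the input queue growing without bound once packets arrive faster than the router can process them.

```python
# Illustrative sketch (assumed rates): input-queue growth when the packet
# arrival rate exceeds the router's processing rate.
from collections import deque

arrival_rate = 5      # packets arriving per tick (assumed)
processing_rate = 3   # packets the router can process per tick (assumed)
input_queue = deque()

for tick in range(1, 6):
    input_queue.extend(range(arrival_rate))           # packets join the input queue
    for _ in range(min(processing_rate, len(input_queue))):
        input_queue.popleft()                         # router processes what it can
    print(f"tick {tick}: input queue length = {len(input_queue)}")
# The queue grows by 2 packets every tick and never drains.
```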
Network Performance
Congestion control involves two factors that measure the performance of a network: delay and throughput.
These two performance measures are functions of the load.
Delay Versus Load
Note that when the load is much less than the capacity of the network, the delay is at a minimum. This
minimum delay is composed of propagation delay and processing delay, both of which are negligible. However,
when the load reaches the network capacity, the delay increases sharply because we now need to add the
waiting time in the queues (for all routers in the path) to the total delay. Note that the delay becomes infinite
when the load is greater than the capacity. If this is not obvious, consider the size of the queues when almost no
packet reaches the destination, or reaches the destination with infinite delay; the queues become longer and
longer. Delay, in turn, has a negative effect on the load and consequently on the congestion: when a packet is delayed, the source, not receiving the acknowledgment, retransmits the packet, which makes the delay, and the congestion, worse.
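The sharp rise in delay near capacity is often illustrated with the single-queue (M/M/1) approximation below; this is a standard queueing-theory formula used here for illustration and is not given in the text.

```latex
% A standard M/M/1 queueing approximation (an illustration, not from the text):
% average delay T as a function of the load \lambda and the capacity \mu.
T = \frac{1}{\mu - \lambda}, \qquad \lambda < \mu .
% As the load approaches the capacity (\lambda \to \mu), T \to \infty,
% which matches the sharp rise in the delay-versus-load curve.
```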
Window Policy
The type of window at the sender may also affect congestion. The Selective Repeat window is better
than the Go-Back-N window for congestion control. In the Go-Back-N window, when the timer for a packet
times out, several packets may be resent, although some may have arrived safe and sound at the receiver. This
duplication may make the congestion worse. The Selective Repeat window, on the other hand, resends only the
specific packets that have been lost or corrupted.
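A minimal sketch of the difference follows; the window contents and the lost sequence number are assumed for illustration.

```python
# Sketch (assumed parameters): retransmissions caused by one lost packet
# under Go-Back-N versus Selective Repeat.
window = [10, 11, 12, 13, 14, 15, 16]  # sequence numbers in flight (assumed)
lost = 12                              # the one packet that was lost (assumed)

go_back_n = [seq for seq in window if seq >= lost]  # resend the lost packet and everything after it
selective_repeat = [lost]                           # resend only the lost packet

print("Go-Back-N resends:       ", go_back_n)        # [12, 13, 14, 15, 16]
print("Selective Repeat resends:", selective_repeat) # [12]
```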
Acknowledgment Policy
The acknowledgment policy imposed by the receiver may also affect congestion. If the
receiver does not acknowledge every packet it receives, it may slow down the sender and help prevent
congestion. Several approaches are used in this case. A receiver may send an acknowledgment only if it has a
packet to be sent or a special timer expires. A receiver may decide to acknowledge only N packets at a time. Note
that acknowledgments are also part of the load in a network; sending fewer acknowledgments means imposing
less load on the network.
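A small sketch of this decision rule follows; the function, its parameters, and the choice N = 2 are illustrative assumptions.

```python
# Sketch of the acknowledgment policy described above; the function name,
# parameters, and N = 2 are illustrative assumptions.
def should_send_ack(has_data_to_send: bool, timer_expired: bool,
                    unacked_packets: int, n: int = 2) -> bool:
    """ACK only if it can be piggybacked on data, a special timer has
    expired, or N packets have accumulated unacknowledged."""
    return has_data_to_send or timer_expired or unacked_packets >= n

print(should_send_ack(False, False, 1))  # False: hold the ACK, less load
print(should_send_ack(False, False, 2))  # True: N packets accumulated
```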
Discarding Policy
A good discarding policy by the routers may prevent congestion and at the same time may not harm the
integrity of the transmission. For example, in audio transmission, if the policy is to discard less sensitive packets
when congestion is likely to happen, the quality of sound is still preserved and congestion is prevented or
alleviated.
Admission Policy
An admission policy, which is a quality-of-service mechanism, can also prevent congestion in virtual-circuit
networks. Switches in a flow first check the resource requirement of a flow before admitting it to the network. A
router can deny establishing a virtual-circuit connection if there is congestion in the network or if there is a
possibility of future congestion.
Closed-Loop Congestion Control
Closed-loop congestion control mechanisms try to alleviate congestion after it happens.
Several mechanisms have been used by different protocols. We describe a few of them here.
Backpressure
The technique of backpressure refers to a congestion control mechanism in which a congested node
stops receiving data from the immediate upstream node or nodes. This may cause the upstream node or nodes to
become congested, and they, in turn, reject data from their upstream node or nodes, and so on. Backpressure is
a node-to-node congestion control that starts with a node and propagates, in the opposite direction of data flow,
to the source. The backpressure technique can be applied only to virtual circuit networks, in which each node
knows the upstream node from which a flow of data is coming. Figure shows the idea of backpressure.
Node III in the figure has more input data than it can handle. It drops some packets in its input buffer
and informs node II to slow down. Node II, in turn, may be congested because it is slowing down the output
flow of data. If node II is congested, it informs node I to slow down, which in turn may create congestion. If so,
node I informs the source of data to slow down. This, in time, alleviates the congestion. Note that the pressure
on node III is moved backward to the source to remove the congestion.
None of the virtual-circuit networks in use today employ backpressure; it was, however, implemented in the first
virtual-circuit network, X.25. The technique cannot be implemented in a datagram network because in this type
of network, a node (router) does not have the slightest knowledge of the upstream router.
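The hop-by-hop propagation can be sketched as follows; the node names mirror the figure, and the code itself is an illustration, not a real protocol.

```python
# Sketch of backpressure propagating opposite to the direction of data flow.
# The path mirrors the figure; everything here is illustrative.
path = ["source", "node I", "node II", "node III", "destination"]

def backpressure(congested: str) -> None:
    """Walk upstream from the congested node, telling each hop to slow down."""
    i = path.index(congested)
    for node in reversed(path[:i]):
        print(f"{path[i]} is congested -> tell {node} to slow down")
        i = path.index(node)   # pressure may now build at this node in turn

backpressure("node III")
# node III is congested -> tell node II to slow down
# node II is congested -> tell node I to slow down
# node I is congested -> tell source to slow down
```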
Choke Packet
A choke packet is a packet sent by a node to the source to inform it of congestion. Note the difference
between the backpressure and choke packet methods. In backpressure, the warning is from one node to its
upstream node, although the warning may eventually reach the source station. In the choke packet method, the
warning is from the router, which has encountered congestion, to the source station directly. The intermediate
nodes through which the packet has traveled are not warned. We have seen an example of this type of control in
ICMP: the warning message goes directly to the source station; the intermediate routers through which the packet
has traveled do not take any action. Figure shows the idea of a choke packet.
Implicit Signaling
In implicit signaling, there is no communication between the congested node or nodes and the source.
The source guesses that there is congestion somewhere in the network from other symptoms. For example,
when a source sends several packets and there is no acknowledgment for a while, one assumption is that the
network is congested. The delay in receiving an acknowledgment is interpreted as congestion in the network;
the source should slow down. We will see this type of signaling when we discuss TCP congestion control later
in the chapter.
Explicit Signaling
The node that experiences congestion can explicitly send a signal to the source or destination. The
explicit signaling method, however, is different from the choke packet method. In the choke packet method, a
separate packet is used for this purpose; in the explicit signaling method, the signal is included in the packets
that carry data. Explicit signaling, as we will see in Frame Relay congestion control, can occur in either the
forward or the backward direction.
Backward Signaling
A bit can be set in a packet moving in the direction opposite to the congestion. This bit can warn the
source that there is congestion and that it needs to slow down to avoid the discarding of packets.
Forward Signaling
A bit can be set in a packet moving in the direction of the congestion. This bit can warn the destination
that there is congestion. The receiver in this case can use policies, such as slowing down the acknowledgments,
to alleviate the congestion.
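As a sketch of the two directions, the flags below imitate the FECN (forward) and BECN (backward) bits that Frame Relay carries; the bit positions are illustrative, not the actual Frame Relay header layout.

```python
# Sketch: forward and backward explicit congestion bits as simple flags.
# Bit positions are illustrative, not the real Frame Relay header layout.
FECN = 0b01  # forward: warns the destination
BECN = 0b10  # backward: warns the source

header = 0
header |= FECN              # congested node marks a frame moving toward the destination
header |= BECN              # ...and a frame moving toward the source

print(bool(header & FECN))  # True: receiver may slow its acknowledgments
print(bool(header & BECN))  # True: source should slow down
```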
Congestion Detection: Multiplicative Decrease
If congestion occurs, the congestion window size must be decreased. The sender can guess that congestion has
occurred by the need to retransmit a segment, which happens in one of two cases: when a timer times out or when
three ACKs are received. In both cases, the size of the threshold is dropped to one-half, a multiplicative decrease.
Most TCP implementations have two reactions:
1. If a time-out occurs, there is a stronger possibility of congestion; a segment has probably been dropped in the
network, and there is no news about the sent segments.
In this case TCP reacts strongly:
a. It sets the value of the threshold to one-half of the current window size.
b. It sets cwnd to the size of one segment.
c. It starts the slow-start phase again.
2. If three ACKs are received, there is a weaker possibility of congestion; a segment may have been dropped,
but some segments after that may have arrived safely since three ACKs are received. This is called fast
retransmission and fast recovery. In this case, TCP has a weaker reaction:
a. It sets the value of the threshold to one-half of the current window size.
b. It sets cwnd to the value of the threshold (some implementations add three segment sizes to the threshold).
c. It starts the congestion avoidance phase.
We summarize the congestion policy of TCP and the relationships between the three phases.
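The two reactions can be condensed into a short sketch; the names cwnd and ssthresh follow common TCP usage, and whole-segment units are an assumption made for brevity.

```python
# Sketch of the two TCP reactions described above. Units are whole segments;
# the names cwnd/ssthresh follow common TCP usage (an assumed presentation).
def on_timeout(cwnd: int) -> tuple[int, int, str]:
    """Strong reaction: halve the threshold, restart slow start."""
    ssthresh = cwnd // 2                    # threshold = one-half of current window
    return 1, ssthresh, "slow start"        # cwnd = size of one segment

def on_three_dup_acks(cwnd: int) -> tuple[int, int, str]:
    """Weaker reaction (fast retransmission / fast recovery)."""
    ssthresh = cwnd // 2                    # threshold = one-half of current window
    return ssthresh, ssthresh, "congestion avoidance"   # cwnd = threshold

print(on_timeout(16))         # (1, 8, 'slow start')
print(on_three_dup_acks(16))  # (8, 8, 'congestion avoidance')
```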
Congestion Example
[Figure: FECN]
5. QUALITY OF SERVICE
Quality of service (QoS) is an internetworking issue that has been discussed more than defined. We can
informally define quality of service as something a flow seeks to attain.
Flow Characteristics
Traditionally, four types of characteristics are attributed to a flow: reliability, delay, jitter, and
bandwidth, as shown in Figure
Reliability
Reliability is a characteristic that a flow needs. Lack of reliability means losing a packet or
acknowledgment, which entails retransmission. However, the sensitivity of application programs to reliability is
not the same. For example, it is more important that electronic mail, file transfer, and Internet access have
reliable transmissions than telephony or audio conferencing.
Delay
Source-to-destination delay is another flow characteristic. Again, applications can tolerate
delay in different degrees. In this case, telephony, audio conferencing, video conferencing, and remote log-in
need minimum delay, while delay in file transfer or e-mail is less important.
Jitter
Jitter is the variation in delay for packets belonging to the same flow. For example, if
four packets depart at times 0, 1, 2, 3 and arrive at 20, 21, 22, 23, all have the same delay, 20 units of time. On
the other hand, if the above four packets arrive at 21, 23, 21, and 28, they will have different delays: 21, 22, 19,
and 25. For applications such as audio and video, the first case is completely acceptable; the second is not. For
these applications, it does not matter if the packets arrive with a short or long delay as long as the delay is the
same for all packets.
Jitter is defined as the variation in the packet delay. High jitter means the difference between delays is
large; low jitter means the variation is small.
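The two cases above can be verified numerically; the spread (maximum minus minimum delay) used below is one simple way to quantify jitter.

```python
# The text's two examples, checked numerically: jitter as the spread of
# per-packet delays within one flow.
departures = [0, 1, 2, 3]

for arrivals in ([20, 21, 22, 23], [21, 23, 21, 28]):
    delays = [a - d for a, d in zip(arrivals, departures)]
    spread = max(delays) - min(delays)   # one simple measure of jitter (an assumption)
    print(delays, "jitter spread =", spread)
# [20, 20, 20, 20] jitter spread = 0   <- acceptable for audio/video
# [21, 22, 19, 25] jitter spread = 6   <- not acceptable
```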
Bandwidth
Different applications need different bandwidths. In video conferencing we need to send millions of bits
per second to refresh a color screen while the total number of bits in an e-mail may not reach even a million.
Flow Classes
Based on the flow characteristics, we can classify flows into groups, with each group having similar
levels of characteristics. This categorization is not formal or universal; some protocols such as ATM have
defined classes, as we will see later. The ATM Forum defines four service classes: CBR, VBR, ABR, and UBR.
CBR The constant-bit-rate (CBR) class is designed for customers who need real-time audio or video services.
VBR The variable-bit-rate (VBR) class is divided into two subclasses: real-time (VBR-RT) and non-real-time
(VBR-NRT). VBR-RT is designed for those users who need real-time services (such as voice and video
transmission) and use compression techniques to create a variable bit rate. VBR-NRT is designed for those
users who do not need real-time services but use compression techniques to create a variable bit rate.
ABR The available-bit-rate (ABR) class delivers cells at a minimum rate. If more network capacity is available,
this minimum rate can be exceeded. ABR is particularly suitable for applications that are bursty.
UBR The unspecified-bit-rate (UBR) class is a best-effort delivery service that does not guarantee anything.
6. TECHNIQUES TO IMPROVE QoS
In this section, we discuss some techniques that can be used to improve the quality of service. We
briefly discuss four common methods: scheduling, traffic shaping, admission control, and resource reservation.
Scheduling
Packets from different flows arrive at a switch or router for processing. A good scheduling technique
treats the different flows in a fair and appropriate manner. Several scheduling techniques are designed to
improve the quality of service. We discuss three of them here: FIFO queuing, priority queuing, and weighted
fair queuing.
FIFO Queuing
In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node (router or switch) is
ready to process them. If the average arrival rate is higher than the average processing rate, the queue will fill up
and new packets will be discarded. A FIFO queue is familiar to those who have had to wait for a bus at a bus
stop. Figure shows a conceptual view of a FIFO queue.
Priority Queuing
In priority queuing, packets are first assigned to a priority class. Each priority class has its own queue.
The packets in the highest-priority queue are processed first. Packets in the lowest-priority queue are processed
last. Note that the system does not stop serving a queue until it is empty. Figure shows priority queuing with
two priority levels (for simplicity).
A priority queue can provide better QoS than the FIFO queue because higher-priority traffic, such as
multimedia, can reach the destination with less delay. However, there is a potential drawback. If there is a
continuous flow in a high-priority queue, the packets in the lower-priority queues will never have a chance to be
processed. This is a condition called starvation.
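A sketch of two-level priority queuing follows (the packet labels are assumed); note how the low-priority queue is served only when the high-priority queue is empty, which is exactly the starvation risk.

```python
# Sketch of two-level priority queuing; packet labels are assumed.
from collections import deque

high = deque(["H1", "H2", "H3"])   # e.g. multimedia traffic
low = deque(["L1", "L2"])

while high or low:
    if high:
        print("process", high.popleft())   # highest-priority queue first
    else:
        print("process", low.popleft())    # low queue served only when high is empty
# If new packets kept arriving in `high`, `low` would starve.
```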
Weighted Fair Queuing
A better scheduling method is weighted fair queuing. In this technique, the packets are still assigned to
different classes and admitted to different queues. The queues, however, are weighted based on the priority of
the queues; higher priority means a higher weight. The system processes packets in each queue in a round-robin
fashion with the number of packets selected from each queue based on the corresponding weight. For example,
if the weights are 3, 2, and 1, three packets are processed from the first queue, two from the second queue, and
one from the third queue. If the system does not impose priority on the classes, all weights can be equal. In this
way, we have fair queuing with priority. Figure 24.18 shows the technique with three classes.
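A sketch of the weighted round-robin service with the text's weights 3, 2, and 1 follows; the packet labels and queue sizes are assumed.

```python
# Sketch of weighted fair queuing with the text's weights 3, 2, 1.
from collections import deque

queues = [deque(f"A{i}" for i in range(6)),   # class A, weight 3
          deque(f"B{i}" for i in range(6)),   # class B, weight 2
          deque(f"C{i}" for i in range(6))]   # class C, weight 1
weights = [3, 2, 1]

for _ in range(2):                            # two round-robin rounds
    for q, w in zip(queues, weights):
        batch = [q.popleft() for _ in range(min(w, len(q)))]
        print("send", batch)                  # w packets per visit to this queue
# Round 1: ['A0','A1','A2'], ['B0','B1'], ['C0']; round 2 continues likewise.
```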
Traffic Shaping
Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the network. Two
techniques can shape traffic: leaky bucket and token bucket.
Leaky Bucket
If a bucket has a small hole at the bottom, the water leaks from the bucket at a constant rate as long as
there is water in the bucket. The rate at which the water leaks does not depend on the rate at which the water is
input to the bucket unless the bucket is empty. The input rate can vary, but the output rate remains constant.
Similarly, in networking, a technique called leaky bucket can smooth out bursty traffic. Bursty chunks are
stored in the bucket and sent out at an average rate. Figure shows a leaky bucket and its effects.
A simple leaky bucket implementation is shown in Figure. A FIFO queue holds the packets. If the traffic consists
of fixed-size packets, the process removes a fixed number of packets from the queue at each tick of the clock. If
the traffic consists of variable-length packets, the fixed output rate must be based on the number of bytes or bits.
The following is an algorithm for variable-length packets:
1. Initialize a counter to n at the tick of the clock.
2. If n is greater than the size of the packet, send the packet and decrement the counter by the packet size.
Repeat this step until n is smaller than the packet size.
3. Reset the counter and go to step 1.
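The three steps can be turned directly into code; the value of n (bytes per tick) and the packet sizes are assumed.

```python
# Sketch of the three-step algorithm above for variable-length packets;
# n (bytes per tick) and the packet sizes are assumed.
from collections import deque

def leaky_bucket_tick(queue: deque, n: int) -> list[int]:
    """One clock tick: send packets while the byte counter allows."""
    counter = n                              # step 1: initialize the counter to n
    sent = []
    while queue and counter >= queue[0]:     # step 2: front packet fits the remaining budget
        size = queue.popleft()
        counter -= size                      # decrement the counter by the packet size
        sent.append(size)
    return sent                              # step 3: counter resets on the next tick

queue = deque([200, 400, 500, 300])          # packet sizes in bytes (assumed)
print(leaky_bucket_tick(queue, 1000))        # [200, 400] -- 500 exceeds the 400 left
print(leaky_bucket_tick(queue, 1000))        # [500, 300]
```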
Token Bucket
The leaky bucket is very restrictive. It does not credit an idle host. For example, if a host is not sending
for a while, its bucket becomes empty. Now if the host has bursty data, the leaky bucket allows only an average
rate. The time when the host was idle is not taken into account. On the other hand, the token bucket algorithm
allows idle hosts to accumulate credit for the future in the form of tokens. For each tick of the clock, the system
sends n tokens to the bucket. The system removes one token for every cell (or byte) of data sent. For example, if
n is 100 and the host is idle for 100 ticks, the bucket collects 10,000 tokens. Now the host can consume all these
tokens in one tick with 10,000 cells, or the host takes 1000 ticks with 10 cells per tick. In other words, the host
can send bursty data as long as the bucket is not empty. Figure shows the idea.
The token bucket can easily be implemented with a counter. The counter is initialized to zero. Each time a
token is added, the counter is incremented by 1. Each time a unit of data is sent, the counter is decremented by
1. When the counter is zero, the host cannot send data.
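The counter implementation can be sketched as follows; the class name and the rate n = 100 are illustrative.

```python
# Sketch of the token-bucket counter described above; the class name and
# the rate n = 100 are illustrative.
class TokenBucket:
    def __init__(self, n: int):
        self.n = n            # tokens added per clock tick
        self.counter = 0      # counter is initialized to zero

    def tick(self) -> None:
        self.counter += self.n            # system adds n tokens per tick

    def send(self, cells: int) -> int:
        sent = min(cells, self.counter)   # one token per cell; stop when empty
        self.counter -= sent
        return sent

bucket = TokenBucket(n=100)
for _ in range(100):                      # host idle for 100 ticks
    bucket.tick()
print(bucket.counter)                     # 10000 tokens accumulated
print(bucket.send(10000))                 # the whole burst can go in one tick
```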
Combining Token Bucket and Leaky Bucket
The two techniques can be combined to credit an idle host and at the same time regulate
the traffic. The leaky bucket is applied after the token bucket; the rate of the leaky bucket needs to be higher
than the rate of tokens dropped in the bucket.
Resource Reservation
A flow of data needs resources such as a buffer, bandwidth, CPU time, and so on. The quality of service
is improved if these resources are reserved beforehand. We discuss in this section one QoS model called
Integrated Services, which depends heavily on resource reservation to improve the quality of service.
Admission Control
Admission control refers to the mechanism used by a router, or a switch, to accept or reject a flow based
on predefined parameters called flow specifications. Before a router accepts a flow for processing, it checks the
flow specifications to see if its capacity (in terms of bandwidth, buffer size, CPU speed, etc.) and its previous
commitments to other flows can handle the new flow.