Congestion
When too many packets are present in the subnet, performance degrades. This situation is called congestion.
[Figure: throughput vs. offered load, contrasting the desirable behaviour with congestion]
Flow control vs. congestion control: the two are often confused because their relationship is subtle.
Congestion control
- has to do with making sure the subnet is able to carry the offered traffic;
- is a global issue involving the behaviour of all the hosts, all the routers, the store-and-forward processing within the routers, and so on.
Flow control
- relates to the point-to-point traffic between a given sender and receiver;
- makes sure that a fast sender cannot continually transmit data faster than the receiver can absorb it;
- nearly always involves some direct feedback from the receiver to the sender.
To illustrate the difference, consider a fiber-optic network with a capacity of 1000 Gbps, on which a supercomputer is trying to transfer data to a PC at 1 Gbps. Although there is no congestion on the network, flow control is needed to force the supercomputer to stop frequently to give the PC a chance to breathe.
Now consider a store-and-forward network with 1-Mbps lines and 1000 large computers, half of which are trying to transfer data at 100 kbps to the other half. Here no flow control is required, but the total offered traffic exceeds what the network can handle.
Confusion also arises because some congestion control algorithms operate by sending messages back to the various senders telling them to slow down when there are too many packets in the network; thus a 'slow down' message can come either from the receiver (flow control) or from the network (congestion control).
Congestion control: many algorithms are known.
Open loop: solve the problem by good design (prevention/avoidance); make sure congestion does not occur in the first place, with no mid-course correction. Such algorithms
- decide when to accept new traffic and when to discard packets;
- make scheduling decisions at various points in the network.
Closed loop: monitor the system, feed information back, and adjust system operation accordingly.
The presence of congestion means that the load is (temporarily) greater than the resources (in some part of the system) can handle.
The solutions are:
1. Increase capacity (not always possible).
2. Decrease the load: (i) deny service to some users, (ii) degrade service to some or all users, or (iii) ask users to reschedule their demands.
Approaches that can be used in both virtual-circuit and datagram subnets:
- choke packets
- packet discarding
Packet Discarding
This allows an IMP to discard packets at will. It works well for datagrams; for virtual circuits, a copy of each packet must be kept for possible retransmission later.
Acknowledgements could also be discarded, but that is not logical, as it defeats the purpose of the acknowledgement. A better policy is to keep one buffer in which to accept every incoming packet, use any piggybacked acknowledgement it carries, but then discard the packet itself.
Heuristics are used to decide when to keep a packet and when to discard it. A minimum and a maximum number of buffers are dedicated to each line: the minimum prevents a line from starving, and the maximum prevents one line from hogging buffers and causing unnecessary congestion.
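The per-line buffer heuristic above can be sketched as follows. The class name, the split between reserved and shared buffers, and all parameter values are illustrative assumptions, not part of the original text:

```python
# Hypothetical sketch of the per-line buffer heuristic: each output line
# is guaranteed a minimum number of buffers (so it never starves) and is
# capped at a maximum (so one line cannot hog the whole pool).

class BufferPool:
    def __init__(self, total, min_per_line, max_per_line, lines):
        # shared pool left over after each line's minimum is reserved
        self.free = total - min_per_line * len(lines)
        self.min_per_line = min_per_line
        self.max_per_line = max_per_line
        self.used = {line: 0 for line in lines}

    def accept(self, line):
        """Return True if a packet for `line` may be buffered, else discard."""
        if self.used[line] >= self.max_per_line:
            return False                       # line at its cap: discard
        if self.used[line] < self.min_per_line:
            self.used[line] += 1               # consume a reserved buffer
            return True
        if self.free > 0:                      # borrow from the shared pool
            self.free -= 1
            self.used[line] += 1
            return True
        return False

    def release(self, line):
        """Packet transmitted: return its buffer."""
        if self.used[line] > self.min_per_line:
            self.free += 1                     # borrowed buffer goes back
        self.used[line] -= 1
```

With this split, a burst on one line can borrow shared buffers up to its maximum, but the reserved minimum of every other line stays untouched.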
Choke Packets
Every IMP monitors the utilization of each of its output lines with a real variable u, 0 <= u <= 1, that reflects the line's utilization. If u rises above a threshold, the line enters a warning state. When a packet arrives, its output line is checked; if the line is in the warning state, the IMP sends a 'choke' packet back to the source, naming the packet's destination. On receiving the choke packet, the source reduces its traffic to that destination by X percent, and keeps it reduced until no more choke packets arrive.
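The trigger can be sketched as below. The exponential smoothing of u (u = a*u + (1 - a)*f, with f an instantaneous busy/idle sample) follows Tanenbaum's description of this scheme; the constants ALPHA and THRESHOLD are assumed values:

```python
# Sketch of the choke-packet trigger: each IMP keeps a smoothed
# utilization u per output line and enters the warning state when u
# crosses a threshold.

ALPHA = 0.9          # weight given to the old estimate (assumed value)
THRESHOLD = 0.8      # warning threshold (assumed value)

def update_utilization(u, line_busy):
    """Fold an instantaneous sample f (1 if the line was busy at this
    tick, else 0) into the running estimate: u = a*u + (1 - a)*f."""
    f = 1.0 if line_busy else 0.0
    return ALPHA * u + (1 - ALPHA) * f

def forward(u, packet_source, send_choke):
    """On each packet, check the output line's state; if it is in the
    warning state, send a choke packet back toward the source."""
    if u > THRESHOLD:
        send_choke(packet_source)   # source then cuts traffic by X percent
```

Because u is a moving average, a single idle tick does not immediately clear the warning state, which keeps the feedback from oscillating.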
Leaky Bucket Algorithm
A single-server queueing system with constant service time (Turner, 1986); it smooths out bursts and greatly reduces the chance of congestion.
If the bucket overflows, water spills over the sides and is wasted (it does not appear in the output stream); i.e., if a packet arrives when the queue is full, it is discarded. The bucket is a finite internal queue that releases one packet per clock tick for fixed-size packets, or a fixed number of bytes per tick for variable-sized packets.
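A minimal sketch of the fixed-packet-size case, assuming packets arriving in a tick may also depart in that same tick; the queue size is a free parameter:

```python
from collections import deque

# Leaky bucket for fixed-size packets: a finite queue drained at one
# packet per clock tick; arrivals to a full queue are discarded.

def leaky_bucket(arrivals, queue_size):
    """arrivals[t] = packets arriving at tick t; returns (output, dropped),
    where output[t] is 0 or 1 packets sent at tick t."""
    queue, output, dropped = deque(), [], 0
    for burst in arrivals:
        for _ in range(burst):
            if len(queue) < queue_size:
                queue.append(1)
            else:
                dropped += 1              # bucket overflow: packet lost
        output.append(1 if queue else 0)  # at most one packet per tick
        if queue:
            queue.popleft()
    return output, dropped
```

A burst of 5 packets into a 3-packet bucket thus leaves as a steady trickle of one per tick, with the two overflow packets discarded.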
Disadvantages
- The leaky bucket enforces a rigid output pattern at the average rate, no matter how bursty the traffic is.
- It discards packets when the bucket fills up.
It would be better to allow the output to speed up somewhat for large bursts, so a more flexible algorithm is needed.
Token Bucket Algorithm
The token bucket throws away tokens when the bucket fills up, but never discards packets. It allows saving, up to some maximum bucket size n; i.e., bursts of up to n packets can be sent at once.
Disadvantage: it still allows bursts.
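A minimal sketch of the token bucket, one token per fixed-size packet. As a simplification it only counts what can be sent immediately at each tick (excess traffic is assumed to wait upstream); rate and capacity are free parameters:

```python
# Token bucket: tokens accumulate at a fixed rate up to capacity n; each
# packet spends one token, so bursts of up to n packets can leave at
# once. Tokens (never packets) are discarded when the bucket is full.

def token_bucket(arrivals, rate, capacity):
    """arrivals[t] = packets arriving at tick t; `rate` tokens are added
    per tick; returns the number of packets sent at each tick."""
    tokens, sent = capacity, []                # bucket starts full
    for burst in arrivals:
        n = min(burst, tokens)                 # one token per packet
        tokens -= n
        sent.append(n)
        tokens = min(capacity, tokens + rate)  # refill, capped at capacity
    return sent
```

Compare with the leaky bucket: a saved-up bucket of 3 tokens lets the first 3 packets of a burst leave at once, after which output falls back to the token rate.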
The traffic can be smoothed further by putting a leaky bucket (or another token bucket) after the first token bucket. The rate of the leaky bucket should be greater than the rate of the token bucket, but less than the maximum rate of the network; e.g., a 500-KB/sec token bucket followed by a 10-MB/sec leaky bucket.
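As a worked check on this example: with token-bucket capacity C bytes, token rate rho bytes/sec, and peak output rate M bytes/sec (set here by the downstream leaky bucket), tokens drain at M - rho during a burst, so the burst can last at most S = C / (M - rho) seconds. The capacity C = 1 MB below is an assumed value; the 0.5-MB/sec token rate and 10-MB/sec peak come from the example above:

```python
# Maximum burst duration for a token bucket followed by a leaky bucket:
# tokens drain at (M - rho) during a full-rate burst, so S = C / (M - rho).

C = 1e6        # token-bucket capacity in bytes (assumed value)
rho = 0.5e6    # token arrival rate in bytes/sec (500 KB/sec)
M = 10e6       # peak output rate in bytes/sec (10 MB/sec leaky bucket)

S = C / (M - rho)
print(round(S * 1000, 1), "ms")   # burst duration in milliseconds
```

So even a full bucket lets the source send at the 10-MB/sec peak only for about a tenth of a second before dropping back to 500 KB/sec.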
Closed-loop congestion control algorithms in virtual circuits
Admission control keeps congestion that has already started from getting worse. When congestion occurs, either stop setting up new virtual circuits, or route new virtual circuits around the problem area.
Alternatively, negotiate an agreement between the host and the subnet before setting up a virtual circuit, specifying the volume of traffic, the traffic shape, and the quality of service required. The subnet then reserves resources along the path of the virtual circuit.
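A hedged sketch of such admission with resource reservation: before a virtual circuit is set up, every hop on its path checks whether it can still honor the requested bandwidth, and the circuit is refused (or rerouted) if any hop cannot. The function name and data layout are illustrative assumptions:

```python
# Admission control with bandwidth reservation along a virtual circuit's
# path: admit only if every link can carry the extra load, then commit
# the reservation on all links.

def admit_circuit(path, requested_bw, capacity, reserved):
    """path: list of link ids; capacity/reserved: per-link bytes/sec.
    Reserve `requested_bw` on every link of `path`, or refuse."""
    for link in path:
        if reserved[link] + requested_bw > capacity[link]:
            return False                   # congested hop: reject the VC
    for link in path:
        reserved[link] += requested_bw     # commit the reservation
    return True
```

Checking the whole path before committing anything ensures a rejected setup leaves no partial reservations behind.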