Congestion

When too many packets are present in a subnet, performance degrades; this situation is called congestion. Congestion control aims to ensure that the subnet can carry the offered traffic, and involves the behaviour of all hosts, routers, and store-and-forward processing. Flow control relates to point-to-point traffic between a sender and a receiver, preventing a fast sender from overwhelming a slow receiver. The two are sometimes confused because congestion control messages from routers can slow senders down just as flow control messages from receivers do. Common approaches include choke packets and preventing congestion before it occurs through traffic-shaping techniques such as the leaky bucket and token bucket algorithms.

Uploaded by Anjali Mohini

Congestion control

When too many packets are present in the subnet, performance degrades. This situation is called congestion.

[Figure: throughput vs. packets sent (load). A perfect subnet would carry all offered load up to its maximum carrying capacity; a desirable subnet approaches that limit smoothly; with congestion, throughput collapses once the load grows too large.]


Reasons
(a) Routers are too busy (or a router's CPU is too slow) performing the bookkeeping tasks required of them (queueing buffers, updating tables, etc.), so queues build up even though there is excess line capacity.
(b) The input traffic rate exceeds the capacity of the output line.
(c) There is insufficient memory to hold incoming packets.
(d) Congestion tends to feed upon itself and become worse.

Flow control vs. congestion control: the two are often confused because their relationship is subtle.
Congestion control
- has to do with making sure the subnet is able to carry the offered traffic;
- is a global issue involving the behaviour of all hosts, routers, store-and-forward processing in routers, etc.
Flow control
- relates to point-to-point traffic between a given sender and receiver;
- makes sure that a fast sender cannot continually transmit data faster than the receiver can absorb it;
- nearly always involves some direct feedback from receiver to sender.

To illustrate the difference:

Consider a fiber-optic network with a capacity of 1000 Gbps on which a supercomputer is trying to transfer a file to a PC at 1 Gbps. Although there is no congestion in the network, flow control is needed to force the supercomputer to stop frequently and give the PC a chance to breathe.

Now consider a store-and-forward network with 1-Mbps lines and 1000 large computers, half of which are trying to transfer files at 100 kbps to the other half. Here no flow control is required, but the total offered traffic exceeds what the network can handle.

Confusion arises because some congestion control algorithms operate by sending messages back to the various senders, telling them to slow down when too many packets are in the network. Thus a 'slow down' message can come from the receiver (flow control) or from the network (congestion control).
Congestion control: many algorithms are known.

Yang and Reddy (1995) developed a taxonomy for congestion control algorithms:

Open loop:
- act at source
- act at destination
Closed loop:
- explicit feedback: packets are sent back from the point of congestion
- implicit feedback: the source deduces the existence of congestion by making local observations, such as the time needed for acknowledgements to arrive

Open loop: solve the problem by good design (prevention, avoidance); make sure congestion does not occur in the first place; no mid-course correction.
- decide when to accept new traffic and when to discard packets;
- make scheduling decisions at various points in the network.
Closed loop: monitor the system, feed the information back, and adjust system operation.

The presence of congestion means that the load is (temporarily) greater than the resources (in some part of the system) can handle.
The solutions are:
1. Increase capacity (not always possible).
2. Decrease load: (i) deny service to some users, (ii) degrade service to some or all users, (iii) ask users to reschedule their demands.

Approaches that can be used in both virtual-circuit and datagram subnets:
- choke packets
- packet discarding

1. Congestion avoidance by preallocation of buffers (for VCs)

While setting up a VC, the call-request packet takes a route and requests a buffer at each intermediate IMP, so that data traffic can flow smoothly. The subnet can also permanently allocate a set of buffers to each VC, but this is expensive when the VC is idle. An alternative is to associate a timer with each buffer: if a buffer lies idle for too long, it is released, and reacquired when a packet arrives.
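The timer-based release scheme can be sketched as follows. This is a minimal sketch, not the original algorithm: the class and method names (VCBufferPool, reap_idle) and the timeout value are illustrative assumptions.

```python
class VCBufferPool:
    """Per-VC buffer reservations with an idle timer (illustrative sketch)."""

    def __init__(self, idle_timeout=5.0):
        self.idle_timeout = idle_timeout
        self.buffers = {}  # vc_id -> timestamp of last use

    def reserve(self, vc_id, now):
        # Call setup: reserve a buffer for this virtual circuit.
        self.buffers[vc_id] = now

    def touch(self, vc_id, now):
        # A data packet arrived on the VC: reacquire if released, refresh the timer.
        self.buffers[vc_id] = now

    def reap_idle(self, now):
        # Release buffers whose VC has been idle for longer than the timeout.
        released = [vc for vc, t in self.buffers.items()
                    if now - t > self.idle_timeout]
        for vc in released:
            del self.buffers[vc]
        return released
```

A VC that keeps sending refreshes its timestamp and keeps its buffer; an idle VC gives the buffer back to the pool.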

2. Packet discarding
The IMP is allowed to discard packets at will. This is fine for datagrams; for VCs, a copy of each packet must be kept for later retransmission.
Acknowledgements could also be discarded, but this is not logical (it defeats the purpose of the ack). Instead, keep one buffer to accept every incoming packet, use any piggybacked ack, and then discard the packet itself if no buffer is free.
Heuristics are used to decide when to keep a packet and when to discard it. A minimum and a maximum number of buffers are dedicated to each line: the minimum prevents a line from being starved; the maximum prevents one line from causing unnecessary congestion by hogging buffers.
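The min/max heuristic can be sketched as an admission decision per output line. This is an illustrative sketch, assuming a shared pool of free buffers in the IMP; the class and parameter names are not from the original text.

```python
class LineBufferPolicy:
    """Keep-or-discard heuristic with per-line min/max buffer counts."""

    def __init__(self, min_buffers, max_buffers):
        self.min_buffers = min_buffers  # guaranteed to the line: prevents starvation
        self.max_buffers = max_buffers  # cap for the line: prevents buffer hogging
        self.in_use = 0

    def admit(self, free_pool):
        """Decide whether to keep an incoming packet for this output line.

        free_pool is the number of shared buffers currently free in the IMP.
        """
        if self.in_use < self.min_buffers:
            # Below the guaranteed minimum: always keep the packet.
            self.in_use += 1
            return True
        if self.in_use >= self.max_buffers:
            # At the cap: discard, even if shared buffers remain.
            return False
        if free_pool > 0:
            # Between min and max: keep only if the shared pool has room.
            self.in_use += 1
            return True
        return False
```

The minimum guarantee fires even when the shared pool is empty, and the maximum cap fires even when it is not, which is exactly the starvation/hogging trade-off described above.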

3. Use flow control (as in the ARPANET)

Flow control is end-to-end only, whereas congestion control also acts at the intermediate IMPs. The strategies are 'stop and wait' and 'sliding window'. This is not very effective because computer traffic is bursty (peak traffic for short periods), so there is no steady mean rate of the kind flow control assumes; still, end-to-end flow control can help reduce the load in the subnet.
4. Isarithmic control
This keeps the number of packets in the subnet constant. A fixed number of 'permits' circulate in the network. The sending IMP must capture a permit and destroy it when it sends a packet; the destination IMP removes the packet from the subnet and regenerates a permit. The disadvantages/problems are:
(a) It is hard to distribute permits so that they are readily available wherever sending is needed, and too many permits themselves load the network.
(b) If a permit is lost, the usable network bandwidth is reduced.
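The capture/destroy/regenerate cycle can be modelled as a toy permit counter. This is a sketch under simplifying assumptions (permits are a single shared counter, not objects routed through the network); the names are illustrative.

```python
class IsarithmicSubnet:
    """Toy model of isarithmic control: a fixed permit pool bounds the
    number of packets inside the subnet at any moment."""

    def __init__(self, total_permits):
        self.permits = total_permits   # permits currently circulating
        self.in_transit = 0            # packets currently inside the subnet

    def send(self):
        # The sending IMP must capture and destroy a permit first.
        if self.permits == 0:
            return False               # no permit available: the packet must wait
        self.permits -= 1
        self.in_transit += 1
        return True

    def deliver(self):
        # The destination IMP removes the packet and regenerates a permit.
        self.in_transit -= 1
        self.permits += 1
```

Note that a permit that is destroyed but never regenerated (e.g. lost with a crashed IMP) shrinks the pool for good, which is problem (b) above.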

5. Choke packets
Every IMP monitors each output line's utilisation with a real variable u, 0 <= u <= 1, reflecting recent line utilisation. If u exceeds a threshold, the line enters a warning state. When a packet arrives, its output line is checked; if that line is in the warning state, the IMP sends a 'choke' packet back to the source, carrying the packet's destination address. On receiving a choke packet, the sender reduces its traffic to that destination by X percent until no more choke packets arrive.
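A common way to maintain u is an exponential average of instantaneous utilisation samples; the scheme above can then be sketched as follows. The smoothing constant a, the threshold value, and all names here are illustrative assumptions, and the choke packet is modelled as a plain callback.

```python
class OutputLine:
    """Choke-packet monitoring for one output line (illustrative sketch)."""

    def __init__(self, threshold=0.5, a=0.5):
        self.u = 0.0              # smoothed utilisation, 0 <= u <= 1
        self.threshold = threshold
        self.a = a                # weight given to the line's history

    def sample(self, f):
        # f is a periodic sample of instantaneous utilisation (0.0 .. 1.0).
        self.u = self.a * self.u + (1 - self.a) * f
        return self.u > self.threshold   # True => line is in the warning state

def on_packet(line, f, send_choke):
    """If the packet's output line is in the warning state, send a choke
    packet back to the source (modelled here as calling send_choke())."""
    if line.sample(f):
        send_choke()  # the sender then cuts traffic to that destination by X percent
```

Because u is an average, a single busy sample does not trigger a choke packet; sustained high utilisation does.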

Deadlocks: the ultimate congestion (lockup)

Suppose IMP A has all 5 of its buffers queued up for B, and IMP B has all 5 of its buffers queued up for A. Neither IMP can accept incoming packets from the other for lack of buffers; both are stuck. This is a deadlock (direct store-and-forward lockup). The solution is to discard packets or to reserve extra input buffers.
Indirect store-and-forward lockup occurs around a loop of IMPs. Prevention algorithms reserve two buffers in each IMP for use as overflow buffers.
Reassembly lockup occurs in a network where long messages are fragmented into smaller ones for transmission and must be reassembled before delivery to the host. If a receiver holds only partially assembled messages and runs out of buffers, no message can be completed and passed up, so no buffer can ever be released: lockup.
The solution is to have the sender reserve buffers at the receiver before sending long messages.

Open loop (prevention) policies

a) One main cause of congestion is bursty traffic. If hosts transmitted at a uniform, predictable rate, congestion would be less common. This approach is used in ATM networks and is called traffic shaping: regulating the average rate (and burstiness) of data transmission.
- reduces congestion and helps the carrier live up to its promises;
- when a virtual circuit is set up, the user and the subnet agree on a certain traffic pattern (shape) for that VC;
- if the user sticks to it, congestion is reduced;
- more important for real-time data such as audio or video than for file transfer.

Leaky bucket algorithm: a single-server queueing system with constant service time (Turner, 1986). It smooths out bursts and greatly reduces congestion.
If the bucket overflows, water spills over the sides and is wasted (it does not appear in the output stream); i.e., if a packet arrives when the queue is full, it is discarded. The bucket is a finite internal queue that releases one packet per clock tick (for fixed-size packets) or a fixed number of bytes per tick (for variable-size packets); the input is the unregulated packet flow.

E.g., suppose a host produces data at 25 MB/sec. The network also runs at this speed, but the router can handle this rate only for short intervals; over long intervals the rate must be at most 2 MB/sec. Let the data come in 25-MB/sec bursts, one 40-msec burst per second (1 MB per burst).
To reduce the average rate to 2 MB/sec, use a leaky bucket with ρ = 2 MB/sec and capacity C = 1 MB. Then bursts of up to 1 MB can be handled without data loss, and each such burst is spread out over 500 msec, irrespective of how fast it comes in.

[Figure: a 25-MB/sec, 40-msec (1-MB) burst enters the leaky bucket; the output drains at a uniform 2 MB/sec for 500 msec.]
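The 500-msec figure can be checked with a tick-based simulation. This is a sketch under assumptions of my own: 1-msec ticks and byte counts in KB, so the burst is 25 KB/tick for 40 ticks, the drain rate ρ is 2 KB/tick, and C is 1000 KB.

```python
def leaky_bucket(arrivals, rate, capacity):
    """Simulate a byte-counting leaky bucket, one entry per clock tick.

    arrivals[i] = bytes arriving in tick i; rate = bytes drained per tick;
    capacity = bucket size in bytes. Returns (output per tick, bytes dropped).
    """
    level = 0
    out, dropped = [], 0
    for a in arrivals:
        space = capacity - level
        accepted = min(a, space)
        dropped += a - accepted   # overflow spills over the sides and is lost
        level += accepted
        sent = min(level, rate)   # constant drain rate smooths the burst
        level -= sent
        out.append(sent)
    return out, dropped

# The text's example: a 1-MB burst at 25 KB/tick for 40 ticks, then silence.
out, dropped = leaky_bucket([25] * 40 + [0] * 600, rate=2, capacity=1000)
```

With these numbers nothing is dropped, the full 1000 KB emerges, and the output is nonzero for exactly 500 ticks, i.e. 2 MB/sec for 500 msec as stated.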

Disadvantages
a) The leaky bucket enforces a rigid output pattern at the average rate, no matter how bursty the traffic is.
b) It discards packets when the bucket fills up.
It would be better to let the output speed up when large bursts arrive, so a more flexible algorithm is needed.

Token bucket algorithm

Here the bucket holds tokens, generated by a clock at the rate of one token every δT seconds. For a packet to be transmitted, it must capture and destroy a token. This shapes traffic differently from the leaky bucket: it allows some burstiness and gives a faster response to sudden input bursts.

Let the token bucket be full when a burst of length S sec arrives.

Token bucket capacity = C bytes
Token arrival rate = ρ bytes/sec
Maximum output rate = M bytes/sec
The output burst contains at most C + ρS bytes.
The number of bytes in a maximum-speed burst of S sec is MS.
So C + ρS = MS, i.e., S = C / (M − ρ).
Let M = 25 MB/sec and ρ = 2 MB/sec.

If C = 250 KB, S = 11 msec, followed by 2 MB/sec for 363 msec.
If C = 500 KB, S = 22 msec, followed by 225 msec.
If C = 750 KB, S = 33 msec, followed by 87 msec.
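The burst lengths above follow directly from S = C / (M − ρ); as a small sketch (treating MB as decimal megabytes, which is what the text's arithmetic implies):

```python
def burst_length(C, M, rho):
    """S = C / (M - rho), from solving C + rho*S = M*S.

    C: bucket capacity in bytes; M: maximum output rate and rho: token
    arrival rate, both in bytes/sec. Returns the burst length S in seconds.
    """
    return C / (M - rho)

MB = 1_000_000

# The text's example: M = 25 MB/sec, rho = 2 MB/sec.
S = burst_length(C=250_000, M=25 * MB, rho=2 * MB)  # ≈ 0.011 sec, i.e. 11 msec
```

The same call with C = 500 KB and C = 750 KB gives roughly 22 msec and 33 msec, matching the values listed above.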

[Figure: output of a 250-KB token bucket: 25 MB/sec for 11 msec, then 2 MB/sec for 363 msec.]

The token bucket throws away tokens when the bucket fills up, but it never discards packets. It allows saving up tokens, up to some maximum n; i.e., bursts of up to n packets can be sent at once.
Disadvantage: it still allows bursts.
Traffic can be smoothed further by putting a leaky bucket (or another token bucket) after the first token bucket, with the second bucket's rate greater than the token bucket's rate but less than the maximum rate of the network. E.g., a 500-KB/sec token bucket followed by a 10-MB/sec leaky bucket.
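The capture-and-destroy rule can be sketched as a byte-counting token bucket; this is an illustrative, tick-based sketch, with all names my own, not a definitive implementation.

```python
class TokenBucket:
    """Byte-counting token bucket shaper (illustrative sketch).

    Tokens accumulate at rho bytes per clock tick up to `capacity`; a packet
    of `size` bytes may be sent only by consuming `size` tokens. Unlike the
    leaky bucket, excess *tokens* are discarded, never packets."""

    def __init__(self, rho, capacity):
        self.rho = rho
        self.capacity = capacity
        self.tokens = capacity     # start full: a saved-up burst can go out at once

    def tick(self):
        # Clock tick: add tokens, discarding any overflow above the bucket size.
        self.tokens = min(self.capacity, self.tokens + self.rho)

    def try_send(self, size):
        # A packet must capture (and destroy) size bytes' worth of tokens.
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False               # not enough tokens: the packet waits, not dropped
```

A full bucket lets a burst of up to `capacity` bytes out immediately, after which output is throttled to the token rate, which is exactly the burstiness the text describes.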
Closed-loop congestion control algorithms in virtual circuits

Admission control aims to keep congestion that has already started from getting worse. When congestion occurs, stop setting up new virtual circuits, or route new virtual circuits around the problem area.
Alternatively, negotiate an agreement between the host and the subnet before setting up the virtual circuit, specifying the volume of traffic, its shape, and the QoS required; the subnet then reserves resources along the virtual circuit.
