
Lecture 05: Congestion Control and QoS

1
Data Traffic
• The main focus of congestion control and quality of
service is data traffic.
• In congestion control we try to avoid traffic
congestion.
• In quality of service, we try to create an appropriate
environment for the traffic.
• So, before talking about congestion control and
quality of service, we discuss the data traffic itself.

2
Figure Traffic descriptors

3
Figure Three traffic profiles

4
Congestion
❑ Congestion in a network may occur if the load on the
network—the number of packets sent to the
network—is greater than the capacity of the
network—the number of packets a network can
handle.
❑ Congestion control refers to the mechanisms and
techniques to control the congestion and keep the
load below the capacity.

5
Figure Queues in a router

6
Figure Packet delay and throughput as functions of load

7
Congestion Control

❑ Congestion control refers to techniques and mechanisms that can either prevent congestion, before it happens, or remove congestion, after it has happened.
❑ In general, we can divide congestion control
mechanisms into two broad categories: open-loop
congestion control (prevention) and closed-loop
congestion control (removal).

8
Figure Congestion control categories

9
At Saturation Point

Two Possible Strategies at Node:


1. Discard any incoming packet if no buffer space
is available
2. Exercise flow control over neighbors
◼ May cause congestion to propagate throughout
network

10
Jackson’s Theorem - Application in Packet Switched Networks

Internal load:
   λ = Σi=1..L λi
where:
   λ = total load on all links in the network
   λi = load on link i
   L = total number of links

External load, offered to the network:
   γ = Σj=1..N Σk=1..N γjk
where:
   γ = total workload in packets/sec
   γjk = workload between source j and destination k
   N = total number of (external) sources and destinations

Note:
• Internal load > offered load
• Average length for all paths: E[number of links in path] = λ/γ
• Average number of items waiting and being served in link i: ri = λi·Tri
• Average delay of packets sent through the network:
   T = (1/γ) Σi=1..L λi·M / (Ri – λi·M)
  where M is the average packet length and Ri is the data rate on link i
• Notice: as any λi increases, the total delay increases.

11
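As a quick illustration of the delay formula above, here is a minimal sketch (hypothetical link loads, rates, and packet length; each link treated as an independent M/M/1 queue, as Jackson's theorem allows) that computes the average end-to-end delay T:

# Sketch: average network delay via Jackson's theorem (hypothetical numbers).
# Each link i is modeled as an independent M/M/1 queue, per the formula above:
#   T = (1/gamma) * sum_i( lambda_i * M / (R_i - lambda_i * M) )

M = 1000 * 8                 # average packet length, bits
gamma = 500.0                # external offered load, packets/sec
links = [                    # (lambda_i in packets/sec, R_i in bits/sec)
    (300.0, 5_000_000),
    (450.0, 5_000_000),
    (200.0, 2_000_000),
]

T = sum(lam * M / (R - lam * M) for lam, R in links) / gamma
print(f"average delay T = {T*1000:.2f} ms")

# Consistency checks implied by the slide: the internal load (sum of lambda_i)
# exceeds the offered load gamma, and their ratio is the average path length.
internal = sum(lam for lam, _ in links)
print("average path length =", internal / gamma)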
Ideal Performance

 i.e., infinite buffers, no variable overhead for packet transmission or congestion control
 Throughput increases with offered load up to
full capacity
 Packet delay increases with offered load
approaching infinity at full capacity
 Power = throughput / delay, or a measure of the
balance between throughput and delay
◼ Higher throughput results in higher delay

12
Ideal Network Utilization

Figure: ideal network utilization – normalized throughput and delay as functions of load, with packet service time Ts = L/R; power expresses the relationship between normalized throughput and delay.

13
Practical Performance

 I.e., finite buffers, non-zero packet processing overhead
 With no congestion control, increased load
eventually causes moderate congestion:
throughput increases at slower rate than load
 Further increased load causes packet delays to
increase and eventually throughput to drop to
zero

14
Effects of Congestion

 Packets arriving are stored at input buffers


 Routing decision made
 Packet moves to output buffer
 Packets queued for output transmitted as fast as possible
◼ Statistical time division multiplexing
 If packets arrive too fast to be routed, or to be output, buffers
will fill
 Can discard packets
 Can use flow control
◼ Can propagate congestion through network

15
Effects of Congestion -
No Control

16
Mechanisms for Congestion Control

17
Backpressure
 If node becomes congested it can slow down or halt flow of
packets from other nodes
 May mean that other nodes have to apply control on
incoming packet rates
 Propagates back to source
 Can restrict to logical connections generating most traffic
 Used in connection-oriented networks that allow hop-by-hop congestion control (e.g. X.25)
 Not used in ATM or frame relay
 Only recently developed for IP

18
Choke Packet

 Control packet
◼ Generated at congested node
◼ Sent to source node
◼ e.g. ICMP source quench
 Sent from a router or the destination
 Source cuts back until it no longer receives source quench messages
 Sent for every discarded packet, or in anticipation of congestion
 Rather crude mechanism

19
Implicit Congestion Signaling

 Transmission delay may increase with congestion
 Packet may be discarded
 Source can detect these as implicit indications of
congestion
 Useful on connectionless (datagram) networks
◼ e.g. IP based
 (TCP includes congestion and flow control)
 Used in frame relay

20
Explicit Congestion Signaling

 Network alerts end systems of increasing congestion
 End systems take steps to reduce offered load
 Backwards
◼ Congestion avoidance notification travels in the opposite direction to the packets that require it
 Forwards
◼ Congestion avoidance notification travels in the same direction as the packets that require it

21
Categories of Explicit Signaling

 Binary
◼ A bit set in a packet indicates congestion
 Credit based
◼ Indicates how many packets source may send
◼ Common for end to end flow control
 Rate based
◼ Supply explicit data rate limit
◼ e.g. ATM

22
Congestion Control in Packet Switched
Networks

 Send control packet to some or all source nodes


◼ Requires additional traffic during congestion
 Rely on routing information
◼ May react too quickly
 End to end probe packets
◼ Adds to overhead
 Add congestion info to packets as they cross nodes
◼ Either backwards or forwards

23
ATM Traffic Management

 High speed, small cell size, limited overhead bits


 Requirements
◼ Majority of traffic not amenable to flow control
◼ Feedback slow due to reduced transmission time
compared with propagation delay
◼ Wide range of application demands
◼ Different traffic patterns
◼ Different network services
◼ High speed switching and transmission increases
volatility

24
Cell Delay Variation

 For ATM voice/video, data is a stream of cells


 Delay across network must be short
 Rate of delivery must be constant
 There will always be some variation in transit
 Delay cell delivery so that a constant bit rate can be maintained to the application

25
Traffic and Congestion Control Framework

 ATM layer traffic and congestion control should support QoS classes for all foreseeable network services
 Should not rely on protocols that are network
specific, nor higher level application specific
protocols
 Should minimize network and end to end system
complexity

26
Traffic Management in Congested Network – Some Considerations

 Fairness
◼ Various flows should “suffer” equally
◼ Last-in-first-discarded may not be fair
 Quality of Service (QoS)
◼ Flows treated differently, based on need
◼ Voice, video: delay sensitive, loss insensitive
◼ File transfer, mail: delay insensitive, loss sensitive
◼ Interactive computing: delay and loss sensitive
 Reservations
◼ Policing: excess traffic discarded or handled on best-
effort basis

27
Frame Relay Congestion Control
 Minimize frame discard
 Maintain QoS (per-connection bandwidth)
 Minimize monopolization of network
 Simple to implement, little overhead
 Minimal additional network traffic
 Resources distributed fairly
 Limit spread of congestion
 Operate effectively regardless of flow
 Have minimal impact on other systems in the network
 Minimize variance in QoS

28
Congestion Avoidance with Explicit Signaling
Two general strategies considered:
 Hypothesis 1: Congestion always occurs
slowly, almost always at egress nodes
◼ forward explicit congestion avoidance
 Hypothesis 2: Congestion grows very quickly
in internal nodes and requires quick action
◼ backward explicit congestion avoidance

29
Congestion Control: BECN/FECN

30
FR - 2 Bits for Explicit Signaling

 Forward Explicit Congestion Notification


◼ For traffic in same direction as received frame
◼ This frame has encountered congestion
 Backward Explicit Congestion Notification
◼ For traffic in opposite direction of received frame
◼ Frames transmitted may encounter congestion

31
Frame Relay Traffic Rate Management Parameters

 Committed Information Rate (CIR)


◼ Average data rate in bits/second that the network agrees to
support for a connection
 Data Rate of User Access Channel (Access Rate)
◼ Fixed rate link between user and network (for network
access)
 Committed Burst Size (Bc)
◼ Maximum amount of data that the network agrees to transfer over a measurement interval
 Excess Burst Size (Be)
◼ Maximum amount of data, above Bc, that the network will attempt to transfer over the interval

32
Committed Information Rate (CIR)
Operation

Figure: CIR operation. The current rate at which the user is sending over the channel is compared against the CIR (the average data rate, in bps, committed to the user by the Frame Relay network), the committed burst size Bc, and the excess burst size Be (the maximum data rate over a time period allowed for this connection by the network). The access rate is the maximum line speed of the connection to the Frame Relay network (i.e., the peak data rate), and the CIRs sharing access channel j must satisfy:

   Σi CIRi,j ≤ AccessRatej
33
Frame Relay Traffic Rate Management Parameters

Figure: rate management over a measurement interval T. The committed information rate is the committed burst size divided by the interval, CIR = Bc / T (bps), bounded above by the maximum (access) rate.

34
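To make the relationship concrete, here is a minimal sketch (illustrative parameter values; the accept / mark-DE / discard split follows the usual Bc and Bc + Be thresholds rather than any particular switch implementation) of the per-interval decision:

# Sketch: Frame Relay rate-management decision over one measurement interval T.
# Assumes the standard relationship CIR = Bc / T; values are illustrative.

CIR = 64_000          # committed information rate, bits/sec
T = 1.0               # measurement interval, seconds
Bc = CIR * T          # committed burst size, bits
Be = 32_000           # excess burst size, bits

def classify(bits_sent_so_far: float, frame_bits: float) -> str:
    """Decide how the network treats a frame arriving within the interval."""
    total = bits_sent_so_far + frame_bits
    if total <= Bc:
        return "forward"                 # within committed burst
    if total <= Bc + Be:
        return "forward, mark DE"        # excess burst: discard-eligible
    return "discard"                     # beyond Bc + Be

print(classify(50_000, 8_000))   # forward
print(classify(60_000, 8_000))   # forward, mark DE
print(classify(94_000, 8_000))   # discard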
Quality of Service

❑ Quality of service (QoS) can informally be defined as something a flow seeks to attain.

35
Figure Flow characteristics

36
Techniques to Improve QoS

❑ In this section, we discuss some techniques that can be used to improve the quality of service. We briefly discuss four common methods:
❑ Scheduling
❑ Traffic shaping
❑ Admission control
❑ Resource reservation

37
Figure Priority queuing

38
Figure Weighted fair queuing

39
Integrated Services

❑ Two models have been designed to provide quality of service in the Internet:
➢ Integrated Services and
➢ Differentiated Services.

40
IntServ Approach

 Two key features form core of architecture


◼ Resource reservation – routers must maintain state of
available resource reserved for each “session”
◼ Call/session setup – each router on the session’s path must
verify availability of required resources for a session and
admit sessions only if requirements can be met
 Call Admission process
◼ Traffic characterization
◼ Desired QoS characterization
◼ Reservation signaling (RSVP, RFC 2210)
◼ Per-element call admission

41
IntServ Implementation

 Associate each packet with a “flow”


◼ a distinguishable stream of related IP packets that result from
a single user activity and demand the same QoS (per RFC
1633)
◼ unidirectional, can have multiple recipients
◼ typically identified by: source & destination IP addresses,
port numbers and protocol type
 Provide for enhanced router functions to manage
flows:
◼ Admission control based on requested QoS and availability
of required network resources
◼ Routing protocol based on QoS (like OSPF)
◼ Queuing/scheduling disciplines based on QoS
◼ Packet discard policy based on QoS
42
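As a small illustration of the flow notion (not any particular router's API), the sketch below groups packets into flows by the 5-tuple named above:

# Sketch: identifying IntServ "flows" by the 5-tuple named on the slide.
from collections import defaultdict

def flow_key(pkt: dict) -> tuple:
    """Flow = (src IP, dst IP, src port, dst port, protocol)."""
    return (pkt["src_ip"], pkt["dst_ip"], pkt["src_port"], pkt["dst_port"], pkt["proto"])

per_flow_queue = defaultdict(list)   # one queue per flow, as IntServ allows

packets = [
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.9", "src_port": 5004, "dst_port": 6000, "proto": "UDP", "len": 200},
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.9", "src_port": 5004, "dst_port": 6000, "proto": "UDP", "len": 180},
    {"src_ip": "10.0.0.2", "dst_ip": "10.0.0.9", "src_port": 40000, "dst_port": 80, "proto": "TCP", "len": 1500},
]

for pkt in packets:
    per_flow_queue[flow_key(pkt)].append(pkt)

print(len(per_flow_queue), "flows")   # 2 flows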
ISA: 3 Categories of Service

 Guaranteed Service
◼ assured capacity (data rate)
◼ specified upper bound on queuing delay through the
network
◼ no queuing loss (i.e., no buffer overflow)
 Controlled Load
◼ roughly equivalent to best-effort under no-load
conditions (dprop + dtrans)
◼ no specified upper bound on queuing delay, but will
approximate minimum expected transit delay
◼ almost no queuing loss
 Best Effort

43
Leaky Bucket Scheme

Used to:
1. Characterize traffic in a flow.
2. Describe the load imposed by a flow.
3. Traffic policing.

Note that, during any time period T, the amount of data sent cannot exceed RT + B, and the maximum queuing delay experienced by a packet is B/R.
44
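A minimal sketch of the (R, B) regulator described above, written in token-bucket form (rate R, depth B); parameter values are illustrative:

# Sketch: token-bucket policer with rate R (bits/sec) and depth B (bits).
# Over any interval T, the amount of conforming data is bounded by R*T + B,
# matching the note on the slide.

class TokenBucket:
    def __init__(self, rate_R: float, depth_B: float):
        self.R = rate_R
        self.B = depth_B
        self.tokens = depth_B      # start full
        self.last = 0.0            # last update time, seconds

    def conforms(self, now: float, packet_bits: float) -> bool:
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.B, self.tokens + self.R * (now - self.last))
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True            # conforming: send
        return False               # non-conforming: drop, mark, or queue

tb = TokenBucket(rate_R=1_000_000, depth_B=8_000)
print(tb.conforms(0.000, 8_000))   # True: bucket starts full
print(tb.conforms(0.001, 8_000))   # False: only ~1000 bits refilled
print(tb.conforms(0.010, 8_000))   # True: enough tokens accumulated again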
Queuing Disciplines

 Single FIFO queues have numerous drawbacks relative to QoS demands
◼ no special treatment based on priority
◼ larger packets get better service
◼ connections can get an unfair share of resources
 IntServ allows for multiple queues
◼ one per flow
◼ separate discipline per flow
◼ fair queuing policy

45
Queuing Disciplines (Scheduling)
FIFO (First-Come-First-Served) – Drawbacks?
• Flows with busy (greedy) sources crowd out others
• Flows with shorter packets are penalized

Round Robin (Fair Queuing) – Drawbacks?
• Flows with shorter packets are penalized
46
Processor Sharing Approach

 Processor Sharing (PS)


◼ ideal, but not a practical policy
◼ transmit only one bit per round per queue
◼ with N queues, each queue receives exactly 1/N of
the available capacity
◼ consider each queue independently to calculate
“virtual” start and finish times for each transmission

EXAMPLE                      Queue α           Queue β          Queue γ
                           Pkt 1   Pkt 2     Pkt 1   Pkt 2      Pkt 1
Real arrival time            0       2         1       2          3
Transmission time, Pi        3       1         1       4          2
Virtual start time, Si       0       3         1       2          3
Virtual finish time, Fi      3       4         2       6          5
47
Bit-Round Fair Queuing

 Bit-Round Fair Queuing (BRFQ)


◼ emulates PS round-robin approach for packets and
multiple synchronous queues
◼ uses packet length and flow identification (queue) to
schedule packets
◼ calculate Si and Fi as though PS were running
◼ when a packet finishes transmission, send next packet
based on smallest value of Fi over all queues
◼ algorithm is fair on the basis of amount of data
transmitted instead of number of packets

48
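The sketch below follows the per-queue virtual-time bookkeeping from the PS example and the BRFQ selection rule above (smallest Fi goes next). It is a simplification for illustration; a full BRFQ/PGPS implementation would also interleave arrivals with transmissions in real time.

# Sketch: bit-round fair queuing selection based on per-queue virtual finish times.
# Si = max(arrival time, previous Fi in the same queue); Fi = Si + Pi.
# The scheduler always transmits the head-of-queue packet with the smallest Fi.

from collections import deque

class Queue:
    def __init__(self):
        self.packets = deque()     # entries: (virtual finish time Fi, length Pi)
        self.last_F = 0.0

    def arrive(self, t: float, length: float):
        S = max(t, self.last_F)    # virtual start time
        F = S + length             # virtual finish time
        self.last_F = F
        self.packets.append((F, length))

def next_packet(queues):
    """Pick the non-empty queue whose head packet has the smallest Fi."""
    ready = [q for q in queues if q.packets]
    if not ready:
        return None
    q = min(ready, key=lambda q: q.packets[0][0])
    return q.packets.popleft()

# Values from the PS example above: queues alpha, beta, gamma.
alpha, beta, gamma = Queue(), Queue(), Queue()
alpha.arrive(0, 3); alpha.arrive(2, 1)
beta.arrive(1, 1);  beta.arrive(2, 4)
gamma.arrive(3, 2)

order = []
while (p := next_packet([alpha, beta, gamma])) is not None:
    order.append(p[0])
print(order)   # virtual finish times in transmission order: [2, 3, 4, 5, 6]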
PS vs. BRFQ Example

Drawback?

No precedence
or priority
weighting of
flows.

49
Queuing Discipline – Priority Queuing

Data Communications and Networking, Forouzan, 2004

50
Queuing Discipline – Weighted Fair Queuing

Data Communications and Networking, Forouzan, 2004

51
Weighted Fair Queue (WFQ)

Figure: two WFQ flows, one with guaranteed rate (weight) = .5 and one with guaranteed rate = .05.

Virtual finish time: Fi = Si + Pi / φ, where φ is the flow's weight

Maximum delay for flow i:

   Di ≤ Bi/Ri + (Ki - 1)·Li/Ri + Σm=1..Ki (Lmax/Cm)

where:
   Di = max. delay for flow i
   Bi = token bucket size for flow i
   Ri = token rate for flow i
   Ki = number of nodes in flow i's path
   Li = max. packet size for flow i
   Lmax = max. packet length for all flows through all nodes on flow i's path
   Cm = outgoing link capacity at node m
52
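As a quick worked example, the sketch below plugs illustrative (hypothetical) values into the delay bound above:

# Sketch: evaluating the WFQ end-to-end delay bound
#   Di <= Bi/Ri + (Ki - 1)*Li/Ri + sum_{m=1..Ki} Lmax/Cm
# for one flow, using illustrative parameter values.

B_i = 16_000             # token bucket size for flow i, bits
R_i = 1_000_000          # token rate (guaranteed rate) for flow i, bits/sec
K_i = 4                  # number of nodes on flow i's path
L_i = 12_000             # max. packet size for flow i, bits
L_max = 12_000           # max. packet length over all flows on the path, bits
C = [10_000_000] * K_i   # outgoing link capacity at each node m, bits/sec

D_i = B_i / R_i + (K_i - 1) * L_i / R_i + sum(L_max / C_m for C_m in C)
print(f"max delay bound D_i = {D_i*1000:.1f} ms")   # 56.8 ms with these values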
Scheduling vs. Queue Management (see RFC 2309)

 Closely related, but different performance issues…
 Scheduling: managing allocation of bandwidth
between flows by determining which packet to
send next (queuing discipline)
 Queue Management: managing the length of
packet queues by proactively dropping packets
when necessary (packet discard policy)

53
Random Early Detection (RED)

 Queuing discipline with proactive packet discard


◼ anticipate congestion and take early avoidance action
◼ improved performance for elastic traffic by not
penalizing bursty traffic
◼ avoids “global synchronization” phenomenon at
congestion onset
◼ control average queue length (buffer size) within
deterministic bounds… therefore, control average
queuing delay

54
RED Buffer Management

Discard probability is calculated for each packet arrival at the output queue based on:
• the current weighted average queue size, and
• the number of packets sent since the previous packet discard

55
Generalized RED Algorithm
calculate the average queue size, avg
if avg < THmin
    queue the packet
else if THmin ≤ avg < THmax
    calculate probability Pa
    with probability Pa
        discard the packet
    else, with probability 1 – Pa
        queue the packet
else if avg ≥ THmax
    discard the packet

56
RED Algorithm

 avg lags considerably behind changes in actual queue size (weight, wq, is small… typ. 0.002)
◼ avg ← (1 – wq)·avg + wq·q
◼ prevents reaction to short bursts
 count, number of packets passed without discard, increases incrementally while THmin < avg < THmax
◼ probability of discard, Pa, increases as count increases
◼ helps ensure fairness across multiple flows

57
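Putting the pseudocode and the EWMA update together, here is a minimal RED sketch. The linear Pa calculation follows the commonly cited RED formulation (Pb rising linearly between the thresholds, Pa increasing with count); treat it as an illustration, not a tuned implementation.

# Sketch: Random Early Detection decision for each arriving packet.
# avg is the EWMA of the instantaneous queue size q (small weight wq), and the
# drop probability grows with avg and with the count of packets accepted since
# the last discard.
import random

class RED:
    def __init__(self, th_min=5, th_max=15, wq=0.002, p_max=0.1):
        self.th_min, self.th_max = th_min, th_max
        self.wq, self.p_max = wq, p_max
        self.avg = 0.0
        self.count = 0          # packets queued since last discard

    def on_arrival(self, q: int) -> str:
        self.avg = (1 - self.wq) * self.avg + self.wq * q   # EWMA update
        if self.avg < self.th_min:
            self.count = 0
            return "queue"
        if self.avg >= self.th_max:
            self.count = 0
            return "discard"
        # th_min <= avg < th_max: probabilistic early discard
        pb = self.p_max * (self.avg - self.th_min) / (self.th_max - self.th_min)
        pa = pb / max(1e-9, 1 - self.count * pb)             # grows with count
        if random.random() < pa:
            self.count = 0
            return "discard"
        self.count += 1
        return "queue"

red = RED()
decisions = [red.on_arrival(20) for _ in range(1000)]   # persistently long queue
print(decisions.count("discard"), "early/forced discards out of 1000")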
Differentiated Services

58
Differentiated Services (DS)
 ISA and RSVP (Resource Reservation Protocol)
deployment drawbacks
◼ relatively complex
◼ may not scale well for large traffic volumes
 DiffServ solution (RFC2475, 3260)
◼ designed as a simple, easily-implemented, low-overhead tool
◼ offers a range of services in differentiated service categories…
scalable and flexible service classification
 Key characteristics
◼ uses existing IPv4 TOS field or IPv6 Traffic Class field (for DS
field)
◼ SLA established in advance… no application changes required
◼ built-in aggregation mechanism based on traffic category
◼ routers queue and forward based on information carried in the DS field

59
DS Domains

 Contiguous portion of the Internet over which a consistent set of DS policies is agreed and administered
 Typically under control of a single management entity
 Services in a domain defined by a Service Level
Agreement (SLA) – a contract between service provider
and user/another domain which specifies QoS parameters
◼ detailed service parameters: throughput, drop probability, latency
◼ service-based traffic profiles
◼ disposition of excess (in violation of SLA) traffic
 DS field carries a traffic class as specified by the SLA

60
DS Terminology

 Service Level Agreement (per RFC 3260):


◼ A Service Level Specification (SLS) is a set of
parameters and their values which together define the
service offered to a traffic stream by a DS domain.
◼ A Traffic Conditioning Specification (TCS) is a set of
parameters and their values which together specify a set
of classifier rules and a traffic profile. A TCS is an
integral element of an SLS.

61
DS and IPv4 TOS Fields

Figure: the DS field occupies the former IPv4 TOS field (and the IPv6 Traffic Class field); its low-order 2 bits are the IP ECN field, per RFC 3168 & RFC 3260.

6-bit DS code point, allocated in three pools:
Pool 1: xxxxx0 – standards-based use (e.g. 000000, xxx000)
Pool 2: xxxx11 – experimental/local use
Pool 3: xxxx01 – experimental/local use, possible future standards

62
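As a small illustration of the field layout, the sketch below splits a TOS/Traffic Class byte into the 6-bit codepoint and the 2-bit ECN field and reports which pool the codepoint belongs to (bit positions per RFC 2474/3168; the example value is arbitrary):

# Sketch: splitting the 8-bit IPv4 TOS / IPv6 Traffic Class byte into the
# 6-bit DS codepoint (high-order bits) and the 2-bit ECN field (low-order bits).

def split_tos(tos_byte: int):
    dscp = (tos_byte >> 2) & 0x3F   # DS codepoint, pools per RFC 2474
    ecn = tos_byte & 0x03           # ECN field, RFC 3168
    return dscp, ecn

def dscp_pool(dscp: int) -> int:
    """Pool 1: xxxxx0, Pool 2: xxxx11, Pool 3: xxxx01 (per the slide)."""
    if dscp & 0b1 == 0:
        return 1
    return 2 if dscp & 0b11 == 0b11 else 3

tos = 0b101110_10          # DSCP 101110 (EF), ECN 10
dscp, ecn = split_tos(tos)
print(f"DSCP={dscp:06b}, ECN={ecn:02b}, pool={dscp_pool(dscp)}")
# -> DSCP=101110, ECN=10, pool=1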
DS Traffic Classifier/Conditioner

Figure: DS traffic classifier and conditioner functions:
• Classifier – separates traffic into classes based on fields specified in the TCS (source IP, dest. IP, source port #, dest. port #, …)
• Meter – conformance test per the SLA (e.g. peak rate, burstiness, …)
• Marker – marks with a DS codepoint, or re-marks as necessary (at a domain ingress node, or at the boundary between domains)
• Shaper – regulates the traffic flow to achieve a specified traffic rate (e.g. with a token bucket)
• Dropper – polices traffic and drops packets if the rate exceeds that specified in the SLA (per the metering function)

63
Per-Hop Behavior

 RFC 2475 definition:


◼ “a description of the externally observable forwarding behavior of a
DiffServ node applied to a particular DiffServ behavior aggregate.”
 Two standard PHBs defined:
◼ Expedited Forwarding (RFC 2598)
◼ Assured Forwarding (RFC 2597)
 Expedited Forwarding
◼ “Premium service” with low delay, low-loss, low jitter,
and assured bandwidth
◼ Domain boundary nodes control traffic aggregate to limit
its characteristics (i.e. controlled rate and burstiness)
◼ Interior nodes ensure that the aggregate’s maximum
arrival rate is less than its minimum departure rate (i.e.
limit the queuing effect)

64
Per-Hop Behavior (cont.)

 Assured Forwarding
◼ designed to offer a service level that is superior to best-
effort service
◼ based on explicit allocation concept
 choice of classes offered, each with different traffic profile
 monitor traffic at boundary nodes, and mark as in or out based
on conformance to profile
 interior nodes handle packets based only on the ‘in’ or ‘out’ mark
 under congestion, ‘out’ packets are dropped before ‘in’ packets
➢ implementation defines four AF classes and replaces
in/out mark with a drop precedence code point
◼ simple and easy to implement in nodes

65
