
BCA Syllabus under CBCS Pattern with effect from 2021-2022 onwards

Subject Title: COMPUTER NETWORKS
Semester: IV
Subject Code: 21UCA07
Specialization: NA
Type: Core: Theory
L:T:P:C: 71:5:0:4

Unit I (Level K1, 10 sessions): Introduction - Network Hardware - Software - Reference Models - OSI and TCP/IP Models - Example Networks: Internet, ATM, Ethernet and Wireless LANs - Physical Layer - Theoretical Basis for Data Communication - Guided Transmission Media.
Unit II (Level K2, 15 sessions): Wireless Transmission - Communication Satellites - Telephone System: Structure, Local Loop, Trunks and Multiplexing and Switching. Data Link Layer: Design Issues - Error Detection and Correction.
Unit III (Level K3, 15 sessions): Elementary Data Link Protocols - Sliding Window Protocols - Data Link Layer in the Internet - Medium Access Layer - Channel Allocation Problem - Multiple Access Protocols - Bluetooth.
Unit IV (Levels K3, K4, 15 sessions): Network Layer - Design Issues - Routing Algorithms - Congestion Control Algorithms - IP Protocol - IP Addresses - Internet Control Protocols.
Unit V (Level K5, 16 sessions): Transport Layer - Services - Connection Management - Addressing, Establishing and Releasing a Connection - Simple Transport Protocol - Internet Transport Protocols (ITP) - Network Security: Cryptography.

Learning Resources
Text Books: A. S. Tanenbaum, "Computer Networks", Prentice-Hall of India, 2008, 4th Edition.
Reference Books:
1. Stallings, "Data and Computer Communications", Pearson Education, 2012, 7th Edition.
2. B. A. Forouzan, "Data Communications and Networking", Tata McGraw Hill, 2007, 4th Edition.
3. F. Halsall, "Data Communications, Computer Networks and Open Systems", Pearson Education, 2008.
Website / Link: NPTEL & MOOC courses titled Computer Networks - https://nptel.ac.in/courses/106106091/
UNIT-IV
Network Layer Design Issues
1. Store-and-forward packet switching
2. Services provided to transport layer
3. Implementation of connectionless service
4. Implementation of connection-oriented service
5. Comparison of virtual-circuit and datagram networks

1 Store-and-forward packet switching

A host with a packet to send transmits it to the nearest router, either on its own LAN or over a point-to-point link to the ISP. The packet is stored there until it has fully arrived and the checksum has been verified. It is then forwarded to the next router along the path until it reaches the destination host, where it is delivered. This mechanism is store-and-forward packet switching.

2 Services provided to transport layer


The network layer provides services to the transport layer at the network layer/transport layer interface. These services need to be carefully designed with the following goals in mind:
1. The services should be independent of the router technology.
2. The transport layer should be shielded from the number, type, and topology of the routers present.
3. The network addresses made available to the transport layer should use a uniform numbering plan, even across LANs and WANs.

3 Implementation of connectionless service

If connectionless service is offered, packets are injected into the network individually and routed independently
of each other. No advance setup is needed. In this context, the packets are frequently called datagrams (in analogy
with telegrams) and the network is called a datagram network.
Figure: A’s table (initially), A’s table (later), C’s table, E’s table.

Let us assume for this example that the message is four times longer than the maximum packet size, so the network layer has to break it into four packets, 1, 2, 3, and 4, and send each of them in turn to router A.
Every router has an internal table telling it where to send packets for each possible destination. Each table entry is a pair (destination, outgoing line). Only directly connected lines can be used.
A’s initial routing table is shown in the figure under the label ‘‘initially.’’
At A, packets 1, 2, and 3 are stored briefly, having arrived on the incoming link. Then each packet is forwarded
according to A’s table, onto the outgoing link to C within a new frame. Packet 1 is then forwarded to E and then
to F.
However, something different happens to packet 4. When it gets to A, it is sent to router B, even though it is also destined for F. For some reason (perhaps a traffic jam along the ACE path), A decided to send packet 4 via a different route than that of the first three packets. Router A updated its routing table, as shown under the label ‘‘later.’’
The algorithm that manages the tables and makes the routing decisions is called the routing algorithm.
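
To make the table lookup concrete, here is a minimal Python sketch of datagram forwarding (the destinations, line names, and the later change for packet 4 are illustrative, loosely following the A/B/C/E/F example above):

    # Minimal sketch of datagram forwarding: each router keeps a table mapping
    # destination -> outgoing line and looks it up independently for every packet.
    routing_table_A = {"A": "local", "B": "B", "C": "C", "D": "B", "E": "C", "F": "C"}

    def forward(table, destination):
        # Every packet is routed on its own; no connection state is kept.
        return table[destination]

    # Packets 1-3 addressed to F leave A on the line to C ...
    assert forward(routing_table_A, "F") == "C"
    # ... but if the routing algorithm later updates the table (say, congestion on
    # the A-C-E path), packet 4 to the same destination leaves on the line to B.
    routing_table_A["F"] = "B"
    assert forward(routing_table_A, "F") == "B"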
4 Implementation of connection-oriented service

Figure: A’s table, C’s table, E’s table.

If connection-oriented service is used, a path from the source router all the way to the destination router must be established before any data packets can be sent. This connection is called a VC (virtual circuit), and the network is called a virtual-circuit network.

When a connection is established, a route from the source machine to the destination machine is chosen as part
of the connection setup and stored in tables inside the routers. That route is used for all traffic flowing over the
connection, exactly the same way that the telephone system works. When the connection is released, the virtual
circuit is also terminated. With connection-oriented service, each packet carries an identifier telling which
virtual circuit it belongs to.

As an example, consider the situation shown in Figure. Here, host H1 has established connection 1 with host
H2. This connection is remembered as the first entry in each of the routing tables. The first line of A’s table says
that if a packet bearing connection identifier 1 comes in from H1, it is to be sent to router C and given
connection identifier 1. Similarly, the first entry at C routes the packet to E, also with connection identifier 1.
Now let us consider what happens if H3 also wants to establish a connection to H2. It chooses connection
identifier 1 (because it is initiating the connection and this is its only connection) and tells the network to
establish the virtual circuit.

This leads to the second row in the tables. Note that we have a conflict here because although A can easily
distinguish connection 1 packets from H1 from connection 1 packets from H3, C cannot do this. For this reason,
A assigns a different connection identifier to the outgoing traffic for the second connection. Avoiding conflicts
of this kind is why routers need the ability to replace connection identifiers in outgoing packets.
In some contexts, this process is called label switching. An example of a connection-oriented network service
is MPLS (Multi Protocol Label Switching).
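
The relabeling described above can be sketched as follows; only the idea of keying on (incoming line, incoming VC identifier) and rewriting the identifier on the way out is essential, and the concrete entries are hypothetical, loosely mirroring the H1/H3 example:

    # Sketch of a VC table: (incoming line, incoming VC id) -> (outgoing line, outgoing VC id).
    # Router A must relabel H3's connection so that C can tell the two connections apart.
    vc_table_A = {
        ("H1", 1): ("C", 1),   # connection 1 from H1 goes to C and keeps identifier 1
        ("H3", 1): ("C", 2),   # connection 1 from H3 also goes to C but is relabeled 2
    }

    def switch(table, in_line, in_vc):
        out_line, out_vc = table[(in_line, in_vc)]
        return out_line, out_vc            # the packet leaves carrying the new identifier

    print(switch(vc_table_A, "H3", 1))     # ('C', 2) - label switching in action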

5 Comparison of virtual-circuit and datagram networks


Routing Algorithms
The main function of the NL (Network Layer) is routing packets from the source machine to the destination machine.
There are two processes inside the router:
a) One of them handles each packet as it arrives, looking up the outgoing line to use for it in the routing
table. This process is forwarding.
b) The other process is responsible for filling in and updating the routing tables. That is where the routing
algorithm comes into play. This process is routing.

Regardless of whether routes are chosen independently for each packet or only when new connections are established, certain properties are desirable in a routing algorithm: correctness, simplicity, robustness, stability, fairness, and optimality.
Routing algorithms can be grouped into two major classes:
1) nonadaptive (Static Routing)
2) adaptive (Dynamic Routing)

Nonadaptive algorithms do not base their routing decisions on measurements or estimates of the current traffic and topology. Instead, the choice of the route to use to get from I to J is computed in advance, offline, and downloaded to the routers when the network is booted. This procedure is sometimes called static routing.

Adaptive algorithms, in contrast, change their routing decisions to reflect changes in the topology, and usually the traffic as well.
Adaptive algorithms differ in
1) where they get their information (e.g., locally, from adjacent routers, or from all routers),
2) when they change the routes (e.g., every ∆T sec, when the load changes, or when the topology changes), and
3) what metric is used for optimization (e.g., distance, number of hops, or estimated transit time).
This procedure is called dynamic routing.

Different Routing Algorithms


• Optimality principle
• Shortest path algorithm
• Flooding
• Distance vector routing
• Link state routing
• Hierarchical Routing

The Optimality Principle


One can make a general statement about optimal routes without regard to network topology or traffic. This statement is known as the optimality principle.
It states that if router J is on the optimal path from router I to router K, then the optimal path from J to K also falls along the same route.
As a direct consequence of the optimality principle, we can see that the set of optimal routes from all sources to a given destination forms a tree rooted at the destination. Such a tree is called a sink tree. The goal of all routing algorithms is to discover and use the sink trees for all routers.

(a) A network. (b) A sink tree for router B.

Shortest Path Routing (Dijkstra’s)


The idea is to build a graph of the subnet, with each node of the graph representing a router and each arc of the
graph representing a communication line or link.
To choose a route between a given pair of routers, the algorithm simply finds the shortest path between them on the graph.
1. Start with the local node (router) as the root of the tree. Assign a cost of 0 to this node and make it the first permanent node.
2. Examine each neighbor of the node that was made permanent last.
3. Assign a cumulative cost to each such node and make it tentative.
4. Among the list of tentative nodes:
a. Find the node with the smallest cumulative cost and make it permanent.
b. If a node can be reached from more than one route, select the route with the shortest cumulative cost.
5. Repeat steps 2 to 4 until every node becomes permanent.
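
A compact Python sketch of this tentative/permanent procedure is given below; the example graph and its link costs are hypothetical:

    import heapq

    def dijkstra(graph, source):
        # graph: {node: {neighbor: link cost}}; returns the cheapest known cost to every node.
        dist = {source: 0}                 # the source is the first permanent node, at cost 0
        heap = [(0, source)]               # tentative nodes, ordered by cumulative cost
        while heap:
            cost, node = heapq.heappop(heap)       # the cheapest tentative node becomes permanent
            if cost > dist.get(node, float("inf")):
                continue                           # stale entry for an already-permanent node
            for neighbor, weight in graph[node].items():
                new_cost = cost + weight           # cumulative cost via this node
                if new_cost < dist.get(neighbor, float("inf")):
                    dist[neighbor] = new_cost      # keep the cheaper of the competing routes
                    heapq.heappush(heap, (new_cost, neighbor))
        return dist

    example = {"A": {"B": 2, "C": 5}, "B": {"A": 2, "C": 1}, "C": {"A": 5, "B": 1}}
    print(dijkstra(example, "A"))          # {'A': 0, 'B': 2, 'C': 3}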

Flooding
• Another static algorithm is flooding, in which every incoming packet is sent out on every outgoing line
except the one it arrived on.
• Flooding obviously generates vast numbers of duplicate packets; in fact, an infinite number unless some measures are taken to damp the process.
• One such measure is to have a hop counter contained in the header of each packet, which is decremented at
each hop, with the packet being discarded when the counter reaches zero. Ideally, the hop counter should be
initialized to the length of the path from source to destination.
• A variation of flooding that is slightly more practical is selective flooding. In this algorithm the routers do
not send every incoming packet out on every line, only on those lines that are going approximately in the
right direction.
• Flooding is not practical in most applications.
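
As a rough illustration of the hop-counter measure described above, here is a small Python sketch (the packet fields and line names are hypothetical):

    def flood(packet, arrival_line, outgoing_lines):
        # Decrement the hop counter; discard the packet once the counter reaches zero.
        packet["hops"] -= 1
        if packet["hops"] <= 0:
            return []                      # damped: this copy travels no further
        # Otherwise send a copy on every line except the one the packet arrived on.
        return [line for line in outgoing_lines if line != arrival_line]

    pkt = {"dest": "F", "hops": 3}         # ideally initialized to the source-to-destination path length
    print(flood(pkt, "west", ["north", "south", "east", "west"]))   # ['north', 'south', 'east']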

Intra- and Inter domain Routing


An autonomous system (AS) is a group of networks and routers under the authority of a single administration.
Routing inside an autonomous system is referred to as intra domain routing. (DISTANCE VECTOR,
LINK STATE)

Routing between autonomous systems is referred to as inter domain routing. (PATH VECTOR) Each autonomous system can choose one or more intra domain routing protocols to handle routing inside the autonomous system. However, only one inter domain routing protocol handles routing between autonomous systems.

Distance Vector Routing

In distance vector routing, the least-cost route between any two nodes is the route with minimum distance. In
this protocol, as the name implies, each node maintains a vector (table) of minimum distances to every node.

Distance vector routing involves three main steps:

Initialization
Sharing
Updating

Initialization
Each node can know only the distance between itself and its immediate neighbors, those directly connected to
it. So for the moment, we assume that each node can send a message to the immediate neighbors and find the
distance between itself and these neighbors. The figure below shows the initial tables for each node. The distance for
any entry that is not a neighbor is marked as infinite (unreachable).

Initialization of tables in distance vector routing

Sharing
The whole idea of distance vector routing is the sharing of information between neighbors. Although node A
does not know about node E, node C does. So if node C shares its routing table with A, node A can also know
how to reach node E. On the other hand, node C does not know how to reach node D, but node A does. If node
A shares its routing table with node C, node C also knows how to reach node D. In other words, nodes A and C,
as immediate neighbors, can improve their routing tables if they help each other.
NOTE: In distance vector routing, each node shares its routing table with its immediate neighbors periodically and whenever there is a change.

Updating
When a node receives a two-column table from a neighbor, it needs to update its routing table. Updating takes three steps:
1. The receiving node needs to add the cost between itself and the sending node to each value in the second column (x + y).
2. If the receiving node uses information from any row, the sending node is the next node in the route.
3. The receiving node needs to compare each row of its old table with the corresponding row of the modified version of the received table.
a. If the next-node entry is different, the receiving node chooses the row with the smaller cost. If there is a tie, the old one is kept.
b. If the next-node entry is the same, the receiving node chooses the new row.

For example, suppose node C has previously advertised a route to node X with distance 3. Suppose that now
there is no path between C and X; node C now advertises this route with a distance of infinity. Node A must not
ignore this value even though its old entry is smaller. The old route does not exist anymore. The new route has a
distance of infinity.
Updating in distance vector routing
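
The three updating steps can be sketched in Python as follows; the tables, link cost, and node names are hypothetical, and only the x + y addition and the row-comparison rules are the point:

    INF = float("inf")

    def dv_update(my_table, neighbor, neighbor_table, link_cost):
        # my_table / neighbor_table map destination -> (cost, next node) for this node and the neighbor.
        for dest, (advertised_cost, _) in neighbor_table.items():
            candidate = (link_cost + advertised_cost, neighbor)   # step 1: add the link cost (x + y); step 2: neighbor becomes the next node
            old = my_table.get(dest, (INF, None))
            if old[1] == neighbor:
                my_table[dest] = candidate    # step 3b: same next node -> always take the fresh row (even if the cost grew)
            elif candidate[0] < old[0]:
                my_table[dest] = candidate    # step 3a: different next node -> keep the smaller cost; ties keep the old row
        return my_table

    # Hypothetical example: node A hears C's table over a link of cost 2 and learns a route to E.
    A = {"A": (0, None), "C": (2, "C"), "D": (3, "D")}
    C = {"A": (2, "A"), "C": (0, None), "E": (4, "E")}
    print(dv_update(A, "C", C, 2))   # E appears with cost 6 via C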

Final Diagram

When to Share
The question now is, when does a node send its partial routing table (only two columns) to all its immediate neighbors? The table is sent both periodically and when there is a change in the table.
Periodic Update: A node sends its routing table, normally every 30 s, in a periodic update. The period depends on the protocol that is using distance vector routing.
Triggered Update: A node sends its two-column routing table to its neighbors anytime there is a change in its routing table. This is called a triggered update. The change can result from the following:
1. A node receives a table from a neighbor, resulting in changes in its own table after updating.
2. A node detects some failure in the neighboring links, which results in a distance change to infinity.

Two-node instability

Three-node instability

SOLUTIONS FOR INSTABILITY


1. Defining Infinity: redefine infinity to a smaller number, such as 100. For our previous scenario, the system
will be stable in less than 20 updates. As a matter of fact, most implementations of the distance vector
protocol define the distance between each node to be 1 and define 16 as infinity. However, this means that
the distance vector routing cannot be used in large systems. The size of the network, in each direction,
cannot exceed 15 hops.

2. Split Horizon: In this strategy, instead of flooding the table through each interface, each node sends only
part of its table through each interface. If, according to its table, node B thinks that the optimum route to
reach X is via A, it does not need to advertise this piece of information to A; the information has come from
A (A already knows). Taking information from node A, modifying it, and sending it back to node A creates
the confusion. In our scenario, node B eliminates the last line of its routing table before it sends it to A. In
this case, node A keeps the value of infinity as the distance to X. Later when node A sends its routing table
to B, node B also corrects its routing table. The system becomes stable after the first update: both nodes A and B know that X is not reachable.

3. Split Horizon and Poison Reverse Using the split horizon strategy has one drawback. Normally, the
distance vector protocol uses a timer, and if there is no news about a route, the node deletes the route from
its table. When node B in the previous scenario eliminates the route to X from its advertisement to A, node A cannot tell whether this is due to the split horizon strategy (the source of the information was A) or because B has not received any news about X recently. The split horizon strategy can be combined with the poison
reverse strategy. Node B can still advertise the value for X, but if the source of information is A, it can
replace the distance with infinity as a warning: "Do not use this value; what I know about this route comes
from you."
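
A minimal sketch of how a node might build the advertisement it sends on one interface, with plain split horizon or with poison reverse (the table contents are hypothetical):

    INF = float("inf")   # many implementations use 16 in the role of infinity

    def advertisement(table, out_neighbor, poison_reverse=True):
        # Build the two-column table a node sends out on the interface toward out_neighbor.
        adv = {}
        for dest, (cost, next_hop) in table.items():
            if next_hop == out_neighbor:
                if poison_reverse:
                    adv[dest] = INF          # "do not use this route; what I know about it came from you"
                # plain split horizon: omit the route entirely
            else:
                adv[dest] = cost
        return adv

    B = {"X": (2, "A"), "Y": (1, "C")}                    # hypothetical table at node B
    print(advertisement(B, "A"))                           # {'X': inf, 'Y': 1}
    print(advertisement(B, "A", poison_reverse=False))     # {'Y': 1}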

The Count-to-Infinity Problem

Link State Routing


Link state routing is based on the assumption that, although the global knowledge about the topology is not clear, each node has partial knowledge: it knows the state (type, condition, and cost) of its links. In other words, the whole topology can be compiled from the partial knowledge of each node.

Building Routing Tables


1. Creation of the states of the links by each node, called the link state packet (LSP).
2. Dissemination of LSPs to every other router, called flooding, in an efficient and reliable way.
3. Formation of a shortest path tree for each node.
4. Calculation of a routing table based on the shortest path tree

I. Creation of Link State Packet (LSP) A link state packet can carry a large amount of information. For the
moment, we assume that it carries a minimum amount of data: the node identity, the list of links, a sequence
number, and age. The first two, node identity and the list of links, are needed to make the topology. The
third, sequence number, facilitates flooding and distinguishes new LSPs from old ones. The fourth, age,
prevents old LSPs from remaining in the domain for a long time.
LSPs are generated on two occasions:
1. When there is a change in the topology of the domain.
2. On a periodic basis: the period in this case is much longer than in distance vector routing. The timer set for periodic dissemination is normally in the range of 60 min or 2 h, based on the implementation. A longer period ensures that flooding does not create too much traffic on the network.
II. Flooding of LSPs: After a node has prepared an LSP, it must be disseminated to all other nodes, not only to its neighbors. The process is called flooding and is based on the following:
1. The creating node sends a copy of the LSP out of each interface.
2. A node that receives an LSP compares it with the copy it may already have. If the newly arrived
LSP is older than the one it has (found by checking the sequence number), it discards the LSP. If it is
newer, the node does the following:
a. It discards the old LSP and keeps the new one.
b. It sends a copy of it out of each interface except the one from which the packet arrived. This
guarantees that flooding stops somewhere in the domain (where a node has only one interface).
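
The receive-side check can be sketched as follows; the LSP is represented as a small dictionary carrying the four fields listed earlier (node identity, list of links, sequence number, age), and the interface names are hypothetical:

    def receive_lsp(lsp_database, lsp, arrival_interface, interfaces):
        # lsp_database maps originating node -> newest LSP seen so far.
        current = lsp_database.get(lsp["node"])
        if current is not None and lsp["seq"] <= current["seq"]:
            return []                              # older (or duplicate) LSP: discard it, forward nothing
        lsp_database[lsp["node"]] = lsp            # newer: replace the old LSP
        # send a copy out of every interface except the one the LSP arrived on
        return [i for i in interfaces if i != arrival_interface]

    db = {}
    lsp = {"node": "A", "seq": 7, "links": {"B": 5, "C": 2}, "age": 3600}
    print(receive_lsp(db, lsp, "if1", ["if1", "if2", "if3"]))   # ['if2', 'if3']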
III. Formation of Shortest Path Tree: Dijkstra Algorithm
A shortest path tree is a tree in which the path between the root and every other node is the shortest.
The Dijkstra algorithm creates a shortest path tree from a graph. The algorithm divides the nodes into two sets: tentative and permanent. It finds the neighbors of the current node, makes them tentative, examines them, and, if they pass the criteria, makes them permanent.
IV. Calculation of a Routing Table
Routing table for node A

Path Vector Routing


Distance vector and link state routing are both intra domain routing protocols. They can be used inside an
autonomous system, but not between autonomous systems. These two protocols are not suitable for inter
domain routing mostly because of scalability. Both of these routing protocols become intractable when the
domain of operation becomes large. Distance vector routing is subject to instability in the domain of
operation. Link state routing needs a huge amount of resources to calculate routing tables. It also creates
heavy traffic because of flooding. There is a need for a third routing protocol which we call path vector routing.
Path vector routing proved to be useful for inter domain routing. The principle of path vector routing is similar
to that of distance vector routing. In path vector routing, we assume that there is one node (there can be
more, but one is enough for our conceptual discussion) in each AS that acts on behalf of the entire AS. Let us
call it the speaker node. The speaker node in an AS creates a routing table and advertises it to speaker nodes in
the neighboring ASs. The idea is the same as for distance vector routing except that only speaker nodes in each
AS can communicate with each other. However, what is advertised is different. A speaker node advertises the
path, not the metric of the nodes, in its autonomous system or other autonomous systems
Initialization
Initial routing tables in path vector routing
Sharing
Just as in distance vector routing, in path vector routing a speaker in an autonomous system shares its table with its immediate neighbors. In the figure, node A1 shares its table with nodes B1 and C1. Node C1 shares its table with nodes D1, B1, and A1. Node B1 shares its table with C1 and A1. Node D1 shares its table with C1.
Updating
When a speaker node receives a two-column table from a neighbor, it updates its own table by adding the nodes that are not in its routing table and adding its own autonomous system and the autonomous system that sent the table. After a while, each speaker has a table and knows how to reach each node in the other ASs.
a) Loop prevention. The instability of distance vector routing and the creation of loops can be avoided in path
vector routing. When a router receives a message, it checks to see if its AS is in the path list to the
destination. If it is, looping is involved and the message is ignored.
b) Policy routing. Policy routing can be easily implemented through path vector routing. When a router receives a message, it can check the path. If one of the ASs listed in the path is against its policy, it can ignore that path and that destination. It does not update its routing table with this path, and it does not send this message to its neighbors.
c) Optimum path. What is the optimum path in path vector routing? We are looking for a path to a destination that is the best for the organization that runs the AS. One system may use RIP, which defines hop count as the metric; another may use OSPF with minimum delay defined as the metric. In our previous figure, each AS may have more than one path to a destination. For example, a path from AS4 to AS1 can be AS4-AS3-AS2-AS1, or it can be AS4-AS3-AS1. For the tables, we chose the one that had the smaller number of ASs, but this is not always the case. Other criteria, such as security, safety, and reliability, can also be applied.
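
A small sketch of the loop-prevention and policy checks in (a) and (b) above; the AS names, the banned-AS set, and the advertised path are hypothetical:

    def accept_route(my_as, banned_as, destination, as_path):
        # Loop prevention: if our own AS already appears in the path, looping is involved; ignore it.
        if my_as in as_path:
            return False
        # Policy routing: ignore any path that traverses an AS our policy forbids.
        if any(asn in banned_as for asn in as_path):
            return False
        return True

    # Hypothetical advertisements reaching AS1 for a destination D1 in AS4.
    print(accept_route("AS1", {"AS5"}, "D1", ["AS4", "AS3", "AS2"]))   # True
    print(accept_route("AS1", {"AS5"}, "D1", ["AS4", "AS3", "AS1"]))   # False: our AS is already in the path
    print(accept_route("AS1", {"AS5"}, "D1", ["AS4", "AS5", "AS2"]))   # False: the path crosses a banned AS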

Hierarchical Routing
As networks grow in size, the router routing tables grow proportionally. Not only is router memory consumed
by ever-increasing tables, but more CPU time is needed to scan them and more bandwidth is needed to send
status reports about them.
At a certain point, the network may grow to the point where it is no longer feasible for every router to have an
entry for every other router, so the routing will have to be done hierarchically, as it is in the telephone network.
When hierarchical routing is used, the routers are divided into what we will call regions. Each router knows all
the details about how to route packets to destinations within its own region but knows nothing about the internal
structure of other regions.
For huge networks, a two-level hierarchy may be insufficient; it may be necessary to group the regions into clusters, the clusters into zones, the zones into groups, and so on, until we run out of names for aggregations.
When a single network becomes very large, an interesting question is ‘‘how many levels
should the hierarchy have?’’
For example, consider a network with 720 routers. If there is no hierarchy, each router needs 720 routing table
entries.
If the network is partitioned into 24 regions of 30 routers each, each router needs 30 local entries plus 23 remote
entries for a total of 53 entries.

If a three-level hierarchy is chosen, with 8 clusters each containing 9 regions of 10 routers, each router needs
10 entries for local routers, 8 entries for routing to other regions within its own cluster, and 7 entries for distant
clusters, for a total of 25 entries
Kamoun and Kleinrock (1979) discovered that the optimal number of levels for an N-router network is ln N, requiring a total of e ln N entries per router.
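
The arithmetic in these examples is easy to reproduce; the short sketch below recomputes the 53- and 25-entry figures and evaluates the Kamoun-Kleinrock quantities for N = 720:

    from math import e, log

    routers = 720

    flat = routers                          # no hierarchy: one entry per router
    two_level = 30 + (24 - 1)               # 30 local entries + 23 other regions = 53
    three_level = 10 + (9 - 1) + (8 - 1)    # 10 local + 8 other regions + 7 other clusters = 25

    print(flat, two_level, three_level)     # 720 53 25
    print(log(routers), e * log(routers))   # optimal levels ~ ln N = 6.58, entries ~ e ln N = 17.9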

CONGESTION CONTROL ALGORITHMS


Too many packets present in (a part of) the network causes packet delay and loss that degrades performance.
This situation is called congestion.
The network and transport layers share the responsibility for handling congestion. Since congestion occurs
within the network, it is the network layer that directly experiences it and must ultimately determine what to do
with the excess packets.
However, the most effective way to control congestion is to reduce the load that the transport layer is placing on the network. This requires the network and transport layers to work together. Here we will look at the network aspects of congestion.
When too much traffic is offered, congestion sets in and performance degrades sharply

The figure above depicts the onset of congestion. When the number of packets hosts send into the network is well
within its carrying capacity, the number delivered is proportional to the number sent. If twice as many are sent,
twice as many are delivered. However, as the offered load approaches the carrying capacity, bursts of traffic
occasionally fill up the buffers inside routers and some packets are lost. These lost packets consume some of the
capacity, so the number of delivered packets falls below the ideal curve. The network is now congested. Unless
the network is well designed, it may experience a congestion collapse.

Difference between congestion control and flow control


Congestion control has to do with making sure the network is able to carry the offered traffic. It is a global
issue, involving the behavior of all the hosts and routers.
Flow control, in contrast, relates to the traffic between a particular sender and a particular receiver. Its job is to
make sure that a fast sender cannot continually transmit data faster than the receiver is able to absorb it.
To see the difference between these two concepts, consider a network made up of 100-Gbps fiber optic links on
which a supercomputer is trying to force feed a large file to a personal computer that is capable of handling only
1 Gbps. Although there is no congestion (the network itself is not in trouble), flow control is needed to force the
supercomputer to stop frequently to give the personal computer a chance to breathe.
At the other extreme, consider a network with 1-Mbps lines and 1000 large computers, half of which are trying
to transfer files at 100 kbps to the other half. Here, the problem is not that of fast senders overpowering slow
receivers, but that the total offered traffic exceeds what the network can handle.

The reason congestion control and flow control are often confused is that the best way to handle both problems
is to get the host to slow down. Thus, a host can get a ‘‘slow down’’ message either because the receiver cannot
handle the load or because the network cannot handle it.

Several techniques can be employed. These include:


1. Warning bit
2. Choke packets
3. Load shedding
4. Random early discard
5. Traffic shaping
The first three techniques deal with congestion detection and recovery; the last two deal with congestion avoidance.
Warning Bit
1. A special bit in the packet header is set by the router to warn the source when congestion is detected.
2. The bit is copied and piggy-backed on the ACK and sent to the sender.
3. The sender monitors the number of ACK packets it receives with the warning bit set and adjusts its
transmission rate accordingly.

Choke Packets
1. A more direct way of telling the source to slow down.
2. A choke packet is a control packet generated at a congested node and transmitted to restrict traffic
flow.
3. The source, on receiving the choke packet, must reduce its transmission rate by a certain percentage.
4. An example of a choke packet is the ICMP Source Quench packet.

Hop-by-Hop Choke Packets
1. Over long distances or at high speeds, choke packets are not very effective.
2. A more efficient method is to send choke packets hop-by-hop.
3. This requires each hop to reduce its transmission even before the choke packet arrives at the source.

Load Shedding
1. When buffers become full, routers simply discard packets.
2. Which packet is chosen to be the victim depends on the application and on the errorstrategy used in
the data link layer.
3. For a file transfer, for example, older packets cannot be discarded, since this would cause a gap in the received data.
4. For real-time voice or video, it is probably better to throw away old data and keep new packets.
5. Get the application to mark packets with discard priority.

Random Early Discard (RED)


1. This is a proactive approach in which the router discards one or more packets before the buffer becomes completely full.
2. Each time a packet arrives, the RED algorithm computes the average queue length, avg.
3. If avg is lower than some lower threshold, congestion is assumed to be minimal or non-existent and the
packet is queued.
4. If avg is greater than some upper threshold, congestion is assumed to be serious and the packet is
discarded.
5. If avg is between the two thresholds, this might indicate the onset of congestion. The probability of
congestion is then calculated.
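
One common way to realize steps 3-5 is a linear drop probability between the two thresholds; the sketch below assumes that form, and the threshold values and maximum drop probability are hypothetical:

    import random

    def red_decision(avg_queue_len, min_th, max_th, max_drop_prob=0.1):
        if avg_queue_len < min_th:
            return "queue"                  # congestion minimal or non-existent: queue the packet
        if avg_queue_len > max_th:
            return "discard"                # serious congestion: always discard
        # onset of congestion: drop with a probability that grows with the average queue length
        drop_prob = max_drop_prob * (avg_queue_len - min_th) / (max_th - min_th)
        return "discard" if random.random() < drop_prob else "queue"

    print(red_decision(avg_queue_len=3, min_th=5, max_th=15))    # 'queue'
    print(red_decision(avg_queue_len=20, min_th=5, max_th=15))   # 'discard'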
Traffic Shaping
1. Another method of congestion control is to “shape” the traffic before it enters the
network.
2. Traffic shaping controls the rate at which packets are sent (not just how many). Used in ATM and
Integrated Services networks.
3. At connection set-up time, the sender and carrier negotiate a traffic pattern (shape).

Two traffic shaping algorithms are:


Leaky Bucket
Token Bucket

The Leaky Bucket Algorithm is used to control the rate in a network. It is implemented as a single-server queue with constant service time. If the bucket (buffer) overflows, packets are discarded.

(a) A leaky bucket with water. (b) A leaky bucket with packets.
1. The leaky bucket enforces a constant output rate (average rate) regardless of the burstiness of the input.
Does nothing when input is idle.
2. The host injects one packet per clock tick onto the network. This results in a uniform flow of packets,
smoothing out bursts and reducing congestion.
3. When packets are all the same size (as in ATM cells), one packet per tick is fine. For variable-length packets, though, it is better to allow a fixed number of bytes per tick; e.g., 1024 bytes per tick will allow one 1024-byte packet, two 512-byte packets, or four 256-byte packets in one tick.
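
A minimal sketch of the byte-counting variant (a fixed byte budget per tick); overflow of a full bucket is not modeled, and the packet sizes are just examples:

    from collections import deque

    def leaky_bucket_tick(queue, bytes_per_tick=1024):
        # Each clock tick, let at most bytes_per_tick worth of queued packets onto the network.
        sent, budget = [], bytes_per_tick
        while queue and queue[0] <= budget:
            size = queue.popleft()
            budget -= size
            sent.append(size)              # e.g. one 1024-byte packet, two 512s, or four 256s per tick
        return sent                        # anything still queued waits for the next tick

    q = deque([512, 512, 1024, 256])       # packet sizes in bytes waiting in the bucket
    print(leaky_bucket_tick(q))            # [512, 512]
    print(leaky_bucket_tick(q))            # [1024] - the constant outflow smooths the burst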

Token Bucket Algorithm

1. In contrast to the leaky bucket, the Token Bucket Algorithm allows the output rate to vary, depending on the size of the burst.
2. In the TB algorithm, the bucket holds tokens. To transmit a packet, the host must capture and destroy one token.
3. Tokens are generated by a clock at the rate of one token every t sec.
4. Idle hosts can capture and save up tokens (up to the maximum size of the bucket) in order to send larger bursts later.
(a) Before. (b) After.

Leaky Bucket vs. Token Bucket


1. LB discards packets; TB does not. TB discards tokens.
2. With TB, a packet can only be transmitted if there are enough tokens to cover its length in bytes.
3. LB sends packets at an average rate. TB allows for large bursts to be sent faster by speeding up the output.
4. TB allows saving up tokens (permissions) to send large bursts. LB does not allow saving.
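
To make the contrast concrete, here is a minimal token-bucket sketch in which tokens accumulate while the host is idle (up to the bucket size) and are spent per byte transmitted; the rate and bucket size are hypothetical:

    class TokenBucket:
        def __init__(self, rate_tokens_per_sec, bucket_size):
            self.rate = rate_tokens_per_sec    # one token is generated every 1/rate seconds
            self.capacity = bucket_size        # an idle host can save up at most this many tokens
            self.tokens = 0.0

        def tick(self, seconds):
            # Tokens accumulate over time but never beyond the bucket size.
            self.tokens = min(self.capacity, self.tokens + self.rate * seconds)

        def try_send(self, packet_bytes):
            # A packet is transmitted only if enough tokens are available to cover its length in bytes.
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes    # tokens, not packets, are what gets consumed
                return True
            return False

    tb = TokenBucket(rate_tokens_per_sec=1000, bucket_size=4000)
    tb.tick(5)                                    # 5 idle seconds: tokens saved up, capped at 4000
    print(tb.try_send(3000), tb.try_send(3000))   # True False - a burst beyond the savings must wait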
