Unit 2 Part A

The document discusses error detection and correction in the data link layer of the OSI model, highlighting the importance of detecting and correcting errors that can occur during data transmission. It covers various types of transmission errors, detection methods such as VRC, LRC, and CRC, and introduces Hamming code as a technique for error correction. The document emphasizes the need for redundancy in data transmission to facilitate error detection and correction.

Uploaded by Bhoomi Agarwal

Error Detection and Correction
Data link layer

The data link layer (Layer 2) of the OSI model actually consists
of two sublayers:
1. Media Access Control (MAC) sublayer
2. Logical Link Control (LLC) sublayer.

Note

Data can be corrupted during transmission. Some applications require that
errors be detected and corrected.

INTRODUCTION

Let us first discuss some issues related, directly or
indirectly, to error detection and correction.

Topics discussed in this section:


Types of Errors

Detection Versus Correction

Types of Transmission Errors

 Single-bit error
 Multiple-bit error
 Burst error

 Single-bit error

 In a single-bit error, only 1 bit in the data unit has changed.

 Multiple-bit error

 The frame is received with more than one bit in a corrupted state.

 Burst error

 The frame contains two or more consecutive corrupted bits.

Figure 10.2 Burst error of length 8

Detection versus Correction

 The correction of errors is more difficult than
detection.
 In detection, we only check whether an error is
present; the answer is a simple yes or no.
 In correction, once the check is made and an error
is detected, we must also locate it; the answer requires
the additional information of the error's position so that
the correction can be performed.

Note

To detect or correct errors, we need to
send extra (redundant) bits with the data.
These are called redundant bits.

Coding
 Redundancy is achieved through various coding schemes.
 We can divide coding schemes into two broad categories:
block coding and convolution coding.
 Our concentration will be on block coding only.
 To perform coding, we need encoding and decoding.
 The receiver can detect an error (a change in the original
codeword) if these two conditions hold:
1. The receiver has (or can find) a list of valid codewords.
2. The original codeword has changed to an invalid one.
Figure 10.3 The structure of encoder and decoder

Figure 10.4 XORing of two single bits or two words
Detection Methods
 VRC (Vertical Redundancy Check)
 LRC (Longitudinal Redundancy Check)
 CRC (Cyclic Redundancy Check)
 Checksum
VRC
 VRC (Vertical Redundancy Check)
 A parity bit is added to every data unit so that the
total number of 1s (including the parity bit) becomes
even for an even-parity check or odd for an odd-
parity check.
 VRC can detect all single-bit errors.
 It can detect multiple-bit or burst errors only if the
total number of errors is odd.
 VRC
[Figure: even-parity VRC concept — the sender computes a parity bit over the data
word 1010101 and transmits 1010101 0 over the transmission medium; the receiver
recomputes the parity over the received word and rejects the frame if the total
count of 1s is not even.]
Performance
• Detects all single-bit errors.
• It can detect a burst error only if the total number of corrupted bits is odd.

• E.g. (even parity expected)
• 11100001 received as 10100001 (three 1s, odd parity)  correctly rejected
• 11100001 received as 10100101 (four 1s, even parity)  erroneously accepted (because the burst error flipped an even number of bits)

• Question: compute the even-parity VRC bit for each of the following data units:
• 1110110
• 1101111
• 1110010
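The even-parity VRC scheme above can be sketched in a few lines of Python (a minimal illustration; the function names are my own):

```python
def vrc_encode(data: str) -> str:
    """Append a parity bit so the total number of 1s is even."""
    parity = data.count("1") % 2
    return data + str(parity)

def vrc_check(codeword: str) -> bool:
    """Accept only if the received word still has an even number of 1s."""
    return codeword.count("1") % 2 == 0
```

For the slide's example, `vrc_encode("1010101")` yields `10101010`; flipping any odd number of bits makes `vrc_check` reject the word, while an even number of flips slips through undetected.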
LRC
 LRC (Longitudinal Redundancy Check)
 Organize the data into a table (rows and columns) and create a parity bit for each
column. The parity bits of all the columns are assembled into a new
data unit, which is appended to the end of the data block.
VRC & LRC
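Column-wise even parity can be sketched as follows (the helper name and the sample data units are illustrative assumptions, not from the slides):

```python
def lrc(blocks: list[str]) -> str:
    """Even parity computed down each column of equal-length data units."""
    width = len(blocks[0])
    return "".join(
        str(sum(block[col] == "1" for block in blocks) % 2)
        for col in range(width)
    )
```

For example, `lrc(["1100", "1010"])` returns `0110`; this LRC unit is appended after the data blocks, and the receiver recomputes the column parities to verify them.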
Cyclic Redundancy Check:
 The CRC algorithm uses a polynomial division approach to
generate the CRC code. The data is treated as a sequence of
bits and divided by a predefined polynomial of a fixed degree
using modulo-2 arithmetic. The remainder obtained from this
division is the CRC code that is appended to the data. The
choice of the polynomial used in the CRC algorithm depends
on the specific application and is typically standardized.
 CRC is a reliable and efficient way of detecting errors in
digital communication systems. It is widely used in
communication protocols such as Ethernet, Wi-Fi, Bluetooth,
and many others. CRC is also used in storage systems such as
hard disk drives and optical discs to detect errors
that may occur during data transfer.

CRC generation method
1. Choose a CRC polynomial: A polynomial is a mathematical expression consisting of
one or more terms. The CRC polynomial (divisor) determines the size of the CRC
code and the error-detection capabilities of the CRC algorithm.
2. Choose an initial value: The initial value is a starting point for the CRC calculation.
3. Append padding: The input data is padded with zeros to match the degree of the
CRC polynomial.
4. Divide the padded input data by the CRC polynomial: The division is performed
using binary arithmetic, with no carry or borrow.
5. XOR the remainder with the initial value: The XOR operation is performed on the
remainder of the division and the initial value to generate the final CRC.
6. Transmit or store the data and the CRC: The data and the CRC code are sent or
stored together. The receiver can use the same CRC polynomial and initial value to
generate the expected CRC code and compare it to the received checksum.
 The CRC has one bit less than the divisor. That is, if the CRC is n bits, the
divisor is n + 1 bits.
 The sender appends this CRC to the end of the data unit so that the resulting data
unit becomes exactly divisible by the predetermined divisor, i.e. the remainder becomes
zero.
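The steps above (with a zero initial value, as in the classic textbook form of CRC) can be sketched as modulo-2 long division in Python; the function names are illustrative:

```python
def mod2_div(bits: str, divisor: str) -> str:
    """Binary long division with XOR (no carries); returns the remainder."""
    buf = list(bits)
    for i in range(len(bits) - len(divisor) + 1):
        if buf[i] == "1":                      # XOR the divisor in at this offset
            for j, d in enumerate(divisor):
                buf[i + j] = str(int(buf[i + j]) ^ int(d))
    return "".join(buf[len(bits) - len(divisor) + 1:])

def crc_remainder(data: str, divisor: str) -> str:
    """Append len(divisor)-1 zeros, divide, keep the remainder as the CRC."""
    return mod2_div(data + "0" * (len(divisor) - 1), divisor)

def crc_transmit(data: str, divisor: str) -> str:
    """Sender: data followed by its CRC is exactly divisible by the divisor."""
    return data + crc_remainder(data, divisor)

def crc_check(received: str, divisor: str) -> bool:
    """Receiver: the whole received stream must leave a zero remainder."""
    return set(mod2_div(received, divisor)) <= {"0"}
```

As a worked check, the frame 1101011011 with generator x^4 + x + 1 (divisor 10011) gives `crc_transmit("1101011011", "10011") == "11010110111110"`, and any single flipped bit makes `crc_check` fail.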
CRC Generator
The divisor in a cyclic code is normally called the
generator polynomial or simply the generator.

 The CRC generator
 uses modulo-2 division.

Binary Division
in a CRC Generator

CRC code: 001

TRANSMITTED BIT STREAM: 100100001
CRC Checker

Binary Division
in a
CRC Checker
One more example

Calculation of
the polynomial
code checksum.
A polynomial to represent a binary word

Practice Questions
1. Find the CRC for 1110010101 with the divisor x^3 + x^2 + 1.
2. A bit stream 1101011011 is transmitted using the standard
CRC method. The generator polynomial is x^4 + x + 1. What is the
actual bit string transmitted?
3. A bit stream 10011101 is transmitted using the standard CRC
method. The generator polynomial is x^3 + 1. What is the actual
bit string transmitted? Suppose the third bit from the left is
inverted during transmission. How will the receiver detect this
error?
4. If the frame is 1101011011 and the generator is x^4 + x + 1, what
would be the transmitted frame?
5. What is the remainder obtained by dividing x^7 + x^5 + 1 by the
generator polynomial x^3 + 1?

Find the CRC for 1110010101 with the divisor x^3 + x^2 + 1

Checksum:
⦿ Checksum is used by the higher-layer protocols
⦿ and is based on the concept of redundancy (as are VRC, LRC,
CRC and Hamming code)
⦿ To create the checksum, the sender does the following:
– The unit is divided into k sections, each of n bits.
– Sections 1 and 2 are added together using one's complement arithmetic.
– Section 3 is added to the result of the previous step.
– Section 4 is added to the result of the previous step.
– The process repeats until section k is added to the result of
the previous step.
– Add the carry to the sum, if any.
– The final result is complemented to make the checksum.
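The sender-side steps can be sketched as follows (a minimal illustration; the function name is mine):

```python
def ones_complement_checksum(sections: list[str], bits: int = 8) -> str:
    """Sum n-bit sections with end-around carry, then complement."""
    mask = (1 << bits) - 1
    total = 0
    for section in sections:
        total += int(section, 2)
        total = (total & mask) + (total >> bits)   # wrap any carry back in
    return format((~total) & mask, f"0{bits}b")
```

For the example that follows, `ones_complement_checksum(["10101001", "00111001"])` yields `00011101`; the receiver runs the same sum over the data sections plus the checksum and accepts only if the complement of the result is all zeros.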
Checksum Example
Example
Suppose the following block of 16 bits is to be sent using a
checksum of 8 bits.
10101001 00111001
The numbers are added using one's complement arithmetic:
10101001
00111001
Sum 11100010
Checksum 00011101
The pattern sent is 10101001 00111001 00011101
Example
Now suppose the receiver receives the pattern sent in Example
and there is no error.
10101001 00111001 00011101
When the receiver adds the three sections, it will get all 1s,
which, after complementing, is all 0s and shows that there is
no error.
10101001
00111001
00011101
Sum 11111111
Complement 00000000 means that the pattern is
OK.
Example
Now suppose there is a burst error of length 5 that affects 4 bits.
10101111 11111001 00011101

When the receiver adds the three sections, it gets


10101111

11111001
00011101
Partial Sum 1 11000101
Carry 1
Sum 11000110
Complement 00111001, which is nonzero, so the pattern is corrupted.
Exercise
1. Consider the data unit to be transmitted is
10011001111000100010010010000100. Compute the checksum, assuming an 8-bit checksum is used.
2. Checksum value of 1001001110010011 and 1001100001001101 of 16 bit
segment is-
a. 1010101000011111
b. 1011111000100101
c. 1101010000011110
d. 1101010000111111

Error Correction
Forward error correction is the process in which the receiver tries to guess
the message by using the redundant bits. This is possible, as we see later, if the
number of errors is small.

Retransmission: correction by retransmission is a technique in which the
receiver detects the occurrence of an error and asks the sender to resend the
message. Resending is repeated until a message arrives that the receiver
believes is error-free (usually, not all errors can be detected).

Hamming Distance

The Hamming distance between two words is
the number of differences between the
corresponding bits. Both words must
be of the same size.
The XOR operation can be used to find the Hamming
distance: it equals the number of 1s in the XOR of the two words.
E.g., d(0010, 1011) is 2.
The Hamming distance represents the
number of bits affected by an error.
Figure 10.4 XORing of two single bits or two words
Minimum Hamming Distance
The minimum Hamming distance is the smallest Hamming distance
between all possible pairs of codewords.
E.g. Three words are (101, 110, 111):
d(101,110) = 2, d(101,111) = 1, d(110,111) = 1
In this case d_min = 1
Eg 2: (000, 011, 101, 110)

Eg 3: (00000, 10101, 01011, 11110)

Hamming code
 Hamming code is a widely used error-correction
technique in digital communication and data
storage. It was developed by Richard Hamming in the
early 1950s and is named after him. The main
purpose of Hamming codes is to detect and correct
errors in transmitted or stored data.

- If the total number of bits in a transmittable unit is m + r,
then r must be able to indicate at least m + r + 1 different
states:
2^r >= m + r + 1
ex) For a value of m of 7 (ASCII), the smallest r value
that can satisfy this equation is 4:
2^4 >= 7 + 4 + 1

Relationship between data and redundancy bits

Number of data bits (m) | Number of redundancy bits (r) | Total bits (m + r)
1 | 2 | 3
2 | 3 | 5
3 | 3 | 6
4 | 3 | 7
5 | 4 | 9
6 | 4 | 10
7 | 4 | 11
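The table can be regenerated by searching for the smallest r satisfying the relation 2^r >= m + r + 1 (a small sketch; the function name is mine):

```python
def redundancy_bits(m: int) -> int:
    """Smallest r with 2**r >= m + r + 1."""
    r = 1
    while 2 ** r < m + r + 1:
        r += 1
    return r
```

`redundancy_bits(7)` returns 4, matching the ASCII example, and m = 1..7 reproduces the r column of the table above.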

Error Correction
Hamming Code : - Developed by R.W.Hamming

The key to the Hamming Code is the use of extra parity bits to allow the identification of a single
error. Create the code word as follows:

1) Mark all bit positions that are powers of two as parity bits. (positions 1, 2, 4, 8, 16, 32, 64, etc.)
2) All other bit positions are for the data to be encoded. (positions 3, 5, 6, 7, 9, 10, 11, 12, 13,
14, 15, 17, etc.)
3) Each parity bit calculates the parity for some of the bits in the code word. The position of the
parity bit determines the sequence of bits that it alternately checks and skips.
Position 1: check 1 bit, skip 1 bit, check 1 bit, skip 1 bit, etc. (1,3,5,7,9,11,13,15,...)
Position 2: check 2 bits, skip 2 bits, check 2 bits, skip 2 bits, etc. (2,3,6,7,10,11,14,15,...)
Position 4: check 4 bits, skip 4 bits, check 4 bits, skip 4 bits, etc.
(4,5,6,7,12,13,14,15,20,21,22,23,...)
Position 8: check 8 bits, skip 8 bits, check 8 bits, skip 8 bits, etc. (8-15,24-31,40-47,...)
Position 16: check 16 bits, skip 16 bits, check 16 bits, skip 16 bits, etc. (16-31,48-63,80-95,...)
Position 32: check 32 bits, skip 32 bits, check 32 bits, skip 32 bits, etc. (32-63,96-127,160-
191,...) etc.
4) Set a parity bit to 1 if the total number of ones in the positions it checks is odd. Set a parity bit
to 0 if the total number of ones in the positions it checks is even.
Example:
- Positions of redundancy bits in Hamming code for 7 bits ASCII (powers of 2)

- In the Hamming code, each r bit is the parity bit for one combination of
data bits.
r1 = bits 1, 3, 5, 7, 9, 11
r2 = bits 2, 3, 6, 7, 10, 11
r4 = bits 4, 5, 6, 7
r8 = bits 8, 9, 10, 11
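The four steps can be sketched end to end. Note the convention assumed here: positions are numbered 1, 2, 3, ... from the left and data bits fill the non-power-of-two positions in order; some texts number positions from the right instead, which changes the bit layout but not the method.

```python
def hamming_encode(data: str) -> str:
    """Even-parity Hamming code; parity bits at positions 1, 2, 4, 8, ..."""
    m = len(data)
    r = 1
    while 2 ** r < m + r + 1:
        r += 1
    n = m + r
    code = [0] * (n + 1)                      # 1-indexed for convenience
    bits = iter(int(b) for b in data)
    for pos in range(1, n + 1):
        if pos & (pos - 1):                   # not a power of two -> data bit
            code[pos] = next(bits)
    for p in range(r):
        pb = 1 << p                           # parity position 1, 2, 4, ...
        code[pb] = sum(code[i] for i in range(1, n + 1) if i & pb) % 2
    return "".join(map(str, code[1:]))

def hamming_correct(codeword: str) -> tuple[str, int]:
    """Recompute each check; the failing checks' positions sum to the error."""
    code = [0] + [int(b) for b in codeword]
    n = len(codeword)
    syndrome, p = 0, 1
    while p <= n:
        if sum(code[i] for i in range(1, n + 1) if i & p) % 2:
            syndrome += p
        p <<= 1
    if syndrome:
        code[syndrome] ^= 1                   # flip the single bad bit
    return "".join(map(str, code[1:])), syndrome
```

Under this convention, `hamming_encode("1011")` gives `0110011`; flipping position 5 to get `0110111` makes `hamming_correct` return `("0110011", 5)`, locating and repairing the error.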
Error Correction Using Hamming Code
Errors can be handled in two ways:
• when an error is discovered, the receiver can have the
sender retransmit the entire data unit; or
• the receiver can use an error-correcting code, which
automatically corrects certain errors.
In the Hamming code, each r bit is the VRC (parity) bit for one combination of data bits:
r1 = bits 1, 3, 5, 7, 9, 11
r2 = bits 2, 3, 6, 7, 10, 11
r4 = bits 4, 5, 6, 7
r8 = bits 8, 9, 10, 11
Redundant Bit Position

The bits in positions 2^n (n = 0, 1, 2, 3, ...) are
the parity bits, and the bits in the remaining
positions are information bits.
Calculating the r values
Calculating Even Parity
Example:
Error Detection Using Hamming Code:
Practice Questions
Q1. Generate the Hamming Code for the
data 11101001 with even parity.
Q2. A bit stream 1110101001111 is
transmitted. Suppose the fourth bit from the
left is inverted during transmission. How will
the receiver detect and correct error?

DATA LINK LAYER-
Framing
Different terminology used to
define packets at each layer
Data link layer

• node-to-node communication
• The second function of the data link layer is media access control, or how to share the link.
• Data link control functions include framing, flow and error control, and software-
implemented protocols that provide smooth and reliable transmission of frames
between nodes.
Introduction
•To provide service to the network layer, the data link layer must use the service
provided to it by the physical layer.
•What the physical layer does is accept a raw bit stream and attempt to deliver it
to the destination.
•This bit stream is not guaranteed to be error free. The number of bits received
may be less than, equal to, or more than the number of bits transmitted, and
they may have different values. It is up to the data link layer to detect and, if
necessary, correct errors.

Contents
 Introduction

 Fixed size Framing

 Variable size Framing

 Character-oriented protocol

 Bit-oriented protocol
FRAMING
Framing in the data link layer separates a message from one source to a destination, or
from other messages to other destinations, by adding a sender address and a
destination address.
• Frames can be of fixed or variable size.
• In fixed-size framing, there is no need for defining the boundaries of the frames;
the size itself can be used as a delimiter.
• In variable-size framing, we need a way to define the end of one frame and the
beginning of the next. Two approaches are used:
• a character-oriented approach (byte stuffing) and
• a bit-oriented approach (bit stuffing).
Fixed-Size
Framing
Frames can be of fixed or variable size. In fixed-size framing, there is no need for
defining the boundaries of the frames; the size itself can be used as a delimiter.

An example of this type of framing is the ATM wide-area network, which uses
frames of fixed size called cells.

Variable-Size Framing
Variable-size framing is prevalent in local- area networks. In variable-size framing,
we need a way to define the end of the frame and the beginning of the next.

Historically, two approaches were used for this purpose:


• a character-oriented approach (byte-oriented approach) and
• a bit-oriented approach.
A FRAME IN A CHARACTER-ORIENTED PROTOCOL
• Data to be carried are 8-bit characters from a coding system such as ASCII.
• The header, which normally carries the source and destination addresses and other
control information, and
• the trailer, which carries error detection or error correction redundant bits, are also
multiples of 8 bits.
• To separate one frame from the next, an 8-bit (1-byte) flag is added at the beginning
and the end of a frame. The flag, composed of protocol-dependent special characters,
signals the start or end of a frame.

NOTE: popular when only text was exchanged


Byte stuffing and unstuffing
Note
Byte stuffing is the process of adding 1 extra byte whenever there is a flag or escape
character in the text.
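A sketch of byte stuffing and unstuffing, assuming (hypothetically) the HDLC-style values 0x7E for the flag and 0x7D for the escape; the actual special characters are protocol-dependent:

```python
FLAG, ESC = 0x7E, 0x7D      # assumed values; protocol-dependent in practice

def byte_stuff(payload: bytes) -> bytes:
    """Prefix ESC before any flag or escape byte, then frame with flags."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)
        out.append(b)
    out.append(FLAG)
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    """Strip the flags, then drop each ESC and keep the byte after it."""
    body, out, i = frame[1:-1], bytearray(), 0
    while i < len(body):
        if body[i] == ESC:
            i += 1            # skip the escape itself
        out.append(body[i])
        i += 1
    return bytes(out)
```

The two functions are exact inverses, so a payload containing flag or escape bytes survives the round trip unchanged.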

A frame in a bit-oriented protocol
• The data section of a frame is a sequence of bits to be interpreted by the upper layer as
text, graphic, audio, video, and so on.
• we still need a delimiter to separate one frame from the other. Most protocols use a
special 8-bit pattern flag 01111110 as the delimiter to define the beginning and the end
of the frame.
Note
Bit stuffing is the process of adding one extra 0 whenever five consecutive 1s follow
a 0 in the data, so that the receiver does not mistake
the pattern 0111110 for a flag.
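The rule can be sketched as follows (here a 0 is stuffed after any run of five consecutive 1s, the usual HDLC behaviour; the function names are illustrative):

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")       # the stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Drop the bit that follows each run of five consecutive 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:
            skip, run = False, 0  # this is the stuffed 0; discard it
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
    return "".join(out)
```

For example, `bit_stuff("01111110")` yields `011111010`, so the flag pattern can never appear inside stuffed data, and `bit_unstuff` inverts the process exactly.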

Bit stuffing and unstuffing
The following character encoding is used in a data link protocol:
A: 01000111; B: 11100011; FLAG: 01111110; ESC: 11100000
Show the bit sequence transmitted (in binary) for the four-character frame:
A B ESC FLAG when each of the following framing methods are used:
(a) Character count
(b) Flag bytes with byte stuffing.
(c) Starting and ending flag bytes, with bit stuffing.
The following character encoding is used in a data link protocol:
A: 01000111; B: 11100011; FLAG: 01111110; ESC: 11100000
Show the bit sequence transmitted (in binary) for the four-character frame:
A B ESC FLAG when each of the following framing methods are used:
(a) Character count
(b) Flag bytes with byte stuffing.
(c) Starting and ending flag bytes, with bit stuffing.

•ANS:
a) 00000100 01000111 11100011 11100000 01111110
b) 01111110 01000111 11100011 11100000 11100000 11100000
01111110 01111110
c) 01111110 01000111110100011111000000011111010 01111110
The following character encoding is used in a data link protocol:
A: 11010101; B: 10101001; FLAG: 01111110; ESC: 10100011
Show the bit sequence transmitted (in binary) for the five-character frame:
A ESC B ESC FLAG when each of the following framing methods are used:
(a) Flag bytes with byte stuffing.
(b) Starting and ending flag bytes, with bit stuffing.
Ethernet Frame
Format
The basic frame format required for all MAC implementations is defined in the IEEE 802.3
standard, though several optional formats are used to extend the protocol's basic
capability.

Note – The size of an Ethernet IEEE 802.3 frame varies from 64 bytes to 1518 bytes, including a data
field of 46 to 1500 bytes.
Ethernet (IEEE 802.3) Frame Format

PREAMBLE
• The Ethernet frame starts with a 7-byte preamble.
• This is a pattern of alternating 0's and 1's which indicates the start of the
frame and allows sender and receiver to establish bit synchronization.
• Initially, the PRE (preamble) was introduced to allow for the loss of a few bits
due to signal delays. But today's high-speed Ethernet doesn't need the preamble
to protect the frame bits.
• The PRE indicates to the receiver that a frame is coming and allows the
receiver to lock onto the data stream before the actual frame begins.
Start of frame delimiter (SFD)

•Start of frame delimiter (SFD) – This is a 1-Byte field which is always set to
10101011.
•SFD indicates that upcoming bits are starting of the frame, which is the
destination address.
•Sometimes SFD is considered the part of PRE, this is the reason Preamble is
described as 8 Bytes in many places.
•The SFD warns station or stations that this is the last chance for
synchronization.

Destination and Source
Address
Destination Address – This is 6-Byte field which contains the MAC
address of machine for which data is destined.

Source Address – This is a 6-Byte field which contains the MAC


address of source machine. As Source Address is always an
individual address (Unicast), the least significant bit of first byte is
always 0.

Length
• Length – Length is a 2-byte field which indicates
the length of the entire Ethernet frame.
• This 16-bit field can hold a length value
between 0 and 65535, but the length cannot be larger
than 1500 because of Ethernet's own limitations.
Data –
• Data – This is the place where the actual data is inserted, also known
as the Payload.
• Both the IP header and data will be inserted here if the Internet Protocol
is used over Ethernet.
• The maximum data present may be as long as 1500 bytes. In case the
data length is less than the minimum length, i.e. 46 bytes, then
0's are padded to meet the minimum required length.
Cyclic Redundancy Check
(CRC) –
• Cyclic Redundancy Check (CRC) – CRC is a 4-byte field.
• This field contains a 32-bit CRC value, which is
generated over the Destination Address, Source Address,
Length, and Data fields.
• If the CRC computed by the destination is not the same as the sent
CRC value, the data received is corrupted.
Multiple Access
If we are sharing the media, wire, or air with other users, we need a
protocol to first manage the sharing process and then do the data transfer.
The data link layer is divided into two functionality-oriented sublayers:

Logical link control (LLC) layer: The upper sub layer is responsible for data link
control i.e. for flow and error control.
Media access control (MAC) layer: The lower sub layer is responsible for resolving
access to the shared media.
Taxonomy of multiple-access protocols
RANDOM ACCESS

In random access or contention methods, no station is


superior to another station and none is assigned the
control over another. No station permits, or does not
permit, another station to send. At each instance, a
station that has data to send uses a procedure defined
by the protocol to make a decision on whether or not to
send.
ALOHA
■ Norman Abramson at University of Hawaii, in 70’s wanted to connect computer
centers of all the islands of Hawaii.
■ Hawaii is a collection of islands and it was not possible to connect them with
telephone lines.
■ Joining islands with wires laid on seabed was very expensive, so they started
thinking about wireless solution.
■ Solution: ALOHA
■ Using short range radios.
■ Half duplex by nature: at a time, a station can only send or receive. Switching also
takes time.
■ Two different frequencies, one for sending, another for receiving.
■ But, problem of collision, how to solve it?
■ Solution: Let the users communicate, if signals collide, not acknowledged and
so, sender resends data.
■ Adding randomness reduces the chance of collision.
■ Algorithm is called Binary Exponential Back-off Algorithm.
■ Also had problem: While transmitting, sender can not sense collision.
■ In ALOHA, maximum 18 out of 100 packets pass without collision if ALOHA
works with optimum speed.
Binary Exponential Backoff
■ Sender sends immediately with idle channel
■ Continues to listen while transmitting
■ In case of a collision, the sender waits for a random
period (maximum of two time slots)
■ In case they collide again, the interval is just doubled
every time it experiences a collision
■ Once the randomization interval has reached 0–1023
slots, it is not increased further
Binary Exponential Back off Algorithm
■ Time is divided into discrete slots whose length is equal to the worst-case round-trip
propagation time on the ether (2τ).
■ The minimum frame is 64 bytes (header + 46 bytes of data) = 512 bits.
■ With a channel capacity of 10 Mbps, 512 / 10 M = 51.2 µs per slot.
■ After the 1st collision, each station waits for 0 or 1 time slots before trying again.
■ After the 2nd collision, each station picks 0, 1, 2 or 3 at random and waits for that
many time slots.
■ If a 3rd collision occurs, then the number of slots to wait is chosen at random from
the interval 0 to 2^3 - 1.
■ In general, after the i-th collision, a random number between 0 and 2^i - 1 is chosen, and that
number of time slots is skipped.
■ After 10th collision, randomized interval is frozen at max of 1023 slots.
■ After the 16th collision, the controller reports failure back to the sending computer, and further
recovery is up to higher layers.
■ This algorithm is called Binary Exponential Back off Algorithm.
■ Advantage: Ensures a low delay when only a few stations collide, but also assures that the
collision is resolved in a reasonable interval when many stations collide.
■ Disadvantage: Could introduce significant delay.
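The rules above can be sketched directly (slot length taken from the text; the function name is illustrative):

```python
import random

SLOT_TIME = 512 / 10_000_000     # 512 bits at 10 Mbps = 51.2 microseconds

def backoff_slots(nth_collision: int) -> int:
    """Slots to wait after the n-th successive collision."""
    if nth_collision >= 16:
        raise RuntimeError("failure reported to higher layers")
    k = min(nth_collision, 10)               # interval frozen at 0..1023
    return random.randrange(2 ** k)          # uniform in 0 .. 2**k - 1
```

After the 1st collision the station waits 0 or 1 slots, after the 3rd it picks from 0..7, and from the 10th collision on the interval stays at 0..1023; multiplying the result by `SLOT_TIME` gives the actual delay.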
PURE ALOHA
The original ALOHA protocol is called pure ALOHA. This is a simple, but

elegant protocol. The idea is that each station sends a frame whenever it
has a frame to send.
■ However, since there is only one channel to share, there is the possibility
of collision between frames from different stations.
■ There are four stations (unrealistic assumption) that contend with one
another for access to the shared channel.
■ The figure in next slide shows that each station sends two frames; there
are a total of eight frames on the shared medium. Some of these frames
collide because multiple frames are in contention for the shared channel.
■ It is obvious that we need to resend the frames that have been destroyed
during transmission. The pure ALOHA protocol relies on
acknowledgments from the receiver.
■ When a station sends a frame, it expects the receiver to send an
acknowledgment. If the acknowledgment does not arrive after a time-out
period, the station assumes that the frame (or the acknowledgment) has
been destroyed and resends the frame.
Frames in a pure ALOHA network
Procedure for pure ALOHA protocol
Vulnerable time for pure ALOHA protocol
Vulnerable Time
■ Let us find the length of time, the vulnerable time, in which there is a
possibility of collision. We assume that the stations send fixed-length
frames, with each frame taking Tfr s to send.
■ Station A sends a frame at time t. Now imagine station B has already sent
a frame between t - Tfr and t. This leads to a collision between the
frames from station A and station B.
■ The end of B's frame collides with the beginning of A's frame. On the
other hand, suppose that station C sends a frame between t and t + Tfr.
■ Here, there is a collision between frames from station A and station C.
The beginning of C's frame collides with the end of A's frame.
■ The vulnerable time, during which a collision may occur in pure
ALOHA, is 2 times the frame transmission time.
Pure ALOHA vulnerable time = 2 x Tfr
A pure ALOHA network transmits 200-bit frames on a shared channel
of 200 kbps. What is the requirement to make this frame collision-free?

Solution: Average frame transmission time Tfr is 200 bits/200 kbps or 1


ms. The vulnerable time is 2 x 1 ms = 2 ms. This means no station
should send later than 1 ms before this station starts transmission and
no station should start sending during the 1-ms period that this
station is sending.
Throughput
■ Let us call G the average number of frames generated by the system during
one frame transmission time. Then it can be proved that the average number
of successful transmissions for pure ALOHA is S = G x e^(-2G).
■ The maximum throughput Smax is 0.184, for G = 1/2.
■ In other words, if one-half a frame is generated during one frame
transmission time (in other words, one frame during two frame transmission
times), then 18.4 percent of these frames reach their destination
successfully.
■ This is an expected result because the vulnerable time is 2 times the frame
transmission time.
■ Therefore, if a station generates only one frame in this vulnerable time
(and no other station generates a frame during this time), the frame will
reach its destination successfully.
■ The throughput for pure ALOHA is S = G x e^(-2G).
■ The maximum throughput Smax = 0.184 when G = 1/2.
1.A pure ALOHA network transmits 200-bit frames on a shared channel of 200 kbps. What
is the throughput if the system (all stations together) produces 1000 frames per second?
a) 150 frames
b) 80 frames
c) 135 frames
d) 96 frames
Ans C 135 frames
2. A pure ALOHA network transmits 200-bit frames on a shared channel of 200 kbps.
What is the throughput if the system (all stations together) produces 500 frames per second?
a) 146 frames
b) 92 frames
c) 38 frames
d) 156 frames
Ans b 92 frames
3. A pure ALOHA network transmits 200-bit frames on a shared channel of 200 kbps. What
is the throughput if the system (all stations together) produces 250 frames per second?
a) 38 frames
b) 48 frames
c) 96 frames
d) 126 frames
Ans a 38 frames
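The three answers above follow directly from S = G x e^(-2G); a quick numerical check (the function name is illustrative):

```python
import math

def pure_aloha_throughput(frame_bits, bandwidth_bps, frames_per_sec):
    """Successful frames per second for pure ALOHA: S = G * e^(-2G)."""
    tfr = frame_bits / bandwidth_bps          # frame transmission time
    G = frames_per_sec * tfr                  # load per frame time
    return G * math.exp(-2 * G) * frames_per_sec
```

`round(pure_aloha_throughput(200, 200_000, 1000))` is 135, and the 500- and 250-frame cases give about 92 and 38 frames, matching the answers above.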
Slotted ALOHA
Slotted ALOHA was designed to improve on pure ALOHA's efficiency, because
pure ALOHA has a very high probability of frame collision. In slotted ALOHA, the
shared channel is divided into fixed time intervals called slots. If a
station wants to send a frame on the shared channel, the frame can only be sent at
the beginning of a slot, and only one frame is allowed to be sent in each slot.
If a station is unable to send its frame at the beginning of a slot, it
has to wait until the beginning of the next slot.
However, the possibility of a collision remains when two or more stations try to send a frame at
the beginning of the same time slot.
In slotted ALOHA, 36 out of 100 packets are delivered without collision at optimum
speed.
■ In slotted ALOHA time is divided into discrete intervals, each
corresponding to one frame.
■ A computer is not permitted to send whenever it has data to send.
■ Instead it is required to wait for the next available slot.
■ Well, it still needs improvement.
■ See next figures that explain ALOHA and Slotted ALOHA.
Frames in a slotted ALOHA network
■ In slotted ALOHA we divide the time into slots of Tfr s and force the
station to send only at the beginning of the time slot.
■ Because a station is allowed to send only at the beginning of the
synchronized time slot, if a station misses this moment, it must wait until
the beginning of the next time slot.
■ This means that the station which started at the beginning of this slot has
already finished sending its frame. Of course, there is still the possibility
of collision if two stations try to send at the beginning of the same time
slot.
■ However, the vulnerable time is now reduced to one-half, equal to Tfr
■ Slotted ALOHA vulnerable time = Tfr
■ The throughput for slotted ALOHA is S = G x e^(-G).
■ The maximum throughput Smax = 0.368 when G = 1.
Q. A slotted ALOHA network transmits 200-bit frames using a shared channel with a 200-kbps
bandwidth. Find the throughput if the system (all stations together) produces 500 frames per second.
a) 92 frames
b) 368 frames
c) 276 frames
d) 151 frames
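The same check works for slotted ALOHA with S = G x e^(-G); with the question's numbers G = 0.5, giving roughly 151.6 successful frames per second, closest to option (d):

```python
import math

def slotted_aloha_throughput(frame_bits, bandwidth_bps, frames_per_sec):
    """Successful frames per second for slotted ALOHA: S = G * e^(-G)."""
    G = frames_per_sec * frame_bits / bandwidth_bps
    return G * math.exp(-G) * frames_per_sec
```

At G = 1 the success fraction peaks at e^(-1) = 0.368, consistent with the maximum-throughput note above.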
Vulnerable time for slotted ALOHA protocol
Pure aloha v/s slotted aloha
1. Basic: In pure ALOHA, data can be transmitted at any time by any station. In slotted ALOHA, data can be transmitted only at the beginning of a time slot.
2. Introduced by: Pure ALOHA was introduced under the leadership of Norman Abramson in 1970 at the University of Hawaii. Slotted ALOHA was introduced by Roberts in 1972 to improve pure ALOHA's capacity.
3. Time: In pure ALOHA, time is not synchronized and is continuous. In slotted ALOHA, time is globally synchronized and discrete.
4. Number of collisions: Pure ALOHA does not decrease the number of collisions to half. Slotted ALOHA decreases the number of collisions to half, enhancing the efficiency of pure ALOHA.
5. Vulnerable time: In pure ALOHA, the vulnerable time is 2 × Tfr. In slotted ALOHA, the vulnerable time is Tfr.
6. Successful transmission: In pure ALOHA, the probability of successful transmission of a frame is S = G × e^−2G. In slotted ALOHA, it is S = G × e^−G.
7. Throughput: The maximum throughput of pure ALOHA is about 18%; of slotted ALOHA, about 37%.
Carrier Sense Multiple Access (CSMA)
■ To minimize the chance of collision and, therefore, increase the
performance, the CSMA method was developed.
■ The chance of collision can be reduced if a station senses the medium
before trying to use it. Carrier sense multiple access (CSMA) requires that
each station first listen to the medium (or check the state of the medium)
before sending.
■ In other words, CSMA is based on the principle "sense before transmit"
or "listen before talk."
■ CSMA can reduce the possibility of collision, but it cannot eliminate it.
■ The possibility of collision still exists because of propagation delay; when
a station sends a frame, it still takes time (although very short) for the first
bit to reach every station and for every station to sense it.
■ In other words, a station may sense the medium and find it idle, only
because the first bit sent by another station has not yet been received.
Vulnerable time in CSMA
■ The vulnerable time for CSMA is the propagation time Tp .
This is the time needed for a signal to propagate from one
end of the medium to the other.
■ When a station sends a frame, and any other station tries to
send a frame during this time, a collision will result.
■ But if the first bit of the frame reaches the end of the
medium, every station will already have heard the bit and
will refrain from sending.
Vulnerable time in CSMA
CSMA:
■ Types:
■ 1. 1-persistent CSMA
■ 2. Nonpersistent CSMA
■ 3. p-persistent CSMA
■ 4. O-persistent CSMA
■ Modified protocols:
■ CSMA/CD
■ CSMA/CA
1-Persistent: In the 1-persistent mode of CSMA, each node first senses the
shared channel and, if the channel is idle, immediately sends its data.
Otherwise, it keeps monitoring the state of the channel and broadcasts the
frame as soon as the channel becomes idle. Because the station transmits
with probability 1 whenever the carrier is idle, this mode is called
1-persistent CSMA.
This method has the highest chance of collision, especially when the
propagation delay is high, because two or more stations may find the line
idle and send their frames immediately.

Non-Persistent: In the nonpersistent mode of CSMA, each node must sense the
channel before transmitting; if the channel is inactive, it immediately
sends the data. Otherwise, the station waits for a random time (it does not
sense continuously), and when the channel is then found to be idle, it
transmits the frame.
■ The nonpersistent approach reduces the chance of collision because it is
unlikely that two or more stations will wait the same amount of time and retry
to send simultaneously.
■ However, this method reduces the efficiency of the network because the
medium may remain idle when there are stations with frames to send.
P-Persistent
The p-persistent method is used if the channel has time slots with a slot duration
equal to or greater than the maximum propagation time. The p-persistent approach
combines the advantages of the other two strategies.
■ It reduces the chance of collision and improves efficiency.
■ In this method, after the station finds the line idle it follows these steps:
1. With probability p, the station sends its frame.
2. With probability q = 1 - p, the station waits for the beginning of the
next time slot and checks the line again.
a. If the line is idle, it goes to step 1.
b.If the line is busy, it acts as though a collision has occurred
and uses the backoff procedure.
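The steps above can be sketched as a single decision function (illustrative only; the names and the callable used to sense the line are my own, not a standard API):

```python
import random

def p_persistent_attempt(p, next_slot_idle, rng=random):
    """One pass of the p-persistent steps, after the line is found idle.

    Returns 'send', 'wait' (re-check at the next slot), or 'backoff'.
    next_slot_idle: callable reporting the line state at the next slot.
    """
    if rng.random() < p:       # step 1: send with probability p
        return "send"
    if next_slot_idle():       # step 2: wait for the next slot, sense again
        return "wait"          # 2a: idle -> go back to step 1
    return "backoff"           # 2b: busy -> act as if a collision occurred
```

With p = 1 this degenerates to 1-persistent behavior (the station always sends when the line is idle).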

O-Persistent: In the O-persistent method, each node is assigned a transmission
order by a supervisory node, which defines the priority of the station before
transmission of the frame on the shared channel. If the channel is found to be
inactive, each station waits for its turn to transmit the data.
Behavior of three persistence methods
Flow diagram for three persistence methods
CSMA/CD
■ Carrier Sense: The Ethernet card listens to the channel before
transmission and defers transmitting if somebody else is already
transmitting.
■ Multiple Access: More than one user needs channel access.
■ Collision Detection: The protocol listens while the transmission is going
on and stops transmitting when it detects a collision.
■ Interframe gap: As soon as channel becomes free, it waits for
small interframe gap and then transmits. Interframe gap is idle time
between frames. After a frame has been sent, transmitters are
required to transmit a minimum of 96 bits (12 octets) of idle line
state before transmitting the next frame.
■ Minimum frame size limitation: the frame must be at least 64 bytes long.
■ Maximum distance limitation: the network span is bounded (about 2500 m
in classic 10-Mbps Ethernet).
■ Distance and frame size cannot both be increased together.
■ Higher bandwidth makes this trade-off worse.
■ If the first 64 bytes are successfully received, there will be no
collision later in the frame.
Collision Detection & Avoidance
■ Collisions garble the frames.
■ Collision Detection:
■ Let collisions happen and then resolve them.
■ If the sender detects a collision, it can stop sending and restart later by
following the binary back-off algorithm.
■ Needs a mechanism to listen to the channel.
■ Used by classic Ethernet.
■ Collision Avoidance:
■ Ensure that collisions do not occur by carefully avoiding them.
■ Here, it is possible to extract any component signal from the collided
signal, so retransmission is not needed; we just extract what we need from
the received signals.
■ Preferred by 802.11 wireless LANs.
■ CDMA (Code Division Multiple Access) is used in mobile phones.

Collision of the first bit in CSMA/CD
Collision and abortion in CSMA/CD
Flow diagram for the CSMA/CD
Energy level during transmission, idleness, or collision

The level of energy in a channel can have three values: zero, normal, and abnormal. At the
zero level, the channel is idle. At the normal level, a station has successfully captured
the channel and is sending its frame. At the abnormal level, there is a collision and the
level of the energy is twice the normal level. A station that has a frame to send or is
sending a frame needs to monitor the energy level to determine if the channel is idle,
busy, or in collision mode.
Throughput

The throughput of CSMA/CD is greater than that of pure or slotted
ALOHA. The maximum throughput occurs at a different value of G
and is based on the persistence method and the value of p in the
p-persistent approach. For the 1-persistent method the maximum
throughput is around 50 percent when G = 1. For the nonpersistent
method, the maximum throughput can go up to 90 percent when G
is between 3 and 8.
Efficiency of CSMA/CD
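A commonly cited approximation (given in Forouzan's text) for the efficiency of CSMA/CD is 1/(1 + 6.4a), where a = Tp/Tfr. A quick sketch (the function name is mine):

```python
def csma_cd_efficiency(tp, tfr):
    """Approximate CSMA/CD efficiency: 1 / (1 + 6.4 * a), a = Tp / Tfr."""
    a = tp / tfr
    return 1 / (1 + 6.4 * a)

# Propagation time one-tenth of the frame time -> a = 0.1
print(round(csma_cd_efficiency(1, 10), 2))  # 0.61
```

As a grows (longer cables or shorter frames), efficiency drops, which is why minimum frame size and maximum distance are linked in CSMA/CD networks.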
CSMA/CA
■ Carrier Sense Multiple Access with Collision Avoidance.
■ On Wireless Networks
■ Strategies:
■ 1. Interframe Spacing (IFS)
■ 2. Contention Window (binary exponential back-off algorithm)
■ 3. Acknowledgment
Timing in CSMA/CA
INTERFRAME SPACE
■ First, collisions are avoided by deferring transmission even if the channel
is found idle.
■ When an idle channel is found, the station does not send immediately.
■ It waits for a period of time called the interframe space or IFS. Even
though the channel may appear idle when it is sensed, a distant station may
have already started transmitting. The distant station's signal has not yet
reached this station.
■ The IFS time allows the front of the transmitted signal by the distant
station to reach this station.
■ If after the IFS time, the channel is still idle, the station can send, but it
still needs to wait a time equal to the contention time (described next).
■ The IFS variable can also be used to prioritize stations or frame types.
■ For example, a station that is assigned a shorter IFS has a higher priority
Contention Window
■ The contention window is an amount of time divided into slots. A
station that is ready to send chooses a random number of slots as its wait
time.
■ The number of slots in the window changes according to the binary
exponential back-off strategy.
■ This means that it is set to one slot the first time and then doubles each
time the station cannot detect an idle channel after the IFS time.
■ This is very similar to the p-persistent method except that a random
outcome defines the number of slots taken by the waiting station.
■ One interesting point about the contention window is that the station
needs to sense the channel after each time slot.
■ However, if the station finds the channel busy, it does not restart the
process; it just stops the timer and restarts it when the channel is sensed as
idle.
■ This gives priority to the station with the longest waiting time.
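The back-off doubling described above can be sketched as follows (illustrative; the cap value and the names are my assumptions):

```python
import random

def contention_slots(failed_attempts, max_exponent=10, rng=random):
    """Binary exponential back-off for the contention window (sketch).

    The window starts at one slot and doubles after each failure, so a
    station that has failed k times picks a random wait uniformly from
    0 .. 2**k - 1 slots (capped at max_exponent doublings).
    """
    k = min(failed_attempts, max_exponent)
    return rng.randrange(2 ** k)

print(contention_slots(0))  # 0 -- the very first window has a single slot
```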
Note

In CSMA/CA, the IFS can also be used to
define the priority of a station or a frame.
Note

In CSMA/CA, if the station finds the
channel busy, it does not restart the
timer of the contention window;
it stops the timer and restarts it when
the channel becomes idle.
Flow diagram for CSMA/CA
NAV – DIFS – SIFS – PIFS – EIFS – CTS - RTS
A network allocation vector (NAV) shows how much time must pass before
these stations are allowed to check the channel for idleness.
The exchange of data and control frames in time
1. Before sending a frame, the source station senses the medium by checking the
energy level at the carrier frequency. The channel uses a persistence strategy
with back-off until the channel is idle.
2. After the channel is found to be idle, the station waits for a period of time
called the DCF interframe space (DIFS); then the station sends a control frame
called the request to send (RTS).
3. After receiving the RTS and waiting a period of time called the short interframe
space (SIFS), the destination station sends a control frame, called the clear to send
(CTS), to the source station. This control frame indicates that the destination station is
ready to receive data.
4. The source station sends data after waiting an amount of time equal to SIFS.
5. The destination station, after waiting an amount of time equal to SIFS, sends an
acknowledgment to show that the frame has been received. An acknowledgment is needed
in this protocol because the station does not have any means to check for the successful
arrival of its data at the destination. In contrast, the lack of collision in
CSMA/CD is a kind of indication to the source that data have arrived.
CONTROLLED ACCESS

In controlled access, the stations consult one another
to find which station has the right to send. A station
cannot send unless it has been authorized by other
stations. We discuss three popular controlled-access
methods.
Reservation
Polling
Token Passing
Reservation
■ In the reservation method, a station needs to make a reservation before
sending data. Time is divided into intervals.
■ In each interval, a reservation frame precedes the data frames sent in that
interval.
■ If there are N stations in the system, there are exactly N reservation mini-slots
in the reservation frame.
■ Each mini slot belongs to a station. When a station needs to send a data
frame, it makes a reservation in its own mini slot.
■ The stations that have made reservations can send their data frames after the
reservation frame.
■ Figure in the next slide shows a situation with five stations and a five-mini
slot reservation frame.
■ In the first interval, only stations 1, 3, and 4 have made reservations.
■ In the second interval, only station 1 has made a reservation.
Reservation access method
Polling
■ Polling works with topologies in which one device is designated as a
primary station and the other devices are secondary stations.
■ All data exchanges must be made through the primary device even when
the ultimate destination is a secondary device.
■ The primary device controls the link; the secondary devices follow its
instructions. It is up to the primary device to determine which device is
allowed to use the channel at a given time.
■ The primary device, therefore, is always the initiator of a session. This
method uses poll and select functions to prevent collisions.
■ However, the drawback is if the primary station fails, the system goes
down.
Select and poll functions in polling access method
Select
■ The select function is used whenever the primary device has something to
send.
■ Remember that the primary controls the link.
■ If the primary is neither sending nor receiving data, it knows the link is
available.
■ If it has something to send, the primary device sends it. What it does
not know, however, is whether the target device is prepared to receive.
■ So the primary must alert the secondary to the upcoming transmission
and wait for an acknowledgment of the secondary’s ready status.
■ Before sending data, the primary creates and transmits a select (SEL)
frame, one field of which includes the address of the intended secondary.
Poll
■ The poll function is used by the primary device to solicit transmissions
from the secondary devices.
■ When the primary is ready to receive data, it must ask (poll) each device
in turn if it has anything to send.
■ When the first secondary is approached, it responds either with a NAK
frame if it has nothing to send or with data (in the form of a data frame) if
it does.
■ If the response is negative (a NAK frame), then the primary polls the
next secondary in the same manner until it finds one with data to send.
■ When the response is positive (a data frame), the primary reads the
frame and returns an acknowledgment (ACK frame), verifying its receipt.
Token Passing

■ In the token-passing method, the stations in a network are organized in
a logical ring.
■ In other words, for each station, there is a predecessor and a successor.
■ The predecessor is the station which is logically before the station in the
ring; the successor is the station which is after the station in the ring.
■ The current station is the one that is accessing the channel now.
■ The right to this access has been passed from the predecessor to the current
station.
■ The right will be passed to the successor when the current station has no
more data to send.
■ A special packet called a token circulates through the ring.
■ The possession of the token gives the station the right to access the
channel and send its data.
■ When a station has some data to send, it waits until it receives the token
from its predecessor. It then holds the token and sends its data. When the
station has no more data to send, it releases the token, passing it to the next
logical station in the ring.
■ The station cannot send data until it receives the token again in the next
round. In this process, when a station receives the token and has no data to
send, it just passes the token to the next station.
■ Token management is needed for this access method.
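The token-rotation procedure above can be simulated in a few lines (an illustrative sketch; the names are mine, and token-holding timers are ignored):

```python
def token_ring_round(stations):
    """One full token rotation over a logical ring.

    stations: ordered mapping of station name -> queue of frames.
    The token visits stations in ring order; the holder sends all its
    queued frames, then releases the token to its successor.
    """
    sent = []
    for name in stations:            # token passed predecessor -> successor
        while stations[name]:        # holder transmits while it has data
            sent.append((name, stations[name].pop(0)))
        # queue empty: token released to the next station in the ring
    return sent

order = token_ring_round({"A": ["f1"], "B": [], "C": ["f2", "f3"]})
print(order)  # [('A', 'f1'), ('C', 'f2'), ('C', 'f3')]
```

Station B holds the token only long enough to pass it on, since it has nothing to send.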
Logical ring and physical topology in token-passing access method
CHANNELIZATION

Channelization is a multiple-access method in which
the available bandwidth of a link is shared in time,
frequency, or through code, between different stations.
In this section, we discuss three channelization
protocols.

Frequency-Division Multiple Access (FDMA)
Time-Division Multiple Access (TDMA)
Code-Division Multiple Access (CDMA)
Frequency-division multiple access
(FDMA)
■ In frequency-division multiple access (FDMA), the available bandwidth is
divided into frequency bands. Each station is allocated a band to send its data. In
other words, each band is reserved for a specific station, and it belongs to the
station all the time.
■ Each station also uses a bandpass filter to confine the transmitter frequencies. To
prevent station interferences, the allocated bands are separated from one another by
small guard bands.
■ FDMA specifies a predetermined frequency band for the entire period of
communication. This means that stream data (a continuous flow of data that may
not be packetized) can easily be used with FDMA.
■ FDMA is an access method in the data-link layer. The data-link layer in each station
tells its physical layer to make a bandpass signal from the data passed to it.
Frequency-division multiple access (FDMA)
Note

In FDMA, the available bandwidth
of the common channel is divided into
bands that are separated by guard
bands.
Time-division multiple access (TDMA)
■ In time-division multiple access (TDMA), the stations share the bandwidth of the
channel in time. Each station is allocated a time slot during which it can send data.
Each station transmits its data in its assigned time slot.
■ The main problem with TDMA lies in achieving synchronization between the
different stations. Each station needs to know the beginning of its slot and the
location of its slot.
■ This may be difficult because of propagation delays introduced in the system if the
stations are spread over a large area. To compensate for the delays, we can insert
guard times. Synchronization is normally accomplished by having some
synchronization bits (normally referred to as preamble bits) at the beginning of
each slot.
Time-division multiple access (TDMA)
Note

In TDMA, the bandwidth is just one
channel that is timeshared between
different stations.
Code-division multiple access (CDMA)
■ Code-division multiple access (CDMA) was conceived several decades ago.
Recent advances in electronic technology have finally made its implementation
possible. CDMA differs from FDMA in that only one channel occupies the entire
bandwidth of the link. It differs from TDMA in that all stations can send data
simultaneously; there is no timesharing.
■ CDMA simply means communication with different codes. For example, in a large
room with many people, two people can talk privately in English if nobody else
understands English. Another two people can talk in Chinese if they are the only
ones who understand Chinese, and so on. In other words, the common channel, the
space of the room in this case, can easily allow communication between several
couples, but in different languages (codes).
Note

In CDMA, one channel carries all
transmissions simultaneously.
■ Let us assume we have four stations, 1, 2, 3, and 4, connected to the same channel.
The data from station 1 are d1, from station 2 are d2, and so on. The code assigned
to the first station is c1, to the second is c2, and so on. We assume that the assigned
codes have two properties.
1. If we multiply each code by another, we get 0.
2. If we multiply each code by itself, we get 4 (the number of stations).

Station 1 multiplies (a special kind of multiplication, as we will see) its data by its code
to get d1 . c1. Station 2 multiplies its data by its code to get d2 . c2, and so on.

The data that go on the channel are the sum of all these terms, as shown in the box. Any
station that wants to receive data from one of the other three multiplies the data on the
channel by the code of the sender. For example, suppose stations 1 and 2 are talking to
each other. Station 2 wants to hear what station 1 is saying. It multiplies the data on the
channel by c1, the code of station 1.
Because (c1 . c1) is 4, but (c2 . c1), (c3 . c1), and (c4 . c1) are all 0s, station 2 divides
the result by 4 to get the data from station 1.
■ data = (d1 . c1 + d2 . c2 + d3 . c3 + d4 . c4) . c1
■ = d1 . (c1 . c1) + d2 . (c2 . c1) + d3 . (c3 . c1) + d4 . (c4 . c1) = 4 . d1
Simple idea of communication with code
Chip sequences
1. Each sequence is made of N elements, where N is the number of stations.
2. If we multiply a sequence by a number, every element in the sequence is multiplied
by that number. This is called multiplication of a sequence by a scalar. For example,
2 • [+1 +1 -1 -1] = [+2 +2 -2 -2]
3.If we multiply two equal sequences, element by element, and add the results, we get
N, where N is the number of elements in each sequence. This is called the inner
product of two equal sequences. For example,
[+1 +1 - 1 -1] • [+1 +1 - 1 -1] = 1 + 1 + 1 + 1 = 4
4.If we multiply two different sequences, element by element, and add the results, we
get 0. This is called the inner product of two different sequences. For example,
[+1 +1 - 1 -1] • [+1 +1 +1 +1] = 1 + 1 - 1 - 1 = 0
5.Adding two sequences means adding the corresponding elements. The result is
another sequence. For example,
[+1 +1 - 1 -1] + [+1 +1 +1 +1] = [+2 +2 0 0]
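The five rules can be demonstrated end to end with four chip sequences of the kind used in the figures (a sketch; the Walsh-style sequences and the example data bits are my own choices):

```python
# Chip sequences for four stations (N = 4).
c = [
    [+1, +1, +1, +1],   # c1
    [+1, -1, +1, -1],   # c2
    [+1, +1, -1, -1],   # c3
    [+1, -1, -1, +1],   # c4
]

def inner(x, y):
    """Inner product of two sequences (rules 3 and 4 above)."""
    return sum(a * b for a, b in zip(x, y))

assert inner(c[0], c[0]) == 4   # equal sequences give N
assert inner(c[0], c[1]) == 0   # different sequences give 0

# Example data bits d1..d4: +1 and -1 values chosen for illustration.
d = [+1, -1, -1, +1]

# The channel carries the element-wise sum d1.c1 + d2.c2 + d3.c3 + d4.c4.
channel = [sum(d[i] * c[i][j] for i in range(4)) for j in range(4)]

# To recover station 1's bit: multiply by c1, then divide by N = 4.
d1 = inner(channel, c[0]) // 4
print(d1)  # 1
```

Decoding with c2 instead would recover d2 = −1, because every cross term vanishes.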
Data representation in CDMA
Sharing channel in CDMA
Digital signal created by four stations in CDMA
Decoding of the composite signal for one in CDMA
Link-Layer Switches

Hosts or LANs do not normally operate in isolation.
They are connected to one another or to the Internet.
To connect hosts or LANs, we use connecting devices.
Connecting devices can operate in different layers of
the Internet model.

Link-Layer Switches
Bridges
Learning Bridge
Spanning tree algorithms
Link-Layer Switches
■ A link-layer switch (or switch) operates in both the physical and the data-
link layers. As a physical-layer device, it regenerates the signal it receives.
As a link- layer device, the link-layer switch can check the MAC
addresses (source and destination) contained in the frame.
■ Filtering: One may ask what the difference in functionality is between a
link-layer switch and a hub. A link-layer switch has filtering capability. It
can check the destination address of a frame and can decide from which
outgoing port the frame should be sent.

■ A link-layer switch has a table used in filtering decisions.
■ we have a LAN with four stations that are connected to a link-layer switch.
■ If a frame destined for station 71:2B:13:45:61:42 arrives at port 1, the
link-layer switch consults its table to find the departing port.
■ According to its table, frames for 71:2B:13:45:61:42 should be sent out
only through port 2;
■ therefore, there is no need for forwarding the frame through other ports.
Transparent Switches
■ A transparent switch is a switch in which the stations are completely
unaware of the switch’s existence. If a switch is added or deleted from the
system, reconfiguration of the stations is unnecessary. According to the
IEEE 802.1d specification, a system equipped with transparent switches
must meet three criteria:
❑ Frames must be forwarded from one station to another.
❑ The forwarding table is automatically made by learning frame
movements in the network.
❑ Loops in the system must be prevented.
■ Forwarding
A transparent switch must correctly forward the frames
■ Learning
The earliest switches had switching tables that were static. The
system administrator would manually enter each table entry during
switch setup. Although the process was simple, it was not practical.
If a station was added or deleted, the table had to be modified
manually. The same was true if a station’s MAC address changed,
which is not a rare event. For example, putting in a new network card
means a new MAC address.

■ A better solution to the static table is a dynamic table that maps addresses
to ports (interfaces) automatically. To make a table dynamic, we need a
switch that gradually learns from the frames’ movements.
■ To do this, the switch inspects both the destination and the source
addresses in each frame that passes through the switch. The destination
address is used for the forwarding decision (table lookup); the source
address is used for adding entries to the table and for updating purposes.
■ When station A sends a frame to station D, the switch does not have
an entry for either D or A. The frame goes out from all three ports;
the frame floods the network. However, by looking at the source
address, the switch learns that station A must be connected to port 1.
This means that frames destined for A, in the future, must be sent
out through port 1. The switch adds this entry to its table. The
table has its first entry now.
■ When station D sends a frame to station B, the switch has no entry
for B, so it floods the network again. However, it adds one more
entry to the table related to station D.
■ The learning process continues until the table has information about
every port. However, note that the learning process may take a long
time. For example, if a station does not send out a frame (a rare
situation), the station will never have an entry in the table.
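The learning procedure described above can be simulated directly (an illustrative sketch; the names are mine):

```python
def learning_switch(frames):
    """Simulate the learning process of a transparent switch.

    frames: list of (src_mac, dst_mac, in_port) in arrival order.
    Returns the learned table and, per frame, either the out-port used
    or 'flood' when the destination is still unknown.
    """
    table = {}
    actions = []
    for src, dst, in_port in frames:
        table[src] = in_port            # learn: source address -> arrival port
        if dst in table:
            actions.append(table[dst])  # forward out of the known port only
        else:
            actions.append("flood")     # unknown destination: flood all ports
    return table, actions

# A -> D floods (D unknown), D -> B floods, then B -> A goes straight to port 1.
table, actions = learning_switch([("A", "D", 1), ("D", "B", 4), ("B", "A", 2)])
print(actions)  # ['flood', 'flood', 1]
```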
Loop Problem
■ Transparent switches work fine as long as there are no redundant
switches in the system. Systems administrators, however, like to
have redundant switches (more than one switch between a pair of
LANs) to make the system more reliable. If a switch fails, another
switch takes over until the failed one is repaired or replaced.
■ Redundancy can create loops in the system, which is very
undesirable. Loops can be created only when two or more
broadcasting LANs (those using hubs, for example) are connected
by more than one switch.
■ Station A sends a frame to
station D. The tables of both
switches are empty.
■ Both forward the frame and
update their tables based on
the source address A.
■ Now there are two copies of the
frame on LAN 2.
■ The copy sent out by the left
switch is received by the right
switch, which does not have any
information about the destination
address D;
■ it forwards the frame. The copy
sent out by the right switch is
received by the left switch and is
sent out for lack of information
about D.
■ Note that each frame is handled
separately because switches, as
two nodes on a broadcast network
sharing the medium, use an access
method such as CSMA/CD.
■ The tables of both switches are
updated, but still there is no
information for destination D.
■ Now there are two copies of
the frame on LAN 1.
■ Step 2 is repeated, and both
copies are sent to LAN2.
■ The process continues on and
on. Note that switches are also
repeaters and regenerate
frames.
■ So in each iteration, there are
newly generated fresh copies
of the frames.
Spanning Tree Algorithm
■ To solve the looping problem, the IEEE specification requires that switches
use the spanning tree algorithm to create a loopless topology.
■ In graph theory, a spanning tree is a graph in which there is no loop.
■ In a switched LAN, this means creating a topology in which each LAN can
be reached from any other LAN through one path only (no loop).
■ We cannot change the physical topology of the system because of physical
connections between cables and switches, but we can create a logical
topology that overlays the physical one. Figure in next slide shows a
system with four LANs and five switches.
■ The figure shown the physical system and its representation in graph theory.
■ The connecting arcs show the connection of a LAN to a switch and vice
versa.
■ To find the spanning tree, we need to assign a cost (metric) to each arc.
■ The hop count is normally 1 from a switch to the LAN and 0 in the
reverse direction.
The process for finding the spanning tree
involves three steps:
■ Every switch has a built-in ID (normally the serial number, which is
unique). Each switch broadcasts this ID so that all switches know
which one has the smallest ID. The switch with the smallest ID is
selected as the root switch (root of the tree). We assume that switch
S1 has the smallest ID. It is, therefore, selected as the root switch.
■ The algorithm tries to find the shortest path (a path with the shortest
cost) from the root switch to every other switch or LAN. The
shortest path can be found by examining the total cost from the root
switch to the destination.
■ The combination of the shortest paths creates
the shortest tree.
Ports used in STP
❑ Root port: The root port is a port that has the lowest cost path to the root
bridge.
❑ Designated port: The designated port is a port that forwards the traffic
away from the root bridge.
❑ Blocking port: The blocking port is a port that receives the frames, but it
neither forwards nor sends the frames. It simply drops the received
frames.
❑ Backup port: The backup port is a port that provides the backup path in
a spanning tree if a designated port fails. This port gets active
immediately when the designated port fails.
❑ Alternate port: The alternate port is a port that provides an alternate path
to the root bridge if the root port fails.
❑ Forwarding: A port in normal operation receiving and forwarding frames.
The port monitors incoming BPDUs that would indicate it should return
to the blocking state to prevent a loop.
■ Based on the spanning tree, we mark the ports that are part of
it, the forwarding ports, which forward a frame that the
switch receives. We also mark those ports that are not part of
the spanning tree, the blocking ports, which block the frames
received by the switch.
■ Note that there is only one path from any LAN to any other LAN in the spanning
tree system.
■ This means there is only one path from one LAN to any other LAN.
■ No loops are created.
■ Before STP decides which path is the best to the Root Bridge, it needs to first decide which
switch has to be elected as the Root Bridge, which is where the Bridge ID comes into play.
■ Every switch has an identity when they are part of a network. This identity is called the Bridge
ID or BID.
■ It is an 8 byte field which is divided into two parts. The first part is a 2-byte Bridge Priority field
(which can be configured) while the second part is the 6-byte MAC address of the switch.
■ While the Bridge Priority is configurable, the MAC address is unique among all switches,
and the combination of the two ensures a unique Bridge ID.
THE ROOT BRIDGE ELECTION PROCESS

■ The election process uses several STP messages sent between switches which help each switch
decide who the Root Bridge is. These messages are called Hello BPDUs, where BPDU stands
for Bridge Protocol Data Unit. It is important to understand the information these BPDUs carry,
as it helps in understanding the election process itself.
■ Each BPDU carries several fields. The following list defines each field:
❑ Root Bridge ID (Root BID): BID of the switch that the sender of this BPDU
believes to be the root switch.
❑ Sender’s Bridge ID: BID of the switch sending this Hello BPDU.
❑ Cost to the Root Bridge: the STP cost between this switch and the current root.
❑ Timer values on Root Bridge: Hello Timer, Max Age Timer, Forward Delay Timer.
■ Now, the election process itself is very simple. The switch with the
lowest BID becomes the Root Bridge.
■ Since the BID starts with the Bridge Priority field, essentially, the switch with the
lowest Bridge Priority field becomes the Root Bridge.
■ If there is a tie between two switches having the same priority value, then the switch
with the lowest MAC address becomes the Root Bridge.
■ The STP Root Bridge election process starts with each switch advertising
themselves as the Root Bridge and constructing the Hello BPDU accordingly.
■ So each switch lists its own BID as the Root BID. The Sender Bridge ID is of course
the same as the Root BID, as it is again its own BID.
■ Within the BPDU, the Cost field is listed with a value of 0, because the cost
from the switch to itself is zero.
■ The switches send out the Hello BPDU constructed as above, onto the network.
They will keep on maintaining their status as Root Bridge by default, until they
receive a Hello BPDU which carries a lower BID.
■ This Hello BPDU then becomes a superior BPDU.
■ Now the switch receiving this superior BPDU makes changes to the Hello BPDU it
has been sending out.
■ It changes the value of the Root BID to reflect the Root BID from the superior Hello
BPDU.
■ This process continues till every switch agrees on which switch has the lower BID,
and hence deserves to be the Root Bridge.
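The election rule, lowest (priority, MAC) pair wins, can be sketched as follows (the function name and example BIDs are my own):

```python
def elect_root(bridges):
    """Root bridge election sketch: the lowest BID wins.

    bridges: dict switch_name -> (priority, mac).  Tuples compare the
    2-byte priority first; the 6-byte MAC breaks ties.
    """
    return min(bridges, key=lambda name: bridges[name])

bridges = {
    "S1": (32768, "00:00:0c:11:11:11"),
    "S2": (32768, "00:00:0c:00:00:01"),  # same priority, lower MAC -> wins
}
print(elect_root(bridges))  # S2
```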
■ Suppose there are four switches A, B, C, and D on a local area network. There are redundant
links that exist among these interconnected devices. In the above figure, there are two paths
that exist, i.e., DBA and DCA.

■ Link redundancy is good for network availability, but it creates layer 2 loops. The question
arises "how network blocks the unwanted links to avoid the loops without destroying the link
redundancy?".

■ The answer to this question is STP. First, STP chooses one switch as a root bridge.

■ In the above case, A switch is chosen as a root bridge. Next, other switches select the path to
the root bridge, having the least path cost.

■ Now we look at the switch B. For switch B, there are two paths that exist to reach switch A
(root bridge), i.e., BDCA and BA.

■ The path BDCA costs 7 while the path BA costs 2. Therefore, path BA is chosen to reach the
root bridge.

■ The port at switch B is selected as a root port, and the other end is a designated port.
■ Now we look at the switch C. From switch C, there are two paths that exist, i.e., CDBA and
CA. The least-cost path is CA, as it costs 1. Thus, it is selected as a root port, and the other
end is selected as a designated port.

■ Now we look at the switch D. For switch D, there are two paths that exist to reach switch A,
i.e., DBA and DCA. The path DBA costs 4 while the DCA costs 5.

■ Therefore, path DBA is chosen as it has the least cost path. The port on D is selected as a root
port, and on the other end, switch B is selected as a designated port. In this example, we
have observed that the root bridge can contain many designated ports, but it does not
contain a root port.
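The root-port choice in this example reduces to picking the least-cost candidate path at each switch (a sketch; the path labels and costs restate the example above):

```python
def choose_root_path(candidates):
    """Pick the least-cost path to the root bridge; the port toward the
    chosen path becomes the switch's root port."""
    return min(candidates, key=candidates.get)

print(choose_root_path({"B-D-C-A": 7, "B-A": 2}))  # B-A    (switch B)
print(choose_root_path({"D-B-A": 4, "D-C-A": 5}))  # D-B-A  (switch D)
```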
