Unit 2 Part A
Error Detection and Correction
Data link layer
The data link layer (Layer 2) of the OSI model actually consists
of two sublayers:
1. Media Access Control (MAC) sublayer
2. Logical Link Control (LLC) sublayer.
INTRODUCTION
Types of Transmission Errors
Single bit error
Burst error
Figure 10.2 Burst error of length 8
Detection versus Correction
Coding
Redundancy is achieved through various coding schemes.
We can divide coding schemes into two broad categories:
block coding and convolution coding.
We will concentrate on block coding only.
To perform coding, we need encoding and decoding.
The receiver can detect an error (a change in the original
codeword) if these two conditions hold:
1. The receiver has (or can find) a list of valid codewords.
2. The original codeword has changed to an invalid one.
Figure 10.3 The structure of encoder and decoder
Figure 10.4 XORing of two single bits or two words
Detection Methods
VRC (Vertical Redundancy Check)
LRC (Longitudinal Redundancy Check)
CRC (Cyclic Redundancy Check)
Checksum
VRC (Vertical Redundancy Check)
A parity bit is added to every data unit so that the
total number of 1s (including the parity bit) becomes
even for an even-parity check or odd for an odd-parity check.
VRC can detect all single-bit errors.
It can detect multiple-bit or burst errors only if the
total number of bits in error is odd.
Figure: even-parity VRC concept. The sender computes the parity bit over the
data word 1010101 (four 1s, so the parity bit is 0) and transmits 10101010
over the transmission medium. The receiver recomputes the parity; if the
count of 1s is even, the data are accepted, otherwise they are rejected.
Performance
• Detects all single-bit errors.
• Detects a burst error only if the number of bits in error is odd.
• Examples (even parity):
• 11100001 received as 10100001 (three 1s, odd): correctly rejected.
• 11100001 received as 10100101 (four 1s, even): erroneously accepted, because an even number of bits changed.
• Question: compute the even-parity bit for each of the following:
• 1110110
• 1101111
• 1110010
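The parity computation above can be sketched in Python (illustrative function names; even parity by default):

```python
def vrc_append(data: str, even: bool = True) -> str:
    """Append a parity bit so the total number of 1s is even (or odd)."""
    ones = data.count("1")
    parity = ones % 2 if even else (ones + 1) % 2
    return data + str(parity)

def vrc_check(codeword: str, even: bool = True) -> bool:
    """Accept only if the count of 1s (including parity) has the right parity."""
    ones = codeword.count("1")
    return ones % 2 == (0 if even else 1)
```

For the data word 1010101, `vrc_append` yields 10101010; flipping a single bit of the codeword makes `vrc_check` fail, while flipping two bits goes undetected, matching the performance notes above.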
LRC (Longitudinal Redundancy Check)
Organize the data into a table (rows and columns) and create a parity bit for each
column. The parity bits of all columns are assembled into a new
data unit, which is appended to the end of the data block.
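A minimal sketch of LRC, assuming the data block is given as equal-length binary strings (one per row); the sample rows below are illustrative:

```python
def lrc(rows):
    """Even parity computed down each column of equal-length binary rows."""
    n = len(rows[0])
    return "".join(str(sum(int(r[i]) for r in rows) % 2) for i in range(n))

# Four 8-bit data units; the LRC byte is appended to the block.
block = ["11100111", "11011101", "00111001", "10101001"]
parity_row = lrc(block)   # column-wise parity of the block
```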
VRC & LRC
Cyclic Redundancy Check:
The CRC algorithm uses a polynomial division approach to
generate the CRC code. The data is treated as a sequence of
bits and divided by a predefined polynomial of a fixed degree
using modulo-2 arithmetic. The remainder obtained from this
division is the CRC code that is appended to the data. The
choice of the polynomial used in the CRC algorithm depends
on the specific application and is typically standardized.
CRC is a reliable and efficient way of detecting errors in
digital communication systems. It is widely used in
communication protocols such as Ethernet, Wi-Fi, Bluetooth,
and many others. CRC is also used in storage systems such as
hard disk drives and optical disks to detect and correct errors
that may occur during data transfer.
CRC generation method
1. Choose a CRC polynomial: A polynomial is a mathematical expression consisting of
one or more terms. The CRC polynomial (divisor) determines the size of the CRC
code and the error-detection capabilities of the CRC algorithm.
2. Choose an initial value: The initial value is a starting point for the CRC calculation.
3. Append padding: The input data is padded with zeros to match the degree of the
CRC polynomial.
4. Divide the padded input data by the CRC polynomial: The division is performed
using binary arithmetic, with no carry or borrow.
5. XOR the remainder with the initial value: The XOR operation is performed on the
remainder of the division and the initial value to generate the final CRC.
6. Transmit or store the data and the CRC: The data and the CRC code are sent or
stored together. The receiver can use the same CRC polynomial and initial value to
generate the expected CRC code and compare it to the received checksum.
The CRC has one bit less than the divisor: if the CRC is n bits, the
divisor is n + 1 bits.
The sender appends this CRC to the end of the data unit so that the resulting data
unit becomes exactly divisible by the predetermined divisor, i.e. the remainder becomes
zero.
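The division steps above can be sketched with modulo-2 arithmetic in Python (illustrative helper; data and divisor given as bit strings):

```python
def crc_remainder(data: str, divisor: str) -> str:
    """Append len(divisor)-1 zeros, divide modulo-2, return the remainder."""
    pad = len(divisor) - 1
    bits = list(data + "0" * pad)          # step 3: padding
    for i in range(len(data)):             # step 4: XOR-based long division
        if bits[i] == "1":
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return "".join(bits[-pad:])            # remainder = the CRC

# Sender appends the CRC so the codeword is exactly divisible by the divisor.
def crc_codeword(data: str, divisor: str) -> str:
    return data + crc_remainder(data, divisor)
```

For example, with the generator x^4 + x + 1 (divisor 10011) and data 1101011011, the remainder is 1110, so the transmitted frame is 11010110111110; the receiver divides the whole codeword and checks for a zero remainder.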
CRC Generator
The divisor in a cyclic code is normally called the
generator polynomial or simply the generator.
The CRC generator uses modulo-2 division.
Binary Division
in a CRC Generator
Binary Division
in a
CRC Checker
One more example
Calculation of
the polynomial
code checksum.
A polynomial to represent a binary word
Practice Questions
1. Find the CRC for 1110010101 with the divisor x^3 + x^2 + 1.
2. A bit stream 1101011011 is transmitted using the standard
CRC method. The generator polynomial is x^4 + x + 1. What is the
actual bit string transmitted?
3. A bit stream 10011101 is transmitted using the standard CRC
method. The generator polynomial is x^3 + 1. What is the actual
bit string transmitted? Suppose the third bit from the left is
inverted during transmission. How will the receiver detect this
error?
4. If the frame is 1101011011 and the generator is x^4 + x + 1, what
would be the transmitted frame?
5. What is the remainder obtained by dividing x^7 + x^5 + 1 by the
generator polynomial x^3 + 1?
Find the CRC for 1110010101 with the divisor x^3 + x^2 + 1
Checksum:
⦿ Checksum is used by higher-layer protocols
⦿ and is based on the concept of redundancy (like VRC, LRC,
CRC, and Hamming code)
⦿ To create the checksum the sender does the following:
– The data unit is divided into k sections, each of n bits.
– Section 1 and 2 are added together using one’s complement.
– Section 3 is added to the result of the previous step.
– Section 4 is added to the result of the previous step.
– The process repeats until section k is added to the result of
the previous step.
– Add the carry to the sum if any.
– The final result is complemented to make the checksum.
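The sender-side steps can be sketched in Python (illustrative helper; the sections are n-bit binary strings):

```python
def ones_complement_checksum(sections) -> str:
    """Add n-bit sections with end-around carry, then complement the sum."""
    n = len(sections[0])
    total = 0
    for s in sections:
        total += int(s, 2)
        while total >> n:                          # wrap any carry around
            total = (total & ((1 << n) - 1)) + (total >> n)
    return format((~total) & ((1 << n) - 1), f"0{n}b")
```

The receiver runs the same routine over the data sections plus the received checksum; an all-zero result means the pattern is accepted.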
Checksum Example
Example
Suppose the following block of 16 bits is to be sent using a
checksum of 8 bits.
10101001 00111001
The numbers are added using one's complement:
  10101001
+ 00111001
Sum       11100010
Checksum  00011101
The pattern sent is 10101001 00111001 00011101
Example
Now suppose the receiver receives the pattern sent in Example
and there is no error.
10101001 00111001 00011101
When the receiver adds the three sections, it will get all 1s,
which, after complementing, is all 0s and shows that there is
no error.
10101001
00111001
00011101
Sum 11111111
Complement 00000000 means that the pattern is
OK.
Example
Now suppose there is a burst error of length 5 that affects 4 bits.
10101111 11111001 00011101
  10101111
  11111001
  00011101
Partial Sum  1 11000101
Carry             1
Sum          11000110
Complement   00111001: the pattern is corrupted.
Exercise
1. The data unit to be transmitted is
10011001111000100010010010000100. An 8-bit checksum is used; find the checksum.
2. The checksum of the 16-bit segments 1001001110010011 and 1001100001001101
is –
a. 1010101000011111
b. 1011111000100101
c. 1101010000011110
d. 1101010000111111
Error Correction
Forward error correction is the process in which the receiver tries to guess
the message by using redundant bits. This is possible, as we see later, if the
number of errors is small.
If the total number of bits in a transmittable unit is m + r,
then r must be able to indicate at least m + r + 1 different
states:

2^r >= m + r + 1

Example: for m = 7 (ASCII), the smallest value of r that
satisfies this inequality is 4:

2^4 = 16 >= 7 + 4 + 1 = 12
Relationship between data and redundancy bits

Number of data bits (m)   Number of redundancy bits (r)   Total bits (m + r)
        1                             2                           3
        2                             3                           5
        3                             3                           6
        4                             3                           7
        5                             4                           9
        6                             4                          10
        7                             4                          11
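The inequality 2^r >= m + r + 1 can be checked programmatically; this sketch (hypothetical helper name) reproduces the table above:

```python
def redundancy_bits(m: int) -> int:
    """Smallest r satisfying 2**r >= m + r + 1."""
    r = 1
    while 2 ** r < m + r + 1:
        r += 1
    return r

# m = 1..7 gives r = 2, 3, 3, 3, 4, 4, 4, matching the table.
```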
Error Correction
Hamming Code: developed by R. W. Hamming
The key to the Hamming Code is the use of extra parity bits to allow the identification of a single
error. Create the code word as follows:
1) Mark all bit positions that are powers of two as parity bits. (positions 1, 2, 4, 8, 16, 32, 64, etc.)
2) All other bit positions are for the data to be encoded. (positions 3, 5, 6, 7, 9, 10, 11, 12, 13,
14, 15, 17, etc.)
3) Each parity bit calculates the parity for some of the bits in the code word. The position of the
parity bit determines the sequence of bits that it alternately checks and skips.
Position 1: check 1 bit, skip 1 bit, check 1 bit, skip 1 bit, etc. (1,3,5,7,9,11,13,15,...)
Position 2: check 2 bits, skip 2 bits, check 2 bits, skip 2 bits, etc. (2,3,6,7,10,11,14,15,...)
Position 4: check 4 bits, skip 4 bits, check 4 bits, skip 4 bits, etc.
(4,5,6,7,12,13,14,15,20,21,22,23,...)
Position 8: check 8 bits, skip 8 bits, check 8 bits, skip 8 bits, etc. (8-15,24-31,40-47,...)
Position 16: check 16 bits, skip 16 bits, check 16 bits, skip 16 bits, etc. (16-31,48-63,80-95,...)
Position 32: check 32 bits, skip 32 bits, check 32 bits, skip 32 bits, etc. (32-63,96-127,160-
191,...) etc.
4) Set a parity bit to 1 if the total number of ones in the positions it checks is odd. Set a parity bit
to 0 if the total number of ones in the positions it checks is even.
Example:
- Positions of redundancy bits in Hamming code for 7 bits ASCII (powers of 2)
- In the Hamming code, each r bit is the parity bit for one combination of
data bits.
r1 = bits 1, 3, 5, 7, 9, 11
r2 = bits 2, 3, 6, 7, 10, 11
r4 = bits 4, 5, 6, 7
r8 = bits 8, 9, 10, 11
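The encoding procedure above can be sketched as follows, using even parity and 1-indexed positions; the function name and the MSB-first placement of data bits are illustrative assumptions:

```python
def hamming_encode(data: str) -> str:
    """Place data bits in non-power-of-2 positions, fill in even-parity bits."""
    m = len(data)
    r = 1
    while 2 ** r < m + r + 1:           # smallest r with 2^r >= m + r + 1
        r += 1
    n = m + r
    code = [0] * (n + 1)                # index 0 unused: positions are 1-based
    it = iter(data)
    for pos in range(1, n + 1):
        if pos & (pos - 1):             # not a power of two: a data position
            code[pos] = int(next(it))
    for i in range(r):
        p = 1 << i                      # parity positions 1, 2, 4, 8, ...
        # parity bit p covers every position whose binary form has bit p set
        code[p] = sum(code[k] for k in range(1, n + 1) if k & p) % 2
    return "".join(map(str, code[1:]))
```

For 4 data bits 1011 this yields the Hamming(7,4) codeword 0110011 (positions p1 p2 d1 p4 d2 d3 d4).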
Error Correction Using Hamming Code
Can be handled in two ways:
• when an error is discovered, the receiver can have the
sender retransmit the entire data unit.
• a receiver can use an error-correcting code, which
automatically corrects certain errors.
Redundant Bit Position
DATA LINK LAYER – Framing
Different terminology used to
define packets at each layer
Data link layer
•node-to-node communication
•second function of the data link layer is media access control, or how to share the link
•Data link control functions include framing, flow and error control, and software
implemented protocols that provide smooth and reliable transmission of frames
between nodes.
Introduction
•To provide service to the network layer, the data link layer must use the service
provided to it by the physical layer.
•What the physical layer does is accept a raw bit stream and attempt to deliver it
to the destination.
•This bit stream is not guaranteed to be error free. The number of bits received
may be less than, equal to, or more than the number of bits transmitted, and
they may have different values. It is up to the data link layer to detect and, if
necessary, correct errors.
Contents
Introduction
Character-oriented protocol
Bit-oriented protocol
FRAMING
Framing in the data link layer separates a message from one source to a destination, or
from other messages to other destinations, by adding a sender address and a
destination address.
•Frames can be of fixed or variable size.
• In fixed-size framing, there is no need for defining the boundaries of the frames;
the size itself can be used as a delimiter.
• In variable-size framing, we need a way to define the end of the frame and the
beginning of the next.
• a character-oriented approach (byte stuffing), and
• a bit-oriented approach (bit stuffing).
Fixed-Size Framing
Frames can be of fixed or variable size. In fixed-size framing, there is no need for
defining the boundaries of the frames; the size itself can be used as a delimiter.
An example of this type of framing is the ATM wide-area network, which uses
frames of fixed size called cells.
Variable-Size Framing
Variable-size framing is prevalent in local-area networks. In variable-size framing,
we need a way to define the end of the frame and the beginning of the next.
A FRAME IN A CHARACTER-ORIENTED PROTOCOL
• Data to be carried are 8-bit characters from a coding system such as ASCII.
• The header, which normally carries the source and destination addresses and other
control information, and
• the trailer, which carries error detection or error correction redundant bits, are also
multiples of 8 bits.
• To separate one frame from the next, an 8-bit (1-byte) flag is added at the beginning
and the end of a frame. The flag, composed of protocol-dependent special characters,
signals the start or end of a frame.
Note
Byte stuffing is the process of adding 1 extra byte whenever there is a flag or escape
character in the text.
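As a sketch, byte stuffing might look like this in Python; the FLAG and ESC values here are arbitrary examples, since the actual values are protocol-dependent:

```python
FLAG, ESC = b"\x7e", b"\x1b"   # illustrative values, protocol-dependent

def byte_stuff(payload: bytes) -> bytes:
    """Frame a payload: prepend an ESC byte before any FLAG or ESC in the
    data, and delimit the frame with FLAG bytes."""
    out = bytearray(FLAG)
    for b in payload:
        if bytes([b]) in (FLAG, ESC):
            out += ESC                 # the extra byte added by stuffing
        out.append(b)
    out += FLAG
    return bytes(out)
```

The receiver reverses the process: it strips the delimiting flags and drops each ESC, keeping the byte that follows it.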
A frame in a bit-oriented protocol
• The data section of a frame is a sequence of bits to be interpreted by the upper layer as
text, graphic, audio, video, and so on.
• we still need a delimiter to separate one frame from the other. Most protocols use a
special 8-bit pattern flag 01111110 as the delimiter to define the beginning and the end
of the frame.
Note
Bit stuffing is the process of adding one extra 0 whenever five consecutive 1s follow
a 0 in the data, so that the receiver does not mistake
the pattern 0111110 for a flag.
Bit stuffing and unstuffing
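Bit stuffing and unstuffing can be sketched as follows (bit strings are used for clarity; a real implementation would work on raw bits):

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit: the receiver will remove it
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            i += 1            # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)
```

Note that after stuffing, the flag pattern 01111110 can never appear inside the data section.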
The following character encoding is used in a data link protocol:
A: 01000111; B: 11100011; FLAG: 01111110; ESC: 11100000
Show the bit sequence transmitted (in binary) for the four-character frame:
A B ESC FLAG when each of the following framing methods are used:
(a) Character count
(b) Flag bytes with byte stuffing.
(c) Starting and ending flag bytes, with bit stuffing.
•ANS:
a) 00000100 01000111 11100011 11100000 01111110
b) 01111110 01000111 11100011 11100000 11100000 11100000
01111110 01111110
c) 01111110 01000111110100011111000000011111010 01111110
The following character encoding is used in a data link protocol:
A: 11010101; B: 10101001; FLAG: 01111110; ESC: 10100011
Show the bit sequence transmitted (in binary) for the five-character frame:
A ESC B ESC FLAG when each of the following framing methods are used:
(a) Flag bytes with byte stuffing.
(b) Starting and ending flag bytes, with bit stuffing.
Ethernet Frame
Format
Basic frame format which is required for all MAC implementation is defined in IEEE 802.3
standard. Though several optional formats are being used to extend the protocol’s basic
capability.
Note – The size of an Ethernet IEEE 802.3 frame varies from 64 bytes to 1518 bytes,
including a data field of 46 to 1500 bytes.
Ethernet (IEEE 802.3) Frame Format
PREAMBLE
• An Ethernet frame starts with a 7-byte preamble.
• This is a pattern of alternating 0s and 1s that indicates the start of the frame
and allows the sender and receiver to establish bit synchronization.
Start of Frame Delimiter (SFD)
•Start of frame delimiter (SFD) – This is a 1-Byte field which is always set to
10101011.
•SFD indicates that upcoming bits are starting of the frame, which is the
destination address.
•Sometimes SFD is considered the part of PRE, this is the reason Preamble is
described as 8 Bytes in many places.
•The SFD warns station or stations that this is the last chance for
synchronization.
Destination and Source Address
Destination Address – This is a 6-byte field which contains the MAC
address of the machine for which the data are destined. The Source Address
is likewise a 6-byte field carrying the sender's MAC address.
Length
• Length – Length is a 2-byte field which indicates
the length of the entire Ethernet frame.
• This 16-bit field could hold values from 0 to 65535,
but the length cannot be larger than 1500 because of
Ethernet's own frame-size limits.
Data –
• Data – This is the place where actual data is inserted, also known
as Payload.
• Both IP header and data will be inserted here if Internet Protocol
is used over Ethernet.
• The maximum data present may be as long as 1500 bytes. If the
data length is less than the minimum length, i.e. 46 bytes, then
padding 0s are added to meet the minimum possible length.
Cyclic Redundancy Check
(CRC) –
• Cyclic Redundancy Check (CRC) – CRC is 4 Byte field.
• This field contains a 32-bits hash code of data, which is
generated over the Destination Address, Source Address,
Length, and Data field.
• If the CRC computed by the destination is not the same as the CRC
value sent, the data received are corrupted.
Multiple Access
If we are sharing the media, wire, or air with other users, we need to have a
protocol to first manage the sharing process and then do the data transfer
Data link layer divided into two functionality-oriented sublayers
Logical link control (LLC) layer: The upper sub layer is responsible for data link
control i.e. for flow and error control.
Media access control (MAC) layer: The lower sub layer is responsible for resolving
access to the shared media.
Taxonomy of multiple-access protocols
RANDOM ACCESS
Pure ALOHA versus slotted ALOHA:
6. Successful transmission – In pure ALOHA, the probability of successful
transmission of a frame is S = G * e^(-2G). In slotted ALOHA, it is
S = G * e^(-G).
7. Throughput – The maximum throughput in pure ALOHA is about 18%; in
slotted ALOHA it is about 37%.
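The two throughput formulas can be evaluated directly; the maxima occur at G = 0.5 for pure ALOHA and G = 1 for slotted ALOHA:

```python
import math

def pure_aloha(G: float) -> float:
    """Throughput of pure ALOHA: S = G * e^(-2G)."""
    return G * math.exp(-2 * G)

def slotted_aloha(G: float) -> float:
    """Throughput of slotted ALOHA: S = G * e^(-G)."""
    return G * math.exp(-G)

# pure_aloha(0.5) is about 0.184 (18.4%), slotted_aloha(1.0) about 0.368 (36.8%).
```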
Carrier Sense Multiple Access (CSMA)
■ To minimize the chance of collision and, therefore, increase the
performance, the CSMA method was developed.
■ The chance of collision can be reduced if a station senses the medium
before trying to use it. Carrier sense multiple access (CSMA) requires that
each station first listen to the medium (or check the state of the medium)
before sending.
■ In other words, CSMA is based on the principle "sense before transmit"
or "listen before the talk."
■ CSMA can reduce the possibility of collision, but it cannot eliminate it.
■ The possibility of collision still exists because of propagation delay; when
a station sends a frame, it still takes time (although very short) for the first
bit to reach every station and for every station to sense it.
■ In other words, a station may sense the medium and find it idle, only
because the first bit sent by another station has not yet been received.
Vulnerable time in CSMA
■ The vulnerable time for CSMA is the propagation time Tp .
This is the time needed for a signal to propagate from one
end of the medium to the other.
■ When a station sends a frame, and any other station tries to
send a frame during this time, a collision will result.
■ But if the first bit of the frame reaches the end of the
medium, every station will already have heard the bit and
will refrain from sending.
Vulnerable time in CSMA
CSMA:
■ TYPES:
■ 1. 1-Persistent CSMA
■ 2. Non-Persistent CSMA
■ 3. p-Persistent CSMA
■ 4. O-Persistent CSMA
■ Modified Protocols
■ CSMA/CD
■ CSMA/CA
1-Persistent: In 1-persistent CSMA, each node first senses the shared
channel; if the channel is idle, it immediately sends the data. Otherwise
it keeps monitoring the state of the channel and transmits the frame as
soon as the channel becomes idle. Because a station transmits with
probability 1 whenever it finds the carrier idle, this is called
1-persistent CSMA.
This method has the highest chance of collision when the propagation
delay is high, because two or more stations may find the line idle and
send their frames immediately.
■ Collision Avoidance:
■ Ensure that collisions do not occur by carefully avoiding them.
The level of energy in a channel can have three values: zero, normal, and abnormal. At the
zero level, the channel is idle. At the normal level, a station has successfully captured
the channel and is sending its frame. At the abnormal level, there is a collision and the
level of the energy is twice the normal level. A station that has a frame to send or is
sending a frame needs to monitor the energy level to determine if the channel is idle,
busy, or in collision mode
Throughput
Algorithm
■ 3. Acknowledgment
Timing in CSMA/CA
INTERFRAME SPACE
■ First, collisions are avoided by deferring transmission even if the channel
is found idle.
■ When an idle channel is found, the station does not send immediately.
■ It waits for a period of time called the interframe space or IFS. Even
though the channel may appear idle when it is sensed, a distant station may
have already started transmitting. The distant station's signal has not yet
reached this station.
■ The IFS time allows the front of the transmitted signal by the distant
station to reach this station.
■ If after the IFS time, the channel is still idle, the station can send, but it
still needs to wait a time equal to the contention time (described next).
■ The IFS variable can also be used to prioritize stations or frame types.
■ For example, a station that is assigned a shorter IFS has a higher priority
Contention Window
■ The contention window is an amount of time divided into slots. A
station that is ready to send chooses a random number of slots as its wait
time.
■ The number of slots in the window changes according to the binary
exponential back-off strategy.
■ This means that it is set to one slot the first time and then doubles each
time the station cannot detect an idle channel after the IFS time.
■ This is very similar to the p-persistent method except that a random
outcome defines the number of slots taken by the waiting station.
■ One interesting point about the contention window is that the station
needs to sense the channel after each time slot.
■ However, if the station finds the channel busy, it does not restart the
process; it just stops the timer and restarts it when the channel is sensed as
idle.
■ This gives priority to the station with the longest waiting time.
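The binary exponential back-off choice of a random slot count can be sketched as follows (the function name and the cap on window growth are illustrative assumptions):

```python
import random

def contention_slots(attempt: int, max_exp: int = 10) -> int:
    """Binary exponential back-off: after each failed attempt the window
    doubles, so the station picks a random slot in 0 .. 2**k - 1."""
    k = min(attempt, max_exp)          # cap how far the window can grow
    return random.randrange(2 ** k)
```

On the first attempt the window is a single slot; after each failure the window doubles, spreading the retrying stations over more slots.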
Note
Station 1 multiplies (a special kind of multiplication, as we will see) its data by its code
to get d1 . c1. Station 2 multiplies its data by its code to get d2 . c2, and so on.
The data that go on the channel are the sum of all these terms, as shown in the box. Any
station that wants to receive data from one of the other three multiplies the data on the
channel by the code of the sender. For example, suppose stations 1 and 2 are talking to
each other. Station 2 wants to hear what station 1 is saying. It multiplies the data on the
channel by c1, the code of station 1.
Because (c1 . c1) is 4, but (c2 . c1), (c3 . c1), and (c4 . c1) are all 0s, station 2 divides
the result by 4 to get the data from station 1.
■ data = (d1 . c1 + d2 . c2 + d3 . c3 + d4 . c4) . c1
■ = d1 . c1 . c1 + d2 . c2 . c1 + d3 . c3 . c1 + d4 . c4 . c1 = 4 d1
Simple idea of communication with code
Chip sequences
1. Each sequence is made of N elements, where N is the number of stations.
2. If we multiply a sequence by a number, every element in the sequence is multiplied
by that number. This is called multiplication of a sequence by a scalar. For example,
2 • [+1 +1 -1 -1] = [+2 +2 -2 -2]
3.If we multiply two equal sequences, element by element, and add the results, we get
N, where N is the number of elements in each sequence. This is called the inner
product of two equal sequences. For example,
[+1 +1 - 1 -1] • [+1 +1 - 1 -1] = 1 + 1 + 1 + 1 = 4
4.If we multiply two different sequences, element by element, and add the results, we
get 0. This is called the inner product of two different sequences. For example,
[+1 +1 - 1 -1] • [+1 +1 +1 +1] = 1 + 1 - 1 - 1 = 0
5.Adding two sequences means adding the corresponding elements. The result is
another sequence. For example,
[+1 +1 - 1 -1] + [+1 +1 +1 +1] = [+2 +2 0 0]
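The properties above can be demonstrated with four Walsh chip sequences; the station data values below are illustrative:

```python
def inner(a, b):
    """Inner product of two chip sequences."""
    return sum(x * y for x, y in zip(a, b))

# Walsh chip sequences for four stations
c = [[+1, +1, +1, +1],
     [+1, -1, +1, -1],
     [+1, +1, -1, -1],
     [+1, -1, -1, +1]]
d = [+1, -1, 0, 0]   # station 1 sends bit 1, station 2 sends bit 0,
                     # stations 3 and 4 are silent (illustrative data)

# The channel carries the sum d1.c1 + d2.c2 + d3.c3 + d4.c4
channel = [sum(d[i] * c[i][k] for i in range(4)) for k in range(4)]

# A receiver recovers station 1's bit by multiplying the channel data
# by c1 and dividing by N = 4
recovered = inner(channel, c[0]) / 4
```

Because the inner product of a code with itself is 4 and with any other code is 0, each station's contribution separates out cleanly.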
Data representation in CDMA
Sharing channel in CDMA
Digital signal created by four stations in CDMA
Decoding of the composite signal for one station in CDMA
Link-Layer Switches
Bridges
Learning Bridge
Spanning tree algorithms
Link-Layer Switches
■ A link-layer switch (or switch) operates in both the physical and the data-
link layers. As a physical-layer device, it regenerates the signal it receives.
As a link- layer device, the link-layer switch can check the MAC
addresses (source and destination) contained in the frame.
■ Filtering: One may ask what the difference in functionality is between a
link-layer switch and a hub. A link-layer switch has filtering capability. It
can check the destination address of a frame and can decide from which
outgoing port the frame should be sent.
■ A better solution to the static table is a dynamic table that maps addresses
to ports (interfaces) automatically. To make a table dynamic, we need a
switch that gradually learns from the frames’ movements.
■ To do this, the switch inspects both the destination and the source
addresses in each frame that passes through the switch. The destination
address is used for the forwarding decision (table lookup); the source
address is used for adding entries to the table and for updating purposes.
■ When station A sends a frame to station D, the switch does not have
an entry for either D or A. The frame goes out from all three ports;
the frame floods the network. However, by looking at the source
address, the switch learns that station A must be connected to port 1.
This means that frames destined for A, in the future, must be sent
out through port 1. The switch adds this entry to its table. The
table has its first entry now.
■ When station D sends a frame to station B, the switch has no entry
for B, so it floods the network again. However, it adds one more
entry to the table related to station D.
■ The learning process continues until the table has information about
every port. However, note that the learning process may take a long
time. For example, if a station does not send out a frame (a rare
situation), the station will never have an entry in the table.
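The learning-and-forwarding behavior described above can be sketched as a simplified model (real switches also age out table entries):

```python
def learning_switch(frames, num_ports):
    """Simulate a learning switch.

    frames: (src_mac, dst_mac, in_port) tuples in arrival order.
    Returns, per frame, the set of ports the frame is sent out on;
    an empty set means the frame is filtered (destination is on the
    incoming port).
    """
    table, decisions = {}, []
    for src, dst, in_port in frames:
        table[src] = in_port                         # learn from the source
        if dst in table:
            out = {table[dst]} - {in_port}           # forward (or filter)
        else:
            out = set(range(num_ports)) - {in_port}  # unknown: flood
        decisions.append(out)
    return decisions
```

Replaying the scenario from the text (A on port 1 in the slides; ports are 0-indexed here): the first frame to an unknown destination floods, and once the table has learned a station's port, later frames go out of that single port.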
Loop Problem
■ Transparent switches work fine as long as there are no redundant
switches in the system. Systems administrators, however, like to
have redundant switches (more than one switch between a pair of
LANs) to make the system more reliable. If a switch fails, another
switch takes over until the failed one is repaired or replaced.
■ Redundancy can create loops in the system, which is very
undesirable. Loops can be created only when two or more
broadcasting LANs (those using hubs, for example) are connected
by more than one switch.
■ Station A sends a frame to
station D. The tables of both
switches are empty.
■ Both forward the frame and
update their tables based on
the source address A.
■ Now there are two copies of the
frame on LAN 2.
■ The copy sent out by the left
switch is received by the right
switch, which does not have any
information about the destination
address D;
■ it forwards the frame. The copy
sent out by the right switch is
received by the left switch and is
sent out for lack of information
about D.
■ Note that each frame is handled
separately because switches, as
two nodes on a broadcast network
sharing the medium, use an access
method such as CSMA/CD.
■ The tables of both switches are
updated, but still there is no
information for destination D.
■ Now there are two copies of
the frame on LAN 1.
■ Step 2 is repeated, and both
copies are sent to LAN2.
■ The process continues on and
on. Note that switches are also
repeaters and regenerate
frames.
■ So in each iteration, there are
newly generated fresh copies
of the frames.
Spanning Tree Algorithm
■ To solve the looping problem, the IEEE specification requires that switches
use the spanning tree algorithm to create a loopless topology.
■ In graph theory, a spanning tree is a graph in which there is no loop.
■ In a switched LAN, this means creating a topology in which each LAN can
be reached from any other LAN through one path only (no loop).
■ We cannot change the physical topology of the system because of physical
connections between cables and switches, but we can create a logical
topology that overlays the physical one. Figure in next slide shows a
system with four LANs and five switches.
■ The figure shown the physical system and its representation in graph theory.
■ The connecting arcs show the connection of a LAN to a switch and vice
versa.
■ To find the spanning tree, we need to assign a cost (metric) to each arc.
■ The hop count is normally 1 from a switch to the LAN and 0 in the
reverse direction.
The process for finding the spanning tree
involves three steps:
■ Every switch has a built-in ID (normally the serial number, which is
unique). Each switch broadcasts this ID so that all switches know
which one has the smallest ID. The switch with the smallest ID is
selected as the root switch (root of the tree). We assume that switch
S1 has the smallest ID. It is, therefore, selected as the root switch.
■ The algorithm tries to find the shortest path (a path with the shortest
cost) from the root switch to every other switch or LAN. The
shortest path can be found by examining the total cost from the root
switch to the destination.
■ The combination of the shortest paths creates
the shortest tree.
Ports used in STP
❑ Root port: The root port is a port that has the lowest cost path to the root
bridge.
❑ Designated port: The designated port is a port that forwards the traffic
away from the root bridge.
❑ Blocking port: The blocking port is a port that receives the frames, but it
neither forwards nor sends the frames. It simply drops the received
frames.
❑ Backup port: The backup port is a port that provides the backup path in
a spanning tree if a designated port fails. This port gets active
immediately when the designated port fails.
❑ Alternate port: The alternate port is a port that provides the alternate path
to the root bridge if the root bridge fails.
❑ Forwarding: A port in normal operation receiving and forwarding frames.
The port monitors incoming BPDUs that would indicate it should return
to the blocking state to prevent a loop.
■ Based on the spanning tree, we mark the ports that are part of
it, the forwarding ports, which forward a frame that the
switch receives. We also mark those ports that are not part of
the spanning tree, the blocking ports, which block the frames
received by the switch.
■ Note that there is only one path from any LAN to any other LAN in the spanning
tree system.
■ This means there is only one path from one LAN to any other LAN.
■ No loops are created.
■ Before STP decides which path is the best to the Root Bridge, it needs to first decide which
switch has to be elected as the Root Bridge, which is where the Bridge ID comes into play.
■ Every switch has an identity when they are part of a network. This identity is called the Bridge
ID or BID.
■ It is an 8 byte field which is divided into two parts. The first part is a 2-byte Bridge Priority field
(which can be configured) while the second part is the 6-byte MAC address of the switch.
■ While the Bridge Priority is configurable, the MAC address is unique amongst all switches and
the sum of these two ensures a unique Bridge ID.
THE ROOT BRIDGE ELECTION PROCESS
■ The election process uses several STP messages sent between switches, which help
each switch decide which one is the Root Bridge. These messages are called Hello
BPDUs, where BPDU stands for Bridge Protocol Data Unit. It is important to
understand the information these BPDUs carry, as it helps in understanding the
election process itself.
■ Each BPDU carries several fields. The following table defines each field:

Field                          Description
Root Bridge ID (Root BID)      BID of the switch that the sender of this BPDU
                               believes to be the root switch
Cost to the Root Bridge        The STP cost between this switch and the current root
Timer values on Root Bridge    Hello Timer, Max Age Timer, Forward Delay Timer
■ Now, the election process itself is very simple. The switch with the
lowest BID becomes the Root Bridge.
■ Since the BID starts with the Bridge Priority field, essentially, the switch with the
lowest Bridge Priority field becomes the Root Bridge.
■ If there is a tie between two switches having the same priority value, then the switch
with the lowest MAC address becomes the Root Bridge.
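Since a BID is (priority, MAC address) compared left to right, the election reduces to taking a minimum; a sketch with made-up priorities and MAC addresses:

```python
def elect_root(switches):
    """switches: (bridge_priority, mac_address) pairs.
    Tuple comparison picks the lowest priority first, then the lowest
    MAC address as the tie-breaker."""
    return min(switches)

# Two switches tying on the default priority 32768 fall back to the
# MAC-address tie-break; a lower configured priority always wins outright.
```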
■ The STP Root Bridge election process starts with each switch advertising
themselves as the Root Bridge and constructing the Hello BPDU accordingly.
■ So each switch lists its own BID as the Root BID. The Sender Bridge ID is of course
the same as the Root BID, as it is again its own BID.
■ Within the BPDU, the Cost field is listed with a value of 0, because the cost
from a switch to itself is zero.
■ The switches send out the Hello BPDU constructed as above, onto the network.
They will keep on maintaining their status as Root Bridge by default, until they
receive a Hello BPDU which carries a lower BID.
■ This Hello BPDU then becomes a superior BPDU.
■ Now the switch receiving this superior BPDU makes changes to the Hello BPDU it
has been sending out.
■ It changes the value of the Root BID to reflect the Root BID from the superior Hello
BPDU.
■ This process continues till every switch agrees on which switch has the lowest BID,
and hence deserves to be the Root Bridge.
■ Suppose there are four switches A, B, C, and D on a local area network. There are redundant
links that exist among these interconnected devices. In the above figure, there are two paths
that exist, i.e., DBA and DCA.
■ Link redundancy is good for network availability, but it creates layer 2 loops. The question
arises "how network blocks the unwanted links to avoid the loops without destroying the link
redundancy?".
■ The answer to this question is STP. First, STP chooses one switch as a root bridge.
■ In the above case, A switch is chosen as a root bridge. Next, other switches select the path to
the root bridge, having the least path cost.
■ Now we look at the switch B. For switch B, there are two paths that exist to reach switch A
(root bridge), i.e., BDCA and BA.
■ The path BDCA costs 7 while the path BA costs 2. Therefore, path BA is chosen to reach the
root bridge.
■ The port at switch B is selected as a root port, and the other end is a designated port.
■ Now we look at the switch C. From switch C, there are two paths that exist, i.e., CDBA and
CA. The least-cost path is CA, as it costs 1. Thus, it is selected as a root port, and the other
end is selected as a designated port.
■ Now we look at the switch D. For switch D, there are two paths that exist to reach switch A,
i.e., DBA and DCA. The path DBA costs 4 while the DCA costs 5.
■ Therefore, path DBA is chosen as it has the least cost path. The port on D is selected as a root
port, and on the other end, switch B is selected as a designated port. In this example, we
have observed that the root bridge can contain many designated ports, but it does not
contain a root port.