Link Layer
The Data Link Layer is the second layer of the OSI model. It is one of the most complicated layers
and carries complex functionalities and responsibilities. The data link layer hides the details of the
underlying hardware and presents itself to the upper layer as the medium of communication.
The data link layer works between two hosts that are directly connected in some sense. This direct
connection could be point-to-point or broadcast. Systems on a broadcast network are said to be on
the same link. The work of the data link layer tends to get more complex when it deals with multiple
hosts on a single collision domain.
The data link layer is responsible for converting the data stream into signals, bit by bit, and sending
them over the underlying hardware. At the receiving end, the data link layer picks up data from the
hardware in the form of electrical signals, assembles it into a recognizable frame format, and hands it
over to the upper layer.
Logical Link Control (LLC): this sublayer deals with protocols, flow control, and error control.
The data link layer performs many tasks on behalf of the upper layer. These are:
Framing
The data link layer takes packets from the Network Layer and encapsulates them into frames. It then
sends each frame bit by bit over the hardware. At the receiver's end, the data link layer picks up
signals from the hardware and assembles them into frames.
Addressing
The data link layer provides a layer-2 hardware addressing mechanism. The hardware address is
assumed to be unique on the link. It is encoded into the hardware at the time of manufacturing.
Synchronization
When data frames are sent on the link, both machines must be synchronized in order for the transfer
to take place.
Error Control
Sometimes signals encounter problems in transit and bits get flipped. Such errors are detected, and
an attempt is made to recover the actual data bits. The layer also provides an error reporting
mechanism to the sender.
Flow Control
Stations on the same link may have different speeds or capacities. The data link layer provides flow
control, which enables both machines to exchange data at the same speed.
Multi-Access
When a host on a shared link tries to transfer data, there is a high probability of collision. The data
link layer provides mechanisms such as CSMA/CD to give multiple systems the capability of accessing
the shared medium.
There are many reasons, such as noise and cross-talk, which may cause data to become corrupted
during transmission. The upper layers work on a generalized view of the network architecture and
are not aware of the actual hardware data processing. Hence, the upper layers expect error-free
transmission between the systems. Most applications would not function as expected if they received
erroneous data. Applications such as voice and video are less affected and may still function well
despite some errors.
The data link layer uses error control mechanisms to ensure that frames (data bit streams) are
transmitted with a certain level of accuracy. But to understand how errors are controlled, it is
essential to know what types of errors may occur.
Types of Errors:
1. Single-bit Error
2. Multiple-bit Error
3. Burst Error
Error control is handled in two ways:
1. Error Detection
2. Error Correction
Error Detection
Errors in the received frames are detected by means of a Parity Check or a Cyclic Redundancy Check
(CRC). In both cases, a few extra bits are sent along with the actual data to confirm that the bits
received at the other end are the same as those sent. If the counter-check at the receiver's end fails,
the bits are considered corrupted.
Parity Check
One extra bit is sent along with the original bits to make the number of 1s even in the case of even
parity, or odd in the case of odd parity.
While creating a frame, the sender counts the number of 1s in it. For example, if even parity is used
and the number of 1s is even, then a bit with value 0 is added; this way the number of 1s remains
even. If the number of 1s is odd, a bit with value 1 is added to make it even.
If a single bit flips in transit, the receiver can detect it by counting the number of 1s. But when more
than one bit is erroneous, it is very hard for the receiver to detect the error.
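As a quick sketch of the mechanism above, the following Python snippet (an illustration, not any standard library API) appends and checks an even-parity bit:

```python
def add_parity_bit(bits, even=True):
    """Append one parity bit so the total number of 1s is even (or odd)."""
    ones = bits.count("1")
    needs_flip = (ones % 2 == 0) != even      # does the current count already match?
    return bits + ("1" if needs_flip else "0")

def check_parity(codeword, even=True):
    """Receiver side: re-count the 1s, including the parity bit."""
    return (codeword.count("1") % 2 == 0) == even

frame = add_parity_bit("1011001")   # four 1s, already even -> append "0"
print(frame)                        # 10110010
print(check_parity(frame))          # True
print(check_parity("10010010"))     # False: one bit flipped in transit
```

Note that flipping any two bits of the codeword leaves the count even, which is exactly why parity cannot detect double-bit errors.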
Cyclic Redundancy Check (CRC)
CRC is a different approach to detect whether the received frame contains valid data. This technique
involves binary division of the data bits being sent. The divisor is generated using polynomials. The
sender performs a division operation on the bits being sent and calculates the remainder. Before
sending the actual bits, the sender appends the remainder to the end of them. The actual data bits
plus the remainder are called a codeword. The sender transmits the data bits as codewords.
At the other end, the receiver performs a division operation on the codeword using the same CRC
divisor. If the remainder contains all zeros, the data bits are accepted; otherwise, it is assumed that
some data corruption occurred in transit.
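The division described above is mod-2 (XOR) division. The following Python sketch illustrates it with the small generator 1101 (the polynomial x^3 + x^2 + 1); the divisor and data values here are chosen purely for illustration:

```python
def mod2_div(dividend, divisor):
    """Mod-2 (XOR) division on bit strings; returns the remainder,
    which has len(divisor) - 1 bits."""
    rem = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if rem[i] == "1":                         # XOR the divisor in at this position
            for j, d in enumerate(divisor):
                rem[i + j] = str(int(rem[i + j]) ^ int(d))
    return "".join(rem[-(len(divisor) - 1):])

def make_codeword(data, divisor):
    """Sender side: append the remainder of data followed by n zero bits."""
    return data + mod2_div(data + "0" * (len(divisor) - 1), divisor)

def verify(codeword, divisor):
    """Receiver side: accept only if the remainder is all zeros."""
    return set(mod2_div(codeword, divisor)) == {"0"}

divisor = "1101"                          # x^3 + x^2 + 1
codeword = make_codeword("100100", divisor)
print(codeword)                           # 100100001
print(verify(codeword, divisor))          # True
print(verify("110100001", divisor))       # False: a bit flipped in transit
```

Because the generator has more than one term, every single-bit error leaves a nonzero remainder and is therefore detected.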
Error Correction
Backward Error Correction: When the receiver detects an error in the data received, it requests the
sender to retransmit the data unit.
Forward Error Correction: When the receiver detects an error in the data received, it executes an
error-correcting code, which helps it recover automatically and correct some kinds of errors.
The first one, Backward Error Correction, is simple and can be used efficiently only where
retransmission is not expensive, for example, over fiber optics. But in the case of wireless
transmission, retransmitting may cost too much; in that case, Forward Error Correction is used.
To correct an error in the data frame, the receiver must know exactly which bit in the frame is
corrupted. To locate the erroneous bit, redundant bits are used as parity bits for error detection. For
example, if we take ASCII words (7 data bits), there are 8 kinds of information we need: seven to tell
us which bit is in error and one more to tell us that there is no error.
For m data bits, r redundant bits are used. r bits can provide 2^r combinations of information. In an
(m+r)-bit codeword, there is a possibility that the r bits themselves may get corrupted. So the r bits
used must be able to indicate all m+r bit locations plus the no-error case, i.e. 2^r >= m + r + 1.
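The inequality 2^r >= m + r + 1 can be checked numerically; this small Python helper (illustrative only) finds the minimum r for a given m:

```python
def redundant_bits(m):
    """Smallest r with 2**r >= m + r + 1: the r bits must name every one of
    the m + r positions that could be in error, plus the 'no error' case."""
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r

print(redundant_bits(7))    # 4 redundant bits for a 7-bit ASCII word
print(redundant_bits(4))    # 3 (the classic Hamming(7,4) code)
```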
The data link layer is responsible for implementing point-to-point flow and error control mechanisms.
Flow Control :-
When a data frame (Layer-2 data) is sent from one host to another over a single medium, the sender
and receiver must work at the same speed; that is, the sender sends at a speed at which the receiver
can process and accept the data. What if the speeds (hardware/software) of the sender and receiver
differ? If the sender sends too fast, the receiver may be overloaded (swamped) and data may be lost.
Stop and Wait
This flow control mechanism forces the sender, after transmitting a data frame, to stop and wait
until the acknowledgement of the data frame sent is received.
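A toy simulation of this stop-and-wait behaviour might look like the following Python sketch; the lossy "channel" and its parameters (timeout_prob, max_retries) are invented purely for illustration:

```python
import random

def stop_and_wait_send(frames, timeout_prob=0.3, max_retries=5):
    """Toy stop-and-wait sender: transmit one frame, then block until it is
    acknowledged; on a timeout, retransmit the same frame. The channel is
    simulated: each transmission times out with probability timeout_prob."""
    log = []
    for seq, frame in enumerate(frames):
        for attempt in range(max_retries):
            log.append(("send", seq, attempt))
            if random.random() > timeout_prob:    # ACK arrived in time
                log.append(("ack", seq))
                break
            log.append(("timeout", seq))          # wait expired: try again
        else:
            raise RuntimeError(f"frame {seq} undeliverable")
    return log

random.seed(1)
for event in stop_and_wait_send(["A", "B"]):
    print(event)
```

Notice that the sender is idle for a full round trip after every frame, which is exactly the inefficiency the sliding window protocol below addresses.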
Sliding Window
In this flow control mechanism, both sender and receiver agree on the number of data frames after
which the acknowledgement should be sent. As we learnt, the stop-and-wait flow control mechanism
wastes resources; this protocol tries to make use of the underlying resources as much as possible.
Error Control
When a data frame is transmitted, there is a probability that it may be lost in transit or received
corrupted. In both cases, the receiver does not receive the correct data frame and the sender knows
nothing about the loss. In such cases, both sender and receiver are equipped with protocols that help
them detect transit errors such as the loss of a data frame. Then either the sender retransmits the
data frame or the receiver requests that the previous data frame be resent.
Error detection - The sender and receiver, either both or one of them, must ascertain that an error
occurred in transit.
Positive ACK - When the receiver receives a correct frame, it should acknowledge it.
Negative ACK - When the receiver receives a damaged or duplicate frame, it sends a NACK back to
the sender, and the sender must retransmit the correct frame.
Retransmission: The sender maintains a clock and sets a timeout period. If an acknowledgement of a
previously transmitted data frame does not arrive before the timeout, the sender retransmits the
frame, assuming that the frame or its acknowledgement was lost in transit.
There are three techniques which the data link layer may deploy to control errors by Automatic
Repeat Request (ARQ): Stop-and-wait ARQ, Go-Back-N ARQ, and Selective Repeat ARQ.
Go-Back-N ARQ
The sending-window size enables the sender to send multiple frames without receiving the
acknowledgement of the previous ones. The receiving window enables the receiver to receive
multiple frames and acknowledge them. The receiver keeps track of the incoming frames' sequence
numbers.
When the sender has sent all the frames in the window, it checks up to what sequence number it has
received positive acknowledgement. If all frames are positively acknowledged, the sender sends the
next set of frames. If the sender finds that it has received a NACK, or has not received any ACK for a
particular frame, it retransmits that frame and all the frames after it.
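The retransmission rule just described, resend everything from the first unacknowledged frame onward, can be sketched as follows (a toy illustration, not a full protocol implementation):

```python
def go_back_n_retransmit(window, acked):
    """Given the sequence numbers in the current window and the set of
    positively acknowledged ones, return the frames a Go-Back-N sender
    resends: everything from the first unacknowledged frame onward."""
    for i, seq in enumerate(window):
        if seq not in acked:
            return window[i:]          # frame i and all frames after it
    return []                          # all acknowledged: slide window forward

print(go_back_n_retransmit([0, 1, 2, 3], {0, 1}))        # [2, 3]
print(go_back_n_retransmit([0, 1, 2, 3], {0, 1, 2, 3}))  # []
```

Note that frames after the first gap are resent even if they arrived correctly, which is the cost Selective Repeat ARQ (below) avoids.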
In Selective-Repeat ARQ, the receiver, while keeping track of sequence numbers, buffers the frames
in memory and sends a NACK only for the frame which is missing or damaged.
The sender, in this case, sends only the frame for which the NACK was received.
Sliding window protocols are data link layer protocols for the reliable and sequential delivery
of data frames. The sliding window is also used in the Transmission Control Protocol.
In these protocols, multiple frames can be sent by a sender at a time before receiving an
acknowledgment from the receiver. The term sliding window refers to imaginary boxes
that hold the frames. The sliding window method is also known as windowing.
Working Principle
In these protocols, the sender has a buffer called the sending window and the receiver has a
buffer called the receiving window.
The size of the sending window determines the sequence numbering of the outbound frames.
If the sequence number of the frames is an n-bit field, then the range of sequence
numbers that can be assigned is 0 to 2^n - 1. Consequently, the size of the sending window
is 2^n - 1. Thus, in order to accommodate a sending window of size 2^n - 1, an n-bit sequence
number is chosen.
The sequence numbers are numbered modulo 2^n. For example, if the sending window
size is 4, then the sequence numbers will be 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, and so on. The
number of bits in the sequence number is then 2, generating the binary sequence 00, 01, 10,
11.
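The modulo numbering above can be generated directly; a minimal Python sketch:

```python
def sequence_numbers(n_bits, count):
    """First `count` frame sequence numbers for an n-bit field:
    they cycle modulo 2**n_bits."""
    modulus = 2 ** n_bits
    return [i % modulus for i in range(count)]

print(sequence_numbers(2, 10))   # [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]
```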
The size of the receiving window is the maximum number of frames that the receiver can
accept at a time. It determines the maximum number of frames that the sender can send
before receiving acknowledgment.
Example
Suppose that we have a sender window and a receiver window, each of size 4. The sequence
numbering of both windows will then be 0, 1, 2, 3, 0, 1, 2, and so on. The following diagram
shows the positions of the windows after sending frames and receiving acknowledgments.
Types of Sliding Window Protocols
The Sliding Window ARQ (Automatic Repeat reQuest) protocols are of two categories −
Go – Back – N ARQ
Go – Back – N ARQ provides for sending multiple frames before receiving the
acknowledgment for the first frame. It uses the concept of sliding window, and so
is also called sliding window protocol. The frames are sequentially numbered and a
finite number of frames are sent. If the acknowledgment of a frame is not received
within the time period, all frames starting from that frame are retransmitted.
Selective Repeat ARQ
This protocol also provides for sending multiple frames before receiving the
acknowledgment for the first frame. However, here only the erroneous or lost
frames are retransmitted, while the good frames are received and buffered.
A MAC address is defined as an identification number for the hardware. In general, the network
interface card (NIC) of each computer, such as a Wi-Fi card, Bluetooth adapter, or Ethernet card, has
an unchangeable MAC address embedded by the vendor at the time of manufacturing. Dell, Nortel,
Belkin, and Cisco are some well-known NIC manufacturers. One can change the given default address
of a device by replacing its NIC card.
History of MAC
As far as history goes, scientists at Xerox PARC created Media Access Control addresses. There are
many similar terms used in place of MAC address, such as hardware address, physical address, and
Ethernet hardware address of a network device. Burned-in address (BIA, used especially for Cisco
routers and switches) also refers to the same thing.
Characteristics of MAC
The MAC address, considered the distinguishing number of the hardware, is globally unique. This lets
us identify each device within a connected network.
The total length of a MAC address is 6 bytes (48 bits). According to the IEEE 802 standards, this
address is written in three commonly used formats:
Six two-digit hexadecimal numbers separated by hyphens (-), like 45-67-89-AB-12-CD.
Six two-digit hexadecimal numbers separated by colons (:), like 45:67:89:AB:DE:23.
Three four-digit hexadecimal numbers separated by dots (.), like ABCD.4567.1238.
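The three notations are just different groupings of the same 12 hex digits; a small illustrative Python helper:

```python
def mac_formats(raw_hex):
    """Render a 48-bit MAC address (given as 12 hex digits) in the three
    common notations: hyphen-separated, colon-separated, dot-separated."""
    pairs = [raw_hex[i:i + 2] for i in range(0, 12, 2)]   # six 2-digit groups
    quads = [raw_hex[i:i + 4] for i in range(0, 12, 4)]   # three 4-digit groups
    return "-".join(pairs), ":".join(pairs), ".".join(quads)

hyphen, colon, dot = mac_formats("456789AB12CD")
print(hyphen)   # 45-67-89-AB-12-CD
print(colon)    # 45:67:89:AB:12:CD
print(dot)      # 4567.89AB.12CD
```

The first six hex digits ("456789" here) are the OUI described below; the remaining six identify the individual interface.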
The left 24 bits (3 bytes) of the address are termed the Organizationally Unique Identifier (OUI). The
OUI is assigned by the IEEE Registration Authority. This globally unique OUI will always remain the
same for NICs manufactured by the same company. The right 24 bits (3 bytes) of the address are the
NIC-specific part, assigned by the manufacturer to identify the individual controller, which carries out
communication either over cables or wirelessly on a computer network.
Some devices that exist on this second layer are NICs, bridges, and switches. This layer is also
responsible for error-free data transmission over the physical layer in LAN transmissions. If we refer
to the Open Systems Interconnection (OSI) network model, MAC addresses belong to the medium
access control protocol sublayer of the data link layer.
Advantages of MAC
Devices that connect to the network have no attachment cost associated with them.
A router or switch can have policies set on it that determine whether attached equipment is
permitted or not, irrespective of the person attaching it.
The MAC addresses of all devices on the same network subnet are different. Hence, diagnosing
network issues relating to IP addresses and the like is easier because of the usefulness of MAC
addresses.
A network administrator can reliably identify senders and receivers of data on the network with the
help of MAC addresses, the reason being that, unlike dynamic IP addresses, MAC addresses do not
change from time to time.
Disadvantages of MAC
Because the first three bytes (OUI) of a MAC address are reserved for the manufacturer, only 2^24
unique addresses are available per OUI for that manufacturer.
Spoofing defeats MAC address filtering: because of the broadcast nature of Ethernet, one can act in
disguise and listen to traffic to and from permitted MAC addresses.
In most cases an intruder can obtain access to the network by changing his MAC address to one that
is permitted.
Channel Allocation
Channel allocation is the process by which a single channel is divided and allotted to multiple users in
order to carry user-specific tasks. The number of users may vary every time the process takes place.
If there are N users and the channel is divided into N equal-sized sub-channels, each user is assigned
one portion. If the number of users is small and does not vary over time, then Frequency Division
Multiplexing can be used, as it is a simple and efficient channel-bandwidth allocation technique.
The channel allocation problem can be solved by two schemes: Static Channel Allocation in LANs and
MANs, and Dynamic Channel Allocation.
However, static allocation is not suitable for a large number of users with variable bandwidth
requirements; it is not efficient to divide the channel into a fixed number of chunks.
T = 1/(U*C - L)
T(FDM) = 1/(U*(C/N) - L/N) = N/(U*C - L) = N*T
Where C is the channel capacity (bps), 1/U is the mean frame length (bits), L is the mean arrival rate
(frames/sec), and N is the number of equal sub-channels. Dividing the channel statically thus makes
the mean delay N times worse.
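These delay formulas come from an M/M/1 queueing model: with capacity C bps, mean frame length 1/U bits, and arrival rate L frames/sec, the mean delay is T = 1/(U*C - L), and splitting the channel into N FDM sub-channels multiplies the delay by N. A quick numerical check (the traffic values below are chosen arbitrarily for illustration):

```python
def mean_delay(capacity_bps, mean_frame_bits, arrival_rate, n_channels=1):
    """Mean delay T = 1/(U*C - L) per sub-channel, where U = 1/mean_frame_bits,
    with capacity and load split evenly across n_channels."""
    u = 1.0 / mean_frame_bits
    c = capacity_bps / n_channels
    lam = arrival_rate / n_channels
    return 1.0 / (u * c - lam)

# One 100 kbps channel, 10,000-bit frames, 5 frames/sec on average:
t_single = mean_delay(100_000, 10_000, 5.0)
t_fdm = mean_delay(100_000, 10_000, 5.0, n_channels=5)
print(t_single)   # 0.2 (seconds)
print(t_fdm)      # 1.0 (seconds) -> N = 5 times larger
```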
In the dynamic channel allocation scheme, frequency bands are not permanently assigned to users.
Instead, channels are allotted to users dynamically as needed, from a central pool. The allocation is
done considering a number of parameters so that transmission interference is minimized.
This allocation scheme optimizes bandwidth usage and results in faster transmissions.
Dynamic channel allocation may be done in two ways:
Centralised Allocation
Distributed Allocation
The Data Link Layer is responsible for the transmission of data between two nodes.
Data link control is responsible for the reliable transmission of messages over the transmission
channel, using techniques like framing, error control and flow control. For data link control refer to –
Stop and Wait ARQ
If there is a dedicated link between the sender and the receiver, then the data link control layer is
sufficient; however, if there is no dedicated link, then multiple stations can access the channel
simultaneously. Hence, multiple access protocols are required to decrease collisions and avoid
crosstalk. For example, in a classroom full of students, when a teacher asks a question and all the
students (stations) start answering simultaneously (sending data at the same time), a lot of chaos is
created (data overlaps or is lost); it is then the job of the teacher (the multiple access protocol) to
manage the students and make them answer one at a time.
Thus, protocols are required for sharing data on non-dedicated channels. Multiple access protocols
can be subdivided further as –
1. Random Access Protocol:-
In this, all stations have the same priority; no station has more priority than another. Any station can
send data depending on the medium's state (idle or busy). Its subtypes are:
(a) ALOHA :-
It was designed for wireless LANs but is also applicable to shared media. In this, multiple stations can
transmit data at the same time, which can lead to collisions and garbled data.
Pure Aloha:
When a station sends data, it waits for an acknowledgement. If the acknowledgement does not come
within the allotted time, the station waits for a random amount of time called the back-off time (Tb)
and re-sends the data. Since different stations wait for different amounts of time, the probability of
further collisions decreases.
Vulnerable Time = 2* Frame transmission time
Throughput = G exp{-2*G}
Maximum throughput = 0.184 for G=0.5
Slotted Aloha:
It is similar to pure ALOHA, except that time is divided into slots and data may be sent only at the
beginning of a slot. If a station misses the allowed time, it must wait for the next slot. This reduces
the probability of collision.
Vulnerable Time = Frame transmission time
Throughput = G exp{-G}
Maximum throughput = 0.368 for G=1
For more information on ALOHA refer – LAN Technologies
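The throughput formulas for both flavours of ALOHA are easy to evaluate; a short Python check of the stated maxima:

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G): vulnerable period is two frame times."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """S = G * e^(-G): slotting halves the vulnerable period."""
    return G * math.exp(-G)

print(round(pure_aloha_throughput(0.5), 3))     # 0.184 at G = 0.5
print(round(slotted_aloha_throughput(1.0), 3))  # 0.368 at G = 1
```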
(b) CSMA :-
Carrier Sense Multiple Access ensures fewer collisions, as a station is required to sense the medium
(idle or busy) before transmitting data. If the medium is idle, the station sends data; otherwise, it
waits until the channel becomes idle. However, there is still a chance of collision in CSMA due to
propagation delay. For example, if station A wants to send data, it will first sense the medium. If it
finds the channel idle, it will start sending data. However, before the first bit of data from station A
arrives (delayed by propagation), station B may sense the medium, also find it idle, and also send
data. This results in a collision between the data from stations A and B.
1-persistent: The node senses the channel; if it is idle, it sends the data, otherwise it continuously
keeps checking the medium and transmits unconditionally (with probability 1) as soon as the channel
becomes idle.
Non-persistent: The node senses the channel; if it is idle, it sends the data, otherwise it checks the
medium again after a random amount of time (not continuously) and transmits when the medium is
found idle.
P-persistent: The node senses the medium; if it is idle, it sends the data with probability p. If the data
is not transmitted (probability 1-p), it waits for some time and checks the medium again; if the
medium is then idle, it again sends with probability p. This repeats until the frame is sent. It is used in
Wi-Fi and packet radio systems.
O-persistent: The priority of the nodes is decided beforehand, and transmission occurs in that order.
If the medium is idle, a node waits for its time slot to send data.
(c) CSMA/CD :–
Carrier Sense Multiple Access with Collision Detection. A station monitors the medium while
transmitting; if a collision is detected, it aborts the transmission immediately and retries after a
random back-off time.
(d) CSMA/CA :–
Carrier Sense Multiple Access with Collision Avoidance. The process of collision detection involves
the sender receiving acknowledgement signals. If there is just one signal (its own), the data was sent
successfully; but if there are two signals (its own and the one it collided with), a collision occurred.
To distinguish between these two cases, a collision must have a significant impact on the received
signal. This is not the case in wireless networks, so CSMA/CA is used for them.
CSMA/CA avoids collision by:
1. Interframe space :–
The station waits for the medium to become idle; if it is found idle, the station does not send data
immediately (to avoid collision due to propagation delay) but waits for a period of time called the
interframe space, or IFS. After this time it checks the medium again for being idle. The IFS duration
depends on the priority of the station.
2. Contention Window :-
It is an amount of time divided into slots. If the sender is ready to send data, it chooses a random
number of slots as wait time, and this number doubles every time the medium is not found idle. If
the medium is found busy, the sender does not restart the entire process; rather, it resumes the
timer when the channel is found idle again.
3. Acknowledgement :-
The sender re-transmits the data if acknowledgement is not received before time-out.
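The contention-window behaviour described above (a random wait that doubles each time the medium is found busy) is a binary exponential backoff. A sketch in Python, with the window limits cw_min and cw_max chosen only for illustration:

```python
import random

def contention_slots(attempt, cw_min=1, cw_max=1023):
    """Binary exponential backoff: the contention window doubles on every
    failed attempt (capped at cw_max); the station waits a random number
    of slots within the current window."""
    window = min(cw_min * (2 ** attempt), cw_max)
    return random.randint(0, window)

random.seed(0)
for attempt in range(4):
    print(attempt, contention_slots(attempt))
```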
2. Controlled Access:
In this, the data is sent by that station which is approved by all other stations. For further details
refer – Controlled Access Protocols
3. Channelization:
In this, the available bandwidth of the link is shared in time, frequency or code among multiple
stations so that they can access the channel simultaneously.
Frequency Division Multiple Access (FDMA) :–
The available bandwidth is divided into equal bands so that each station can be allocated its own
band. Guard bands are also added so that no two bands overlap to avoid crosstalk and noise.
Time Division Multiple Access (TDMA) :–
In this, the bandwidth is shared between multiple stations. To avoid collision, time is divided into
slots and stations are allotted these slots to transmit data. However, there is an overhead of
synchronization, as each station needs to know its time slot. This is resolved by adding
synchronization bits to each slot. Another issue with TDMA is propagation delay, which is resolved by
the addition of guard times between slots.
For more details refer – Circuit Switching
Code Division Multiple Access (CDMA) :–
One channel carries all transmissions simultaneously. There is neither division of bandwidth nor
division of time. For example, if there are many people in a room all speaking at the same time,
perfect reception is still possible if each pair of speakers uses a language no one else shares.
Similarly, data from different stations can be transmitted simultaneously in different code languages.
Orthogonal Frequency Division Multiple Access (OFDMA) :–
In OFDMA, the available bandwidth is divided into small subcarriers in order to increase the overall
performance; the data is then transmitted through these small subcarriers. It is widely used in 5G
technology.
Advantages:
Increase in efficiency
High data rates
Good for multimedia traffic
Disadvantages:
Complex to implement
High peak to power ratio
Spatial Division Multiple Access (SDMA) :–
SDMA uses multiple antennas at the transmitter and receiver to separate the signals of multiple
users that are located in different spatial directions. This technique is commonly used in MIMO
(Multiple-Input, Multiple-Output) wireless communication systems.
Advantages :
The frequency band is used effectively
The overall signal quality is improved
The overall data rate is increased
Disadvantages :
It is complex to implement
It requires accurate information about the channel
Contention-based access:-
Multiple access protocols are typically contention-based, meaning that multiple devices compete
for access to the communication channel. This can lead to collisions if two or more devices transmit
at the same time, which can result in data loss and decreased network performance.
Carrier Sense Multiple Access (CSMA):-
CSMA is a widely used multiple access protocol in which devices listen for carrier signals on the
communication channel before transmitting. If a carrier signal is detected, the device waits for a
random amount of time before attempting to transmit to reduce the likelihood of collisions.
Collision Detection (CD): CD is a feature of some multiple access protocols that allows devices to
detect when a collision has occurred and take appropriate action, such as backing off and retrying
the transmission.
Collision Avoidance (CA): CA is a feature of some multiple access protocols that attempts to avoid
collisions by assigning time slots to devices for transmission.
Token passing: Token passing is a multiple access protocol in which devices pass a special token
between each other to gain access to the communication channel. Devices can only transmit data
when they hold the token, which ensures that only one device can transmit at a time.
Bandwidth utilization: Multiple access protocols can affect the overall bandwidth utilization of a
network. For example, contention-based protocols may result in lower bandwidth utilization due to
collisions, while token passing protocols may result in higher bandwidth utilization due to the
controlled access to the communication channel.
Network Switching :-
Network switching is the process of forwarding data frames or packets from one port to
another, leading to data transmission from source to destination. The data link layer is the
second layer of the Open Systems Interconnection (OSI) model; its function is to divide the
stream of bits from the physical layer into data frames and transmit the frames according
to switching requirements. Switching in the data link layer is done by network devices
called bridges.
Bridges
A data link layer bridge connects multiple LANs (local area networks) together to form a
larger LAN. This process of aggregating networks is called network bridging. A bridge
connects the different components so that they appear as parts of a single network.
The bridge is not responsible for end-to-end data transfer. It is concerned with transmitting
the data frame from one hop to the next. Hence, bridges do not examine the payload field of
the frame, which allows them to switch any kind of packet from the network layer above.
If any segment of the bridged network is wireless, a wireless bridge is used to perform the
switching.
Bridges are of three types:
simple bridging
multi-port bridging
learning or transparent bridging
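A learning (transparent) bridge builds a table mapping source MAC addresses to the ports they were seen on, then forwards each frame to the known port or floods it everywhere else. The following toy Python sketch (the port numbers and frame format are invented for illustration) shows the learn/forward/flood logic:

```python
def learning_bridge(frames, ports={1, 2, 3}):
    """Toy transparent bridge. frames is a list of (in_port, src_mac, dst_mac).
    Returns the action taken for each frame."""
    table = {}                                    # MAC address -> port
    actions = []
    for in_port, src, dst in frames:
        table[src] = in_port                      # learn where src lives
        if dst in table and table[dst] != in_port:
            actions.append(("forward", dst, table[dst]))
        elif dst in table:
            actions.append(("filter", dst))       # same segment: drop
        else:
            actions.append(("flood", dst, sorted(ports - {in_port})))
    return actions

frames = [(1, "AA", "BB"),   # BB unknown -> flood out ports 2 and 3
          (2, "BB", "AA")]   # AA learnt on port 1 -> forward there only
print(learning_bridge(frames))
```

After the first frame, the bridge already knows where "AA" is, so the reply is forwarded to a single port instead of being flooded, which is the key efficiency of transparent bridging.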