CN Module 2 Data Link Layer Final
• The Data Link Layer is the second layer of the OSI layered model. It is one of the most complicated layers, with complex functionalities and responsibilities. The data link layer hides the details of the underlying hardware and presents itself to the upper layer as the medium over which to communicate.
• The data link layer works between two hosts that are directly connected.
• This direct connection can be point-to-point or broadcast. Systems on a broadcast network are said to be on the same link.
• The data link layer is responsible for converting the data stream to signals, bit by bit, and sending them over the underlying hardware. At the receiving end, the data link layer picks up data from the hardware in the form of electrical signals, assembles them into a recognizable frame format, and hands them over to the upper layer.
3.1 Data Link Layer Design Issues
• The physical layer delivers bits of information to and from the data link layer. The functions of the Data Link Layer are:
1. Providing a well-defined service interface to the network layer.
2. Dealing with transmission errors.
3. Regulating the flow of data so that slow receivers are not swamped by fast senders.
• Data Link layer
– Takes the packets from Network layer, and
– Encapsulates them into frames for transmission
Difference between Frame & Packet
• Frame: the Data Link layer protocol data unit.
• Packet: the Network layer protocol data unit.
• Principal Service Function of the data link layer is to transfer the data from the network layer on the source
machine to the network layer on the destination machine.
• A process in the network layer hands some bits to the data link layer for transmission.
• The job of the data link layer is to transmit the bits to the destination machine so they can be handed over to the network layer there.
Actual communication:
• On the sending machine: Network layer → Data Link layer → Physical layer
• On the receiving machine: Physical layer → Data Link layer → Network layer
Virtual communication:
• The peer Data Link layers appear to communicate directly, although no physical medium is present between them.
• Each frame sent by the Data Link layer is acknowledged, so the sender knows whether a specific frame has been received or lost.
• Typically the protocol uses a specific time period; if it passes without an acknowledgment being received, the frame is re-sent.
• This service is useful for communication when an unreliable channel is being utilized (e.g., 802.11 WiFi).
• The network layer does not know the frame size or other restrictions of the data link layer. Hence it becomes necessary for the data link layer to have some mechanism to optimize the transmission.
3. Acknowledged Connection-Oriented Service
Framing
• Framing is a function of the Data Link Layer used to separate messages travelling from source to destination by adding the sender address and destination address.
• Need of Framing:
• The Data Link Layer needs to pack bits into frames, so that each frame is distinguishable from another. The Data Link
Layer prepares a packet for transport across the local media by encapsulating it with a header & trailer to create a frame
• A point-to-point connection between two computers or devices consists of a wire in which data is transmitted as a stream of bits. However, these bits must be framed into discernible blocks of information.
• Framing is a function of the data link layer. It provides a way for a sender to transmit a set of bits that are meaningful to the
receiver.
Framing Methods
1. Byte count.
2. Flag bytes with byte stuffing.
3. Flag bits with bit stuffing.
Framing Method 2: Flag Bytes with Byte Stuffing
• This method gets around the problem of detecting frame boundaries by having each frame start and end with special bytes.
• If the beginning and ending bytes of a frame are the same, the byte is called the flag byte.
• If the actual data contain a byte identical to the FLAG byte (e.g., in a picture or other binary data stream), the convention is to insert an escape character just before the accidental “FLAG” byte.
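To make the byte-stuffing convention concrete, here is a minimal Python sketch; the FLAG and ESC byte values are illustrative assumptions, not taken from the slides.

# Minimal byte-stuffing sketch; FLAG and ESC values are illustrative.
FLAG = 0x7E   # marks the start and end of a frame
ESC = 0x7D    # escape character inserted before an accidental FLAG or ESC

def byte_stuff(payload: bytes) -> bytes:
    """Frame a payload: escape FLAG/ESC bytes, then wrap it in FLAG bytes."""
    body = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            body.append(ESC)          # insert the escape before the troublesome byte
        body.append(b)
    return bytes([FLAG]) + bytes(body) + bytes([FLAG])

def byte_destuff(frame: bytes) -> bytes:
    """Recover the payload from a stuffed frame."""
    body = frame[1:-1]                # drop the enclosing FLAG bytes
    out = bytearray()
    escaped = False
    for b in body:
        if not escaped and b == ESC:
            escaped = True            # the next byte is data, even if it looks like FLAG/ESC
            continue
        out.append(b)
        escaped = False
    return bytes(out)

if __name__ == "__main__":
    data = bytes([0x41, FLAG, 0x42, ESC, 0x43])
    framed = byte_stuff(data)
    assert byte_destuff(framed) == data
    print(framed.hex(" "))            # frame contents with escapes inserted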
Framing Method 3: Flag Bits with Bit Stuffing
• This method achieves the same thing as byte stuffing by using individual bits instead of bytes (8 bits).
• It was developed for the High-level Data Link Control (HDLC) protocol.
• Each frame begins and ends with a special bit pattern:
• 01111110 or 0x7E <- Flag Byte
• Whenever the sender’s data link layer encounters five consecutive 1s in the data it automatically stuffs a 0
bit into the outgoing bit stream.
• USB uses bit stuffing.
[Figure: Bit stuffing. (a) The original data.]
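A small Python sketch of the bit-stuffing rule above, operating on a string of '0'/'1' characters for readability; this is an illustration, not HDLC-conformant code.

def bit_stuff(bits: str) -> str:
    """After every run of five consecutive 1s, insert a 0 into the stream."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")           # stuffed bit
            run = 0
    return "".join(out)

def bit_destuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, run, skip_next = [], 0, False
    for b in bits:
        if skip_next:                 # this bit is the stuffed 0 -- drop it
            skip_next = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip_next = True
            run = 0
    return "".join(out)

if __name__ == "__main__":
    data = "0110111111111111111100"   # contains runs of more than five 1s
    stuffed = bit_stuff(data)
    assert bit_destuff(stuffed) == data
    print(stuffed)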
Error Control
• After the marking of the frame with start and end patterns has been solved, the data link layer must handle errors that occur in transmission and their detection.
• Error control means ensuring that all frames are delivered to the network layer at the destination, and in the proper order.
• Unacknowledged connectionless service: it is OK for the sender to output frames regardless of whether they are received.
• Reliable connection-oriented service: it is NOT OK.
• Reliable connection-oriented service usually will provide a sender with some feedback about what is happening at the
other end of the line.
• The receiver sends back special control frames.
• If the sender receives a positive acknowledgment, it knows that the frame has arrived safely.
• A timer and a frame sequence number at the sender are necessary to handle the case when there is no response (positive or negative) from the receiver.
Flow Control
• An important design issue for cases where the sender is running on a fast, powerful computer and the receiver is running on a slow, low-end machine.
• Two approaches:
1. Feedback-based flow control
2. Rate-based flow control
• The Data Link Layer uses error control mechanisms to ensure that frames are transmitted with a certain level of accuracy.
Types of error
• Single bit error
• Multiple bit error
• Burst bit error
• Error codes are examined in the link layer because this is the first place we run up against the problem of reliably transmitting groups of bits.
• The codes are reused in other layers because reliability is an overall concern.
• Error-correcting codes are also seen in the physical layer, for noisy channels.
• Commonly they are used in the link, network, and transport layers.
• Error codes have been developed after long fundamental research in mathematics.
• Many protocol standards take their codes from this large field of mathematics.
Error-Detecting Codes
The error-detecting codes discussed here are linear, systematic block codes:
1. Parity.
2. Checksums.
• Hamming code is a linear code that can detect up to two-bit errors and correct single-bit errors.
• In Hamming code, the source encodes the message by adding redundant bits. These redundant bits are generated and inserted at specific positions in the message to accomplish the error detection and correction process.
• Once the redundant bits are embedded in the message, it is sent to the receiver.
What is a Hamming code?
• Codeword: b1 b2 b3 b4 …
• Check bits: the bits at positions that are powers of 2 (p1, p2, p4, p8, p16, …).
• The remaining positions (m3, m5, m6, m7, m9, …) are filled with the m data bits.
• An example of the Hamming code with m = 7 data bits and r = 4 check bits is given on the next slide.
• Consider a message having four data bits (D) which is to be transmitted as a 7-bit codeword
by adding three error control bits. This would be called a (7,4) code. The three bits to be
added are three EVEN Parity bits (P), where the parity of each is computed on different
subsets of the message bits as shown below.
Bit position:                          7 6 5 4 3 2 1
7-bit codeword:                        D D D P D P P
P1, even parity over bits 1, 3, 5, 7:  D - D - D - P
P2, even parity over bits 2, 3, 6, 7:  D D - - D P -
P4, even parity over bits 4, 5, 6, 7:  D D D P - - -
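The parity equations in this table can be checked with a short Python sketch of the (7,4) layout shown above (bit positions 7..1, data in positions 7, 6, 5, 3 and parity in 4, 2, 1); the function names are illustrative.

def hamming74_encode(d7, d6, d5, d3):
    """Build the 7-bit codeword [b7..b1] = D D D P D P P with even parity."""
    p1 = d7 ^ d5 ^ d3             # even parity over bit positions 1, 3, 5, 7
    p2 = d7 ^ d6 ^ d3             # even parity over bit positions 2, 3, 6, 7
    p4 = d7 ^ d6 ^ d5             # even parity over bit positions 4, 5, 6, 7
    return [d7, d6, d5, p4, d3, p2, p1]

def hamming74_syndrome(cw):
    """Return 0 if all checks pass, otherwise the position (1..7) of a single-bit error."""
    b7, b6, b5, b4, b3, b2, b1 = cw
    s1 = b1 ^ b3 ^ b5 ^ b7
    s2 = b2 ^ b3 ^ b6 ^ b7
    s4 = b4 ^ b5 ^ b6 ^ b7
    return 4 * s4 + 2 * s2 + s1

if __name__ == "__main__":
    cw = hamming74_encode(1, 0, 1, 1)
    assert hamming74_syndrome(cw) == 0     # a clean codeword passes every parity check
    cw[7 - 6] ^= 1                         # flip the bit at position 6 (list index 1)
    assert hamming74_syndrome(cw) == 6     # the syndrome points at the flipped position
    print("error located at bit position", hamming74_syndrome(cw))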
1. Parity Check
A parity bit is added to every data unit so that the total number of 1s is even (or odd for odd-parity).
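A minimal Python sketch of even parity on a bit string (illustrative only):

def add_even_parity(bits: str) -> str:
    """Append one parity bit so the total number of 1s is even."""
    parity = bits.count("1") % 2          # 1 if the number of 1s is currently odd
    return bits + str(parity)

def check_even_parity(unit: str) -> bool:
    """Accept the data unit only if the total number of 1s is even."""
    return unit.count("1") % 2 == 0

if __name__ == "__main__":
    sent = add_even_parity("1011001")     # four 1s -> parity bit 0
    assert check_even_parity(sent)
    corrupted = "0" + sent[1:]            # flip the first bit in transit
    assert not check_even_parity(corrupted)   # the single-bit error is detected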
2. Checksum (used for error detection)
The sender follows these steps:
1. The unit is divided into k sections, each of n bits.
2. All sections are added using one's complement to get the sum.
3. The sum is complemented and becomes the checksum.
4. The checksum is sent with the data.
The receiver follows these steps:
1. The unit is divided into k sections, each of n bits.
2. All sections are added using one's complement to get the sum.
3. The sum is complemented.
4. If the result is zero, the data are accepted; otherwise, they are rejected.
Implementation of checksum
The sender initializes the checksum to 0 and adds all the data items and the checksum.
The sum, 36, cannot be expressed in 4 bits. The extra two bits are wrapped around and added to the sum to create the wrapped-sum value 6.
The wrapped sum is then complemented, resulting in the checksum value 9 (15 − 6 = 9).
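A short Python sketch of the wrapped-sum arithmetic above. The data items 7, 11, 12, 0, 6 are assumed for illustration (they sum to 36); the slides state only the sum.

def ones_complement_sum(values, bits=4):
    """Add the values, wrapping any carry back into the low bits (one's complement)."""
    mask = (1 << bits) - 1                # 0xF for 4-bit words
    total = sum(values)
    while total > mask:                   # wrap extra carry bits around
        total = (total & mask) + (total >> bits)
    return total

if __name__ == "__main__":
    data = [7, 11, 12, 0, 6]              # assumed data items; they sum to 36
    wrapped = ones_complement_sum(data)   # 36 -> 100100b -> 0100b + 10b = 6
    checksum = 0xF - wrapped              # complement: 15 - 6 = 9
    print(wrapped, checksum)              # 6 9
    # Receiver: adding data and checksum, then complementing, must give 0.
    assert 0xF - ones_complement_sum(data + [checksum]) == 0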
2. Checksum
• Internet checksum
• Sender site:
1. The message is divided into 16-bit words.
2. The value of the checksum word is set to 0.
3. All words including the checksum are added using one’s complement addition.
4. The sum is complemented and becomes the checksum.
5. The checksum is sent with the data.
• Receiver site:
1. The message (including checksum) is divided into 16-bit words.
2. All words are added using one’s complement addition.
3. The sum is complemented and becomes the new checksum.
4. If the value of checksum is 0, the message is accepted; otherwise, it is rejected.
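A Python sketch of the 16-bit Internet checksum procedure above, assuming for simplicity that the message length is an even number of bytes:

import struct

def internet_checksum(data: bytes) -> int:
    """16-bit one's complement of the one's complement sum of 16-bit words."""
    assert len(data) % 2 == 0, "pad the message to an even number of bytes first"
    total = 0
    for (word,) in struct.iter_unpack("!H", data):   # big-endian 16-bit words
        total += word
        total = (total & 0xFFFF) + (total >> 16)     # wrap the carry around
    return (~total) & 0xFFFF                         # complement -> checksum

if __name__ == "__main__":
    msg = b"\x45\x00\x00\x1c\x00\x11\x00\x00"        # arbitrary example bytes
    cks = internet_checksum(msg)
    # Receiver: checksumming the message plus the checksum must give 0.
    assert internet_checksum(msg + struct.pack("!H", cks)) == 0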
Example
Now suppose the receiver receives the pattern sent in the previous example and there is no error:
10101001 00111001 00011101
When the receiver adds the three sections, it gets all 1s, which, after complementing, is all 0s, showing that there is no error.
10101001
00111001
00011101
Sum         11111111
Complement  00000000   means that the pattern is OK.
● Protocols in the data link layer are designed so that this layer can perform its basic functions: framing, error control, and flow control.
● Framing is the process of dividing the bit stream from the physical layer into data frames whose size ranges from a few hundred to a few thousand bytes.
● Flow control regulates the speed of delivery so that a fast sender does not drown a slow receiver.
3.1 Introduction
✔ The Data Link layer exists as a connecting layer between the software processes of the layers above it and the
Physical layer below it (Fig.a)
● It prepares the Network layer packets for transmission across some form of media, be it copper, fiber, or the
atmosphere.
● The Data Link layer is embodied in a physical entity, such as an Ethernet network interface card (NIC), which plugs into the system bus of a computer and makes the connection between the running software processes on the computer and the physical media.
● Software associated with the NIC enables it to prepare data for transmission and encode it as signals to be sent on the associated media.
Types of Data Link Protocols
Data link protocols in this section assume:
○ Unidirectional data transfer
○ Transmitting and receiving layers are always ready; processing time is insignificant
Utopian Simplex Protocol
● Protocol consists of two procedures: sender and receiver
● Both run in data link layer of source and destination machines respectively
● No sequence numbers or acknowledgements are used
● Only frame_arrival event type is used to indicate arrival of undamaged frame
● Sender is in infinite while loop pumping data out onto the channel as fast as
possible
● Loop consists of three actions:
○ Fetch a packet from network layer
○ Construct an outbound frame using the variable s
○ Send the frame on its way
● Only the info field of the frame is used by the protocol
Utopian Simplex Protocol
Summary
The Simplex protocol is a hypothetical protocol designed for unidirectional data transmission over an ideal channel, i.e., a channel through which transmission can never go wrong. It has distinct procedures for the sender and the receiver. The sender simply sends all its data onto the channel as soon as they are available in its buffer. The receiver is assumed to process all incoming data instantly. It is hypothetical because it handles neither flow control nor error control.
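A minimal Python sketch of the utopian simplex protocol as summarized above; the names and the in-memory "channel" are illustrative stand-ins.

from collections import deque

physical_channel = deque()        # a perfect, error-free channel modeled as a FIFO queue

class Frame:
    def __init__(self, info):
        self.info = info          # only the info field is used by this protocol

def sender(packets):
    """Pump frames onto the channel as fast as possible: no acks, no sequence numbers."""
    for packet in packets:
        s = Frame(packet)         # construct an outbound frame using the variable s
        physical_channel.append(s)

def receiver(deliver):
    """Treat each queued frame as a frame_arrival event and hand its payload upward."""
    while physical_channel:
        r = physical_channel.popleft()
        deliver(r.info)           # pass the packet up to the network layer

if __name__ == "__main__":
    sender(["pkt0", "pkt1", "pkt2"])
    receiver(print)               # prints pkt0, pkt1, pkt2 in order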
3.1 Stop and wait protocol (Error free channel)
Stop and wait protocol (Error free channel)
● Protocols in which the sender sends one frame and then waits for an acknowledgement before proceeding are called stop-and-wait protocols
● Only the sender or the receiver can send a frame at a time, so a half-duplex connection is formed
● The sending data link layer need not inspect the incoming frame, as it will always be an acknowledgement
Summary
The Stop-and-Wait protocol is also for a noiseless channel. It provides unidirectional data transmission without any error control facilities. However, it provides flow control so that a fast sender does not drown a slow receiver. The receiver has a finite buffer size and finite processing speed. The sender can send a frame only when it has received an indication from the receiver that it is available for further data processing.
● Advantage:
● The next frame is transmitted only after the previous frame is acknowledged, so there is no chance of a frame being lost
● Disadvantage:
1. It makes the transmission process slow
2. If the two devices are far apart, a lot of time is wasted waiting for acknowledgements
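A Python sketch of stop-and-wait over an error-free channel, following the description above; the queues and function names are illustrative, and the receiver step stands in for real concurrency.

from collections import deque

data_channel = deque()    # frames travelling from sender to receiver
ack_channel = deque()     # dummy acknowledgement frames travelling back

def receiver_step():
    """Receive one frame, hand it to the network layer, and send back an ack."""
    if data_channel:
        packet = data_channel.popleft()
        print("delivered:", packet)
        ack_channel.append("ACK")        # the ack's content does not matter here

def stop_and_wait_send(packets):
    """Send one frame, then block until its acknowledgement arrives."""
    for packet in packets:
        data_channel.append(packet)      # transmit the frame
        while not ack_channel:           # wait: on this error-free channel the ack always comes
            receiver_step()              # stands in for the receiver running concurrently
        ack_channel.popleft()            # consume the ack, then send the next frame

if __name__ == "__main__":
    stop_and_wait_send(["pkt0", "pkt1", "pkt2"])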
3.1 Stop and wait protocol (Noisy channel)
[Fig. a]
3.1 Stop and wait protocol scenario (Noisy channel)
● Consider a scenario where machine A's network layer hands a series of packets to its data link layer, which must deliver them to the network layer of machine B via B's data link layer
● The network layer on machine B has no way of knowing whether a packet has been lost or duplicated
● Hence the data link layer must ensure that no combination of transmission errors can cause a lost or duplicated packet to be delivered to B's network layer
3.1 Stop and wait protocol scenario (Noisy channel)
[Fig. b]
3.1 Stop and wait protocol scenario (Noisy channel)
● To avoid this, the receiver needs to be able to distinguish a new frame from a retransmission of the previous one
● The solution is for the sender to put a sequence number in the header of each frame it sends
● The receiver can check the sequence number to see whether the frame is a new one or a duplicate
● The number of bits assigned to the sequence number in the header should be kept to a minimum (1 bit is sufficient here)
● If the sender sends a frame m and it is damaged or lost, the receiver will not acknowledge it, and the sender keeps sending it again instead of its successor, frame m + 1
● The only ambiguity is therefore between a frame and its direct successor: depending on whether the acknowledgement for frame m is received, the sender transmits either m again or m + 1
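A Python sketch of stop-and-wait ARQ with a 1-bit sequence number over a lossy channel; the loss model and names are illustrative, and a "timeout" is simply modelled as no acknowledgement arriving in the current round.

import random

random.seed(42)                          # deterministic run for the example

def lossy(item, p=0.3):
    """Model an unreliable channel: return None with probability p (item lost)."""
    return None if random.random() < p else item

def stop_and_wait_arq(packets):
    send_seq = 0                         # sender's 1-bit sequence number
    expected = 0                         # receiver's next expected sequence number
    delivered = []
    for packet in packets:
        while True:                      # keep retransmitting frame m until it is acked
            frame = lossy((send_seq, packet))
            ack = None
            if frame is not None:        # receiver side
                seq, data = frame
                if seq == expected:      # new frame: deliver it and advance the window
                    delivered.append(data)
                    expected ^= 1
                ack = lossy(seq)         # duplicates are discarded but still acknowledged
            if ack == send_seq:          # correct ack received: move on to the next packet
                send_seq ^= 1
                break
            # no ack this round: treat it as a timeout and retransmit the same frame
    return delivered

if __name__ == "__main__":
    print(stop_and_wait_arq(["m0", "m1", "m2", "m3"]))   # delivered in order, no duplicates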
3.1 AUTOMATIC REPEAT REQUEST (ARQ) PROTOCOL
[Fig. c]
3.1 SLIDING WINDOW PROTOCOL
● To achieve full-duplex transmission, we could run two instances of the previous protocols, each using a separate link for simplex transmission in one direction
● Each link would then have a “FORWARD CHANNEL” for data and a “REVERSE CHANNEL” for acknowledgements
● Hence there is a need for full-duplex transmission over the same link, which is what the sliding window protocol provides
3.1 SLIDING WINDOW PROTOCOL
● In this protocol, data frames from A to B are intermixed with acknowledgement frames from A to B
● By looking at the “kind” field in the header of an incoming frame, the receiver can tell whether it is data or an acknowledgement.
Data Link Layer Header
3.1 SLIDING WINDOW PROTOCOL
● A further refinement: when a data frame arrives, instead of sending a control frame immediately, the receiver restrains itself until the next data frame is passed down from its own network layer
● Acknowledgement is attached to this next outgoing data frame, using the acknowledgement field in the header
● Thus, acknowledgement gets a free ride with the outgoing data packet and no bandwidth is wasted
● This technique of delaying acknowledgements temporarily so that they can be hooked to the next outgoing
frame is known as PIGGYBACKING
Sender Window & Receiver Window
3.4 SLIDING WINDOW PROTOCOL
● Advantages:
Better use of available channel bandwidth
Number of frames sent is reduced
Thus reducing the processing load at the receiver
● Disadvantages:
How long the data link layer should wait before piggybacking the acknowledgement is not known
If the DLL waits longer than the sender's timeout period, retransmission takes place, duplicating the frame
An ad-hoc scheme is needed: wait a fixed amount of time for a frame to piggyback the acknowledgement on, or else send it as a separate frame
3.1 TYPES OF SLIDING WINDOW PROTOCOL
3.1 1-BIT SLIDING WINDOW PROTOCOL
● A sliding window protocol with window size 1 effectively uses stop-and-wait, because the sender transmits one frame and waits for its acknowledgement before sending the next one
● Normally, one of the two data link layers goes first and transmits the first frame
● The starting machine fetches data from its network layer, builds a frame, and sends it
● At the receiver, the frame is checked; if valid, it is passed to the network layer and the receiver's window is slid up
3.1 1-BIT SLIDING WINDOW PROTOCOL
3.1 1-BIT SLIDING WINDOW PROTOCOL – ISSUES
● Consider computer A trying to send frame A0 to computer B when A's timeout interval is too short
● In such a scenario, A will time out repeatedly and send a series of identical frames, all with sequence number 0 and acknowledgement number 1
● B will reject every duplicate frame and will send A a frame with sequence number 0 and acknowledgement number 0
● Eventually A will send the correct packet
● The protocol works correctly in this scenario, but only after a series of retransmissions and timeouts
[Fig. 3.4]
3.2 GO BACK-N PROTOCOL
● In this protocol, the sender can transmit up to a window of w frames before blocking transmission while
waiting for acknowledgement
● With large enough choice for w, sender will be able to continuously transmit frames since acknowledgements
for previous frames will arrive before the window becomes full
● To find correct value of w, we consider how many frames can fit in the channel as they propagate from sender
to receiver
3.2 GO BACK-N PROTOCOL
3.2 GO BACK-N PROTOCOL
(example of window size)
● Consider a link with a bandwidth of 50 kbps and a one-way transit time of 250 msec
● Thus, the bandwidth-delay product is 12.5 kbit (50 kbps × 250 msec), or 12.5 frames of 1000 bits each
● Assume the sender begins sending frame 0 and then sends a new frame every 20 msec
● By the time 26 frames have been sent, 520 msec have elapsed and the acknowledgement for frame 0 will have just arrived
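The arithmetic behind this example, as a quick Python check (values from the text above; w = 2·BD + 1 is the standard rule for keeping the pipe full):

bandwidth_kbps = 50                 # 50 kbps link
one_way_delay_ms = 250              # 250 msec one-way transit time
frame_bits = 1000                   # 1000-bit frames

bd_frames = bandwidth_kbps * one_way_delay_ms / frame_bits   # 12.5 frames fit in the pipe one way
frame_time_ms = frame_bits / bandwidth_kbps                  # 20 msec to transmit one frame
ack0_arrival_ms = frame_time_ms + 2 * one_way_delay_ms       # 20 + 500 = 520 msec
frames_sent_by_then = ack0_arrival_ms / frame_time_ms        # 520 / 20 = 26 frames
window = 2 * bd_frames + 1                                   # 26: keeps the sender from stalling

print(bd_frames, frames_sent_by_then, window)                # 12.5 26.0 26.0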
3.4.2 GO BACK-N PROTOCOL
(example of window size)
● For smaller window sizes for this example, utilization of the link will be less than 100% since the sender will
be blocked sometimes.
3.4.2 GO BACK-N PROTOCOL (pipelining)
● Pipelining frames over an unreliable communication channel may result in lost or damaged frames at the receiver side.
● If a frame in the middle of the stream is damaged, it will be discarded, but the question arises of what to do with the correct frames following it.
● The first option (w = 1) is for the receiver to discard all subsequent frames, sending no acknowledgements for them
● The sender will then time out and retransmit all unacknowledged frames in order, starting with the damaged one
3.2 GO BACK-N PROTOCOL (Pipelining)
3.2 GO BACK-N PROTOCOL (Pipelining)
● In the first option, DLL refuses to accept any frame other than the next one
Advantages of Go back N ARQ
3.1 GO BACK-N PROTOCOL (Issues)
Disadvantages of Go back N
3.1 SELECTIVE REPEAT PROTOCOL
The go-back-n protocol works well if errors are less, but if the line is poor it wastes a lot of bandwidth on
retransmitted frames. An alternative strategy, the selective repeat protocol, is to allow the receiver to accept
and buffer the frames following a damaged or lost one.
• Selective Repeat attempts to retransmit only those packets that are actually lost (due to errors) :
• Receiver must be able to accept packets out of order.
• Since receiver must release packets to higher layer in order, the receiver must be able to buffer some packets.
3.1 SELECTIVE REPEAT PROTOCOL
● In this protocol, sender and receiver both maintain a window of outstanding and acceptable
sequence numbers respectively
● Receiver has a buffer reserved for each sequence number within its fixed window
3.1 SELECTIVE REPEAT PROTOCOL
● When a frame arrives, its sequence number is checked to see whether it falls within the window
If so, and it has not been received before, it is accepted and stored
It is stored even if it is not the next packet expected by the network layer; it is kept in the DLL and not passed to the network layer until all lower-numbered frames have been delivered to the network layer
● In Selective Repeat ARQ, the size of the sender and receiver windows must be at most one-half of 2^m (where m is the number of sequence-number bits).
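A quick numeric check of this window-size rule, where m is the number of sequence-number bits (standard arithmetic, not from the slides):

# For m sequence-number bits there are 2**m sequence numbers; in Selective Repeat
# the sender and receiver windows must each be at most half of that.
for m in (1, 2, 3, 4):
    seq_numbers = 2 ** m
    print(f"m={m}: {seq_numbers} sequence numbers -> window size <= {seq_numbers // 2}")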
3.1 SELECTIVE REPEAT PROTOCOL
Comparison between Stop and wait and sliding window Protocol
Comparison between Go-Back-N and Selective Repeat
• Basic: Go-Back-N retransmits all the frames sent after the frame suspected to be damaged or lost; Selective Repeat retransmits only those frames that are suspected to be lost or damaged.
• Bandwidth utilization: if the error rate is high, Go-Back-N wastes a lot of bandwidth; Selective Repeat wastes comparatively less bandwidth on retransmissions.
• Complexity: Go-Back-N is less complicated; Selective Repeat is more complex, as it requires extra logic plus sorting and storage at the sender and receiver.
• Window size: Go-Back-N uses N − 1; Selective Repeat uses ≤ (N + 1)/2.
Static Channel Allocation in LANs and WANs
• There are two schemes for allocating a single channel among competing users:
► The traditional way of allocating a single channel, such as a telephone trunk, among multiple competing users is to chop up its capacity using one of the multiplexing schemes, such as FDM or TDM.
► FDM
If there are N users, the bandwidth is divided into N equal portions, with each user being assigned one portion.
Since each user has a private frequency band, there is no interference among users.
A wireless example is FM radio stations. Each station gets a portion of the FM band and uses it most of the
time to broadcast its signal
Problems with FDM: when some users are quiet, their bandwidth is simply lost. They are not using it and no
one else is allowed to use it either.
A static allocation is a poor fit to most computer systems, in which data traffic is extremely bursty, often with peak traffic ratios of 1000:1. Consequently most of the channel will be idle most of the time.
Dynamic Channel Allocation in LANs and WANs
In this method, neither a fixed frequency nor a fixed time slot is allotted to a user; a user can use the single channel as per its requirement.
Dynamic Allocation strategies
1. Contention resolution approach
Collision: if two frames are transmitted simultaneously, they overlap in time and the resulting signal is garbled. This event is called a collision. All stations can detect that a collision has occurred. A collided frame must be transmitted again later. No errors other than those generated by collisions occur.
MULTIPLE ACCESS
3.2 ALOHA
Pure ALOHA
Pure ALOHA
Such systems where multiple users share a common channel in a way that leads to conflicts are known
as ‘contention systems’
As seen in Fig. 4.2.1a and b, whenever two frames try to occupy the channel at the same time, a collision occurs and both frames are garbled.
Pure ALOHA - Vulnerable time for collision
Assume that k frames are generated within one frame time, and that on average G frames per frame time are attempted on the channel. The probability of exactly k frames being generated during a frame time follows a Poisson distribution:
Pr[k] = (G^k · e^(-G)) / k!
Pure ALOHA - Throughput
S = G · e^(-2G)
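A quick Python evaluation of S = G·e^(-2G); the maximum throughput, 1/(2e) ≈ 0.184, occurs at G = 0.5.

import math

def pure_aloha_throughput(G):
    """Throughput per frame time of pure ALOHA at offered load G."""
    return G * math.exp(-2 * G)

for G in (0.25, 0.5, 1.0, 2.0):
    print(f"G={G}: S={pure_aloha_throughput(G):.3f}")
# The maximum, S = 1/(2e) ~= 0.184, occurs at G = 0.5.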
Pure ALOHA - Throughput
[Fig. 4.2.3: Throughput versus offered traffic for ALOHA systems.]
Pure ALOHA - Disadvantages
The maximum achievable throughput is low; this is expected, given that all users transmit whenever they want.
Slotted ALOHA
Time is divided into discrete intervals; each interval corresponding to one frame
Stations can only transmit frames at the beginning of the slot
Hence, synchronization is needed, for which a ‘pip’ is emitted at the start of each interval like a clock
Vulnerable period is halved as a station has to wait for its slot to begin.
Slotted ALOHA - Throughput
The probability that no other traffic is present during the same slot as our frame (i.e., that a collision is avoided) is e^(-G).
Thus, throughput per frame time is given as:
S = G · e^(-G)
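The corresponding Python evaluation for slotted ALOHA; the maximum, 1/e ≈ 0.368, occurs at G = 1.

import math

def slotted_aloha_throughput(G):
    """Throughput per frame time of slotted ALOHA at offered load G."""
    return G * math.exp(-G)

for G in (0.5, 1.0, 2.0, 3.0):
    print(f"G={G}: S={slotted_aloha_throughput(G):.3f}")
# The maximum, S = 1/e ~= 0.368, occurs at G = 1 -- twice that of pure ALOHA.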
Slotted ALOHA - Vulnerable Time
Pure ALOHA vs Slotted ALOHA
CARRIER SENSE MULTIPLE ACCESS (CSMA)
• The CSMA protocol was developed to overcome the problem found in ALOHA, i.e., to minimize the chances of collision and so improve the performance.
• The chances of collision can be reduced to a great extent if a station senses the channel before trying to use it.
• Although CSMA can reduce the possibility of collision, it cannot eliminate it completely.
(i) 1-Persistent CSMA
• In this method, a station that wants to transmit data continuously senses the channel to check whether it is idle or busy.
• If the channel is busy, the station waits until it becomes idle.
• When the station detects an idle channel, it immediately transmits the frame with probability 1. Hence it is called 1-persistent CSMA.
• This method has the highest chance of collision, because two or more stations may find the channel idle at the same time and transmit their frames simultaneously.
• When a collision occurs, the stations wait a random amount of time and start all over again.
1-Persistent CSMA (cont..)
Drawback of 1-persistent
• The propagation delay time greatly affects this protocol. Suppose that, just after station 1 begins its transmission, station 2 also becomes ready to send its data and senses the channel. If station 1's signal has not yet reached station 2, station 2 will sense the channel to be idle and will begin its transmission. This results in a collision.
• Even if the propagation delay time were zero, collisions would still occur. If two stations become ready in the middle of a third station's transmission, both will wait until that transmission ends and then begin transmitting exactly simultaneously. This also results in a collision.
(ii) Non-persistent CSMA
• A station with a frame to send senses the channel; if the channel is busy, it waits a random amount of time and then senses the channel again.
• In non-persistent CSMA the station does not continuously sense the channel for the purpose of capturing it the moment it detects the end of the previous transmission.
Advantages and Disadvantages of Non-persistent Method
Advantages:
• It reduces the chances of collision because the stations wait a random amount of time; it is unlikely that two or more stations will wait for the same amount of time and retransmit at the same time.
Disadvantages
• It reduces the efficiency of the network because the channel remains idle even when there may be stations with frames to send.
• This is due to the fact that the stations wait a random amount of time after a collision.
(iii) p-persistent CSMA
•This method is used when channel has time slots such that the time slot duration is equal to or greater than
the maximum propagation delay time.
• If the channel is idle, the station transmits with probability p; with probability q = 1 − p, it waits for the beginning of the next time slot.
• If the next slot is also idle, it again either transmits or waits, with probabilities p and q.
• This process is repeated until either the frame has been transmitted or another station has begun transmitting.
• If another station begins transmitting, the station acts as though a collision has occurred: it waits a random amount of time and starts again.
Advantages of p-persistent
✔ It reduces the chances of collision and improves the efficiency of the network.
1. If the medium is idle, transmit with probability p, and delay for one time unit with probability (1 - p) (time
unit = length of propagation delay) .
2. If the medium is busy, continue to listen until medium becomes idle, then go to Step 1 .
3. If transmission is delayed by one time unit, continue with Step 1
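A Python sketch of these three steps; the per-slot channel model and names are illustrative toys, not part of any standard.

import random

def p_persistent_send(p, channel_idle, max_slots=1000):
    """Decide, slot by slot, when to transmit using the p-persistent rule.

    channel_idle(slot) -> bool stands in for carrier sensing.
    Returns the slot number in which the station finally transmits.
    """
    slot = 0
    while slot < max_slots:
        if not channel_idle(slot):
            slot += 1                     # busy: keep listening until the medium is idle (step 2)
            continue
        if random.random() < p:           # idle: transmit with probability p (step 1)
            return slot
        slot += 1                         # otherwise defer one time unit and repeat (step 3)
    raise RuntimeError("gave up after max_slots")

if __name__ == "__main__":
    random.seed(0)
    idle = lambda slot: slot >= 3         # toy channel: busy for the first three slots
    print("transmitted in slot", p_persistent_send(p=0.3, channel_idle=idle))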
CSMA/CD collision detection
► CSMA/CD, as well as many other LAN protocols, uses the conceptual model as shown in next figure
► At the point marked t0, a station has finished transmitting its frame.
► Any other station having a frame to send may now attempt to do so.
► If two or more stations decide to transmit simultaneously, there will be a collision.
► If a station detects a collision, it aborts its transmission, waits a random period of time, and then tries again
(assuming that no other station has started transmitting in the meantime).
► Therefore our model for CSMA/CD will consist of alternating contention and transmission periods, with the
idle periods occurring when all stations are quiet (e.g for lack of work)
CSMA with Collision Detection
• The minimum time to detect the collision is just the time it takes the signal to propagate from one station to the
other.
• In the worst case, a station cannot be sure that it has seized the channel until it has transmitted for 2τ (where τ is the one-way propagation time between the two farthest stations) without hearing a collision.
• So, CSMA/CD contention can be considered as a slotted ALOHA system with a slot width of 2τ.
• The difference in CSMA/CD compared to slotted ALOHA is that slots in which only one station transmits (i.e., in which the channel is seized) are followed by the rest of a frame.
• This difference greatly improves performance if the frame time is much longer than the propagation time.
CSMA/CD
• Algorithms
• The algorithm of CSMA/CD is:
• When a frame is ready, the transmitting station checks whether the channel is idle or busy.
• If the channel is busy, the station waits until the channel becomes idle.
• If the channel is idle, the station starts transmitting and continually monitors the channel to detect a collision.
• If a collision is detected, the station starts the collision resolution algorithm.
• Otherwise, the station resets the retransmission counters and completes the frame transmission.
• The algorithm of Collision Resolution is:
• The station continues transmission of the current frame for a specified time along with a jam signal, to ensure that all the
other stations detect collision.
• The station increments the retransmission counter.
• If the maximum number of retransmission attempts is reached, then the station aborts transmission.
• Otherwise, the station waits for a backoff period, which is generally a function of the number of collisions, and restarts the main algorithm.
[Flowchart: summary of the CSMA/CD transmission and collision-resolution algorithms]
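A simplified Python sketch of the CSMA/CD transmit and collision-resolution logic above. Binary exponential backoff is used here as one common choice of backoff function (the slides say only that the backoff is a function of the number of collisions); the channel object and timing constants are hypothetical stand-ins.

import random

MAX_ATTEMPTS = 16
SLOT_TIME = 51.2e-6            # illustrative slot time in seconds (assumed value)

class ToyChannel:
    """Stand-in for the medium: reports a collision on the first N transmit attempts."""
    def __init__(self, collisions=2):
        self.remaining_collisions = collisions
    def is_idle(self):
        return True                          # always idle in this toy model
    def transmit(self, frame):
        if self.remaining_collisions > 0:
            self.remaining_collisions -= 1
            return True                      # True means a collision was detected
        return False
    def jam(self):
        pass                                 # a real station would send a jam signal here
    def wait(self, seconds):
        pass                                 # a real station would actually back off

def csma_cd_send(frame, channel):
    """1-persistent CSMA/CD with binary exponential backoff (one common choice)."""
    attempts = 0
    while attempts < MAX_ATTEMPTS:
        while not channel.is_idle():          # carrier sense: wait for an idle channel
            pass
        collided = channel.transmit(frame)    # transmit while monitoring for collisions
        if not collided:
            return True                       # success: reset counters and finish
        channel.jam()                         # ensure every station notices the collision
        attempts += 1
        k = min(attempts, 10)                 # back off a random number of slot times
        channel.wait(random.randint(0, 2**k - 1) * SLOT_TIME)
    return False                              # retry limit reached: abort transmission

if __name__ == "__main__":
    print(csma_cd_send("frame-0", ToyChannel(collisions=2)))   # True after two collisions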