Data Link Layer - Unit II

The Data Link Layer

Introduction
The data link layer specifies the algorithms for achieving reliable,
efficient communication between two adjacent machines.
By adjacent, we mean the two machines are physically connected
by a communication channel that acts conceptually like a wire
(e.g., a coaxial cable or a phone line).
Data link layer design issues

The data link layer has a number of different functions to carry
out. These functions include
 providing a well-defined service interface to the network layer,
 determining how the bits of the physical layer are grouped into
frames,
 dealing with transmission errors, and
 regulating the flow of frames so that slow receivers are not
swamped by fast senders.
Data link layer design issues

Functions of the Data Link Layer


Relationship between packets and frames.
Data Link Layer Design Issues
• Services Provided to the Network Layer
• Framing
• Error Control
• Flow Control
Services provided to the network layer

Services provided to the network layer


The principal service is transferring data from the network layer
on the source machine to the network layer on the destination
machine.
A process in the network layer hands some bits to the data link
layer, which in turn takes responsibility for delivering them to
the network layer of the destination machine.
(Figure: virtual communication between peer layers 1-4 versus the actual communication path down one protocol stack and up the other.)


Services provided to the network layer
An example: a WAN subnet consisting of routers connected
by point-to-point leased telephone lines, as shown in the figure.

(Figure: placement of the data link protocol. In each router the data link layer process exchanges frames with its peer over the line, while the routing process handles the packets carried inside those frames.)


Services provided to the network layer

• When a frame arrives at a router, the hardware
verifies the checksum and passes the frame to the
data link layer software.
• The data link layer software, which may be
embedded in a chip on the network adapter board,
checks whether this is the frame expected and, if so,
gives the packet in the payload field to the routing
software (network layer).
Services provided to the network layer

 The routing software chooses the appropriate outgoing line


and passes the packet back down to the data link layer
software.
• The data link layer software transmits it over the
selected outgoing line.
Framing
 What is framing ?
The process of breaking the bit stream offered by the
physical layer into discrete segments (frames).
 Why framing ?
• So that a checksum can be computed for each frame.
• So that lost or damaged frames can be acknowledged and retransmitted individually.
 How to do framing?
Framing
 Method 1: character count: in this method a field in the
header is used to specify the number of characters in the frame.
When the data link layer at the destination sees the character
count, it knows how many characters follow, and hence
where the end of the frame is.

(Figure: a character stream divided into frames, each beginning with its character count.)
Framing
A character stream. (a) Without errors. (b)With one error.
Framing
The trouble with this algorithm is that the count can be altered by
a transmission error. For example, if the character count of 7 in
the last frame is changed to 5, the destination will lose track of
where the frame ends and will misinterpret the data that follow.
Sending a frame back to the source asking for a retransmission
does not help either, since the destination does not know how
many characters to skip over to get to the start of the
retransmission. For this reason the character count method is
rarely used anymore.
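A minimal sketch (not from the original slides) of count-based framing in Python, assuming each count byte counts itself:

def split_frames(stream: bytes):
    # Split a byte stream into frames using a leading per-frame count byte.
    frames = []
    i = 0
    while i < len(stream):
        count = stream[i]                 # the count includes the count byte itself
        frames.append(stream[i:i + count])
        i += count                        # a corrupted count misplaces every later frame
    return frames

# Three frames of 5, 5 and 8 characters (counts included):
stream = bytes([5, 1, 2, 3, 4,  5, 6, 7, 8, 9,  8, 0, 1, 2, 3, 4, 5, 6])
print(split_frames(stream))
# If the first count were corrupted from 5 to 7, this frame boundary and every
# following one would be found in the wrong place.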
Framing
Method 2: Starting and ending flags with byte/character stuffing

(a) A frame delimited by flag bytes.


(b) Four examples of byte sequences before and after stuffing.
Framing
Method 3: starting and ending flags (e.g., 01111110),
with bit stuffing.
This technique allows data frames to contain an arbitrary number
of bits and allows character codes with an arbitrary number of
bits per character. Each frame begins and ends with a special bit
pattern, 01111110, called a flag byte.
Framing
Whenever the sender's data link layer encounters five
consecutive 1s in the data, it automatically stuffs a 0 bit
into the outgoing bit stream. When the receiver sees five
consecutive 1s followed by a 0 bit, it automatically
destuffs (deletes) the 0 bit.
Framing (3)

Bit stuffing
(a) The original data.
(b)The data as they appear on the line.
(c) The data as they are stored in receiver’s memory after destuffing.
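A minimal sketch (not from the original slides) of bit stuffing and destuffing in Python; bits are kept as a '0'/'1' string for readability, and the destuffer assumes the flags have already been stripped:

def stuff(bits: str) -> str:
    # Insert a 0 after every five consecutive 1s in the data.
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')     # stuffed bit
            run = 0
    return ''.join(out)

def destuff(bits: str) -> str:
    # Delete the 0 that follows every five consecutive 1s.
    out, run, skip = [], 0, False
    for b in bits:
        if skip:                # this bit was stuffed by the sender; drop it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            skip, run = True, 0
    return ''.join(out)

data = '01111111110'                   # nine consecutive 1s
assert stuff(data) == '011111011110'   # matches the HDLC example later in these notes
assert destuff(stuff(data)) == data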
Framing
 Method 4: physical layer coding violations.
e.g., The IEEE 802 standard LAN uses the following (Manchester)
encoding scheme: a 1 bit is encoded as HL, and a 0 bit as LH.
Character count can be combined with others for extra safety.
Physical Layer Coding Violation.

(Figure: Manchester encoding of the bit sequence 0 1 0 1 1 0.)
Error control
 The goal (of a reliable connection-oriented service) is to ensure
that each frame is ultimately passed to the network layer at the
destination exactly once, no more and no less, and in the proper
order. To ensure this, the following mechanisms can be used:

 Acknowledgements from the receiver to the sender.


 Timers for preventing senders from hanging forever.
 Sequence numbers for dealing with duplicated frames.
Flow Control
Flow control specifies what to do with a sender that systematically
wants to transmit frames faster than the receiver can accept them.
Some kind of feedback mechanism is needed so that the sender
can be made aware of whether or not the receiver is able to keep
up.
This is frequently integrated with error handling for convenience.
Error Detection and Correction

Transmission errors are a fact of life.
Sources of errors:
 Thermal noise and electromagnetic interference.
 Distortion of the amplitude, propagation speed, and phase of signals.
 Echoes and other reflections.
On some media, errors tend to come in bursts rather than singly.
Note

In a single-bit error, only 1 bit in the data


unit has changed.
Single-bit error
Note

A burst error means that 2 or more bits


in the data unit have changed.
Burst error of length 8
Error Detection and Correction

 There are two basic strategies for dealing with errors:
 Include enough redundant information along with each block of
data sent to enable the receiver to deduce what the transmitted
data must have been. This strategy uses error-correcting
codes.
 The other way is to include only enough redundancy to allow the
receiver to deduce that an error occurred, but not which error,
and have it request a retransmission. This strategy uses error-
detecting codes.
Note

To detect or correct errors, we need to


send extra (redundant) bits with data.
The structure of encoder and decoder
Error Detection and Correction

Error-correcting codes.
Basic idea: including redundant information along with
each block of data sent to enable the receiver to deduce what
the transmitted character must have been.
An n-bit unit containing m data bits and r redundant (check)
bits is referred to as an n-bit codeword, where n = m + r.
Hamming Code

A Hamming code is a linear error-correcting code named
after its inventor, Richard Hamming. Hamming codes can
detect and correct single-bit errors, and can detect (but not
correct) double-bit errors.
Hamming Code

 The bits of the codeword are numbered consecutively,
starting with bit 1 at the left end.
 The bits that are powers of 2 (1, 2, 4, 8, 16, etc.) are check
bits. The rest (3, 5, 6, 7, etc.) are filled up with the data bits.
Hamming Code

 Each check bit forces the parity of some collection of bits,
including itself, to be even (or odd):
• Each parity bit calculates the parity for some of the bits
in the code word. The position of the parity bit
determines the sequence of bits that it alternately
checks and skips.
Position 1: check 1 bit, skip 1 bit, check 1 bit, skip 1
bit, etc. (1,3,5,7,9,11,13,15,...)
Position 2: check 2 bits, skip 2 bits, check 2 bits, skip 2
bits, etc. (2,3,6,7,10,11,14,15,...)
Position 4: check 4 bits, skip 4 bits, check 4 bits, skip 4
bits, etc. (4,5,6,7,12,13,14,15,20,21,22,23,...)
Hamming Code

Position 8: check 8 bits, skip 8 bits, check 8 bits, skip 8


bits, etc. (8-15,24-31,40-47,...)
Position 16: check 16 bits, skip 16 bits, check 16 bits,
skip 16 bits, etc. (16-31,48-63,80-95,...)
Position 32: check 32 bits, skip 32 bits, check 32 bits,
skip 32 bits, etc. (32-63,96-127,160-191,...)
Etc.

• Set a parity bit to 1 if the total number of ones in the


positions it checks is odd. Set a parity bit to 0 if the
total number of ones in the positions it checks is even.
 Here is an example:
 A byte of data: 10011010
 Create the data word, leaving spaces for the parity bits:
_ _ 1 _ 0 0 1 _ 1 0 1 0
 Calculate the parity for each parity bit (a ? marks the bit position being set):
 Position 1 checks bits 1,3,5,7,9,11:
? _ 1 _ 0 0 1 _ 1 0 1 0. Even parity, so set position 1 to 0:
0 _ 1 _ 0 0 1 _ 1 0 1 0.
 Position 2 checks bits 2,3,6,7,10,11:
0 ? 1 _ 0 0 1 _ 1 0 1 0. Odd parity, so set position 2 to 1:
0 1 1 _ 0 0 1 _ 1 0 1 0.
 Position 4 checks bits 4,5,6,7,12:
0 1 1 ? 0 0 1 _ 1 0 1 0. Odd parity, so set position 4 to 1:
0 1 1 1 0 0 1 _ 1 0 1 0.
 Position 8 checks bits 8,9,10,11,12:
0 1 1 1 0 0 1 ? 1 0 1 0. Even parity, so set position 8 to 0:
0 1 1 1 0 0 1 0 1 0 1 0.
 Code word: 011100101010.
Hamming Code

 At the receiver side,
 A counter is initialized to zero.
 Each check bit (position 1, 2, 4, ...) and the data bits it covers are
examined to see if the parity is correct.
 If not, the position number of that check bit is added to the counter.
Hamming Code

 If the counter is zero at the end, the codeword is accepted as
valid. Otherwise, the counter contains the position of the incorrect bit.

 e.g., if the parities were wrong when examining check bits 1,
2, and 4, we can conclude that bit 7 (= 1 + 2 + 4) was inverted, since
bit 7 is the only one checked by bits 1, 2, and 4.
Error Detection and Correction

The above example created a code word of 011100101010.
Suppose the word that was received was 011100101110 instead.
Then the receiver can calculate which bit is wrong and correct
it. The method is to verify each check bit.
In general, check each parity bit and add up the positions of the check
bits that are wrong; the sum gives the location of the bad bit.
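The procedure above can be captured in a short Python sketch (not from the original slides); it uses even parity with check bits at positions 1, 2, 4, 8, ... and reproduces the worked example:

def hamming_encode(data: str) -> str:
    # Find the number of check bits r such that 2**r >= m + r + 1.
    m = len(data)
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    n = m + r
    code = [0] * (n + 1)              # index 0 unused; bit positions run 1..n
    it = iter(data)
    for pos in range(1, n + 1):
        if pos & (pos - 1):           # not a power of two: a data position
            code[pos] = int(next(it))
    for p in range(r):                # set each check bit for even parity
        cb = 1 << p
        code[cb] = sum(code[pos] for pos in range(1, n + 1) if pos & cb) % 2
    return ''.join(map(str, code[1:]))

def hamming_check(code: str) -> int:
    # Return 0 if consistent, otherwise the position of the inverted bit.
    n = len(code)
    bits = [0] + [int(b) for b in code]
    counter, p = 0, 0
    while (1 << p) <= n:
        cb = 1 << p
        if sum(bits[pos] for pos in range(1, n + 1) if pos & cb) % 2:
            counter += cb             # add the failing check bit's position number
        p += 1
    return counter

assert hamming_encode('10011010') == '011100101010'   # the worked example above
assert hamming_check('011100101010') == 0
assert hamming_check('011100101110') == 10            # bit 10 was inverted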
Error-Correcting Codes

Use of a Hamming code to correct burst errors.


 Try one yourself.
Test whether these code words are correct, assuming they were
created using an even-parity Hamming code. If one is
incorrect, indicate what the correct code word should have
been. Also, indicate what the original data was.
 010101100011.
 111110001100.
 000010001010.
Error-detecting Codes
Error-correcting codes are suitable for simplex channels
where no retransmission can be requested.
 Error-detection plus retransmission is often preferred
because it is more efficient.
Error-detecting Codes
 For example, consider a channel with an error rate of 10^-6 per
bit. Let the block size be 1000 bits.
 To correct a single error per block (by Hamming code), 10 check bits
per block are needed. To transmit 1000 blocks, 10,000 check
bits (overhead) are required.
Error-detecting Codes
 To merely detect a single error, a single parity bit per block will
suffice. To transmit 1000 blocks, only 1000 parity bits plus about one
retransmitted block (given the error rate of 10^-6 per bit) are required.
Error-detecting Codes
Basic approach used for error detection is the use of
redundancy, where additional bits are added to facilitate
detection of errors. Popular techniques are:
 Simple Parity check
 Two-dimensional Parity check
 Checksum
 Cyclic redundancy check
Simple Parity Check (One-dimensional Parity Check)
The most common and least expensive mechanism for error
detection is the simple parity check. In this technique, a redundant
bit, called the parity bit, is appended to every data unit so that the
number of 1s in the unit (including the parity bit) becomes even.
Parity checking is not very robust, since if the number
of bits changed is even, the check bit will be valid and
the error will not be detected.

 Moreover, parity does not indicate which bit contained
the error, even when it can detect it.
 The data must be discarded entirely and re-
transmitted from scratch.
 While parity checking is not very robust, it uses only a
single bit, resulting in the least overhead, and it does
allow the restoration of a missing bit when which bit
is missing is known.
Two-dimension Parity Check
 Performance can be improved by using two-dimensional parity
check, which organizes the block of bits in the form of a table.
 Parity check bits are calculated for each row, which is equivalent
to a simple parity check bit.
 Parity check bits are also calculated for all columns then both are
sent along with the data.
 At the receiving end these are compared with the parity bits
calculated on the received data .
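A minimal sketch (not from the original slides) of two-dimensional even parity in Python, using the two data units that appear in the example below:

def two_d_parity(rows):
    # Append an even-parity bit to each row, then an even-parity row over all columns.
    with_row_parity = [row + [sum(row) % 2] for row in rows]
    column_parity = [sum(col) % 2 for col in zip(*with_row_parity)]
    return with_row_parity + [column_parity]

block = [
    [1, 1, 0, 0, 1, 1, 0, 0],   # 11001100
    [1, 0, 1, 0, 1, 1, 0, 0],   # 10101100
]
for row in two_d_parity(block):
    print(row)
# The receiver recomputes the same parities; a single-bit error is located by
# the intersection of the failing row and the failing column.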
Two-dimension Parity Check
(Figure: single-bit parity detects single-bit errors; two-dimensional bit parity can detect and correct single-bit errors.)
Two-dimension Parity Check
• Two-dimensional parity checking increases the likelihood of
detecting burst errors. A 2-D parity check of n bits can detect a
burst error of n bits.
• There is, however, one pattern of error that remains elusive. If
two bits in one data unit are damaged and two bits in exactly the
same positions in another data unit are also damaged, the 2-D
parity checker will not detect an error.
Two-dimension Parity Check
• For example, consider two data units 11001100 and 10101100. If
the first bit and the second-from-last bit in each of them are
inverted, making the data units 01001110 and 00101110, the errors
cannot be detected by the 2-D parity check.
Error-detecting Codes

The polynomial code (or cyclic redundancy code (CRC))


method:
 A k-bit frame is regarded as the coefficient list for a polynomial
with k terms, ranging from x^(k-1) to x^0.
 The high-order (left-most) bit is the coefficient of x^(k-1); the next
bit is the coefficient of x^(k-2), and so on.
 Polynomial arithmetic is done modulo 2. Both addition and
subtraction are identical to EXCLUSIVE OR:
Error-detecting Codes

  10011011          11110000
+ 11001010        - 10100110
  --------          --------
  01010001          01010110
The basic idea of the CRC method:
 The sender and receiver must agree upon a
generator polynomial, G(x) in advance. Both high
and low order bits of the generator must be 1.
Error-detecting Codes

 To compute the checksum for some frame with m bits,
corresponding to the polynomial M(x), the frame must be longer
than the generator polynomial.
 The sender appends a checksum to the end of the frame in such a
way that the polynomial represented by the checksummed frame
is divisible by G(x).
Error-detecting Codes
 When the receiver gets the frame, it tries dividing it by the
same G(x). If there is a remainder, there must have been an
error and a retransmission will be requested.
How to compute the checksum for a given frame?
 Let r be the degree of G(x). Append r zero bits to the low-
order end of the frame, so it now contains m+r bits and
corresponds to the polynomial x^r·M(x).
Error-detecting Codes
 Divide the bit string corresponding to x^r·M(x) by the bit
string corresponding to G(x) using modulo-2 division.
 Subtract the remainder (which is always r or fewer bits) from
the bit string corresponding to x^r·M(x) using modulo-2
subtraction. The result is the checksummed frame to be
transmitted.
Error-detecting Codes
In any division problem, if you diminish the dividend by the
remainder, what is left over is divisible by the divisor.
polynomial to represent a binary word
CRC division using polynomials
Division in CRC encoder
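A minimal Python sketch (not from the original slides) of this procedure; the frame and generator used here are illustrative values, not ones worked out in the slides:

def mod2_remainder(dividend: str, generator: str) -> str:
    # Remainder of modulo-2 (XOR) long division, len(generator)-1 bits long.
    bits = [int(b) for b in dividend]
    gen = [int(b) for b in generator]
    r = len(generator) - 1
    for i in range(len(bits) - r):
        if bits[i]:                       # subtract (XOR) a shifted copy of G(x)
            for j, g in enumerate(gen):
                bits[i + j] ^= g
    return ''.join(map(str, bits[-r:]))

def crc_frame(frame: str, generator: str) -> str:
    # Append r zero bits, divide by G(x), and replace the zeros with the remainder.
    r = len(generator) - 1
    return frame + mod2_remainder(frame + '0' * r, generator)

g = '10011'                               # G(x) = x^4 + x + 1
sent = crc_frame('1101011011', g)
print(sent)                               # -> 11010110111110
# Receiver: divide the whole received frame by G(x); a zero remainder means no error.
assert mod2_remainder(sent, g) == '0000'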
Note

The divisor in a cyclic code is normally


called the generator polynomial
or simply the generator.
Note

In a cyclic code, let s(x) be the syndrome (the remainder computed by the receiver).
If s(x) ≠ 0, one or more bits are corrupted.
If s(x) = 0, either
a. no bit is corrupted, or
b. some bits are corrupted, but the
decoder failed to detect them.
Standard polynomials
Calculation of the polynomial code checksum.
CHECKSUM

The last error detection method we discuss here is called
the checksum. The checksum is used in the Internet by
several protocols, although not at the data link layer.
However, we briefly discuss it here to complete our
discussion of error checking.
CHECKSUM
Note

Sender site:
1. The message is divided into 16-bit words.
2. The value of the checksum word is set to 0.
3. All words including the checksum are
added using one’s complement addition.
4. The sum is complemented and becomes the
checksum.
5. The checksum is sent with the data.
Note

Receiver site:
1. The message (including checksum) is
divided into 16-bit words.
2. All words are added using one’s
complement addition.
3. The sum is complemented and becomes the
new checksum.
4. If the value of checksum is 0, the message
is accepted; otherwise, it is rejected.
Example:
Suppose a block of 16 bits needs to be sent: 10101001 00111001
          10101001
          00111001
          --------
Sum       11100010
Checksum  00011101
Sent pattern: 10101001 00111001 00011101
Example (showing no error in the received pattern):
Segment 1   10101001
Segment 2   00111001
Checksum    00011101
Sum         11111111
Complement  00000000
The complement is zero, which means the transmission is OK.
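A minimal Python sketch (not from the original slides) of the one's-complement checksum above; the Internet checksum uses 16-bit words, but 8-bit words are used here to match the example:

def ones_complement_sum(words, bits=8):
    # Add the words, folding any carry back into the low-order bits.
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        total = (total & mask) + (total >> bits)
    return total

def checksum(words, bits=8):
    # The checksum is the complement of the one's-complement sum.
    return ~ones_complement_sum(words, bits) & ((1 << bits) - 1)

data = [0b10101001, 0b00111001]
c = checksum(data)
print(format(c, '08b'))          # -> 00011101, as in the example above
# Receiver: the checksum over data plus checksum must come out to zero.
assert checksum(data + [c]) == 0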
Flow Control
Flow control is a technique for assuring that a transmitting
entity does not overwhelm a receiving entity with data. The
receiving entity typically allocates a data buffer of some
maximum length for a transfer. When data are received, the
receiver must do a certain amount of processing before passing
the data to the higher-level software. In the absence of flow
control, the receiver's buffer may fill up and overflow while it is
processing old data.
The following protocols are used for flow control:
Flow Control
 Stop-and-wait flow control: in this method the sender, after
transmitting a frame, waits for an acknowledgement from
the receiver before sending the next frame. The sender will not
send any further frames until the acknowledgement arrives.
Stop and Wait
Flow Control
Thus the receiver can stop the transmission simply by
withholding the acknowledgement. A source often breaks
up a large block of data into smaller blocks and transmits
the data in many frames
Flow Control
Sliding-window flow control:
 In stop-and-wait flow control only one frame can be transmitted
at any one time, and this leads to inefficiency.
 In the sliding-window method of flow control, the sender can
transmit several frames before it needs an acknowledgement.
The link can be used to carry several frames at once and its
capacity can be used efficiently.
Flow Control
The receiver does not have to acknowledge every frame individually;
a single ACK can confirm the receipt of multiple data frames.
The sliding window refers to imaginary boxes at both the sender
and the receiver. This window can hold frames at either end and
provides an upper limit on the number of frames that can be transmitted
before an acknowledgement is required.
Flow Control
Frames may be acknowledged at any point without waiting for the
window to fill up and may be transmitted as long as the
window is not yet full.
The frames are numbered from 0 to n-1. For example if n=8, the
frames are numbered
0,1,2,3,4,5,6,7,0,1,2,3,4,5,6,7,0,1……… the size of the
window is n-1.
Flow Control
When the receiver sends an ACK, it includes the number of the next
frame it expects to receive. In other words, to acknowledge the
receipt of a string of frames ending in frame 4, the receiver sends
an ACK containing the number 5; on seeing it, the sender knows
that all frames up through number 4 have been received.
The window can hold n-1 frames at either end; therefore, a
maximum of n-1 frames may be sent before an
acknowledgement is required.
Sliding Window
Flow Control
Sender window: at the beginning of a transmission, the
sender's window contains n-1 frames. As frames are sent
out, the left boundary of the window moves inward,
shrinking the size of the window. Given a window of size
w, if three frames have been transmitted since the last
acknowledgement, then the number of frames left in the
window is w-3.
Flow Control
Once an ACK arrives , the window expands to allow in a
number of new frames equal to the number of frames
acknowledged by the ACK.
Flow Control
Conceptually the sliding window of the sender shrinks from
the left when frames of data are sent. The sliding window
of the sender expands to the right when
acknowledgements are received.
Sender Sliding Window
Flow Control
Receiver window: at the beginning of transmission, the
receiver window contains no frames but has space for n-1
frames. As new frames come in, the size of the receiver
window shrinks. Thus the receiver window represents the
number of frames that may still be received before an ACK
must be sent.
Flow Control
As each ACK is sent out , the receiving window expands to
include as many new placeholders as newly acknowledged
frames. The window expands to include a number of new
frame spaces equal to the number of the most recently
acknowledged frame minus the number of the previously
acknowledged frame.
Flow Control
In a 7-frame window, if the prior ACK was for frame 2 and the
current ACK is for frame 5, the window expands by 5-2=3. If
the prior ACK was for frame 3 and the current ACK is for frame 1,
the window expands by 6 (= 1+8-3, counting modulo 8).
Conceptually, the sliding window of the receiver shrinks from the
left when frames of data are received. The sliding window of the
receiver expands to the right when acknowledgments are sent.
Receiver Sliding Window
Sliding Window Example
Sender
Receiver
Error Control
Error control refers to mechanism to detect and correct errors
that occur in the transmission of frames. Two types of errors
are possible…..
 Lost frames: A frame fails to arrive at the other side. For
example a noise burst may damage a frame to the extent that
the receiver is not aware that a frame has been transmitted.
Error Control
 Damaged frame: A recognizable frame does arrive , but
some of the bits are in error(have been altered during
transmission).
Error Control
Collectively, the following mechanisms are referred to as
automatic repeat request (ARQ):
 Error detection.
 Positive acknowledgement.
 Retransmission after timeout.
 Negative acknowledgement and retransmission.
The effect of ARQ is to turn an unreliable data link into a reliable
one.
Error Control
Three versions of ARQ are standardized:
 Stop-and-wait ARQ.
 Go-back-N ARQ.
 Selective-reject ARQ.
Error Control
1. Stop-and-wait ARQ: this is a form of stop-and-wait flow
control, extended for retransmission of damaged or lost
frames.
Following features are added to the basic flow- control
mechanism…
Error Control
 The sending device keeps a copy of the last frame transmitted
until it receives an acknowledgement for that frame.
 For identification purposes, both data frames and ACK frames
are numbered alternately 0 and 1. A data 0 frame is
acknowledged by an ACK 1 frame, indicating that the receiver
has gotten data 0 and is expecting data 1.
 Sender device is equipped with a timer.
Error Control:Stop and Wait ARQ
 source transmits a single frame
 waits for an ACK
 if the received frame is damaged, the receiver discards it
 the transmitter has a timeout
 if no ACK arrives within the timeout, it retransmits
 if the ACK is damaged, the transmitter will not recognize it
 the transmitter will retransmit
 the receiver then gets two copies of the frame
 use alternate numbering and ACK0 / ACK1 to detect duplicates
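A minimal sketch (not from the original slides) of the sender-side logic in Python, where send_frame and wait_for_ack are hypothetical primitives (wait_for_ack returns the ACK number, or None on a timeout or a damaged ACK):

def stop_and_wait_send(frames, send_frame, wait_for_ack, timeout=1.0):
    seq = 0                               # data frames alternate between 0 and 1
    for payload in frames:
        while True:
            send_frame(seq, payload)      # keep sending until this frame is ACKed
            ack = wait_for_ack(timeout)   # None means the timer expired
            if ack == 1 - seq:            # data 0 is acknowledged by ACK 1, and vice versa
                break
            # otherwise retransmit; the alternating number lets the receiver
            # discard the duplicate copy it may already have
        seq = 1 - seq

# Tiny loopback demo over a perfect channel:
received = []
stop_and_wait_send(
    ["frame A", "frame B"],
    send_frame=lambda seq, p: received.append((seq, p)),
    wait_for_ack=lambda t: 1 - received[-1][0],
)
print(received)                           # [(0, 'frame A'), (1, 'frame B')]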
Error Control:Stop and Wait ARQ
 pros and cons
 simple
 inefficient
Error Control
2. Sliding window ARQ: to extend sliding window to cover
retransmission of lost or damaged frames, three features
are added to the basic flow control mechanism…
1. The sending device keeps copies of all
transmitted frames, until they have been
acknowledged.
Error Control

2. In addition to ACK frames, the receiver has the option of
returning a NAK frame if the data have been received
damaged. The NAK frame tells the sender to retransmit a
damaged frame. Since sliding window is a continuous
transmission mechanism, both ACK and NAK frames must
be numbered for identification. The ACK frame carries the
number of the next frame expected, and the NAK
frame carries the number of the damaged frame itself.
Error Control
3. Like stop and wait ARQ, the sending device is equipped
with a timer to enable it to handle lost
acknowledgements.
Error Control

Go-back-N ARQ: in this sliding-window based protocol,
if one frame is lost or damaged, all frames sent since the
last acknowledged frame are retransmitted.
Damaged frames:
Case I: suppose frames 0, 1, 2, 3 have been transmitted and the
first acknowledgement received is NAK 3. A NAK means two
things: first, all frames prior to the damaged frame have been
received and are positively acknowledged; second, it is a negative
acknowledgement for the frame indicated.
So if the first acknowledgement is NAK 3, only
frame 3 needs to be retransmitted.

Case II: if frames 0, 1, 2, 3, 4 have been transmitted before a
NAK is received for frame 2, then frames 2, 3 and 4 must
be transmitted again.
Lost data frames:
The sliding-window protocol requires that frames be
transmitted sequentially. When the receiver receives the frames,
it checks the sequence numbers, and if any frame is missing, it
sends a NAK for that frame, indicating the need for
retransmission. The sender in turn retransmits the requested
frame as well as all the frames transmitted after it.
Lost acknowledgement:
The sending device can send as many frames as the window
allows before waiting for an acknowledgement. Once that limit
has been reached, or the sender has no more frames to send, it
must wait. If the acknowledgement sent by the receiver is lost,
the sender could wait forever and the system would hang.
To overcome this condition, the sender is equipped with a
timer that starts as soon as the window capacity is reached.
If the acknowledgement is not received within the time
limit, the sender retransmits every frame transmitted
since the last ACK.
Damaged Frame
Lost Frame
Lost ACK
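A minimal Python sketch (not from the original slides) of the sender-side bookkeeping for go-back-N; transmit is a hypothetical function that puts a frame on the line, and sequence-number wrap-around is ignored for brevity:

from collections import deque

class GoBackNSender:
    def __init__(self, window_size, transmit):
        self.window_size = window_size   # at most n-1 frames outstanding
        self.transmit = transmit
        self.unacked = deque()           # copies of sent but unacknowledged frames

    def send(self, seq, payload):
        if len(self.unacked) >= self.window_size:
            raise RuntimeError("window full: wait for an acknowledgement")
        self.unacked.append((seq, payload))
        self.transmit(seq, payload)

    def on_ack(self, next_expected):
        # An ACK acknowledges every outstanding frame before the one it names.
        while self.unacked and self.unacked[0][0] != next_expected:
            self.unacked.popleft()

    def on_nak(self, bad_seq):
        self.on_ack(bad_seq)             # frames before the NAKed one are implicitly ACKed
        for seq, payload in self.unacked:
            self.transmit(seq, payload)  # resend the NAKed frame and everything after it

# Frames 0..4 are sent, then NAK 2 arrives: frames 2, 3 and 4 go out again.
line = []
s = GoBackNSender(window_size=7, transmit=lambda seq, p: line.append(seq))
for i in range(5):
    s.send(i, "payload %d" % i)
s.on_nak(2)
print(line)                              # [0, 1, 2, 3, 4, 2, 3, 4]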
Error Control
Selective-reject ARQ: in selective reject only the specific
damaged or lost frame is retransmitted. If a frame is corrupted
in transit, a NAK is returned and the frame is resent out of
sequence. The receiving device must be able to sort the frames
it has and insert the retransmitted frame into its proper place in
the sequence. The selective-reject system differs from the go-
back-N ARQ system in the following ways:
• The receiving device must contain sorting logic to enable it to
reorder frames received out of sequence. It must also be able to
store the frames received after a NAK has been sent until the
damaged frame has been replaced.
• The sending device must contain a searching mechanism that
allows it to find and select only the requested frame for
retransmission.
• A buffer in the receiver must keep all previously received
frames on hold until all retransmissions have been sorted and
any duplicate frames have been identified and discarded.
• It is recommended that the window size be less than or equal to
n/2, where n-1 is the go-back-N window size.
Damaged frames:
In the selective-reject protocol, the receiver sends a NAK
for the damaged or lost frame, indicating that all
previously sent frames have been received correctly. The receiver
continues to accept frames while waiting for the error to be
corrected. Since an ACK implies the successful receipt of the
indicated frame as well as all frames previously received, frames
received after the erroneous frame are not acknowledged until
the damaged frame has been retransmitted.
Lost frames:
Frames can be received out of order, but they cannot be
acknowledged out of order. Thus when a frame is lost and the
next frame is received, the receiver notices the discrepancy
while putting the frames in order and returns a NAK.
But if the last frame is lost, the receiver does nothing, and the
sender treats this silence like a lost acknowledgement.
Lost acknowledgement:
This protocol treats a lost ACK or NAK in the same way as
go-back-N.
Selective Reject
Data Link Layer Protocols

A protocol in data communication is the set of rules or
specifications used to implement one or more layers of the OSI
model.
Data link protocols are sets of specifications used to implement
the data link layer.
Types Of Data Link Layer Protocols

1. Character-oriented protocols (also called byte-oriented
protocols) interpret a transmission frame or packet as a
succession of characters, each usually composed of one byte.
All control information is in the form of an existing character
encoding system (e.g., ASCII characters). An example is the Binary
Synchronous Communication (BSC) protocol.
2. Bit-oriented protocols interpret a transmission frame or
packet as a succession of individual bits, made meaningful by
their placement in the frame and by their combination with other
bits. Control information in a bit-oriented protocol can be one
or multiple bits, depending on the information embodied in the
pattern. Examples are SDLC and HDLC.
High level data link control(HDLC):

HDLC is a bit oriented data link protocol designed to support


both half duplex and full duplex communication over point to
point and multipoint link. Systems using HDLC can be
characterized by their station type, their configurations and
their response modes.
Station type: HDLC differentiates between three types of
stations. Primary, secondary and combined.
 Primary terminal is responsible for operation control over
the link. It issues the frames which are called commands.
Primary stations open and close connections and poll remote
stations for data, or availability. Primary stations are used
primarily in multi-point networks or with mainframe
applications where the mainframe is the primary station.
 Secondary terminal operates under the control of the
primary. The frames it issues are responses only. Secondary
machines respond to commands sent from the primary.
Secondary stations are usually terminals attached to
mainframes.

 Combined terminal, has the features of both primary and


secondary terminals. It issues both commands and responses.
These station types are usually used in point-to-point serial
links such as between routers connected by a T1 or frame relay.
Configurations: this word refers to the relationship of
hardware devices on a link. Primary, secondary and combined
stations can be configured in three ways, unbalanced,
symmetrical and balanced.
An unbalanced configuration is one in which one device is
primary and the others are secondary. Unbalanced
configurations are often point to point when only two devices
are involved; More often they are multipoint with one primary
controlling several secondaries.
HDLC Configuration
 An unbalanced configuration
A symmetrical configuration is one in which each physical
station on a link consists of two logical stations, one a primary
and the other a secondary. Separate lines link the primary
aspect of one physical station to the secondary aspect of
another physical station. This is like an unbalanced configuration
except that control of the link can shift between the two
stations.
HDLC Configuration
A balanced configuration is one in which both stations in a
point to point topology are of the combined type. The stations
are linked by a single line that can be controlled by either
station.
HDLC Configuration
A balanced configuration:
Modes of communication:
A mode in HDLC is the relationship between two devices
involved in an exchange; the mode describes who controls
the link. HDLC supports three modes of communication
between stations.
HDLC Modes
 Normal response mode (NRM) is an unbalanced
configuration in which only the primary terminal may
initiate data transfer. The secondary terminal transmits
data only in response to commands from the primary
terminal. The primary terminal polls the secondary
terminal(s) to determine whether they have data to
transmit, and then selects one to transmit.
 Asynchronous response mode (ARM) is an unbalanced
configuration in which secondary terminals may transmit
without permission from the primary terminal. However,
the primary terminal still retains responsibility for line
initialization, error recovery, and logical disconnect.
 Asynchronous balanced mode (ABM) is a balanced
configuration in which either station may initiate the
transmission.
Frames:
HDLC defines three types of frames:
 information frames (i- frames),
 supervisory frames( S- frames) and
 unnumbered frames (u- frames).
I-frames are used to transport user data and control
information relating to user data.
S-frames are used only to transport control information,
primarily data link layer flow and error control.
U-frames are reserved for system management. The
information carried by U-frames is intended for
managing the link itself.
HDLC Frame Types
HDLC Frame Types
HDLC Frame Types
The flag field: every frame on the link must begin and end
with a flag sequence field (F). The flag sequence is the octet
01111110.
The flag field in HDLC may cause a problem of data
transparency, i.e., the receiver may mistake user data that happens
to contain the flag bit pattern for an actual flag. To overcome this
problem, HDLC uses a process called bit stuffing.
Whenever the sender has five consecutive 1s to send in the data,
it inserts one redundant 0 after them, regardless of whether
the sixth bit is a one or a zero. When the receiver receives
the bit stream, it checks for a 0 after five 1s. If it finds this
sequence, the 0 is dropped and the original sequence is restored.
Otherwise the sequence is treated as a flag.
(Flowchart: receiver destuffing logic. After a 0 followed by five consecutive 1s, examine the 7th bit: if it is 0, unstuff it (it was a stuffed bit). If the 7th bit is 1 and the 8th bit is 0, the sequence is a flag; otherwise continue counting 1s until the next 0. A total of fewer than 15 consecutive 1s means an abort; 15 or more means an idle channel.)
e.g., the bit sequence 01111111110 will be sent as
011111011110.
A total of 7 to 14 consecutive 1s indicates an abort, and a total of
15 or more 1s indicates an idle channel.
The address field:
The address field (A) identifies the primary or secondary station
involved in the frame transmission or reception. An address field
can be one byte or several bytes long, depending on the needs of
the network. One byte can identify up to 128 stations.
If the address field is only one byte, its last bit is always 1. If it
is more than one byte, all bytes but the last end with 0 and
only the last ends with 1. Ending each intermediate byte
with 0 indicates to the receiver that there are more address
bytes to come.
HDLC Address Field
The control field:
HDLC uses the control field (C) to determine how to control the
communications process. This field contains the commands,
responses and sequence numbers used to maintain the data-flow
accountability of the link; it defines the function of the frame and
initiates the logic to control the movement of traffic between
sending and receiving stations. Control fields differ depending on
frame type:
If the first bit of the control field is 0, the frame is an I-frame.
If the first bit is 1 and the second bit is 0, it is an S-frame,
and if both the first and second bits are 1, it is a U-frame.

Frame format:  Flag | Address | Control | Information | FCS | Flag

Control field formats:
I-frame:  0 | N(S) | P/F | N(R)
S-frame:  1 0 | code | P/F | N(R)
U-frame:  1 1 | code | P/F | code
An I-frame contains two 3-bit error and flow control sequences,
called N(S) and N(R). N(S) specifies the number of the frame
being sent and N(R) specifies the number of the frame expected
in return in a two-way exchange; thus N(R) is the
acknowledgement field. If the last frame received was error free,
the N(R) number will be that of the next frame in the
sequence. If the last frame was damaged, N(R) will be the
number of the damaged frame, indicating the need for its
retransmission.
S-frames contain no N(S) field, but they do contain an N(R) field;
they are used for acknowledgement and flow/error control when the
receiver has no data of its own to send. S-frames do not carry
user data.
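A minimal Python sketch (not from the original slides) of decoding a control byte according to the layout above; mapping "bit 1" of the diagram to the least significant bit of the byte is an assumption made purely for illustration:

def decode_control(c: int) -> dict:
    pf = (c >> 4) & 1                     # the 5th bit position is the P/F bit
    if c & 0b1 == 0:                      # first bit 0: an I-frame
        return {"type": "I", "ns": (c >> 1) & 0b111, "pf": pf, "nr": (c >> 5) & 0b111}
    if c & 0b11 == 0b01:                  # first bit 1, second bit 0: an S-frame
        return {"type": "S", "code": (c >> 2) & 0b11, "pf": pf, "nr": (c >> 5) & 0b111}
    return {"type": "U", "code": ((c >> 2) & 0b11, (c >> 5) & 0b111), "pf": pf}

# An I-frame with N(S)=2, P/F=1, N(R)=5, packed with bit 1 as the LSB:
ctrl = (5 << 5) | (1 << 4) | (2 << 1)
print(decode_control(ctrl))               # {'type': 'I', 'ns': 2, 'pf': 1, 'nr': 5}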
The poll/final (P/F) bit:
The 5th bit position in the control field is called the poll/final bit,
or P/F bit. It is only meaningful when set to 1; if it is set
to 0, it is ignored. The poll/final bit is used to provide dialogue
between the primary station and the secondary station. The primary
station uses P=1 to acquire a status response from the secondary
station; the P bit signifies a poll. The secondary station responds to
the P bit by transmitting a data or status frame to the primary
station with the P/F bit set to F=1. The F bit can also be used to
signal the end of a transmission from the secondary station under
normal response mode.
U-frames contain no N(S) or N(R) field and are not
designed for user data exchange or acknowledgement.
Instead, U-frames have two code fields, one two bits and one
three bits long. These codes are used to identify the type of
U-frame and its function.
Poll/Final
The information field:
This field is not always present in an HDLC frame. It is only present
when the frame is an I-frame or a U-frame. The information field
contains the actual data the sender is transmitting to the receiver in an
I-frame, and network management information in a U-frame.
Combining data to be sent with control information is
called piggybacking, which is used in I-frames.
The frame check sequence field:
This field contains a 16-bit or 32-bit cyclic redundancy check. It is
used for error detection.
HDLC Information Field
HDLC FCS Field
HDLC commands and responses: the I-frame is the most
straightforward. I-frames are designed for user information
transport and piggybacked acknowledgements and nothing
else.
S-frames and U-frames, however, are divided into different
types on the basis of their code bits.
Type 0(Receive Ready) - is an acknowledgement frame used to
indicate the next frame expected. This frame is used when there is
no reverse traffic to use for piggybacking.

Type 1(Reject): is a negative acknowledgement frame. It is used to


indicate that a transmission error has been detected. The N(R )
field indicates the first frame in sequence not received correctly.
The sender is required to retransmit all outstanding frames
starting at N( R).
Type 2 (Receive Not Ready): acknowledges all frames up to but
not including N(R), just as RECEIVE READY does, but it tells the
sender to stop sending. This is intended to signal a
temporary problem at the receiver.
Type 3(Selective Reject): It calls for retransmission of only
the frame specified .
Unnumbered Frames (U-frames):
U-frames are used to exchange session management and
control information between connected devices. Unlike S-
frames, U-frames contain an information field, but one used
for system management information, not user data.
U-Frame Control Field
U-Frame Control Field
On the basis of the two code segments taken together, 32 different
types of U-frames are possible. A few of them are the following:
 DISC (00 010): Disconnect allows a machine to announce that it is
going down.
 SNRM (00 001): Set Normal Response Mode allows a
machine that has just come back on-line to announce its
presence.
 SABM (11 100): Set Asynchronous Balanced Mode resets the
line and declares both parties to be equal.
 SABME (11 110) and SNRME (11 010) are the same as SABM and
SNRM respectively, but they enable an extended frame
format that uses 7-bit sequence numbers instead of 3-bit
sequence numbers.
 UA (00 110): Unnumbered Acknowledgement is provided to
acknowledge a control frame.
Operation of HDLC: the operation of HDLC involves three
phases. First, one side or the other initializes the data link so that
frames may be exchanged in an orderly fashion; during this
phase the options that are to be used are agreed upon. After
initialization the two sides exchange user data and the control
information needed to exercise flow and error control. Finally, one of
the two sides signals termination of the operation.
Initialization: initialization may be requested by either side by
issuing one of the six set-mode commands. This command
serves three purposes:
1. It signals the other side that initialization is requested.
2. It specifies which of the three modes (NRM, ABM, ARM) is
requested.
3. It specifies whether 3- or 7-bit sequence numbers are to be
used.
If the other side accepts this request, then the HDLC module on
that end transmits an unnumbered acknowledgement (UA) frame
back to the initiating side. If the request is rejected, then a
disconnected mode (DM) frame is sent.
Data Transfer: when the initialization has been requested and
accepted, then a logical connection is established. Both sides
may begin to send user data in I- frames, starting with sequence
number zero.
An HDLC module sending a sequence of I- frames will number
them sequentially, modulo 8 or 128, depending on whether 3 or
7 bit sequence numbers are used and place the sequence
number in N(S).
N( R) is the acknowledgment of I-frame received. S frames are
also used for flow control and error control.
The Receive Ready (RR) frame acknowledges the last I frame
received by indicating the next I- frame expected. The RR is
used when there is no reverse user data traffic (I-frame) to carry
an acknowledgement.
The RNR (Receive Not Ready) frame acknowledges an I-frame, as
RR does, but also asks the peer entity to suspend transmission of
I-frames. When the entity that issued the RNR is again ready, it
sends an RR. REJ initiates go-back-N ARQ: it indicates that
the last I-frame received has been rejected and that retransmission of
all I-frames beginning with number N(R) is required. Selective
reject (SREJ) is used to request retransmission of just a single
frame.
Disconnect: Either HDLC module can initiate a Disconnect ,
either on its own initiative if there is some sort of fault or at the
request of its higher layer user. HDLC issues a disconnect by
sending a Disconnect (DISC) frame. The remote entity must
accept the disconnect by replying with a UA and informing its
layer3 user that the connection has been terminated. Any
outstanding unacknowledged I-frames may be lost, and their
recovery is the responsibility of higher layers.
Example of piggybacking without error
Example of piggybacking with error
Medium Access Sub layer: Introduction
Networks can be divided into two categories:
 Those using point to point connections and
 Those using broadcast channels.
In a broadcast network , the key issue is how to determine who
gets to use the channel when there is a competition for it. In the
literature broadcast channels are sometimes referred to as multi-
access channels or random access channels.
Medium Access Sub layer: Introduction
 The protocols used to determine who goes next on a multi-
access channel belong to a sublayer of the data link layer
called the MAC (Medium Access Control) sublayer. The
MAC sublayer is especially important in LANs, nearly all of
which use a multi-access (or broadcast) channel as the basis
of their networks.
The channel allocation problem

There are two schemes to allocate a single broadcast channel


among competing users.
 Static channel allocation
 Dynamic channel allocation
The channel allocation problem

Static Channel Allocation in LANs and MANs:


The traditional way of allocating a single channel among multiple
competing users is Frequency division multiplexing in which we
divide the bandwidth into equal sized portions so that each user
can be assigned one portion. Since each user has a private
frequency band, there is no interference between users. When
there is only a small and fixed number of users , each of which has
a heavy load of traffic, FDM is a simple and efficient allocation
mechanism.
What's the problem with FDM ?
 If fewer than N users are currently interested in communication,
some portions of spectrum will be wasted.
The channel allocation problem

 If more than N users want to communicate, some of them will be


denied permission even if some users with allocated frequency
hardly ever transmit anything.
 Even if the number of users is exactly N and constant, when some users are
quiescent, no one else can use their bandwidth, so it is simply
wasted.
 For bursty data traffic (peak-to-mean traffic ratios of
1000:1), the small allocated subchannel will be idle most of the
time but unable to handle the peak traffic.
The channel allocation problem

The same arguments also apply to time division
multiplexing (TDM), in which each user is statically allocated
every Nth time slot.
Dynamic channel allocation in LANs and MANs is discussed next.
Multiple Access Protocols
Dynamic channel Allocation in LANs and MANs:
• ALOHA
• Carrier Sense Multiple Access Protocols
• Collision-Free Protocols
Multiple Access Protocols:
 ALOHA
The ALOHA system was used for ground-based radio
broadcasting, but the basic idea is applicable to any system in
which uncoordinated users are competing for the use of a single
shared channel.
There are two versions of ALOHA
 Pure
 Slotted
They differ with respect to whether or not time is divided up into
discrete slots into which all frames must fit.
Pure ALOHA
 developed for packet radio nets
 when station has frame, it sends
 then listens for a bit over max round trip time
 if receive ACK then fine
 if not, retransmit
 if no ACK after repeated transmissions, give up
 uses a frame check sequence (as in HDLC)
 frame may be damaged by noise or by another station transmitting at
the same time (collision)
 any overlap of frames causes collision
 max utilization 18%
Pure ALOHA
Systems in which multiple users share a common channel in a
way that can lead to conflicts are known as contention
systems.
A sketch of frame generation in an ALOHA system is given in
next slide.
Pure ALOHA
.

In pure ALOHA, frames are transmitted at completely arbitrary times


Pure ALOHA

Whenever two frames try to occupy the channel at the same
time, there will be a collision. If the first bit of a new frame overlaps
with just the last bit of a frame that is almost finished, both
frames will be totally destroyed, and both will have to be
retransmitted later.
Slotted ALOHA

In 1972 , Roberts published a method for doubling the


capacity of an ALOHA system. His proposal was…
 Time is divided up into discrete intervals, each interval
corresponding to one frame.
 A terminal is not permitted to send until the beginning of
the next slot.
This was known as slotted ALOHA.
Slotted ALOHA
Following are the features of slotted ALOHA:
 time on the channel is divided into uniform slots equal to the frame
transmission time
 a central clock (or other synchronization mechanism) is needed
 transmission begins only at a slot boundary
 frames either miss each other or overlap totally
 max utilization 37%
Both versions have poor utilization and fail to exploit the fact that the
propagation time is much less than the frame transmission time.
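The 18% and 37% figures quoted above come from the standard throughput analysis (not derived in these slides), which assumes frames are generated according to a Poisson process with G transmission attempts per frame time:

\[ S_{\text{pure}} = G e^{-2G}, \qquad \max_G S_{\text{pure}} = \tfrac{1}{2e} \approx 0.184 \]
\[ S_{\text{slotted}} = G e^{-G}, \qquad \max_G S_{\text{slotted}} = \tfrac{1}{e} \approx 0.368 \]

In pure ALOHA the vulnerable period is two frame times (a frame collides with anything started within one frame time before or after it); in slotted ALOHA it shrinks to one slot, doubling the achievable throughput.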
Carrier Sense Multiple Access Protocols:
Protocols in which stations listen for a carrier (i.e., a
transmission) and act accordingly are called carrier sense
protocols.
Following are some versions of these protocols….

1. 1- persistent CSMA
2. non persistent CSMA
3. p-persistent CSMA
1-persistent CSMA
 To send data, a station first listens to the channel to see if anyone
else is transmitting.
 If so, the station waits (keeps sensing it) until the channel
becomes idle. Otherwise, it transmits a frame.
 If a collision occurs , the station waits a random amount of time
and starts all over again.
It is called 1-persistent because the station transmits with a
probability of 1 whenever it starts sensing the channel and finds
the channel idle
1-persistent CSMA
How could collisions happen in CSMA?
The propagation delay has an important effect on the performance
of the protocol. There is a small chance that just after a station
begins sending, another station will become ready to send and
sense the channel. If the first station's signal has not yet reached
the second one, the latter will sense an idle channel and will also
begin sending, resulting in a collision. The longer the propagation
delay, the more important this effect becomes, and the worse the
performance of the protocol.
1-persistent CSMA
Even if the propagation delay is zero, there will still be
collisions. If two stations become ready in the middle of the
third station’s transmission, both will wait politely until the
transmission ends and then both will begin transmitting
exactly simultaneously, resulting in a collision.
Nonpersistent CSMA:
In this protocol a conscious attempt is made to be less greedy
than in the previous one.
 To send data, a station first listens to the channel to see if
anyone else is transmitting.
 If so, the station waits a random period of time (instead of
keeping sensing until the end of the transmission) and repeats
the algorithm. Otherwise, it transmits a frame.
Nonpersistent CSMA
 random delays reduce the probability of collision
 if a collision occurs, the station waits a random amount of time
and starts all over again
 this protocol gives better channel utilization
 but capacity is wasted because the medium may remain idle following the
end of a transmission, even when stations have frames to send
P-persistent CSMA
 a compromise to reduce collisions and idle time
 p-persistent CSMA rules:
1. if medium idle, transmit with probability p, and delay one time unit with
probability (1–p)
2. if medium busy, listen until idle and repeat step 1
3. if transmission is delayed one time unit, repeat step 1
 issue of choosing effective value of p to avoid instability under
heavy load
CSMA with collision Detection:

Persistent and nonpersistent CSMA protocols improve


ALOHA by ensuring that no station begins to transmit when
it senses the channel busy.
CSMA/CD (Carrier Sense Multiple Access with
Collision Detection) protocol further improves ALOHA
by aborting transmissions as soon as a collision is detected.
CSMA/CD Description
 with CSMA, collision occupies medium for duration of
transmission
 better if stations listen while transmitting
 CSMA/CD rules:
1. if medium idle, transmit
2. if busy, listen for idle, then transmit
3. if collision detected, transmit a brief jam signal and then
cease transmission
4. after jam, wait random time then retry
Collision-free protocols:
These are the protocols which resolve the contention for the
channel without any collision at all, not even during the
contention period.
We make the assumption that there are N stations, each with
a unique address from 0 to N-1 “ wired” into it.
A Bit-Map Protocol:

Each contention period consists of exactly N slots.

 If station j has a frame to send, it transmits 1 bit during the jth


slot; otherwise, it transmits 0 bit during the jth slot.
 After all slots have passed by, stations begin transmitting in
numerical order.
 After the last ready station has transmitted its frame, another N-bit
contention period is begun.

(Figure: the basic bit-map protocol: contention slots in which ready stations set their bits, followed by the frames of those stations.)
Multiple Access Protocols:
Protocols like this in which the desire to transmit is
broadcasted before the actual transmission are called
reservation protocols.
Binary countdown:

Each station has a binary address. All addresses are of the


same length.
To transmit, a station broadcasts its address as a binary bit
string, starting with high-order bit.
Binary countdown:

 The bits in each address position from different stations are
Boolean ORed together onto the channel (hence "binary countdown").
 As soon as a station sees that a high-order bit position that is 0 in
its address has been overwritten with a 1, it gives up.
 After the winning station has transmitted its frame, there is no
information available telling how many other stations also want to
send, so the algorithm begins all over with the next frame.
Binary countdown:

Example: stations 0010, 0100, 1001 and 1010 all want to transmit.

Bit time       0 1 2 3
Station 0010:  0 - - -
Station 0100:  0 - - -
Station 1001:  1 0 0 -
Station 1010:  1 0 1 0
Result         1 0 1 0

Stations 0010 and 0100 see the 1 in bit time 0 and give up.
Station 1001 sees the 1 in bit time 2 and gives up.
Station 1010 wins and may transmit its frame.
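A minimal Python sketch (not from the original slides) of the arbitration rule, with the channel modelled as a wired-OR of the bits the contending stations broadcast:

def binary_countdown(addresses, width=4):
    contenders = list(addresses)
    for bit in range(width - 1, -1, -1):             # high-order bit first
        channel = 0
        for a in contenders:
            channel |= (a >> bit) & 1                # OR of all broadcast bits
        # A station that sent a 0 but sees a 1 on the channel gives up.
        contenders = [a for a in contenders if (a >> bit) & 1 == channel]
    return contenders[0]                             # the highest address wins

stations = [0b0010, 0b0100, 0b1001, 0b1010]
print(format(binary_countdown(stations), '04b'))     # -> 1010, as in the example above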
