
Chapter 3 - Data Link Layer

Mrs. Nusrat J. Ansari


Data Link layer

• The Data Link Layer is the second layer of the OSI model. It is one of the most complicated layers, with complex
functionalities and responsibilities. The data link layer hides the details of the underlying hardware and presents itself to the upper layer as
the medium to communicate.

• The data link layer works between two hosts that are directly connected.

• This direct connection can be point-to-point or broadcast. Systems on a broadcast network are said to be on the same link.

• The data link layer is responsible for converting the data stream into signals bit by bit and sending them over the underlying hardware.
At the receiving end, the data link layer picks up the data from the hardware in the form of electrical signals, assembles
them into a recognizable frame format, and hands them over to the upper layer.
3.1 Data Link Layer Design Issues

1. Network layer services


2. Framing
3. Error control
4. Flow control

• The physical layer delivers bits of information to and from the data link layer. The functions of the Data Link Layer
are:
1. Providing a well-defined service interface to the network layer.
2. Dealing with transmission errors.
3. Regulating the flow of data so that slow receivers are not swamped by fast senders.
• Data Link layer
– Takes the packets from Network layer, and
– Encapsulates them into frames for transmission
Difference between Frame & Packet

• A frame is the data link layer protocol data unit; a packet is the network layer protocol data unit.
• A segment is encapsulated within a packet; a packet is encapsulated within a frame.
• A frame carries source and destination MAC addresses; a packet carries source and destination IP addresses.

3.1 Data Link Layer Issues

• Each frame has:
• a frame header,
• a payload field for holding the packet, and
• a frame trailer.
• Frame management is what the Data Link Layer does.

Relationship between packets and frames


Services Provided to the Network Layer

• The principal service function of the data link layer is to transfer data from the network layer on the source
machine to the network layer on the destination machine.
• A process in the network layer hands some bits to the data link layer for transmission.
• The job of the data link layer is to transmit the bits to the destination machine so they can be handed over to the
network layer there.

Actual communication on the sending machine:
Network layer - Data link layer - Physical layer

Actual communication on the receiving machine:
Physical layer - Data link layer - Network layer

Virtual communication:
No physical medium is present; the two data link layers can be visualized as communicating with each other
directly using a data link layer protocol.

(a) Virtual communication. (b) Actual communication.


Possible Services Offered

Unacknowledged connectionless service.


Acknowledged connectionless service.
Acknowledged connection-oriented service

1.Unacknowledged Connectionless Service:


• It consists of having the source machine send independent frames to the destination machine without having
the destination machine acknowledge them.
• No connection is set up between the host machines.
• Errors and lost data are not handled by this service.
• Examples: Ethernet, voice communication, etc., i.e., communication channels where real-time operation is
more important than the quality of transmission.
2. Acknowledged Connectionless Service

• Each frame sent by the data link layer is acknowledged, so the sender knows whether a specific frame has been
received or lost.
• Typically the protocol uses a timeout: if it expires without an acknowledgment arriving, the frame is re-sent.
• This service is useful for communication over an unreliable channel (e.g., 802.11 WiFi).
• The network layer does not know the frame size or other restrictions of the data link layer. Hence the data link
layer needs some mechanism to optimize the transmission.
3. Acknowledged Connection Oriented Service

• Source and Destination establish a connection first.


• Each frame sent is numbered
• Data link layer guarantees that each frame sent is indeed received.
• It guarantees that each frame is received only once and that all frames are received in the correct order.
• Three distinct phases:
• Set up of connection
• Sending frames
• Release connection
1. The connection is established by having both sides initialize the variables and counters needed to keep track of which frames
have been received and which ones have not.
2. One or more frames are transmitted.
3. Finally, the connection is released – freeing up the variables, buffers, and other resources used to maintain the
connection
• Examples:
• Satellite channel communication
• Long-distance telephone communication etc.
Framing in Data Link Layer

• Framing is a function of the Data Link Layer that separates messages from source to destination by adding a sender address
and a destination address.

• Need of Framing:
• The Data Link Layer needs to pack bits into frames, so that each frame is distinguishable from another. The Data Link
Layer prepares a packet for transport across the local media by encapsulating it with a header & trailer to create a frame

• In a point-to-point connection between two computers or devices, data is transmitted over a wire as a
stream of bits. However, these bits must be framed into discernible blocks of information.

• Framing is a function of the data link layer. It provides a way for a sender to transmit a set of bits that are meaningful to the
receiver.
Framing Methods

1. Byte count.
2. Flag bytes with byte stuffing.
3. Flag bits with bit stuffing.

4. Physical layer coding violations (encoding violations).


Framing Method 1: Byte Count Framing Method
• It uses a field in the header to specify the number of bytes in the frame.
• Once the header is received, the byte count is used to determine the end of the frame.
• The trouble with this algorithm is that if the count is received incorrectly, the destination gets out of synchronization with
the transmission.
• The destination may be able to detect that the frame is in error, but it has no means (in this algorithm) of
correcting it.
Framing Method 2: Flag Bytes with Byte Stuffing Framing Method

• This method solves frame boundary detection by having each frame begin and end with special bytes.
• If the starting and ending bytes are the same, they are called the flag byte.
• If the actual data contains a byte identical to the FLAG byte (e.g., in a picture or a data stream), the
convention is to insert an escape (ESC) character just before the accidental “FLAG” character.
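As an illustration of byte stuffing, here is a minimal Python sketch. The FLAG (0x7E) and ESC (0x7D) values are assumptions borrowed from PPP-style framing; the slides do not fix particular byte values.

FLAG = 0x7E   # assumed flag byte value (PPP-style); purely illustrative
ESC = 0x7D    # assumed escape byte value

def byte_stuff(payload: bytes) -> bytes:
    # Insert ESC before any FLAG or ESC byte inside the payload,
    # then delimit the frame with FLAG bytes at both ends.
    stuffed = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            stuffed.append(ESC)
        stuffed.append(b)
    stuffed.append(FLAG)
    return bytes(stuffed)

def byte_unstuff(frame: bytes) -> bytes:
    # Strip the delimiting FLAG bytes and remove every stuffed ESC byte.
    body = frame[1:-1]
    payload = bytearray()
    i = 0
    while i < len(body):
        if body[i] == ESC:
            i += 1            # the byte after ESC is literal data
        payload.append(body[i])
        i += 1
    return bytes(payload)

data = bytes([0x41, 0x7E, 0x42])            # payload containing an accidental FLAG
assert byte_unstuff(byte_stuff(data)) == data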
Framing Method 3: Flag Bits with Bit Stuffing Framing Method

• This method achieves the same thing as the byte stuffing method by using bits (1 bit) instead of bytes (8 bits).
• It was developed for the High-level Data Link Control (HDLC) protocol.
• Each frame begins and ends with a special bit pattern:
• 01111110 or 0x7E <- flag byte
• Whenever the sender’s data link layer encounters five consecutive 1s in the data, it automatically stuffs a 0
bit into the outgoing bit stream.
• USB uses bit stuffing.
Bit stuffing. (a) The original data. (b) The data as they appear on the line. (c) The data as they are stored in the receiver’s memory after destuffing.
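A small Python sketch of the bit-stuffing rule described above (stuff a 0 after five consecutive 1s, remove it at the receiver). Representing the bits as a string of '0'/'1' characters is purely an illustrative choice.

def bit_stuff(bits: str) -> str:
    # After every run of five consecutive 1s, stuff a 0 into the outgoing stream.
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')   # stuffed bit
            run = 0
    return ''.join(out)

def bit_unstuff(bits: str) -> str:
    # Remove the 0 that follows every run of five consecutive 1s.
    out, run, skip = [], 0, False
    for b in bits:
        if skip:              # this is the stuffed 0: drop it
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            skip = True
    return ''.join(out)

# The flag pattern 01111110 inside user data goes out as 011111010 on the line.
assert bit_stuff('01111110') == '011111010'
assert bit_unstuff('011111010') == '01111110'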
Error Control

• Once the marking of frame boundaries (start and end) is solved, the data link layer has to handle errors in
transmission and their detection.
• Ensuring that all frames are delivered to the network layer at the destination and in proper order.
• Unacknowledged connectionless service: it is OK for the sender to output frames regardless of its reception.
• Reliable connection-oriented service: it is NOT OK.
• Reliable connection-oriented service usually will provide a sender with some feedback about what is happening at the
other end of the line.
• The receiver sends back special control frames.
• If the sender receives a positive acknowledgement, it knows that the frame has arrived safely.
• A timer and a frame sequence number at the sender are necessary to handle the case when there is no response (positive
or negative) from the receiver.
Flow Control

• An important design issue for the case when the sender is running on a fast, powerful computer and the receiver is running on a
slow, low-end machine.
• Two approaches:
1. Feedback-based flow control
2. Rate-based flow control

1. Feedback-based Flow Control:


• Receiver sends back information to the sender giving it permission to send more data, or
• Telling sender how receiver is doing.

2. Rate-based Flow Control:


• A built-in mechanism limits the rate at which the sender may transmit data, without the need for feedback from the receiver.
Error Detection and Correction

• Data Link Layer uses some error control mechanism to ensure frames are transmitted with certain level
of accuracy

Types of error
• Single bit error
• Multiple bit error
• Burst bit error

• Two basic strategies to deal with errors:

• Error-detecting codes:
include only enough redundancy to allow the receiver to deduce that an error has occurred (but not which error).

• Error-correcting codes:
include enough redundant information to enable the receiver to deduce what the transmitted data must have been.
Error Detection and Correction

• Error codes are examined in the link layer because this is the first place we run up against the problem
of reliably transmitting groups of bits.
• The codes are reused elsewhere because reliability is an overall concern.
• Error-correcting codes are also used in the physical layer for noisy channels.
• They are commonly used in the link, network and transport layers.
• Error codes are the result of long fundamental research in mathematics.
• Many protocol standards borrow their codes from this large mathematical field.
Error-Detecting Codes

Three types of error-detecting codes, all of which are linear, systematic block codes:
1. Parity.
2. Checksums.

3. Cyclic Redundancy Checks (CRCs ).


What is a Hamming code?

• Hamming code is a linear code that can detect up to two-bit errors and correct single-bit errors.
• In Hamming code, the source encodes the message by adding redundant bits. These
redundant bits are generated and inserted at specific positions in the message to accomplish the
error detection and correction process.

• Process of Encoding a message using Hamming Code


• Calculation of total numbers of redundant bits.
• Checking the position of the redundant bits.
• Lastly, calculating the values of these redundant bits.

• Once the redundant bits are embedded in the message, it is sent to the receiver.
What is a Hamming code?

• Step 1) Calculation of the total number of redundant bits.


• Let us assume that the message contains:
• n - the number of data bits
• p - the number of redundant bits added to it, so that the p bits can indicate at least (n + p + 1)
different states.
• Here, (n + p) states identify the location of an error in each of the (n + p) bit positions, and one extra state
indicates no error.
• As p bits can indicate 2^p states, 2^p must be at least (n + p + 1).

• Step 2) Placing the redundant bits in their correct position.


• The p redundant bits should be placed at bit positions of powers of 2. For example, 1, 2, 4, 8, 16,
etc. They are referred to as p1 (at position 1), p2 (at position 2), p3 (at position 4), etc.

• Step 3) Calculation of the values of the redundant bit.


• The redundant bits are parity bits that make the number of 1s either even or odd.
Hamming Code

• Codeword: b1 b2 b3 b4 ….
• Check bits: The bits that are powers of 2 (p1, p2, p4, p8, p16, …).
• The rest of bits (m3, m5, m6, m7, m9, …) are filled with m data bits.
• Example of the Hamming code with m = 7 data bits and r = 4 check bits is given in the next
slide.
• Consider a message having four data bits (D) which is to be transmitted as a 7-bit codeword
by adding three error control bits. This would be called a (7,4) code. The three bits to be
added are three EVEN Parity bits (P), where the parity of each is computed on different
subsets of the message bits as shown below.
Bit position:  7  6  5  4  3  2  1

D  D  D  P  D  P  P   7-BIT CODEWORD
D  -  D  -  D  -  P   (EVEN PARITY)
D  D  -  -  D  P  -   (EVEN PARITY)
D  D  D  P  -  -  -   (EVEN PARITY)
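A short Python sketch of the (7,4) Hamming code laid out as in the table above (even-parity check bits at positions 1, 2 and 4; data bits at positions 3, 5, 6 and 7). Function names are illustrative.

def hamming74_encode(data: str) -> str:
    # data is a 4-character string of '0'/'1', e.g. '1011' (d1 d2 d3 d4).
    d = [int(b) for b in data]
    p1 = d[0] ^ d[1] ^ d[3]   # even parity over positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]   # even parity over positions 2, 3, 6, 7
    p4 = d[1] ^ d[2] ^ d[3]   # even parity over positions 4, 5, 6, 7
    # Codeword positions 1..7: p1 p2 d1 p4 d2 d3 d4
    return ''.join(str(b) for b in [p1, p2, d[0], p4, d[1], d[2], d[3]])

def hamming74_syndrome(code: str) -> int:
    # Returns 0 if no single-bit error is detected, otherwise the 1-indexed
    # position of the bit in error (which the receiver can then flip).
    c = [int(b) for b in code]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    return s1 + 2 * s2 + 4 * s4

cw = hamming74_encode('1011')              # -> '0110011'
assert hamming74_syndrome(cw) == 0         # no error detected
corrupted = cw[:2] + ('1' if cw[2] == '0' else '0') + cw[3:]
assert hamming74_syndrome(corrupted) == 3  # the bit at position 3 is flagged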
Hamming Code
Hamming Code ( Example)
1. Parity Check

• A k-bit dataword is changed to an n-bit codeword where n = k + 1.


• An extra bit (called the parity bit) is sent along with the original bits to make the number of 1s either even, in the case of even parity, or
odd, in the case of odd parity.
• Types of parity:
1. even parity
2. odd parity
• The minimum Hamming distance is dmin = 2, which means the code is a single-bit error-detecting code (dmin >= s + 1).
• Example: a parity-check code with k = 4 message bits and codeword length n = 5, i.e., an (n, k) = (5, 4) code.
Even-parity concept

A parity bit is added to every data unit so that the total number of 1s is even (or odd for odd-parity).
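A minimal Python sketch of the even/odd parity rule for the (5, 4) code mentioned above; function names are illustrative.

def add_parity_bit(dataword: str, even: bool = True) -> str:
    # Append one parity bit so the total number of 1s is even (or odd).
    ones = dataword.count('1')
    parity = ones % 2 if even else (ones + 1) % 2
    return dataword + str(parity)

def parity_ok(codeword: str, even: bool = True) -> bool:
    # A single-bit error flips the overall parity and is therefore detected.
    ones = codeword.count('1')
    return (ones % 2 == 0) if even else (ones % 2 == 1)

codeword = add_parity_bit('1011')     # k = 4 dataword -> n = 5 codeword '10111'
assert parity_ok(codeword)
assert not parity_ok('10110')         # one flipped bit is detected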
2. Checksum (used for error detection)

The sender follows these steps:
1. The unit is divided into k sections, each of n bits.
2. All sections are added using one’s complement to get the sum.
3. The sum is complemented and becomes the checksum.
4. The checksum is sent with the data.

The receiver follows these steps:
1. The unit is divided into k sections, each of n bits.
2. All sections are added using one’s complement to get the sum.
3. The sum is complemented.
4. If the result is zero, the data are accepted; otherwise, they are rejected.
Implementation of checksum
The sender initializes the checksum to 0 and adds all data items and the checksum.
36 cannot be expressed in 4 bits. The extra two bits are wrapped and added with the sum to create the wrapped sum
value 6.
The sum is then complemented, resulting in the checksum value 9 (15 − 6 = 9).
2. Checksum

• Internet checksum

• Sender site:
1. The message is divided into 16-bit words.
2. The value of the checksum word is set to 0.
3. All words including the checksum are added using one’s complement addition.
4. The sum is complemented and becomes the checksum.
5. The checksum is sent with the data.

• Receiver site:
1. The message (including checksum) is divided into 16-bit words.
2. All words are added using one’s complement addition.
3. The sum is complemented and becomes the new checksum.
4. If the value of checksum is 0, the message is accepted; otherwise, it is rejected.
Example

Suppose the following block of 16 bits is to be sent using a checksum of 8 bits.


10101001 00111001
The numbers are added using one’s complement
10101001
00111001
------------
Sum 11100010
Checksum 00011101
The pattern sent is 10101001 00111001 00011101
Example

Now suppose the receiver receives the pattern sent in previous Example
and there is no error. 10101001 00111001 00011101
When the receiver adds the three sections, it will get all 1s, which, after
complementing, is all 0s and shows that there is no error.
10101001
00111001
00011101
Sum 11111111
Complement 00000000 means that the pattern is OK.
Example

Now suppose there is a burst error of length 5 that affects 4 bits.


10101111 11111001 00011101
When the receiver adds the three sections, it gets
10101111
11111001
00011101
Partial Sum 1 11000101
Carry 1
Sum 11000110
Complement 00111001 the pattern is corrupted.
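A compact Python sketch of the one's-complement checksum, reproducing the 8-bit worked example above; the same code with bits=16 corresponds to the Internet checksum procedure.

def ones_complement_sum(words, bits=8):
    # One's-complement addition: any carry out of the top is wrapped around
    # and added back into the low end of the sum.
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        total = (total & mask) + (total >> bits)
    return total

def checksum(words, bits=8):
    # Checksum = one's complement of the one's-complement sum.
    return (~ones_complement_sum(words, bits)) & ((1 << bits) - 1)

data = [0b10101001, 0b00111001]       # the two 8-bit words from the example
csum = checksum(data)
assert csum == 0b00011101             # checksum sent with the data

# Receiver: summing data plus checksum and complementing gives all zeros.
assert checksum(data + [csum]) == 0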
3. Cyclic Redundancy Checks (CRCs)

Cyclic codes are special linear block codes.


If a codeword is cyclically shifted (rotated), the result is another codeword.
Ex : 1011000 is a codeword and if cyclically left-shifted, the result 0110001 is also a codeword.

• Cyclic Redundancy Check


– The use of cyclic codes to detect and correct errors
3. Cyclic Redundancy Checks (CRCs)

CRC generator and checker


3. Cyclic Redundancy Checks (CRCs) Example: Division in CRC Encoder

Number of zeros to be appended = highest degree of the generator polynomial.
3. Cyclic Redundancy Checks (CRCs) Example : Division in CRC Decoder
3. Cyclic Redundancy Checks (CRCs)

Example calculation of the CRC
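Since the worked division itself is in the figure, here is a small Python sketch of the modulo-2 division used by the CRC encoder and decoder. The dataword 1001 and generator 1011 (x^3 + x + 1) are illustrative values, not taken from the slides.

def crc_remainder(dataword: str, generator: str) -> str:
    # Append as many zeros as the degree of the generator, then do
    # modulo-2 (XOR) long division; the remainder is the CRC.
    degree = len(generator) - 1
    bits = [int(b) for b in dataword + '0' * degree]
    gen = [int(b) for b in generator]
    for i in range(len(dataword)):
        if bits[i] == 1:                       # divide only when the leading bit is 1
            for j in range(len(gen)):
                bits[i + j] ^= gen[j]          # XOR is modulo-2 subtraction
    return ''.join(str(b) for b in bits[-degree:])

def crc_check(codeword: str, generator: str) -> bool:
    # The decoder divides the received codeword; a zero remainder means
    # no error was detected.
    degree = len(generator) - 1
    bits = [int(b) for b in codeword]
    gen = [int(b) for b in generator]
    for i in range(len(codeword) - degree):
        if bits[i] == 1:
            for j in range(len(gen)):
                bits[i + j] ^= gen[j]
    return not any(bits[-degree:])

crc = crc_remainder('1001', '1011')            # three zeros appended (degree 3)
assert crc_check('1001' + crc, '1011')         # receiver accepts the codeword
assert not crc_check('1101' + crc, '1011')     # a corrupted codeword is rejected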


3.1 ELEMENTARY DATA LINK PROTOCOLS

● Protocols in the data link layer are designed so that this layer can perform its basic
functions: framing, error control and flow control.

● Framing is the process of dividing the bit stream from the physical layer into data frames
whose size ranges from a few hundred to a few thousand bytes.

● Error control mechanisms deal with transmission errors and the retransmission of
corrupted and lost frames.

● Flow control regulates the speed of delivery so that a fast sender does not drown a
slow receiver.

39
3.1 Introduction

✔ The Data Link layer exists as a connecting layer between the software processes of the layers above it and the
Physical layer below it (Fig.a)

● It prepares the Network layer packets for transmission across some form of media, be it copper, fiber, or the
atmosphere.

● Data Link layer is embodied as a physical entity, such as an Ethernet network interface card (NIC), which
inserts into the system bus of a computer and makes the connection between running software processes on
the computer and physical media.

● Software associated with the NIC enables it to prepare data for transmission and encode it as signals to be sent
on the associated media.

40
Types of Data Link Protocols
Data link Protocols

3.1 Utopian Simplex Protocol

● Unrealistic protocol because:

○ It does not worry about the possibility of anything going wrong

○ Unidirectional protocol

○ Transmitting and receiving layers are always ready; processing time is insignificant

○ Infinite buffer space

○ Communication channel never breaks down or loses data

42
Utopian Simplex Protocol
● Protocol consists of two procedures: sender and receiver
● Both run in data link layer of source and destination machines respectively
● No sequence numbers or acknowledgements are used
● Only frame_arrival event type is used to indicate arrival of undamaged frame

● Sender is in infinite while loop pumping data out onto the channel as fast as
possible
● The loop consists of three actions:
○ Fetch a packet from the network layer
○ Construct an outbound frame using the variable s
○ Send the frame on its way
● Only the info field of the frame is used by the protocol

43
Utopian Simplex Protocol

● At receiver side, it waits for an undamaged frame to arrive


● When the frame arrives, procedure “wait_for_event” returns and event is set at “frame arrival”
● The call to “from_physical_layer” removes the newly arrived frame from the hardware buffer and puts
it in variable r
● Then the data is passed on to the network layer
● Data link layer settles back waiting for next frame

Summary
The Simplex protocol is a hypothetical protocol designed for unidirectional data transmission over an ideal channel, i.e. a channel through which
transmission can never go wrong. It has distinct procedures for the sender and the receiver. The sender simply sends all its data onto the channel
as soon as they are available in its buffer. The receiver is assumed to process all incoming data instantly. It is hypothetical since it does not handle flow
control or error control.
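A toy, in-process Python sketch of the utopian simplex protocol, with the ideal error-free channel modelled as a plain queue; all names (channel, sender, receiver) are illustrative, not part of any real networking API.

from collections import deque

channel = deque()                      # the "physical layer": never loses or damages frames

def sender(packets):
    for packet in packets:             # fetch a packet from the network layer
        frame = {"info": packet}       # build an outbound frame; only the info field is used
        channel.append(frame)          # send the frame on its way: no ack, no sequence number

def receiver(to_network_layer):
    while channel:                     # wait_for_event == frame_arrival
        frame = channel.popleft()      # from_physical_layer: take the frame from the buffer
        to_network_layer(frame["info"])

sender(["p0", "p1", "p2"])
receiver(print)                        # delivers p0, p1, p2 in order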

44
3.1 Stop and wait protocol (Error free channel)

● Communication channel is assumed to be error free


● Data traffic is considered to be simplex
● It only tackles the problem of preventing the sender from flooding the receiver with frames faster than
receiver is able to process them
● Two ways to solve the problem of flooding:
a. Build a receiver powerful enough to process a continuous stream of back to back frames:
■ Needs sufficient buffering and processing abilities
■ Needs dedicated hardware
■ Resources can be of no use if utilization of link is low most of the time
b. Receiver sends feedback
■ Receiver sends a dummy frame (acknowledgement) to the sender after sending the packet to
network layer; giving permission to sender to transmit next packet

45
Stop and wait protocol (Error free channel)

● Protocols in which the sender sends one frame and then waits for an acknowledgement before proceeding are called
stop-and-wait protocols

● Communication between sender and receiver is bidirectional

● Only the sender or the receiver can send a frame at any one time, so a half-duplex connection is formed

● The sending data link layer need not inspect an incoming frame, as it can only be an acknowledgement
Summary
The Stop-and-Wait protocol is also for a noiseless channel. It provides unidirectional data transmission without any error control facilities.
However, it provides flow control so that a fast sender does not drown a slow receiver. The receiver has a finite buffer size and
finite processing speed. The sender can send a frame only when it has received an indication from the receiver that it is available for
further data processing.

● Advantage:
● The next frame is transmitted only when the first frame is acknowledged, so there is no chance of a frame being lost

● Disadvantage:
1. It makes the transmission process slow
2. If the two devices are far apart, a lot of time is wasted waiting for acknowledgements.
46
3.1 Stop and wait protocol (Noisy channel)

● Communication channel is not assumed to be error free


● Frames may be damaged or lost during transmission (Fig.a)
● This protocol assumes that if a frame is damaged, the receiver hardware will detect it while computing the
checksum; no acknowledgement will then be sent, and after a timeout the sender will resend the
frame
● If a damaged frame yields a correct checksum (an unlikely occurrence), this protocol can fail (deliver an incorrect
packet)

47
Fig. a 48
3.1 Stop and wait protocol scenario (Noisy channel)

● Consider a scenario where machine A sends a series of packets to its data link layer which needs to be sent
to network layer of machine B by its own data link layer
● The network layer on machine B has no way of knowing whether a packet has been lost or duplicated
● Hence the data link layer must guarantee that no combination of transmission errors can cause a duplicate or lost
packet to be passed to the network layer

49
3.1 Stop and wait protocol scenario (Noisy channel)

● In this scenario; consider following case:


Network layer on A gives packet 1 to its data link layer. Packet is received error free at B and
passed on to its network layer
B sends acknowledgement frame to A but it gets lost (Fig. b)
Data link layer on A eventually times out. Not receiving an acknowledgement , it assumes its data
frame was lost or damaged and resends it
Duplicate frame also arrives at B error free and is passed on to network layer
Thus, if A is sending a file to B, part of it will be duplicated and copy of the file at B will be
incorrect

50
Fig. b 51
3.1 Stop and wait protocol scenario (Noisy channel)

● To avoid this, receiver needs to be able to distinguish a frame from its retransmitted one
● Solution for this is to add a sequence number in the header of each frame it sends
● Receiver can check the sequence number and see if it is a new frame or a duplicated one
● Number of bits assigned for the sequence number in the header should be minimum (1 bit is sufficient)
● If the sender sends a frame m and it is damaged or lost, the receiver will not acknowledge it and the sender keeps
sending it again instead of its successor frame m+1
● The only ambiguity is whether, depending on the reception of the acknowledgement frame, the sender should send frame m or
frame m+1

52
Disadvantages of Stop and Wait Protocol
3.1 AUTOMATIC REPEAT REQUEST (ARQ) PROTOCOL

● In this protocol, a 1-bit sequence number is added to the header


● At each instant of time, receiver expects a particular sequence number next
● When a frame containing correct sequence number arrives, it is accepted and passed on to network layer and
acknowledged
● Then the expected sequence number is incremented modulo 2 (i.e. 0 becomes 1 and 1 becomes 0)
● If a frame arrives with a wrong sequence number, it is rejected as a duplicate (Fig. c)
● In this case, last valid acknowledgement is repeated for sender to discover which frame has been received
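A toy Python simulation of this stop-and-wait ARQ behaviour with a 1-bit sequence number over a lossy channel. The loss model (random.random() against a loss probability) and all names are assumptions made purely for illustration.

import random

def stop_and_wait_arq(packets, loss_prob=0.3, seed=1):
    random.seed(seed)
    delivered, expected_seq, seq = [], 0, 0
    for packet in packets:
        while True:                                    # keep retransmitting until acked
            frame_lost = random.random() < loss_prob   # data frame lost on the way out?
            ack_lost = random.random() < loss_prob     # acknowledgement lost on the way back?
            if not frame_lost:
                if seq == expected_seq:                # new frame: accept and deliver it
                    delivered.append(packet)
                    expected_seq ^= 1                  # increment expected number modulo 2
                # a duplicate (wrong sequence number) is rejected; the ack is repeated
                if not ack_lost:
                    seq ^= 1                           # sender moves on to the next frame
                    break
            # otherwise the sender times out and resends the same frame
    return delivered

# Despite losses, each packet is delivered exactly once and in order.
assert stop_and_wait_arq(["a", "b", "c"]) == ["a", "b", "c"]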

54
Fig. c 55
3.1 SLIDING WINDOW PROTOCOL

● In previous protocols, frames were transmitted in one direction only

● To achieve full duplex transmission, we can run two instances of previous protocols using a separate link
with simplex transmission in different directions

● Each link has a “FORWARD CHANNEL” for data and a “REVERSE CHANNEL” for acknowledgements

● In both links, the capacity of the reverse channel is almost entirely wasted

● Hence there was a need for full-duplex transmission over the same link, which is what the sliding window protocol provides

56
3.1 SLIDING WINDOW PROTOCOL

● In this protocol, data frames from A to B are intermixed with acknowledgement frames from B to A

● Referring to the “kind” field in the header of an incoming frame, receiver can differentiate whether it is data
or acknowledgement.

● Hence, we INTERLEAVE data and control frames on the same link

57
Data Link Layer Header

58
3.1 SLIDING WINDOW PROTOCOL

● Another advantage is, when a data frame arrives, instead of sending a control frame immediately, receiver
restrains itself till the next data frame is passed down from the network layer
● Acknowledgement is attached to this next outgoing data frame, using the acknowledgement field in the header
● Thus, acknowledgement gets a free ride with the outgoing data packet and no bandwidth is wasted
● This technique of delaying acknowledgements temporarily so that they can be hooked to the next outgoing
frame is known as PIGGYBACKING

59
Sender Window & Receiver Window
3.4 SLIDING WINDOW PROTOCOL

● Advantages:
Better use of available channel bandwidth
Number of frames sent is reduced
Thus reducing the processing load at the receiver

● Disadvantages:
Wait time for data link layer to piggyback the acknowledgement is not known
If DLL waits longer than sender’s time out period, retransmission takes place thus duplicating the frame
Needs an ad-hoc system to wait a fixed amount of time to piggyback acknowledgement or else send it
as a separate frame

61
3.1 TYPES OF SLIDING WINDOW PROTOCOL

● There are three variations in Sliding Window Protocol


○ 1 - bit sliding window
○ Go Back -N
○ Selective Repeat

62
3.1 1-BIT SLIDING WINDOW PROTOCOL

● A sliding window protocol with window size 1 uses a stop and wait protocol indirectly because sender
transmits one frame and waits for acknowledgement before sending the next one
● Normally, one of the two data link layers goes first and transmits first frame
● Starting machine collects data from its network layer, builds a frame and sends it
● At the receiver, the frame is checked and passed to the network layer, and the receiver’s window is slid forward

63
3.1 1-BIT SLIDING WINDOW PROTOCOL

● Steps for protocol:


Acknowledgement field contains the number of last frame received without error
If this number agrees with the sequence number of the frame the sender is trying to send, sender knows it
is done with the frame stored in the buffer and can fetch the next packet
If the sequence number disagrees, it continues to send the same frame

64
3.1 1-BIT SLIDING WINDOW PROTOCOL – ISSUES

● Consider computer A is trying to send frame A0 to computer B but A’s timeout interval is too short
● In such a scenario, A will timeout repeatedly and send a series of identical frames all with Seq No 0 and Ack
No 1
● B will reject every duplicate frame and it will send A frame with Seq No 0 and Ack No 0
● Eventually A will send the correct packet
● Protocol works correctly in this scenario but after a series of lost frames and timeouts

65
Fig. 3.4 66
3.2 GO BACK-N PROTOCOL

● In this protocol, the sender can transmit up to a window of w frames before blocking transmission while
waiting for acknowledgement

● With large enough choice for w, sender will be able to continuously transmit frames since acknowledgements
for previous frames will arrive before the window becomes full

● To find correct value of w, we consider how many frames can fit in the channel as they propagate from sender
to receiver

67
3.2 GO BACK-N PROTOCOL

● w can be calculated as:


■ w = 2BD + 1
● Where BD is Bandwidth-Delay Product given as:
= bandwidth * one way transit time
● 2BD is the number of frames that can be outstanding if the sender continuously sends frames when the round
trip time to receive an acknowledgement is considered
● 1 is added because an acknowledgement frame will not be sent unless a complete frame is received

68
3.2 GO BACK-N PROTOCOL
(example of window size)

● Consider a link with a bandwidth of 50 kbps and a one-way transit time of 250 msec

● Thus, bandwidth delay product is 12.5kbit (50kbps*250msec) or 12.5 frames of 1000 bits each

● In this case, w is = 2(12.5)+1 = 26 frames

● Assume sender begins sending frame 0 and then sends a new frame every 20 msec

● By the time 26 frames are sent, time lapsed is 520msec and acknowledgement for frame 0 will have just arrived

69
3.4.2 GO BACK-N PROTOCOL
(example of window size)

● Thereafter, acknowledgements arrive every 20 msec

● From then onwards, 26 unacknowledged frames will always be outstanding

● For smaller window sizes for this example, utilization of the link will be less than 100% since the sender will
be blocked sometimes.

● Link utilization can be determined as:


Link utilization <= w / (2BD + 1)
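The window-size arithmetic above can be reproduced with a few lines of Python; the numbers (50 kbps, 250 msec, 1000-bit frames) are the ones used in the example.

bandwidth_bps = 50_000
one_way_delay_s = 0.250
frame_bits = 1000

bd_frames = bandwidth_bps * one_way_delay_s / frame_bits   # 12.5 frames in the pipe one way
w = int(2 * bd_frames + 1)                                  # w = 2BD + 1 = 26 frames

def link_utilization(window, bd_frames):
    # Utilization <= w / (2BD + 1), capped at 100%.
    return min(1.0, window / (2 * bd_frames + 1))

print(w)                                   # 26
print(link_utilization(1, bd_frames))      # stop-and-wait: about 3.8%
print(link_utilization(26, bd_frames))     # full window: 1.0, i.e. 100%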

70
3.4.2 GO BACK-N PROTOCOL (pipelining)

● This technique of keeping multiple frames in flight is known as Pipelining

● Pipelining frames over unreliable communication channel may result in lost or damaged frames at the
receiver side.

● Receiving data link layer hands frames in sequence to network layer.

● If any one frame is damaged, it is supposed to be discarded but question arises for the correct frames
following it.

● There are two approaches in this scenario:

● First option (w =1) is for receiver to discard all subsequent frames and no acknowledgement be sent for
those frames
● The sender will timeout and retransmit all unacknowledged frames in order starting with damaged one

71
3.2 GO BACK-N PROTOCOL (Pipelining)

72
3.2 GO BACK-N PROTOCOL (Pipelining)

● In the first option, DLL refuses to accept any frame other than the next one

● In the second option (w >> 1); (fig. b)


○ Sender continues to send frames until the timer for damaged or lost frame is expired.
○ As the timer expires, it backs up to the damaged frame and resends all the frames following it.

73
3.1 GO BACK-N PROTOCOL (Pipelining)

74
3.1 GO BACK-N PROTOCOL

75
Advantages of Go back N ARQ
3.1 GO BACK-N PROTOCOL (Issues)

● Works when errors are rare


● If connection is unreliable, bandwidth is wasted on retransmission of frames
● No buffer is applied to the frames received after a damaged or lost frame
● Multiple timers are needed; one for each frame

77
Disadvantages of Go back N
3.1 SELECTIVE REPEAT PROTOCOL

• Why Selective Repeat Protocol?

The go-back-n protocol works well if errors are rare, but if the line is poor it wastes a lot of bandwidth on
retransmitted frames. An alternative strategy, the selective repeat protocol, is to allow the receiver to accept
and buffer the frames following a damaged or lost one.
• Selective Repeat attempts to retransmit only those packets that are actually lost (due to errors) :
• Receiver must be able to accept packets out of order.
• Since receiver must release packets to higher layer in order, the receiver must be able to buffer some packets.
3.1 SELECTIVE REPEAT PROTOCOL

● In this protocol, sender and receiver both maintain a window of outstanding and acceptable
sequence numbers respectively

● Sender’s window starts from 0 and grows up to a predefined maximum number

● Receiver’s window in contrast, is fixed and equal to predefined maximum

● Receiver has a buffer reserved for each sequence number within its fixed window

80
3.1 SELECTIVE REPEAT PROTOCOL

● When a frame arrives, its sequence number is checked to see if it falls within the window
If so, and it has not been received yet, it is accepted and stored

It is stored even if it is not the next packet expected by the network layer; it is kept in the DLL and not
passed to the network layer until all lower-numbered frames have been delivered to the network layer

In Selective Repeat ARQ, the size of the sender and receiver windows must be at most one-half of 2^m.
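A small Python sketch of the selective-repeat receiver's accept-and-buffer logic described above; the class name, m = 3 sequence-number bits and the deliver callback are illustrative assumptions.

class SelectiveRepeatReceiver:
    def __init__(self, m_bits=3):
        self.modulus = 2 ** m_bits           # sequence numbers are taken modulo 2^m
        self.window = self.modulus // 2      # window size at most one-half of 2^m
        self.expected = 0                    # lowest sequence number not yet delivered
        self.buffer = {}                     # out-of-order frames waiting for delivery

    def receive(self, seq, payload, deliver):
        # Accept and buffer any frame inside the window; hand frames to the
        # network layer only once all lower-numbered frames have arrived.
        offset = (seq - self.expected) % self.modulus
        if offset < self.window and seq not in self.buffer:
            self.buffer[seq] = payload
        while self.expected in self.buffer:  # release in-order frames
            deliver(self.buffer.pop(self.expected))
            self.expected = (self.expected + 1) % self.modulus

rx = SelectiveRepeatReceiver()
rx.receive(1, "B", print)                    # out of order: buffered, nothing delivered yet
rx.receive(0, "A", print)                    # delivers "A" and then "B" in order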

81
3.1 SELECTIVE REPEAT PROTOCOL
Comparison between Stop and wait and sliding window Protocol

Stop-and-Wait Protocol vs. Sliding Window Protocol:

• Behaviour: request and reply vs. simultaneous transmission
• Number of transferable frames: only one vs. multiple
• Efficiency: less vs. comparatively more
• Acknowledgement: sent after each arriving packet vs. a window of acknowledgements is maintained
• Type of transmission: half duplex vs. full duplex
• Propagation delay: long vs. short
Comparison between Go-Back-N and Selective Repeat

• Basic: Go-Back-N retransmits all the frames sent after the frame suspected to be damaged or lost; Selective Repeat retransmits only those frames suspected to be lost or damaged.
• Bandwidth utilization: if the error rate is high, Go-Back-N wastes a lot of bandwidth; Selective Repeat wastes comparatively less bandwidth on retransmission.
• Complexity: Go-Back-N is less complicated; Selective Repeat is more complex, as it requires extra logic, sorting and storage at sender and receiver.
• Window size: Go-Back-N uses N-1; Selective Repeat uses <= (N+1)/2.
• Sorting: Go-Back-N requires no sorting at sender or receiver; the Selective Repeat receiver must be able to sort, as it has to maintain the sequence of the frames.
• Storing: the Go-Back-N receiver does not store frames received after a damaged frame until the damaged frame is retransmitted; the Selective Repeat receiver stores frames received after the damaged frame in a buffer until the damaged frame is replaced.
• Searching: Go-Back-N requires no searching of frames at sender or receiver; the Selective Repeat sender must be able to search for and select only the requested frame.
• ACK/NAK numbers: in Go-Back-N the NAK number refers to the next expected frame number; in Selective Repeat the NAK number refers to the lost frame.
• Use: Go-Back-N is used more often; Selective Repeat is used less in practice because of its complexity.
Comparison
3.2 Media Access Control

Networks are classified into two categories


a) Point-to-point networks   b) Broadcast networks
Broadcast channels are also called multi-access channels.
In broadcast networks the most important issue is the criterion by which we decide who is allowed to
use the common channel when more than one user wants to use it.
The protocols used to determine who goes next on a multiaccess channel belong to the MAC sublayer.
3.2 Media Access Control

Medium Access Control (MAC) is a sublayer of the Data-link layer.


The protocols used to determine who goes next on a multiaccess channel belong to the MAC sublayer.
MAC is important in LANs, which use a multiaccess channel as the basis for communication.
MAC is used on broadcast networks.
The Channel Allocation Problem

• There are two schemes to allocate a single channel among competing users:

1) Static channel allocation in LANs & MANs.


2) Dynamic Channel Allocation in LANs & MANs
Static channel Allocation in LANs and WANs

► Traditional way of allocating a single channel, such as a telephone trunk, among multiple competing users is to
chop its capacity by using one of the multiplexing scheme such as FDM, TDM etc.
► FDM
If there are N users, the bandwidth is divided into N equal portions, with each user being assigned one
portion.
Since each user has a private frequency band, there is no interference among users.
A wireless example is FM radio stations. Each station gets a portion of the FM band and uses it most of the
time to broadcast its signal
Problems with FDM: when some users are quiet, their bandwidth is simply lost. They are not using it and no
one else is allowed to use it either.
A static allocation is a poor fit for most computer systems, in which data traffic is extremely bursty, often with
peak-to-average traffic ratios of 1000:1. Consequently, most of the channel will be idle most of the time.
Static channel Allocation in LANs and WANs

Frequency Division Multiplexing (FDM)

92
Dynamic Channel Allocation in LANs and WANs
In this method neither a fixed frequency nor a fixed time slot is allotted to a user; a user can use the
single channel as per requirement.
Dynamic allocation strategies:
1. Contention resolution approach
Collision: If two frames are transmitted simultaneously, they overlap in time and resulting signal is garbled.
This event is called a Collision. All stations can detect that a collision has occurred. A collided frame
must be transmitted again later. No errors other than those generated by collision occur.

2. Perfectly scheduled approach:

A contention-free approach, e.g. polling or reservation.

93
MULTIPLE ACCESS

ALOHA, CSMA, CSMA/CD


MULTIPLE ACCESS PROTOCOLS -CLASSIFICATION

95
3.2 ALOHA

ALOHA is a system proposed for solving the channel allocation problem.


There are two versions of ALOHA:
Pure ALOHA
Slotted ALOHA
The basic difference with respect to timing is:
Pure ALOHA does not require global time synchronization; i.e. time is continuous
In slotted ALOHA time is divided into discrete slots

96
Pure ALOHA

The system is working as follows:

Let users transmit whenever they have data to be sent.
Collisions are expected to occur.
The collided frames will be destroyed.
A feedback mechanism is used to learn the status of each frame.
Destroyed frames are retransmitted.

97
Pure ALOHA

Fig 4.2.1a frames are transmitted at completely arbitrary times. 98


Pure ALOHA

Fig 4.2.1b frames are transmitted at completely arbitrary times. 99


Pure ALOHA

Such systems where multiple users share a common channel in a way that leads to conflicts are known
as ‘contention systems’

As seen in Fig. 4.2.1a and b, whenever two frames try to occupy the channel at the same time, collision
occurs and both frames are garbled.

100
Pure ALOHA - Vulnerable time for collision

Fig. 4.2.2 Vulnerable period for the shaded frame. 101


Pure ALOHA - Throughput

Assume that frames are generated according to a Poisson distribution with a mean of G frames per frame time (new frames plus
retransmissions). The probability that k frames are generated during a frame time is:
Pr[k] = G^k e^(-G) / k!

Thus, the probability of zero frames is e^(-G).

102
Pure ALOHA - Throughput

In the vulnerable interval of two frame times, the mean number of frames generated is 2G, so the probability that
no other frame is sent during that interval is e^(-2G).

Thus, the throughput per frame time, S, is given as:

S = G e^(-2G)

As seen in Fig. 4.2.3, the throughput is maximum at G = 0.5, where S = 0.184

103
Pure ALOHA - Throughput

Fig. 4.2.3 Throughput versus offered traffic for ALOHA systems. 104
Pure ALOHA - Disadvantages

The main disadvantage of Pure ALOHA is a low channel utilization.

This is expected due to the feature that all users transmit whenever they want.

105
Slotted ALOHA

Time is divided into discrete intervals; each interval corresponding to one frame
Stations can only transmit frames at the beginning of the slot
Hence, synchronization is needed, for which a ‘pip’ is emitted at the start of each interval like a clock
Vulnerable period is halved as a station has to wait for its slot to begin.

106
Slotted ALOHA

107
Slotted ALOHA - Throughput

The probability that no other traffic is present during the same slot as our frame, i.e. that a collision is avoided, is
e^(-G).
Thus, the throughput per frame time is given as:
S = G e^(-G)

As seen in Fig. 4.2.3, the throughput of slotted ALOHA is maximum at G = 1, where
S = 0.368
In the best case this protocol gives 37% empty slots, 37% successes and 26% collisions
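Both throughput curves of Fig. 4.2.3 follow directly from the two formulas; a few lines of Python confirm the maxima quoted above.

import math

def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)        # S = G e^(-2G), maximum 0.184 at G = 0.5

def slotted_aloha_throughput(G):
    return G * math.exp(-G)            # S = G e^(-G), maximum 0.368 at G = 1

print(round(pure_aloha_throughput(0.5), 3))     # 0.184
print(round(slotted_aloha_throughput(1.0), 3))  # 0.368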

108
Slotted ALOHA - Vulnerable Time

Fig. 4.2.4 Vulnerable time for the slotted ALOHA 109


Slotted ALOHA - Disadvantages

● The main disadvantage of slotted ALOHA is idle time slots


● Another disadvantage is need for clock synchronization

110
Pure ALOHA Vs Slotted ALOHA

111
Pure ALOHA Vs Slotted ALOHA

112
CARRIER SENSE MULTIPLE ACCESS (CSMA)

•The CSMA protocol was developed to overcome the problem found in ALOHA, i.e. to minimize the chances of
collision and so improve the performance.

•The CSMA protocol is based on the principle of ‘carrier sense’.

•The chances of collision can be reduced to a great extent if a station senses the channel before trying to use it.

•Although CSMA can reduce the possibility of collision, it cannot eliminate it completely.

•The chance of collision still exists because of propagation delay.


Different Types of CSMA protocols

•There are three different types of CSMA protocols :-


(i) 1-Persistent CSMA
(ii) Non-Persistent CSMA
(iii) P-Persistent CSMA
(i) 1-Persistent CSMA

•In this method, a station that wants to transmit data continuously senses the channel to check whether the channel
is idle or busy.

•If the channel is busy , the station waits until it becomes idle.

•When the station detects an idle channel, it immediately transmits the frame with probability 1. Hence it is called
1-persistent CSMA.

•This method has the highest chance of collision because two or more stations may find the channel to be idle at the
same time and transmit their frames.

•When the collision occurs, the stations wait a random amount of time and start all over again.
1-Persistent CSMA (cont..)
Drawback of 1-persistent

•The propagation delay time greatly affects this protocol. Let us suppose, just after the station 1 begins its
transmission, station 2 also become ready to send its data and sense the channel. If the station 1 signal has not yet
reached station 2, station 2 will sense the channel to be idle and will begin its transmission. This will result in
collision.
•Even if propagation delay time is zero, collision will still occur. If two stations become ready in the middle of
third station’s transmission both stations will wait until the transmission of first station ends and both will begin
their transmission exactly simultaneously. This will also result in collision.

(Figure: 1-persistent CSMA timeline: the station continuously senses the channel while it is busy, and transmits as soon as the channel becomes idle.)


(ii) Non –persistent CSMA

•A station that has a frame to send senses the channel.

•If the channel is idle, it sends immediately.

•If the channel is busy, it waits a random amount of time and then senses the channel again.

•In non-persistent CSMA the station does not continuously sense the channel for the purpose of seizing it
when it detects the end of the previous transmission.
Advantages and Disadvantages of Non-persistent Method

Advantages:
• It reduces the chances of collision because the stations wait a random amount of time. It is unlikely that two or
more stations will wait for the same amount of time and retransmit at the same time.

Disadvantages
•It reduces the efficiency of the network because the channel remains idle even when there may be stations with frames to
send.
• This is because the stations wait a random amount of time after sensing a busy channel.
(iii) p-persistent CSMA

•This method is used when channel has time slots such that the time slot duration is equal to or greater than
the maximum propagation delay time.

•Whenever a station becomes ready to send, it senses the channel.

•If channel is busy, station waits until next slot.

•If the channel is idle, it transmits with a probability p.

•With the probability q=1-p, the station then waits for the beginning of the next time slot.

•If the next slot is also idle, it either transmits or wait again with probabilities p and q.

•This process is repeated till either frame has been transmitted or another station has begun transmitting.

•If another station begins transmitting, the station acts as though a collision has occurred: it waits a
random amount of time and starts again.
Advantages of p-persistent

✔ It reduces the chances of collision and improves the efficiency of the network.

p-persistent CSMA protocol

1. If the medium is idle, transmit with probability p, and delay for one time unit with probability (1 - p) (time
unit = length of propagation delay) .
2. If the medium is busy, continue to listen until medium becomes idle, then go to Step 1 .
3. If transmission is delayed by one time unit, continue with Step 1
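The three rules above can be condensed into a small Python helper; the function name and return values are illustrative only.

import random

def p_persistent_decision(channel_idle, p, rng=random.random):
    # One slot of the p-persistent rule: on a busy medium keep listening;
    # on an idle slot transmit with probability p, otherwise defer one slot.
    if not channel_idle:
        return "listen"
    if rng() < p:
        return "transmit"
    return "defer"                      # with probability q = 1 - p, wait for the next slot

print(p_persistent_decision(channel_idle=True, p=0.1, rng=lambda: 0.05))   # transmit
print(p_persistent_decision(channel_idle=True, p=0.1, rng=lambda: 0.50))   # defer
print(p_persistent_decision(channel_idle=False, p=0.1))                    # listen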
CSMA/CD collision detection

✔ It’s an analog process.


✔ The station's hardware must listen to the channel while it is
transmitting.
✔ If the signal it reads back is different from the signal it is pulling
out, it knows that a collision is occurring.
✔ The implication is that a received signal must not be tiny
compared to the transmitted signal (which is difficult for wireless,
as received signals may be 1,000,000 times weaker than
transmitted signals).
✔ And the modulation must be chosen to allow collisions to be
detected (e.g. a collision of two 0 volt signals may well be
impossible to detect.).
CSMA/CD collision detection

► CSMA/CD, as well as many other LAN protocols, uses the conceptual model as shown in next figure
► At the point marked t0, a station has finished transmitting its frame.
► Any other station having a frame to send may now attempt to do so.
► If two or more stations decide to transmit simultaneously, there will be a collision.
► If a station detects a collision, it aborts its transmission, waits a random period of time, and then tries again
(assuming that no other station has started transmitting in the meantime).
► Therefore our model for CSMA/CD will consist of alternating contention and transmission periods, with the
idle periods occurring when all stations are quiet (e.g for lack of work)
CSMA with Collision Detection

CSMA/CD can be in one of three states: contention, transmission, or idle.


CSMA/CD collision detection

• The minimum time to detect the collision is just the time it takes the signal to propagate from one station to the
other.
• In the worst case, a station cannot be sure that it has seized the channel until it has transmitted for 2 τ without
hearing a collision.
• So, CSMA/CD contention can be considered as a slotted ALOHA system with a slot width of 2 τ.
• The difference between CSMA/CD and slotted ALOHA is that slots in which only one station
transmits (i.e. in which the channel is seized) are followed by the rest of a frame.
• This difference greatly improves performance if the frame time is much longer than the propagation time.
CSMA/CD

• Algorithms
• The algorithm of CSMA/CD is:
• When a frame is ready, the transmitting station checks whether the channel is idle or busy.
• If the channel is busy, the station waits until the channel becomes idle.
• If the channel is idle, the station starts transmitting and continually monitors the channel to detect collision.
• If a collision is detected, the station starts the collision resolution algorithm.
• Otherwise, the station resets the retransmission counter and completes the frame transmission.
• The algorithm of Collision Resolution is:
• The station continues transmission of the current frame for a specified time along with a jam signal, to ensure that all the
other stations detect collision.
• The station increments the retransmission counter.
• If the maximum number of retransmission attempts is reached, then the station aborts transmission.
• Otherwise, the station waits for a backoff period, which is generally a function of the number of collisions, and restarts the
main algorithm.
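A compressed Python sketch of the transmit loop above. The binary exponential backoff formula, the attempt limit of 16 and every function passed in (channel_idle, transmit, detect_collision, wait) are assumptions standing in for the NIC hardware, not part of any real API.

import random

MAX_ATTEMPTS = 16          # assumed abort threshold

def csma_cd_send(frame, channel_idle, transmit, detect_collision, wait, slot_time):
    attempts = 0
    while attempts < MAX_ATTEMPTS:
        while not channel_idle():           # wait until the channel becomes idle
            pass
        transmit(frame)                     # start sending and monitor while transmitting
        if not detect_collision():
            return True                     # success: reset counters, frame is done
        attempts += 1                       # collision detected (jam signal assumed sent)
        k = min(attempts, 10)
        backoff = random.randint(0, 2 ** k - 1) * slot_time
        wait(backoff)                       # random backoff, then try again
    return False                            # too many collisions: abort the transmission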
The following flowchart summarizes the algorithms:
