Computer
UNIT 2
Syllabus:
The Data link layer: Design issues of DLL, Error detection and correction,
Elementary data link protocols, Sliding window protocols.
The medium access control sublayer: The channel allocation problem,
Multiple access protocols.
To accomplish these goals, the data link layer takes the packets it gets from the
network layer and encapsulates them into frames for transmission. Each frame
contains a frame header, a payload field for holding the packet, and a frame trailer,
as illustrated in Fig. 3-1. Frame management forms the heart of what the data link
layer does. In the following sections we will examine all the above mentioned
issues in detail.
The data link layer can be designed to offer various services. The actual services
that are offered vary from protocol to protocol. Three reasonable possibilities that
we will consider in turn are:
1. Unacknowledged connectionless service.
2. Acknowledged connectionless service.
3. Acknowledged connection-oriented service.
To provide service to the network layer, the data link layer must use the service
provided to it by the physical layer. What the physical layer does is accept a raw
bit stream and attempt to deliver it to the destination. If the channel is noisy, as it
is for most wireless and some wired links, the physical layer will add some
redundancy to its signals to reduce the bit error rate to a tolerable level. However,
the bit stream received by the data link layer is not guaranteed to be error free.
Some bits may have different values and the number of bits received may be less
than, equal to, or more than the number of bits transmitted. It is up to the data link
layer to detect and, if necessary, correct errors.
The usual approach is for the data link layer to break up the bit stream into discrete
frames, compute a short token called a checksum for each frame, and include the
checksum in the frame when it is transmitted. When a frame arrives at the
destination, the checksum is recomputed. If the newly computed checksum is
different from the one contained in the frame, the data link layer knows that an
error has occurred and takes steps to deal with it (e.g., discarding the bad frame
and possibly also sending back an error report).
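The append-on-send, recompute-and-compare-on-receive flow can be sketched in a few lines of Python. The use of CRC-32 here is only an illustrative choice of checksum, not one prescribed by the text:

```python
import zlib

def frame_with_checksum(payload: bytes) -> bytes:
    # Sender: append a 4-byte CRC-32 checksum as the frame trailer.
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def verify_frame(frame: bytes) -> bool:
    # Receiver: recompute the checksum and compare it with the trailer.
    payload, received = frame[:-4], int.from_bytes(frame[-4:], "big")
    return zlib.crc32(payload) == received

frame = frame_with_checksum(b"hello")
assert verify_frame(frame)                        # clean frame accepted
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]  # flip one bit in transit
assert not verify_frame(corrupted)                # bad frame detected, discarded
```

A real data link layer computes the checksum in hardware over the whole frame, but the logic is exactly this: append on send, recompute and compare on receive.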
Breaking up the bit stream into frames is more difficult than it at first appears. A
good design must make it easy for a receiver to find the start of new frames while
using little of the channel bandwidth. We will look at four methods:
1. Byte count.
2. Flag bytes with byte stuffing.
3. Flag bits with bit stuffing.
4. Physical layer coding violations.
Byte count
The first framing method uses a field in the header to specify the number of bytes
in the frame. When the data link layer at the destination sees the byte count, it
knows how many bytes follow and hence where the end of the frame is. This
technique is shown in Fig. 3-3(a) for four small example frames of sizes 5, 5, 8,
and 8 bytes, respectively.
The trouble with this algorithm is that the count can be garbled by a transmission
error. For example, if the byte count of 5 in the second frame of Fig. 3-3(b)
becomes a 7 due to a single bit flip, the destination will get out of synchronization.
It will then be unable to locate the correct start of the next frame. Even if the
checksum is incorrect so the destination knows that the frame is bad, it still has
no way of telling where the next frame starts. Sending a frame back to the source
asking for a retransmission does not help either, since the destination does not
know how many bytes to skip over to get to the start of the retransmission. For
this reason, the byte count method is rarely used by itself.
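A minimal sketch of byte-count framing, showing how a single corrupted count desynchronizes the receiver (the frame contents and sizes are illustrative):

```python
def frame_stream(payloads):
    # Each frame = a 1-byte count (header + payload) followed by the payload.
    out = b""
    for p in payloads:
        out += bytes([len(p) + 1]) + p   # the count includes the count byte itself
    return out

def deframe(stream):
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]                # a corrupted count silently shifts
        frames.append(stream[i + 1 : i + count])
        i += count                       # every later frame boundary
    return frames

stream = frame_stream([b"ABCD", b"EFGH"])    # two frames with byte counts of 5
assert deframe(stream) == [b"ABCD", b"EFGH"]

# One bit flip in the first count and the receiver loses synchronization:
garbled = bytes([7]) + stream[1:]
assert deframe(garbled) != [b"ABCD", b"EFGH"]
```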
However, if either the frame or the acknowledgement is lost, the timer will go off,
alerting the sender to a potential problem. The obvious solution is to just transmit
the frame again. However, when frames may be transmitted multiple times there
is a danger that the receiver will accept the same frame two or more times and
pass it to the network layer more than once. To prevent this from happening, it is
generally necessary to assign sequence numbers to outgoing frames, so that the
receiver can distinguish retransmissions from originals.
The whole issue of managing the timers and sequence numbers so as to ensure
that each frame is ultimately passed to the network layer at the destination exactly
once, no more and no less, is an important part of the duties of the data link layer.
2.1.4 Flow Control
Another important design issue that occurs in the data link layer (and higher
layers as well) is what to do with a sender that systematically wants to transmit
frames faster than the receiver can accept them. This situation can occur when the
sender is running on a fast, powerful computer and the receiver is running on a
slow, low-end machine.
Clearly, something has to be done to prevent this situation. Two approaches are
commonly used. In the first one, feedback-based flow control, the receiver sends
back information to the sender giving it permission to send more data, or at least
telling the sender how the receiver is doing. In the second one, rate-based flow
control, the protocol has a built-in mechanism that limits the rate at which senders
may transmit data, without using feedback from the receiver.
Other types of errors also exist. Sometimes, the location of an error will be
known, perhaps because the physical layer received an analog signal that was far
from the expected value for a 0 or 1 and declared the bit to be lost. This situation
is called an erasure channel. It is easier to correct errors in erasure channels than
in channels that flip bits because even if the value of the bit has been lost, at least
we know which bit is in error. However, we often do not have the benefit of
erasures
Error Control
Error control can be done in two ways
Error detection – Error detection involves checking whether any error has
occurred or not. The number of error bits and the type of error do not matter.
Error correction – Error correction involves ascertaining the exact number of
corrupted bits and their locations. The use of error correcting codes is often
referred to as forward error correction.
For both error detection and error correction, the sender needs to send some
additional bits along with the data bits called as Redundant bits. The receiver
performs necessary checks based upon the additional redundant bits. If it finds
that the data is free from errors, it removes the redundant bits before passing the
message to the upper layers.
2.2.1 Error Correcting Codes
Types of Error Correcting Codes
ECCs can be broadly categorized into two types: block codes and convolutional
codes.
The procedure used by the sender to encode the message encompasses the
following steps −
Once the redundant bits are embedded within the message, it is sent to the receiver.
If the message contains m data bits, r redundant bits are added to it so that
(m + r) bits are able to indicate at least (m + r + 1) different states. Here,
(m + r) states indicate the location of an error in each of the (m + r) bit positions
and one additional state indicates no error. Since r bits can indicate 2^r states,
2^r must be at least equal to (m + r + 1). Thus the following inequality should hold:
2^r ≥ m + r + 1
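The inequality can be checked mechanically. This small helper (an illustration, not part of the original text) finds the minimum r for a given m:

```python
def redundant_bits(m: int) -> int:
    # Smallest r satisfying 2**r >= m + r + 1.
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r

assert redundant_bits(4) == 3   # Hamming(7,4): 4 data bits need 3 parity bits
assert redundant_bits(7) == 4   # Hamming(11,7)
```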
The redundant bits are parity bits. A parity bit is an extra bit that makes the
number of 1s either even or odd. The two types of parity are −
Even Parity − Here the total number of 1s in the message is made even.
Odd Parity − Here the total number of 1s in the message is made odd.
Each redundant bit, ri, is calculated as the parity, generally even parity, based
upon its bit position. It covers all bit positions whose binary representation
includes a 1 in the ith position except the position of ri. Thus −
r1 is the parity bit for all data bits in positions whose binary representation
includes a 1 in the least significant position excluding 1 (3, 5, 7, 9, 11 and
so on)
r2 is the parity bit for all data bits in positions whose binary representation
includes a 1 in the position 2 from right except 2 (3, 6, 7, 10, 11 and so on)
r3 is the parity bit for all data bits in positions whose binary representation
includes a 1 in the position 3 from right except 4 (5-7, 12-15, 20-23 and so
on)
Decoding a message in Hamming Code
Using the same formula as in encoding, the number of redundant bits are
ascertained.
Parity bits are recalculated over the data bits and the redundant bits using the
same rule as during generation of r1, r2, r3, etc. If all the recomputed parity bits
are zero, the message is error free; otherwise they spell out the position of the
erroneous bit, which is then flipped to correct the error.
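The encoding and decoding rules above can be sketched for the common Hamming(7,4) case (m = 4, r = 3); bit positions and parity coverage follow the r1/r2/r3 rules given earlier, with even parity assumed:

```python
def hamming74_encode(d):
    # d = [d1, d2, d3, d4] goes into positions 3, 5, 6, 7 (1-indexed).
    c = [0] * 8                    # index 0 unused, positions 1..7
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]      # r1: positions with a 1 in the lowest bit
    c[2] = c[3] ^ c[6] ^ c[7]      # r2: positions with a 1 in the second bit
    c[4] = c[5] ^ c[6] ^ c[7]      # r3: positions with a 1 in the third bit
    return c[1:]

def hamming74_correct(code):
    c = [0] + list(code)
    # Recompute each parity; the syndrome spells the error position in binary.
    s1 = c[1] ^ c[3] ^ c[5] ^ c[7]
    s2 = c[2] ^ c[3] ^ c[6] ^ c[7]
    s4 = c[4] ^ c[5] ^ c[6] ^ c[7]
    pos = s1 + 2 * s2 + 4 * s4
    if pos:
        c[pos] ^= 1                # flip the single erroneous bit
    return c[1:]

code = hamming74_encode([1, 0, 1, 1])
received = list(code)
received[4] ^= 1                   # corrupt bit position 5
assert hamming74_correct(received) == code
```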
Parity
The parity check is done by adding an extra bit, called parity bit, to the data to
make the number of 1s either even or odd depending upon the type of parity. The
parity check is suitable for single bit error detection only.
The two types of parity checking are
Even Parity − Here the total number of 1s in the message is made even.
Odd Parity − Here the total number of 1s in the message is made odd.
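A sketch of simple parity in Python, also showing its stated limitation: it detects single-bit errors only, and a second flipped bit goes unnoticed:

```python
def add_parity(bits, even=True):
    # Append one bit so the total number of 1s is even (or odd).
    ones = sum(bits)
    parity = ones % 2 if even else 1 - ones % 2
    return bits + [parity]

def check_parity(bits, even=True):
    return sum(bits) % 2 == (0 if even else 1)

word = add_parity([1, 0, 1, 1])   # three 1s, so the parity bit is 1
assert word == [1, 0, 1, 1, 1]
assert check_parity(word)
word[2] ^= 1                      # a single bit error is detected...
assert not check_parity(word)
word[3] ^= 1                      # ...but two flipped bits go unnoticed
assert check_parity(word)
```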
Checksums
This is a block code method where a checksum is created based on the data values
in the data blocks to be transmitted using some algorithm and appended to the
data. When the receiver gets this data, a new checksum is calculated and
compared with the existing checksum. A non-match indicates an error.
For error detection by checksums, data is divided into fixed sized frames or
segments.
Sender's End − The sender adds the segments using 1’s complement
arithmetic to get the sum. It then complements the sum to get the checksum
and sends it along with the data frames.
Receiver's End − The receiver adds the incoming segments along with the
checksum using 1’s complement arithmetic to get the sum and then
complements it.
If the result is zero, the received frames are accepted; otherwise they are
discarded.
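The sender/receiver procedure can be sketched with 16-bit words using end-around-carry addition (the segment values below are arbitrary examples):

```python
def ones_complement_sum(words):
    s = 0
    for w in words:
        s += w
        s = (s & 0xFFFF) + (s >> 16)   # wrap the carry back in (end-around carry)
    return s

def checksum(segments):
    # Sender: 1's-complement sum of the segments, then complement the result.
    return ~ones_complement_sum(segments) & 0xFFFF

segments = [0x4500, 0x0030, 0x4422]
ck = checksum(segments)
# Receiver: summing the segments plus the checksum and complementing gives zero.
assert ~ones_complement_sum(segments + [ck]) & 0xFFFF == 0
```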
CRC involves binary division of the data bits being sent by a predetermined
divisor agreed upon by the communicating systems. The divisor is generated using
polynomials, so CRC is also called a polynomial code checksum.
When the polynomial code method is employed, the sender and receiver must
agree upon a generator polynomial, G(x), in advance. Both the high- and low-order
bits of the generator must be 1. To compute the CRC for some frame with
m bits corresponding to the polynomial M(x), the frame must be longer than the
generator polynomial. The idea is to append a CRC to the end of the frame in
such a way that the polynomial represented by the checksummed frame is
divisible by G(x). When the receiver gets the checksummed frame, it tries
dividing it by G(x). If there is a remainder, there has been a transmission error.
1. Let r be the degree of G(x). Append r zero bits to the low-order end of the
frame so it now contains m + r bits and corresponds to the polynomial x^r M(x).
2. Divide the bit string corresponding to G(x) into the bit string corresponding to
x^r M(x), using modulo-2 division.
3. Subtract the remainder (which is always r or fewer bits) from the bit string
corresponding to x^r M(x) using modulo-2 subtraction. The result is the
checksummed frame to be transmitted. Call its polynomial T(x).
Figure 3-9 illustrates the calculation for a frame 1101011111 using the generator
G(x) = x^4 + x + 1.
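The three steps can be reproduced in code. Running it on the frame 1101011111 with G(x) = x^4 + x + 1 gives the remainder 0010, matching the Fig. 3-9 calculation:

```python
def crc_remainder(frame_bits, generator_bits):
    r = len(generator_bits) - 1
    # Step 1: append r zero bits to the low-order end of the frame.
    bits = list(frame_bits) + [0] * r
    # Step 2: modulo-2 (XOR) long division by the generator.
    for i in range(len(frame_bits)):
        if bits[i] == 1:
            for j, g in enumerate(generator_bits):
                bits[i + j] ^= g
    # Step 3: the last r bits are the remainder, i.e., the CRC.
    return bits[-r:]

frame = [1, 1, 0, 1, 0, 1, 1, 1, 1, 1]
gen = [1, 0, 0, 1, 1]          # G(x) = x^4 + x + 1
rem = crc_remainder(frame, gen)
assert rem == [0, 0, 1, 0]     # transmitted frame T(x) is 1101011111 0010
```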
Data link protocols can be broadly divided into two categories, depending on
whether the transmission channel is noiseless or noisy.
Simplex Protocol
The Simplex protocol is a hypothetical protocol designed for unidirectional data
transmission over an ideal channel, i.e. a channel through which transmission can
never go wrong. It has distinct procedures for sender and receiver. The sender
simply sends all its data onto the channel as soon as it is available in its buffer.
The receiver is assumed to process all incoming data instantly. It is hypothetical
since it does not handle flow control or error control.
Stop – and – Wait Protocol
The Stop – and – Wait protocol is also for a noiseless channel. It provides
unidirectional data transmission without any error control facilities. However, it
provides flow control so that a fast sender does not drown a slow receiver. The
receiver has a finite buffer size and finite processing speed. The sender can send a
frame only when it has received an indication from the receiver that it is available
for further data processing.
Stop – and – Wait ARQ
Stop – and – wait Automatic Repeat Request (Stop – and – Wait ARQ) is a
variation of the above protocol with added error control mechanisms, appropriate
for noisy channels. The sender keeps a copy of the sent frame. It then waits for a
finite time to receive a positive acknowledgement from receiver. If the timer
expires or a negative acknowledgement is received, the frame is retransmitted. If
a positive acknowledgement is received then the next frame is sent.
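A toy simulation of Stop – and – Wait ARQ (the loss probabilities and payloads are invented for illustration) showing how timeouts, retransmissions, and the alternating sequence number give exactly-once delivery:

```python
import random

def stop_and_wait(frames, loss_rate=0.3, rng=random.Random(42)):
    # Sketch of an unreliable channel that drops frames/ACKs at random;
    # the sender retransmits on "timeout" and alternates sequence numbers 0/1.
    delivered, seq = [], 0
    for payload in frames:
        while True:
            frame_lost = rng.random() < loss_rate
            ack_lost = rng.random() < loss_rate
            if frame_lost:
                continue                      # timer expires -> retransmit
            # Receiver: accept only a new sequence number (drop duplicates).
            if not delivered or delivered[-1][0] != seq:
                delivered.append((seq, payload))
            if ack_lost:
                continue                      # ACK lost -> sender retransmits
            break                             # positive ACK received
        seq ^= 1                              # alternate 0/1
    return [p for _, p in delivered]

assert stop_and_wait(["f1", "f2", "f3"]) == ["f1", "f2", "f3"]
```

However the losses fall, each frame reaches the network layer exactly once: the timeout guarantees delivery, and the sequence number lets the receiver discard retransmitted duplicates.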
Go – Back – N ARQ
Go – Back – N ARQ provides for sending multiple frames before receiving the
acknowledgement for the first frame. It uses the concept of sliding window, and
so is also called sliding window protocol. The frames are sequentially numbered
and a finite number of frames are sent. If the acknowledgement of a frame is not
received within the time period, all frames starting from that frame are
retransmitted.
Selective Repeat ARQ
This protocol also provides for sending multiple frames before receiving the
acknowledgement for the first frame. However, here only the erroneous or lost
frames are retransmitted, while the good frames are received and buffered.
2.3.1 Elementary Data Link Protocols
1. An Unrestricted (Utopian) Simplex Protocol
2. A Simplex Stop-and-Wait Protocol
3. A Simplex Protocol for a Noisy Channel (Stop and wait ARQ)
NOISELESS CHANNELS
Let us first assume we have an ideal channel in which no frames are lost,
duplicated, or corrupted. We introduce two protocols for this type of channel. The
first is a protocol that does not use flow control; the second is the one that does.
Of course, neither has error control because we have assumed that the channel is
a perfect noiseless channel.
Protocol 1 : An Unrestricted Simplex Protocol
Assumptions
The transmission channel is completely noiseless (a channel in which no
frames are lost, corrupted, or duplicated).
There is no error and flow control mechanism.
The buffer space for storing the frames at the sender's end and the receiver's
end is infinite.
The processing time of the simplest protocol is very short. Hence it can be
neglected.
The sender and the receiver are always ready to send and receive data.
The sender sends a sequence of data frames without thinking about the
receiver.
There is no data loss hence no ACK or NACK.
The DLL at the receiving end immediately removes the frame header and
transfers the data to the subsequent layer.
Design
The design of the simplest protocol is very simple, as there is no error or flow
control mechanism. The sender end (present at the data link layer) gets the data
from the network layer and converts the data into frames so that it can be easily
transmitted.
Now on the receiver's end, the data frame is taken from the physical layer and
then the data link layer extracts the actual data (by removing the header from the
data frame) from the data frame.
Flow diagram
Let us now look at the flow chart of the data transfer from the sender to the
receiver. Suppose that the sender is A and the receiver is B.
The basic flow of the data frame is depicted in the diagram below.
The idea is very simple, and the sender sends the sequence of data frames without
thinking about the receiver. Whenever a sending request comes from the network
layer, the sender sends the data frames. Similarly, whenever the receiving request
comes from the physical layer, the receiver receives the data frames.
Protocol 2 : A Simplex Stop-and-Wait Protocol (noiseless channel)
Assumptions
The transmission channel is completely noiseless (a channel in which no
frames are lost, corrupted, or duplicated).
There is no error control mechanism.
The buffer space for storing the frames at the sender's end and the receiver's
end is finite.
While transmitting data from sender to receiver, the flow of data is required
to be controlled.
Design
Sender Side
Rule 1: The sender sends one packet at a time.
Rule 2: The sender sends the next packet to the receiver only when it receives the
acknowledgment of the previous packet from the receiver.
So, in the stop-and-wait protocol, the sender-side process is very simple.
Receiver Side
Rule 1: The receiver receives the data packet and then consumes it.
Rule 2: The receiver sends the acknowledgment when the data packet is consumed.
So, in this protocol, the receiver-side process is also very simple.
Stop & wait protocol is accurate as the sender sends the next frame to the
receiver only when the acknowledgment of the previous packet is received.
So there is less chance of the frame being lost.
Conclusion
Stop and wait protocol is a simple and reliable protocol for flow control.
Stop and wait protocol is a data link layer protocol.
In this protocol, the sender will not send the next packet to the receiver
until the acknowledgment of the previous packet is received.
One of the disadvantages of stop and wait protocol is that its efficiency is
low.
NOISY CHANNELS
Although the Stop-and-Wait Protocol gives us an idea of how to add flow control
to its predecessor, noiseless channels are nonexistent. We can either ignore errors
(as we sometimes do) or add error control to our protocols. We discuss three
protocols in this section that use error control.
Protocol 3 :A Simplex Protocol for Noisy Channel (Stop and Wait ARQ)
Stop & Wait ARQ is a sliding window protocol for flow control and it overcomes
the limitations of Stop & Wait, we can say that it is the improved or modified
version of the Stop & Wait protocol.
Assumptions
Stop & Wait ARQ assumes that the communication channel is noisy
(previously Stop & Wait assumed that the communication channel is not
noisy).
Stop & Wait ARQ also assumes that errors may occur in the data while
transmission.
The buffer space for storing the frames at the sender's end and the
receiver's end is finite.
While transmitting data from sender to receiver, the flow of data is
required to be controlled.
Problems handled by Stop & Wait ARQ:
1. Lost Data: the sender's timer goes off (time out) and the frame is
retransmitted.
2. Lost Acknowledgement: the sender times out and retransmits; the sequence
number lets the receiver discard the duplicate frame.
3. Delayed Acknowledgement: this is resolved by introducing sequence numbers
for acknowledgements also.
To begin with, the sender shares data frames with the receiver according to
the window size assigned to the model.
The sliding window covers the frames transmitted to the receiver side.
The sender then waits for an acknowledgment from the receiver for the
shared frames, as mentioned in figure 1.
On receiving the data frames from the sender, the receiver passes them up
to its network layer.
After the receiver consumes a frame, it transmits the acknowledgement for
that data frame to the sender.
The receiver then receives the next data frame from the sender, as
mentioned in figure 2.
One-bit Sliding Window
The one-bit sliding window protocol is based on the concept of the sliding
window protocol, but here the window size is 1.
Assume that computer A is trying to send its frame 0 to computer B and that B is
trying to send its frame 0 to A. Suppose that A sends a frame to B, but A’s timeout
interval is a little too short. Consequently, A may time out repeatedly, sending a
series of identical frames, all with seq = 0 and ack = 1.
When the first valid frame arrives at computer B, it will be accepted and frame
expected will be set to a value of 1. All the subsequent frames received will be
rejected because B is now expecting frames with sequence number 1, not 0.
Furthermore, since all the duplicates will have ack = 1 and B is still waiting for
an acknowledgement of 0, B will not go and fetch a new packet from its network
layer.
After every rejected duplicate comes in, B will send A a frame containing seq =
0 and ack = 0. Eventually, one of these will arrive correctly at A, causing A to
begin sending the next packet. No combination of lost frames or premature
timeouts can cause the protocol to deliver duplicate packets to either network
layer, to skip a packet, or to deadlock. The protocol is correct.
However, to show how subtle protocol interactions can be, we note that a peculiar
situation arises if both sides simultaneously send an initial packet. This
synchronization difficulty is illustrated by Fig. 3-17. In part (a), the normal
operation of the protocol is shown. In (b) the peculiarity is illustrated. If B waits
for A’s first frame before sending one of its own, the sequence is as shown in (a),
and every frame is accepted.
Go-Back-N ARQ
It is a sliding window protocol in which multiple frames are sent from sender to
receiver at once. The number of frames that are sent at once depends upon the
size of the window that is taken.
Pipelining
With Go-Back-N ARQ, multiple frames can be in transit in the forward direction
and multiple acknowledgements can be in transit in the reverse direction. The
idea is similar to Stop-and-Wait ARQ, but the window of Go-Back-N ARQ
allows us to have multiple frames in transit, since there are many slots in the
send window.
In Go-Back-N ARQ, the size of the send window must always be less than 2^m,
where m is the number of bits in the sequence number, and the size of the
receiver window is always 1.
Go-Back-N ARQ simplifies the process at the receiver site. The receiver keeps
track of only one variable, and there is no need to buffer out-of-order frames; they
are simply discarded. However, this protocol is very inefficient for a noisy link.
In a noisy link a frame has a higher probability of damage, which means the
resending of multiple frames. This resending uses up the bandwidth and slows
down the transmission. For noisy links, there is another mechanism that does not
resend N frames when just one frame is damaged; only the damaged frame is
resent. This mechanism is called Selective Repeat ARQ. It is more efficient for
noisy links, but the processing at the receiver is more complex.
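A toy model of the Go-Back-N receiver rule just described: discard anything out of order and let the cumulative acknowledgement slide the window (the window size and the lost-transmission index are arbitrary illustrations):

```python
def go_back_n(frames, window=4, lost=frozenset({2})):
    # Sketch: transmissions whose running index is in `lost` are dropped once;
    # the receiver tracks a single variable: the next expected frame number.
    delivered, base, tx_count = [], 0, 0
    while base < len(frames):
        expected = base
        # Send the whole window [base, base + window).
        for i in range(base, min(base + window, len(frames))):
            arrived = tx_count not in lost
            tx_count += 1
            if arrived and i == expected:
                delivered.append(frames[i])   # in order -> accept
                expected += 1
            # lost or out-of-order frames are simply discarded (no buffering)
        base = expected                       # cumulative ACK slides the window
    return delivered

# Frame "c" is lost on its first transmission, so "d" arrives out of order and
# is discarded; the sender goes back and resends everything from "c" onward.
assert go_back_n(["a", "b", "c", "d", "e"]) == ["a", "b", "c", "d", "e"]
```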
Design
The design in this case is to some extent similar to the one we described for
Go-Back-N, but more complicated, as shown in Figure 11.20.
To coordinate access to the channel, multiple access protocols are required.
All these protocols belong to the MAC sublayer. The data link layer is divided
into two sublayers:
1. Logical Link Control (LLC) − is responsible for error control and flow control.
2. Media Access Control (MAC) − is responsible for controlling access to the
shared transmission medium.
Step 1 − If there are N users, the bandwidth is divided into N equal-sized
portions, with each user assigned one portion. Since each user has a private
frequency band, there is no interference among users.
Step 2 − When there is only a small and constant number of users, each with a
heavy load of traffic, this division is a simple and efficient allocation
mechanism.
Step 3 − Let us take a wireless example of FM radio stations, each station gets a
portion of FM band and uses it most of the time to broadcast its signal.
Step 4 − When the number of senders is large and varying or traffic is suddenly
changing, FDM faces some problems.
Step 5 − If the spectrum is cut up into N regions and fewer than N users are
currently interested in communicating, a large piece of valuable spectrum will be
wasted. And if more than N users want to communicate, some of them will be
denied permission for lack of bandwidth, even if some of the users who have been
assigned a frequency band hardly ever transmit or receive anything.
Step 6 − A static allocation is a poor fit to most computer systems, in which data
traffic is extremely bursty, often with peak-to-mean traffic ratios of 1000:1;
consequently most of the channels will be idle most of the time.
Now let us divide the single channel into N independent subchannels, each with
capacity C/N bps. The mean input rate on each of the subchannels will now be
λ/N. Recomputing T (the mean delay, which for a single channel of capacity C,
arrival rate λ frames/sec, and mean frame length 1/μ bits is T = 1/(μC − λ)), we get
T_FDM = 1/(μ(C/N) − λ/N) = N/(μC − λ) = NT
The mean delay using FDM is N times worse than if all the frames were somehow
arranged in one big central queue.
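Under the usual M/M/1 queueing assumptions behind this analysis, the comparison can be computed directly (the numeric parameters below are invented for illustration):

```python
def mean_delay(capacity_bps, frame_rate, mean_frame_bits):
    # M/M/1 result: T = 1 / (mu*C - lambda), with mu = 1 / mean frame length.
    return 1.0 / (capacity_bps / mean_frame_bits - frame_rate)

C, lam, frame_bits, N = 100_000_000, 5000.0, 10_000, 10
T_single = mean_delay(C, lam, frame_bits)
# FDM: each of N subchannels has capacity C/N and arrival rate lambda/N.
T_fdm = mean_delay(C / N, lam / N, frame_bits)
assert abs(T_fdm - N * T_single) < 1e-12   # FDM is exactly N times worse
```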
Many protocols have been defined to handle the access to shared link. These
protocols are organized in three different groups:
In random access, all stations have the same superiority; that is, no station has
more priority than another station. Any station can send data depending on the
medium's state (idle or busy). It has two features:
1. There is no fixed time for sending data.
2. There is no fixed sequence of stations sending data.
ALOHA
It was designed for wireless LANs (Local Area Networks) but can also be used
on any shared medium. Using this method, any station can transmit data over
the network whenever it has a frame ready, which means several stations may
transmit at the same time.
PURE ALOHA
In pure Aloha, whenever data is available for sending, a station transmits it onto
the channel without checking whether the channel is idle or busy, so collisions
may occur and data frames can be lost. After transmitting a frame, the station
waits for the receiver's acknowledgment. If the acknowledgment does not arrive
within the specified time, the station assumes that the frame has been lost or
destroyed, waits for a random amount of time, called the backoff time (Tb), and
then retransmits the frame until all the data is successfully delivered to the
receiver.
As we can see in the figure above, there are four stations accessing a shared
channel and transmitting data frames. Some frames collide because several
stations send their frames at the same time; only two frames, frame 1.1 and
frame 2.2, are successfully transmitted to the receiver, while the other frames
are lost or destroyed. Whenever two frames occupy the shared channel
simultaneously, a collision occurs and both suffer damage: even if only the
first bit of a new frame overlaps with the last bit of a frame that is almost
finished, both frames are destroyed and both stations must retransmit them.
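The collision rule of pure ALOHA can be sketched as follows: two frames are destroyed whenever their transmission intervals overlap, i.e. whenever their start times lie within one frame time of each other (the start times below are invented):

```python
def pure_aloha_survivors(start_times, frame_time=1.0):
    # A frame occupies [t, t + frame_time); any overlap destroys both frames.
    events = sorted(start_times)
    collided = set()
    for a, b in zip(events, events[1:]):
        if b - a < frame_time:      # starts closer than one frame time -> overlap
            collided.update({a, b})
    return [t for t in events if t not in collided]

# Stations transmit whenever they like; only the isolated frame survives.
assert pure_aloha_survivors([0.0, 0.4, 3.0, 5.0, 5.9]) == [3.0]
```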
SLOTTED ALOHA
Slotted Aloha is designed to improve on pure Aloha's efficiency, because pure
Aloha has a very high probability of frame collisions. In slotted Aloha, the
shared channel is divided into fixed time intervals called slots. If a station wants
to send a frame on the shared channel, the frame can be sent only at the
beginning of a slot, and only one frame may be sent in each slot. If a station
misses the beginning of a slot, it must wait until the beginning of the next time
slot. However, the possibility of a collision remains if two or more stations try
to send a frame at the beginning of the same time slot.
In Carrier Sense Multiple Access (CSMA) protocol, the station will sense the
channel before the transmission of data. CSMA reduces the chances of collision
in the network but it does not eliminate the collision from the channel. 1-
Persistent, Non-Persistent, P-Persistent are the three access methods of CSMA.
1-Persistent
In this method, if a station wants to transmit data, it first senses the
medium.
If the medium is busy, the station waits until the channel becomes idle,
continuously sensing the channel until it does.
If the station detects the channel as idle, it immediately sends the data
frame with probability 1; that is why this method is called 1-persistent.
This is one of the most straightforward methods. In this method, once the station
finds that the medium is idle then it immediately sends the frame. By using this
method there are higher chances for collision because it is possible that two or
more stations find the shared medium idle at the same time and then they send
their frames immediately.
Non-Persistent
In this method of CSMA, if the station finds the channel busy then it will wait
for a random amount of time before sensing the channel again.
If the station wants to transmit the data then first of all it will sense the
medium.
If the medium is idle then the station will immediately send the data.
Otherwise, if the medium is busy then the station waits for a random
amount of time and then again senses the channel after waiting for a
random amount of time.
In non-persistent CSMA there is less chance of collision than in the
1-persistent method, because the station does not continuously sense the
channel but senses it again only after waiting for a random amount of time.
P-Persistent
The p-persistent method of CSMA is used when the channel is divided into
multiple time slots and the duration of time slots is greater than or equal to the
maximum propagation time. This method is designed as a combination of the
advantages of 1-Persistent and Non-Persistent CSMA. The p-persistent method
of CSMA reduces the chance of collision in the network and there is an
increment in the efficiency of the network. When any station wants to transmit
data, it first senses the channel. If the channel is busy, the station
continuously senses the channel until the channel becomes idle. If the channel
is idle, the station does the following steps:
1. With probability p, the station sends its frame.
2. With probability q = 1 − p, the station waits for the beginning of the next
time slot and senses the channel again. If the channel is now idle, it repeats
these steps; if it has become busy, it acts as though a collision has occurred
and uses the backoff procedure.
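The p-persistent decision loop can be sketched as follows (the channel-sensing function and the probability value are placeholders, not part of any real protocol stack):

```python
import random

def p_persistent_decision(channel_idle, p=0.1, rng=random.Random(7)):
    # Once the channel is sensed idle: transmit with probability p, otherwise
    # wait for the next slot and sense again; back off if it became busy.
    slots_waited = 0
    while True:
        if not channel_idle():
            return ("backoff", slots_waited)   # behave as if a collision occurred
        if rng.random() < p:
            return ("transmit", slots_waited)
        slots_waited += 1                      # wait for the next time slot

# With an always-idle channel the station eventually transmits.
action, waited = p_persistent_decision(lambda: True, p=0.5)
assert action == "transmit"
```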
CSMA/CD, as well as many other LAN protocols, uses the conceptual model of
Fig. 4-5. At the point marked t0, a station has finished transmitting its frame.
Any other station having a frame to send may now attempt to do so. If two or
more stations decide to transmit simultaneously, there will be a collision. If a
station detects a collision, it aborts its transmission, waits a random period of
time, and then tries again (assuming that no other station has started transmitting
in the meantime). Therefore, our model for CSMA/CD will consist of alternating
contention and transmission periods, with idle periods occurring when all stations
are quiet (e.g., for lack of work).
Almost all collisions can be avoided in CSMA/CD, but they can still occur
during the contention period. Collisions during the contention period adversely
affect system performance, especially when the cable is long and packets are
short. The problem became serious as fiber-optic networks came into use. Here
we shall discuss some protocols that resolve collisions during the contention
period.
Bit-map Protocol
Token Passing
Binary Countdown
Pure and slotted Aloha, CSMA and CSMA/CD are contention-based protocols:
Try; if there is a collision, retry.
There is no guarantee of performance.
What happens if the network load is high?
1. Bit-map Protocol:
If there are N stations, the reservation interval is divided into N slots, and
each station has one slot.
Suppose station 1 has a frame to send; it transmits a 1 bit during slot 1.
No other station is allowed to transmit during this slot.
In general, the ith station may announce that it has a frame to send by
inserting a 1 bit into the ith slot. After all N slots have passed, each station
knows which stations wish to transmit.
The stations which have reserved their slots transfer their frames in that
order.
After data transmission period, next reservation interval begins.
Since everyone agrees on who goes next, there will never be any collisions.
The following figure shows a situation with five stations and a five-slot
reservation frame. In the first interval, only stations 1, 3, and 4 have made
reservations. In the second interval, only station 1 has made a reservation.
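One bit-map reservation round can be sketched directly, using the five-station example from the text in which stations 1, 3, and 4 have frames queued:

```python
def bitmap_round(wants_to_send):
    # Reservation interval: station i sets bit i if it has a frame queued.
    reservation = [1 if w else 0 for w in wants_to_send]
    # Every station saw the same map, so the reserved stations transmit in
    # slot order, and there can never be a collision.
    order = [i for i, bit in enumerate(reservation) if bit]
    return reservation, order

resmap, order = bitmap_round([False, True, False, True, True])
assert resmap == [0, 1, 0, 1, 1]
assert order == [1, 3, 4]        # stations 1, 3, and 4 transmit, in that order
```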
2. Token passing
In the token passing scheme, the stations are logically connected to each other in
the form of a ring, and access to the channel is governed by a token.
A token is a special bit pattern or a small message, which circulate from one
station to the next in some predefined order.
In a token ring, the token is passed from one station to the next adjacent
station in the ring, whereas in the case of a token bus, each station uses the
bus to send the token to the next station in some predefined order.
In both cases, the token represents permission to send. If a station has a
frame queued for transmission when it receives the token, it can send that
frame before it passes the token to the next station. If it has no queued
frame, it simply passes the token along.
After sending a frame, each station must wait for all N stations (including
itself) to pass the token to their neighbours and for the other N − 1 stations
to send a frame, if they have one.
Problems such as a duplicated token, a lost token, insertion of a new station,
and removal of a station must be handled for correct and reliable operation of
this scheme.
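One pass of the token around the ring can be sketched as follows. This is a toy model, assuming each token holder sends at most one queued frame before forwarding the token; the station names and queue contents are invented for the example.

```python
from collections import deque

def token_ring_round(queues):
    """queues: dict mapping station name -> deque of queued frames, in ring
    order. Circulate the token once; each holder sends at most one frame,
    then passes the token on. Returns the (station, frame) pairs sent."""
    sent = []
    for station in queues:           # dict order stands in for ring order
        if queues[station]:          # token arrives; send one frame if queued
            sent.append((station, queues[station].popleft()))
        # otherwise the station simply passes the token to its neighbour
    return sent

ring = {"A": deque(["a1"]), "B": deque(), "C": deque(["c1", "c2"])}
print(token_ring_round(ring))  # → [('A', 'a1'), ('C', 'c1')]
```

Note that C's second frame stays queued until the token comes around again, which is exactly the waiting described above.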
3. Binary Countdown
The binary countdown protocol overcomes the bit map's overhead of one bit per
station. In binary countdown, binary station addresses are used. A station
wanting to use the channel broadcasts its address as a binary bit string,
starting with the high-order bit. All addresses are assumed to be the same
length. Here, we will see an example that illustrates the working of binary
countdown.
For example, if stations 0010, 0100, 1001, and 1010 are all trying to get the
channel, in the first bit time the stations transmit 0, 0, 1, and 1, respectively. These
are ORed together to form a 1. Stations 0010 and 0100 see the 1 and know that a
higher-numbered station is competing for the channel, so they give up for the
current round. Stations 1001 and 1010 continue. The next bit is 0, and both
stations continue. The next bit is 1, so station 1001 gives up. The winner is station
1010 because it has the highest address. After winning the bidding, it may now
transmit a frame, after which another bidding cycle starts. The protocol is
illustrated in Fig. 4-8. It has the property that higher-numbered stations have a
higher priority than lower-numbered stations, which may be either good or bad,
depending on the context.
The channel efficiency of this method is d / (d + log₂ N), where d is the number
of data bits per frame. If, however, the frame format has been cleverly chosen
so that the sender's address is the first field in the frame, even these
log₂ N bits are not wasted, and the efficiency is 100%.
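The bidding process can be simulated directly. The sketch below models the channel's wired-OR with `max()` and uses the same four 4-bit addresses as the example above; the function name and bit width are choices made for this illustration.

```python
# Sketch of binary countdown arbitration. In each bit time, every remaining
# contender transmits one address bit (high-order first); the channel ORs
# the bits together, and a station that sent 0 but sees a 1 drops out.

def binary_countdown(addresses, width=4):
    contenders = list(addresses)
    for bit in range(width - 1, -1, -1):        # high-order bit first
        sent = [(a >> bit) & 1 for a in contenders]
        channel = max(sent)                     # wired-OR of all sent bits
        # stations that sent 0 and see a 1 give up for this round
        contenders = [a for a, b in zip(contenders, sent)
                      if not (b == 0 and channel == 1)]
    return contenders[0]                        # highest address wins

stations = [0b0010, 0b0100, 0b1001, 0b1010]
print(bin(binary_countdown(stations)))  # → 0b1010
```

Running it reproduces the walk-through in the text: 0010 and 0100 drop out in the first bit time, 1001 in the third, leaving 1010 as the winner.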
Collision-based protocols (pure and slotted ALOHA, CSMA, CSMA/CD) work well
when the network load is low.
Collision-free protocols (bit map, binary countdown) work well when the load
is high.
Limited-contention protocols combine the two approaches: they partition the
stations into groups and limit the contention for each slot.
Under light load, every station may contend for every slot, as in ALOHA.
Under heavy load, only one group may contend for a given slot.
In the adaptive tree walk protocol, the stations are the leaves of a binary
tree; after a collision, the search for ready stations descends the tree, and
the slot for a given node is open only to the stations in that node's subtree.
How it works:
Slot-0: C*, E*, F*, H* (all ready stations, under node 0, may try): collision.
Slot-1: C* (ready stations under node 1 may try): C sends successfully.
Slot-2: E*, F*, H* (ready stations under node 2 may try): collision.
Slot-3: E*, F* (ready stations under node 5 may try): collision.
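The depth-first search can be sketched in Python. This is a toy model under stated assumptions: eight stations A–H sit as leaves under a complete binary tree whose internal nodes are numbered 0–6 (node 0 the root, nodes 2i+1 and 2i+2 the children of node i, leaves 7–14 holding stations 0–7); the function name and probe log format are invented for the example.

```python
# Sketch of the adaptive tree walk: probe a node's subtree in one slot;
# on a collision, recurse into the two children, depth first.

def tree_walk(ready, node=0, slots=None):
    """ready: set of ready station indices 0..7. Returns the list of
    (probed node, outcome) pairs, one per contention slot."""
    if slots is None:
        slots = []
    lo, hi = node, node                 # find the leaf range under this node
    while hi < 7:
        lo, hi = 2 * lo + 1, 2 * hi + 2
    members = {s for s in ready if lo - 7 <= s <= hi - 7}
    if len(members) == 0:
        slots.append((node, "idle"))
    elif len(members) == 1:             # exactly one ready station: success
        slots.append((node, "station %d sends" % members.pop()))
    else:                               # more than one: collision, descend
        slots.append((node, "collision"))
        tree_walk(ready, 2 * node + 1, slots)
        tree_walk(ready, 2 * node + 2, slots)
    return slots

# Stations C, E, F, H ready -> indices 2, 4, 5, 7.
for probe in tree_walk({2, 4, 5, 7}):
    print(probe)
```

The probe sequence reproduces the slots above (nodes 0, 1, 2, 5 ...), then continues down the tree until E, F, and finally H (under node 6) each get a slot to themselves.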
There is an even more important difference between wireless LANs and wired
LANs. A station on a wireless LAN may not be able to transmit frames to or
receive frames from all other stations because of the limited radio range of the
stations. In wired LANs, when one station sends a frame, all other stations receive
it. The absence of this property in wireless LANs causes a variety of
complications.
A naive approach to using a wireless LAN might be to try CSMA: just listen for
other transmissions and only transmit if no one else is doing so. The trouble is,
this protocol is not really a good way to think about wireless because what matters
for reception is interference at the receiver, not at the sender. To see the nature of
the problem, consider Fig. 4-11, where four wireless stations are illustrated. For
our purposes, it does not matter which are APs and which are laptops. The radio
range is such that A and B are within each other’s range and can potentially
interfere with one another. C can also potentially interfere with both B and D, but
not with A.
Now let us look at a different situation: B transmitting to A at the same time that
C wants to transmit to D, as shown in Fig. 4-11(b). If C senses the medium, it will
hear a transmission and falsely conclude that it may not send to D (shown as a
dashed line). In fact, such a transmission would cause bad reception only in the
zone between B and C, where neither of the intended receivers is located. We
want a MAC protocol that prevents this kind of deferral from happening because
it wastes bandwidth. The problem is called the exposed terminal problem.
An early and influential protocol that tackles these problems for wireless LANs
is MACA (Multiple Access with Collision Avoidance) (Karn, 1990). The basic
idea behind it is for the sender to stimulate the receiver into outputting a short
frame, so stations nearby can detect this transmission and avoid transmitting for
the duration of the upcoming (large) data frame. This technique is used instead of
carrier sense.
MACA is illustrated in Fig. 4-12. Let us see how A sends a frame to B. A starts
by sending an RTS (Request To Send) frame to B, as shown in Fig. 4-12(a). This
short frame (30 bytes) contains the length of the data frame that will eventually
follow. Then B replies with a CTS (Clear To Send) frame, as shown in Fig. 4-
12(b). The CTS frame contains the data length (copied from the RTS frame).
Upon receipt of the CTS frame, A begins transmission.
Now let us see how stations overhearing either of these frames react. Any station
hearing the RTS is clearly close to A and must remain silent long enough for the
CTS to be transmitted back to A without conflict. Any station hearing the CTS is
clearly close to B and must remain silent during the upcoming data transmission,
whose length it can tell by examining the CTS frame.
In Fig. 4-12, C is within range of A but not within range of B. Therefore, it hears
the RTS from A but not the CTS from B. As long as it does not interfere with the
CTS, it is free to transmit while the data frame is being sent. In contrast, D is
within range of B but not A. It does not hear the RTS but does hear the CTS.
Hearing the CTS tips it off that it is close to a station that is about to receive a
frame, so it defers sending anything until that frame is expected to be finished.
Station E hears both control messages and, like D, must be silent until the data
frame is complete.
Despite these precautions, collisions can still occur. For example, B and C could
both send RTS frames to A at the same time. These will collide and be lost. In the
event of a collision, an unsuccessful transmitter (i.e., one that does not hear a CTS
within the expected time interval) waits a random amount of time and tries again
later.
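The deferral rules of the RTS/CTS exchange can be summarized in a short sketch. The topology dictionary below is an assumption modeled on Fig. 4-12's layout (A and B in range of each other, C hearing only A, D hearing only B, E hearing both); the function name and return format are invented for the illustration.

```python
# Sketch of who must defer after a MACA RTS/CTS handshake. Stations that hear
# the CTS are near the receiver and stay silent for the whole data frame;
# stations that hear only the RTS wait just long enough for the CTS, then
# are free to transmit.

in_range = {                     # symmetric "who hears whom" map (assumed)
    "A": {"B", "C", "E"},
    "B": {"A", "D", "E"},
    "C": {"A"},
    "D": {"B"},
    "E": {"A", "B"},
}

def maca_exchange(sender, receiver):
    hears_rts = in_range[sender] - {receiver}
    hears_cts = in_range[receiver] - {sender}
    defer_for_data = hears_cts              # near the receiver: stay silent
    free_after_cts = hears_rts - hears_cts  # heard only the RTS: may send
    return sorted(defer_for_data), sorted(free_after_cts)

deferring, free = maca_exchange("A", "B")
print(deferring)  # → ['D', 'E']  (silent for the data frame)
print(free)       # → ['C']       (free once the CTS window passes)
```

This matches the text: C hears only A's RTS and may transmit during the data frame, while D and E, having heard B's CTS, must remain silent until the frame is expected to finish.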