
Department of Computer Science & Engineering

COMPUTER NETWORKS[BCS502]

Dr. M S Sunitha Patel


Associate Professor
Dept. of Computer Science & Engineering
ATMECE, Mysuru
Department of Computer Science & Engineering

MODULE-2

Data Link Layer: Error Detection and Correction: Introduction,


Block Coding, Cyclic Codes. Data link control: DLC Services:
Framing, Flow Control, Error Control, Connectionless and
Connection Oriented, Data link layer protocols, High Level Data
Link Control. Media Access Control: Random Access, Controlled
Access. Check Sum and Point to Point Protocol
Department of Computer Science & Engineering

Data can be corrupted during transmission.

Some applications require that errors be detected and corrected.
Department of Computer Science & Engineering
INTRODUCTION
• Networks must be able to transfer data from one device to another with
acceptable accuracy.
• For most applications, a system must guarantee that the data received are
identical to the data transmitted.
• Any time data are transmitted from one node to the next, they can become
corrupted in passage.
• Many factors can alter one or more bits of a message.
• Some applications require a mechanism for detecting and correcting errors.
Some applications can tolerate a small level of error.
• For example, random errors in audio or video transmissions may be tolerable,
but when we transfer text, we expect a very high level of accuracy.
• At the data-link layer, if a frame is corrupted between the two nodes, it needs
to be corrected before it continues its journey to other nodes.
• Some multimedia applications, however, try to correct the corrupted frame.
Department of Computer Science & Engineering

Types of Errors

1. single-bit error

 The term single-bit error means that only 1 bit of a given data unit
(such as a byte, character, or packet) is changed from 1 to 0 or from 0 to 1.

2. burst error

 The term burst error means that 2 or more bits in the data unit have
changed from 1 to 0 or from 0 to 1. Figure 10.1 shows the effect of a
single-bit and a burst error on a data unit.
Department of Computer Science & Engineering

In a single-bit error, only 1 bit in the data unit has changed.

Department of Computer Science & Engineering

Figure 10.1 Single-bit error


Department of Computer Science & Engineering

A burst error means that 2 or more bits in the data unit have changed.
Department of Computer Science & Engineering

Figure 10.2 Burst error of length 8


Department of Computer Science & Engineering

To detect or correct errors, we need to send extra (redundant) bits with data.
Department of Computer Science & Engineering

Redundancy
The central concept in detecting or correcting errors is redundancy.

To be able to detect or correct errors, we need to send some extra bits with our data.

These redundant bits are added by the sender and removed by the receiver.

Their presence allows the receiver to detect or correct corrupted bits.
Department of Computer Science & Engineering

Detection versus Correction

In error detection, we are only looking to see if any error has

occurred. The answer is a simple yes or no.

We are not even interested in the number of corrupted bits.

A single-bit error is the same for us as a burst error.

 In error correction, we need to know the exact number of bits

that are corrupted and, more importantly, their location in the

message.
Department of Computer Science & Engineering

Coding
Redundancy is achieved through various coding schemes.

The sender adds redundant bits through a process that creates a

relationship between the redundant bits and the actual data bits.

The ratio of redundant bits to data bits and the robustness of the

process are important factors in any coding scheme.

 We can divide coding schemes into two broad categories: block

coding and convolution coding.


Department of Computer Science & Engineering

BLOCK CODING

In block coding, we divide our message into blocks, each of k bits,

called datawords.

We add r redundant bits to each block to make the length n = k + r

The resulting n-bit blocks are called codewords.


Department of Computer Science & Engineering

Error Detection

How can errors be detected by using block coding?


If the following two conditions are met, the receiver can
detect a change in the original codeword.
1. The receiver has (or can find) a list of valid codewords.
2. The original codeword has changed to an invalid one.
Department of Computer Science & Engineering

Figure 10.2: Process of error detection in block coding


Department of Computer Science & Engineering

 Let us assume that k = 2 and n = 3. Table 10.1 shows the list of datawords and codewords. Later, we will see how to derive a codeword from a dataword.

Assume the sender encodes the dataword 01 as 011 and sends it to the receiver.
Consider the following cases:
1. The receiver receives 011. It is a valid codeword. The receiver extracts the dataword 01 from it.
2. The codeword is corrupted during transmission, and 111 is received (the leftmost bit is corrupted). This is not a valid codeword and is discarded.
3. The codeword is corrupted during transmission, and 000 is received (the right two bits are corrupted). This is a valid codeword. The receiver incorrectly extracts the dataword 00; two corrupted bits have made the error undetectable.
Department of Computer Science & Engineering

Hamming Distance
• One of the central concepts in coding for error control is the idea of the
Hamming distance.
• The Hamming distance between two words (of the same size) is the number
of differences between the corresponding bits.
• We show the Hamming distance between two words x and y as d(x, y).
• Hamming distance between the received codeword and the sent codeword is
the number of bits that are corrupted during transmission.
• For example, if the codeword 00000 is sent and 01101 is received, 3 bits are in
error and the Hamming distance between the two is d(00000, 01101) = 3.
• In other words, if the Hamming distance between the sent and the received
codeword is not zero, the codeword has been corrupted during transmission.
Department of Computer Science & Engineering

Figure 10.4 XORing of two single bits or two words

The Hamming distance can easily be found if we apply the XOR operation (⊕) on the two
words and count the number of 1s in the result.
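
As a quick illustration (not part of the original slides), here is a minimal Python sketch of exactly this rule: XOR the two words bit by bit and count the 1s. The function name is ours.

```python
def hamming_distance(x: str, y: str) -> int:
    """Hamming distance between two equal-length bit strings:
    XOR the corresponding bits and count the 1s in the result."""
    assert len(x) == len(y), "words must have the same size"
    return sum(bx != by for bx, by in zip(x, y))

print(hamming_distance("000", "011"))      # 2
print(hamming_distance("10101", "11110"))  # 3
print(hamming_distance("00000", "01101"))  # 3 bits corrupted in transmission
```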
Department of Computer Science & Engineering

 Let us find the Hamming distance between two pairs of words.

1. The Hamming distance d(000, 011) is 2 because 000 ⊕ 011 = 011 (two 1s).

2. The Hamming distance d(10101, 11110) is 3 because 10101 ⊕ 11110 = 01011 (three 1s).


Department of Computer Science & Engineering

Minimum Hamming Distance for Error Detection

 The minimum Hamming distance is the smallest Hamming distance between all possible pairs of codewords.

 Let dmin = minimum Hamming distance.

 To find the dmin value, we find the Hamming distances between all words and select the smallest one.

Minimum Distance for Error Detection

 If ‘s’ errors occur during transmission, the Hamming distance between the sent codeword and the received codeword is ‘s’ (Figure 10.3).
Department of Computer Science & Engineering

Find the minimum Hamming distance of the coding scheme in Table 10.1.
Solution
We first find all Hamming distances:
d(000, 011) = 2, d(000, 101) = 2, d(000, 110) = 2,
d(011, 101) = 2, d(011, 110) = 2, d(101, 110) = 2.

The dmin in this case is 2.
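
A small Python sketch (ours, not from the slides) that finds dmin by checking every pair of codewords in Table 10.1:

```python
from itertools import combinations

def d_min(codewords):
    """Smallest Hamming distance over all pairs of valid codewords."""
    return min(sum(a != b for a, b in zip(c1, c2))
               for c1, c2 in combinations(codewords, 2))

# Codewords of Table 10.1 (k = 2, n = 3)
print(d_min(["000", "011", "101", "110"]))   # 2
```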


Department of Computer Science & Engineering

 If a code has to detect up to ‘s’ errors, the minimum distance between the valid codes must be ‘s + 1’, i.e., dmin = s + 1.
 We use a geometric approach to justify dmin = s + 1.

 Let us assume that the sent codeword x is at the center of a circle with radius s.
 All received codewords that are created by 0 to s errors are points inside the circle or on the perimeter of the circle.
 All other valid codewords must be outside the circle.
Department of Computer Science & Engineering

A code scheme has a Hamming distance dmin = 4. This code guarantees the detection of up to three errors (dmin = s + 1, so s = 3).
Department of Computer Science & Engineering

Linear Block Codes

 A linear block code is a code in which the exclusive OR (addition modulo-2) of two valid codewords creates another valid codeword.
 The scheme in Table 10.1 is a linear block code because the result of XORing
any codeword with any other codeword is a valid codeword. For example, the
XORing of the second and third codewords creates the fourth one.
Department of Computer Science & Engineering

Minimum Distance for Linear Block Codes


• It is simple to find the minimum Hamming distance for a linear block code.
• The minimum Hamming distance is the number of 1s in the nonzero valid
codeword with the smallest number of 1s.
Example :
 In our first code (Table 10.1), the numbers of 1s in the nonzero codewords are
2, 2, and 2. So the minimum Hamming distance is dmin = 2.

Table 10.1
Department of Computer Science & Engineering

Parity-Check Code

 Most familiar error-detecting code is the parity-check code.

 This code is a linear block code.

 In this code, a k-bit dataword is changed to an n-bit codeword where n = k + 1.

 The extra bit, called the parity bit, is selected to make the total number of 1s in the
codeword even.

• The code below is a parity-check code (k = 2 and n = 3)


Department of Computer Science & Engineering

 The code in Table below is also a parity-check code with k = 4 and n = 5


Department of Computer Science & Engineering

Encoder and decoder for simple parity-check code


Department of Computer Science & Engineering

 The calculation is done in modulo-2 arithmetic.

 The encoder uses a generator that takes a copy of a 4-bit dataword (a0, a1, a2,
and a3) and generates a parity bit r0.

 The dataword bits and the parity bit create the 5-bit codeword.

 The parity bit that is added makes the number of 1s in the codeword even.
This is normally done by adding the 4 bits of the dataword (modulo-2); the
result is the parity bit.

r0 = a3 + a2 + a1 + a0 (modulo-2)

 If the number of 1s is even, the result is 0; if the number of 1s is odd, the


result is 1.

 In both cases, the total number of 1s in the codeword is even.


Department of Computer Science & Engineering

 The sender sends the codeword, which may be corrupted during transmission. The
receiver receives a 5-bit word.
 The checker at the receiver does the same thing as the generator in the sender with
one exception: The addition is done over all 5 bits. The result, which is called the
syndrome, is just 1 bit.
 The syndrome is 0 when the number of 1s in the received codeword is even;
otherwise, it is 1.
s0 = b3 + b2 + b1 + b0 + q0 (modulo-2)
 The syndrome is passed to the decision logic analyzer. If the syndrome is 0, there is
no detectable error in the received codeword; the data portion of the received
codeword is accepted as the dataword;
 If the syndrome is 1, the data portion of the received codeword is discarded. The
dataword is not created.
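
To make the generator and checker concrete, here is a hedged Python sketch of the even-parity encoder (r0 = a3 + a2 + a1 + a0 modulo-2) and the syndrome computed at the receiver; the function names are ours.

```python
def parity_encode(dataword: str) -> str:
    """Append the even-parity bit: r0 = a3 + a2 + a1 + a0 (modulo-2)."""
    r0 = sum(int(b) for b in dataword) % 2
    return dataword + str(r0)

def parity_syndrome(codeword: str) -> int:
    """Checker: add all received bits modulo-2.
    0 -> no detectable error; 1 -> discard the received word."""
    return sum(int(b) for b in codeword) % 2

print(parity_encode("1011"))      # '10111'
print(parity_syndrome("10111"))   # 0  (accepted, dataword 1011 extracted)
print(parity_syndrome("10011"))   # 1  (single-bit error detected)
print(parity_syndrome("00110"))   # 0  (two errors cancel out and slip through)
```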
Department of Computer Science & Engineering

Example
 Let us look at some transmission scenarios. Assume the sender sends the dataword 1011.

The codeword created from this dataword is 10111, which is sent to the receiver. We

examine five cases:

1. the received codeword is 10111


– The syndrome is 0.
– No error occurs;
– The dataword 1011 is created.

2. One single-bit error changes a1. The received codeword is 10011.


– The syndrome is 1.
– No dataword is created.

3. One single-bit error changes r0. The received codeword is 10110.


– The syndrome is 1.
– No dataword is created.

• Note that although none of the dataword bits are corrupted, no dataword is created

because the code is not sophisticated enough to show the position of the corrupted bit.
Department of Computer Science & Engineering
Example
4. An error changes r0 and a second error changes a3. The received codeword is 00110.
– The syndrome is 0.
– The dataword 0011 is created at the receiver.
– Note that here the dataword is wrongly created due to the syndrome value.

• The simple parity-check decoder cannot detect an even number of errors. The errors

cancel each other out and give the syndrome a value of 0.

5. Three bits—a3, a2, and a1—are changed by errors. The received codeword is 01011.
– The syndrome is 1.
– The dataword is not created.

• This shows that the simple parity check, guaranteed to detect one single error, can

also find any odd number of errors.


Department of Computer Science & Engineering

Cyclic codes
 Cyclic codes are special linear block codes with one extra property.
 In a cyclic code, if a codeword is cyclically shifted (rotated), the result is another
codeword.

 For example, if 1011000 is a codeword and we cyclically left-shift, then 0110001 is also a codeword.

 In this case, if we call the bits in the first word a0 to a6, and the bits in the second word b0 to b6, we can shift the bits by using the following:

b1 = a0   b2 = a1   b3 = a2   b4 = a3   b5 = a4   b6 = a5   b0 = a6

 In the rightmost equation, the last bit of the first word is wrapped around and
becomes the first bit of the second word.
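
In terms of the written bit string, a cyclic left shift simply moves the leftmost bit to the end; a one-line Python sketch (ours):

```python
def cyclic_left_shift(codeword: str) -> str:
    """Rotate one position to the left: the leftmost written bit
    wraps around and becomes the rightmost bit."""
    return codeword[1:] + codeword[0]

print(cyclic_left_shift("1011000"))   # '0110001'
```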
Department of Computer Science & Engineering

Cyclic Redundancy Check


 We can create cyclic codes to correct errors.

 The cyclic redundancy check (CRC) is used in networks such as LANs and WANs.

Figure :CRC encoder and decoder


Department of Computer Science & Engineering

 In the encoder, the dataword has k bits (4 here); the codeword has n bits (7 here).
 The size of the dataword is augmented by adding n − k (3 here) 0s to the right-hand side
of the word.
 The n-bit result is fed into the generator. The generator uses a divisor of size n − k + 1
(4 here), predefined and agreed upon.
 The generator divides the augmented dataword by the divisor (modulo-2 division).
 The quotient of the division is discarded; the remainder (r2r1r0) is appended to the
dataword to create the codeword.
 The decoder receives the codeword (possibly corrupted in transition). A copy of all n
bits is fed to the checker, which is a replica of the generator.
 The remainder produced by the checker is a syndrome of n − k (3 here) bits, which is
fed to the decision logic analyzer.
 The analyzer has a simple function. If the syndrome bits are all 0s, the 4 leftmost bits of
the codeword are accepted as the dataword (interpreted as no error); otherwise, the 4
bits are discarded (error).
Department of Computer Science & Engineering

Encoder
 The encoder takes a dataword and augments it with n − k number of 0s. It then
divides the augmented dataword by the divisor, as shown in Figure below

Figure : Division in
CRC encoder
Department of Computer Science & Engineering

 As in decimal division, the process is done step by step. In each step, a copy of the
divisor is XORed with the 4 bits of the dividend.

 The result of the XOR operation (remainder) is 3 bits (in this case), which is used
for the next step after 1 extra bit is pulled down to make it 4 bits long.

 There is one important point we need to remember in this type of division. If the
leftmost bit of the dividend (or the part used in each step) is 0, the step cannot use
the regular divisor; we need to use an all-0s divisor.

 When there are no bits left to pull down, we have a result. The 3-bit remainder
forms the check bits (r2, r1, and r0). They are appended to the dataword to create
the codeword.
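
A short Python sketch of this modulo-2 long division (ours; the dataword 1001 is assumed for illustration, while the divisor 1011 is the one used in these slides):

```python
def crc_remainder(dataword: str, divisor: str) -> str:
    """CRC encoder division: append n - k zeros, then repeatedly XOR the
    divisor into the dividend; when the leading bit of the current part is 0,
    the all-0s divisor is used (i.e., the step is skipped). Returns the
    (n - k)-bit remainder, which becomes the check bits."""
    r = len(divisor) - 1
    bits = [int(b) for b in dataword] + [0] * r        # augmented dataword
    div = [int(b) for b in divisor]
    for i in range(len(dataword)):
        if bits[i] == 1:                               # leading bit 1: use the real divisor
            for j, d in enumerate(div):
                bits[i + j] ^= d
    return "".join(str(b) for b in bits[-r:])

dataword, divisor = "1001", "1011"
remainder = crc_remainder(dataword, divisor)
print(remainder)               # check bits r2 r1 r0 = 110
print(dataword + remainder)    # codeword 1001110
```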
Department of Computer Science & Engineering

Decoder
 The decoder does the same division
process as the encoder. The
remainder of the division is the
syndrome.
 If the syndrome is all 0s, there is no
error with a high probability; the
dataword is separated from the
received codeword and accepted.
Otherwise, everything is discarded.
 Figure shows two cases: The left-
hand figure shows the value of the
syndrome when no error has
occurred; the syndrome is 000.

Figure : Division in the CRC decoder for two cases


Department of Computer Science & Engineering

Decoder
 The decoder does the same
division process as the
encoder. The remainder of
the division is the syndrome.
 If the syndrome is all 0s,
there is no error with a high
probability; the dataword is
separated from the received
codeword and accepted.
Otherwise, everything is
discarded.
 Figure shows two cases: The
right-hand part of the figure
shows the case in which
there is a single error. The
syndrome is not all 0s (it is
011).
Department of Computer Science & Engineering

Polynomials

 A better way to understand cyclic codes and how they can be analyzed is to represent
them as polynomials.
 A pattern of 0s and 1s can be represented as a polynomial with coefficients of 0 and 1.
 The power of each term shows the position of the bit; the coefficient shows the value
of the bit.
Department of Computer Science & Engineering

Degree of a Polynomial

 The degree of a polynomial is the highest power in the


polynomial.

 For example, the degree of the polynomial x^6 + x + 1 is 6.

 Note that the degree of a polynomial is 1 less than the number of bits in the pattern. The bit pattern in this case has 7 bits.
Department of Computer Science & Engineering

Adding and Subtracting Polynomials

 Adding and subtracting polynomials in mathematics are done


by adding or subtracting the coefficients of terms with the
same power.

 In our case, the coefficients are only 0 and 1, and adding is in


modulo-2. This has two consequences.

 First, addition and subtraction are the same.

 Second, adding or subtracting is done by combining terms and


deleting pairs of identical terms.

 For example, adding x^5 + x^4 + x^2 and x^6 + x^4 + x^2 gives just x^6 + x^5. The terms x^4 and x^2 are deleted.
Department of Computer Science & Engineering

Multiplying or Dividing Terms

 In this arithmetic, multiplying a term by another term is very simple; we just add the powers. For example, x^3 × x^4 is x^7.

 For dividing, we just subtract the power of the second term from the power of the first. For example, x^5/x^2 is x^3.
Department of Computer Science & Engineering

Shifting
 A binary pattern is often shifted a number of bits to the right or left.
Shifting to the left means adding extra 0s as rightmost bits; shifting to the
right means deleting some rightmost bits.

 Shifting to the left is accomplished by multiplying each term of the polynomial by x^m, where m is the number of shifted bits; shifting to the right is accomplished by dividing each term of the polynomial by x^m.

 The following shows shifting to the left and to the right. For example, shifting 10011 (x^4 + x + 1) three bits to the left gives 10011000 (x^7 + x^4 + x^3); shifting it three bits to the right gives 10 (x).


Department of Computer Science & Engineering

Cyclic Code Encoder Using Polynomials


 The divisor in a cyclic code is normally called the generator polynomial or simply the
generator.

 The divisor 1011 is represented as x^3 + x + 1.

Figure : CRC division


using polynomials
Department of Computer Science & Engineering

Cyclic Code Analysis


 We can analyze a cyclic code to find its capabilities by using polynomials.

 We define the following, where f(x) is a polynomial with binary coefficients.

Dataword: d(x) Codeword: c(x) Generator: g(x) Syndrome: s(x) Error: e(x)

 Note that all additions and divisions here are modulo-2 polynomial operations.

 Let us first find the relationship among the sent codeword, error, received codeword, and the generator. We can say

Received codeword = c(x) + e(x)
Department of Computer Science & Engineering

 In other words, the received codeword is the sum of the sent codeword and the
error. The receiver divides the received codeword by g(x) to get the syndrome.
 We can write this as

Received codeword / g(x) = c(x)/g(x) + e(x)/g(x)

 The first term at the right-hand side of the equality has a remainder of zero
(according to the definition of codeword).
 So the syndrome is actually the remainder of the second term on the right-hand
side.
 If this term does not have a remainder (syndrome = 0), either e(x) is 0 or e(x) is
divisible by g(x).
 We do not have to worry about the first case (there is no error); the second case is
very important. Those errors that are divisible by g(x) are not caught.
Department of Computer Science & Engineering

Single-Bit Error

 If the generator has more than one term and the coefficient of x0 is 1, all single-bit
errors can be caught.
Department of Computer Science & Engineering

Two Isolated Single-Bit Errors


 If a generator cannot divide x^t + 1 (t between 0 and n − 1), then all isolated double errors can be detected.
Department of Computer Science & Engineering

Odd Numbers of Errors


 A generator that contains a factor of x + 1 can detect all odd-numbered errors.
Burst Errors
 All burst errors of length L ≤ r (the number of check bits) are detected. A burst error of length L = r + 1 goes undetected with probability (1/2)^(r−1), and a burst error of length L > r + 1 goes undetected with probability (1/2)^r.
Department of Computer Science & Engineering

 We can summarize the criteria for a good polynomial generator. It should:
1. have at least two terms,
2. have the coefficient of the term x^0 equal to 1,
3. not divide x^t + 1 for t between 2 and n − 1, and
4. have the factor x + 1.

Department of Computer Science & Engineering

Standard Polynomials
Department of Computer Science & Engineering

Hardware Implementation

 One of the advantages of a cyclic code is that the encoder and decoder can easily
and cheaply be implemented in hardware by using a handful of electronic devices.
 Also, a hardware implementation increases the rate of check bit and syndrome bit
calculation.
Divisor
 Let us first consider the divisor. We need to note the following points:
1.The divisor is repeatedly XORed with part of the dividend.
2.The divisor has n − k + 1 bits which either are predefined or are all 0s. The bits
do not change from one dataword to another. The divisor bits were either 1011
or 0000. The choice was based on the leftmost bit of the part of the augmented
data bits that are active in the XOR operation.
Department of Computer Science & Engineering

3. A close look shows that only n − k bits of the divisor are needed in the XOR
operation. The leftmost bit is not needed because the result of the operation is
always 0, no matter what the value of this bit. The reason is that the inputs to this
XOR operation are either both 0s or both 1s.

Fig: Hardwired design of the divisor in CRC


Department of Computer Science & Engineering

Fig: Simulation of division in CRC encoder

 At each clock tick, shown as different times, one of the bits from the augmented
dataword is used in the XOR process.
Department of Computer Science & Engineering

Fig: The CRC encoder design using shift registers

 A 1-bit shift register holds a bit for a duration of one clock time.
 At each clock tick, the shift register accepts the bit at its input port, stores the new
bit, and displays it on the output port.
 The content and the output remain the same until the next input arrives. When we
connect several 1-bit shift registers together, it looks as if the contents of the
register are shifting.
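
A software simulation of this kind of shift-register divider is sketched below (ours; register placement in the actual figure may differ slightly). One bit of the augmented dataword enters per clock tick, and the divisor without its leftmost bit is XORed in whenever the bit leaving the leftmost register is 1.

```python
def crc_shift_register(dataword: str, divisor: str = "1011") -> str:
    """Simulate the CRC encoder built from n - k one-bit shift registers."""
    r = len(divisor) - 1
    taps = int(divisor[1:], 2)                 # divisor minus its leftmost bit
    regs = 0                                   # contents of the r registers
    for bit in dataword + "0" * r:             # augmented dataword, one bit per tick
        out = (regs >> (r - 1)) & 1            # bit shifted out of the leftmost register
        regs = ((regs << 1) | int(bit)) & ((1 << r) - 1)
        if out:
            regs ^= taps
    return format(regs, f"0{r}b")

print(crc_shift_register("1001"))              # '110', same remainder as the long division
```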
Department of Computer Science & Engineering

Fig: General design of encoder and decoder of a CRC code


Department of Computer Science & Engineering

CHECKSUM
 Checksum is an error-detecting technique that can be applied to a message of any
length.
 In the Internet, the checksum technique is mostly used at the network and transport
layer rather than the data-link layer.
 At the source, the message is first divided into m-bit units. The generator then
creates an extra m-bit unit called the checksum, which is sent with the message.
 At the destination, the checker creates a new checksum from the combination of
the message and sent checksum.
 If the new checksum is all 0s, the message is accepted; otherwise, the message is
discarded.
 In the real implementation, the checksum unit is not necessarily added at the end of
the message; it can be inserted in the middle of the message.
Department of Computer Science & Engineering

Fig: Checksum
Department of Computer Science & Engineering

Concept

 Suppose the message is a list of five 4-bit numbers that we want to send to a

destination. In addition to sending these numbers, we send the sum of the

numbers.

 For example, if the set of numbers is (7, 11, 12, 0, 6), we send (7, 11, 12, 0, 6, 36),

where 36 is the sum of the original numbers.

 The receiver adds the five numbers and compares the result with the sum.

 If the two are the same, the receiver assumes no error, accepts the five numbers,

and discards the sum. Otherwise, there is an error somewhere and the message is

not accepted.
Department of Computer Science & Engineering

 The previous example has one major drawback. Each number can be written as a 4-bit

word (each is less than 15) except for the sum.

 One solution is to use one’s complement arithmetic.

 In this arithmetic, we can represent unsigned numbers between 0 and 2^m − 1 using

only m bits. If the number has more than m bits, the extra leftmost bits need to be

added to the m rightmost bits (wrapping).

 We can make the job of the receiver easier if we send the negative (complement) of the

sum, called the checksum.

 In this case, we send (7, 11, 12, 0, 6, −36).

 The receiver can add all the numbers received (including the checksum). If the result

is 0, it assumes no error; otherwise, there is an error.
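
A minimal Python sketch of this idea, using the slides' numbers (7, 11, 12, 0, 6) and 4-bit (m = 4) one's-complement arithmetic; the helper name is ours:

```python
def wrap(value, m=4):
    """One's-complement wrapping: fold the extra leftmost bits back
    into the m rightmost bits until the value fits in m bits."""
    mask = (1 << m) - 1
    while value >> m:
        value = (value & mask) + (value >> m)
    return value

numbers = [7, 11, 12, 0, 6]
checksum = wrap(sum(numbers)) ^ 0b1111     # wrapped sum is 6, its complement is 9
print(checksum)                            # 9 is sent along with the message

# Receiver: wrap the sum of everything (including the checksum), then complement.
print(wrap(sum(numbers) + checksum) ^ 0b1111)   # 0 -> assume no error
```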


Department of Computer Science & Engineering

Algorithm to calculate a traditional checksum


Department of Computer Science & Engineering

Internet Checksum

 Traditionally, the Internet has used a 16-bit checksum.

 The sender and the receiver follow the steps depicted in Table. The sender or the
receiver uses five steps.

Table : Procedure to calculate the traditional checksum


Department of Computer Science & Engineering

Internet Checksum Example

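The worked example in the slides is a figure and is not reproduced here; instead, a hedged Python sketch of the 16-bit procedure over a made-up list of 16-bit words:

```python
def internet_checksum(words):
    """Traditional Internet checksum: 16-bit one's-complement sum of the
    words (carries wrapped back in), then the complement of the result."""
    total = sum(words)
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

message = [0x4500, 0x0073, 0x0000, 0x4000, 0x4011]   # hypothetical 16-bit words
chk = internet_checksum(message)
print(hex(chk))                                      # 0x3a7b for this data

# At the receiver, the words plus the checksum sum (one's complement) to 0xFFFF,
# so the complement is 0 and the message is accepted.
print(internet_checksum(message + [chk]))            # 0
```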


Department of Computer Science & Engineering

Performance
 The traditional checksum uses a small number of bits (16) to
detect errors in a message of any size (sometimes thousands of bits).
 However, it is not as strong as the CRC in error-checking
capability.
 For example, if the value of one word is incremented and the
value of another word is decremented by the same amount, the
two errors cannot be detected because the sum and checksum
remain the same.
 Also, if the values of several words are incremented but the sum
and the checksum do not change, the errors are not detected.
Department of Computer Science & Engineering

Other Approaches to the Checksum

 There is one major problem with the traditional checksum


calculation.

 If two 16-bit items are transposed in transmission, the checksum


cannot catch this error.

 The reason is that the traditional checksum is not weighted: it treats


each data item equally.

 In other words, the order of data items is immaterial to the


calculation.

 Several approaches have been used to prevent this problem. We mention two of them here: Fletcher and Adler.
Department of Computer Science & Engineering

Fletcher Checksum
 The Fletcher checksum was devised to weight each data item according to its
position.
 Fletcher has proposed two algorithms: 8-bit and 16-bit.

 The first, 8-bit Fletcher, calculates on 8-bit data items and creates a 16-bit
checksum.
 The second, 16-bit Fletcher, calculates on 16-bit data items and creates a 32-
bit checksum.
 The 8-bit Fletcher is calculated over data octets (bytes) and creates a 16-bit
checksum.
 The calculation is done modulo 256 (2^8), which means the intermediate results
are divided by 256 and the remainder is kept.
 The algorithm uses two accumulators, L and R. The first simply adds data items together; the second adds a weight to the calculation.
Department of Computer Science & Engineering

Algorithm to calculate an 8-bit Fletcher checksum

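The algorithm figure is not reproduced here; the following Python sketch follows the description above (8-bit data items, both accumulators taken modulo 256, result R·256 + L). The function name and test data are ours.

```python
def fletcher8(data: bytes) -> int:
    """8-bit Fletcher checksum as described above: L adds the data bytes,
    R adds the running value of L (weighting each byte by its position)."""
    L = R = 0
    for byte in data:
        L = (L + byte) % 256
        R = (R + L) % 256
    return (R << 8) | L           # 16-bit checksum

print(hex(fletcher8(b"abcde")))   # differs from ...
print(hex(fletcher8(b"abced")))   # ... the transposed data, unlike a plain sum
```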


Department of Computer Science & Engineering

Adler Checksum

 The Adler checksum is a 32-bit checksum.

 It is similar to the 16-bit Fletcher with three differences.

 First, calculation is done on single bytes instead of 2 bytes at a time.

 Second, the modulus is a prime number (65,521) instead of 65,536.

 Third, L is initialized to 1 instead of 0.

 It has been proved that a prime modulo has a better detecting


capability in some combinations of data.
Department of Computer Science & Engineering

Algorithm to calculate an Adler checksum
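The algorithm figure is not reproduced here; below is a Python sketch of the three differences just listed (single bytes, prime modulus 65,521, L initialized to 1). For this form of the calculation the result matches Python's zlib.adler32.

```python
import zlib

def adler_checksum(data: bytes) -> int:
    """Adler checksum: byte-at-a-time, modulus 65,521, L starts at 1."""
    MOD = 65521
    L, R = 1, 0
    for byte in data:
        L = (L + byte) % MOD
        R = (R + L) % MOD
    return (R << 16) | L          # 32-bit checksum

data = b"example payload"
assert adler_checksum(data) == zlib.adler32(data)
print(hex(adler_checksum(data)))
```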


Department of Computer Science & Engineering

Data Link Control


 The data-link layer is divided into two sublayers.

 The upper sublayer of the data-link layer (DLC).

 The lower sublayer, multiple access control (MAC)

 The data link control (DLC) deals with procedures for


communication between two adjacent nodes—node-to-node
communication—no matter whether the link is dedicated or
broadcast.

 Data link control functions include framing and flow and error
control.
Department of Computer Science & Engineering

DLC SERVICES

 Data link control functions include framing and flow and error control.

Framing
 Data transmission in the physical layer means moving bits in the form of a
signal from the source to the destination.
 The data-link layer, on the other hand, needs to pack bits into frames, so that
each frame is distinguishable from another.
 Our postal system practices a type of framing.
‣ The simple act of inserting a letter into an envelope separates one piece of
information from another, the envelope serves as the delimiter.
‣ In addition, each envelope defines the sender and receiver addresses, which is
necessary since the postal system is a many to- many carrier facility.
Department of Computer Science & Engineering

 Framing in the data-link layer separates a message from one source to a


destination by adding a sender address and a destination address.
 The destination address defines where the packet is to go; the sender
address helps the recipient acknowledge the receipt.
 Although the whole message could be packed in one frame, that is not
normally done.
‣ One reason is that a frame can be very large, making flow and error
control very inefficient.
‣ When a message is carried in one very large frame, even a single-bit
error would require the retransmission of the whole frame.
‣ When a message is divided into smaller frames, a single-bit error
affects only that small frame.
Department of Computer Science & Engineering

Character-Oriented Framing

 In character-oriented (or byte-oriented) framing, data to be carried are 8-bit


characters from a coding system such as ASCII

 The header, which normally carries the source and destination addresses and
other control information, and the trailer, which carries error detection
redundant bits, are also multiples of 8 bits.

 To separate one frame from the next, an 8-bit (1-byte) flag is added at the
beginning and the end of a frame.

 The flag, composed of protocol-dependent special characters, signals the start or


end of a frame.

Figure : A frame in a character-oriented protocol


Department of Computer Science & Engineering

 Character-oriented framing was popular when only text was exchanged by the
data-link layers.
 The flag could be selected to be any character not used for text
communication.
 Now, however, we send other types of information such as graphs, audio, and
video; any character used for the flag could also be part of the information.
 If this happens, the receiver, when it encounters this pattern in the middle of the
data, thinks it has reached the end of the frame.
 To fix this problem, a byte-stuffing strategy was added to character-oriented
framing.
 In byte stuffing (or character stuffing), a special byte is added to the data
section of the frame when there is a character with the same pattern as the
flag.
Department of Computer Science & Engineering

 The data section is stuffed with an extra byte. This byte is usually called the
escape character (ESC) and has a predefined bit pattern.
 Whenever the receiver encounters the ESC character, it removes it from the data
section and treats the next character as data, not as a delimiting flag.
Department of Computer Science & Engineering

 Byte stuffing by the escape character allows the presence of the flag in the
data section of the frame, but it creates another problem.

 What happens if the text contains one or more escape characters followed

by a byte with the same pattern as the flag? The receiver removes the

escape character, but keeps the next byte, which is incorrectly interpreted as

the end of the frame.

 To solve this problem, the escape characters that are part of the text must

also be marked by another escape character.

 In other words, if the escape character is part of the text, an extra one is
added to show that the second one is part of the text.
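
A hedged Python sketch of both directions of byte stuffing (ours); the byte values 0x7E and 0x7D are just example flag and ESC values, not mandated by this discussion.

```python
FLAG, ESC = 0x7E, 0x7D   # example flag and escape byte values

def byte_stuff(data: bytes) -> bytes:
    """Sender: insert ESC before every data byte that looks like a flag or an ESC."""
    out = bytearray()
    for b in data:
        if b in (FLAG, ESC):
            out.append(ESC)
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Receiver: drop each ESC and treat the byte that follows it as plain data."""
    out, escaped = bytearray(), False
    for b in stuffed:
        if not escaped and b == ESC:
            escaped = True
            continue
        out.append(b)
        escaped = False
    return bytes(out)

data = bytes([0x61, FLAG, 0x62, ESC, 0x63])      # data containing a flag and an ESC
assert byte_unstuff(byte_stuff(data)) == data
```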
Department of Computer Science & Engineering

Bit-Oriented Framing

 In bit-oriented framing, the data section of a frame is a sequence of bits


to be interpreted by the upper layer as text, graphic, audio, video, and so
on.
 In addition to headers (and possible trailers), we still need a delimiter to
separate one frame from the other.
 Most protocols use a special 8-bit pattern flag, 01111110, as the
delimiter to define the beginning and the end of the frame

Figure : A frame in a bit-oriented protocol


Department of Computer Science & Engineering

 If the flag pattern appears in the data, we need to somehow inform the

receiver that this is not the end of the frame.

 We do this by stuffing 1 single bit (instead of 1 byte) to prevent the

pattern from looking like a flag. The strategy is called bit stuffing.

 In bit stuffing, if a 0 and five consecutive 1 bits are encountered, an extra 0

is added. This extra stuffed bit is eventually removed from the data by the

receiver.
 Note that the extra bit is added after one 0 followed by five 1s regardless
of the value of the next bit. This guarantees that the flag field sequence
does not inadvertently appear in the frame.
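
A Python sketch of bit stuffing and unstuffing (ours). Because the data section follows a flag that ends in 0, stuffing a 0 after every run of five consecutive 1s is equivalent to the "0 followed by five 1s" rule described above.

```python
def bit_stuff(bits: str) -> str:
    """Sender: after five consecutive 1s, insert a 0 so the flag
    pattern 01111110 can never appear inside the data."""
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:
            out.append("0")        # stuffed bit
            ones = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Receiver: remove the bit that follows five consecutive 1s
    (in correctly stuffed data it is always the stuffed 0)."""
    out, ones, skip = [], 0, False
    for b in bits:
        if skip:
            skip, ones = False, 0
            continue
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:
            skip = True
    return "".join(out)

data = "0111111111100"
stuffed = bit_stuff(data)
print(stuffed)                     # '011111011111000'
assert bit_unstuff(stuffed) == data
```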
Department of Computer Science & Engineering

Flow and Error Control


Flow Control

 Whenever an entity produces items and another entity consumes them, there

should be a balance between production and consumption rates.

 If the items are produced faster than they can be consumed, the consumer can be

overwhelmed and may need to discard some items.

 If the items are produced more slowly than they can be consumed, the consumer

must wait, and the system becomes less efficient.

 Flow control is related to the first issue.

 We need to prevent losing the data items at the consumer site.


Department of Computer Science & Engineering

 In communication at the data-link layer, we are dealing with four entities: network
and data-link layers at the sending node and network and data-link layers at the
receiving node.

 The figure shows that the data-link layer at the sending node tries to push frames
toward the data-link layer at the receiving node.

 If the receiving node cannot process and deliver the packet to its network at the
same rate that the frames arrive, it becomes overwhelmed with frames.

 Flow control in this case can be feedback from the receiving node to the sending
node to stop or slow down pushing frames.

Figure : Flow control at the data-link layer


Department of Computer Science & Engineering

Buffers

 Although flow control can be implemented in several ways, one of the solutions is

normally to use two buffers; one at the sending data-link layer and the other at

the receiving data-link layer.

 A buffer is a set of memory locations that can hold packets at the sender and

receiver.

 The flow control communication can occur by sending signals from the

consumer to the producer.

 When the buffer of the receiving data-link layer is full, it informs the sending

data-link layer to stop pushing frames.


Department of Computer Science & Engineering

Error Control
 Error control at the data-link layer is normally very simple and implemented
using one of the following two methods.

 In both methods, a CRC is added to the frame header by the sender and

checked by the receiver.


1. In the first method, if the frame is corrupted, it is silently discarded; if it is
not corrupted, the packet is delivered to the network layer. This method is
used mostly in wired LANs such as Ethernet.
2. In the second method, if the frame is corrupted, it is silently discarded; if it
is not corrupted, an acknowledgment is sent (for the purpose of both flow
and error control) to the sender.
Department of Computer Science & Engineering

Combination of Flow and Error Control

 Flow and error control can be combined.

 In a simple situation, the acknowledgment that is sent for flow control

can also be used for error control to tell the sender the packet has

arrived uncorrupted.

 The lack of acknowledgment means that there is a problem in the sent

frame
Department of Computer Science & Engineering

Connectionless and Connection-Oriented

 A DLC protocol can be either connectionless or connection-oriented.

Connectionless Protocol

 In a connectionless protocol, frames are sent from one node to the next without

any relationship between the frames; each frame is independent.

 Note that the term connectionless here does not mean that there is no physical

connection (transmission medium) between the nodes; it means that there is no

connection between frames.

 The frames are not numbered and there is no sense of ordering.

 Most of the data-link protocols for LANs are connectionless protocols.


Department of Computer Science & Engineering

Connection-Oriented Protocol

 In a connection-oriented protocol, a logical connection should first be


established between the two nodes (setup phase).

 After all frames that are somehow related to each other are transmitted
(transfer phase), the logical connection is terminated (teardown phase).

 In this type of communication, the frames are numbered and sent in order.

 If they are not received in order, the receiver needs to wait until all frames
belonging to the same set are received and then deliver them in order to
the network layer.

 Connection oriented protocols are rare in wired LANs, but we can see them
in some point-to-point protocols, some wireless LANs, and some WANs.
Department of Computer Science & Engineering

DATA-LINK LAYER PROTOCOLS

 Four protocols have been defined for the data-link layer to deal with flow and
error control:

1.Simple

2.Stop-and-Wait

3.Go-Back-N

4.Selective-Repeat.

 The behavior of a data-link-layer protocol can be better shown as a finite state


machine (FSM).

 An FSM is thought of as a machine with a finite number of states.

 The machine is always in one of the states until an event occurs.


Department of Computer Science & Engineering
 The figure shows a machine with three states.
 There are only three possible events and three possible actions.
 Each event is associated with two reactions: defining the list (possibly empty) of actions to be performed and determining the next state (which can be the same as the current state).
 One of the states must be defined as the initial state, the state in which the machine starts when it turns on.
‣ The machine starts in state I.
‣ If event 1 occurs, the machine performs actions 1 and 2 and moves to state II.
‣ When the machine is in state II, two events may occur. If event 1 occurs, the machine performs action 3 and remains in the same state, state II. If event 3 occurs, the machine performs no action, but moves to state I.

Figure : Connectionless and


connection-oriented service
represented as FSMs
Department of Computer Science & Engineering

1. Simple Protocol
 Our first protocol is a simple protocol with neither flow nor error control.
 We assume that the receiver can immediately handle any frame it receives. In
other words, the receiver can never be overwhelmed with incoming frames
 The data-link layer at the sender gets a packet from its network layer, makes a
frame out of it, and sends the frame.
 The data-link layer at the receiver receives a frame from the link, extracts the
packet from the frame, and delivers the packet to its network layer.
 The data-link layers of the sender and receiver provide transmission services for
their network layers.
Department of Computer Science & Engineering

FSMs
 The sender site should not send a frame until its network layer has a message to
send.
 The receiver site cannot deliver a message to its network layer until a frame
arrives.
 We can show these requirements using two FSMs. Each FSM has only one state,
the ready state.
Department of Computer Science & Engineering

 The sending machine remains in the ready state until a request comes from the
process in the network layer.
 When this event occurs, the sending machine encapsulates the message in a
frame and sends it to the receiving machine.
 The receiving machine remains in the ready state until a frame arrives from the
sending machine.
 When this event occurs, the receiving machine decapsulates the message out of
the frame and delivers it to the process at the network layer.
Department of Computer Science & Engineering

Example

• Figure shows an example of communication using this protocol.


• The sender sends frames one after another without even thinking about
the receiver.
Department of Computer Science & Engineering

2. Stop-and-Wait Protocol
 uses both flow and error control

 The sender sends one frame at a time and waits for an acknowledgment before
sending the next one.

 To detect corrupted frames, we need to add a CRC to each data frame.

 When a frame arrives at the receiver site, it is checked. If its CRC is incorrect, the
frame is corrupted and silently discarded.

 The silence of the receiver is a signal for the sender that a frame was either
corrupted or lost.
Department of Computer Science & Engineering

 Every time the sender sends a frame, it starts a timer.


 If an acknowledgment arrives before the timer expires, the timer is stopped and
the sender sends the next frame (if it has one to send).
 If the timer expires, the sender resends the previous frame, assuming that the
frame was either lost or corrupted.
 This means that the sender needs to keep a copy of the frame until its
acknowledgment arrives.
 When the corresponding acknowledgment arrives, the sender discards the copy
and sends the next frame if it is ready.
 FSMs
Sender States
‣ The sender is initially in the ready state, but it can move between the ready and
blocking state.
Department of Computer Science & Engineering

Ready State.
‣ When the sender is in this state, it is only waiting for a packet from the network layer.
‣ If a packet comes from the network layer, the sender creates a frame, saves a copy of
the frame, starts the only timer and sends the frame.
‣ The sender then moves to the blocking state.
Blocking State. When the sender is in this state, three events can occur:
a. If a time-out occurs, the sender resends the saved copy of the frame and restarts the
timer.
b. If a corrupted ACK arrives, it is discarded.
c. If an error-free ACK arrives, the sender stops the timer and discards the saved copy of
the frame. It then moves to the ready state.
Department of Computer Science & Engineering

Receiver
 The receiver is always in the ready state. Two events may occur:
• a. If an error-free frame arrives, the message in the frame is delivered to the
network
• layer and an ACK is sent.
• b. If a corrupted frame arrives, the frame is discarded.
Department of Computer Science & Engineering

 The first frame is sent and


acknowledged.
 The second frame is sent, but lost.
 After time-out, it is resent.
 The third frame is sent and
acknowledged, but the
acknowledgment is lost. The frame is
resent.
 However, there is a problem with
this scheme. The network layer at the
receiver site receives two copies of
the third packet, which is not right.
 Solution :sequence numbers and
acknowledgment numbers.
Department of Computer Science & Engineering

 The sequence numbers start with 0 (0, 1, 0, 1, 0, 1, …); the acknowledgment numbers start with 1 (1, 0, 1, 0, 1, 0, …).
 An acknowledgment number always defines the sequence number of the next
frame to receive.
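
A tiny Python sketch of the receiver side (ours, greatly simplified): with 1-bit sequence numbers a retransmitted duplicate is acknowledged again but not delivered twice, and every ACK carries the number of the next frame expected.

```python
def receiver(frames):
    """Stop-and-Wait receiver with 1-bit sequence numbers."""
    expected, delivered, acks = 0, [], []
    for seqno, packet in frames:
        if seqno == expected:
            delivered.append(packet)   # deliver only the frame with the expected number
            expected = 1 - expected
        acks.append(expected)          # ACK = sequence number of the next frame wanted
    return delivered, acks

# Frame (1, "B") is retransmitted because its ACK was lost in transit.
frames = [(0, "A"), (1, "B"), (1, "B"), (0, "C")]
print(receiver(frames))                # (['A', 'B', 'C'], [1, 0, 0, 1])
```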
Department of Computer Science & Engineering

Piggybacking
 The simple & stop and wait protocols
designed for unidirectional
communication, in which data is flowing
only in one direction although the
acknowledgment may travel in the other
direction.
 To make the communication more
efficient, the data in one direction is
piggybacked with the acknowledgment in
the other direction.
 when node A is sending data to node B,
Node A also acknowledges the data
received from node B.
 Because piggybacking makes
communication at the datalink layer more
complicated, it is not a common practice.
Department of Computer Science & Engineering

High-level Data Link Control (HDLC)

 HDLC is a bit-oriented protocol for communication over point-to-point and


multipoint links. It implements the Stop-and-Wait protocol
Configurations and Transfer Modes
 HDLC provides two common transfer modes that can be used in different
configurations:
1. Normal response mode (NRM)
2. Asynchronous balanced mode (ABM)
Department of Computer Science & Engineering

1. Normal response mode (NRM)

 In normal response mode (NRM), the station configuration is unbalanced.


 We have one primary station and multiple secondary stations.
 A primary station can send commands; a secondary station can only respond.
 The NRM is used for both point-to-point and multipoint links
Department of Computer Science & Engineering

Asynchronous balanced mode (ABM).

 In ABM, the configuration is balanced.


 The link is point-to-point, and each station can function as a primary and a
secondary (acting as peers)
 This is the common mode today
Department of Computer Science & Engineering

Framing
 HDLC defines three types of frames:
1. Information frames (I-frames)
2. Supervisory frames (S-frames)
3. Unnumbered frames (U-frames).
 Each type of frame serves as an envelope for the transmission of a different type of
message.
 I-frames are used to carry user data and control information relating to user data (piggybacking).
 S-frames are used only to transport control information.
 U-frames are reserved for system management. Information carried by U-frames is
intended for managing the link itself.
Department of Computer Science & Engineering

 Each frame in HDLC may contain up to six fields,

‣ Flag field. This field contains synchronization pattern 01111110, which identifies
both the beginning and the end of a frame.

‣ Address field. This field contains the address of the secondary station.
‣ Control field. The control field is one or two bytes used for flow and error
control.
‣ Information field. The information field contains the user’s data from the network
layer or management information. Its length can vary from one network to
another.
‣ FCS field. The frame check sequence (FCS) is the HDLC error detection field. It
can contain either a 2- or 4-byte CRC.
Department of Computer Science & Engineering

Control field format for the different frame types


Department of Computer Science & Engineering

Control Field for I-Frames

‣ I-frames are designed to carry user data from the network layer. In addition, they can
include flow- and error-control information (piggybacking).

‣ The subfields in the control field are used to define these functions.

‣ The first bit defines the type. If the first bit of the control field is 0, this means the
frame is an I-frame.

‣ The next 3 bits, called N(S), define the sequence number of the frame. Note that with
3 bits, we can define a sequence number between 0 and 7.

‣ The last 3 bits, called N(R), correspond to the acknowledgment number when
piggybacking is used.

‣ The single bit between N(S) and N(R) is called the P/F bit. The P/F field is a single bit with a dual purpose. It has meaning only when it is set (bit = 1) and can mean poll (the frame is sent by the primary station to a secondary) or final (the frame is sent by a secondary to the primary).
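
A small Python sketch (ours) that interprets an 8-bit control field written in the left-to-right order used above (type bits first, N(R) last); this follows the figure's layout, not the on-the-wire bit order.

```python
def parse_control(bits: str) -> dict:
    """Classify an HDLC control field and pull out its subfields."""
    if bits[0] == "0":                                    # I-frame
        return {"type": "I", "N(S)": int(bits[1:4], 2),
                "P/F": int(bits[4]), "N(R)": int(bits[5:8], 2)}
    if bits[:2] == "10":                                  # S-frame
        codes = {"00": "RR", "01": "REJ", "10": "RNR", "11": "SREJ"}
        return {"type": "S", "code": codes[bits[2:4]],
                "P/F": int(bits[4]), "N(R)": int(bits[5:8], 2)}
    return {"type": "U", "code": bits[2:4] + bits[5:8]}   # 2-bit prefix + 3-bit suffix

print(parse_control("01010010"))   # I-frame, N(S) = 5, P/F = 0, N(R) = 2
print(parse_control("10000111"))   # S-frame RR, P/F = 0, N(R) = 7
```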
Department of Computer Science & Engineering

Control Field for S-Frames


‣ Supervisory frames are used for flow and error control whenever piggybacking is
either impossible or inappropriate.

‣ S-frames do not have information fields. If the first 2 bits of the control field are
10, this means the frame is an S-frame.

‣ The last 3 bits, called N(R), correspond to the acknowledgment number (ACK) or
negative acknowledgment number (NAK), depending on the type of S-frame.

‣ The 2 bits called code are used to define the type of S-frame itself. With 2 bits, we
can have four types of S-frames

00 Receive ready (RR)
01 Reject (REJ)
10 Receive not ready (RNR)
11 Selective reject (SREJ)
Department of Computer Science & Engineering

 Receive ready (RR)


‣ This kind of frame acknowledges the receipt of a safe and sound frame or group
of frames.
‣ In this case, the value of the N(R) field defines the acknowledgment number.
 Receive not ready (RNR)
‣ This kind of frame is an RR frame with additional functions. It acknowledges the
receipt of a frame or group of frames, and it announces that the receiver is busy
and cannot receive more frames.
‣ It acts as a kind of congestion-control mechanism by asking the sender to slow
down.
‣ The value of N(R) is the acknowledgment number.
Department of Computer Science & Engineering

 Reject (REJ)
‣ This is a NAK frame, but not like the one used for Selective Repeat ARQ.
‣ It is a NAK that can be used in Go-Back-N ARQ to improve the efficiency
of the process by informing the sender, before the sender timer expires, that
the last frame is lost or damaged. The value of N(R) is the negative
acknowledgment number.
 Selective reject (SREJ)
‣ This is a NAK frame used in Selective Repeat ARQ.
‣ The value of N(R) is the negative acknowledgment number.
Department of Computer Science & Engineering

Control Field for U-Frames

 Unnumbered frames are used to exchange session management and control


information between connected devices.
 U-frames contain an information field, but one used for system management
information, not user data.
 As with S-frames, however, much of the information carried by U-frames is
contained in codes included in the control field.
 U-frame codes are divided into two sections: a 2-bit prefix before the P/F bit
and a 3-bit suffix after the P/F bit.
 Together, these two segments (5 bits) can be used to create up to 32 different
types of U-frames.
Department of Computer Science & Engineering

Figure : Example of
connection and
disconnection

 Figure shows how U-


frames can be used for
connection establishment
and connection release.
 Node A asks for a
connection with a set
asynchronous balanced
mode (SABM) frame;
 Node B gives a positive
response with an
unnumbered
acknowledgment (UA)
frame.

 After these two exchanges, data can be transferred between the two nodes (not shown in
the figure).
 After data transfer, node A sends a DISC (disconnect) frame to release the connection; it is
confirmed by node B responding with a UA (unnumbered acknowledgment).
Department of Computer Science & Engineering

Example of piggybacking with and without error

 Figure shows two exchanges using piggybacking.


 The first is the case where no error has occurred; the second is the case where
an error has occurred and some frames are discarded.
Department of Computer Science & Engineering

POINT-TO-POINT PROTOCOL (PPP)


 One of the most common protocols for point-to-point access is the Point-to-Point
Protocol (PPP).
 Today, millions of Internet users who need to connect their home computers to
the server of an Internet service provider use PPP to control and manage the
transfer of data.
Services
Services Provided by PPP
 PPP defines the format of the frame to be exchanged between devices.
 It also defines how two devices can negotiate the establishment of the link and
the exchange of data.
 PPP is designed to accept payloads from several network layers (not only IP).
Authentication is also provided in the protocol, but it is optional.
 The new version of PPP, called Multilink PPP, provides connections over
multiple links.
 One interesting feature of PPP is that it provides network address configuration.
Department of Computer Science & Engineering

Services Not Provided by PPP

 PPP does not provide flow control. A sender can send several frames one after
another with no concern about overwhelming the receiver.

 PPP has a very simple mechanism for error control. A CRC field is used to detect
errors. If the frame is corrupted, it is silently discarded; the upper-layer protocol
needs to take care of the problem.

 Lack of error control and sequence numbering may cause a packet to be


received out of order.

 PPP does not provide a sophisticated addressing mechanism to handle frames in


a multipoint configuration.
Department of Computer Science & Engineering

Framing
 PPP uses a character-oriented (or byte-oriented) frame.

 Address. The address field in this protocol is a constant value and set to 11111111
(broadcast address).

 Control. This field is set to the constant value 00000011. PPP does not provide any flow
control. Error control is also limited to error detection.

 Protocol. The protocol field defines what is being carried in the data field: either user data
or other information. This field is by default 2 bytes long, but the two parties can agree to
use only 1 byte.

 Payload field. This field carries either the user data or other. The data field is a sequence of
bytes with the default of a maximum of 1500 bytes; but this can be changed during
negotiation.
 FCS. The frame check sequence (FCS) is simply a 2-byte or 4-byte standard CRC.
Department of Computer Science & Engineering

Byte Stuffing
 Since PPP is a byte-oriented protocol, the flag in PPP is a byte that needs to be escaped whenever it appears in the data section of the frame. The escape byte is 01111101.
Transition Phases

Figure :Transition phases


Department of Computer Science & Engineering

 The transition diagram, which is an FSM, starts with the dead state. In this state, there is
no active carrier (at the physical layer) and the line is quiet.
 When one of the two nodes starts the communication, the connection goes into the
establish state. In this state, options are negotiated between the two parties. If the two
parties agree that they need authentication (for example, if they do not know each other),
then the system needs to do authentication (an extra step); otherwise, the parties can
simply start communication. The link-control protocol packets are used for this purpose.
 Several packets may be exchanged here.
 Data transfer takes place in the open state. When a connection reaches this state, the
exchange of data packets can be started. The connection remains in this state until one of
the endpoints wants to terminate the connection.
 In this case, the system goes to the terminate state. The system remains in this state until
the carrier (physical-layer signal) is dropped, which moves the system to the dead state
again.
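The phases above can be summarized as a small finite state machine. The sketch below only illustrates the states and events just described; the state and event names are chosen for this example, not taken from the standard.

# Illustrative FSM for the PPP transition phases described above.
TRANSITIONS = {
    ("dead", "carrier detected"): "establish",
    ("establish", "options agreed, no authentication"): "open",
    ("establish", "options agreed, authentication required"): "authenticate",
    ("authenticate", "authentication successful"): "open",
    ("authenticate", "authentication failed"): "terminate",
    ("open", "close requested"): "terminate",
    ("terminate", "carrier dropped"): "dead",
}

def next_state(state: str, event: str) -> str:
    """Return the next phase; unknown events leave the phase unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "dead"
for event in ("carrier detected",
              "options agreed, authentication required",
              "authentication successful",
              "close requested",
              "carrier dropped"):
    state = next_state(state, event)
    print(event, "->", state)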
Multiplexing
 Three sets of protocols are defined to make PPP powerful:
1. The Link Control Protocol (LCP),
2. Two Authentication Protocols (APs), and
3. Several Network Control Protocols (NCPs).
 At any moment, a PPP packet can carry data from one of these protocols in its
data field, as shown in the figure below.
Figure: Multiplexing in PPP
Link Control Protocol
 LCP is responsible for establishing, maintaining, configuring, and terminating
links.
 LCP also provides negotiation mechanisms to set options between the two endpoints.
Both endpoints of the link must reach an agreement about the options before the
link can be established.
 All LCP packets are carried in the payload field of the PPP frame with the protocol
field set to C021 in hexadecimal.
 The code field defines the type of LCP packet. There are 11 types of packets,
grouped into three categories:
1. The first category, comprising the first four packet types, is used for link
configuration during the establish phase.
2. The second category, comprising packet types 5 and 6, is used for link termination
during the termination phase.
3. The last five packets are used for link monitoring and debugging.
 The ID field holds a value that matches a request with a reply. One endpoint
inserts a value in this field, which will be copied into the reply packet.
 The length field defines the length of the entire LCP packet.
 The information field contains information, such as options, needed for some
LCP packets. There are many options that can be negotiated between the two
endpoints.
 Options are inserted in the information field of the configuration packets.
 In this case, the information field is divided into three fields: option type, option length,
and option data.
Authentication Protocols
 Authentication plays a very important role in PPP because PPP is designed for use over
dial-up links where verification of user identity is necessary.
 Authentication means validating the identity of a user who needs to access a set of
resources.
 PPP has created two protocols for authentication: the Password Authentication Protocol
(PAP) and the Challenge Handshake Authentication Protocol (CHAP).
 These protocols are used during the authentication phase.
PAP
• The Password Authentication Protocol (PAP) is a simple authentication procedure
with a two-step process:
1.The user who wants to access a system sends an authentication identification
(usually the user name) and a password.
2.The system checks the validity of the identification and password and either accepts
or denies connection.
 Figure shows the three types of packets used by PAP and how they are
actually exchanged.
 When a PPP frame is carrying any PAP packets, the value of the protocol field
is 0xC023.
 The three PAP packets are authenticate-request, authenticate-ack, and
authenticate-nak.
 The first packet is used by the user to send the user name and password.
 The second is used by the system to allow access.
 The third is used by the system to deny access.
CHAP
 The Challenge Handshake Authentication Protocol (CHAP) is a three-way handshaking
authentication protocol that provides greater security than PAP.
 In this method, the password is kept secret; it is never sent online.
a. The system sends the user a challenge packet containing a challenge value, usually a few
bytes.
b. The user applies a predefined function that takes the challenge value and the user’s own
password and creates a result. The user sends the result in the response packet to the
system.
c. The system does the same. It applies the same function to the password of the user
(known to the system) and the challenge value to create a result. If the result created is the
same as the result sent in the response packet, access is granted; otherwise, it is denied.
 CHAP is more secure than PAP, especially if the system continuously changes the challenge
value.
 Even if the intruder learns the challenge value and the result, the password is still secret.
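As a sketch of steps (b) and (c), the calculation could look like the following. It assumes the MD5-based response defined for CHAP in RFC 1994 (a hash over the packet identifier, the shared secret, and the challenge value); the variable names are illustrative.

import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """Result sent in the response packet: MD5(identifier + secret + challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secret = b"my-password"          # known to both user and system, never sent online
identifier = 1                   # copied from the challenge packet
challenge = os.urandom(16)       # random value chosen by the system

# User side: compute the result and send it in the response packet.
response = chap_response(identifier, secret, challenge)

# System side: repeat the same calculation and compare the two results.
access_granted = (response == chap_response(identifier, secret, challenge))
print("access granted" if access_granted else "access denied")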
CHAP
Fig: CHAP packets encapsulated in a PPP frame
 CHAP packets are encapsulated in the PPP frame with the protocol value C223 in
hexadecimal.
 There are four CHAP packets: challenge, response, success, and failure.
 The first packet is used by the system to send the challenge value.
 The second is used by the user to return the result of the calculation.
 The third is used by the system to allow access to the system.
 The fourth is used by the system to deny access to the system.
Network Control Protocols
 PPP is a multiple-network-layer protocol. It can carry a network-layer data packet
from protocols defined by the Internet, OSI, Xerox, DECnet, AppleTalk, Novell,
and so on.
 To do this, PPP has defined a specific Network Control Protocol for each network
protocol.
 For example, IPCP (Internet Protocol Control Protocol) configures the link for
carrying IP data packets.
 Xerox CP does the same for the Xerox protocol data packets, and so on.
 Note that none of the NCP packets carry network-layer data; they just configure
the link at the network layer for the incoming data.
IPCP
 One NCP protocol is the Internet Protocol Control Protocol (IPCP). This
protocol configures the link used to carry IP packets in the Internet.
 The format of an IPCP packet is shown in the figure below.
Figure: IPCP packet encapsulated in PPP frame
 IPCP defines seven packets, distinguished by their code values.
Other Protocols
 There are other NCP protocols for other network-layer protocols. The OSI
Network Layer Control Protocol has a protocol field value of 8023; the Xerox
NS IDP Control Protocol has a protocol field value of 8025; and so on.
Data from the Network Layer
 After the network-layer configuration is completed by one of the NCP protocols,
the users can exchange data packets from the network layer.
 Here again, there are different protocol fields for different network layers. For
example, if PPP is carrying data from the IP network layer, the field value is
0021 (note that the three rightmost digits are the same as for IPCP).
 If PPP is carrying data from the OSI network layer, the value of the protocol field
is 0023, and so on.
Figure: IP datagram encapsulated in a PPP frame
Multilink PPP
 PPP was originally designed for a single-channel point-to-point physical link.
 The availability of multiple channels in a single point-to-point link motivated the
development of Multilink PPP.
 In this case, a logical PPP frame is divided into several actual PPP frames.
 A segment of the logical frame is carried in the payload of an actual PPP frame
 To show that the actual PPP frame is carrying a fragment of a logical PPP
frame, the protocol field is set to 003D in hexadecimal (0x003D).
Example
 Let us go through the phases followed by a network-layer packet as it is
transmitted through a PPP connection.
 Figure shows the steps. For simplicity, we assume unidirectional movement of
data from the user site to the system site (such as sending an e-mail through
an ISP).
 The first two frames show link establishment. We have chosen two options:
using PAP for authentication and suppressing the address and control fields.
 Frames 3 and 4 are for authentication. Frames 5 and 6 establish the network-layer
connection using IPCP.
Media Access Control
 When nodes or stations are connected and use a common link, called a multipoint or
broadcast link, we need a multiple-access protocol to coordinate access to the link.
 The problem of controlling the access to the medium is similar to the rules of speaking in an
assembly. The procedures guarantee that the right to speak is upheld and ensure that two
people do not speak at the same time, do not interrupt each other, do not monopolize the
discussion, and so on.
 Many protocols have been devised to handle access to a shared link.
 All of these protocols belong to a sublayer in the data-link layer called media access
control (MAC).
 We categorize them into three groups, as shown in the figure.
Random Access
 Also called contention-based access.
 No station is superior to another station, and no station is assigned to control another.
 A station that has data to send uses a procedure defined by the protocol to make a
decision on whether or not to send.
 The decision depends on the state of the medium (idle or busy).
 Two features of random access:
‣ No scheduled time for a station to transmit
‣ No rules specify which station should send next
 Stations compete with one another to access the medium.
 In a random-access method, each station has the right to the medium without
being controlled by any other station.
 If more than one station tries to send, there is an access conflict—collision—and
the frames will be either destroyed or modified.
 To avoid access conflict or to resolve it when it happens, each station follows a
procedure that answers the following questions:
‣ When can the station access the medium?
‣ What can the station do if the medium is busy?
‣ How can the station determine the success or failure of the transmission?
‣ What can the station do if there is an access conflict?
ALOHA
 ALOHA is a type of random-access protocol.
 ALOHA was developed at the University of Hawaii in early 1970.
 It was designed for a radio (wireless) LAN, but it can be used on any shared
medium.
 It has two versions: pure ALOHA and slotted ALOHA.
 There is a potential for collisions because the medium is shared between the stations.
 When a station sends data, another station may attempt to do so at the same
time.
 The data from the two stations collide and become garbled.
Pure ALOHA
 The original ALOHA protocol is called pure ALOHA.
 The idea is that each station sends a frame whenever it has a frame to send
(multiple access).
 However, since there is only one channel to share, there is the possibility of
collision between frames from different stations.
Fig: Frames in Pure ALOHA
 If a collision occurs, the frames must be retransmitted.
 The pure ALOHA protocol relies on acknowledgments from the receiver.
 If the acknowledgment does not arrive after a time-out period, retransmission
takes place.
 If all these stations try to resend their frames after the time-out, the frames
will collide again.
 Therefore, after the time-out, each station waits a random amount of time (the backoff
time TB) before resending its frame.
 This randomness helps avoid more collisions.
Procedure for the pure ALOHA protocol
 Pure ALOHA has a second method to prevent congesting the channel with retransmitted
frames.
 After a maximum number of retransmission attempts Kmax, a station must give up and
try later.
 The time-out period is equal to the maximum possible round-trip propagation delay,
which is twice the amount of time required to send a frame between the two most widely
separated stations (2 × Tp).
 The backoff time TB is a random value that normally depends on K, the number of
attempted unsuccessful transmissions. The formula for TB depends on the
implementation; a common choice is TB = R × Tp (or R × Tfr), where R is a random
integer between 0 and 2^K − 1 (binary exponential backoff).
Example
 The stations on a wireless ALOHA network are a maximum of 600 km apart. If we
assume that signals propagate at 3 × 10^8 m/s,
 we find Tp = (600 × 10^3) / (3 × 10^8) = 2 ms.
 For K = 2, the range of R is {0, 1, 2, 3}.
 This means that TB = R × Tp can be 0, 2, 4, or 6 ms, based on the outcome of the random
variable R.
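A small Python sketch of this backoff calculation, assuming TB = R × Tp with R drawn uniformly from {0, ..., 2^K − 1} as in the example above (the Kmax limit shown is illustrative):

import random

def backoff_time(K: int, Tp: float) -> float:
    """Backoff time in seconds after the K-th unsuccessful attempt."""
    R = random.randint(0, 2**K - 1)   # R is in {0, ..., 2^K - 1}
    return R * Tp

Tp = 2e-3       # 2 ms, as in the example (600 km at 3 x 10^8 m/s)
Kmax = 15       # illustrative limit on the number of attempts

for K in (1, 2, 3):
    print(f"K = {K}: TB = {backoff_time(K, Tp) * 1000:.0f} ms")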
Vulnerable time
 Let us find the vulnerable time, the length of time in which there is a possibility of
collision.
 We assume that the stations send fixed-length frames with each frame taking Tfr
seconds to send. Figure below shows the vulnerable time for station B.
 Station B starts to send a frame at time t. Now imagine station A has started to send its
frame after t − Tfr. This leads to a collision between the frames from station B and
station A.
 On the other hand, suppose that station C starts to send a frame before time
t + Tfr. Here, there is also a collision between frames from station B and station C.
 The vulnerable time during which a collision may occur in pure ALOHA is 2 times the
frame transmission time.
Example
 A pure ALOHA network transmits 200-bit frames on a shared channel of 200
kbps. What is the requirement to make this frame collision-free?
Solution
 Average frame transmission time Tfr is 200 bits/200 kbps or 1 ms.
 The vulnerable time is 2 × 1 ms = 2 ms. This means no station should send later
than 1 ms before this station starts transmission and no station should start
sending during the period (1 ms) that this station is sending.
Throughput
 Let G be the average number of frames generated by the system during one frame
transmission time.
 The average number of successfully transmitted frames for pure ALOHA is
S = G × e^(−2G).
 The maximum throughput Smax = 0.184 (18.4 percent) occurs when G = 1/2.
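The formula can be checked numerically; the short sketch below evaluates the throughput of pure ALOHA and, for comparison, of slotted ALOHA (whose formula appears later in this section).

import math

def throughput_pure(G: float) -> float:
    return G * math.exp(-2 * G)      # S = G x e^(-2G)

def throughput_slotted(G: float) -> float:
    return G * math.exp(-G)          # S = G x e^(-G)

for G in (0.25, 0.5, 0.75, 1.0):
    print(f"G = {G:<4}  pure = {throughput_pure(G):.3f}  "
          f"slotted = {throughput_slotted(G):.3f}")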
Slotted ALOHA
 Slotted ALOHA was invented to improve the efficiency of pure ALOHA.
 Slotted ALOHA divides the time of the shared channel into discrete intervals called
time slots.
 Any station can transmit its data in any time slot.
 The only condition is that the station must start its transmission at the beginning
of a time slot.
 If the beginning of the slot is missed, the station has to wait until the
beginning of the next time slot.
 A collision may occur if two or more stations try to transmit data at the
beginning of the same time slot.
Vulnerable time for the slotted ALOHA protocol
 The vulnerable time is now reduced to one-half, equal to Tfr.
Throughput
 The average number of successfully transmitted frames for slotted ALOHA is
S = G × e^(−G).
 The maximum throughput Smax = 0.368 (36.8 percent) occurs when G = 1.
CSMA (Carrier Sense Multiple Access)
 CSMA was developed to minimize the chance of collision and to increase the performance.
 Principle of CSMA: "sense before transmit" or "listen before talk."
 Carrier busy = transmission is taking place.
 Carrier idle = no transmission currently taking place.
 CSMA can reduce the possibility of collision, but it cannot eliminate it.
Collision in CSMA
 At time t1, station B senses the medium and finds it idle, so it sends a frame.
 At time t2 (t2 > t1), station C senses the medium and finds it idle because, at this time,
the first bits from station B have not reached station C.
 Station C also sends a frame.
 The two signals collide and both frames are destroyed.
Vulnerable Time
 The vulnerable time for CSMA is the propagation time Tp. This is the time needed for
a signal to propagate from one end of the medium to the other.
 When a station sends a frame and any other station tries to send a frame during this
time, a collision will result.
 But if the first bit of the frame reaches the end of the medium, every station will
already have heard the bit and will refrain from sending.
 The figure shows the worst case. The leftmost station, A, sends a frame at time t1,
which reaches the rightmost station, D, at time t1 + Tp. The gray area shows the
vulnerable area in time and space.
Persistence Methods
 What should a station do if the channel is busy?
 What should a station do if the channel is idle?
 Three methods have been devised to answer these questions:
1. 1-persistent method
2. Non-persistent method
3. p-persistent method
1-Persistent
 The 1-persistent method is simple and straightforward.
 In this method, after the station finds the line idle, it sends its frame immediately
(with probability 1).
 This method has the highest chance of collision because two or more stations
may find the line idle and send their frames immediately.
Non-persistent
 In the non-persistent method, a station that has a frame to send senses the line.
 If the line is idle, it sends immediately.
 If the line is not idle, it waits a random amount of time and then senses the line
again.
 The non-persistent approach reduces the chance of collision because it is unlikely that
two or more stations will wait the same amount of time and retry at the same moment.
p-Persistent
 The p-persistent method is used if the channel has time slots with a slot duration equal
to or greater than the maximum propagation time.
 The p-persistent approach combines the advantages of the other two strategies.
 It reduces the chance of collision and improves efficiency.
 In this method, after the station finds the line idle, it follows these steps:
1. With probability p, the station sends its frame.
2. With probability q = 1 − p, the station waits for the beginning of the next time slot
and checks the line again.
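A minimal sketch of the p-persistent procedure for one frame is shown below; channel_is_idle(), wait_for_next_slot(), send_frame(), and backoff() are placeholders standing in for the physical-layer carrier sense, slot timing, transmission, and backoff procedure.

import random

def p_persistent_send(p, channel_is_idle, wait_for_next_slot, send_frame, backoff):
    """Sketch of the p-persistent method for a single frame."""
    while True:
        while not channel_is_idle():      # sense until the line is idle
            pass
        while True:
            if random.random() < p:       # with probability p: transmit
                send_frame()
                return
            wait_for_next_slot()          # with probability 1 - p: wait one slot
            if not channel_is_idle():     # line busy again: act as if a collision
                backoff()                 # occurred, back off, then sense again
                break

# Example usage with trivial stand-ins for the placeholders:
p_persistent_send(p=0.3,
                  channel_is_idle=lambda: True,
                  wait_for_next_slot=lambda: None,
                  send_frame=lambda: print("frame sent"),
                  backoff=lambda: None)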
CSMA/CD
 Carrier Sense Multiple Access with Collision Detection.
 A station monitors the medium after it sends a frame to see whether the transmission
was successful. If so, the station is finished.
 If, however, there is a collision, the frame is sent again.
 At time t1, station A has executed its persistence procedure and starts
sending the bits of its frame.
 At time t2, station C has not yet sensed the first bit sent by A.
 Station C executes its persistence procedure and starts sending the bits in
its frame, which propagate both to the left and to the right.
 The collision occurs sometime after time t2.
 Station C detects a collision at time t3 when it receives the first bit of A’s
frame. Station C immediately aborts transmission.
 Station A detects collision at time t4 when it receives the first bit of C’s
frame; it also immediately aborts transmission.
 Looking at the figure, we see that A transmits for the duration t4 − t1; C
transmits for the duration t3 − t2.
Figure: Collision and abortion in CSMA/CD
Fig: Flow diagram for the CSMA/CD
Throughput
 The throughput of CSMA/CD is greater than that of pure or slotted ALOHA.
 The maximum throughput occurs at a different value of G and is based on the
persistence method and the value of p in the p-persistent approach.
 For the 1-persistent method, the maximum throughput is around 50 percent when
G = 1.
 For the nonpersistent method, the maximum throughput can go up to 90 percent
when G is between 3 and 8.
 One of the LAN protocols that used CSMA/CD is the traditional Ethernet
with the data rate of 10 Mbps.
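For collision detection to work, a station must still be transmitting when the first bit of a colliding frame reaches it, so the frame transmission time must be at least twice the maximum propagation time (Tfr ≥ 2 × Tp). A small worked sketch follows; the network length and propagation speed are illustrative assumptions, while the 10 Mbps rate matches traditional Ethernet.

# Minimum frame size for CSMA/CD, assuming a 1-km network and a
# propagation speed of 2 x 10^8 m/s (illustrative values).
distance = 1_000          # metres
prop_speed = 2e8          # metres per second
data_rate = 10e6          # 10 Mbps, as for traditional Ethernet

Tp = distance / prop_speed            # maximum propagation time
Tfr_min = 2 * Tp                      # minimum frame transmission time
min_frame_bits = data_rate * Tfr_min  # minimum frame size in bits

print(f"Tp = {Tp * 1e6:.0f} us, Tfr_min = {Tfr_min * 1e6:.0f} us, "
      f"minimum frame = {min_frame_bits:.0f} bits")
# -> Tp = 5 us, Tfr_min = 10 us, minimum frame = 100 bits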
CSMA/CA
 Carrier Sense Multiple Access with Collision Avoidance was invented for
wireless networks.
 It is used in a network where collisions cannot be detected.
 Collisions are avoided through the use of CSMA/CA's three strategies:
1. Interframe space (IFS)
2. Contention window
3. Acknowledgments
1. Interframe Space (IFS)
‣ When an idle channel is found, the station does not send immediately.
‣ It waits for a period of time called the interframe space, or IFS.
2. Contention Window
‣ The contention window is an amount of time divided into slots.
‣ If stations determine that the channel is free, they wait a random amount of time
before they start sending.
‣ This time window doubles with each collision and corresponds to the binary
exponential backoff (BEB) that is familiar from CSMA/CD.
3. Acknowledgment
‣ The positive acknowledgment and the time-out timer can help guarantee that
the receiver has received the frame.
Figure: Contention window
Figure: Flow diagram of CSMA/CA
Frame Exchange Time Line
1. Before sending a frame, the source station senses the medium.
a. The channel uses a persistence strategy with backoff until the channel is idle.
b. After the channel is found to be idle, the station waits for a period of time
called the DCF interframe space (DIFS); then the station sends a control
frame called the request to send (RTS).
2. After receiving the RTS and waiting a period of time called the short interframe
space (SIFS), the destination station sends a control frame, called the clear to
send (CTS), to the source station. This control frame indicates that the
destination station is ready to receive data.
3. The source station sends data after waiting an amount of time equal to SIFS.
4. The destination station, after waiting an amount of time equal to SIFS, sends an
acknowledgment to show that the frame has been received.
Network Allocation Vector
 When a station sends an RTS frame, it includes the duration of time that it needs
to occupy the channel.
 The stations that are affected by this transmission create a timer called a
network allocation vector (NAV) that shows how much time must pass before
these stations are allowed to check the channel for idleness.
 Each time a station accesses the system and sends an RTS frame, other stations
start their NAV.
 In other words, each station, before sensing the physical medium to see if it is
idle, first checks its NAV to see if it has expired.
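A minimal sketch of this NAV bookkeeping is given below; the class and method names are illustrative, and the duration value is arbitrary.

import time

class Station:
    def __init__(self, name):
        self.name = name
        self.nav_expires_at = 0.0          # NAV expiry, as a monotonic clock time

    def hear_rts(self, duration):
        """On hearing an RTS, start the NAV for the announced duration."""
        self.nav_expires_at = time.monotonic() + duration

    def may_sense_channel(self):
        """Check the NAV before sensing the physical medium for idleness."""
        return time.monotonic() >= self.nav_expires_at

a, c = Station("A"), Station("C")
c.hear_rts(duration=0.5)        # an RTS heard by C reserves 0.5 s of the channel
print(a.may_sense_channel())    # True: A's NAV has expired (it heard no RTS)
print(c.may_sense_channel())    # False: C must wait until its NAV expires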
Collision During Handshaking
 What happens if there is a collision during the time when RTS or CTS control
frames are in transition, often called the handshaking period?
‣ Two or more stations may try to send RTS frames at the same time. These
control frames may collide.
‣ However, because there is no mechanism for collision detection, the sender
assumes there has been a collision if it has not received a CTS frame from the
receiver.
‣ The backoff strategy is employed, and the sender tries again.
Hidden-Station Problem
 The solution to the hidden station problem
is the use of the handshake frames (RTS and
CTS).
 Figure shows that the RTS message from B
reaches A, but not C.
 However, because both B and C are within
the range of A, the CTS message, which
contains the duration of data transmission
from B to A, reaches C.
 Station C knows that some hidden station is
using the channel and refrains from
transmitting until that duration is over.
Controlled Access
 In controlled access, the stations consult one another to find which station has
the right to send.
 A station cannot send unless it has been authorized by other stations.
 Three common methods:
1. Reservation
2. Polling
3. Token passing
Reservation
 A station needs to make a reservation before sending data.
 Time is divided into intervals. In each interval, a reservation frame precedes the
data frames sent in that interval.
 If there are N stations in the system, there are exactly N reservation minislots in
the reservation frame.
 Figure shows a situation with five stations and a five-minislot reservation frame.
 In the first interval, only stations 1, 3, and 4 have made reservations.
 In the second interval, only station 1 has made a reservation.
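A minimal sketch of building the reservation minislots for this situation (five stations, one minislot per station; station numbering follows the description above):

def reservation_minislots(n_stations, reserving_stations):
    """Return the minislot bits for one interval: 1 = reserved, 0 = not."""
    return [1 if s in reserving_stations else 0
            for s in range(1, n_stations + 1)]

# First interval: stations 1, 3, and 4 have made reservations.
print(reservation_minislots(5, {1, 3, 4}))   # [1, 0, 1, 1, 0]
# Second interval: only station 1 has made a reservation.
print(reservation_minislots(5, {1}))         # [1, 0, 0, 0, 0]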
Polling
 Polling works with topologies in which one device is designated as the primary
station and the other devices are secondary stations.
 The primary device is the initiator of a session.
 All data exchanges must be made through the primary device.
 The primary device controls the link; the secondary devices follow its
instructions.
 The drawback is that if the primary station fails, the whole system goes down.
Select
 The select function is used whenever the primary device has something to send.
 The primary must alert the secondary to the upcoming transmission and wait for
an acknowledgment of the secondary's ready status.
 Before sending data, the primary creates and transmits a select (SEL) frame, one
field of which includes the address of the intended secondary.
Poll
 The poll function is used by the primary device to solicit transmissions from the
secondary devices.
 When the primary is ready to receive data, it must ask (poll) each device in turn if
it has anything to send.
 When the first secondary is approached, it responds either with a NAK frame if it
has nothing to send or with data (in the form of a data frame) if it does.
 If the response is negative (a NAK frame), the primary polls the next secondary in
the same manner until it finds one with data to send.
 When the response is positive (a data frame), the primary reads the frame and
returns an acknowledgment (ACK frame), verifying its receipt.
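A minimal sketch of this poll loop is shown below; the frame types and the secondary-station interface are illustrative, not part of any particular protocol.

def poll_round(secondaries):
    """Poll each secondary in turn; ACK the first one that returns data."""
    for station in secondaries:
        kind, frame = station.poll()
        if kind == "NAK":
            continue                      # nothing to send, poll the next one
        print(f"received {frame!r}, sending ACK")
        station.ack()
        return frame
    return None                           # no secondary had data this round

class Secondary:
    def __init__(self, name, frame=None):
        self.name, self.frame = name, frame
    def poll(self):
        return ("DATA", self.frame) if self.frame else ("NAK", None)
    def ack(self):
        self.frame = None                 # frame delivered and acknowledged

poll_round([Secondary("S1"), Secondary("S2", frame="hello"), Secondary("S3")])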
Token Passing
 The stations in a network are organized in a logical ring. For each station, there is a
predecessor and a successor.
 The right to access the channel is passed from the predecessor to the current station.
The right will be passed to the successor when the current station has no more data
to send.
 The right is passed from station to station by means of a special packet called a token.
 A token circulates through the ring. The possession of the token gives the station the
right to access the channel and send its data.
 When a station has some data to send, it waits until it receives the token from its
predecessor. It then holds the token and sends its data.
 When the station has no more data to send, it releases the token, passing it to the
next logical station in the ring.
 When a station receives the token and has no data to send, it just passes the token to
the next station.
 Stations do not have to be physically connected in a ring; the ring can be a logical one.
 Token management is needed for this access method:
‣ Stations must be limited in the time they can have possession of the token.
‣ The token must be monitored to ensure it has not been lost or destroyed. For example,
if a station that is holding the token fails, the token will disappear from the network.
‣ Another function of token management is to assign priorities to the stations and to the
types of data being transmitted.
‣ Token management is needed to make low-priority stations release the token to high-
priority stations.
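A minimal sketch of token circulation in a logical ring is shown below; the per-token frame limit is a simple stand-in for the token-holding time limit mentioned above, and all names are illustrative.

from collections import deque

def token_ring(queues, rounds, max_frames_per_token=2):
    """queues: one deque of pending frames per station, in ring order."""
    n = len(queues)
    holder = 0                               # station currently holding the token
    for _ in range(rounds * n):
        q = queues[holder]
        for _ in range(min(len(q), max_frames_per_token)):
            print(f"station {holder} sends {q.popleft()!r}")
        holder = (holder + 1) % n            # pass the token to the successor

queues = [deque(["A1", "A2", "A3"]), deque(), deque(["C1"])]
token_ring(queues, rounds=2)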