DATA LINK LAYER
Design issues of the DLL:
The data link layer uses the services of the physical layer to send and receive bits over communication channels. It has a number of functions, including:
1. Providing a well-defined service interface to the network layer.
2. Dealing with transmission errors.
3. Regulating the flow of data so that slow receivers are not swamped by fast senders.
• To accomplish these goals, the data link layer takes the packets it gets from the network layer and encapsulates them into frames for transmission.
• Each frame contains a frame header, a payload field for holding the packet, and a frame trailer, as illustrated. Frame management forms the heart of what the data link layer does.

Services provided to the network layer
• The function of the data link layer is to provide services to the network layer.
• The principal service is transferring data from the network layer on the source machine to the network layer on the destination machine.
• On the source machine there is an entity, call it a process, in the network layer that hands some bits to the data link layer for transmission to the destination.
• The job of the data link layer is to transmit the bits to the destination machine so they can be handed over to the network layer there, as shown in Fig. 3-2(a).
• The actual transmission follows the path of Fig. 3-2(b), but it is easier to think in terms of two data link layer processes communicating using a data link protocol.
• The data link layer can be designed to offer various services. The actual services offered vary from protocol to protocol. Three reasonable possibilities that we will consider in turn are:
1. Unacknowledged connectionless service.
2. Acknowledged connectionless service.
3. Acknowledged connection-oriented service.
• Unacknowledged connectionless service consists of having the source machine send independent frames to the destination machine without having the destination machine acknowledge them.
• Ethernet is a good example of a data link layer that provides this class of service. No logical connection is established beforehand or released afterward.

FRAMING
• If the channel is noisy, as it is for most wireless and some wired links, the physical layer will add some redundancy to its signals to reduce the bit error rate to a tolerable level.
• However, the bit stream received by the data link layer is not guaranteed to be error free.
• Some bits may have different values, and the number of bits received may be less than, equal to, or more than the number of bits transmitted.
• It is up to the data link layer to detect and, if necessary, correct errors.
• The usual approach is for the data link layer to break up the bit stream into discrete frames, compute a short token called a checksum for each frame, and include the checksum in the frame when it is transmitted. (Checksum algorithms will be discussed later in this chapter.)
• When a frame arrives at the destination, the checksum is recomputed. If the newly computed checksum is different from the one contained in the frame, the data link layer knows that an error has occurred and takes steps to deal with it (e.g., discarding the bad frame and possibly also sending back an error report). A small sketch of this check-and-discard idea is given after the list below.
• Breaking up the bit stream into frames is more difficult than it at first appears.
• A good design must make it easy for a receiver to find the start of new frames while using little of the channel bandwidth.
• We will look at four methods:
1. Byte count.
2. Flag bytes with byte stuffing.
3. Flag bits with bit stuffing.
4. Physical layer coding violations.
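As an illustration of the check-and-discard idea, consider the minimal Python sketch below. It is a sketch only: the frame layout (payload followed by a 2-byte checksum) and the simple additive checksum are assumptions made for this example; real links typically use a CRC, as discussed later in the chapter.

from typing import Optional

def checksum16(data: bytes) -> int:
    """Toy 16-bit additive checksum (illustrative only, not a real CRC)."""
    total = 0
    for b in data:
        total = (total + b) & 0xFFFF
    return total

def build_frame(payload: bytes) -> bytes:
    """Sender side: append the checksum so the receiver can detect corruption."""
    return payload + checksum16(payload).to_bytes(2, "big")

def check_frame(frame: bytes) -> Optional[bytes]:
    """Receiver side: recompute the checksum; return the payload, or None if the frame is bad."""
    payload, received = frame[:-2], int.from_bytes(frame[-2:], "big")
    return payload if checksum16(payload) == received else None

frame = build_frame(b"hello")
assert check_frame(frame) == b"hello"              # undamaged frame is accepted
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]   # flip one bit in transit
assert check_frame(corrupted) is None              # damaged frame is discarded

If the recomputed checksum does not match the one carried in the frame, the frame is treated as bad and discarded, exactly as described above.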
Byte count
• The first framing method uses a field in the header to specify the number of bytes in the frame.
• When the data link layer at the destination sees the byte count, it knows how many bytes follow and hence where the end of the frame is.
• This technique is shown in Fig. 3-3(a) for four small example frames of sizes 5, 5, 8, and 8 bytes, respectively.
• The trouble with this algorithm is that the count can be garbled by a transmission error. For example, if the byte count of 5 in the second frame of Fig. 3-3(b) becomes a 7 due to a single bit flip, the destination will get out of synchronization.
• It will then be unable to locate the correct start of the next frame.
• Even if the checksum is incorrect, so the destination knows that the frame is bad, it still has no way of telling where the next frame starts.
• Sending a frame back to the source asking for a retransmission does not help either, since the destination does not know how many bytes to skip over to get to the start of the retransmission.
• For this reason, the byte count method is rarely used by itself.

Flag bytes with byte stuffing
• The second framing method gets around the problem of resynchronization after an error by having each frame start and end with special bytes.
• Often the same byte, called a flag byte, is used as both the starting and ending delimiter. This byte is shown in Fig. 3-4(a) as FLAG.
• Two consecutive flag bytes indicate the end of one frame and the start of the next.
• Thus, if the receiver ever loses synchronization, it can just search for two flag bytes to find the end of the current frame and the start of the next frame.
• When a flag byte occurs in the data, the sender's data link layer inserts a special escape byte (ESC) just before it; the data link layer on the receiving end removes the escape bytes before giving the data to the network layer. This technique is called byte stuffing.
• The byte-stuffing scheme depicted in Fig. 3-4 is a slight simplification of the one used in PPP (Point-to-Point Protocol), which is used to carry packets over communications links.

Flag bits with bit stuffing
• Framing can also be done at the bit level, so frames can contain an arbitrary number of bits made up of units of any size.
• It was developed for the once very popular HDLC (High-level Data Link Control) protocol.
• Each frame begins and ends with a special bit pattern, 01111110 or 0x7E in hexadecimal. This pattern is a flag byte.
• Whenever the sender's data link layer encounters five consecutive 1s in the data, it automatically stuffs a 0 bit into the outgoing bit stream. This bit stuffing is analogous to byte stuffing, in which an escape byte is stuffed into the outgoing character stream before a flag byte in the data.
• When the receiver sees five consecutive incoming 1 bits, followed by a 0 bit, it automatically destuffs (i.e., deletes) the 0 bit.
• Just as byte stuffing is completely transparent to the network layer in both computers, so is bit stuffing.
• If the user data contain the flag pattern 01111110, this flag is transmitted as 011111010 but stored in the receiver's memory as 01111110. Figure 3-5 gives an example of bit stuffing; a small sketch of stuffing and destuffing is given at the end of this framing discussion.
• Many data link protocols use a combination of these methods for safety.
• A common pattern used for Ethernet and 802.11 is to have a frame begin with a well-defined pattern called a preamble.
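To make the bit-stuffing rule concrete, here is a minimal Python sketch. It works on strings of '0'/'1' characters purely for readability; that representation is an assumption of this example, since real implementations operate on the raw bit stream in hardware or tight loops.

FLAG = "01111110"  # HDLC-style flag pattern (0x7E)

def bit_stuff(bits: str) -> str:
    """After every run of five consecutive 1s in the payload, insert a 0
    so the payload can never contain the flag pattern."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # the stuffed bit
            run = 0
    return "".join(out)

def bit_destuff(bits: str) -> str:
    """Delete the 0 that follows every run of five consecutive 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:              # this is the stuffed 0; drop it
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
            run = 0
    return "".join(out)

# The flag pattern in user data is sent as 011111010 and recovered unchanged.
assert bit_stuff("01111110") == "011111010"
assert bit_destuff(bit_stuff("01111110")) == "01111110"

The assertions reproduce the example from the text: the flag pattern 01111110 appearing in user data is transmitted as 011111010 and restored by the receiver.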
ERROR CONTROL
• The usual way to ensure reliable delivery is to provide the sender with some feedback about what is happening at the other end of the line.
• Typically, the protocol calls for the receiver to send back special control frames bearing positive or negative acknowledgements about the incoming frames.
• If the sender receives a positive acknowledgement about a frame, it knows the frame has arrived safely.
• On the other hand, a negative acknowledgement means that something has gone wrong and the frame must be transmitted again.
• An additional complication comes from the possibility that hardware troubles may cause a frame to vanish completely (e.g., in a noise burst).
• In this case, the receiver will not react at all, since it has no reason to react.
• Similarly, if the acknowledgement frame is lost, the sender will not know how to proceed.
• It should be clear that a protocol in which the sender transmits a frame and then waits for an acknowledgement, positive or negative, will hang forever if a frame is ever lost due to, for example, malfunctioning hardware or a faulty communication channel.
• This possibility is dealt with by introducing timers into the data link layer.
• When the sender transmits a frame, it generally also starts a timer.
• The timer is set to expire after an interval long enough for the frame to reach the destination, be processed there, and have the acknowledgement propagate back to the sender.
• Normally, the frame will be correctly received and the acknowledgement will get back before the timer runs out, in which case the timer is canceled.
• However, if either the frame or the acknowledgement is lost, the timer will go off, alerting the sender to a potential problem.
• The obvious solution is to just transmit the frame again. However, when frames may be transmitted multiple times, there is a danger that the receiver will accept the same frame two or more times and pass it to the network layer more than once.
• To prevent this from happening, it is generally necessary to assign sequence numbers to outgoing frames, so that the receiver can distinguish retransmissions from originals.
• The whole issue of managing the timers and sequence numbers so as to ensure that each frame is ultimately passed to the network layer at the destination exactly once, no more and no less, is an important part of the duties of the data link layer (and higher layers).

FLOW CONTROL
• Even if the transmission is error free, the receiver may be unable to handle the frames as fast as they arrive and will lose some.
• Clearly, something has to be done to prevent this situation. Two approaches are commonly used.
• In the first one, feedback-based flow control, the receiver sends back information to the sender giving it permission to send more data, or at least telling the sender how the receiver is doing.
• In the second one, rate-based flow control, the protocol has a built-in mechanism that limits the rate at which senders may transmit data, without using feedback from the receiver.
• For example, hardware implementations of the link layer such as NICs (Network Interface Cards) are sometimes said to run at "wire speed," meaning that they can handle frames as fast as they can arrive on the link. Any overruns are then not a link problem, so they are handled by higher layers.
• Various feedback-based flow control schemes are known, but most of them use the same basic principle: the protocol contains well-defined rules about when a sender may transmit the next frame. A small sketch of the feedback idea is given below.
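The following Python sketch illustrates only the feedback-based idea: the receiver grants the sender permission, modelled here as "credits", based on its remaining buffer space. The class names, buffer size, and credit mechanism are invented for this illustration and are not part of any particular protocol.

from collections import deque

class Receiver:
    """Grants permission (credits) only for the buffer space it has left."""
    def __init__(self, buffer_size: int):
        self.buffer = deque()
        self.buffer_size = buffer_size

    def grant_credits(self) -> int:
        return self.buffer_size - len(self.buffer)

    def accept(self, frame) -> None:
        self.buffer.append(frame)

class Sender:
    """May transmit only while it holds credits from the receiver."""
    def __init__(self):
        self.credits = 0

    def receive_feedback(self, credits: int) -> None:
        self.credits = credits

    def can_send(self) -> bool:
        return self.credits > 0

    def send(self, frame, receiver: Receiver) -> None:
        assert self.can_send(), "slow receiver: wait for more credits"
        receiver.accept(frame)
        self.credits -= 1

rx, tx = Receiver(buffer_size=2), Sender()
tx.receive_feedback(rx.grant_credits())   # receiver: "you may send 2 frames"
tx.send("frame 0", rx)
tx.send("frame 1", rx)
print(tx.can_send())                      # False: sender must pause, not swamp rx

A rate-based scheme, by contrast, would cap the sender's transmission rate directly, with no feedback frames at all.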
UTOPIAN SIMPLEX PROTOCOL
• The protocol consists of two distinct procedures, a sender and a receiver.
• The sender runs in the data link layer of the source machine, and the receiver runs in the data link layer of the destination machine.
• No sequence numbers or acknowledgements are used here, so MAX_SEQ is not needed.
• The only event type possible is frame_arrival (i.e., the arrival of an undamaged frame).
• The sender is in an infinite while loop, just pumping data out onto the line as fast as it can. The body of the loop consists of three actions: a) fetch a packet from the (always obliging) network layer, b) construct an outbound frame using the variable s, and c) send the frame on its way.
• The receiver is equally simple. Initially, it waits for something to happen, the only possibility being the arrival of an undamaged frame. Eventually, the frame arrives and the procedure wait_for_event returns, with event set to frame_arrival (which is ignored anyway).
• Any packet coming into the router is considered inbound; any packet going out of the router is considered outbound.

SIMPLEX STOP-AND-WAIT PROTOCOL FOR AN ERROR-FREE CHANNEL
• Protocols in which the sender sends one frame and then waits for an acknowledgement before proceeding are called stop-and-wait.
• Although data traffic in this example is simplex, going only from the sender to the receiver, frames do travel in both directions.
• The only difference between receiver1 and receiver2 is that after delivering a packet to the network layer, receiver2 sends an acknowledgement frame back to the sender before entering the wait loop again.
• Because only the arrival of the frame back at the sender is important, not its contents, the receiver need not put any particular information in it.

SIMPLEX STOP-AND-WAIT PROTOCOL FOR A NOISY CHANNEL
• Frames may be either damaged or lost completely.
• We assume that if a frame is damaged in transit, the receiver hardware will detect this when it computes the checksum.
• If the frame is damaged in such a way that the checksum is nevertheless correct (an unlikely occurrence), this protocol (and all other protocols) can fail (i.e., deliver an incorrect packet to the network layer).
• Consider the following scenario:
1. The network layer on A gives packet 1 to its data link layer. The packet is correctly received at B and passed to the network layer on B. B sends an acknowledgement frame back to A.
2. The acknowledgement frame gets lost completely. It just never arrives at all.
3. The data link layer on A eventually times out. Not having received an acknowledgement, it (incorrectly) assumes that its data frame was lost or damaged and sends the frame containing packet 1 again.
4. The duplicate frame also arrives intact at the data link layer on B and is unwittingly passed to the network layer there. If A is sending a file to B, part of the file will be duplicated (i.e., the copy of the file made by B will be incorrect and the error will not have been detected). In other words, the protocol will fail.
• Protocols in which the sender waits for a positive acknowledgement before advancing to the next data item are often called ARQ (Automatic Repeat reQuest) or PAR (Positive Acknowledgement with Retransmission).
• The sender remembers the sequence number of the next frame to send in next_frame_to_send; the receiver remembers the sequence number of the next frame expected in frame_expected. A small sketch of this protocol is given below.
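The sketch below illustrates the PAR/ARQ behaviour just described, using a 1-bit sequence number, an in-process "lossy channel", and retransmission until the acknowledgement arrives. The channel model, names, and loss probability are assumptions made for this example, not a real implementation; the simulated timeout is simply the absence of an acknowledgement in one round.

import random

random.seed(1)

def unreliable(frame, loss_prob=0.3):
    """Toy channel: returns None when the frame is 'lost'."""
    return None if random.random() < loss_prob else frame

class Receiver:
    def __init__(self):
        self.frame_expected = 0
        self.delivered = []            # what the network layer on B receives

    def on_frame(self, seq, packet):
        if seq == self.frame_expected:     # new frame: deliver it exactly once
            self.delivered.append(packet)
            self.frame_expected = 1 - self.frame_expected
        # a duplicate is acknowledged again but not delivered again
        return ("ack", seq)

def send_reliably(packets):
    rx = Receiver()
    next_frame_to_send = 0
    for packet in packets:
        while True:                        # retransmit until the ack arrives
            frame = unreliable((next_frame_to_send, packet))
            ack = unreliable(rx.on_frame(*frame)) if frame else None
            if ack and ack[1] == next_frame_to_send:
                break                      # ack received before the "timeout"
            # otherwise: the timer expires; loop and resend the same frame
        next_frame_to_send = 1 - next_frame_to_send
    return rx.delivered

assert send_reliably(["p0", "p1", "p2"]) == ["p0", "p1", "p2"]

Because the receiver only delivers a frame whose sequence number matches frame_expected, a retransmitted duplicate is acknowledged but never handed to the network layer twice.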
SLIDING WINDOW PROTOCOLS
• In the previous protocols, data frames were transmitted in one direction only. In most practical situations, there is a need to transmit data in both directions.
• One way of achieving full-duplex data transmission is to run two instances of one of the previous protocols, each using a separate link for simplex data traffic (in different directions).
• Each link then comprises a "forward" channel (for data) and a "reverse" channel (for acknowledgements).
• A better idea is to use the same link for data in both directions and to intermix data frames with acknowledgement frames.
• When a data frame arrives, instead of immediately sending a separate control frame, the receiver restrains itself and waits until the network layer passes it the next packet.
• The acknowledgement is attached to the outgoing data frame (using the ack field in the frame header). In effect, the acknowledgement gets a free ride on the next outgoing data frame.
• The technique of temporarily delaying outgoing acknowledgements so that they can be hooked onto the next outgoing data frame is known as piggybacking.

ONE-BIT SLIDING WINDOW PROTOCOL
The notation is (seq, ack, packet number). An asterisk indicates where a network layer accepts a packet. The normal exchange proceeds as follows; a small sketch that reproduces this trace is given after it.

A side:
  next_frame_to_send = 0, frame_expected = 0
  A builds s.seq = 0, s.ack = 1 and sends s = (0, 1, A0)

B side:
  next_frame_to_send = 0, frame_expected = 0
  B receives r = (0, 1, A0); the event is frame_arrival
  Since r.seq == frame_expected, packet A0 is passed to the network layer (*) and frame_expected = 1
  B builds s.seq = 0, s.ack = 1 - 1 = 0 and sends s = (0, 0, B0)

A side:
  A receives r = (0, 0, B0); the event is frame_arrival
  Since r.seq == frame_expected, packet B0 is passed to the network layer (*) and frame_expected = 1
  Since r.ack == next_frame_to_send, next_frame_to_send = 1
  A builds s.seq = 1, s.ack = 0 and sends s = (1, 0, A1)

B side:
  B receives r = (1, 0, A1); the event is frame_arrival
  Since r.seq == frame_expected, packet A1 is passed to the network layer (*) and frame_expected = (1 + 1) % 2 = 0
  Since r.ack == next_frame_to_send, next_frame_to_send = 1
  B builds s.seq = 1, s.ack = 1 and sends s = (1, 1, B1)
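The following Python sketch reproduces the normal-case exchange traced above. The Peer class and its method names are invented for this illustration; error handling, timeouts, and the MAX_SEQ generalisation are omitted.

class Peer:
    def __init__(self, name, packets):
        self.name = name
        self.packets = packets            # packets the network layer wants sent
        self.next_frame_to_send = 0
        self.frame_expected = 0
        self.delivered = []

    def build_frame(self):
        # Piggyback the acknowledgement of the last in-order frame received.
        seq = self.next_frame_to_send
        ack = 1 - self.frame_expected
        packet = self.packets[0]
        print(f"{self.name} sends ({seq}, {ack}, {packet})")
        return (seq, ack, packet)

    def on_frame(self, frame):
        seq, ack, packet = frame
        if seq == self.frame_expected:        # in-order frame: deliver it (*)
            self.delivered.append(packet)
            self.frame_expected = (self.frame_expected + 1) % 2
        if ack == self.next_frame_to_send:    # our outstanding frame was acked
            self.packets.pop(0)
            self.next_frame_to_send = (self.next_frame_to_send + 1) % 2

a = Peer("A", ["A0", "A1"])
b = Peer("B", ["B0", "B1"])

frame = a.build_frame()              # A sends (0, 1, A0)
for _ in range(3):                   # B replies, then A, then B again
    receiver = b if frame[2].startswith("A") else a
    receiver.on_frame(frame)
    frame = receiver.build_frame()

print("A delivered:", a.delivered)   # ['B0']
print("B delivered:", b.delivered)   # ['A0', 'A1']

Running it prints the same four frames as the trace: (0, 1, A0), (0, 0, B0), (1, 0, A1), and (1, 1, B1).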