Error Detection and Correction Concepts in Data Communication and Networks
Single-bit and burst error detection and correction in data communication networks; block coding (Hamming code, simple parity-check code, cyclic redundancy check (CRC), checksum, Internet checksum, etc.).
This document discusses various techniques for error detection and correction in digital communications. It begins by describing common types of errors like single-bit and burst errors. It then explains error detection methods like parity checks and cyclic redundancy checks (CRCs). CRCs use cyclic codes and polynomial division to detect errors. Block codes like Hamming codes can detect and correct errors by ensuring a minimum Hamming distance between codewords. Checksums are also discussed as a simpler error detection technique than CRCs. The document provides examples to illustrate how these different error control methods work.
8. Three Types of Addresses
Unicast Address
• Unicasting means one-to-one communication.
Multicast Address
• Multicasting means one-to-many communication.
Broadcast Address
• A frame with a destination broadcast address is sent to all entities on the link.
12. Error Detection and Correction
Types of Errors
• The central concept in detecting or correcting errors is redundancy.
• Redundancy is achieved through various coding schemes.
13. Block Coding
• Divide the message into blocks, each of k bits, called datawords.
• Add r redundant bits to each block to make the length n = k + r; the resulting n-bit blocks are called codewords.
• With k bits we can create a combination of 2^k datawords, and with n bits a combination of 2^n codewords.
• This means that 2^n − 2^k codewords are not used; we call these codewords invalid or illegal.
14. Error Detection
• If the following two conditions are met, the receiver can detect a change in the original codeword:
• The receiver has (or can find) a list of valid codewords.
• The original codeword has changed to an invalid one.
15. Error Detection - Example
• Let us assume that k = 2 and n = 3. The table lists the datawords and codewords.
• The sender encodes the dataword 01 as 011 and sends it.
• The receiver receives 011; it is a valid codeword and is accepted.
• If 111 is received, it is not a valid codeword and is discarded.
• If the data is corrupted so that 000 is received, the result is still a valid codeword, so the error is undetectable (see the sketch below).
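The lookup behaviour described above can be sketched in a few lines of Python. Only the entry 01 → 011 appears on the slide; the rest of the code table below is an assumption, chosen to be consistent with that entry (the even-parity code for k = 2, n = 3).

# Minimal sketch of table-based error detection for the k = 2, n = 3 example.
# Only 01 -> 011 is given on the slide; the other entries are assumed.
CODE_TABLE = {"00": "000", "01": "011", "10": "101", "11": "110"}
VALID_CODEWORDS = set(CODE_TABLE.values())

def detect(received: str) -> str:
    """Accept the word if it is a valid codeword, otherwise discard it."""
    return "accepted" if received in VALID_CODEWORDS else "discarded"

print(detect("011"))  # accepted: the codeword sent for dataword 01
print(detect("111"))  # discarded: invalid codeword, error detected
print(detect("000"))  # accepted: corruption produced another valid codeword, so the error goes undetected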
16. Hamming Distance
• The Hamming distance between two words is the number of differences between the corresponding bits.
• The Hamming distance between two words x and y is written d(x, y).
• For example, if the codeword 00000 is sent and 01101 is received, 3 bits are in error.
• The Hamming distance between the two is d(00000, 01101) = 3.
• If the Hamming distance is not zero, the codeword is corrupted. The Hamming distance is found by applying the XOR operation (⊕) and counting the 1s in the result.
17. Hamming Distance - Example
• Let us find the Hamming distance between two pairs of words.
• The Hamming distance d(000, 011) is 2 because 000 ⊕ 011 is 011 (two 1s).
• The Hamming distance d(10101, 11110) is 3 because 10101 ⊕ 11110 is 01011 (three 1s).
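A minimal sketch of the XOR-and-count procedure, assuming nothing beyond representing words as bit strings:

def hamming_distance(x: str, y: str) -> int:
    # Each position where the bits differ contributes a 1 to x XOR y.
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

print(hamming_distance("000", "011"))      # 2
print(hamming_distance("10101", "11110"))  # 3
print(hamming_distance("00000", "01101"))  # 3 bits in error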
18. Minimum Hamming Distance for Error Detection
• The minimum Hamming distance is the smallest Hamming distance between all possible pairs of codewords.
• Now let us find the minimum Hamming distance in a code if we want to be able to detect up to s errors.
• To guarantee detection of up to s errors, dmin must be greater than s; that is, dmin = s + 1.
19. Linear Block Codes
• Linear block codes are a subset of block codes.
• A linear block code is a code in which the exclusive OR (addition modulo-2) of two valid codewords creates another valid codeword.
• The minimum Hamming distance is the number of 1s in the nonzero valid codeword with the smallest number of 1s.
• In our first code table, the numbers of 1s in the nonzero codewords are 2, 2, and 2, so the minimum Hamming distance is dmin = 2 (verified in the sketch below).
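Both definitions of dmin can be checked mechanically; the sketch below uses the same assumed even-parity code table as before.

from itertools import combinations

codewords = ["000", "011", "101", "110"]   # assumed code table (k = 2, n = 3)

# dmin as the smallest weight among nonzero codewords (linear-code shortcut)
d_min_weight = min(cw.count("1") for cw in codewords if cw != "000")
# dmin as the smallest distance over all pairs of codewords (general definition)
d_min_pairs = min(sum(a != b for a, b in zip(x, y)) for x, y in combinations(codewords, 2))

print(d_min_weight, d_min_pairs)  # both print 2, so this code can detect single-bit errors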
20. Parity-Check Code
• n = k + 1
• r0 = a3 + a2 + a1 + a0 (modulo-2)
• s0 = b3 + b2 + b1 + b0 + q0 (modulo-2)
• If the syndrome is 0, there is no detectable error; if the syndrome is 1, the data portion of the received codeword is discarded.
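A minimal even-parity encoder and syndrome check along these lines; a sketch, assuming 4-bit datawords as in the slide's notation:

def parity_encode(dataword: str) -> str:
    r0 = sum(int(b) for b in dataword) % 2     # r0 = a3 + a2 + a1 + a0 (modulo-2)
    return dataword + str(r0)

def parity_syndrome(codeword: str) -> int:
    return sum(int(b) for b in codeword) % 2   # s0 = b3 + b2 + b1 + b0 + q0 (modulo-2)

cw = parity_encode("1011")       # '10111'
print(parity_syndrome(cw))       # 0: no detectable error
print(parity_syndrome("10101"))  # 1: the data portion is discarded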
21. CYCLIC CODES
• In a cyclic code, if a codeword is cyclically shifted (rotated), the result is another codeword.
• For example, if 1011000 is a codeword and we cyclically left-shift it, then 0110001 is also a codeword.
23. Cyclic Redundancy Check
• The dataword has k bits (4 here); the codeword has n bits (7).
• The dataword is augmented by appending n − k (3 here) 0s on the right.
• The n-bit result is fed into the generator.
• The generator uses a divisor of size n − k + 1 (4 here).
• The generator divides the augmented dataword by the divisor (modulo-2 division).
• The quotient of the division is discarded; the remainder (r2r1r0) is appended to the dataword to create the codeword (a sketch follows).
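A sketch of this procedure as modulo-2 long division on bit strings. The dataword 1001 and divisor 1011 are the values used in the worked polynomial example later in the deck.

def crc_remainder(bits: str, divisor: str) -> str:
    work = list(bits)
    for i in range(len(bits) - len(divisor) + 1):
        if work[i] == "1":                         # divide only when the leading bit is 1
            for j, d in enumerate(divisor):
                work[i + j] = str(int(work[i + j]) ^ int(d))
    return "".join(work[-(len(divisor) - 1):])     # the last n - k bits are the remainder

def crc_encode(dataword: str, divisor: str) -> str:
    padded = dataword + "0" * (len(divisor) - 1)   # append n - k zeros on the right
    return dataword + crc_remainder(padded, divisor)

print(crc_encode("1001", "1011"))                  # '1001110'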
27. Polynomials
• Degree of a polynomial: the degree of x^6 + x + 1 is 6.
• Adding and subtracting polynomials: adding x^5 + x^4 + x^2 and x^6 + x^4 + x^2 gives just x^6 + x^5.
• Multiplying or dividing terms: x^3 × x^4 is x^7; x^5 / x^2 is x^3.
28. Polynomials
• Multiplying two polynomials:
(x^5 + x^3 + x^2 + x)(x^2 + x + 1) = x^7 + x^6 + x^5 + x^5 + x^4 + x^3 + x^4 + x^3 + x^2 + x^3 + x^2 + x = x^7 + x^6 + x^3 + x
• Shifting left 3 bits: 10011 becomes 10011000; x^4 + x + 1 becomes x^7 + x^4 + x^3.
• Shifting right 3 bits: 10011 becomes 10; x^4 + x + 1 becomes x.
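These operations map directly onto bit manipulation if a polynomial is stored as an integer whose set bits are its nonzero coefficients; the sketch below reproduces the multiplication and the shifts shown above.

def gf2_multiply(a: int, b: int) -> int:
    result = 0
    while b:
        if b & 1:
            result ^= a      # addition of polynomials over GF(2) is XOR
        a <<= 1              # multiply a by x
        b >>= 1
    return result

p = 0b101110                     # x^5 + x^3 + x^2 + x
q = 0b111                        # x^2 + x + 1
print(bin(gf2_multiply(p, q)))   # 0b11001010 = x^7 + x^6 + x^3 + x

print(bin(0b10011 << 3))         # 0b10011000: shifting left 3 bits
print(bin(0b10011 >> 3))         # 0b10: shifting right 3 bits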
30. Cyclic Code Encoder Using Polynomials
• The dataword 1001 is represented as x^3 + 1. The divisor 1011 is represented as x^3 + x + 1.
• To find the augmented dataword, we left-shift the dataword 3 bits (multiplying by x^3).
• The result is x^6 + x^3. Division is straightforward.
• We divide the first term of the dividend, x^6, by the first term of the divisor, x^3.
• Then we multiply x^3 by the divisor and subtract, repeating until the degree of the remainder is less than the degree of the divisor (see the sketch below).
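The same division can be sketched with polynomials held as integer bitmasks; the remainder x^2 + x (110) is what gets appended to the dataword.

def gf2_mod(dividend: int, divisor: int) -> int:
    # Repeatedly XOR the shifted divisor into the dividend until its degree drops
    # below the degree of the divisor; what is left is the remainder.
    while dividend.bit_length() >= divisor.bit_length():
        shift = dividend.bit_length() - divisor.bit_length()
        dividend ^= divisor << shift
    return dividend

augmented = 0b1001000               # x^6 + x^3 (dataword 1001 shifted left 3 bits)
divisor = 0b1011                    # x^3 + x + 1
remainder = gf2_mod(augmented, divisor)
print(bin(remainder))               # 0b110 = x^2 + x
print(bin(augmented ^ remainder))   # 0b1001110, the transmitted codeword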
32. Cyclic Code Encoder Using Polynomials
• We define the following, where f(x) is a polynomial with binary coefficients.
• In a cyclic code:
1. If s(x) ≠ 0, one or more bits are corrupted.
2. If s(x) = 0, either
a. no bit is corrupted, or
b. some bits are corrupted, but the decoder failed to detect them.
33. Cyclic Code Encoder Using Polynomials
• Dataword: d(x); Codeword: c(x); Generator: g(x); Syndrome: s(x); Error: e(x)
• Received codeword = c(x) + e(x)
• The receiver divides the received codeword by g(x) to get the syndrome.
• If this division leaves no remainder (syndrome = 0), either e(x) is 0 or e(x) is divisible by g(x).
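A receiver-side sketch, reusing crc_remainder from the earlier sketch: a zero remainder means s(x) = 0 and the word is accepted; anything else signals corruption.

received_ok = "1001110"                      # the codeword generated earlier
received_bad = "1000110"                     # the same word with one bit flipped

print(crc_remainder(received_ok, "1011"))    # '000': s(x) = 0, accepted
print(crc_remainder(received_bad, "1011"))   # '011': nonzero syndrome, one or more bits corrupted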
34. Cyclic Code Encoder Using Polynomials
Single-Bit Errors
• If the generator has more than one term and the coefficient of x^0 is 1, all single-bit errors can be caught.
• If a generator cannot divide x^t + 1 (for t between 0 and n − 1), then all isolated double errors can be detected.
• A generator that contains a factor of x + 1 can detect all odd-numbered errors.
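These conditions can be checked mechanically for a given generator. The sketch below reuses gf2_mod from the earlier sketch and assumes the generator x^3 + x + 1 (1011) and n = 7 purely as example values.

g, n = 0b1011, 7   # example generator x^3 + x + 1 and codeword length (assumed values)
# More than one term and an x^0 coefficient of 1: all single-bit errors are caught.
print(bin(g).count("1") > 1 and (g & 1) == 1)                    # True
# g divides no x^t + 1 for t = 1 .. n - 1: all isolated double errors are detected.
print(all(gf2_mod((1 << t) | 1, g) != 0 for t in range(1, n)))   # True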
35. Cyclic Code Encoder Using Polynomials
Burst Errors
• L is the length of the burst error; r is the number of check bits.
• All burst errors with L ≤ r are detected.
• Burst errors with L = r + 1 are detected with probability 1 − (1/2)^(r−1); burst errors with L > r + 1 are detected with probability 1 − (1/2)^r.
37. Forward Error Correction
Using Hamming Distance
• To detect s errors, the minimum Hamming distance should be dmin = s + 1.
• For error correction, we definitely need more distance.
• It can be shown that to correct t errors, we need dmin = 2t + 1.
38. Forward Error Correction
Using XOR
• R = P1 ⊕ P2 ⊕ … ⊕ Pi ⊕ … ⊕ PN  →  Pi = P1 ⊕ P2 ⊕ … ⊕ R ⊕ … ⊕ PN
• We apply the exclusive OR operation on N data items (P1 to PN); any one item can be recreated by XORing the remaining items together with the result of the previous operation (R).
• This means that we can divide a packet into N chunks, create the exclusive OR of all the chunks, and send N + 1 chunks.
• If any chunk is lost or corrupted, it can be recreated at the receiver site, as sketched below.
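A minimal sketch of this chunk-level recovery; the chunk contents are hypothetical.

from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

chunks = [b"\x10\x22", b"\x5a\x01", b"\x0f\xf0"]   # N = 3 data chunks (hypothetical values)
parity = reduce(xor_bytes, chunks)                 # the extra chunk R = P1 XOR P2 XOR P3

# Suppose the second chunk is lost: XOR everything that did arrive, including R.
recovered = reduce(xor_bytes, [chunks[0], chunks[2], parity])
print(recovered == chunks[1])                      # True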
40. DLC Services - Data Link Control (DLC)
• Data link control functions include framing and flow and error control.
Framing
Character-Oriented Framing
41. DLC Services - Data Link Control (DLC)
Byte Stuffing and Unstuffing
Byte stuffing is the process of adding one extra byte whenever there is a flag or escape character in the text (sketched below).
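A sketch of byte stuffing and unstuffing; the flag and escape values below (0x7E and 0x7D, PPP-style) are illustrative and not taken from the slide.

FLAG, ESC = 0x7E, 0x7D       # illustrative flag and escape byte values

def byte_stuff(payload: bytes) -> bytes:
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)  # one extra byte added before each flag or escape byte
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    out, escaped = bytearray(), False
    for b in stuffed:
        if not escaped and b == ESC:
            escaped = True   # drop the escape byte, keep whatever follows it
            continue
        out.append(b)
        escaped = False
    return bytes(out)

data = bytes([0x01, FLAG, 0x02, ESC, 0x03])
assert byte_unstuff(byte_stuff(data)) == data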
43. DLC Services - Data Link Control (DLC)
Bit Stuffing and Unstuffing
Bit stuffing is the process of adding one extra 0 whenever five consecutive 1s follow a 0 in the data, so that the receiver does not mistake the pattern 0111110 for a flag (a sketch follows).
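A sketch of the sender-side rule, stuffing a 0 after every run of five consecutive 1s (a simplification of the slide's wording, which conditions the run on a preceding 0):

def bit_stuff(bits: str) -> str:
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # the stuffed bit
            run = 0
    return "".join(out)

print(bit_stuff("0111111"))   # '01111101': a 0 is inserted after the five consecutive 1s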
44. DLC Services - Data Link Control (DLC)
Flow Control
Flow control in this case can be feedback from the receiving node to the sending node to stop or slow down the pushing of frames.
45. DLC Services - Data Link Control (DLC)
Buffers
• Flow control can be implemented in several ways; one common solution is to use two buffers, one at the sending data-link layer and the other at the receiving data-link layer.
• A buffer is a set of memory locations that can hold packets at the sender and receiver.
• When the buffer of the receiving data-link layer is full, it informs the sending data-link layer to stop pushing frames.
46. DLC Services - Data Link Control (DLC)
Error Control
A CRC is added to the frame header by the sender and checked by the receiver.
Error control is implemented using one of two methods:
• In the first method, if the frame is corrupted, it is silently discarded; if it is not corrupted, the packet is delivered to the network layer.
• In the second method, if the frame is corrupted, it is silently discarded; if it is not corrupted, an acknowledgment is sent to the sender.
47. Data-link Layer Protocols
Traditionally four protocols have been defined for the data-link
layer to deal with flow and error control:
•Simple,
•Stop-and-Wait,
•Go-Back-N,
•Selective-Repeat
48. Data-link Layer Protocols
The behavior of a data-link-layer protocol can be better shown as
a finite state machine (FSM)
53. High-level Data Link Control (HDLC)
• It is a bit-oriented protocol for communication over point-to-point and multipoint links.
• It implements the Stop-and-Wait protocol.
• HDLC provides two common transfer modes that can be used in different configurations:
Normal response mode (NRM)
Asynchronous balanced mode (ABM)
54. High-level Data Link Control (HDLC)
Normal response mode
Asynchronous balanced mode
55. HDLC Frames
Information frames (I-frames): used to carry user data and control information relating to user data.
Supervisory frames (S-frames): used only to transport control information.
Unnumbered frames (U-frames): reserved for system management.
56. HDLC Frames
Flag field: the synchronization pattern 01111110.
Address field: contains the address of the secondary station.
Control field: one or two bytes used for flow and error control.
Information field: the user's data from the network layer.
FCS field: the frame check sequence (FCS) error detection field; it can contain either a 2- or 4-byte CRC.
57. Control Field Formats
N(S): defines the sequence number of the frame.
N(R): corresponds to the acknowledgment number.
P/F (Poll/Final): when set to 1, it means poll if the frame is sent by the primary to a secondary (the address field contains the receiver's address), and final if the frame is sent by a secondary back to the primary (the address field contains the sender's address).
Control Field for S-Frames
58. The 2-bit code subfield is used to define the type of S-frame.
The last 3 bits, called N(R), correspond to the ACK or NAK, depending on the type of S-frame.
Receive ready (RR): code subfield value 00.
Receive not ready (RNR): code subfield value 10.
Reject (REJ): code subfield value 01.
Selective reject (SREJ): code subfield value 11.
59. POINT-TO-POINT PROTOCOL (PPP)
• PPP defines the format of the frame to be exchanged between
devices.
• It also defines how two devices can negotiate the
establishment of the link and the exchange of data.
• PPP is designed to accept payloads from several network
layers (not only IP).
• The new version of PPP, called Multilink PPP, provides
connections over multiple links
60. Framing
Address. The address field in this protocol is a constant value and
set to 11111111 (broadcast address)
Control. This field is set to the constant value 00000011
Protocol. The protocol field defines what is being carried in the
data field: either user data or other information
Payload field. This field carries either the user data or other
information
66. Efficiency of Standard Ethernet
• The efficiency of the Ethernet is defined as the ratio of the time used by a station to send data to the time the medium is occupied by this station.
• The practical efficiency of standard Ethernet has been measured to be Efficiency = 1 / (1 + 6.4 × a),
• in which the parameter a is the number of frames that can fit on the medium.
• It can be calculated as a = (propagation delay) / (transmission delay).
67. Efficiency of Standard Ethernet
Example
In a Standard Ethernet with a transmission rate of 10 Mbps, we assume that the length of the medium is 2500 m and the size of the frame is 512 bits. The propagation speed of a signal in a cable is normally 2 × 10^8 m/s.
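Completing the example numerically, assuming the 1 / (1 + 6.4 × a) efficiency formula from the previous slide:

rate = 10e6       # 10 Mbps
length = 2500     # metres
frame = 512       # bits
speed = 2e8       # m/s

t_prop = length / speed    # propagation delay: 12.5 microseconds
t_trans = frame / rate     # transmission delay: 51.2 microseconds
a = t_prop / t_trans       # roughly 0.24 frames fit on the medium
print(1 / (1 + 6.4 * a))   # about 0.39, i.e. roughly 39 % efficiency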
77. IEEE 802.11 PROJECT
The standard defines two kinds of services:
• the basic service set (BSS)
• the extended service set (ESS)
Basic service sets (BSSs)