Multiplexing
Whenever the bandwidth of a medium linking two devices is greater than the
bandwidth needs of the devices, the link can be shared. Multiplexing is the set of
techniques that allows the simultaneous transmission of multiple signals across a
single data link.
In a multiplexed system, n lines share the bandwidth of one link.
The lines on the left direct their transmission streams to a multiplexer (MUX),
which combines them into a single stream. At the receiving end, that stream is fed
into a demultiplexer (DEMUX), which separates the stream back into its
component transmissions and directs them to their corresponding lines.
There are three basic multiplexing techniques: frequency-division multiplexing (FDM), wavelength-division multiplexing (WDM), and time-division multiplexing (TDM). The first two are designed for analog signals; the third, for digital signals.
Wavelength-Division Multiplexing: Very narrow bands of light from different sources are combined to make a wider band of light. At the receiver, the signals are separated by the demultiplexer.
Although WDM technology is very complex, the basic idea is very simple. We
want to combine multiple light sources into one single light at the multiplexer and
do the reverse at the demultiplexer. The combining and splitting of light sources
are easily handled by a prism.
Time-Division Multiplexing: TDM is a digital process that allows several connections to share the high bandwidth of a link. Instead of sharing a portion of the bandwidth as in FDM, time is shared. Each connection occupies a portion of time in the link.
We can divide TDM into two different schemes: synchronous and statistical.
Synchronous TDM: In Synchronous TDM, the data flow of each input connection
is divided into units, where each input occupies one input time slot. A unit can be 1
bit, one character, or one block of data. Each input unit becomes one output unit
and occupies one output time slot.
The duration of an output time slot is n times shorter than the duration of an input time slot. If an input time slot is T sec, the output time slot is T/n sec, where n is the number of connections. In other words, because a unit on the output link has a shorter duration, it travels faster.
A round of data units from each input connection is collected into a frame. If we
have n connections, a frame is divided into n time slots and one slot is allocated for
each unit, one for each input line. If the duration of the input unit is T, the duration
of each slot is T/n and the duration of each frame is T.
The data rate of the output link must be n times the data rate of a connection to guarantee the flow of data. In the above figure, the data rate of the link is 3 times the data rate of a connection; likewise, the duration of a unit on a connection is 3 times that of the output time slot.
In synchronous TDM, the data rate of the link is n times faster and the unit duration is n times shorter.
Time slots are grouped into frames. A frame consists of one complete cycle of time
slots, with one slot dedicated to each sending device. In a system with n input
lines, each frame has n slots, with each slot allocated to carrying data from a
specific input line.
Three connections, each with a data rate of 1 kbps, are multiplexed. If 1 bit at a time is multiplexed (a unit is 1 bit), what is the duration of (a) each input slot, (b) each output slot, and (c) each frame?
(a) The data rate of each input connection is 1 kbps. This means that the bit
duration is 1/1000 sec or 1 ms. The duration of the input time slot is 1ms (same as
bit duration).
(b) The duration of each output time slot is one-third of the input time slot. This
means that the duration of the output time slot is 1/3 ms.
(c) Each frame carries three output time slots. So the duration of a frame is 3*1/3
ms, or 1ms. The duration of a frame is the same as the duration of an input unit.
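The slot and frame arithmetic in this example can be sketched in a few lines of Python (the function name and parameters below are illustrative, not part of any standard API):

```python
# A minimal sketch of synchronous TDM timing: n connections, each at
# rate_bps, multiplexing unit_bits bits per slot.
def tdm_durations(n, rate_bps, unit_bits=1):
    input_slot = unit_bits / rate_bps   # duration of one input unit (s)
    output_slot = input_slot / n        # output slots are n times shorter
    frame = n * output_slot             # one frame = n output slots
    return input_slot, output_slot, frame

# The worked example: three 1-kbps connections, 1 bit per unit.
inp, out, frame = tdm_durations(n=3, rate_bps=1000)
# inp is 1 ms, out is 1/3 ms, and the frame duration equals inp (1 ms)
```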
Synchronous TDM is not as efficient as it could be. If a source does not have data
to send, the corresponding slot in the output frame is empty.
One problem with TDM is how to handle a disparity in the input data rates. If the data rates are not the same, three strategies, or a combination of them, can be used: multilevel multiplexing, multiple-slot allocation, and pulse stuffing.
Multilevel Multiplexing: It is a technique used when the data rate of an input
line is a multiple of others.
Multiple-slot Allocation: Sometimes it is more efficient to allot more than
one slot in a frame to a single input line.
Pulse Stuffing: Sometimes the bit rates of the sources are not integer multiples of each other. One solution is to make the highest input data rate the dominant data rate and then add dummy bits to the input lines with lower rates.
The term burst error means that 2 or more bits in the data unit have changed from
1 to 0 or from 0 to 1.
Redundancy: To detect or correct errors, we need to send extra (redundant) bits with the data. These redundant bits are added by the sender and removed by the receiver.
The frame check sequence (FCS) is a function of the message M: f(M) = FCS.
Detection Versus Correction: The correction of errors is more difficult than detection. In error detection, we are looking only to see if any error has occurred. The answer is a simple yes or no; we are not even interested in the number of errors.
In error correction, we need to know the exact number of bits that are corrupted
and more importantly, their location in the message. The number of errors and the
size of the message are important factors.
Forward Error Correction Versus Retransmission: There are two main methods of error correction. Forward error correction is the process in which the receiver tries to guess the message by using the redundant bits. This is possible if the number of errors is small. Correction by retransmission is a technique in which the receiver detects the occurrence of an error and asks the sender to resend the message. Resending is repeated until a message arrives that the receiver believes is error-free.
Error Detection Technique
Following are popular techniques for error detection:
Simple Parity Check
Two-dimensional Parity Check
Checksum
Cyclic redundancy Check
Simple Parity Check: The simplest and the most popular error detection
technique.
Appends a parity bit to the end of the data.
Performance of Simple Parity Check:
Simple parity check can detect all single-bit errors.
It can also detect burst errors, if the number of bits in the error is odd.
The technique is not foolproof against burst errors that invert more than one bit. If an even number of bits is inverted due to error, the error is not detected.
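The behavior above can be sketched directly (even parity; the helper names are made up for illustration):

```python
def parity_bit(bits):
    # Even parity: append a bit that makes the total number of 1s even.
    return sum(bits) % 2

def check_parity(bits_with_parity):
    # Valid under even parity when the total count of 1s is even.
    return sum(bits_with_parity) % 2 == 0

data = [1, 0, 1, 1]
sent = data + [parity_bit(data)]
assert check_parity(sent)

corrupted = sent[:]
corrupted[0] ^= 1            # a single flipped bit is detected
assert not check_parity(corrupted)
corrupted[1] ^= 1            # but a second flip makes the error invisible
assert check_parity(corrupted)
```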
Two dimensional parity check:
It organizes the block of bits in the form of a table.
Parity check bits are calculated for each row, which is equivalent to a simple
parity check bit.
Parity check bits are also calculated for all columns.
Both are sent along with data.
At the receiving end these are compared with the parity bits calculated on
the received data.
Performance:
Extra overhead is traded for better error detection capability
It can detect many burst errors, but not all.
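The row-and-column scheme can be sketched as follows (even parity; representing the block as a list of bit-rows is an assumption for illustration):

```python
def two_d_parity(block):
    # block: a list of equal-length rows of bits (the "table" above).
    row_parity = [sum(r) % 2 for r in block]        # one bit per row
    col_parity = [sum(c) % 2 for c in zip(*block)]  # one bit per column
    return row_parity, col_parity

def verify(block, row_parity, col_parity):
    # Receiver: recompute the parities and compare with the received ones.
    return two_d_parity(block) == (row_parity, col_parity)

block = [[1, 1, 0, 0],
         [1, 0, 1, 0],
         [0, 1, 1, 1]]
rows, cols = two_d_parity(block)

# Two flipped bits in the same row fool the row parity,
# but the column parities still catch the error.
bad = [r[:] for r in block]
bad[0][0] ^= 1
bad[0][1] ^= 1
assert not verify(bad, rows, cols)
```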
Checksum: At the sender’s end –
The data is divided into k segments each of m bits.
The segments are added using ones' complement arithmetic to get the sum.
The sum is complemented to get the checksum.
The checksum segment is sent along with the data segments.
At receiver’s end –
All received segments are added using ones complement arithmetic to get
the sum.
The sum is complemented.
If the result is zero, the received data is accepted; otherwise discarded.
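The sender and receiver steps can be sketched with 16-bit segments (the segment width and helper names are assumptions for illustration):

```python
def ones_complement_sum(segments, m=16):
    # Add m-bit segments, wrapping any carry back into the low bits.
    mask = (1 << m) - 1
    total = 0
    for seg in segments:
        total += seg
        while total >> m:
            total = (total & mask) + (total >> m)
    return total

def checksum(segments, m=16):
    # Sender: complement of the ones' complement sum.
    return ones_complement_sum(segments, m) ^ ((1 << m) - 1)

data = [0x4500, 0x0073, 0x0000, 0x4000]
cs = checksum(data)
# Receiver: summing the data plus the checksum gives all 1s; its
# complement is zero, so the data is accepted.
assert ones_complement_sum(data + [cs]) == 0xFFFF
```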
Performance:
The checksum detects all errors involving an odd number of bits.
It also detects most errors involving an even number of bits.
Cyclic Redundancy Check:
One of the most powerful and commonly used techniques.
Given an m-bit block of bits, the sender generates an n-bit sequence,
known as a Frame Check Sequence (FCS), so that the resulting frame,
consisting of (m+n) bits, is exactly divisible by some predetermined number.
The receiver divides the incoming frame by that number and, if there is no
remainder, assumes there was no error.
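The modulo-2 division behind this can be sketched as follows (the data bits and generator below are arbitrary examples, not taken from any standard):

```python
def mod2_div(bits, divisor):
    # Modulo-2 (XOR) long division; returns the remainder bits.
    work = list(bits)
    for i in range(len(work) - len(divisor) + 1):
        if work[i]:
            for j, d in enumerate(divisor):
                work[i + j] ^= d
    return work[-(len(divisor) - 1):]

def make_fcs(data, divisor):
    # Sender: divide the data followed by n zero bits; the remainder is the FCS.
    n = len(divisor) - 1
    return mod2_div(data + [0] * n, divisor)

data = [1, 0, 0, 1, 0, 0]
gen = [1, 0, 1, 1]            # x^3 + x + 1, an assumed generator
fcs = make_fcs(data, gen)     # [1, 0, 1]
frame = data + fcs
# Receiver: dividing the whole (m+n)-bit frame leaves no remainder.
assert mod2_div(frame, gen) == [0, 0, 0]
```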
Performance:
CRC can detect all single-bit errors.
CRC can detect all double-bit errors.
CRC can detect any odd number of errors.
CRC can detect all burst errors of length less than or equal to the degree of the generator polynomial.
Hamming Distance
The Hamming distance between two words (of the same size) is the number of positions in which the corresponding bits differ. We denote the Hamming distance between two words x and y as d(x, y).
The Hamming distance can easily be found by applying the XOR operation to the two words and counting the number of 1s in the result. The Hamming distance between two distinct words is a value greater than zero.
The Minimum Hamming Distance is the smallest Hamming distance between all
possible pairs in a set of words.
Any coding scheme needs to have at least three parameters: the codeword size n, the dataword size k, and the minimum Hamming distance dmin.
A coding scheme C is written as C(n, k) with a separate expression for dmin.
To guarantee the detection of up to s errors in all cases, the minimum Hamming distance in a block code must be dmin = s + 1.
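The XOR recipe and the minimum-distance rule can be checked directly (the C(5, 2) codewords below are an assumed example set):

```python
from itertools import combinations

def hamming(x, y):
    # Hamming distance: XOR the words and count the 1s in the result.
    return bin(x ^ y).count("1")

def d_min(codewords):
    # Minimum Hamming distance over all pairs of codewords.
    return min(hamming(a, b) for a, b in combinations(codewords, 2))

assert hamming(0b10101, 0b11110) == 3

code = [0b00000, 0b01011, 0b10101, 0b11110]
assert d_min(code) == 3   # dmin = s + 1, so up to s = 2 errors are detected
```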
Framing
The whole message could be packed in one frame, but that is not normally done. One reason is that a frame can be very large, making flow and error control very inefficient. When a message is carried in one large frame, even a single-bit error requires the retransmission of the whole message. When a message is divided into smaller frames, a single-bit error affects only that small frame.
Fixed-Size Framing: In fixed-size framing there is no need for defining the boundaries of the frames; the size itself can be used as a delimiter. An example of this type of framing is the ATM wide-area network, which uses frames of fixed size called cells.
Variable-size Framing: In variable size framing, we need a way to define the end
of the frame and the beginning of the next. Two approaches are used for this
purpose: a character-oriented approach and a bit-oriented approach.
Character-Oriented Protocol: In a character-oriented protocol, the data to be carried are 8-bit characters from a coding system such as ASCII. To separate one frame from the next, an 8-bit flag is added at the beginning and the end of a frame.
Any pattern used for the flag could also be part of the information. If this happens,
the receiver, when it encounters this pattern in the middle of the data, thinks it has
reached the end of the frame.
To fix this problem, a byte-stuffing strategy was added to character-oriented
framing. In byte stuffing, a special byte is added to the data section of the frame
when there is a character with the same pattern as the flag. The data section is
stuffed with an extra byte. This byte is usually called the escape character(ESC).
Whenever the receiver encounters the ESC character, it removes it from the data
section and treats the next character as data, not a delimiting flag.
What happens if the text contains one or more escape characters followed by a
flag? The receiver removes the escape character, but keeps the flag, which is
incorrectly interpreted as the end of the frame. To solve this problem, the escape
character that is part of the text must also be marked by another escape character.
In other words, if the escape character is part of the text, an extra one is added to
show that the second one is part of the text.
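Byte stuffing and unstuffing can be sketched as follows (the flag and escape values 0x7E and 0x7D are assumptions for illustration, not mandated by the text):

```python
FLAG, ESC = 0x7E, 0x7D  # assumed flag and escape byte values

def byte_stuff(data):
    out = []
    for b in data:
        if b in (FLAG, ESC):
            out.append(ESC)  # escape any flag or escape inside the payload
        out.append(b)
    return out

def byte_unstuff(data):
    out, i = [], 0
    while i < len(data):
        if data[i] == ESC:
            i += 1           # drop the escape; keep the next byte as data
        out.append(data[i])
        i += 1
    return out

payload = [0x41, FLAG, ESC, 0x42]
assert byte_unstuff(byte_stuff(payload)) == payload
```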
Bit-Oriented Protocol: In a bit-oriented protocol, the data section of a frame is a sequence of bits to be interpreted by the upper layer as text, graphics, audio, and so on. Most protocols use a special 8-bit flag pattern, 01111110, as the delimiter to define the beginning and the end of the frame.
Bit stuffing is the process of adding one extra 0 whenever five consecutive 1s
follow a 0 in the data, so that the receiver does not mistake the pattern 01111110
for a flag.
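The stuffing rule above can be sketched as follows (helper names are illustrative):

```python
def bit_stuff(bits):
    # Insert a 0 after every run of five consecutive 1s.
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)
            run = 0
    return out

def bit_unstuff(bits):
    # Remove the 0 that follows every run of five consecutive 1s.
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == 1 else 0
        if run == 5:
            i += 1           # skip the stuffed 0
            run = 0
        i += 1
    return out

data = [0, 1, 1, 1, 1, 1, 1, 0]
assert bit_stuff(data) == [0, 1, 1, 1, 1, 1, 0, 1, 0]
assert bit_unstuff(bit_stuff(data)) == data
```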