Multiplexing

Whenever the bandwidth of a medium linking two devices is greater than the
bandwidth needs of the devices, the link can be shared. Multiplexing is the set of
techniques that allows the simultaneous transmission of multiple signals across a
single data link.
In a multiplexed system, n lines share the bandwidth of one link.

The lines on the left direct their transmission streams to a multiplexer (MUX),
which combines them into a single stream. At the receiving end, that stream is fed
into a demultiplexer (DEMUX), which separates the stream back into its
component transmissions and directs them to their corresponding lines.
There are three basic multiplexing techniques: frequency-division multiplexing, wavelength-division multiplexing, and time-division multiplexing. The first two are designed for analog signals; the third is for digital signals.

Frequency-Division Multiplexing: FDM is an analog technique that can be applied when the bandwidth of a link (in hertz) is greater than the combined bandwidths of the signals to be transmitted.
In FDM, signals generated by each sending device modulate different carrier
frequencies. These modulated signals are then combined into a single composite
signal that can be transported by the link.
Carrier frequencies are separated by sufficient bandwidth to accommodate the modulated signals. These bandwidth ranges are the channels through which the various signals travel.
Channels can be separated by strips of unused bandwidth, called guard bands, to prevent the signals from overlapping.

FDM is an analog multiplexing technique that combines analog signals.

 Five channels, each with a 100-kHz bandwidth, are to be multiplexed together. What is the minimum bandwidth of the link if there is a need for a guard band of 10 kHz between the channels to prevent interference?
For five channels, we need at least four guard bands. This means that the required bandwidth is at least 5 × 100 + 4 × 10 = 540 kHz.
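The calculation generalizes: for n channels there are n - 1 guard bands, so the minimum link bandwidth is n times the channel bandwidth plus (n - 1) times the guard band. A minimal Python sketch of this arithmetic (the function name is ours, for illustration only):

    def fdm_min_bandwidth_khz(num_channels, channel_bw_khz, guard_band_khz):
        # n channels plus (n - 1) guard bands between them
        return num_channels * channel_bw_khz + (num_channels - 1) * guard_band_khz

    # The example above: five 100-kHz channels with 10-kHz guard bands
    print(fdm_min_bandwidth_khz(5, 100, 10))   # 540 (kHz)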
A very common application of FDM is AM and FM radio broadcasting. Radio uses the air as the transmission medium. A special band from 530 to 1700 kHz is assigned to AM radio, and all AM stations need to share this band. Each station uses a different carrier frequency, which means each station shifts its signal to a different part of the band; this is multiplexing. The signal that travels through the air is a combination of all these signals. A receiver picks up all of them but filters out everything except the one it is tuned to. Without multiplexing, only one AM station could broadcast over the common link, the air.
The situation is similar in FM broadcasting.
Wavelength-Division Multiplexing: WDM is designed to use the high-data-rate capability of fiber-optic cable. WDM is conceptually the same as FDM, except that the multiplexing and demultiplexing involve optical signals transmitted through fiber-optic channels. The idea is the same: we are combining different signals of different frequencies. The difference is that the frequencies are very high.

Very narrow bands of light from different sources are combined to make a wider
band of light. At the receiver, the signals are separated by the demultiplexer.
Although WDM technology is very complex, the basic idea is very simple. We
want to combine multiple light sources into one single light at the multiplexer and
do the reverse at the demultiplexer. The combining and splitting of light sources
are easily handled by a prism.
Time-Division Multiplexing: TDM is a digital process that allows several connections to share the high bandwidth of a link. Instead of sharing a portion of the bandwidth, as in FDM, the time is shared. Each connection occupies a portion of time on the link.

We can divide TDM into two different schemes: synchronous and statistical.
Synchronous TDM: In Synchronous TDM, the data flow of each input connection
is divided into units, where each input occupies one input time slot. A unit can be 1
bit, one character, or one block of data. Each input unit becomes one output unit
and occupies one output time slot.
The duration of an output time slot is n times shorter than the duration of an input time slot. If an input time slot is T s, the output time slot is T/n s, where n is the number of connections. In other words, a unit on the output connection has a shorter duration; it travels faster.

A round of data units from each input connection is collected into a frame. If we
have n connections, a frame is divided into n time slots and one slot is allocated for
each unit, one for each input line. If the duration of the input unit is T, the duration
of each slot is T/n and the duration of each frame is T.
The data rate of the output link must be n times the data rate of a connection to
guarantee the flow of data. In the above figure, the data rate of the link is 3 times
the data rate of a connection: likewise, the duration of a unit on a connection is 3
times that of the time slot.
In synchronous TDM, the data rate of the link is n times faster and the unit duration is n times shorter.
Time slots are grouped into frames. A frame consists of one complete cycle of time
slots, with one slot dedicated to each sending device. In a system with n input
lines, each frame has n slots, with each slot allocated to carrying data from a
specific input line.
 The data rate for each of three input connections is 1 kbps. If 1 bit at a time is multiplexed (a unit is 1 bit), what is the duration of (a) each input slot, (b) each output slot, and (c) each frame?
(a) The data rate of each input connection is 1 kbps. This means that the bit duration is 1/1000 s, or 1 ms. The duration of the input time slot is 1 ms (the same as the bit duration).
(b) The duration of each output time slot is one-third of the input time slot. This means that the duration of the output time slot is 1/3 ms.
(c) Each frame carries three output time slots, so the duration of a frame is 3 × 1/3 ms, or 1 ms. The duration of a frame is the same as the duration of an input unit.
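The same relations can be checked with a small Python sketch (function and variable names are ours, chosen for illustration): the output slot is 1/n of the input slot, and a frame of n slots lasts exactly one input unit.

    def tdm_timing(input_rate_bps, bits_per_unit, n_connections):
        input_slot = bits_per_unit / input_rate_bps   # duration of one input unit (s)
        output_slot = input_slot / n_connections      # each output slot is n times shorter
        frame = n_connections * output_slot           # one slot per connection
        return input_slot, output_slot, frame

    # The example above: three 1-kbps connections, one bit per unit
    print(tdm_timing(1000, 1, 3))   # (0.001, 0.000333..., 0.001) seconds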
Synchronous TDM is not as efficient as it could be. If a source does not have data to send, the corresponding slot in the output frame is empty.
One problem with TDM is how to handle a disparity in the input data rates. If the data rates are not the same, three strategies, or a combination of them, can be used: multilevel multiplexing, multiple-slot allocation, and pulse stuffing.
 Multilevel Multiplexing: It is a technique used when the data rate of an input
line is a multiple of others.
 Multiple-slot Allocation: Sometimes it is more efficient to allot more than
one slot in a frame to a single input line.

 Pulse Stuffing: Sometimes the bit rates of the sources are not integer multiples of each other. One solution is to make the highest input data rate the dominant data rate and then add dummy bits to the input lines with lower rates.

Statistical Time-Division Multiplexing: In synchronous TDM, each input has a reserved slot in the output frame. This can be inefficient if some input lines have no data to send. In statistical TDM, slots are dynamically allocated to improve bandwidth efficiency. Only when an input line has a slot's worth of data to send is it given a slot in the output frame. In statistical multiplexing, the number of slots in each frame is less than the number of input lines. The multiplexer checks each input line in round-robin fashion: it allocates a slot for an input line if the line has data to send; otherwise, it skips the line and checks the next line.
An output slot in synchronous TDM is totally occupied by data; in statistical TDM,
a slot needs to carry data as well as the address of the destination.
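A minimal sketch of this round-robin scan, assuming each input line keeps its pending units in a simple queue (the data and names here are purely illustrative): lines with nothing to send are skipped, and each allocated slot carries the line's address along with the data.

    def statistical_tdm_frame(input_queues, slots_per_frame):
        frame = []
        for address, queue in enumerate(input_queues):   # round-robin over input lines
            if len(frame) == slots_per_frame:
                break
            if queue:                                    # skip lines with no data
                frame.append((address, queue.pop(0)))    # slot = (address, data)
        return frame

    lines = [["A1"], [], ["C1", "C2"], []]               # hypothetical input buffers
    print(statistical_tdm_frame(lines, slots_per_frame=2))   # [(0, 'A1'), (2, 'C1')]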
Switching
Whenever we have multiple devices, we have the problem of how to connect
them to make one-to-one communication possible. One solution is to make a
point-to-point connection between each pair of devices (a mesh topology) or
between a central device and every other device (a star topology).
A better solution is switching. A switched network consists of a series of
interlinked nodes, called switches. Switches are devices capable of creating
temporary connections between two or more devices linked to the switch.

Traditionally, three methods of switching have been important: circuit switching, packet switching, and message switching.
Circuit-Switched Networks
A circuit-switched network is made of a set of switches connected by physical
links, in which each link is divided into n channels.
A trivial circuit-switched network with four switches and four links is as follows:

The end systems, such as computers or telephones, are directly connected to a switch. When end system A needs to communicate with end system M, system A needs to request a connection to M that must be accepted by all switches as well as by M itself. This is called the setup phase; a circuit (channel) is reserved on each link, and the combination of circuits or channels defines the dedicated path.
After the dedicated path made of connected circuits (channels) is established, data
transfer can take place.
Circuit switching takes place at the physical layer.
Before starting communication, the stations must make a reservation for the
resources to be used during the communication. These resources, such as channels,
switch buffers, switch processing time, and switch input/output ports, must remain
dedicated during the entire duration of data transfer until the teardown phase.
Data transfers between the two stations are not packetized. The data are a
continuous flow sent by the source station and received by the destination station,
although there may be periods of silence.
There is no addressing involved during data transfer.
In circuit switching, the resources need to be reserved during the setup phase;
the resources remain dedicated for the entire duration of data transfer until
the teardown phase.
Datagram Networks
In packet switching, there is no resource allocation for a packet. This means that
there is no reserved bandwidth on the links, and there is no scheduled processing
time for each packet. Resources are allocated on demand, on a first-come, first-served basis. When a switch receives a packet, no matter what its source or destination is, the packet must wait if there are other packets being processed.
In a packet-switched network, there is no resource reservation; resources are
allocated on demand.
In a datagram network, each packet is treated independently of all others. Even if a
packet is part of a multipacket transmission, the network treats it as though it
existed alone. Packets in this approach are referred to as datagrams.
Datagram switching is normally done at the network layer.
In the above diagram, the datagram approach is used to deliver four packets from
station A to station X. The switches in a datagram network are traditionally
referred to as routers.
In this example, all four packets belong to the same message, but may travel
different paths to reach their destination. This is so because the links may be
involved in carrying packets from other sources and do not have the necessary
bandwidth available to carry all the packets from A to X.
The datagram networks are sometimes referred to as connectionless networks. The
term connectionless means that the switch does not keep information about the
connection state.
Virtual-Circuit Networks
A virtual-circuit network is a cross between a circuit-switched network and a
datagram network. It has some characteristics of both.
1. As in a circuit-switched network, there are setup and teardown phases in
addition to the data transfer phase.
2. Resources can be allocated during the setup phase, as in a circuit-switched
network, or on demand, as in a datagram network.
3. As in a datagram network, data are packetized and each packet carries an address in the header. However, the address in the header has local jurisdiction: it tells the next switch what to do with the packet rather than identifying the final destination end to end.
4. As in a circuit-switched network, all packets follow the same path established during the connection setup.
5. A virtual-circuit network is normally implemented in the data-link layer, while a circuit-switched network is implemented in the physical layer and a datagram network in the network layer.
Following is an example of a virtual-circuit network.
The network has switches that allow traffic from sources to destinations. A source or destination can be a computer, packet switch, bridge, or any other device that connects other networks.

Error Detection and Correction
Data can be corrupted during transmission. Some applications require that errors be detected and corrected.
Types of errors: The term single-bit error means that only 1 bit of a given data
unit is changed from 1 to 0 or from 0 to 1.

The term burst error means that 2 or more bits in the data unit have changed from
1 to 0 or from 0 to 1.
Redundancy: To detect or correct errors, we need to send extra (redundant) bits with the data. These redundant bits are added by the sender and removed by the receiver.
The frame check sequence (FCS) appended to a message M is a function of M: FCS = f(M).
Detection Versus Correction: The correction of errors is more difficult than detection. In error detection, we are looking only to see if any error has occurred. The answer is a simple yes or no; we are not even interested in the number of errors.
In error correction, we need to know the exact number of bits that are corrupted
and more importantly, their location in the message. The number of errors and the
size of the message are important factors.
Forward Error Correction Versus Retransmission: There are two main methods of error correction. Forward error correction is the process in which the receiver tries to guess the message by using the redundant bits; this is possible if the number of errors is small. Correction by retransmission is a technique in which the receiver detects the occurrence of an error and asks the sender to resend the message. Resending is repeated until a message arrives that the receiver believes is error-free.
Error Detection Technique
Following are popular techniques for error detection:
 Simple Parity Check
 Two-dimensional Parity Check
 Checksum
 Cyclic Redundancy Check
Simple Parity Check: This is the simplest and most popular error-detection technique. It appends a single parity bit to the end of the data.
Performance of Simple Parity Check:
 Simple parity check can detect all single-bit errors.
 It can also detect burst errors, if the number of bits in the error is odd.
 The technique is not foolproof against burst errors that invert more than one bit. If an even number of bits are inverted due to an error, the error is not detected.
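A short Python sketch of even parity illustrates these properties (names are ours): a single flipped bit is caught, but a second flipped bit makes the count of 1s even again and the error slips through.

    def add_parity_bit(bits):
        return bits + [sum(bits) % 2]        # make the total number of 1s even

    def parity_ok(codeword):
        return sum(codeword) % 2 == 0

    sent = add_parity_bit([1, 0, 1, 1, 0, 0, 1])
    print(parity_ok(sent))      # True: no error
    sent[2] ^= 1                # one bit flipped
    print(parity_ok(sent))      # False: single-bit error detected
    sent[3] ^= 1                # a second bit flipped
    print(parity_ok(sent))      # True: an even number of errors goes undetected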
Two dimensional parity check:
 It organizes the block of bits in the form of a table.
 Parity check bits are calculated for each row, which is equivalent to a simple
parity check bit.
 Parity check bits are also calculated for all columns.
 Both are sent along with data.
 At the receiving end these are compared with the parity bits calculated on
the received data.
Performance:
 Extra overhead is traded for better error detection capability
 It can detect many burst errors, but not all.
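The construction can be sketched in a few lines of Python (the layout and names are ours): a parity bit is appended to every row, then a parity row is computed over every column, including the column of row parities.

    def two_d_parity(rows):
        with_row_parity = [r + [sum(r) % 2] for r in rows]               # row parity bits
        column_parity = [sum(col) % 2 for col in zip(*with_row_parity)]  # column parity row
        return with_row_parity + [column_parity]

    block = [[1, 1, 0, 0],
             [0, 1, 0, 1],
             [1, 0, 1, 1]]
    for row in two_d_parity(block):
        print(row)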
Checksum: At the sender’s end –
 The data is divided into k segments, each of m bits.
 The segments are added using one’s complement arithmetic to get the sum.
 The sum is complemented to get the checksum.
 The checksum segment is sent along with the data segments.
At the receiver’s end –
 All received segments are added using one’s complement arithmetic to get the sum.
 The sum is complemented.
 If the result is zero, the received data are accepted; otherwise they are discarded.

Performance:
 The checksum detects all errors involving an odd number of bits.
 It also detects most errors involving an even number of bits.
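The procedure can be sketched in Python, assuming 16-bit segments purely for illustration: the sender's sum wraps any carry back into the low bits (one's complement addition), and the receiver accepts the data only if the sum of all segments plus the checksum is all 1s, i.e. zero after complementing.

    M = 16                       # segment size in bits (an assumption for this sketch)
    MASK = (1 << M) - 1

    def ones_complement_sum(segments):
        total = 0
        for s in segments:
            total += s
            total = (total & MASK) + (total >> M)   # wrap the carry around
        return total

    def make_checksum(segments):
        return ones_complement_sum(segments) ^ MASK  # complement of the sum

    data = [0x4500, 0x0073, 0x0000, 0x4000]          # hypothetical segments
    checksum = make_checksum(data)
    # Receiver: sum of data and checksum should be all 1s (complement = 0)
    print(ones_complement_sum(data + [checksum]) == MASK)   # True: accepted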
Cyclic Redundancy Check:
 One of the most powerful and commonly used techniques.
 Given an m-bit block of bits, the sender generates an n-bit sequence, known as the Frame Check Sequence (FCS), so that the resulting frame, consisting of (m + n) bits, is exactly divisible by some predetermined number.
 The receiver divides the incoming frame by that number and, if there is no remainder, assumes there was no error.
Performance:
 CRC can detect all single-bit errors.
 CRC can detect all double-bit errors.
 CRC can detect any odd number of errors.
 CRC can detect all burst errors of length less than the degree of the generator polynomial.
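The division is done modulo 2 (bitwise XOR, no carries). A small Python sketch, using the generator x^3 + x + 1 (1011) purely as an example: the sender appends n zero bits, divides, and uses the remainder as the FCS; the receiver divides the whole frame and expects a zero remainder.

    def mod2_div(dividend, generator):
        bits = dividend[:]                               # work on a copy
        for i in range(len(bits) - len(generator) + 1):
            if bits[i] == 1:                             # XOR the generator in
                for j, g in enumerate(generator):
                    bits[i + j] ^= g
        return bits[-(len(generator) - 1):]              # the remainder

    def make_fcs(message, generator):
        n = len(generator) - 1
        return mod2_div(message + [0] * n, generator)    # remainder of message with n zeros appended

    gen = [1, 0, 1, 1]                                   # x^3 + x + 1, example only
    message = [1, 0, 0, 1, 0, 0]
    frame = message + make_fcs(message, gen)             # the (m + n)-bit frame
    print(mod2_div(frame, gen))                          # [0, 0, 0]: no error assumed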
Hamming Distance
The Hamming distance between two words (of the same size) is the number of positions in which the corresponding bits differ. The Hamming distance between two words x and y is written as d(x, y).
The Hamming distance can easily be found by applying the XOR operation to the two words and counting the number of 1s in the result. The Hamming distance between two distinct words is a value greater than zero.
The minimum Hamming distance is the smallest Hamming distance between all possible pairs of words in a set of words.

Any coding scheme needs to have at least three parameters: the codeword size n, the dataword size k, and the minimum Hamming distance d_min.
A coding scheme C is written as C(n, k) with a separate expression for d_min.
To guarantee the detection of up to s errors in all cases, the minimum Hamming distance in a block code must be d_min = s + 1.
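A short Python sketch (code and names are ours) of both ideas: the distance is the number of 1s in the XOR of two words, and the minimum distance of a code is taken over all pairs of distinct codewords. The example code below has d_min = 3, so it is guaranteed to detect up to s = d_min - 1 = 2 errors.

    from itertools import combinations

    def hamming_distance(x, y):
        return sum(a ^ b for a, b in zip(x, y))          # count 1s in x XOR y

    def minimum_distance(codewords):
        return min(hamming_distance(a, b) for a, b in combinations(codewords, 2))

    code = [(0, 0, 0, 0, 0), (0, 1, 0, 1, 1), (1, 0, 1, 0, 1), (1, 1, 1, 1, 0)]
    print(hamming_distance((0, 1, 0, 1, 1), (1, 0, 1, 0, 1)))   # 4
    print(minimum_distance(code))                               # 3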

Framing
The whole message could be packed into one frame, but that is not normally done. One reason is that a frame can be very large, making flow and error control very inefficient. When a message is carried in one large frame, even a single-bit error would require the retransmission of the whole message. When a message is divided into smaller frames, a single-bit error affects only that small frame.
Fixed-Size Framing: In fixed-size framing there is no need for defining the boundaries of the frames; the size itself can be used as a delimiter. An example of this type of framing is the ATM wide-area network, which uses frames of fixed size called cells.
Variable-size Framing: In variable size framing, we need a way to define the end
of the frame and the beginning of the next. Two approaches are used for this
purpose: a character-oriented approach and a bit-oriented approach.
Character-Oriented Protocol: In this protocol, data to be carried are 8-bit characters from a coding system such as ASCII. To separate one frame from the next, an 8-bit flag is added at the beginning and the end of a frame.

Any pattern used for the flag could also be part of the information. If this happens,
the receiver, when it encounters this pattern in the middle of the data, thinks it has
reached the end of the frame.
To fix this problem, a byte-stuffing strategy was added to character-oriented framing. In byte stuffing, a special byte is added to the data section of the frame when there is a character with the same pattern as the flag. The data section is stuffed with an extra byte, usually called the escape character (ESC). Whenever the receiver encounters the ESC character, it removes it from the data section and treats the next character as data, not as a delimiting flag.
What happens if the text contains one or more escape characters followed by a
flag? The receiver removes the escape character, but keeps the flag, which is
incorrectly interpreted as the end of the frame. To solve this problem, the escape
character that is part of the text must also be marked by another escape character.
In other words, if the escape character is part of the text, an extra one is added to
show that the second one is part of the text.
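A minimal Python sketch of byte stuffing and unstuffing; the FLAG and ESC values are arbitrary one-byte examples, not the values of any particular protocol.

    FLAG, ESC = 0x7E, 0x7D                  # example byte values only

    def byte_stuff(payload):
        out = [FLAG]
        for b in payload:
            if b in (FLAG, ESC):
                out.append(ESC)             # escape flag-like or ESC-like data
            out.append(b)
        out.append(FLAG)
        return out

    def byte_unstuff(frame):
        payload, i = [], 1                  # skip the opening flag
        while i < len(frame) - 1:           # stop before the closing flag
            if frame[i] == ESC:
                i += 1                      # the next byte is data, not a delimiter
            payload.append(frame[i])
            i += 1
        return payload

    data = [0x41, FLAG, 0x42, ESC, 0x43]
    print(byte_unstuff(byte_stuff(data)) == data)   # True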
Bit-Oriented Protocol: In a bit-oriented protocol, the data section of a frame is a sequence of bits to be interpreted by the upper layer as text, graphics, audio, and so on. Most protocols use a special 8-bit pattern, the flag 01111110, as the delimiter to define the beginning and the end of the frame.

Bit stuffing is the process of adding one extra 0 whenever five consecutive 1s
follow a 0 in the data, so that the receiver does not mistake the pattern 01111110
for a flag.
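A small Python sketch of the rule (names are ours): the sender inserts a 0 after every run of five 1s so the payload can never contain 01111110, and the receiver drops the 0 that follows any five consecutive 1s.

    def bit_stuff(bits):
        out, run = [], 0
        for b in bits:
            out.append(b)
            run = run + 1 if b == 1 else 0
            if run == 5:
                out.append(0)               # stuffed bit
                run = 0
        return out

    def bit_unstuff(bits):
        out, run, i = [], 0, 0
        while i < len(bits):
            out.append(bits[i])
            run = run + 1 if bits[i] == 1 else 0
            if run == 5:
                i += 1                      # skip the stuffed 0
                run = 0
            i += 1
        return out

    data = [0, 1, 1, 1, 1, 1, 1, 0, 1]      # contains six consecutive 1s
    stuffed = bit_stuff(data)               # [0, 1, 1, 1, 1, 1, 0, 1, 0, 1]
    print(bit_unstuff(stuffed) == data)     # True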
