Network Switching Technologies.

In this chapter we will consider some basic concepts from computer networks that will be useful in the subsequent discussion.

LAN versus WAN.

A computer network is essentially a communication system connecting a number of computers.


The need for computer networks arises out of a number of requirements, such as better connectivity and faster, more efficient communication, among others. Computer networks allow for sharing of resources. By using a network, for example, more users can have access to a printer or a scanner connected to the network. In addition to sharing hardware and data resources, computer networks facilitate remote collaboration among people.

Generally speaking, computer networks can be broadly categorized into two types: Local Area
Networks or LANs and Wide Area Networks or WANs. This classification is based on some of
the fundamental differences between these two types of networks.

In a LAN, the computers connected in the network are within a relatively small geographical span. Here, a relatively small span may mean, for example, that the computers are in the same room or laboratory, in the same building, or within the same campus consisting of several buildings.

In contrast, in a WAN, the connected hosts (i.e. the computers) may be widely dispersed. They
can be in different buildings, across cities, or even across continents.

Comparing a LAN and a WAN in terms of network design and maintenance costs, a LAN is generally cheaper, whereas a WAN can be expensive to implement and maintain.

In terms of ownership, a LAN is usually under the control of a single organization. That means a
LAN has a single owner. It is the owner's responsibility to manage, upgrade if necessary, and configure the LAN as they see fit. A WAN, on the other hand, is usually
not under the control of a single person or entity. This is natural given that WANs can spread
across countries, and even continents, and are usually formed by a number of intermediate
networks owned or controlled by different entities.

In terms of performance, however, a LAN is typically faster than a WAN. A LAN can work at speeds from 10 Mbps up to 10 Gbps (1 Gbps is the same as 1000 Mbps), with present-day LAN speeds typically ranging between 100 Mbps and 1 Gbps. In contrast, in wide area networks, speeds can be as low as 64 kbps.

The cost of a LAN is mostly a one-time cost; there are maintenance charges, but maintenance occurs rarely in a LAN. In contrast, a WAN typically involves a recurring cost, usually payable as an annual maintenance fee. So, in the long run, a WAN will prove to be much more expensive.

Circuit vs. Packet Switching.

Speaking of the way data flows from one node to another in a network, one principle of data
communications is called circuit switching. The basic concept behind circuit switching is that a
dedicated communication path is required and established between two end stations. The
established path will follow a fixed sequence of intermediate nodes and links. A logical channel
can be defined for each path and this logical channel remains dedicated to the connection while
the connection lasts.

To understand the concept of a logical channel, we will consider the example network in the
Figure. Suppose that node A is the source of some messages and node H is the destination.
We choose a path through which all data packets will flow: the path will go through the
intermediate nodes C, D and G. The path is then ACDGH and is a dedicated path, meaning that
all the messages that A sends to H will follow this path.

The intermediate links can be either absolutely dedicated to one established connection, or
shared among several connections. Fully dedicated connections were used in the early analog
telephone networks. In such networks, when a user dialed a number, a piece of copper was
dedicated to the connection. There were intermediate stations (telephone exchanges) in which
some switches were set and a continuous copper wire was dedicated for the connection
between the two telephone devices.

In a modern network, on the other hand, the links need not be dedicated, since there is usually
a very fast or high speed link in between. Only a fraction of the bandwidth of that link is typically
needed for a connection between two data terminals. Therefore, a link can be shared among
several connections, while still guaranteeing a particular bandwidth for each connection on that
link.

In any case, when a dedicated path is used for communication, this way of connecting terminals is called circuit switching. In circuit switching, three steps are needed for communication to happen. First, a connection is established. This step must be completed before data transmission can begin. Following connection establishment, the actual data transfer can start. Since a connection has been established and a dedicated path is in place, data transfer can proceed at the maximum speed allowed by the dedicated link. Thus, in circuit switching, the bandwidth negotiated in the connection establishment phase is guaranteed throughout the communication. Finally, after the transmission has finished, the connection has to be terminated. In this last step, the network resources (links and bandwidth) allocated during the connection establishment phase are deallocated.
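
As a minimal sketch of these three phases, the following toy model in Python reserves bandwidth on the links of the example path A-C-D-G-H, transfers data at the guaranteed rate, and then releases the resources. The link names, capacities and rates are illustrative assumptions, not values from the text.

    # A toy model of the three phases of circuit switching. The path A-C-D-G-H
    # follows the example in the text; all numbers are illustrative.

    class Link:
        def __init__(self, capacity_mbps):
            self.capacity = capacity_mbps   # total bandwidth of the link
            self.reserved = 0.0             # bandwidth already dedicated to circuits

        def reserve(self, mbps):
            if self.reserved + mbps > self.capacity:
                raise RuntimeError("not enough free bandwidth on this link")
            self.reserved += mbps

        def release(self, mbps):
            self.reserved -= mbps

    # Links along the dedicated path from A to H.
    path = {name: Link(capacity_mbps=100) for name in ("A-C", "C-D", "D-G", "G-H")}

    def establish(path, mbps):
        """Phase 1: reserve the negotiated bandwidth on every link of the path."""
        for link in path.values():
            link.reserve(mbps)

    def transfer(message_bits, mbps):
        """Phase 2: data flows at the guaranteed rate; returns the transmission time."""
        return message_bits / (mbps * 1e6)

    def terminate(path, mbps):
        """Phase 3: deallocate the resources reserved during establishment."""
        for link in path.values():
            link.release(mbps)

    establish(path, mbps=10)
    print(f"transfer takes {transfer(8_000_000, 10):.2f} s at the guaranteed rate")
    terminate(path, mbps=10)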

A drawback of circuit switching is that the channel capacity along the path is dedicated for the
entire duration of the communication. This may be acceptable, for example, for voice communication, where the voice is continuously transmitted and the line is busy most of the time. Computer
traffic, however, is bursty and unpredictable in nature, meaning that there are short periods with
a lot of traffic, followed by prolonged periods of little or no traffic. So, for bursty data communications over a circuit-switched network, the link will be underutilized most of the time. Another drawback is that, due to the initial connection setup procedure, there can be an initial delay, which is undesirable for some applications.

To circumvent these drawbacks of circuit switching, packet switching is preferred for data communications in modern-day computer networks. Packet switching is based on technology developed for long-distance data communications, but today it is used even for shorter-distance communications, such as within a local area network. Thus, packet switching has become the de facto standard in most modern computer networks.

While packet switching technology has evolved since its beginnings, the underlying concept has remained the same. One of the main characteristics of packet switching is that the network resources are not dedicated. A link, or the bandwidth on the link, can be shared. The actual performance may vary, depending on the network load.

Another important characteristic of packet switching is that a large message gets split into smaller chunks, or pieces, called packets. Data is transmitted in short packets, typically a few kilobytes each. Each packet carries a header with added information needed for the packet to reach the destination, that is, the information required for routing the packet toward its destination.

The diagram in the Figure shows how a message can be divided into packets. The message is
divided into 3 packets and each packet obtains its own header. This header will allow the packet
to reach the destination. The intermediate routing nodes will make appropriate routing decisions
based on the information contained in the packet headers.
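
As a minimal sketch of this splitting step in Python (the header field names and the 1024-byte packet size are assumptions made for illustration):

    # Split a message into fixed-size packets, each with a small header carrying
    # the information a routing node would need; field names are illustrative.

    PACKET_SIZE = 1024  # payload bytes per packet (an assumed value)

    def packetize(message: bytes, src: str, dst: str):
        chunks = [message[i:i + PACKET_SIZE] for i in range(0, len(message), PACKET_SIZE)]
        packets = []
        for seq, chunk in enumerate(chunks):
            header = {
                "src": src,            # source address
                "dst": dst,            # destination address, used for routing
                "seq": seq,            # sequence number, for in-order reassembly
                "total": len(chunks),  # how many packets make up the message
            }
            packets.append((header, chunk))
        return packets

    packets = packetize(b"x" * 3000, src="A", dst="H")
    print(len(packets), "packets")   # a 3000-byte message becomes 3 packets
    print(packets[0][0])             # header of the first packet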

Another notable characteristic of packet switching is that it is based on the so-called store and
forward data communications concept. The idea of store-and-forward is that as the source node
sends a packet, each intermediate node receives the whole packet before making a routing
decision. Each intermediate node first stores the entire packet in a buffer; only when the packet has been fully received does it examine the packet header. Since there can be several possible
outgoing links at each intermediate node, one outgoing link must be chosen based on the
information in the packet header. Each packet's header also contains a checksum used to
detect possible transmission errors. If the packet is received correctly according to the
checksum, the intermediate node will make the forwarding decision, depending on the
destination address. Once the next intermediate node is chosen, the packet will be forwarded to
it.
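
A minimal sketch of what a single intermediate node does under store-and-forward is given below; the CRC32 checksum and the tiny forwarding table are simplified, assumed stand-ins for the real mechanisms.

    import zlib

    # Simplified store-and-forward step at one intermediate node.
    FORWARDING_TABLE = {"H": "link_to_G", "B": "link_to_C"}   # destination -> outgoing link

    def store_and_forward(packet: dict):
        # 1. Store: the whole packet is assumed to be sitting in the buffer already.
        # 2. Verify: recompute the checksum and compare it with the one in the header.
        if zlib.crc32(packet["payload"]) != packet["checksum"]:
            return None                              # drop the corrupted packet
        # 3. Forward: choose the outgoing link from the destination in the header.
        return FORWARDING_TABLE.get(packet["dst"])

    pkt = {"dst": "H", "payload": b"hello", "checksum": zlib.crc32(b"hello")}
    print(store_and_forward(pkt))                    # -> link_to_G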

An advantage of packet switching is that the link utilization is notably better, since the links can
be shared. This is particularly true for computer-generated traffic, which is bursty in nature.
Another advantage is that data rate conversion can be performed at routing nodes. Rate
conversion is possible since packet buffering is involved. At an intermediate node, an incoming
link may be faster than an outgoing link (or vice versa). However, since incoming packets are stored in a buffer, they can be sent out at a different data rate once they have been fully received. This results in a data rate conversion (from slower to faster, or vice versa). Yet another advantage of
packet switching compared to circuit switching is that one can have the notion of packet priority.
High priority packets can be selected from the receiving buffer and sent first to the next node. In
this way, high priority packets incur less delay at the intermediate nodes.
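
A minimal sketch of such priority-based forwarding from a node's buffer, using a priority queue; the priority values and packet names are arbitrary examples.

    import heapq

    # Outgoing buffer ordered by priority: a lower number means higher priority.
    buffer = []
    heapq.heappush(buffer, (2, 0, "bulk transfer packet"))
    heapq.heappush(buffer, (0, 1, "voice packet"))          # high priority
    heapq.heappush(buffer, (1, 2, "interactive packet"))

    while buffer:
        priority, _, name = heapq.heappop(buffer)
        print(f"forwarding (priority {priority}): {name}")
    # The voice packet leaves first even though it arrived after the bulk packet.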

Virtual Circuit versus Datagram Packet Switching

Generally, there are two approaches to transmitting data packets in the packet switching
paradigm: virtual circuit packet switching and datagram packet switching.

To understand these approaches, we will use the example network given in the Figure. In a
network, typically there are a number of nodes which are connected by links. The task of packet
switching is to send the data packets from a source node to a destination node, say, from node A to node H.

Virtual circuit (VC) packet switching is similar to circuit switching in the sense that all transmitted
packets follow the same path. Virtual circuit packet switching is different from classical circuit
switching in that resources along the path are not dedicated. If the route is congested, the
packets will take longer. If the route is not congested, the packets can move faster. The path is
again fixed, but the quality of service or the bandwidth along the path is not guaranteed.

An example of VC packet switching is the modern-day telephone system with digital exchanges. None of the links is dedicated; they are often shared, and whenever packets are sent from one device to another, they all follow the same path. Packets may travel through a number of shared links in between.

In VC packet switching, again, the route has to be established before the packet transmission
starts. Once the route is established, packets will be forwarded along the route from one node to
the next using the store-and-forward scheme. Each intermediate node receives a packet, stores
it, and then forwards it to the next node. In this way, each packet will eventually reach the
destination.

A characteristic of VC packet switching is that, since the route is established a priori, the packet
headers need not store the destination address explicitly. Instead, each packet header contains
a virtual circuit number. When the connection is established, a virtual circuit number is assigned, and every intermediate node records it as an entry in its routing table. Thus, each intermediate node uses only the virtual circuit number of the incoming packets; it need not know the source and destination node addresses. For this purpose, each intermediate node maintains a virtual-circuit routing table.
This table is created during the route establishment phase and is consulted when a packet is to
be forwarded to the correct outgoing link. The intermediate nodes thus do not make complex or dynamic decisions in VC packet switching; routing decisions are based on a simple table lookup.
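
A minimal sketch of such a table lookup at one intermediate node; the link names and virtual circuit numbers are made-up examples.

    # Per-node virtual-circuit table, filled in during route establishment.
    # It maps (incoming link, incoming VC number) to (outgoing link, outgoing VC number).
    VC_TABLE = {
        ("from_A", 7): ("to_D", 12),
        ("from_B", 3): ("to_D", 5),
    }

    def forward(incoming_link: str, vc_number: int, payload: bytes):
        out_link, out_vc = VC_TABLE[(incoming_link, vc_number)]
        # The node rewrites the VC number and forwards; it never looks at addresses.
        return out_link, {"vc": out_vc, "payload": payload}

    print(forward("from_A", 7, b"data"))   # -> ('to_D', {'vc': 12, 'payload': b'data'})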

Datagram packet switching, on the other hand, is different from virtual circuit switching in the
sense that there is no initial route establishment. There is no predefined route, and each packet
is treated as an independent entity and is transmitted separately. Packets that contain sufficient
information to be routed independently are called datagrams. Hence, datagrams are routed
independently, one by one in the datagram packet switching paradigm. Due to the datagram
independence, there is no guarantee in datagram switching that the packets will reach the
destination in the order in which they were sent. There is also no guarantee for the time by
which the datagrams will be delivered, and some of them may not even be delivered at all, that
is, some datagrams may be lost.

Thus, datagrams can be used in network applications that can tolerate unequal time delays,
packets delivered out of order, or lost packets. With datagram switching, any recovery of lost
data has to be done by the application explicitly, and not by the underlying network
mechanisms.

In datagram switching, since each datagram is sent independently, every intermediate node has
to make routing decisions dynamically for every incoming packet. This imposes an additional
computational burden on the intermediate nodes compared to the switching approaches considered before. When it receives a packet, a routing node reads the destination IP address from the packet header and decides where to send the packet next.
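
A minimal sketch of this per-packet decision, using a longest-prefix match on the destination address; the prefixes and next-hop names are assumptions made for illustration.

    import ipaddress

    # Hypothetical forwarding table: prefix -> next-hop router.
    FORWARDING_TABLE = {
        ipaddress.ip_network("10.0.0.0/8"): "router_B",
        ipaddress.ip_network("10.1.0.0/16"): "router_C",
        ipaddress.ip_network("0.0.0.0/0"): "router_D",   # default route
    }

    def next_hop(destination: str) -> str:
        """Pick the longest matching prefix for the packet's destination address."""
        dest = ipaddress.ip_address(destination)
        matches = [net for net in FORWARDING_TABLE if dest in net]
        best = max(matches, key=lambda net: net.prefixlen)
        return FORWARDING_TABLE[best]

    print(next_hop("10.1.2.3"))    # -> router_C (the more specific /16 wins)
    print(next_hop("192.0.2.7"))   # -> router_D (falls through to the default route)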

Due to the unequal delays that individual packets experience while traveling through a datagram-switched network, they may be delivered out of order. This is because the path is not fixed a
priori; each packet may follow a different route. Packet loss can also occur if, for example, there
is a node crash or an intermittent node failure. In such cases, all packets that were queued in a
failed node will be lost. In some systems, packets whose arrival is not acknowledged are
retransmitted. In such systems, duplicate packets may be sent if the acknowledgment packets
themselves are delayed or lost.

Datagram packet switching, in spite of its higher complexity and problems, also has some
advantages, which is why it is used in modern data networks. Datagram packet
switching is faster than virtual circuit switching, especially for a smaller number of packets per
message, because there is no need for a route establishment and termination phase. Datagram
switching is more flexible because, depending on the network traffic, the routers can choose the
best route for each packet individually. If a link goes down or becomes congested, the packets
will be forwarded using an alternate path. So datagram switching is more tolerant to congestion
or failed links.

Comparison of Switching Technologies


We will compare the network switching technologies described in the previous sections – circuit
switching, virtual circuit packet switching and datagram packet switching. The comparison will
be based mostly on estimating the data delays each of these switching technologies causes, and will add more detail to our discussion of the differences between the switching techniques. We
will examine the total delay the data packets experience on their way from the source to the
destination. We distinguish among three types of delays which may contribute to the total delay
of the sent data.

Propagation delay is the time taken by the data signal to propagate from one node to the next.
When sending a data signal through a satellite link, for example, the time the signal needs to
travel from Earth to the satellite and then back to Earth may be as much as 250 milliseconds.
So here the propagation delay will be 250 milliseconds and it has nothing to do with the speed
of the link. Propagation delay is a delay caused by the physical characteristics of various
transmission media.

The transmission time depends on the speed of transmission. A link may be carrying data at 64 kbps, 10 Mbps, or 1 Gbps. The link speed determines the time needed to send a specified number of bits, or a whole packet, through the link.
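
As a rough back-of-the-envelope illustration of these two delay components (the 1500-byte packet, the link rates and the satellite distance below are assumptions, not values from the text):

    PACKET_BITS = 1500 * 8       # a 1500-byte packet
    SPEED_OF_LIGHT = 3.0e8       # metres per second (approximate)

    def transmission_time(bits, rate_bps):
        """Time to push all bits of a packet onto the link."""
        return bits / rate_bps

    def propagation_delay(distance_m):
        """Time for the signal to travel the physical distance."""
        return distance_m / SPEED_OF_LIGHT

    for rate in (64e3, 10e6, 1e9):    # 64 kbps, 10 Mbps, 1 Gbps
        print(f"{rate / 1e6:>8.3f} Mbps -> {transmission_time(PACKET_BITS, rate) * 1e3:.3f} ms")

    # Earth -> geostationary satellite -> Earth is roughly 2 * 36,000 km,
    # giving a propagation delay on the order of 240 ms, regardless of link speed.
    print(f"satellite hop: {propagation_delay(2 * 36_000e3) * 1e3:.0f} ms")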

And finally, there is also processing delay: As each routing node is performing the store-and-
forward procedure, there is processing delay in each of the nodes. After the received packet is
stored, the routing node may have to check whether the packet is correctly received by
computing its checksum and comparing it against the checksum stored in the packet header.
The routing node may also have to look up its routing table in order to decide where to send the
packet next. Some packets may arrive nearly simultaneously at the node, and these packets have
to be queued in a buffer, so some queuing delay may occur as well. All these delays count
toward the processing delay.

In the case of circuit switching, after the initial circuit establishment, data bits are sent
continuously without any processing delay. Continuously here means a continuous stream of
bits, since in circuit switching there is no concept of a packet. The last bit of the sent data will
reach the destination after a delay which is the sum of the transmission time of the whole
message and the propagation delay of the signal through the medium.

In contrast, virtual circuit packet switching will have a call request packet at the beginning to
establish the connection. The call request packet will go from the source node to the destination
node. After it reaches the destination, there will be an acknowledgment from the destination, a
call accept packet, which travels back to the source along the same route. The forward and backward call packets have a dual purpose: first, they establish and confirm the route; second, they allow the intermediate nodes to update their routing tables with the virtual circuit number.

Data packets are then sent sequentially, in a pipelined fashion. But in virtual circuit switching,
packets are not being sent continuously bit by bit, but rather they are sent on a store-and-forward basis. After the path has been established, the first packet is sent to the next node. The
last bit of the first packet will be received after a delay equal to the propagation delay plus the
transmission time for the packet. Thus each forwarding node will be adding a delay to each
packet equal to the propagation time of the signal plus the transmission time for that packet.
The transmission is not continuous. Rather, the message is sent packet by packet and is being
forwarded from one node to the next. That is the basic difference between circuit switching and
virtual circuit packet switching.

In the case of datagram packet switching, there is no initial call establishment and call
termination delay. The packets are sent out independently and may follow different paths. But
again packets are sent using the store-and-forward approach, causing processing delays at
each forwarding node.

Thus, to calculate the total delay that is encountered by a packet, one has to sum up all the
delays at each forwarding step: the propagation delay, the packet transmission delay, and also
the processing delay. Delays of the individual hops along the path are summed up since, in
packet switching, each intermediate node has to wait until the whole packet is received before it
can forward it to the next node.
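
As a minimal sketch of this comparison, the short calculation below sums the per-hop delays for a circuit-switched and a packet-switched transfer over the same path. The 4-hop path, the 1 Mbps link rate, the message and packet sizes, and the assumption of negligible processing delay are all illustrative, not values from the text.

    # Compare end-to-end delay for circuit switching and (pipelined) packet switching.
    HOPS = 4                # number of links between source and destination
    RATE = 1e6              # link rate in bits per second, the same on every hop
    PROP = 1e-3             # propagation delay per hop, in seconds
    MESSAGE = 40_000        # total message size in bits
    PACKET = 4_000          # packet size in bits (packet switching only)

    # Circuit switching: after setup, the bit stream flows end to end, so the message
    # is transmitted once and only the propagation delay accumulates per hop.
    circuit = MESSAGE / RATE + HOPS * PROP

    # Packet switching (virtual circuit or datagram): store-and-forward on every hop,
    # but packets are pipelined, so only the first packet pays the full per-hop cost.
    n_packets = MESSAGE // PACKET
    first_packet = HOPS * (PACKET / RATE + PROP)      # first packet crosses all hops
    remaining = (n_packets - 1) * (PACKET / RATE)     # the rest follow back to back
    packet_switched = first_packet + remaining

    print(f"circuit switching : {circuit * 1e3:.1f} ms (plus call setup/teardown)")
    print(f"packet switching  : {packet_switched * 1e3:.1f} ms (plus per-hop processing)")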
