Unit I, II, III Notes CNCC
1. Wired NIC
2. Wireless NIC
▪ Wired NIC: A wired NIC is integrated into the motherboard. Cables and connectors are used
with a wired NIC to transfer data.
▪ Wireless NIC: A wireless NIC contains an antenna to obtain a connection over the
wireless network. For example, a laptop computer contains a wireless NIC.
▪ Hub:
A hub is a hardware device that divides a network connection among multiple devices.
When a computer requests information from the network, it first sends the request to the
hub through a cable. The hub broadcasts this request to the entire network. Every device
checks whether the request belongs to it; if not, the request is dropped.
The process used by the hub consumes more bandwidth and limits the amount of
communication. Nowadays, hubs are largely obsolete, replaced by more advanced
network components such as switches and routers.
▪ Switch:
A switch is a hardware device that connects multiple devices on a computer network. A
switch has more advanced features than a hub. It maintains an updated table that
decides whether and where data is transmitted. A switch delivers each message to the correct
destination based on the physical (MAC) address present in the incoming message. A switch does not
broadcast the message to the entire network like a hub; it determines the device to which the
message is to be transmitted. Therefore, we can say that a switch provides a direct connection
between the source and destination, which increases the speed of the network.
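The forwarding logic described above can be sketched as a toy Python class. The class name, MAC strings, and port numbers are invented for illustration; this is not a real switch implementation. It learns which port each physical address was seen on and forwards only to the known port, flooding like a hub when the destination is unknown:

```python
class LearningSwitch:
    """Toy sketch of a switch's MAC-address table (illustrative, not real)."""

    def __init__(self):
        self.mac_table = {}  # physical (MAC) address -> port number

    def receive(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port   # learn where the source lives
        if dst_mac in self.mac_table:
            return self.mac_table[dst_mac]  # forward to the known port only
        return "flood"                      # unknown destination: act like a hub

sw = LearningSwitch()
print(sw.receive("AA", "BB", in_port=1))  # "flood": BB not yet learned
print(sw.receive("BB", "AA", in_port=2))  # 1: AA was learned on port 1
```

Once both stations have been seen, every later frame travels only on the one port where its destination lives, which is exactly why a switch avoids the hub's broadcast overhead.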
▪ Router:
• A router is a hardware device which is used to connect a LAN with an internet
connection. It is used to receive, analyze and forward the incoming packets to another
network.
• A router works in a Layer 3 (Network layer) of the OSI Reference model.
• A router forwards the packet based on the information available in the routing table.
• It determines the best path from the available paths for the transmission of the packet.
▪ Advantages of Router:
• Security: Information transmitted on the network traverses the cable, but only the
specific device that has been addressed can read the data.
• Reliability: If one server stops functioning, its network goes down, but the other
networks served by the router are not affected.
• Performance: A router enhances the overall performance of the network. Suppose 24
workstations in a single network each generate the same amount of traffic; this increases the
traffic load on the network. If a router splits the single network into two networks of 12
workstations each, the traffic load is reduced by half.
• Network range: A router also extends the range of the network by connecting distant segments.
▪ Modem:
• A modem is a hardware device that allows the computer to connect to the internet over
the existing telephone line.
• A modem is not integrated into the motherboard; rather, it is installed in a PCI slot
found on the motherboard.
• Modem stands for Modulator/Demodulator. It converts digital data into an analog signal
for transmission over the telephone lines, and converts incoming analog signals back into digital data.
Based on the differences in speed and transmission rate, a modem can be classified in the
following categories:
• Standard PC modem or Dial-up modem
• Cellular Modem
• Cable modem
▪ Internetwork:
• An internetwork is formed when two or more computer networks (LANs, WANs, or
network segments) are connected using intermediate devices and configured with a
local addressing scheme. This process is known as internetworking.
• An interconnection between public, private, commercial, industrial, or government
computer networks can also be defined as internetworking.
• Internetworking uses the Internet Protocol (IP).
• The reference model used for internetworking is Open System Interconnection (OSI).
▪ Types of Internetwork:
1. Extranet: An extranet is a communication network based on the internet protocol suite,
i.e. the Transmission Control Protocol and the Internet Protocol. It is used for information sharing.
The access to the extranet is restricted to only those users who have login credentials. An
extranet is the lowest level of internetworking. It can be categorized as MAN, WAN or
other computer networks. An extranet cannot have a single LAN, at least it must have
one connection to the external network.
2. Intranet: An intranet is a private network based on the internet protocol such
as Transmission Control protocol and internet protocol. An intranet belongs to an
organization which is only accessible by the organization's employee or members. The
main aim of the intranet is to share the information and resources among the organization
employees. An intranet provides the facility to work in groups and for teleconferences.
▪ Intranet advantages:
• Communication: It provides cheap and easy communication. An employee of the
organization can communicate with another employee through email or chat.
• Time-saving: Information on the intranet is shared in real time, so it is time-saving.
• Collaboration: Collaboration is one of the most important advantages of the intranet. The
information is distributed among the employees of the organization and can only be
accessed by the authorized user.
• Platform independency: It is a neutral architecture, as the computer can be connected to
another device with a different architecture.
• Cost effective: People can view data and documents using a browser and distribute
duplicate copies over the intranet. This leads to a reduction in cost.
▪ What is Topology?
Topology defines the structure of the network: how all the components are
interconnected with each other. There are two types of topology: physical and logical.
Physical topology is the geometric representation of all the nodes in a network.
▪ Bus Topology:
• The bus topology is designed in such a way that all the stations are connected through a
single cable known as a backbone cable.
• Each node is either connected to the backbone cable by drop cable or directly connected
to the backbone cable.
• When a node wants to send a message over the network, it puts the message onto the
cable. All the stations on the network receive the message, whether it is addressed to
them or not.
• The bus topology is mainly used in 802.3 (Ethernet) and 802.4 standard networks.
• The configuration of a bus topology is quite simple as compared to other topologies.
• The backbone cable is considered as a "single lane" through which the message is
broadcast to all the stations.
• The most common access method of the bus topologies is CSMA (Carrier Sense Multiple
Access).
▪ CSMA:
It is a media access control used to control the data flow so that data integrity is
maintained, i.e., the packets do not get lost. There are two alternative ways of handling the
problems that occur when two nodes send the messages simultaneously.
• CSMA CD: CSMA CD (Collision detection) is an access method used to detect the
collision. Once the collision is detected, the sender will stop transmitting the data.
Therefore, it works on "recovery after the collision".
• CSMA CA: CSMA CA (Collision Avoidance) is an access method used to avoid the
collision by checking whether the transmission media is busy or not. If busy, then the
sender waits until the media becomes idle. This technique effectively reduces the
possibility of the collision. It does not work on "recovery after the collision".
▪ Ring Topology:
• Ring topology is like a bus topology with connected ends: each node connects to exactly
two neighbouring nodes, forming a single continuous ring.
• Data flows around the ring in one direction, passing through each station until it reaches
its destination.
▪ Star Topology:
• Star topology is an arrangement of the network in which every node is connected to the
central hub, switch or a central computer.
• The central computer is known as a server, and the peripheral devices attached to the
server are known as clients.
• Coaxial cable or RJ-45 cables are used to connect the computers.
• Hubs or Switches are mainly used as connection devices in a physical star topology.
• Star topology is the most popular topology in network implementation.
▪ Tree topology:
• Tree topology combines the characteristics of bus topology and star topology.
• A tree topology is a type of structure in which all the computers are connected with each
other in a hierarchical fashion.
• The top-most node in tree topology is known as a root node, and all other nodes are the
descendants of the root node.
• Only one path exists between any two nodes for data transmission. Thus, it forms
a parent-child hierarchy.
▪ Mesh topology:
• Full Mesh Topology: In a full mesh topology, each computer is connected to all the
computers available in the network.
• Partial Mesh Topology: In a partial mesh topology, not all but certain computers are
connected to those computers with which they communicate frequently.
We’ll describe OSI layers “top down” from the application layer that directly serves the
end user, down to the physical layer.
7. Application Layer:
The application layer is used by end-user software such as web browsers and email
clients. It provides protocols that allow software to send and receive information and present
meaningful data to users. A few examples of application layer protocols are the Hypertext
Transfer Protocol (HTTP), File Transfer Protocol (FTP), Post Office Protocol (POP), Simple
Mail Transfer Protocol (SMTP), and Domain Name System (DNS).
6. Presentation Layer:
The presentation layer prepares data for the application layer. It defines how two devices
should encode, encrypt, and compress data so it is received correctly on the other end. The
presentation layer takes any data transmitted by the application layer and prepares it for
transmission over the session layer.
5. Session Layer:
The session layer creates communication channels, called sessions, between devices. It is
responsible for opening sessions, ensuring they remain open and functional while data is being
transferred, and closing them when communication ends. The session layer can also set
checkpoints during a data transfer—if the session is interrupted, devices can resume data transfer
from the last checkpoint.
4. Transport Layer:
The transport layer takes data transferred in the session layer and breaks it into
"segments" on the transmitting end. It is responsible for reassembling the segments on the
receiving end, turning them back into data that can be used by the session layer. The transport layer
carries out flow control, sending data at a rate that matches the connection speed of the receiving
device, and error control, checking whether data was received incorrectly and, if so, requesting it again.
3. Network Layer:
The network layer has two main functions. One is breaking up segments into network
packets, and reassembling the packets on the receiving end. The other is routing packets by
discovering the best path across a physical network. The network layer uses network addresses
(typically Internet Protocol addresses) to route packets to a destination node.
2. Data Link Layer:
The data link layer establishes and terminates a connection between two physically
connected nodes. It breaks packets into frames and sends them from source to destination,
handling error control and flow control on the link (covered in detail in Unit II).
1. Physical Layer:
The physical layer is responsible for the physical cable or wireless connection between
network nodes. It defines the connector, the electrical cable or wireless technology connecting
the devices, and is responsible for transmission of the raw data, which is simply a series of 0s
and 1s, while taking care of bit rate control.
▪ TCP/IP Model:
The OSI model we just looked at is a reference/logical model. It was designed to
describe the functions of a communication system by dividing the communication procedure
into smaller and simpler components. The TCP/IP model, in contrast, was designed
and developed by the Department of Defense (DoD) in the 1960s and is based on standard protocols. It
stands for Transmission Control Protocol/Internet Protocol. The TCP/IP model is a concise
version of the OSI model. It contains four layers, unlike the seven layers of the OSI model. The
layers are:
• Process/Application Layer
• Host-to-Host/Transport Layer
• Internet Layer
• Network Access/Link Layer
The comparison of the TCP/IP and OSI models is as follows:
• TCP/IP refers to Transmission Control Protocol/Internet Protocol; OSI refers to Open
Systems Interconnection.
• TCP/IP handles the session and presentation functions within the application layer itself;
OSI uses separate session and presentation layers.
• In TCP/IP, the protocols were developed first and the model afterwards; in OSI, the
model was developed before the protocols.
• The transport layer in TCP/IP does not guarantee delivery of packets; in the OSI model,
the transport layer guarantees delivery.
2. Internet Layer:
This layer parallels the functions of OSI’s Network layer. It defines the protocols which
are responsible for logical transmission of data over the entire network. The main protocols
residing at this layer are:
1. IP: stands for Internet Protocol. It is responsible for delivering packets from the
source host to the destination host by looking at the IP addresses in the packet headers. IP
has two versions: IPv4 and IPv6. IPv4 is the version most websites currently use, but IPv6
is growing because the number of available IPv4 addresses is small compared to the
number of users.
2. ICMP: stands for Internet Control Message Protocol. It is encapsulated within IP
datagrams and is responsible for providing hosts with information about network
problems.
3. ARP: stands for Address Resolution Protocol. Its job is to find the hardware address of a
host from a known IP address. ARP has several types: Reverse ARP, Proxy ARP,
Gratuitous ARP and Inverse ARP.
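As a quick sanity check on the IPv4 address shortage mentioned under IP above, the address-space sizes follow directly from the 32-bit and 128-bit address widths:

```python
# Address-space sizes implied by the 32-bit (IPv4) and 128-bit (IPv6) widths.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(ipv4_addresses)   # 4294967296, i.e. about 4.3 billion addresses
print(ipv6_addresses > ipv4_addresses ** 2)  # True: IPv6 is vastly larger
```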
3. Host-to-Host Layer:
This layer is analogous to the transport layer of the OSI model. It is responsible for end-
to-end communication and error-free delivery of data, and it shields the upper-layer applications
from the complexities of the data. The two main protocols at this layer are the Transmission
Control Protocol (TCP) and the User Datagram Protocol (UDP).
4. Process/Application Layer:
This layer performs the functions of OSI's top three layers and is where user-facing
protocols reside. Some of its protocols are:
1. HTTP and HTTPS: HTTP stands for Hypertext Transfer Protocol. It is used by the
World Wide Web to manage communication between web browsers and servers.
HTTPS stands for HTTP-Secure; it is a combination of HTTP with SSL (Secure Sockets
Layer). It is used in cases where the browser needs to fill out forms, sign in,
authenticate, and carry out bank transactions.
2. SSH: SSH stands for Secure Shell. It is terminal-emulation software similar to Telnet.
SSH is preferred because of its ability to maintain an encrypted
connection. It sets up a secure session over a TCP/IP connection.
3. NTP: NTP stands for Network Time Protocol. It is used to synchronize the clocks on our
computers to one standard time source, which is very useful in situations like bank
transactions. Consider the situation without NTP: you carry out a transaction at a time
your computer reads as 2:30 PM while the server records it as 2:28 PM. Such
out-of-sync clocks can cause serious inconsistencies on the server.
UNIT-II
▪ Switching in Computer Networks: Circuit, Packet and Message:
In broad networks, there can be various paths to send a message from sender to receiver.
Switching in computer networks is used to select the best path for data transmission. For this
purpose, different switching techniques are used.
The switched network comprises a series of interlinked nodes called switches. Switches are
hardware or software devices capable of creating temporary connections between two or
more devices.
Each node is linked to a switch, but the nodes are not linked directly to each other. The
nodes are connected through common devices, and some nodes are used to route packets.
▪ Circuit Switching:
It is a type of switching in which we set a physical connection between sender and
receiver. The connection is set up when the call is made from transmitter to receiver telephone.
Once a call is set up, a dedicated path exists between both ends. The path continues
to exist until the call is disconnected.
The above diagram shows the functionality of circuit switching in computer networks.
Every computer has a physical connection to a node, as you can see in the circuit switching
diagram. Using the nodes, devices can send a message from one end to the other.
▪ Packet Switching:
In packet switching, a message is broken into packets for transmission. Each packet has
the source, destination, and intermediate node address information.
The entire message is divided into smaller pieces, called packets. Each packet travels
independently and contains address information.
These packets travel through the shortest path in a communication network. All the
packets are reassembled at the receiving end to make a complete message.
There are two types of packet switching in computer networks, as follows.
• Datagram Packet Switching
• Virtual Circuit Packet Switching
The above diagram shows the concept of packet switching. The message is divided into
four packets (i.e. 1, 2, 3 and 4). These packets contain the addresses and information.
By travelling through the shortest path, packets reach their destination. At the receiving end,
the packets are reassembled in the same order (which is 1, 2, 3, 4) to generate the entire message.
▪ Advantages of Packet Switching:
• Bandwidth is used efficiently, since a channel is occupied only while a packet is being
transmitted on it.
• If one link goes down, the remaining packets can be sent through another route.
▪ Message Switching:
In message switching, the complete message is transferred from one end to another
through nodes. There is no physical connection or link between sender and receiver.
The message contains the destination address. Each node stores the message and then
forwards it to the next node, as shown in the below diagram.
In telegraphy, the text message is encoded using Morse code into a sequence of dots
and dashes. Each dot or dash is communicated by transmitting a short or long pulse of electrical
current. The following diagram shows the concept of message switching in computer networks.
▪ Disadvantages:
• It does not establish a dedicated path between the two communicating devices.
▪ Comparison of Circuit, Packet and Message Switching:
• Congestion: in circuit switching, congestion occurs per minute; in packet switching,
congestion occurs per packet; in message switching, there is no congestion.
• Traffic handling: circuit switching is not suitable for handling high traffic; packet
switching is suitable for handling high traffic; message switching is not suitable for
handling high traffic.
• Recording of packets: not possible in circuit switching; possible in packet switching;
possible in message switching.
• Message form: in circuit switching and packet switching, the message travels in the form
of packets; in message switching, the message is in the form of blocks.
• Real-time use: circuit switching and packet switching can be used with real-time
applications; message switching cannot be used in real-time applications.
▪ Framing:
The data link layer encapsulates each data packet from the network layer into frames that
are then transmitted.
A frame has three parts, namely −
• Frame Header
• Payload field that contains the data packet from network layer
• Trailer
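The three-part frame structure can be sketched in Python. The field layout below (a 2-byte length header and a 1-byte additive-check trailer) is an invented example for illustration, not any real protocol's format:

```python
# Illustrative sketch of the header / payload / trailer frame structure.

def build_frame(payload: bytes) -> bytes:
    header = len(payload).to_bytes(2, "big")           # 2-byte length field
    trailer = (sum(payload) % 256).to_bytes(1, "big")  # 1-byte additive check
    return header + payload + trailer

def parse_frame(frame: bytes) -> bytes:
    length = int.from_bytes(frame[:2], "big")
    payload = frame[2:2 + length]
    assert frame[-1] == sum(payload) % 256, "trailer check failed"
    return payload

frame = build_frame(b"hello")
print(parse_frame(frame))  # b'hello'
```

The receiver uses the header to find where the payload ends and the trailer to detect corruption, which previews the error-control role described next.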
▪ Error Control:
The data link layer ensures an error-free link for data transmission. The issues it caters to
with respect to error control are:
• Dealing with transmission errors
• Sending acknowledgement frames in reliable connections
• Retransmitting lost frames
• Identifying duplicate frames and deleting them
• Controlling access to shared channels in case of broadcasting
▪ Flow Control:
The data link layer regulates flow control so that a fast sender does not drown a slow
receiver. When the sender sends frames at very high speeds, a slow receiver may not be able to
handle it. There will be frame losses even if the transmission is error-free. The two common
approaches for flow control are:
• Feedback based flow control
• Rate based flow control
▪ Errors:
When bits are transmitted over a computer network, they are subject to corruption
due to interference and network problems. Corrupted bits lead to spurious data being
received by the destination; these are called errors.
▪ Types of Errors:
Errors can be of three types, namely single bit errors, multiple bit errors, and burst errors.
• Single-bit error: In the received frame, only one bit has been corrupted, i.e. either changed
from 0 to 1 or from 1 to 0.
• Multiple-bit error: In the received frame, more than one bit is corrupted.
• Burst error: In the received frame, more than one consecutive bit is corrupted.
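The three error types can be illustrated by flipping bits of a transmitted frame at chosen positions (a toy sketch, not a channel model; the frame value and positions are made up):

```python
# Demonstrating the three error types by flipping bits in a received frame.
def flip_bits(frame: str, positions) -> str:
    bits = list(frame)
    for p in positions:
        bits[p] = "1" if bits[p] == "0" else "0"
    return "".join(bits)

sent = "10110010"
print(flip_bits(sent, [2]))        # single-bit error: one position corrupted
print(flip_bits(sent, [1, 5]))     # multiple-bit error: non-consecutive bits
print(flip_bits(sent, [3, 4, 5]))  # burst error: consecutive bits corrupted
```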
▪ Error Control:
Error control can be done in two ways
• Error detection: Error detection involves checking whether any error has occurred or not.
The number of error bits and the type of error does not matter.
• Error correction: Error correction involves ascertaining the exact number of bits that have
been corrupted and the location of the corrupted bits.
For both error detection and error correction, the sender needs to send some additional bits along
with the data bits. The receiver performs necessary checks based upon the additional redundant
bits. If it finds that the data is free from errors, it removes the redundant bits before passing the
message to the upper layers.
▪ Parity Check:
A parity check is done by adding an extra bit, called the parity bit, to the data to make the
number of 1s either even (in case of even parity) or odd (in case of odd parity).
While creating a frame, the sender counts the number of 1s in it and adds the parity bit in the
following way:
• In case of even parity: If a number of 1s is even then parity bit value is 0. If the number of
1s is odd then parity bit value is 1.
• In case of odd parity: If a number of 1s is odd then parity bit value is 0. If a number of 1s is
even then parity bit value is 1.
On receiving a frame, the receiver counts the number of 1s in it. In case of even parity
check, if the count of 1s is even, the frame is accepted, otherwise, it is rejected. A similar rule is
adopted for odd parity check.
The parity check is suitable for single bit error detection only.
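The even-parity rule above can be sketched in a few lines of Python (the 7-bit data word is an arbitrary example):

```python
# Even parity: the parity bit makes the total number of 1s even.

def add_even_parity(data: str) -> str:
    parity = "0" if data.count("1") % 2 == 0 else "1"
    return data + parity

def check_even_parity(frame: str) -> bool:
    return frame.count("1") % 2 == 0

frame = add_even_parity("1011001")    # four 1s -> parity bit 0
print(frame)                          # 10110010
print(check_even_parity(frame))       # True: frame accepted
corrupted = "0" + frame[1:]           # flip one bit
print(check_even_parity(corrupted))   # False: single-bit error detected
```

Note that flipping two bits would leave the count of 1s even, which is exactly why parity catches single-bit errors only.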
▪ Checksum:
In this error detection scheme, the following procedure is applied
• Data is divided into fixed-size segments.
• The sender adds the segments using 1's complement arithmetic to get the sum. It then
complements the sum to get the checksum and sends it along with the data frames.
• The receiver adds the incoming segments along with the checksum using 1’s complement
arithmetic to get the sum and then complements it.
• If the result is zero, the received frames are accepted; otherwise, they are discarded.
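A minimal sketch of this checksum procedure, assuming an illustrative 8-bit segment width (the data values are arbitrary examples):

```python
# 1's-complement checksum over fixed-size segments, as in the steps above.

def ones_complement_sum(segments, width=8):
    mask = (1 << width) - 1
    total = 0
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> width)  # wrap end-around carries
    return total

def make_checksum(segments, width=8):
    return ~ones_complement_sum(segments, width) & ((1 << width) - 1)

data = [0b10011001, 0b11100010, 0b00100100, 0b10000100]
checksum = make_checksum(data)
# Receiver: the 1's-complement sum of data + checksum, complemented, must be 0.
result = ~ones_complement_sum(data + [checksum]) & 0xFF
print(result == 0)  # True: frames accepted
```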
▪ Simplex Protocol:
The Simplex protocol is a hypothetical protocol designed for unidirectional data
transmission over an ideal channel, i.e. a channel through which transmission can never go
wrong. It has distinct procedures for sender and receiver. The sender simply sends all its data
onto the channel as soon as it is available in its buffer. The receiver is assumed to
process all incoming data instantly. It is hypothetical since it handles neither flow control nor
error control.
▪ Stop-and-Wait Protocol:
The Stop-and-Wait protocol is also designed for a noiseless channel. It provides unidirectional data
transmission without any error control facilities. However, it provides flow control so that a
fast sender does not drown a slow receiver. The receiver has a finite buffer size with finite
processing speed. The sender can send a frame only when it has received indication from the
receiver that it is available for further data processing.
In this protocol we assume that data is transmitted in one direction only and that no errors occur;
the receiver can only process the received information at a finite rate. These assumptions imply
that the transmitter cannot send frames at a rate faster than the receiver can process them.
The main problem here is how to prevent the sender from flooding the receiver. The
general solution for this problem is to have the receiver send some sort of feedback to sender,
the process is as follows:
Step 1: The receiver sends an acknowledgement frame back to the sender, telling the sender that
the last received frame has been processed and passed to the host.
Step 2: Permission to send the next frame is granted.
Step 3: The sender, after sending a frame, has to wait for an acknowledgement frame from the
receiver before sending another frame.
This protocol is called the Simplex Stop-and-Wait protocol: the sender sends one frame and
waits for feedback from the receiver. When the ACK arrives, the sender sends the next frame.
The Simplex Stop and Wait Protocol is diagrammatically represented as follows
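The exchange above can be sketched as a toy simulation in which every frame transmission strictly alternates with its acknowledgement; there are no losses, matching the noiseless-channel assumption, and the frame names are invented:

```python
def stop_and_wait(frames):
    """Toy, lossless simulation: each send is followed by its ACK."""
    log = []
    for frame in frames:
        log.append(f"send {frame}")  # sender transmits one frame...
        log.append(f"ack {frame}")   # ...and waits for the ACK before the next
    return log

print(stop_and_wait(["F0", "F1"]))
# ['send F0', 'ack F0', 'send F1', 'ack F1']
```

The strict send/ACK alternation is what throttles a fast sender down to the receiver's pace.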
▪ Go-Back-N ARQ:
Go-Back-N ARQ provides for sending multiple frames before receiving the
acknowledgement for the first frame. It uses the concept of sliding window, and so is also called
sliding window protocol. The frames are sequentially numbered and a finite number of frames
are sent. If the acknowledgement of a frame is not received within the time period, all frames
starting from that frame are retransmitted.
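The go-back retransmission rule can be sketched as follows. The window size and loss pattern are made-up illustrative values, and a lost frame is assumed to succeed on retransmission:

```python
# Sketch of the Go-Back-N rule: if frame k is unacknowledged, every frame
# from k onward is sent again.

def go_back_n(num_frames, window, lost_once):
    sent, base, lost = [], 0, set(lost_once)
    while base < num_frames:
        window_frames = range(base, min(base + window, num_frames))
        for seq in window_frames:           # send up to `window` frames
            sent.append(seq)
        failed = [s for s in window_frames if s in lost]
        if failed:
            base = failed[0]                # go back to the lost frame...
            lost.discard(failed[0])         # ...assume the retry succeeds
        else:
            base = min(base + window, num_frames)
    return sent

print(go_back_n(4, window=2, lost_once=[1]))
# [0, 1, 1, 2, 3]: frame 1 was lost, so sending resumed from frame 1
```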
▪ Selective Repeat ARQ:
This protocol also provides for sending multiple frames before receiving the
acknowledgement for the first frame. However, here only the erroneous or lost frames are
retransmitted, while the good frames are received and buffered.
▪ Aloha Rules:
1. Any station can transmit data to the channel at any time.
2. It does not require any carrier sensing.
3. Collisions may occur, and data frames may be lost, when multiple stations transmit
data at the same time.
4. Acknowledgement of the frames exists in Aloha; there is no collision detection.
5. It requires retransmission of data after some random amount of time.
▪ Pure Aloha:
Pure Aloha is used whenever data is available for sending over a channel at the stations. In
pure Aloha, each station transmits data to the channel without checking whether the channel
is idle or not, so collisions may occur and the data frame can be lost. When a
station transmits a data frame to the channel, it waits for the receiver's
acknowledgment. If the acknowledgment does not arrive within the specified time, the
station assumes the frame has been lost or destroyed, waits for a random amount of time,
called the back-off time (Tb), and retransmits the frame until all the data
is successfully transmitted to the receiver.
1. The total vulnerable time of pure Aloha is 2 × Tfr.
2. Maximum throughput occurs when G = 1/2 and is 18.4%.
3. The throughput of successful transmissions is S = G × e^(−2G).
As we can see in the figure above, there are four stations for accessing a shared channel
and transmitting data frames. Some frames collide because most stations send their frames at the
same time. Only two frames, frame 1.1 and frame 2.2, are successfully transmitted to the
receiver end. At the same time, the other frames are lost or destroyed. Whenever two frames fall on
a shared channel simultaneously, a collision occurs and both frames suffer damage. Even if
only the first bit of a new frame overlaps with the last bit of a frame that has almost finished,
both frames are completely destroyed, and both stations must retransmit their data frames.
▪ Slotted Aloha:
Slotted Aloha is designed to improve on pure Aloha's efficiency, because pure
Aloha has a very high probability of frame collision. In slotted Aloha, the shared channel is divided
into fixed time intervals called slots, so that if a station wants to send a frame to the shared
channel, the frame can only be sent at the beginning of a slot, and only one frame is allowed to
be sent in each slot. If a station cannot send its data at the beginning of a slot, it
must wait until the beginning of the next slot. However, the
possibility of a collision remains when two or more stations try to send a frame at the
beginning of the same time slot.
1. Maximum throughput occurs in slotted Aloha when G = 1 and is about 37%.
2. The probability of successfully transmitting a data frame in slotted Aloha is S = G ×
e^(−G).
3. The total vulnerable time required in slotted Aloha is Tfr.
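The throughput formulas for both Aloha variants can be checked numerically at their respective optimal loads:

```python
# Pure Aloha: S = G * e^(-2G); slotted Aloha: S = G * e^(-G).
import math

def pure_aloha(G):
    return G * math.exp(-2 * G)

def slotted_aloha(G):
    return G * math.exp(-G)

print(round(pure_aloha(0.5), 3))     # 0.184 -> maximum ~18.4% at G = 1/2
print(round(slotted_aloha(1.0), 3))  # 0.368 -> maximum ~36.8% at G = 1
```

Halving the vulnerable time (Tfr instead of 2·Tfr) is what doubles the achievable throughput of slotted Aloha over pure Aloha.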
▪ CSMA/CA:
It is a carrier sense multiple access/collision avoidance network protocol for carrier
transmission of data frames. It is a protocol that works with a medium access control layer.
When a data frame is sent to the channel, the sender waits for an acknowledgment to confirm
that the frame was delivered. If the station receives the acknowledgment, the
data frame has been successfully transmitted to the receiver. If two stations send their frames
at the same time, the frames collide on the shared channel and no acknowledgment arrives;
the sender therefore detects the collision through the missing acknowledgment signal.
Following are the methods used in CSMA/CA to avoid collisions:
• Interframe space: In this method, the station waits for the channel to become idle, and
when it finds the channel idle, it does not send the data immediately. Instead, it waits
for a period of time called the interframe space, or IFS. The IFS duration is also
used to define the priority of a station.
• Contention window: In the contention window method, the total time is divided into
slots. When the station/sender is ready to transmit the data frame, it chooses a random
number of slots as its wait time. If the channel becomes busy again, it does not restart the
entire process; it only pauses its timer, and resumes sending the data packets when the
channel becomes idle.
• Acknowledgment: In the acknowledgment method, the sender retransmits the data
frame on the shared channel if the acknowledgment is not received in time.
C. Channelization Protocols:
Channelization protocols allow the total usable bandwidth of a shared channel to be
shared across multiple stations based on time, frequency, and codes, so that all stations can
access the channel at the same time to send their data frames.
Following are the various methods of accessing the channel based on time, frequency,
and codes:
1. FDMA (Frequency Division Multiple Access)
2. TDMA (Time Division Multiple Access)
3. CDMA (Code Division Multiple Access)
▪ FDMA:
Frequency Division Multiple Access (FDMA) is used to divide the available
bandwidth into equal bands so that multiple users can send data through different frequency
subchannels. Each station is reserved a particular band to prevent crosstalk between
the channels and interference between stations.
▪ TDMA:
Time Division Multiple Access (TDMA) is a channel access method. It allows the same
frequency bandwidth to be shared across multiple stations. To avoid collisions on the shared
channel, it divides the channel into time slots that are allocated to stations for transmitting their
data frames: the stations share the same frequency band by taking turns in their assigned
time slots. However, TDMA has a synchronization overhead, since each station's time slot
must be identified by adding synchronization bits to each slot.
▪ CDMA:
Code Division Multiple Access (CDMA) is a channel access method. In CDMA, all
stations can simultaneously send data over the same channel: each station may transmit its
data frames using the full frequency of the shared channel at all times. It does
not require dividing the bandwidth of the shared channel into time slots. When multiple
stations send data on the channel simultaneously, their data frames are separated by unique code
sequences; each station has a different unique code for transmitting data over the shared
channel. For example, consider a room full of users who are all speaking at the same time:
two people can still interact if they share a language that the others do not. Similarly, in the
network, stations can communicate with each other simultaneously as long as each pair uses
a different code.
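The code-separation idea can be sketched with two-chip orthogonal codes; the codes and bit values below are illustrative, and real CDMA systems use much longer code sequences:

```python
# Two stations share the channel at the same time, separated by orthogonal
# chip sequences (a 2-chip toy version of Walsh codes).

CODE_A = [+1, +1]
CODE_B = [+1, -1]

def spread(bit, code):
    """Multiply the data bit (+1 or -1) by each chip of the code."""
    return [bit * c for c in code]

def despread(signal, code):
    """Correlate with a code: dot product / code length recovers that bit."""
    return sum(s * c for s, c in zip(signal, code)) // len(code)

# Both stations transmit simultaneously; the channel simply adds the signals.
channel = [a + b for a, b in zip(spread(+1, CODE_A), spread(-1, CODE_B))]
print(despread(channel, CODE_A))  # 1: station A's bit recovered
print(despread(channel, CODE_B))  # -1: station B's bit recovered
```

Because the two codes are orthogonal (their dot product is zero), correlating with one code cancels the other station's contribution entirely.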
▪ Network Layer:
Layer-3 in the OSI model is called Network layer. Network layer manages options
pertaining to host and network addressing, managing sub-networks, and internetworking.
The network layer takes the responsibility for routing packets from source to destination
within or outside a subnet. Two different subnets may have different addressing schemes or
incompatible addressing types. Likewise, two different subnets may be operating on
different protocols which are not compatible with each other. The network layer has the
responsibility to route the packets from source to destination, mapping between the different
addressing schemes and protocols.
▪ Layer-3 Functionalities:
Devices which work on Network Layer mainly focus on routing. Routing may include
various tasks aimed to achieve a single goal. These can be:
• Addressing devices and networks.
• Populating routing tables or static routes.
• Queuing incoming and outgoing data and then forwarding them according to quality of
service constraints set for those packets.
• Internetworking between two different subnets.
• Delivering packets to destination with best efforts.
• Providing connection-oriented and connectionless mechanisms.
▪ Network Addressing:
Layer 3 network addressing is one of the major tasks of Network Layer. Network
Addresses are always logical i.e. these are software based addresses which can be changed by
appropriate configurations.
A network address always points to host / node / server or it can represent a whole
network. Network address is always configured on network interface card and is generally
mapped by system with the MAC address (hardware address or layer-2 address) of the machine
for Layer-2 communication.
There are different kinds of network addresses in existence:
• IP
• IPX
• AppleTalk
We are discussing IP here as it is the only one we use in practice these days.
▪ Broadcast Routing:
In broadcast routing, a router may simply create a copy of the packet and send it out on
every interface. This method is easy on the router's CPU but may cause the problem of duplicate
packets received from peer routers.
Reverse path forwarding is a technique in which the router knows in advance about its
predecessor, from which it should receive the broadcast. This technique is used to detect and
discard duplicates.
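A minimal sketch of the RPF check, assuming a made-up routing table (real routers consult their unicast routing table in the same way):

```python
# Sketch of a reverse path forwarding (RPF) check. A router forwards a
# broadcast packet only if it arrives on the interface the router would
# itself use to reach the source; otherwise the packet is treated as a
# likely duplicate and dropped.

# Hypothetical unicast routing table: source network -> best interface
routing_table = {"10.0.0.0/8": "eth0", "192.168.1.0/24": "eth1"}

def rpf_check(source_net, arrival_iface):
    return routing_table.get(source_net) == arrival_iface

print(rpf_check("10.0.0.0/8", "eth0"))  # True  -> forward
print(rpf_check("10.0.0.0/8", "eth2"))  # False -> discard as duplicate
```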
▪ Multicast Routing:
Multicast routing is a special case of broadcast routing, with significant differences and
challenges. In broadcast routing, packets are sent to all nodes even if they do not want them;
in multicast routing, data is sent only to the nodes that want to receive the packets.
The router must know that there are nodes which wish to receive the multicast packets (or
stream); only then should it forward them. Multicast routing uses a spanning tree protocol to
avoid looping.
Multicast routing also uses the reverse path forwarding technique to detect and discard
duplicates and loops.
▪ Anycast Routing:
Anycast packet forwarding is a mechanism where multiple hosts can have the same logical
address. When a packet destined for this logical address is received, it is sent to the host
which is nearest in the routing topology.
Anycast routing is done with the help of a DNS server. Whenever an anycast packet is
received, DNS is queried about where to send it, and DNS provides the IP address that is the
nearest one configured on it.
▪ Routing Algorithms:
The routing algorithms are as follows:
▪ Flooding:
Flooding is the simplest method of packet forwarding. When a packet is received, the
router sends it out on all interfaces except the one on which it was received. This places too
much burden on the network and leaves many duplicate packets wandering in the network.
Time to Live (TTL) can be used to avoid infinite looping of packets. There exists another
approach, called Selective Flooding, that reduces the overhead on the network: the router does
not flood out on all interfaces, but only on selected ones.
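The flooding behaviour with a TTL can be simulated on a tiny made-up topology; the sketch below shows that duplicates still occur, but the TTL stops them from circulating forever:

```python
# Toy flood with TTL: each router re-sends the packet on every link
# except the incoming one, decrementing the TTL; at TTL 0 the packet
# dies instead of looping.
from collections import deque

def flood(graph, start, ttl):
    """Return how many copies of the packet each node receives."""
    received = {n: 0 for n in graph}
    queue = deque([(start, None, ttl)])  # (node, came_from, remaining ttl)
    while queue:
        node, came_from, t = queue.popleft()
        received[node] += 1
        if t == 0:
            continue
        for neigh in graph[node]:
            if neigh != came_from:       # don't send back out the same link
                queue.append((neigh, node, t - 1))
    return received

ring = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
print(flood(ring, "A", ttl=2))  # duplicates appear, but flooding terminates
```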
▪ Shortest Path:
Routing decisions in networks are mostly taken on the basis of the cost between source
and destination; hop count plays a major role here. Shortest path routing uses various
algorithms to find a path with the minimum number of hops or minimum total cost.
Common shortest path algorithms are:
• Dijkstra's algorithm
• Bellman Ford algorithm
• Floyd Warshall algorithm
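As an illustration of the first of these, here is a minimal Dijkstra's shortest-path sketch over a small made-up topology with link costs:

```python
# Minimal Dijkstra's algorithm: computes the cheapest cost from one
# source to every other reachable node, using a priority queue.
import heapq

def dijkstra(graph, source):
    """graph: node -> {neighbor: link cost}. Returns cost to each node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry; a cheaper path was already found
        for neigh, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neigh, float("inf")):
                dist[neigh] = nd
                heapq.heappush(heap, (nd, neigh))
    return dist

net = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "D": 5}, "C": {"D": 1}, "D": {}}
print(dijkstra(net, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```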
▪ Non-Adaptive Routing Algorithms:
Non-adaptive routing algorithms, also known as static routing algorithms, construct a
static routing table to determine the path through which packets are to be sent. The static
routing table is constructed from the routing information stored in the routers when the
network is booted up.
The two types of non-adaptive routing algorithms are:
• Flooding: In flooding, when a data packet arrives at a router, it is sent out on all the
outgoing links except the one it arrived on. Flooding may be uncontrolled, controlled, or
selective.
• Random walks: This is a probabilistic algorithm where a data packet is sent by the router
to any one of its neighbours randomly.
▪ Tunneling:
If there are two geographically separated networks that want to communicate with each
other, they may deploy a dedicated line between them, or they have to pass their data through
intermediate networks.
Tunneling is a mechanism by which two or more networks of the same kind communicate with
each other, bypassing intermediate networking complexities. Tunneling is configured at both ends.
When data enters at one end of the tunnel, it is tagged. This tagged data is then routed
inside the intermediate or transit network to reach the other end of the tunnel. When data exits
the tunnel, its tag is removed and it is delivered to the other part of the network.
Both ends seem as if they are directly connected, and the tagging lets the data travel
through the transit network without any modifications.
▪ Packet Fragmentation:
Most Ethernet segments have their maximum transmission unit (MTU) fixed to 1500
bytes. A data packet can be longer or shorter, depending on the application. Devices along the
transit path also have hardware and software capabilities that determine what amount of data the
device can handle and what size of packet it can process.
If the data packet size is less than or equal to the packet size the transit network can
handle, it is processed normally. If the packet is larger, it is broken into smaller pieces and
then forwarded. This is called packet fragmentation. Each fragment carries the same source and
destination addresses and is routed through the transit path easily. At the receiving end, the
fragments are reassembled.
If a packet with the DF (don't fragment) bit set to 1 arrives at a router that cannot
handle the packet because of its length, the packet is dropped.
When a packet received by a router has its MF (more fragments) bit set to 1, the router
knows that it is a fragmented packet and that parts of the original packet are on the way.
If a packet is fragmented too small, the overhead increases. If the fragments are too
large, an intermediate router may not be able to process them and they might get dropped.
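The fragmentation bookkeeping described above can be sketched as follows (a toy model: real IP measures fragment offsets in 8-byte units inside the IP header, while this sketch uses plain byte offsets):

```python
# Sketch of IP-style fragmentation: split a payload into MTU-sized
# pieces, each carrying an offset and an MF (more fragments) flag.

def fragment(payload: bytes, mtu: int):
    frags = []
    for off in range(0, len(payload), mtu):
        piece = payload[off:off + mtu]
        more = off + mtu < len(payload)   # MF=1 on all but the last piece
        frags.append({"offset": off, "MF": int(more), "data": piece})
    return frags

frags = fragment(b"x" * 3500, mtu=1500)
print([(f["offset"], f["MF"], len(f["data"])) for f in frags])
# [(0, 1, 1500), (1500, 1, 1500), (3000, 0, 500)]
```

The receiver can reassemble by sorting fragments on their offset and concatenating until it sees a fragment with MF set to 0.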
▪ SDN vs. Traditional Networking:
The key difference between SDN (Software-Defined Networking) and traditional networking
is infrastructure: SDN is software-based, while traditional networking is hardware-based.
Because the control plane is software-based, SDN is much more flexible than traditional
networking. It allows administrators to control the network, change configuration settings,
provision resources, and increase network capacity, all from a centralized user interface,
without the need for more hardware.
There are also security differences between SDN and traditional networking. Thanks to
greater visibility and the ability to define secure pathways, SDN offers better security in many
ways. However, because software-defined networks use a centralized controller, securing the
controller is crucial to maintaining a secure network.
▪ Functions:
• This layer is the first one that breaks the data supplied by the Application layer into
smaller units called segments. It numbers every byte in the segment and maintains their
accounting.
• This layer ensures that data is received in the same sequence in which it was sent.
• This layer provides end-to-end delivery of data between hosts which may or may not
belong to the same subnet.
• All server processes that intend to communicate over the network are equipped with well-
known Transport Service Access Points (TSAPs), also known as port numbers.
▪ End-to-End Communication:
A process on one host identifies its peer host on remote network by means of TSAPs,
also known as Port numbers. TSAPs are very well defined and a process which is trying to
communicate with its peer knows this in advance.
For example, when a DHCP client wants to communicate with remote DHCP server, it
always requests on port number 67. When a DNS client wants to communicate with remote
DNS server, it always requests on port number 53 (UDP).
The two main Transport layer protocols are:
• Transmission Control Protocol:
It provides reliable communication between two hosts.
• User Datagram Protocol:
It provides unreliable communication between two hosts.
▪ Features:
• TCP is a reliable protocol: the receiver always sends either a positive or a negative
acknowledgement about the data packet to the sender, so the sender always knows whether
the data packet reached the destination or needs to be resent.
• TCP ensures that the data reaches the intended destination in the same order it was sent.
• TCP is connection oriented. TCP requires that a connection between two remote points be
established before actual data is sent.
• TCP provides error-checking and recovery mechanisms.
• TCP provides end-to-end communication.
• TCP provides flow control and quality of service.
• TCP operates in Client/Server point-to-point mode.
• TCP provides full duplex service, i.e. it can perform the roles of both receiver and
sender.
▪ Header:
The TCP header is a minimum of 20 bytes long and a maximum of 60 bytes.
• Source Port (16-bits): It identifies source port of the application process on the sending
device.
• Destination Port (16-bits): It identifies destination port of the application process on the
receiving device.
• Sequence Number (32-bits): Sequence number of data bytes of a segment in a session.
• Acknowledgement Number (32-bits): When ACK flag is set, this number contains the
next sequence number of the data byte expected and works as acknowledgement of the
previous data received.
• Data offset (4-bits): This field implies both, the size of TCP header (32-bit words) and
the offset of data in current packet in the whole TCP segment.
• Reserved (3-bits): Reserved for future use and all are set zero by default.
• Flags (1-bit each):
o NS: Nonce Sum bit is used by Explicit Congestion Notification signaling process.
o CWR: When a host receives a packet with the ECE bit set, it sets Congestion Window
Reduced to acknowledge that the ECE was received.
o ECE: It has two meanings:
If SYN bit is clear to 0, then ECE means that the IP packet has its CE
(congestion experience) bit set.
If SYN bit is set to 1, ECE means that the device is ECT capable.
o URG: It indicates that Urgent Pointer field has significant data and should be
processed.
o ACK: It indicates that Acknowledgement field has significance. If ACK is cleared to
0, it indicates that packet does not contain any acknowledgement.
o PSH: When set, it is a request to the receiving station to PUSH data (as soon as it
comes) to the receiving application without buffering it.
o RST: Reset flag has the following features:
It is used to refuse an incoming connection.
It is used to reject a segment.
It is used to restart a connection.
o SYN: This flag is used to set up a connection between hosts.
o FIN: This flag is used to release a connection and no more data is exchanged
thereafter. Because packets with SYN and FIN flags have sequence numbers, they
are processed in correct order.
• Window Size: This field is used for flow control between two stations and indicates the
amount of buffer (in bytes) the receiver has allocated for a segment, i.e. how much data
the receiver is expecting.
• Checksum: This field contains the checksum of Header, Data and Pseudo Headers.
• Urgent Pointer: It points to the urgent data byte if URG flag is set to 1.
• Options: It facilitates additional options not covered by the regular header. The Option
field is always described in 32-bit words; if it contains less than 32 bits of data,
padding is used to reach the 32-bit boundary.
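As a sketch, the fixed 20-byte part of this header can be unpacked with Python's struct module (the sample segment below is hypothetical):

```python
# Unpack the fixed 20-byte TCP header described above (RFC 793 layout).
import struct

def parse_tcp_header(raw: bytes):
    (src, dst, seq, ack, off_flags, window,
     checksum, urgent) = struct.unpack("!HHIIHHHH", raw[:20])
    return {
        "src_port": src,
        "dst_port": dst,
        "seq": seq,
        "ack": ack,
        "data_offset": (off_flags >> 12) & 0xF,  # header length in 32-bit words
        "flags": off_flags & 0x1FF,              # NS..FIN bits
        "window": window,
        "checksum": checksum,
        "urgent_ptr": urgent,
    }

# Hypothetical segment: port 12345 -> 80, SYN flag (0x002), offset 5 (20 bytes)
raw = struct.pack("!HHIIHHHH", 12345, 80, 1000, 0, (5 << 12) | 0x002,
                  65535, 0, 0)
hdr = parse_tcp_header(raw)
print(hdr["dst_port"], hdr["data_offset"], hdr["flags"] == 0x002)
# 80 5 True
```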
▪ Addressing:
TCP communication between two remote hosts is done by means of port numbers
(TSAPs). Port numbers can range from 0 to 65535 and are divided as:
• System Ports (0-1023)
• User Ports (1024-49151)
• Private/Dynamic Ports (49152-65535)
▪ Connection Management:
TCP communication works in Server/Client model. The client initiates the connection
and the server either accepts or rejects it. Three-way handshaking is used for connection
management.
▪ Establishment:
The client initiates the connection and sends a segment with a sequence number. The
server acknowledges it with its own sequence number and an ACK of the client's segment, which is
one more than the client's sequence number. The client, after receiving the ACK of its segment,
sends an acknowledgement of the server's response.
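From an application's point of view the handshake is performed by the operating system inside connect() and accept(); a minimal loopback sketch (using an OS-chosen free port):

```python
# Minimal TCP client/server over the loopback interface. The three-way
# handshake (SYN, SYN-ACK, ACK) happens inside connect()/accept().
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()        # completes the handshake server-side
    conn.sendall(b"hello")
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # handshake happens here
data = client.recv(5)
print(data)                          # b'hello'
client.close()
t.join()
server.close()
```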
▪ Release:
Either the server or the client can send a TCP segment with the FIN flag set to 1. When
the receiving end responds by acknowledging the FIN, that direction of the TCP communication is
closed and the connection is released.
▪ Bandwidth Management:
TCP uses the concept of window size to accommodate the need for bandwidth
management. The window size tells the sender at the remote end the number of data byte segments
the receiver at this end can receive. TCP uses a slow start phase, beginning with window size 1
and increasing the window size exponentially after each successful communication.
For example, the client uses a window size of 2 and sends 2 bytes of data. When the
acknowledgement of this segment is received, the window size is doubled to 4 and the next
segment sent will be 4 data bytes long. When the acknowledgement of the 4-byte data segment is
received, the client sets the window size to 8, and so on.
If an acknowledgement is missed, i.e. data is lost in the transit network or a NACK is
received, the window size is reduced to half and the slow start phase begins again.
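The growth-and-halving behaviour described above can be sketched as a toy simulation (window counted in abstract units; the ack/loss pattern is made up):

```python
# Toy slow-start simulation: the window doubles after each acknowledged
# round and is halved when a loss or NACK is detected.

def next_window(window, ack_received):
    if ack_received:
        return window * 2        # exponential growth in slow start
    return max(1, window // 2)   # shrink on loss / NACK

w = 1
history = []
for ack in [True, True, True, False, True]:
    w = next_window(w, ack)
    history.append(w)
print(history)  # [2, 4, 8, 4, 8]
```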
▪ Multiplexing:
The technique of combining two or more data streams in one session is called multiplexing.
When a TCP client initializes a connection with a server, it always refers to a well-defined
port number that indicates the application process; the client itself uses a randomly generated
port number from the private port number pool.
Using TCP multiplexing, a client can communicate with a number of different
application processes in a single session. For example, when a client requests a web page that
in turn contains different types of data (HTTP, SMTP, FTP, etc.), the TCP session timeout is
increased and the session is kept open for a longer time, so that the three-way handshake
overhead can be avoided.
This enables the client system to receive multiple connections over a single virtual
connection. These virtual connections are not good for servers if the timeout is too long.
▪ Congestion Control:
When a large amount of data is fed to a system that is not capable of handling it,
congestion occurs. TCP controls congestion by means of the window mechanism: TCP sets a window
size telling the other end how much data to send. TCP may use three algorithms for congestion
control:
• Additive increase, Multiplicative Decrease
• Slow Start
• Timeout React
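A sketch of the first of these, Additive Increase, Multiplicative Decrease (AIMD): the window grows by one segment per successful round and is halved on congestion, which is gentler than slow start's doubling. The congestion pattern below is made up:

```python
# Toy AIMD simulation: +1 segment per successful round, halve on
# congestion, never dropping below a window of 1.

def aimd_step(window, congested):
    return max(1, window // 2) if congested else window + 1

w = 4
trace = []
for congested in [False, False, False, True, False]:
    w = aimd_step(w, congested)
    trace.append(w)
print(trace)  # [5, 6, 7, 3, 4]
```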
▪ Timer Management:
TCP uses different types of timers to control and manage various tasks:
▪ Keep-alive timer:
• This timer is used to check the integrity and validity of a connection.
• When keep-alive time expires, the host sends a probe to check if the connection still
exists.
▪ Retransmission timer:
• This timer maintains stateful session of data sent.
• If the acknowledgement of sent data is not received within the retransmission time, the
data segment is sent again.
▪ Persist timer:
• TCP session can be paused by either host by sending Window Size 0.
• To resume the session a host needs to send Window Size with some larger value.
• If this segment never reaches the other end, both ends may wait for each other for infinite
time.
• When the Persist timer expires, the host re-sends its window size to let the other end
know.
• Persist Timer helps avoid deadlocks in communication.
▪ Timed-Wait:
• After releasing a connection, either of the hosts waits for a Timed-Wait time to terminate
the connection completely.
• This is in order to make sure that the other end has received the acknowledgement of its
connection termination request.
• The Timed-Wait can be a maximum of 240 seconds (4 minutes).
▪ Crash Recovery:
TCP is a very reliable protocol. It provides a sequence number to each byte sent in a
segment, and it provides a feedback mechanism: when a host receives a packet, it is bound to
ACK that packet with the next sequence number expected (if it is not the last segment).
When a TCP server crashes mid-way through communication and restarts its process, it sends a
TPDU broadcast to all its hosts. The hosts can then resend the last data segment that was never
acknowledged and carry on.
▪ Requirement of UDP:
A question may arise: why do we need an unreliable protocol to transport data? We
deploy UDP where acknowledgement packets would share a significant amount of bandwidth with
the actual data. For example, in the case of video streaming, thousands of packets are forwarded
to users; acknowledging all of them is troublesome and wastes a huge amount of bandwidth. The
best-effort delivery mechanism of the underlying IP protocol tries its best to deliver the
packets, but even if some packets in a video stream are lost, the impact is not calamitous and
can easily be ignored. The loss of a few packets in video and voice traffic sometimes goes
unnoticed.
▪ Features:
• UDP is used when acknowledgement of data does not hold any significance.
• UDP is good protocol for data flowing in one direction.
• UDP is simple and suitable for query based communications.
• UDP is not connection oriented.
• UDP does not provide congestion control mechanism.
• UDP does not guarantee ordered delivery of data.
• UDP is stateless.
• UDP is suitable protocol for streaming applications such as VoIP, multimedia streaming.
▪ UDP Header:
The UDP header is as simple as its function: it is only 8 bytes long and contains four
16-bit fields: Source Port, Destination Port, Length (header plus data), and Checksum.
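Since the UDP header has just four 16-bit fields (source port, destination port, length, checksum), it can be built and read with a one-line struct format; the port numbers below are arbitrary:

```python
# Build and read the 8-byte UDP header (RFC 768 layout).
import struct

def build_udp_header(src_port, dst_port, payload_len, checksum=0):
    # The Length field counts the 8-byte header plus the payload.
    return struct.pack("!HHHH", src_port, dst_port, 8 + payload_len, checksum)

hdr = build_udp_header(5000, 53, payload_len=12)
src, dst, length, csum = struct.unpack("!HHHH", hdr)
print(src, dst, length)  # 5000 53 20
```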
▪ UDP application:
Here are a few applications where UDP is used to transmit data:
• Domain Name Services
• Simple Network Management Protocol
• Trivial File Transfer Protocol
• Routing Information Protocol
• Kerberos
▪ Network Services:
Computer systems and computerized systems help human beings work efficiently and
explore the unthinkable. When these devices are connected together to form a network,
their capabilities are enhanced many times over. Some basic services a computer network
can offer are:
▪ Directory Services:
These services map a name to its value, which can be a variable or fixed value. This
software system helps store the information, organize it, and provide various means of
accessing it.
• Accounting:
In an organization, a number of users have user names and passwords mapped to
them. Directory services provide a means of storing this information in encrypted form
and making it available when requested.
• Authentication and Authorization:
User credentials are checked to authenticate a user at the time of login and/or
periodically. User accounts can be arranged in a hierarchical structure, and their access
to resources can be controlled using authorization schemes.
• Domain Name Services:
DNS is widely used and is one of the essential services on which the internet works.
This system maps domain names, which are easier to remember and recall, to IP addresses.
Because the network operates with the help of IP addresses while humans tend to remember
website names, DNS responds to a request containing a website name with the IP address
that is mapped to that name at the back end.
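A sketch of how an application asks the system resolver for such a mapping; resolving "localhost" keeps the example independent of real network access:

```python
# Name-to-address lookup through the operating system's resolver.
import socket

# "localhost" is resolved locally (hosts file), so no DNS traffic is
# needed; for a real site the same call would query the DNS servers.
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1
```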
▪ File Services:
File services include sharing and transferring files over the network.
• File Sharing:
One of the reasons that gave birth to networking was file sharing. File sharing
enables users to share their data with other users. A user can upload a file to a specific
server that is accessible to all intended users; alternatively, a user can share a file on
their own computer and provide access to the intended users.
• File Transfer:
This is the activity of copying or moving a file from one computer to another, or
to multiple computers, with the help of the underlying network. The network enables a
user to locate other users in the network and transfer files to them.
▪ Communication Services:
• Email:
Electronic mail is a communication method that a computer user cannot work
without. It is the basis of many of today's internet features. An email system has one or
more email servers, and all its users are provided with unique IDs. When a user sends an
email to another user, it is actually transferred between them with the help of an email
server.
• Social Networking:
Recent technologies have made technical life social. Computer-savvy people can
find other known people or friends, connect with them, and share thoughts, pictures,
and videos.
• Internet Chat:
Internet chat provides instant text transfer services between two hosts. Two or
more people can communicate with each other using text based Internet Relay Chat
services. These days, voice chat and video chat are very common.
• Discussion Boards:
Discussion boards provide a mechanism to connect multiple people with the same
interests. They enable users to post queries, questions, suggestions, etc., which can be
seen by all other users, and others may respond as well.
• Remote Access:
This service enables a user to access data residing on a remote computer; the
feature is known as Remote Desktop. It can be done from some remote device, e.g. a
mobile phone or a home computer.
▪ Application Services:
These services provide network-based facilities to users, such as web services,
database management, and resource sharing.
• Resource Sharing:
To use resources efficiently and economically, the network provides a means to
share them. This may include servers, printers, storage media, etc.
• Databases:
This application service is one of the most important services. It stores data and
information, processes it, and enables the users to retrieve it efficiently by using queries.
Databases help organizations to make decisions based on statistics.
• Web Services:
The World Wide Web has become a synonym for the internet. It is used to connect to
the internet and access files and information services provided by internet servers.