Unit I, II, III Notes CNCC

UNIT-I

▪ Computer Network Components:


Computer network components are the major parts needed to install a computer network. Some important network components are the NIC, switch, cable, hub, router, and modem. Depending on the type of network we need to install, some components can be omitted. For example, a wireless network does not require a cable.
Following are the major components required to install a network:
▪ NIC:
• NIC stands for network interface card.
• NIC is a hardware component used to connect a computer to other computers on a network.
• It can support transfer rates of 10, 100, or 1000 Mb/s.
• The MAC address, or physical address, is encoded on the network card chip and is assigned by the IEEE to identify the card uniquely. The MAC address is stored in PROM (Programmable Read-Only Memory).

There are two types of NIC:

1. Wired NIC
2. Wireless NIC

▪ Wired NIC: The wired NIC is present inside the motherboard. Cables and connectors are used with a wired NIC to transfer data.

▪ Wireless NIC: The wireless NIC contains an antenna to obtain the connection over the wireless network. For example, a laptop computer contains a wireless NIC.

▪ Hub:
A hub is a hardware device that divides a network connection among multiple devices. When a computer requests some information from the network, it first sends the request to the hub through a cable. The hub broadcasts this request to the entire network, and all the devices check whether the request belongs to them or not; if not, the request is dropped.
The process used by the hub consumes more bandwidth and limits the amount of communication. Nowadays, hubs are largely obsolete and have been replaced by more advanced components such as switches and routers.
▪ Switch:
A switch is a hardware device that connects multiple devices on a computer network. A switch has more advanced features than a hub: it maintains an updated address table that decides where the data is to be transmitted. The switch delivers the message to the correct destination based on the physical (MAC) address present in the incoming message, and does not broadcast the message to the entire network like a hub. It determines the device to which the message is to be transmitted, so we can say that a switch provides a direct connection between the source and destination. This increases the speed of the network.

▪ Router:
• A router is a hardware device used to connect a LAN with an internet connection. It receives, analyzes, and forwards incoming packets to another network.
• A router works at Layer 3 (the Network layer) of the OSI reference model.
• A router forwards each packet based on the information available in its routing table.
• It determines the best path from the available paths for the transmission of the packet.

▪ Advantages of Router:
• Security: The information transmitted on the network traverses the entire cable, but only the specified device that has been addressed can read the data.
• Reliability: If one server stops functioning, only its own network goes down; the other networks served by the router are not affected.
• Performance: A router enhances the overall performance of the network. Suppose 24 workstations in one network each generate the same amount of traffic, increasing the load on that network. Splitting the single network into two networks of 12 workstations each roughly halves the traffic load.
• Network range: A router also extends the range over which the network can operate.

▪ Modem:
• A modem is a hardware device that allows the computer to connect to the internet over an existing telephone line.
• A modem is not integrated into the motherboard; rather, it is installed in a PCI slot on the motherboard.
• Modem stands for Modulator/Demodulator. It converts digital data into an analog signal for transmission over the telephone lines.
Based on the differences in speed and transmission rate, a modem can be classified in the
following categories:
• Standard PC modem or Dial-up modem
• Cellular Modem
• Cable modem

▪ Cables and Connectors:


Cable is a transmission media used for transmitting a signal.
There are three types of cables used in transmission:
• Twisted pair cable
• Coaxial cable
• Fibre-optic cable

▪ Computer Network Types:


A computer network is a group of computers linked to each other that enables the computers to communicate with one another and share resources, data, and applications.
Computer networks can be categorized by their size. A computer network is mainly of four types:

• LAN(Local Area Network)


• PAN(Personal Area Network)
• MAN(Metropolitan Area Network)
• WAN(Wide Area Network)

▪ LAN (Local Area Network):


• A Local Area Network is a group of computers connected to each other in a small area such as a building or office.
• A LAN is used for connecting two or more personal computers through a communication medium such as twisted pair or coaxial cable.
• It is less costly, as it is built with inexpensive hardware such as hubs, network adapters, and Ethernet cables.
• Data is transferred at an extremely fast rate in a Local Area Network.
• A Local Area Network provides higher security.
▪ PAN (Personal Area Network):
• A Personal Area Network is a network arranged around an individual person, typically within a range of 10 meters.
• A Personal Area Network is used for connecting computer devices of personal use.
• Thomas Zimmerman was the first research scientist to introduce the idea of the Personal Area Network.
• A Personal Area Network covers an area of about 30 feet.
• Personal devices used to build a personal area network include laptops, mobile phones, media players, and play stations.

There are two types of Personal Area Network:

• Wired Personal Area Network


• Wireless Personal Area Network

▪ Wireless Personal Area Network:


Wireless Personal Area Network is developed by simply using wireless technologies such
as Wi-Fi, Bluetooth. It is a low range network.

▪ Wired Personal Area Network:


Wired Personal Area Network is created by using the USB.
▪ Examples of Personal Area Network:
• Body Area Network: Body Area Network is a network that moves with a person. For
example, a mobile network moves with a person. Suppose a person establishes a network
connection and then creates a connection with another device to share the information.
• Offline Network: An offline network can be created inside a home, so it is also known as a home network. A home network is designed to integrate devices such as printers, computers, and televisions, but they are not connected to the internet.
• Small Home Office: It is used to connect a variety of devices to the internet and to a corporate network using a VPN.

▪ MAN (Metropolitan Area Network):


• A Metropolitan Area Network is a network that covers a larger geographic area by interconnecting different LANs to form a larger network.
• Government agencies use a MAN to connect to citizens and private industries.
• In a MAN, various LANs are connected to each other through a telephone exchange line.
• The most widely used protocols in a MAN are RS-232, Frame Relay, ATM, ISDN, OC-3, ADSL, etc.
• It has a higher range than a Local Area Network (LAN).

▪ Uses of Metropolitan Area Network:


• MAN is used in communication between the banks in a city.
• It can be used in an Airline Reservation.
• It can be used in a college within a city.
• It can also be used for communication in the military.

▪ WAN (Wide Area Network):


• A Wide Area Network is a network that extends over a large geographical area such as states or countries.
• A Wide Area Network is a much bigger network than a LAN.
• A Wide Area Network is not limited to a single location; it spans a large geographical area through telephone lines, fibre-optic cables, or satellite links.
• The internet is one of the biggest WANs in the world.
• A Wide Area Network is widely used in the fields of business, government, and education.
▪ Examples of Wide Area Network:
• Mobile Broadband: A 4G network is widely used across a region or country.
• Last mile: A telecom company provides internet services to customers in hundreds of cities by connecting their homes with fiber.
• Private network: A bank provides a private network that connects its 44 offices. This network is made using telephone leased lines provided by the telecom company.

▪ Advantages of Wide Area Network:


Following are the advantages of the Wide Area Network:
• Geographical area: A Wide Area Network covers a large geographical area. If a branch of our office is in a different city, we can connect with it through a WAN. The internet provides a leased line through which we can connect with another branch.
• Centralized data: In the case of a WAN, data is centralized, so we do not need to buy separate email, file, or backup servers.
• Get updated files: Software companies work on a live server, so programmers get the updated files within seconds.
• Exchange messages: In a WAN, messages are transmitted fast. Web applications like Facebook, WhatsApp, and Skype allow you to communicate with friends.
• Sharing of software and resources: In a WAN, we can share software and other resources such as hard drives and RAM.
• Global business: We can do business over the internet globally.
• High bandwidth: If we use leased lines for our company, we get high bandwidth. High bandwidth increases the data transfer rate, which in turn increases the productivity of the company.

▪ Disadvantages of Wide Area Network:


The following are the disadvantages of the Wide Area Network:
• Security issues: A WAN has more security issues compared to a LAN or MAN, as many technologies are combined together, which creates security problems.
• Needs firewall and antivirus software: Data transferred over the internet can be altered or intercepted by hackers, so a firewall is needed. Some people can inject viruses into our systems, so antivirus software is needed for protection.
• High setup cost: The installation cost of a WAN is high, as it involves purchasing routers, switches, etc.
• Troubleshooting problems: A WAN covers a large area, so fixing a problem is difficult.

▪ Internetwork:
• An internetwork is formed when two or more computer networks (LANs, WANs, or network segments) are connected using devices and configured with a local addressing scheme. This process is known as internetworking.
• An interconnection between public, private, commercial, industrial, or government computer networks can also be defined as internetworking.
• Internetworking uses the Internet Protocol.
• The reference model used for internetworking is Open Systems Interconnection (OSI).
▪ Types of Internetwork:
1. Extranet: An extranet is a communication network based on the internet protocol suite, such as the Transmission Control Protocol and Internet Protocol. It is used for information sharing. Access to an extranet is restricted to only those users who have login credentials. An extranet is the lowest level of internetworking. It can be categorized as a MAN, WAN, or other computer network. An extranet cannot consist of a single LAN; it must have at least one connection to an external network.
2. Intranet: An intranet is a private network based on the internet protocol suite, such as the Transmission Control Protocol and Internet Protocol. An intranet belongs to an organization and is only accessible by the organization's employees or members. The main aim of the intranet is to share information and resources among the organization's employees. An intranet provides facilities for group work and teleconferencing.

▪ Intranet advantages:
• Communication: It provides cheap and easy communication. An employee of the organization can communicate with another employee through email or chat.
• Time-saving: Information on the intranet is shared in real time, so it is time-saving.
• Collaboration: Collaboration is one of the most important advantages of the intranet. Information is distributed among the employees of the organization and can only be accessed by authorized users.
• Platform independence: It is a neutral architecture, as a computer can be connected to other devices with different architectures.
• Cost effective: People can view data and documents using a browser and distribute duplicate copies over the intranet. This leads to a reduction in cost.

▪ What is Topology?
Topology defines the structure of the network of how all the components are
interconnected to each other. There are two types of topology: physical and logical topology.
Physical topology is the geometric representation of all the nodes in a network.
▪ Bus Topology:

• The bus topology is designed in such a way that all the stations are connected through a
single cable known as a backbone cable.
• Each node is either connected to the backbone cable by drop cable or directly connected
to the backbone cable.
• When a node wants to send a message over the network, it puts a message over the
network. All the stations available in the network will receive the message whether it has
been addressed or not.
• The bus topology is mainly used in 802.3 (Ethernet) and 802.4 standard networks.
• The configuration of a bus topology is quite simpler as compared to other topologies.
• The backbone cable is considered as a "single lane" through which the message is
broadcast to all the stations.
• The most common access method of the bus topologies is CSMA (Carrier Sense Multiple
Access).

▪ CSMA:
It is a media access control used to control the data flow so that data integrity is
maintained, i.e., the packets do not get lost. There are two alternative ways of handling the
problems that occur when two nodes send the messages simultaneously.
• CSMA CD: CSMA CD (Collision Detection) is an access method used to detect a collision. Once a collision is detected, the sender stops transmitting the data. Therefore, it works on "recovery after the collision" (a short sketch of this behaviour follows this list).
• CSMA CA: CSMA CA (Collision Avoidance) is an access method used to avoid the
collision by checking whether the transmission media is busy or not. If busy, then the
sender waits until the media becomes idle. This technique effectively reduces the
possibility of the collision. It does not work on "recovery after the collision".
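To make the CSMA/CD behaviour concrete, here is a minimal Python sketch of the decision logic described above. It is only an illustration: the channel object and its methods (is_busy, start_transmission, collision_detected, abort_transmission) are hypothetical placeholders for the shared medium, and the slot time and attempt limit are assumptions loosely modelled on classic Ethernet.

import random
import time

def csma_cd_send(frame, channel, max_attempts=16, slot_time=0.000512):
    # Sketch of CSMA/CD: sense the channel, transmit, back off and retry on collision.
    for attempt in range(max_attempts):
        while channel.is_busy():                 # carrier sense: wait for an idle medium
            time.sleep(slot_time)
        channel.start_transmission(frame)
        if not channel.collision_detected():     # no collision: the frame went through
            return True
        channel.abort_transmission()             # collision: stop sending immediately
        k = min(attempt + 1, 10)                 # binary exponential backoff
        time.sleep(random.randint(0, 2 ** k - 1) * slot_time)
    return False                                 # give up after too many collisions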

▪ Advantages of Bus topology:


• Low-cost cable: In bus topology, nodes are directly connected to the cable without
passing through a hub. Therefore, the initial cost of installation is low.
• Moderate data speeds: Coaxial or twisted-pair cables are mainly used in bus-based networks and support up to 10 Mbps.
• Familiar technology: Bus topology is a familiar technology as the installation and
troubleshooting techniques are well known, and hardware components are easily
available.
• Limited failure: A failure in one node will not have any effect on other nodes.

▪ Disadvantages of Bus topology:


• Extensive cabling: A bus topology is quite simpler, but still it requires a lot of cabling.
• Difficult troubleshooting: It requires specialized test equipment to determine the cable
faults. If any fault occurs in the cable, then it would disrupt the communication for all the
nodes.
• Signal interference: If two nodes send the messages simultaneously, then the signals of
both the nodes collide with each other.
• Reconfiguration difficult: Adding new devices to the network would slow down the
network.
• Attenuation: Attenuation is a loss of signal strength that leads to communication issues. Repeaters are used to regenerate the signal.

▪ Ring Topology:

• Ring topology is like a bus topology, but with connected ends.


• The node that receives the message from the previous computer will retransmit to the
next node.
• The data flows in one direction, i.e., it is unidirectional.
• The data flows in a single loop continuously known as an endless loop.
• It has no terminated ends, i.e., each node is connected to another node and there is no termination point.
• The data in a ring topology flow in a clockwise direction.
• The most common access method of the ring topology is token passing.
• Token passing: It is a network access method in which token is passed from one node to
another node.
• Token: It is a frame that circulates around the network.
▪ Working of Token passing:
• A token moves around the network, passed from computer to computer until it reaches the destination.
• The sender modifies the token by putting the destination address along with the data into it.
• The data is passed from one device to another until the destination address matches. Once the token is received by the destination device, it sends an acknowledgment to the sender.
• In a ring topology, the token is used as a carrier (a toy simulation follows this list).
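The steps above can be illustrated with a small, purely in-memory Python sketch. This is not a real Token Ring implementation: the ring is just a list of node names and the token is a dictionary, both assumed here only for illustration.

def token_ring_demo(nodes, sender, receiver, payload):
    # A free token circulates; the sender seizes it and attaches the address and data,
    # the destination copies the data and releases the token with an acknowledgement.
    token = {"free": True}
    start = nodes.index(sender)
    ring = [nodes[(start + i) % len(nodes)] for i in range(len(nodes) + 1)]
    for node in ring:
        if node == token.get("dst"):                      # destination recognises its address
            print(node, "received:", token["data"])
            token = {"free": True, "ack_for": token["src"]}
        elif node == sender and token.get("ack_for") == sender:
            print(sender, "received acknowledgement")     # token has come back around
        elif node == sender and token["free"]:
            token = {"free": False, "src": sender, "dst": receiver, "data": payload}
    return token

token_ring_demo(["A", "B", "C", "D"], sender="A", receiver="C", payload="hello")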

▪ Advantages of Ring topology:


• Network Management: Faulty devices can be removed from the network without
bringing the network down.
• Product availability: Many hardware and software tools for network operation and
monitoring are available.
• Cost: Twisted pair cabling is inexpensive and easily available. Therefore, the installation
cost is very low.
• Reliable: It is a more reliable network because the communication system is not
dependent on the single host computer.

▪ Disadvantages of Ring topology:


• Difficult troubleshooting: It requires specialized test equipment to determine the cable
faults. If any fault occurs in the cable, then it would disrupt the communication for all the
nodes.
• Failure: The breakdown in one station leads to the failure of the overall network.
• Reconfiguration difficult: Adding new devices to the network would slow down the
network.
• Delay: Communication delay is directly proportional to the number of nodes. Adding
new devices increases the communication delay.

▪ Star Topology:

• Star topology is an arrangement of the network in which every node is connected to the
central hub, switch or a central computer.
• The central computer is known as a server, and the peripheral devices attached to the
server are known as clients.
• Coaxial cable or RJ-45 cables are used to connect the computers.
• Hubs or Switches are mainly used as connection devices in a physical star topology.
• Star topology is the most popular topology in network implementation.

▪ Advantages of Star topology:


• Efficient troubleshooting: Troubleshooting is quite efficient in a star topology as
compared to bus topology. In a bus topology, the manager has to inspect the kilometers
of cable. In a star topology, all the stations are connected to the centralized network.
Therefore, the network administrator has to go to the single station to troubleshoot the
problem.
• Network control: Complex network control features can be easily implemented in the star
topology. Any changes made in the star topology are automatically accommodated.
• Limited failure: As each station is connected to the central hub with its own cable,
therefore failure in one cable will not affect the entire network.
• Familiar technology: Star topology is a familiar technology as its tools are cost-effective.
• Easily expandable: It is easily expandable as new stations can be added to the open ports
on the hub.
• Cost effective: Star topology networks are cost-effective as it uses inexpensive coaxial
cable.
• High data speeds: It supports a bandwidth of approx 100Mbps. Ethernet 100BaseT is one
of the most popular Star topology networks.

▪ Disadvantages of Star topology:


• A Central point of failure: If the central hub or switch goes down, then all the connected
nodes will not be able to communicate with each other.
• Cable: Sometimes cable routing becomes difficult when a significant amount of routing is
required.

▪ Tree topology:

• Tree topology combines the characteristics of bus topology and star topology.
• A tree topology is a type of structure in which all the computers are connected with each
other in hierarchical fashion.
• The top-most node in tree topology is known as a root node, and all other nodes are the
descendants of the root node.
• Only one path exists between any two nodes for data transmission. Thus, it forms a parent-child hierarchy.

▪ Advantages of Tree topology:


• Support for broadband transmission: Tree topology is mainly used to provide broadband
transmission, i.e., signals are sent over long distances without being attenuated.
• Easily expandable: We can add the new device to the existing network. Therefore, we can
say that tree topology is easily expandable.
• Easily manageable: In tree topology, the whole network is divided into segments known
as star networks which can be easily managed and maintained.
• Error detection: Error detection and error correction are very easy in a tree topology.
• Limited failure: The breakdown in one station does not affect the entire network.
• Point-to-point wiring: It has point-to-point wiring for individual segments.

▪ Disadvantages of Tree topology:


• Difficult troubleshooting: If any fault occurs in the node, then it becomes difficult to
troubleshoot the problem.
• High cost: Devices required for broadband transmission are very costly.
• Failure: A tree topology mainly relies on main bus cable and failure in main bus cable
will damage the overall network.
• Reconfiguration difficult: If new devices are added, then it becomes difficult to
reconfigure.

▪ Mesh topology:

• Mesh topology is an arrangement of the network in which computers are interconnected with each other through various redundant connections.
• There are multiple paths from one computer to another.
• It does not contain a switch, hub, or any central computer acting as a central point of communication.
• The Internet is an example of mesh topology.
• Mesh topology is mainly used for WAN implementations where communication failures are a critical concern.
• Mesh topology is also used for wireless networks.
• The number of cables in a fully connected mesh is given by the formula: Number of cables = n(n-1)/2, where n is the number of nodes in the network (a short example follows this list).
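As a quick check of the formula, the sketch below computes the number of point-to-point cables for a fully connected mesh; the node counts used are arbitrary examples.

def mesh_cable_count(n):
    # Number of links in a fully connected mesh: every node connects to every other node once.
    return n * (n - 1) // 2

for n in (4, 5, 10):
    print(n, "nodes ->", mesh_cable_count(n), "cables")   # 4 -> 6, 5 -> 10, 10 -> 45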

▪ Mesh topology is divided into two categories:


• Fully connected mesh topology
• Partially connected mesh topology

• Full Mesh Topology: In a full mesh topology, each computer is connected to all the
computers available in the network.
• Partial Mesh Topology: In a partial mesh topology, not all but certain computers are
connected to those computers with which they communicate frequently.

▪ Advantages of Mesh topology:


• Reliable: Mesh topology networks are very reliable, as the breakdown of any one link does not affect communication between the connected computers.
• Fast Communication: Communication is very fast between the nodes.
• Easier Reconfiguration: Adding new devices would not disrupt the communication
between other devices.

▪ Disadvantages of Mesh topology:


• Cost: A mesh topology contains a large number of connected devices such as a router and
more transmission media than other topologies.
• Management: Mesh topology networks are very large and very difficult to maintain and
manage. If the network is not monitored carefully, then the communication link failure
goes undetected.
• Efficiency: In this topology, the large number of redundant connections reduces the efficiency of the network.
▪ Hybrid Topology:

• The combination of two or more different topologies is known as a hybrid topology.
• A hybrid topology is a connection between different links and nodes to transfer data.
• When two or more different topologies are combined together, the result is a hybrid topology; connecting similar topologies to each other does not result in a hybrid topology. For example, if a ring topology exists in one branch of ICICI Bank and a bus topology in another branch, connecting these two topologies results in a hybrid topology.

▪ Advantages of Hybrid Topology:


• Reliable: A fault occurring in any part of the network will not affect the functioning of the rest of the network.
• Scalable: The size of the network can easily be expanded by adding new devices without affecting the functionality of the existing network.
• Flexible: This topology is very flexible, as it can be designed according to the requirements of the organization.
• Effective: A hybrid topology is very effective, as it can be designed in such a way that the strengths of the network are maximized and its weaknesses are minimized.

▪ Disadvantages of Hybrid topology:


• Complex design: The major drawback of the Hybrid topology is the design of the Hybrid
network. It is very difficult to design the architecture of the Hybrid network.
• Costly Hub: The Hubs used in the Hybrid topology are very expensive as these hubs are
different from usual Hubs used in other topologies.
• Costly infrastructure: The infrastructure cost is very high as a hybrid network requires a
lot of cabling, network devices, etc.
▪ What Is the OSI Model:
The Open Systems Interconnection (OSI) model describes seven layers that computer
systems use to communicate over a network. It was the first standard model for network
communications, adopted by all major computer and telecommunication companies in the early
1980s.
The modern Internet is not based on OSI, but on the simpler TCP/IP model. However, the
OSI 7-layer model is still widely used, as it helps visualize and communicate how networks
operate, and helps isolate and troubleshoot networking problems.
OSI was introduced in 1983 by representatives of the major computer and telecom
companies, and was adopted by ISO as an international standard in 1984.

▪ OSI Model Explained: The OSI 7 Layers:

We’ll describe OSI layers “top down” from the application layer that directly serves the
end user, down to the physical layer.

7. Application Layer:
The application layer is used by end-user software such as web browsers and email
clients. It provides protocols that allow software to send and receive information and present
meaningful data to users. A few examples of application layer protocols are the Hypertext
Transfer Protocol (HTTP), File Transfer Protocol (FTP), Post Office Protocol (POP), Simple
Mail Transfer Protocol (SMTP), and Domain Name System (DNS).
6. Presentation Layer:
The presentation layer prepares data for the application layer. It defines how two devices
should encode, encrypt, and compress data so it is received correctly on the other end. The
presentation layer takes any data transmitted by the application layer and prepares it for
transmission over the session layer.

5. Session Layer:
The session layer creates communication channels, called sessions, between devices. It is
responsible for opening sessions, ensuring they remain open and functional while data is being
transferred, and closing them when communication ends. The session layer can also set
checkpoints during a data transfer—if the session is interrupted, devices can resume data transfer
from the last checkpoint.

4. Transport Layer:
The transport layer takes data transferred in the session layer and breaks it into
“segments” on the transmitting end. It is responsible for reassembling the segments on the
receiving end, turning it back into data that can be used by the session layer. The transport layer
carries out flow control, sending data at a rate that matches the connection speed of the receiving device, and error control, checking whether data was received correctly and, if not, requesting it again.

3. Network Layer:
The network layer has two main functions. One is breaking up segments into network
packets, and reassembling the packets on the receiving end. The other is routing packets by
discovering the best path across a physical network. The network layer uses network addresses
(typically Internet Protocol addresses) to route packets to a destination node.

2. Data Link Layer:


The data link layer establishes and terminates a connection between two physically-
connected nodes on a network. It breaks up packets into frames and sends them from source to
destination. This layer is composed of two parts—Logical Link Control (LLC), which identifies
network protocols, performs error checking and synchronizes frames, and Media Access Control
(MAC) which uses MAC addresses to connect devices and define permissions to transmit and
receive data.

1. Physical Layer:
The physical layer is responsible for the physical cable or wireless connection between
network nodes. It defines the connector, the electrical cable or wireless technology connecting
the devices, and is responsible for transmission of the raw data, which is simply a series of 0s
and 1s, while taking care of bit rate control.

▪ Advantages of OSI Model:


▪ The OSI model helps users and operators of computer networks:
• Determine the required hardware and software to build their network.
• Understand and communicate the process followed by components communicating
across a network.
• Perform troubleshooting, by identifying which network layer is causing an issue and
focusing efforts on that layer.
▪ The OSI model helps network device manufacturers and networking software vendors:
• Create devices and software that can communicate with products from any other vendor,
allowing open interoperability
• Define which parts of the network their products should work with.
• Communicate to users at which network layers their product operates – for example, only
at the application layer, or across the stack.

▪ TCP/IP Model:
The OSI Model we just looked at is just a reference/logical model. It was designed to
describe the functions of the communication system by dividing the communication procedure
into smaller and simpler components. The TCP/IP model, in contrast, was designed and developed by the Department of Defense (DoD) in the 1960s and is based on standard protocols. It stands for Transmission Control Protocol/Internet Protocol. The TCP/IP model is a concise
version of the OSI model. It contains four layers, unlike seven layers in the OSI model. The
layers are:
• Process/Application Layer
• Host-to-Host/Transport Layer
• Internet Layer
• Network Access/Link Layer
The diagrammatic comparison of the TCP/IP and OSI models is as follows:

▪ OSI vs. TCP/IP Model:


The Transfer Control Protocol/Internet Protocol (TCP/IP) is older than the OSI model
and was created by the US Department of Defense (DoD). A key difference between the models
is that TCP/IP is simpler, collapsing several OSI layers into one:
• OSI layers 5, 6, 7 are combined into one Application Layer in TCP/IP
• OSI layers 1, 2 are combined into one Network Access Layer in TCP/IP – however
TCP/IP does not take responsibility for sequencing and acknowledgement functions,
leaving these to the underlying transport layer.
Other important differences:
• TCP/IP is a functional model designed to solve specific communication problems, and
which is based on specific, standard protocols. OSI is a generic, protocol-independent
model intended to describe all forms of network communication.
• In TCP/IP, most applications use all the layers, while in OSI simple applications do not
use all seven layers. Only layers 1, 2 and 3 are mandatory to enable any data
communication.

▪ Difference between TCP/IP and OSI Model:

• TCP refers to Transmission Control Protocol; OSI refers to Open Systems Interconnection.
• TCP/IP has 4 layers; OSI has 7 layers.
• TCP/IP is more reliable; OSI is less reliable.
• TCP/IP does not have very strict boundaries; OSI has strict boundaries.
• TCP/IP follows a horizontal approach; OSI follows a vertical approach.
• TCP/IP handles the session and presentation functions within the application layer itself; OSI uses separate session and presentation layers.
• TCP/IP developed the protocols first and then the model; OSI developed the model first and then the protocols.
• The transport layer in TCP/IP does not provide assured delivery of packets; in the OSI model, the transport layer provides assured delivery of packets.
• The TCP/IP network layer provides only connectionless service; in the OSI model, both connectionless and connection-oriented services are provided by the network layer.
• Protocols cannot be replaced easily in the TCP/IP model; in the OSI model, protocols are better covered and are easy to replace as technology changes.

1. Network Access Layer:


This layer corresponds to the combination of the Data Link Layer and the Physical Layer of the OSI model. It looks after hardware addressing, and the protocols present in this layer allow for the physical transmission of data. ARP (covered under the Internet Layer below) is sometimes argued to belong to this layer rather than to the Internet Layer: it is described as residing in layer 3 while being encapsulated by layer 2 protocols.

2. Internet Layer:
This layer parallels the functions of OSI’s Network layer. It defines the protocols which
are responsible for logical transmission of data over the entire network. The main protocols
residing at this layer are:

1. IP: stands for Internet Protocol. It is responsible for delivering packets from the source host to the destination host by looking at the IP addresses in the packet headers. IP has two versions, IPv4 and IPv6. IPv4 is the version most websites currently use, but IPv6 usage is growing because the number of IPv4 addresses is limited compared to the number of users.
2. ICMP: stands for Internet Control Message Protocol. It is encapsulated within IP
datagrams and is responsible for providing hosts with information about network
problems.
3. ARP: stands for Address Resolution Protocol. Its job is to find the hardware address of a
host from a known IP address. ARP has several types: Reverse ARP, Proxy ARP,
Gratuitous ARP and Inverse ARP.

3. Host-to-Host Layer:
This layer is analogous to the transport layer of the OSI model. It is responsible for end-
to-end communication and error-free delivery of data. It shields the upper-layer applications
from the complexities of data. The two main protocols present in this layer are:

1. Transmission Control Protocol (TCP): It is known to provide reliable and error-free


communication between end systems. It performs sequencing and segmentation of data.
It also has acknowledgment feature and controls the flow of the data through flow control
mechanism. It is a very effective protocol but has a lot of overhead due to such features.
Increased overhead leads to increased cost.
2. User Datagram Protocol (UDP): UDP, on the other hand, does not provide any such features. It is the go-to protocol if your application does not require reliable transport, as it is very cost-effective. Unlike TCP, which is a connection-oriented protocol, UDP is connectionless.
4. Application Layer:
This layer performs the functions of top three layers of the OSI model: Application,
Presentation and Session Layer. It is responsible for node-to-node communication and controls
user-interface specifications. Some of the protocols present in this layer are: HTTP, HTTPS,
FTP, TFTP, Telnet, SSH, SMTP, SNMP, NTP, DNS, DHCP, NFS, X Window, LPD. Have a
look at Protocols in Application Layer for some information about these protocols. Protocols
other than those present in the linked article are:

1. HTTP and HTTPS: HTTP stands for Hypertext Transfer Protocol. It is used by the World Wide Web to manage communications between web browsers and servers. HTTPS stands for HTTP-Secure. It is a combination of HTTP with SSL (Secure Sockets Layer). It is used in cases where the browser needs to fill out forms, sign in, authenticate, and carry out bank transactions.
2. SSH: SSH stands for Secure Shell. It is terminal emulation software similar to Telnet. SSH is preferred because of its ability to maintain an encrypted connection. It sets up a secure session over a TCP/IP connection.
3. NTP: NTP stands for Network Time Protocol. It is used to synchronize the clocks on our computers to one standard time source. It is very useful in situations like bank transactions. Consider the following situation without NTP: you carry out a transaction where your computer reads the time as 2:30 PM while the server records it as 2:28 PM. The server can crash very badly if it is out of sync.
UNIT-II
▪ Switching in Computer Networks: Circuit, Packet and Message
▪ Switching in Computer Networks:
In broad networks, there can be various paths to send a message from sender to receiver.
Switching in computer networks is used to select the best path for data transmission. For this
purpose, different switching techniques are used.
The switched network comprises a series of interlinked nodes called switches. Switches are hardware or software devices capable of creating temporary connections between two or more devices.
End devices are linked to the switches, but not directly to each other. The nodes are connected with each other through common devices, and some nodes are used to route packets.

▪ Types of Switching Techniques in Computer Networks:

There are three types of switching techniques in computer networks, and each technique has a different purpose.
The types of switching techniques in computer networks are as follows.
• Circuit Switching
• Packet Switching
• Message Switching.

▪ Circuit Switching:
It is a type of switching in which we set a physical connection between sender and
receiver. The connection is set up when the call is made from transmitter to receiver telephone.
Once a call is set up, the dedicated path exits between both ends. The path will continue
to exist until the call is disconnected.
The above diagram shows the functionality of circuit switching in computer networks.
Every computer has a physical connection to a node, as you can see in the circuit switching
diagram. Using nodes, devices can send a message from one end to another

▪ Advantages of Circuit Switching:
• It provides a guaranteed data rate.
• No delay in the data flow.

▪ Disadvantages of Circuit Switching:
• It requires more bandwidth.
• It takes a long time to establish a connection.
• It is not suitable for high traffic.

▪ Packet Switching:
In packet switching, a message is broken into packets for transmission. Each packet carries the source, destination, and intermediate node address information.
The entire message is divided into smaller pieces, called packets. Each packet travels independently and contains address information.
These packets travel through the shortest path in a communication network. All the packets are reassembled at the receiving end to make a complete message.
There are two types of packet switching in computer networks, as follows.
• Datagram Packet Switching
• Virtual Circuit Packet Switching

The packet switching diagram shows the message divided into four packets (i.e. 1, 2, 3 and 4). These packets contain the addresses and information.
By travelling through the shortest path, the packets reach their destination. At the receiving end, the packets are reassembled in the same order (which is 1, 2, 3, 4) to generate the entire message.
▪ Advantages of Packet Switching:
• Bandwidth is reduced.
• If one link goes down, the remaining packets can be sent through another route.

▪ Message Switching:
In message switching, the complete message is transferred from one end to another through nodes. There is no physical connection or link between sender and receiver.
The message contains the destination address. Each node stores the message and then forwards it to the next node, as shown in the diagram below.
In telegraphy, the text message is encoded using Morse code into a sequence of dots and dashes. Each dot or dash is communicated by transmitting a short or long pulse of electrical current. The diagram illustrates the concept of message switching in computer networks.

▪ Advantages of Message Switching:
• Reduces network traffic
• Network devices share the channel.

▪ Disadvantages:
• It does not establish a dedicated path between the two communicating devices.

▪ Difference between Circuit, Packet and Message Switching:
The following table shows a comparison between the three types of switching techniques
in computer networks.

• Connection: In circuit switching, there is a physical connection between sender and receiver. In packet switching, there is no physical connection between sender and receiver. In message switching, there is no physical path between sender and receiver.
• Path: In circuit switching, all packets use the same path. In packet switching, packets travel independently. In message switching, messages are stored and then forwarded.
• Congestion: In circuit switching, congestion occurs per minute. In packet switching, congestion occurs per packet. In message switching, there is no congestion.
• Traffic handling: Circuit switching is not suitable for handling high traffic. Packet switching is suitable for handling high traffic. Message switching is not suitable for handling high traffic.
• Bandwidth: In circuit switching, wastage of bandwidth is possible. In packet switching, there is no wastage of bandwidth. In message switching, there is also no wastage of bandwidth.
• Recording: In circuit switching, recording of the packet is not possible. In packet switching, recording of the packet is possible. In message switching, recording of the packet is possible.
• Message form: In circuit switching and packet switching, the message is in the form of packets. In message switching, the message is in the form of blocks.
• Real-time use: Circuit switching can be used with real-time applications. Packet switching can be used in real-time applications. Message switching cannot be used in real-time applications.

▪ Data Link Layer Design Issues:


The data link layer in the OSI (Open System Interconnections) Model is in between the
physical layer and the network layer. This layer converts the raw transmission facility provided
by the physical layer to a reliable and error-free link.
The main functions and the design issues of this layer are:
• Providing services to the network layer
• Framing
• Error Control
• Flow Control

▪ Services to the Network Layer:


In the OSI Model, each layer uses the services of the layer below it and provides services
to the layer above it. The data link layer uses the services offered by the physical layer. The
primary function of this layer is to provide a well-defined service interface to the network layer above it.
The types of services provided can be of three types:
• Unacknowledged connectionless service
• Acknowledged connectionless service
• Acknowledged connection-oriented service

▪ Framing:
The data link layer encapsulates each data packet from the network layer into frames that
are then transmitted.
A frame has three parts, namely −
• Frame Header
• Payload field that contains the data packet from network layer
• Trailer
▪ Error Control:
The data link layer ensures error free link for data transmission. The issues it caters to
with respect to error control are:
• Dealing with transmission errors
• Sending acknowledgement frames in reliable connections
• Retransmitting lost frames
• Identifying duplicate frames and deleting them
• Controlling access to shared channels in case of broadcasting

▪ Flow Control:
The data link layer regulates flow control so that a fast sender does not drown a slow
receiver. When the sender sends frames at very high speeds, a slow receiver may not be able to
handle it. There will be frame losses even if the transmission is error-free. The two common
approaches for flow control are:
• Feedback based flow control
• Rate based flow control

▪ Error Detection and Correction in Data Link Layer:


Data-link layer uses error control techniques to ensure that frames, i.e. bit streams of data,
are transmitted from the source to the destination with a certain extent of accuracy.

▪ Errors:
When bits are transmitted over a computer network, they can get corrupted due to interference and network problems. The corrupted bits lead to spurious data being received by the destination and are called errors.

▪ Types of Errors:
Errors can be of three types, namely single bit errors, multiple bit errors, and burst errors.
• Single-bit error: In the received frame, only one bit has been corrupted, i.e. either changed from 0 to 1 or from 1 to 0.
• Multiple-bit error: In the received frame, more than one bit is corrupted.

• Burst error: In the received frame, more than one consecutive bit is corrupted.

▪ Error Control:
Error control can be done in two ways
• Error detection: Error detection involves checking whether any error has occurred or not.
The number of error bits and the type of error does not matter.
• Error correction: Error correction involves ascertaining the exact number of bits that have been corrupted and the location of the corrupted bits.
For both error detection and error correction, the sender needs to send some additional bits along
with the data bits. The receiver performs necessary checks based upon the additional redundant
bits. If it finds that the data is free from errors, it removes the redundant bits before passing the
message to the upper layers.

▪ Error Detection Techniques:


There are three main techniques for detecting errors in frames: Parity Check, Checksum
and Cyclic Redundancy Check (CRC).

▪ Parity Check:
The parity check is done by adding an extra bit, called the parity bit, to the data so that the number of 1s is either even (even parity) or odd (odd parity).
While creating a frame, the sender counts the number of 1s in it and adds the parity bit in the following way:
• In case of even parity: If the number of 1s is even, the parity bit value is 0. If the number of 1s is odd, the parity bit value is 1.
• In case of odd parity: If the number of 1s is odd, the parity bit value is 0. If the number of 1s is even, the parity bit value is 1.
On receiving a frame, the receiver counts the number of 1s in it. In case of even parity
check, if the count of 1s is even, the frame is accepted, otherwise, it is rejected. A similar rule is
adopted for odd parity check.
The parity check is suitable for single bit error detection only.
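The rule above is easy to express in a few lines of Python. The following is a minimal sketch for even parity; the bit pattern in the example is arbitrary.

def parity_bit(data_bits, even=True):
    # Parity bit that makes the total count of 1s even (or odd, if even=False).
    bit = sum(data_bits) % 2          # 1 when the count of 1s in the data is odd
    return bit if even else 1 - bit

def accept_even_parity(frame_bits):
    # The receiver accepts the frame only if the total number of 1s is even.
    return sum(frame_bits) % 2 == 0

data = [1, 0, 1, 1, 0, 0, 1]                      # four 1s, so the even-parity bit is 0
frame = data + [parity_bit(data)]
print(accept_even_parity(frame))                  # True
frame[2] ^= 1                                     # a single-bit error in transit...
print(accept_even_parity(frame))                  # ...is detected: False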

▪ Checksum:
In this error detection scheme, the following procedure is applied (a small sketch follows the steps below):
• Data is divided into fixed-sized frames or segments.
• The sender adds the segments using 1’s complement arithmetic to get the sum. It then
complements the sum to get the checksum and sends it along with the data frames.
• The receiver adds the incoming segments along with the checksum using 1’s complement
arithmetic to get the sum and then complements it.
• If the result is zero, the received frames are accepted; otherwise, they are discarded.
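A minimal sketch of this procedure, assuming 16-bit segments and 1's complement (end-around carry) addition; the sample segment values are made up for illustration.

def ones_complement_sum(segments, bits=16):
    # Add the segments, folding any carry back into the low-order bits.
    mask = (1 << bits) - 1
    total = 0
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> bits)
    return total

def make_checksum(segments, bits=16):
    # Sender: complement of the 1's complement sum of all data segments.
    return ~ones_complement_sum(segments, bits) & ((1 << bits) - 1)

def frames_ok(segments, checksum, bits=16):
    # Receiver: the sum of the data segments plus the checksum must complement to zero.
    return make_checksum(list(segments) + [checksum], bits) == 0

data = [0x4500, 0x0073, 0x0000, 0x4000]
csum = make_checksum(data)
print(frames_ok(data, csum))                   # True for an error-free transmission
print(frames_ok([0x4501] + data[1:], csum))    # a corrupted segment is detected: False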

▪ Cyclic Redundancy Check (CRC):


Cyclic Redundancy Check (CRC) involves binary division of the data bits being sent by a predetermined divisor agreed upon by the communicating systems. The divisor is generated using polynomials. A worked sketch follows the steps below.
• Here, the sender performs binary division of the data segment by the divisor. It then
appends the remainder called CRC bits to the end of the data segment. This makes the
resulting data unit exactly divisible by the divisor.
• The receiver divides the incoming data unit by the divisor. If there is no remainder, the
data unit is assumed to be correct and is accepted. Otherwise, it is understood that the data
is corrupted and is therefore rejected.
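The division can be sketched with simple XOR arithmetic on bit strings. The divisor 1101 (x^3 + x^2 + 1) and the data word below are only illustrative choices, not part of any particular standard.

def crc_remainder(bits, divisor, append_zeros=True):
    # Modulo-2 (XOR) long division; returns the remainder as a bit string.
    k = len(divisor) - 1
    work = [int(b) for b in bits] + ([0] * k if append_zeros else [])
    div = [int(b) for b in divisor]
    for i in range(len(work) - k):
        if work[i] == 1:                       # divide only when the leading bit is 1
            for j, d in enumerate(div):
                work[i + j] ^= d
    return "".join(str(b) for b in work[-k:])

data, divisor = "100100", "1101"
crc = crc_remainder(data, divisor)             # sender appends k zero bits, then divides
print(crc)                                     # '001' -> transmitted unit is 100100001
received = data + crc
print(crc_remainder(received, divisor, append_zeros=False))   # '000' -> accepted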

▪ Error Correction Techniques:


Error correction techniques find out the exact number of bits that have been corrupted as well as their locations. There are two principal approaches:
• Backward Error Correction (Retransmission): If the receiver detects an error in the incoming frame, it requests the sender to retransmit the frame. It is a relatively simple technique, but it can be used efficiently only where retransmission is not expensive (as in fiber optics) and the time for retransmission is low relative to the requirements of the application.
• Forward Error Correction: If the receiver detects some error in the incoming frame, it
executes error-correcting code that generates the actual frame. This saves bandwidth
required for retransmission. It is inevitable in real-time systems. However, if there are too
many errors, the frames need to be retransmitted.
The four main error correction codes are as follows (a compact Hamming-code sketch follows the list):
• Hamming Codes
• Binary Convolution Code
• Reed-Solomon Code
• Low-Density Parity-Check Code
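As a concrete taste of forward error correction, here is a compact Python sketch of the classic Hamming(7,4) code: 4 data bits are protected by 3 parity bits, and any single corrupted bit can be located and flipped back. The bit values used in the example are arbitrary.

def hamming74_encode(d1, d2, d3, d4):
    # Parity bits cover positions (1,3,5,7), (2,3,6,7) and (4,5,6,7) of the codeword.
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(code):
    # The syndrome gives the 1-based position of a single-bit error (0 means no error).
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        c[pos - 1] ^= 1
    return c

word = hamming74_encode(1, 0, 1, 1)
damaged = list(word)
damaged[4] ^= 1                                   # flip one bit in transit
print(hamming74_correct(damaged) == word)         # True: the error is located and repaired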
▪ Elementary Data Link Protocols:
Protocols in the data link layer are designed so that this layer can perform its basic
functions: framing, error control and flow control. Framing is the process of dividing bit streams from the physical layer into data frames whose size ranges from a few hundred to a few thousand bytes. Error control mechanisms deal with transmission errors and retransmission of corrupted and lost frames. Flow control regulates the speed of delivery so that a fast sender does not drown a slow receiver.

▪ Types of Data Link Protocols:


Data link protocols can be broadly divided into two categories, depending on whether the
transmission channel is noiseless or noisy.

▪ Simplex Protocol:
The Simplex protocol is a hypothetical protocol designed for unidirectional data transmission over an ideal channel, i.e. a channel through which transmission can never go wrong. It has distinct procedures for sender and receiver. The sender simply sends all its data onto the channel as soon as it is available in its buffer. The receiver is assumed to process all incoming data instantly. It is hypothetical since it does not handle flow control or error control.
▪ Stop-and-Wait Protocol:
Stop-and-Wait protocol is for noiseless channel too. It provides unidirectional data
transmission without any error control facilities. However, it provides for flow control so that a
fast sender does not drown a slow receiver. The receiver has a finite buffer size with finite
processing speed. The sender can send a frame only when it has received indication from the
receiver that it is available for further data processing.
In this protocol we assume that data is transmitted in one direction only. No error occurs;
the receiver can only process the received information at finite rate. These assumptions imply
that the transmitter cannot send frames at rate faster than the receiver can process them.
The main problem here is how to prevent the sender from flooding the receiver. The
general solution for this problem is to have the receiver send some sort of feedback to sender,
the process is as follows:

Step 1: The receiver sends an acknowledgement frame back to the sender, telling the sender that the last received frame has been processed and passed to the host.
Step 2: Permission to send the next frame is granted.
Step 3: The sender, after sending a frame, has to wait for an acknowledgement frame from the receiver before sending another frame.

This protocol is called the Simplex Stop-and-Wait protocol: the sender sends one frame and waits for feedback from the receiver. When the ACK arrives, the sender sends the next frame. The Simplex Stop-and-Wait protocol is diagrammatically represented as follows; a toy simulation is also sketched below.
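The lockstep exchange can be sketched with a purely in-memory Python simulation (no real network I/O; the queues stand in for the channel and are an assumption made only for illustration).

import queue

def stop_and_wait_demo(frames):
    # The sender may transmit the next frame only after the previous one is acknowledged.
    data_channel, ack_channel = queue.Queue(), queue.Queue()
    delivered = []
    for frame in frames:
        data_channel.put(frame)          # sender transmits a single frame
        received = data_channel.get()    # receiver picks it up and passes it to the host
        delivered.append(received)
        ack_channel.put("ACK")           # step 1: receiver sends the acknowledgement back
        ack_channel.get()                # steps 2-3: sender waits for the ACK before the next frame
    return delivered

print(stop_and_wait_demo(["frame 0", "frame 1", "frame 2"]))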

▪ Go-Back-N ARQ:
Go-Back-N ARQ provides for sending multiple frames before receiving the
acknowledgement for the first frame. It uses the concept of sliding window, and so is also called
sliding window protocol. The frames are sequentially numbered and a finite number of frames
are sent. If the acknowledgement of a frame is not received within the time period, all frames starting from that frame are retransmitted, as illustrated by the sketch below.
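A rough sketch of this behaviour, assuming a window of 4 frames and a channel that loses frame number 2 on its first transmission (both assumptions are arbitrary and only for illustration).

def go_back_n(num_frames, window=4, lost_on_first_try={2}):
    # Toy Go-Back-N: a missing acknowledgement makes the sender resend from that frame onward.
    base = 0                                           # oldest unacknowledged frame
    transmissions = []
    pending_losses = set(lost_on_first_try)
    while base < num_frames:
        outstanding = list(range(base, min(base + window, num_frames)))
        transmissions.extend(outstanding)              # send every frame in the window
        for seq in outstanding:                        # receiver accepts frames only in order
            if seq in pending_losses:
                pending_losses.discard(seq)            # it will get through when resent
                break                                  # everything after it is ignored
            base = seq + 1                             # cumulative ACK slides the window
    return transmissions

print(go_back_n(6))   # [0, 1, 2, 3, 2, 3, 4, 5] -> frames 2 and 3 are sent twice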
▪ Selective Repeat ARQ:
This protocol also provides for sending multiple frames before receiving the
acknowledgement for the first frame. However, here only the erroneous or lost frames are
retransmitted, while the good frames are received and buffered.

▪ Simplex Protocol for Noisy Channel:


Data transfer is only in one direction. Consider a separate sender and receiver, with finite processing capacity and speed at the receiver. Since it is a noisy channel, errors in data frames or acknowledgement frames are expected. Every frame has a unique sequence number.
After a frame has been transmitted, a timer is started for a finite time. If the acknowledgement is not received before the timer expires, the frame is retransmitted. The timer also covers the cases where the acknowledgement gets corrupted or the sent data frame gets damaged; without it, the sender would wait indefinitely before transmitting the next frame.

The Simplex Protocol for Noisy Channel is diagrammatically represented as follows:

▪ What is channel allocation in computer network?


When more than one user desires to access a shared network channel, an algorithm is deployed for channel allocation among the competing users. The network channel
may be a single cable or optical fiber connecting multiple nodes, or a portion of the wireless
spectrum. Channel allocation algorithms allocate the wired channels and bandwidths to the
users, who may be base stations, access points or terminal equipment.
▪ Channel Allocation Schemes:
Channel Allocation may be done using two schemes:
• Static Channel Allocation
• Dynamic Channel Allocation
▪ Static Channel Allocation:
In static channel allocation scheme, a fixed portion of the frequency channel is allotted to
each user. For N competing users, the bandwidth is divided into N channels using frequency
division multiplexing (FDM), and each portion is assigned to one user.
This scheme is also referred to as fixed channel allocation or fixed channel assignment.
In this allocation scheme, there is no interference between the users since each user is assigned a
fixed channel. However, it is not suitable in case of a large number of users with variable
bandwidth requirements.

▪ Dynamic Channel Allocation:


In the dynamic channel allocation scheme, frequency bands are not permanently assigned to the users. Instead, channels are allotted to users dynamically as needed, from a central pool. The allocation is done considering a number of parameters so that transmission interference is minimized.
This allocation scheme optimizes bandwidth usage and results in faster transmissions.
Dynamic channel allocation is further divided into centralized and distributed allocation.

▪ Multiple Access Protocols in Computer Network:


The Data Link Layer is responsible for transmission of data between two nodes. Its main functions are:
• Data Link Control
• Multiple Access Control

▪ Data Link control:


The data link control is responsible for reliable transmission of messages over the transmission channel by using techniques like framing, error control and flow control. For data link control, refer to Stop-and-Wait ARQ.

▪ Multiple Access Control:


If there is a dedicated link between the sender and the receiver, then the data link control layer is sufficient. However, if there is no dedicated link, multiple stations can access the channel simultaneously, so multiple access protocols are required to decrease collisions and avoid crosstalk. For example, in a classroom full of students, when a teacher asks a question and all the students (stations) start answering simultaneously (send data at the same time), a lot of chaos is created (data overlaps or is lost); it is then the job of the teacher (the multiple access protocol) to manage the students and make them answer one at a time.
The types of multiple access protocols, subdivided into different categories, are as follows:

A. Random Access Protocol:


In this protocol, all stations have equal priority to send data over the channel. In a random access protocol, no station depends on another station, and no station controls another station. Depending on the channel's state (idle or busy), each station transmits its data frame. However, if more than one station sends data over the channel, there may be a collision or data conflict. Due to the collision, the data frame packets may be lost or changed, and hence are not received by the receiver.
Following are the different methods of random-access protocols for broadcasting frames on the channel:
• Aloha
• CSMA
• CSMA/CD
• CSMA/CA

▪ ALOHA Random Access Protocol:

ALOHA is designed for wireless LANs (Local Area Networks) but can also be used in a shared medium to transmit data. Using this method, any station can transmit data across the network whenever a data frame is available for transmission.

▪ Aloha Rules:
1. Any station can transmit data to a channel at any time.
2. It does not require any carrier sensing.
3. Collision and data frames may be lost during the transmission of data through multiple
stations.
4. Acknowledgement of the frames exists in Aloha; there is no collision detection.
5. It requires retransmission of data after some random amount of time.
▪ Pure Aloha:
Whenever data is available for sending over a channel at a station, we use Pure Aloha. In
pure Aloha, each station transmits data to the channel without checking whether the channel
is idle or not, so collisions may occur and the data frame can be lost. When a station transmits a
data frame to the channel, pure Aloha waits for the receiver's acknowledgement. If the
acknowledgement does not arrive within the specified time, the station waits for a random
amount of time, called the back-off time (Tb), and assumes the frame has been lost or destroyed.
It then retransmits the frame until all the data is successfully transmitted to the receiver.
1. The total vulnerable time of pure Aloha is 2 * Tfr.
2. Maximum throughput occurs when G = 1/2, that is 18.4%.
3. Successful transmission of a data frame is S = G * e ^ (-2G).

As we can see in the figure above, there are four stations accessing a shared channel
and transmitting data frames. Some frames collide because most stations send their frames at the
same time. Only two frames, frame 1.1 and frame 2.2, are successfully transmitted to the
receiver end; the other frames are lost or destroyed. Whenever two frames occupy the shared
channel simultaneously, a collision occurs and both suffer damage: even if only the first bit of a
new frame overlaps with the last bit of a frame that is almost finished, both frames are
completely destroyed, and both stations must retransmit their data frames.

▪ Slotted Aloha:
Slotted Aloha is designed to improve on pure Aloha's efficiency, because pure Aloha has
a very high possibility of frame collision. In slotted Aloha, the shared channel is divided into
fixed time intervals called slots. If a station wants to send a frame to the shared channel, the
frame can only be sent at the beginning of a slot, and only one frame is allowed to be sent in
each slot. If a station misses the beginning of a slot, it must wait until the beginning of the next
slot. However, the possibility of a collision remains when two or more stations try to send a
frame at the beginning of the same time slot.
1. Maximum throughput occurs in slotted Aloha when G = 1, that is about 37%.
2. The probability of successfully transmitting a data frame in slotted Aloha is S = G * e ^ (-G).
3. The total vulnerable time required in slotted Aloha is Tfr.
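The two throughput formulas above can be checked numerically. The following is a small
illustrative calculation (not part of the original notes); the function names are chosen only for
this sketch:

import math

def pure_aloha_throughput(G):
    """Throughput of pure Aloha: S = G * e^(-2G)."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """Throughput of slotted Aloha: S = G * e^(-G)."""
    return G * math.exp(-G)

# Maximum of pure Aloha occurs at G = 1/2, of slotted Aloha at G = 1.
print(f"Pure Aloha max   : {pure_aloha_throughput(0.5):.3f}")    # ~0.184 (18.4%)
print(f"Slotted Aloha max: {slotted_aloha_throughput(1.0):.3f}")  # ~0.368 (36.8%)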

▪ CSMA (Carrier Sense Multiple Access):


Carrier sense multiple access is a media access protocol in which a station senses the traffic on
the channel (idle or busy) before transmitting data. If the channel is idle, the station can send
data to the channel; otherwise, it must wait until the channel becomes idle. Hence, it reduces the
chances of a collision on the transmission medium.
▪ CSMA Access Modes:
• 1-Persistent: In the 1-Persistent mode of CSMA, each node first senses the shared channel
and, if the channel is idle, it immediately sends the data. Otherwise it keeps monitoring
the status of the channel and transmits the frame unconditionally as soon as the channel
becomes idle.
• Non-Persistent: In this access mode of CSMA, before transmitting data, each node must
sense the channel; if the channel is inactive, it immediately sends the data. Otherwise,
the station waits for a random time (it does not sense continuously), and when the
channel is found to be idle, it transmits the frame.
• P-Persistent: It is a combination of the 1-Persistent and Non-Persistent modes. The P-
Persistent mode defines that each node senses the channel and, if the channel is inactive,
it sends a frame with probability p. If the frame is not transmitted, it defers with
probability q = 1 - p to the next time slot and repeats the process (a minimal sketch of
this logic is given after this list).
• O-Persistent: The O-Persistent method defines the priority order of the stations before
transmission of frames on the shared channel. If the channel is found to be inactive, each
station waits for its turn to transmit the data.
▪ CSMA/ CD:
It is the carrier sense multiple access / collision detection network protocol used to transmit data
frames. The CSMA/CD protocol works with a medium access control layer. It first senses the
shared channel before broadcasting a frame, and if the channel is idle, it transmits the frame
while checking whether the transmission is successful. If the frame is successfully received, the
station sends the next frame. If a collision is detected, the station sends a jam/stop signal to the
shared channel to terminate the data transmission. After that, it waits for a random time before
re-sending the frame on the channel (a sketch of this back-off is given below).
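The random wait after a collision is commonly implemented as binary exponential backoff, as
in classic Ethernet. The sketch below illustrates the idea; the slot time constant and the limits
are assumptions chosen for illustration, not values taken from these notes:

import random

SLOT_TIME = 51.2e-6        # classic 10 Mb/s Ethernet slot time in seconds (assumed)
MAX_BACKOFF_EXPONENT = 10  # exponent stops growing after 10 collisions (assumed)

def backoff_delay(attempt):
    """Binary exponential backoff: wait a random number of slot times
    in the range [0, 2^k - 1], where k grows with each collision."""
    k = min(attempt, MAX_BACKOFF_EXPONENT)
    return random.randint(0, 2 ** k - 1) * SLOT_TIME

# Example: delays chosen after the 1st, 2nd and 3rd collision of a frame.
for attempt in range(1, 4):
    print(f"collision {attempt}: wait {backoff_delay(attempt) * 1e6:.1f} us")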

▪ CSMA/CA:
It is a carrier sense multiple access / collision avoidance network protocol for the transmission
of data frames. It is a protocol that works with a medium access control layer. When a data
frame is sent on the channel, the sender receives an acknowledgement to check whether the
channel is clear. If the station receives only a single (its own) acknowledgement, the data frame
has been successfully transmitted to the receiver. But if it gets two signals (its own and one
more, because frames have collided), a collision of frames has occurred in the shared channel.
Thus the sender detects a collision from the acknowledgement signals it receives.
Following are the methods used in the CSMA/ CA to avoid the collision:
• Interframe space: In this method, the station waits for the channel to become idle, and if
it finds the channel idle, it does not immediately send the data. Instead, it waits for some
time; this time period is called the interframe space or IFS. The IFS time is often used to
define the priority of a station.
• Contention window: In the contention window method, the total time is divided into
slots. When the station/sender is ready to transmit a data frame, it chooses a random
number of slots as its wait time. If the channel is still busy, it does not restart the entire
process; it only restarts the timer when the channel becomes idle again, and then sends
the data packets.
• Acknowledgement: In the acknowledgement method, the sender station retransmits the
data frame if the acknowledgement is not received within the expected time.

B. Controlled Access Protocol:


It is a method of reducing data frame collisions on a shared channel. In the controlled
access method, the stations consult one another, and a station can send a data frame only when
it is approved by all other stations. In other words, a single station cannot send data frames
unless it has been authorized by the other stations. There are three types of controlled access:
Reservation, Polling, and Token Passing.

C. Channelization Protocols:
A channelization protocol allows the total usable bandwidth of a shared channel to be
shared across multiple stations based on time, frequency or code. All stations can access the
channel at the same time to send their data frames.
Following are the various methods to access the channel based on time, frequency and
code:
1. FDMA (Frequency Division Multiple Access)
2. TDMA (Time Division Multiple Access)
3. CDMA (Code Division Multiple Access)
▪ FDMA:
Frequency division multiple access (FDMA) is a method used to divide the available
bandwidth into equal bands so that multiple users can send data through different frequency
sub-channels. Each station is reserved a particular band to prevent crosstalk between the
channels and interference between stations.

▪ TDMA:
Time Division Multiple Access (TDMA) is a channel access method. It allows the same
frequency bandwidth to be shared across multiple stations. To avoid collisions on the shared
channel, it divides the channel into time slots that are allocated to stations for transmitting their
data frames. The same frequency band is shared by dividing the signal into various time slots.
However, TDMA has a synchronization overhead, since each station's time slot must be marked
by adding synchronization bits to each slot.

▪ CDMA:
Code division multiple access (CDMA) is a channel access method. In CDMA, all
stations can simultaneously send data over the same channel. It means that each station can
transmit its data frames with the full bandwidth of the shared channel at all times. It does not
require division of the bandwidth of the shared channel into time slots. If multiple stations send
data on the channel simultaneously, their data frames are separated by unique code sequences:
each station has a different unique code for transmitting data over the shared channel. For
example, there are multiple users in a room who are continuously speaking. Data is received by
a pair of users only if they interact with each other using the same language. Similarly, in the
network, different stations can communicate with each other simultaneously using different
code languages.
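The "code language" idea can be illustrated with short orthogonal chip sequences. The
following sketch is an assumption-based illustration using 4-chip Walsh-like codes (not taken
from the notes): each station's bit is spread by its own code, the encoded signals add up on the
shared channel, and each receiver recovers its bit by correlating the channel signal with its own
code.

# Orthogonal chip sequences (Walsh codes of length 4), one per station.
codes = {
    "A": [+1, +1, +1, +1],
    "B": [+1, -1, +1, -1],
    "C": [+1, +1, -1, -1],
}

# Each station wants to send one data bit, represented as +1 or -1.
bits = {"A": +1, "B": -1, "C": +1}

# On the shared channel the encoded signals simply add up.
channel = [0, 0, 0, 0]
for station, bit in bits.items():
    for i, chip in enumerate(codes[station]):
        channel[i] += bit * chip

def decode(station):
    """Correlate the channel signal with the station's own code."""
    code = codes[station]
    return sum(s * c for s, c in zip(channel, code)) // len(code)

for station in bits:
    print(station, decode(station))   # recovers +1, -1, +1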

▪ Network Layer:
Layer-3 in the OSI model is called the Network layer. The Network layer manages options
pertaining to host and network addressing, managing sub-networks, and internetworking.
The Network layer takes the responsibility for routing packets from source to destination
within or outside a subnet. Two different subnets may have different addressing schemes or
incompatible addressing types. Similarly, two different subnets may be operating on different
protocols which are not compatible with each other. The Network layer has the responsibility to
route the packets from source to destination, mapping between the different addressing schemes
and protocols.

▪ Layer-3 Functionalities:
Devices which work on Network Layer mainly focus on routing. Routing may include
various tasks aimed to achieve a single goal. These can be:
• Addressing devices and networks.
• Populating routing tables or static routes.
• Queuing incoming and outgoing data and then forwarding them according to quality of
service constraints set for those packets.
• Internetworking between two different subnets.
• Delivering packets to destination with best efforts.
• Providing connection-oriented and connectionless mechanisms.

▪ Network Layer Features:


With its standard functionalities, Layer 3 can provide various features as:
• Quality of service management
• Load balancing and link management
• Security
• Interrelation of different protocols and subnets with different schema.
• Different logical network design over the physical network design.
• L3 VPN and tunnels can be used to provide end to end dedicated connectivity.
The Internet Protocol is the most widely deployed Network Layer protocol; it helps end
devices communicate with each other over the internet. It comes in two flavors: IPv4, which has
ruled the world for decades but is now running out of address space, and IPv6, which was
created to replace IPv4 and mitigate its limitations.

▪ Network Addressing:
Layer 3 network addressing is one of the major tasks of Network Layer. Network
Addresses are always logical i.e. these are software based addresses which can be changed by
appropriate configurations.
A network address always points to host / node / server or it can represent a whole
network. Network address is always configured on network interface card and is generally
mapped by system with the MAC address (hardware address or layer-2 address) of the machine
for Layer-2 communication.
There are different kinds of network addresses in existence:
• IP
• IPX
• AppleTalk
We are discussing IP here as it is the only one we use in practice these days.

IP addressing provides a mechanism to differentiate between hosts and networks. Because
IP addresses are assigned in a hierarchical manner, a host always resides under a specific network.
The host which needs to communicate outside its subnet, needs to know destination network
address, where the packet/data is to be sent.
Hosts in different subnet need a mechanism to locate each other. This task can be done
by DNS. DNS is a server which provides Layer-3 address of remote host mapped with its
domain name or FQDN. When a host acquires the Layer-3 Address (IP Address) of the remote
host, it forwards its entire packet to its gateway. A gateway is a router equipped with all the
information which leads to route packets to the destination host.
Routers take the help of routing tables, which have the following information:
• Method to reach the network
Routers, upon receiving a forwarding request, forward the packet to their next hop (adjacent
router) towards the destination.
The next router on the path does the same thing, and eventually the data packet reaches
its destination.
Network address can be of one of the following:
• Unicast (destined to one host)
• Multicast (destined to group)
• Broadcast (destined to all)
• Anycast (destined to nearest one)
A router never forwards broadcast traffic by default. Multicast traffic is given special
treatment as it is mostly video or audio streams with the highest priority. Anycast is similar to
unicast, except that packets are delivered to the nearest destination when multiple destinations
are available.

▪ Network Layer Routing:


When a device has multiple paths to reach a destination, it always selects one path by
preferring it over others. This selection process is termed as Routing. Routing is done by special
network devices called routers or it can be done by means of software processes. The software
based routers have limited functionality and limited scope.
A router is always configured with some default route. A default route tells the router
where to forward a packet if there is no route found for specific destination. In case there are
multiple path existing to reach the same destination, router can make decision based on the
following information:
• Hop Count
• Bandwidth
• Metric
• Prefix-length
• Delay
Routes can be statically configured or dynamically learnt. One route can be configured to
be preferred over others.
▪ Unicast Routing:
Most of the traffic on the internet and intranets, known as unicast data or unicast traffic, is
sent with a specified destination. Routing unicast data over the internet is called unicast routing.
It is the simplest form of routing because the destination is already known. Hence the router just
has to look up the routing table and forward the packet to the next hop.
▪ Broadcast routing:
By default, the broadcast packets are not routed and forwarded by the routers on any
network. Routers create broadcast domains. But it can be configured to forward broadcasts in
some special cases. A broadcast message is destined to all network devices.
Broadcast routing can be done in two ways (algorithm):
• A router creates a data packet and then sends it to each host one by one. In this case, the
router creates multiple copies of single data packet with different destination addresses. All
packets are sent as unicast but because they are sent to all, it simulates as if router is
broadcasting.
This method consumes a lot of bandwidth, and the router must know the destination address of each node.
• Secondly, when router receives a packet that is to be broadcasted, it simply floods those
packets out of all interfaces. All routers are configured in the same way.

This method is easy on the router's CPU but may cause the problem of duplicate packets
being received from peer routers.
Reverse path forwarding is a technique in which the router knows in advance about its
predecessor, from where it should receive the broadcast. This technique is used to detect and
discard duplicates.

▪ Multicast Routing:
Multicast routing is a special case of broadcast routing, with significant differences and
challenges. In broadcast routing, packets are sent to all nodes even if they do not want them.
But in multicast routing, the data is sent only to nodes which want to receive the packets.
The router must know that there are nodes which wish to receive the multicast packets (or
stream); only then should it forward them. Multicast routing uses a spanning tree to avoid
looping.
Multicast routing also uses reverse path Forwarding technique, to detect and discard
duplicates and loops.
▪ Anycast Routing:
Anycast packet forwarding is a mechanism where multiple hosts can have the same logical
address. When a packet destined to this logical address is received, it is sent to the host which is
nearest in the routing topology.
Anycast routing is done with the help of a DNS server. Whenever an anycast packet is
received, DNS is consulted about where to send it; DNS provides the IP address which is the
nearest one configured for it.

▪ Unicast Routing Protocols:


There are two kinds of routing protocols available to route unicast packets:
• Distance Vector Routing Protocol:
Distance Vector is a simple routing protocol which takes routing decisions based on the
number of hops between source and destination. A route with a smaller number of hops is
considered the best route. Every router advertises its best routes to other routers.
Ultimately, all routers build up their network topology based on the advertisements of
their peer routers. An example is the Routing Information Protocol (RIP); a small
sketch of the distance-vector update rule follows this list.
• Link State Routing Protocol:
Link State is a slightly more complicated protocol than Distance Vector. It takes
into account the states of the links of all the routers in the network. This technique helps
routers build a common graph of the entire network. All routers then calculate their best
paths for routing purposes; examples are Open Shortest Path First (OSPF) and Intermediate
System to Intermediate System (IS-IS).
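As mentioned under Distance Vector above, the core of such a protocol is a Bellman-Ford style
update: keep a (cost, next hop) entry per destination and replace it whenever a neighbour
advertises a cheaper path. A minimal sketch, with data structures invented only for illustration,
might look like this:

def dv_update(table, neighbour, neighbour_vector, link_cost):
    """Update this router's distance-vector table from one neighbour's advertisement.

    table            -- dict: destination -> (cost, next_hop)
    neighbour_vector -- dict: destination -> cost as advertised by the neighbour
    link_cost        -- cost of the direct link to that neighbour
    """
    changed = False
    for dest, adv_cost in neighbour_vector.items():
        new_cost = link_cost + adv_cost
        if dest not in table or new_cost < table[dest][0]:
            table[dest] = (new_cost, neighbour)
            changed = True
    return changed

# Example: router A hears an advertisement from neighbour B (link cost 1).
table_a = {"A": (0, "A"), "B": (1, "B")}
dv_update(table_a, "B", {"C": 2, "D": 5}, link_cost=1)
print(table_a)   # C reachable via B at cost 3, D via B at cost 6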

▪ Multicast Routing Protocols:


Unicast routing protocols use graphs while Multicast routing protocols use trees, i.e.
spanning tree to avoid loops. The optimal tree is called shortest path spanning tree.
• DVMRP: Distance Vector Multicast Routing Protocol
• MOSPF: Multicast Open Shortest Path First
• CBT: Core Based Tree
• PIM: Protocol Independent Multicast
Protocol Independent Multicast is commonly used now. It has two flavors:
• PIM Dense Mode:
This mode uses source-based trees. It is used in dense environment such as LAN.
• PIM Sparse Mode:
This mode uses shared trees. It is used in sparse environment such as WAN.

▪ Routing Algorithms:
The routing algorithms are as follows:
▪ Flooding:
Flooding is the simplest method of packet forwarding. When a packet is received, the router
sends it out on all interfaces except the one on which it was received. This creates too much
burden on the network, with lots of duplicate packets wandering in the network.
Time to Live (TTL) can be used to avoid infinite looping of packets. There exists another
approach to flooding, called Selective Flooding, that reduces the overhead on the network: the
router does not flood out on all interfaces, but only on selected ones.
▪ Shortest Path:
Routing decisions in networks are mostly taken on the basis of the cost between source and
destination. Hop count plays a major role here. Shortest path is a technique which uses various
algorithms to decide a path with the minimum number of hops; a sketch of Dijkstra's algorithm
is given after the list below.
Common shortest path algorithms are:
• Dijkstra's algorithm
• Bellman Ford algorithm
• Floyd Warshall algorithm
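A minimal sketch of Dijkstra's algorithm on a small graph is shown below; the graph, its link
costs and the function name are made up purely for illustration:

import heapq

def dijkstra(graph, source):
    """Return the least-cost distance from `source` to every reachable node.
    `graph` maps a node to a dict of {neighbour: link_cost}."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry, already improved
        for neighbour, cost in graph[node].items():
            new_dist = d + cost
            if new_dist < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_dist
                heapq.heappush(queue, (new_dist, neighbour))
    return dist

graph = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 1, 'C': 3, 'D': 4}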

▪ Network Layer Design Issues:


The network layer or layer 3 of the OSI (Open Systems Interconnection) model is
concerned with the delivery of data packets from the source to the destination across multiple
hops or links. It is the lowest layer that is concerned with end-to-end transmission. The
designers of this layer need to cater to certain issues. These issues encompass the services
provided to the upper layers as well as the internal design of the layer.
The design issues can be elaborated under four heads:
• Store-and-Forward Packet Switching
• Services to Transport Layer
• Providing Connection Oriented Service
• Providing Connectionless Service

▪ Store-and-Forward Packet Switching:


The network layer operates in an environment that uses store-and-forward packet
switching. The node which has a packet to send delivers it to the nearest router. The packet is
stored in the router until it has fully arrived and its checksum has been verified for error
detection. Once this is done, the packet is forwarded to the next router. Since each router needs
to store the entire packet before it can forward it to the next hop, the mechanism is called
store-and-forward switching.

▪ Services to Transport Layer:


The network layer provides services to its immediate upper layer, namely the transport layer,
through the network-transport layer interface. The two types of services provided are:
• Connection-Oriented Service − In this service, a path is set up between the source and the
destination, and all the data packets belonging to a message are routed along this path.
• Connectionless Service − In this service, each packet of the message is considered as an
independent entity and is individually routed from the source to the destination.
The objectives of the network layer while providing these services are:
• The services should not be dependent upon the router technology.
• The router configuration details should not be of a concern to the transport layer.
• A uniform addressing plan should be made available to the transport layer, whether the
network is a LAN, MAN or WAN.

▪ Providing Connection Oriented Service:


In connection-oriented services, a path or route called a virtual circuit is set up between
the source and the destination nodes before the transmission starts. All the packets in the
message are sent along this route. Each packet contains an identifier that denotes the virtual
circuit to which it belongs. When all the packets have been transmitted, the virtual circuit is
terminated and the connection is released. An example of a connection-oriented service is
Multi Protocol Label Switching (MPLS).

▪ Providing Connectionless Service:


In connectionless service, since each packet is transmitted independently, each packet
contains its own routing information and is termed a datagram. A network using datagrams for
transmission is called a datagram network or datagram subnet. No prior setup of routes is
needed before transmitting a message. Each datagram belonging to the message follows its own
individual route from the source to the destination. An example of a connectionless service is
the Internet Protocol (IP).

▪ Routing Algorithm in Computer Network:


A routing algorithm is a procedure that lays down the route or path to transfer data
packets from source to the destination. They help in directing Internet traffic efficiently. After a
data packet leaves its source, it can choose among the many different paths to reach its
destination. The routing algorithm mathematically computes the best path, i.e. the “least-cost
path”, through which the packet can be routed.

▪ Types of Routing Algorithms:


Routing algorithms can be broadly categorized into two types: adaptive and non-adaptive
routing algorithms. They can be further categorized as described below.

▪ Adaptive Routing Algorithms:


Adaptive routing algorithms, also known as dynamic routing algorithms, make routing
decisions dynamically depending on the network conditions. They construct the routing table
depending upon the network traffic and topology, and try to compute the optimized route
depending upon the hop count, transit time and distance.
The three popular types of adaptive routing algorithms are:
• Centralized algorithm: It finds the least-cost path between source and destination nodes
by using global knowledge about the network. So, it is also known as a global routing
algorithm.
• Isolated algorithm: This algorithm procures the routing information by using local
information instead of gathering information from other nodes.
• Distributed algorithm: This is a decentralized algorithm that computes the least-cost path
between source and destination iteratively in a distributed manner.

▪ Non-Adaptive Routing Algorithms:
Non-adaptive routing algorithms, also known as static routing algorithms, construct a
static routing table to determine the path through which packets are to be sent. The static
routing table is constructed based upon the routing information stored in the routers when the
network is booted up.
The two types of non-adaptive routing algorithms are:
• Flooding: In flooding, when a data packet arrives at a router, it is sent to all the outgoing
links except the one it has arrived on. Flooding may be uncontrolled, controlled or
selective flooding.
• Random walks: This is a probabilistic algorithm where a data packet is sent by the router
to any one of its neighbours randomly.

▪ Internetworking in Computer Networks:
In a real-world scenario, networks under the same administration are generally scattered
geographically. There may exist a requirement of connecting two different networks of the
same kind as well as of different kinds. Routing between two networks is called
internetworking.
Networks can be considered different based on various parameters such as protocol,
topology, Layer-2 network and addressing scheme.
In internetworking, routers have knowledge of each other's addresses and of addresses
beyond them. They can be statically configured to go on different networks, or they can learn
by using an internetworking routing protocol.
Routing protocols which are used within an organization or administration are called
Interior Gateway Protocols or IGP. RIP and OSPF are examples of IGPs. Routing between
different organizations or administrations uses an Exterior Gateway Protocol, and there is only
one EGP, i.e. the Border Gateway Protocol.

▪ Tunneling:
If there are two geographically separate networks which want to communicate with each
other, they may deploy a dedicated line between them, or they have to pass their data through
intermediate networks.
Tunneling is a mechanism by which two or more networks of the same kind communicate
with each other by bypassing intermediate networking complexities. Tunneling is configured at
both ends.
When the data enters at one end of the tunnel, it is tagged. This tagged data is then routed
inside the intermediate or transit network to reach the other end of the tunnel. When data exits
the tunnel, its tag is removed and it is delivered to the other part of the network.
Both ends appear as if they are directly connected, and tagging makes data travel through
the transit network without any modifications.

▪ Packet Fragmentation:
Most Ethernet segments have their maximum transmission unit (MTU) fixed at 1500
bytes. A data packet can have a larger or smaller length depending upon the application.
Devices in the transit path also have hardware and software capabilities which determine what
amount of data the device can handle and what size of packet it can process.
If the data packet size is less than or equal to the size of packet the transit network can
handle, it is processed as is. If the packet is larger, it is broken into smaller pieces and then
forwarded. This is called packet fragmentation. Each fragment contains the same destination
and source address and is routed through the transit path easily. At the receiving end, the
fragments are reassembled.
If a packet with the DF (don't fragment) bit set to 1 arrives at a router which cannot
handle the packet because of its length, the packet is dropped.
When a packet received by a router has its MF (more fragments) bit set to 1, the router
knows that it is a fragmented packet and that parts of the original packet are on the way.
If the packet is fragmented into pieces that are too small, the overhead increases. If it is
fragmented into pieces that are too large, an intermediate router may not be able to process it
and it might get dropped.
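A short worked example may help. Assuming a 20-byte IP header and an MTU of 1500 bytes,
each full fragment can carry 1480 data bytes (a multiple of 8, since the fragment offset is
counted in 8-byte units). The sketch below splits a hypothetical 4000-byte payload accordingly;
the numbers are an illustration, not values from the notes:

MTU = 1500
IP_HEADER = 20
# Payload per fragment must be a multiple of 8 bytes (the offset is counted
# in 8-byte blocks), so 1480 bytes of data fit in each full fragment.
max_data = (MTU - IP_HEADER) // 8 * 8

def fragment(total_payload):
    """Return (offset_in_8_byte_units, data_length, more_fragments) tuples."""
    fragments, offset = [], 0
    while offset < total_payload:
        data_len = min(max_data, total_payload - offset)
        more = offset + data_len < total_payload
        fragments.append((offset // 8, data_len, more))
        offset += data_len
    return fragments

print(fragment(4000))
# [(0, 1480, True), (185, 1480, True), (370, 1040, False)]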

▪ What is Software-Defined Networking (SDN)?


Software-Defined Networking (SDN) is an approach to networking that uses software-
based controllers or application programming interfaces (APIs) to communicate with underlying
hardware infrastructure and direct traffic on a network.
This model differs from that of traditional networks, which use dedicated hardware
devices (i.e., routers and switches) to control network traffic. SDN can create and control
a virtual network – or control traditional hardware – via software.
While network virtualization allows organizations to segment different virtual networks
within a single physical network, or to connect devices on different physical networks to create a
single virtual network, software-defined networking enables a new way of controlling the
routing of data packets through a centralized server.

▪ Why Software-Defined Networking is important?


SDN represents a substantial step forward from traditional networking, in that it enables
the following:
• Increased control with greater speed and flexibility: Instead of manually
programming multiple vendor-specific hardware devices, developers can control the flow
of traffic over a network simply by programming an open standard software-based
controller. Networking administrators also have more flexibility in choosing networking
equipment, since they can choose a single protocol to communicate with any number of
hardware devices through a central controller.
• Customizable network infrastructure: With a software-defined network,
administrators can configure network services and allocate virtual resources to change
the network infrastructure in real time through one centralized location. This allows
network administrators to optimize the flow of data through the network and prioritize
applications that require more availability.
• Robust security: A software-defined network delivers visibility into the entire network,
providing a more holistic view of security threats. With the proliferation of smart devices
that connect to the internet, SDN offers clear advantages over traditional networking.
Operators can create separate zones for devices that require different levels of security, or
immediately quarantine compromised devices so that they cannot infect the rest of the
network.

▪ How does Software-Defined Networking (SDN) work?


Here are the SDN basics: In SDN (like anything virtualized), the software is decoupled
from the hardware. SDN moves the control plane that determines where to send traffic to
software, and leaves the data plane that actually forwards the traffic in the hardware. This allows
network administrators who use software-defined networking to program and control the entire
network via a single pane of glass instead of on a device by device basis.
There are three parts to a typical SDN architecture, which may be located in different
physical locations:
• Applications, which communicate resource requests or information about the network as
a whole
• Controllers, which use the information from applications to decide how to route a data
packet
• Networking devices, which receive information from the controller about where to
move the data
Physical or virtual networking devices actually move the data through the network. In
some cases, virtual switches, which may be embedded in either the software or the hardware,
take over the responsibilities of physical switches and consolidate their functions into a single,
intelligent switch. The switch checks the integrity of both the data packets and their virtual
machine destinations and moves the packets along.

▪ Benefits of Software-Defined Networking (SDN):


Many of today’s services and applications, especially when they involve the cloud, could
not function without SDN. SDN allows data to move easily between distributed locations, which
is critical for cloud applications.
Additionally, SDN supports moving workloads around a network quickly. For instance,
dividing a virtual network into sections, using a technique called network functions
virtualization (NFV), allows telecommunications providers to move customer services to less
expensive servers or even to the customer’s own servers. Service providers can use a virtual
network infrastructure to shift workloads from private to public cloud infrastructures as
necessary, and to make new customer services available instantly. SDN also makes it easier for
any network to flex and scale as network administrators add or remove virtual machines,
whether those machines are on-premises or in the cloud.
Finally, because of the speed and flexibility offered by SDN, it is able to support
emerging trends and technologies such as edge computing and the Internet of Things, which
require transferring data quickly and easily between remote sites.

▪ How is SDN different from Traditional Networking?


The key difference between SDN and traditional networking is infrastructure: SDN is
software-based, while traditional networking is hardware-based. Because the control plane is
software-based, SDN is much more flexible than traditional networking. It allows administrators
to control the network, change configuration settings, provision resources, and increase network
capacity-all from a centralized user interface, without adding more hardware.
There are also security differences between SDN and traditional networking. Thanks to
greater visibility and the ability to define secure pathways, SDN offers better security in many
ways. However, because software-defined networks use a centralized controller, securing the
controller is crucial to maintaining a secure network, and this single point of failure represents a
potential vulnerability of SDN.

▪ What are the different models of SDN?


While the premise of centralized software controlling the flow of data in switches and
routers applies to all software-defined networking, there are different models of SDN.
• Open SDN: Network administrators use a protocol like OpenFlow to control the
behavior of virtual and physical switches at the data plane level.
• SDN by APIs: Instead of using an open protocol, application programming interfaces
control how data moves through the network on each device.
• SDN Overlay Model: Another type of software-defined networking runs a virtual
network on top of an existing hardware infrastructure, creating dynamic tunnels to
different on-premise and remote data centers. The virtual network allocates bandwidth
over a variety of channels and assigns devices to each channel, leaving the physical
network untouched.
• Hybrid SDN: This model combines software-defined networking with traditional
networking protocols in one environment to support different functions on a network.
Standard networking protocols continue to direct some traffic, while SDN takes on
responsibility for other traffic, allowing network administrators to introduce SDN in
stages to a legacy environment.
UNIT-III
▪ Transport Layer Introduction:
The Transport Layer (Layer-4) contains all modules and procedures pertaining to the
transportation of data or data streams. Like all other layers, this layer communicates with its
peer Transport layer on the remote host.
The Transport layer offers peer-to-peer and end-to-end connections between two processes
on remote hosts. It takes data from the upper layer (i.e. the Application layer), breaks it into
smaller segments, numbers each byte, and hands it over to the lower layer (Network Layer)
for delivery.

▪ Functions:
• This layer is the first one which breaks the information data supplied by the Application
layer into smaller units called segments. It numbers every byte in the segment and
maintains their accounting.
• This layer ensures that data is received in the same sequence in which it was sent.
• This layer provides end-to-end delivery of data between hosts which may or may not
belong to the same subnet.
• All server processes intending to communicate over the network are equipped with well-
known Transport Service Access Points (TSAPs), also known as port numbers.

▪ End-to-End Communication:
A process on one host identifies its peer host on remote network by means of TSAPs,
also known as Port numbers. TSAPs are very well defined and a process which is trying to
communicate with its peer knows this in advance.

For example, when a DHCP client wants to communicate with remote DHCP server, it
always requests on port number 67. When a DNS client wants to communicate with remote
DNS server, it always requests on port number 53 (UDP).
The two main Transport layer protocols are:
• Transmission Control Protocol:
It provides reliable communication between two hosts.
• User Datagram Protocol:
It provides unreliable communication between two hosts.

▪ Transmission Control Protocol:


The Transmission Control Protocol (TCP) is one of the most important protocols of the
Internet Protocol suite. It is the most widely used protocol for data transmission in
communication networks such as the internet.

▪ Features:
• TCP is a reliable protocol. That is, the receiver always sends either a positive or negative
acknowledgement about the data packet to the sender, so that the sender always has a
clear indication of whether the data packet has reached the destination or needs to be
resent.
• TCP ensures that the data reaches the intended destination in the same order in which it
was sent.
• TCP is connection oriented. TCP requires that a connection between the two remote
points be established before sending actual data.
• TCP provides error-checking and recovery mechanisms.
• TCP provides end-to-end communication.
• TCP provides flow control and quality of service.
• TCP operates in Client/Server point-to-point mode.
• TCP provides a full-duplex service, i.e. it can perform the roles of both receiver and sender.

▪ Header:
The TCP header is a minimum of 20 bytes and a maximum of 60 bytes long.

• Source Port (16-bits): It identifies source port of the application process on the sending
device.
• Destination Port (16-bits): It identifies destination port of the application process on the
receiving device.
• Sequence Number (32-bits): Sequence number of data bytes of a segment in a session.
• Acknowledgement Number (32-bits): When ACK flag is set, this number contains the
next sequence number of the data byte expected and works as acknowledgement of the
previous data received.
• Data offset (4-bits): This field implies both, the size of TCP header (32-bit words) and
the offset of data in current packet in the whole TCP segment.
• Reserved (3-bits): Reserved for future use; all bits are set to zero by default.
• Flags (1-bit each):
o NS: The Nonce Sum bit is used by the Explicit Congestion Notification signaling process.
o CWR: When a host receives a packet with the ECE bit set, it sets Congestion Window
Reduced to acknowledge that ECE was received.
o ECE: It has two meanings:
If SYN bit is clear to 0, then ECE means that the IP packet has its CE
(congestion experience) bit set.
If SYN bit is set to 1, ECE means that the device is ECT capable.
o URG: It indicates that Urgent Pointer field has significant data and should be
processed.
o ACK: It indicates that Acknowledgement field has significance. If ACK is cleared to
0, it indicates that packet does not contain any acknowledgement.
o PSH: When set, it is a request to the receiving station to PUSH data (as soon as it
comes) to the receiving application without buffering it.
o RST: Reset flag has the following features:
It is used to refuse an incoming connection.
It is used to reject a segment.
It is used to restart a connection.
o SYN: This flag is used to set up a connection between hosts.
o FIN: This flag is used to release a connection and no more data is exchanged
thereafter. Because packets with SYN and FIN flags have sequence numbers, they
are processed in correct order.
• Window Size: This field is used for flow control between two stations and indicates the
amount of buffer (in bytes) the receiver has allocated for a segment, i.e. how much data
the receiver is expecting.
• Checksum: This field contains the checksum of Header, Data and Pseudo Headers.
• Urgent Pointer: It points to the urgent data byte if URG flag is set to 1.
• Options: It facilitates additional options which are not covered by the regular header.
Option field is always described in 32-bit words. If this field contains data less than 32-bit,
padding is used to cover the remaining bits to reach 32-bit boundary.
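The fixed 20-byte part of this header can be packed and unpacked with Python's struct
module. The following is a simplified sketch that extracts only a few of the fields described
above; the example port numbers and sequence number are invented for illustration:

import struct

def parse_tcp_header(segment):
    """Unpack the fixed 20-byte part of a TCP header (RFC 793 layout)."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    data_offset = (offset_flags >> 12) * 4        # header length in bytes
    flags = offset_flags & 0x01FF                 # NS..FIN flag bits
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack,
        "data_offset": data_offset,
        "syn": bool(flags & 0x002), "ack_flag": bool(flags & 0x010),
        "fin": bool(flags & 0x001),
        "window": window, "checksum": checksum, "urgent_ptr": urgent,
    }

# Example: a hand-built SYN segment from port 54321 to port 80.
raw = struct.pack("!HHIIHHHH", 54321, 80, 1000, 0, (5 << 12) | 0x002, 65535, 0, 0)
print(parse_tcp_header(raw))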

▪ Addressing:
TCP communication between two remote hosts is done by means of port numbers
(TSAPs). Port numbers can range from 0 to 65535 and are divided as:
• System Ports (0-1023)
• User Ports (1024-49151)
• Private/Dynamic Ports (49152-65535)

▪ Connection Management:
TCP communication works in Server/Client model. The client initiates the connection
and the server either accepts or rejects it. Three-way handshaking is used for connection
management.
▪ Establishment:
Client initiates the connection and sends the segment with a Sequence number. Server
acknowledges it back with its own Sequence number and ACK of client’s segment which is one
more than client’s Sequence number. Client after receiving ACK of its segment sends an
acknowledgement of Server’s response.

▪ Release:
Either the server or the client can send a TCP segment with the FIN flag set to 1. When the
receiving end responds by acknowledging the FIN, that direction of TCP communication is
closed and the connection is released.

▪ Bandwidth Management:
TCP uses the concept of window size to accommodate the need for bandwidth
management. The window size tells the sender at the remote end the number of data bytes the
receiver at this end can receive. TCP uses a slow start phase, starting with a window size of 1
and increasing the window size exponentially after each successful communication.
For example, the client uses a window size of 2 and sends 2 bytes of data. When the
acknowledgement of this segment is received, the window size is doubled to 4 and the next
segment sent will be 4 data bytes long. When the acknowledgement of the 4-byte data segment
is received, the client sets the window size to 8, and so on.
If an acknowledgement is missed, i.e. data is lost in the transit network or a NACK is
received, then the window size is reduced to half and the slow start phase starts again.
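The doubling-on-ACK and halving-on-loss behaviour described above can be traced with a few
lines of code. This is a simplified illustration of the window growth only, not a full TCP
congestion-control implementation:

def simulate_window(events, initial_window=1):
    """Trace TCP-style window growth: double on each ACK, halve on a loss.
    `events` is a list of 'ack' or 'loss' strings."""
    window = initial_window
    trace = [window]
    for event in events:
        if event == "ack":
            window *= 2                    # exponential growth (slow start)
        else:                              # missed ACK / NACK
            window = max(1, window // 2)   # drop to half, restart slow start
        trace.append(window)
    return trace

print(simulate_window(["ack", "ack", "ack", "loss", "ack"]))
# window after each event: [1, 2, 4, 8, 4, 8]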

▪ Error Control and Flow Control:


TCP uses port numbers to know to which application process it needs to hand over the data
segment. Along with that, it uses sequence numbers to synchronize itself with the remote host.
All data segments are sent and received with sequence numbers. The sender knows which data
segment was last received by the receiver when it gets an ACK. The receiver knows about the
last segment sent by the sender by referring to the sequence number of the recently received
packet.
If the sequence number of a recently received segment does not match the sequence
number the receiver was expecting, it is discarded and a NACK is sent back. If two segments
arrive with the same sequence number, the TCP timestamp value is compared to make a
decision.

▪ Multiplexing:
The technique of combining two or more data streams in one session is called multiplexing.
When a TCP client initializes a connection with a server, it always refers to a well-defined port
number which indicates the application process. The client itself uses a randomly generated port
number from the private port number pool.
Using TCP multiplexing, a client can communicate with a number of different application
processes in a single session. For example, when a client requests a web page which in turn
contains different types of data (HTTP, SMTP, FTP etc.), the TCP session timeout is increased
and the session is kept open for a longer time so that the three-way handshake overhead can be
avoided.
This enables the client system to receive multiple connections over a single virtual
connection. These virtual connections are not good for servers if the timeout is too long.

▪ Congestion Control:
When a large amount of data is fed to a system which is not capable of handling it,
congestion occurs. TCP controls congestion by means of a window mechanism. TCP sets a
window size telling the other end how much data segment to send. TCP may use three
algorithms for congestion control:
• Additive increase, Multiplicative Decrease
• Slow Start
• Timeout React

▪ Timer Management:
TCP uses different types of timers to control and manage various tasks:
▪ Keep-alive timer:
• This timer is used to check the integrity and validity of a connection.
• When keep-alive time expires, the host sends a probe to check if the connection still
exists.

▪ Retransmission timer:
• This timer maintains stateful session of data sent.
• If the acknowledgement of sent data does not receive within the Retransmission time, the
data segment is sent again.

▪ Persist timer:
• TCP session can be paused by either host by sending Window Size 0.
• To resume the session a host needs to send Window Size with some larger value.
• If this segment never reaches the other end, both ends may wait for each other for infinite
time.
• When the Persist timer expires, the host re-sends its window size to let the other end
know.
• Persist Timer helps avoid deadlocks in communication.

▪ Timed-Wait:
• After releasing a connection, either of the hosts waits for a Timed-Wait time to terminate
the connection completely.
• This is in order to make sure that the other end has received the acknowledgement of its
connection termination request.
• Timed-out can be a maximum of 240 seconds (4 minutes).

▪ Crash Recovery:
TCP is a very reliable protocol. It provides a sequence number to each byte sent in a
segment. It provides a feedback mechanism, i.e. when a host receives a packet, it is bound to
ACK that packet with the next sequence number expected (if it is not the last segment).
When a TCP server crashes mid-way through communication and re-starts its process, it
sends a TPDU broadcast to all its hosts. The hosts can then re-send the last data segment which
was never acknowledged and carry on.

▪ User Datagram Protocol:


The User Datagram Protocol (UDP) is the simplest Transport Layer communication
protocol of the TCP/IP protocol suite. It involves a minimum amount of communication
mechanism. UDP is said to be an unreliable transport protocol, but it uses IP services which
provide a best-effort delivery mechanism.
In UDP, the receiver does not generate an acknowledgement of a packet received and, in
turn, the sender does not wait for any acknowledgement of a packet sent. This shortcoming
makes this protocol unreliable as well as easier on processing.

▪ Requirement of UDP:
A question may arise: why do we need an unreliable protocol to transport data? We deploy
UDP where the acknowledgement packets would share a significant amount of bandwidth along
with the actual data. For example, in the case of video streaming, thousands of packets are
forwarded towards the users. Acknowledging all the packets is troublesome and may waste a
huge amount of bandwidth. The best-effort delivery mechanism of the underlying IP protocol
ensures best efforts to deliver its packets, but even if some packets in video streaming get lost,
the impact is not calamitous and can be ignored easily. Loss of a few packets in video and voice
traffic sometimes goes unnoticed.

▪ Features:
• UDP is used when acknowledgement of data does not hold any significance.
• UDP is a good protocol for data flowing in one direction.
• UDP is simple and suitable for query-based communications.
• UDP is not connection oriented.
• UDP does not provide a congestion control mechanism.
• UDP does not guarantee ordered delivery of data.
• UDP is stateless.
• UDP is a suitable protocol for streaming applications such as VoIP and multimedia streaming.

▪ UDP Header:
UDP header is as simple as its function.

▪ UDP header contains four main parameters:


• Source Port: This 16-bit field is used to identify the source port of the packet.
• Destination Port: This 16-bit field is used to identify the application-level service on the
destination machine.
• Length: The length field specifies the entire length of the UDP packet (including the
header). It is a 16-bit field and its minimum value is 8 bytes, i.e. the size of the UDP
header itself.
• Checksum: This field stores the checksum value generated by the sender before sending.
In IPv4 this field is optional, so when the checksum field does not carry any value, all its
bits are set to zero.
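Because the UDP header has only these four 16-bit fields, it can be built and parsed in a few
lines. The following sketch uses Python's struct module; the port numbers and payload are
invented for illustration:

import struct

def build_udp_header(src_port, dst_port, payload, checksum=0):
    """Pack the 8-byte UDP header; the length field covers header + payload."""
    length = 8 + len(payload)
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

def parse_udp_header(datagram):
    src_port, dst_port, length, checksum = struct.unpack("!HHHH", datagram[:8])
    return {"src_port": src_port, "dst_port": dst_port,
            "length": length, "checksum": checksum}

header = build_udp_header(40000, 53, b"example dns query")
print(parse_udp_header(header))
# {'src_port': 40000, 'dst_port': 53, 'length': 25, 'checksum': 0}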

▪ UDP application:
Here are few applications where UDP is used to transmit data:
• Domain Name Services
• Simple Network Management Protocol
• Trivial File Transfer Protocol
• Routing Information Protocol
• Kerberos

▪ Application Layer Introduction:


The Application layer is the topmost layer in the OSI and TCP/IP layered models. This layer
exists in both layered models because of its significance: interacting with users and user
applications. This layer is for applications which are involved in the communication system.
A user may or may not directly interact with the applications. The Application layer is
where the actual communication is initiated and reflected. Because this layer is at the top of the
layer stack, it does not serve any other layers. The Application layer takes the help of the
Transport layer and all layers below it to communicate or transfer its data to the remote host.
When an application layer protocol wants to communicate with its peer application layer
protocol on a remote host, it hands over the data or information to the Transport layer. The
Transport layer does the rest with the help of all the layers below it.
There is an ambiguity in understanding the Application Layer and its protocols. Not every
user application can be put into the Application Layer, only those applications which interact
with the communication system. For example, design software or a text editor cannot be
considered application layer programs.
On the other hand, when we use a web browser, it is actually using the Hyper Text
Transfer Protocol (HTTP) to interact with the network; HTTP is an Application Layer protocol.
Another example is the File Transfer Protocol, which helps a user transfer text-based or binary
files across the network. A user can use this protocol in GUI-based software like FileZilla or
CuteFTP, and the same user can use FTP in command-line mode.
Hence, irrespective of which software you use, it is the protocol which is considered at the
Application Layer used by that software. DNS is a protocol which helps user application
protocols such as HTTP to accomplish their work.

▪ Application Protocols in Computer Network:


There are several protocols which work for users in the Application Layer. Application layer
protocols can be broadly divided into two categories:
• Protocols which are used directly by users, for example email.
• Protocols which help and support the protocols used by users, for example DNS.
A few of the Application layer protocols are described below:

▪ Domain Name System:


The Domain Name System (DNS) works on Client Server model. It uses UDP protocol
for transport layer communication. DNS uses hierarchical domain based naming scheme. The
DNS server is configured with Fully Qualified Domain Names (FQDN) and email addresses
mapped with their respective Internet Protocol addresses.
A DNS server is queried with an FQDN and it responds back with the IP address mapped
to it. DNS uses UDP port 53.
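From an application's point of view, such a lookup is usually delegated to the system resolver,
which in turn queries the configured DNS server over UDP port 53. A minimal illustration
using Python's standard socket library (the hostname is only an example):

import socket

# Resolve an FQDN to its IPv4 address using the system resolver,
# which queries the configured DNS server over UDP port 53.
hostname = "example.com"
ip_address = socket.gethostbyname(hostname)
print(f"{hostname} -> {ip_address}")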
▪ Simple Mail Transfer Protocol:
The Simple Mail Transfer Protocol (SMTP) is used to transfer electronic mail from one
user to another. This task is done by means of the email client software (User Agent) the user
is using. User Agents help the user to type and format the email and store it until the internet is
available. When an email is submitted for sending, the sending process is handled by the
Message Transfer Agent, which normally comes built into the email client software.
The Message Transfer Agent uses SMTP to forward the email to another Message Transfer
Agent (server side). While SMTP is used by the end user only to send emails, servers normally
use SMTP to send as well as receive emails. SMTP uses TCP port numbers 25 and 587.
Client software uses the Internet Message Access Protocol (IMAP) or POP protocols to
receive emails.

▪ File Transfer Protocol:


The File Transfer Protocol (FTP) is the most widely used protocol for file transfer over
the network. FTP uses TCP/IP for communication and it works on TCP port 21. FTP works on
a Client/Server model where a client requests a file from the server and the server sends the
requested resource back to the client.
FTP uses out-of-band control, i.e. FTP uses TCP port 21 for exchanging control
information, while the actual data is sent over a separate data connection (TCP port 20).
The client requests the server for a file. When the server receives the request, it opens a
TCP connection for the client and transfers the file. After the transfer is complete, the server
closes the connection. For a second file, the client requests again and the server opens a new
TCP connection.

▪ Post Office Protocol (POP):


The Post Office Protocol version 3 (POP3) is a simple mail retrieval protocol used by
User Agents (client email software) to retrieve mails from the mail server.
When a client needs to retrieve mails from the server, it opens a connection with the server
on TCP port 110. The user can then access his mails and download them to the local computer.
POP3 works in two modes. The most common mode, the delete mode, deletes the emails from
the remote server after they are downloaded to the local machine. The second mode, the keep
mode, does not delete the email from the mail server and gives the user the option to access the
mails later on the mail server.

▪ Hyper Text Transfer Protocol (HTTP):


The Hyper Text Transfer Protocol (HTTP) is the foundation of the World Wide Web.
Hypertext is a well-organized documentation system which uses hyperlinks to link pages in
text documents. HTTP works on the client-server model. When a user wants to access an
HTTP page on the internet, the client machine at the user end initiates a TCP connection to the
server on port 80. When the server accepts the client request, the client is authorized to access
web pages.
To access web pages, a client normally uses web browsers, which are responsible for
initiating, maintaining, and closing TCP connections. HTTP is a stateless protocol, which means
the server maintains no information about earlier requests by clients.
▪ HTTP versions:
• HTTP 1.0 uses non persistent HTTP. At most one object can be sent over a single TCP
connection.
• HTTP 1.1 uses persistent HTTP. In this version, multiple objects can be sent over a single
TCP connection.
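A minimal sketch of such an exchange using Python's standard http.client module is shown
below; it opens a TCP connection to port 80, sends one GET request, and reads the stateless
response. The host name is only an example:

import http.client

# Open a TCP connection to the web server on port 80 and issue one GET.
conn = http.client.HTTPConnection("example.com", 80, timeout=10)
conn.request("GET", "/", headers={"Host": "example.com"})
response = conn.getresponse()

print(response.status, response.reason)      # e.g. 200 OK
print(response.getheader("Content-Type"))    # a response header from the server
body = response.read()                       # the web page itself
conn.close()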

▪ Network Services:
Computer systems and computerized systems help human beings to work efficiently and
explore the unthinkable. When these devices are connected together to form a network, their
capabilities are enhanced multiple times. Some basic services a computer network can offer
are:

▪ Directory Services:
These services map a name to its value, which can be variable or fixed. This software
system helps to store the information, organize it, and provide various means of accessing it.
• Accounting:
In an organization, a number of users have their user names and passwords
mapped to them. Directory Services provide a means of storing this information in
encrypted form and making it available when requested.
• Authentication and Authorization:
User credentials are checked to authenticate a user at the time of login and/or
periodically. User accounts can be set into hierarchical structure and their access to
resources can be controlled using authorization schemes.
• Domain Name Services:
DNS is widely used and is one of the essential services on which the internet works.
This system maps IP addresses to domain names, which are easier to remember and recall
than IP addresses. Because the network operates with the help of IP addresses and humans
tend to remember website names, DNS provides the website's IP address, mapped to its
name on the back-end, when the user requests a website by name.

▪ File Services:
File services include sharing and transferring files over the network.
• File Sharing:
One of the reasons which gave birth to networking was file sharing. File sharing
enables users to share their data with other users. A user can upload a file to a specific
server which is accessible by all intended users. As an alternative, a user can make the file
shared on its own computer and provide access to the intended users.
• File Transfer:
This is an activity to copy or move a file from one computer to another computer, or
to multiple computers, with the help of the underlying network. The network enables its
users to locate other users in the network and transfer files.
▪ Communication Services:
• Email:
Electronic mail is a communication method and something a computer user cannot
work without. It is the basis of many of today's internet features. An email system has one
or more email servers. All its users are provided with unique IDs. When a user sends an
email to another user, it is actually transferred between the users with the help of an email
server.
• Social Networking:
Recent technologies have made technical life social. Computer-savvy people can find
other known people or friends, connect with them, and share thoughts, pictures, and
videos.
• Internet Chat:
Internet chat provides instant text transfer services between two hosts. Two or
more people can communicate with each other using text based Internet Relay Chat
services. These days, voice chat and video chat are very common.
• Discussion Boards:
Discussion boards provide a mechanism to connect multiple people with the same
interests. They enable users to put up queries, questions, suggestions etc., which can be
seen by all other users. Others may respond as well.
• Remote Access:
This service enables user to access the data residing on the remote computer. This
feature is known as Remote desktop. This can be done via some remote device, e.g.
mobile phone or home computer.

▪ Application Services:
These are network-based services provided to the users, such as web services, database
management, and resource sharing.
• Resource Sharing:
To use resources efficiently and economically, the network provides a means to share
them. This may include servers, printers, storage media etc.
• Databases:
This application service is one of the most important services. It stores data and
information, processes it, and enables the users to retrieve it efficiently by using queries.
Databases help organizations to make decisions based on statistics.
• Web Services:
World Wide Web has become the synonym for internet. It is used to connect to
the internet, and access files and information services provided by the internet servers.
