CS3591 - Unit 1
A computer network uses distributed processing, in which a task is divided among several computers. Instead of a single
computer handling an entire task, each computer handles a subset of it. In current computer network technology, several
types of networks exist, varying from the simple to the complex. Distributed processing gives the following advantages:
o Security: It provides limited interaction that a user can have with the entire system. For example, a bank allows
users to access their accounts through an ATM without allowing them to access the bank's entire database.
o Faster problem solving: Multiple computers can solve the problem faster than a single machine working alone.
o Security through redundancy: Multiple computers running the same program at the same time can provide
security through redundancy. For example, if four computers run the same program and one of them suffers a
hardware error, the other computers can take over its work.
Computer Networking is the practice of connecting computers to enable communication and data exchange. The building blocks of a network are nodes, links, communication protocols, and network defense.
1. Nodes
Nodes are the devices on a network that send, receive, or forward data. Common types of nodes include:
• Servers: These are application or storage servers where the main computation and data storage occur. All requests
for specific tasks or data come to the servers.
• Routers: Routing is the process of selecting the network path through which the data packets traverse. Routers are
devices that forward these packets between networks to ultimately reach the destination. They add efficiency to
large networks.
• Switches: Repeaters are to networks what transformers are to electricity grids—they are electronic devices that
receive network signals and clean or strengthen them. Hubs are repeaters with multiple ports in them. They pass on
the data to whichever ports are available. Bridges are smarter hubs that only pass the data to the destination port. A
switch is a multi-port bridge. Multiple data cables can be plugged into switches to enable communication with
multiple network devices.
• Gateways: Gateways are hardware devices that act as ‘gates’ between two distinct networks. They can be firewalls,
routers, or servers.
2. Links
Links are the transmission media which can be of two types:
• Wired: Examples of wired technologies used in networks include coaxial cables, phone lines, twisted-pair cabling,
and optical fibers. Optical fibers carry pulses of light to represent data.
• Wireless: Network connections can also be established through radio or other electromagnetic signals. This kind of
transmission is called ‘wireless’. The most common examples of wireless links include communication
satellites, cellular networks, and spread-spectrum radio technologies. Wireless LANs use spread-spectrum technology
to establish connections within a small area.
3. Communication protocols
A communication protocol is a set of rules followed by all nodes involved in the information transfer. Some common
protocols include the internet protocol suite (TCP/IP), IEEE 802, Ethernet, wireless LAN, and cellular standards. TCP/IP is
a conceptual model that standardizes communication in a modern network. It suggests four functional layers of these
communication links:
• Network access layer: This layer defines how the data is physically transferred. It includes how hardware sends
data bits through physical wires or fibers.
• Internet layer: This layer is responsible for packaging the data into understandable packets and allowing it to be
sent and received.
• Transport layer: This layer enables devices to maintain a conversation by ensuring the connection is valid and
stable.
• Application layer: This layer defines how high-level applications can access the network to initiate data transfer.
Most of the modern internet structure is based on the TCP/IP model, though there are still strong influences of the similar
but seven-layered open systems interconnection (OSI) model.
IEEE802 is a family of IEEE standards that deals with local area networks (LAN) and metropolitan area networks (MAN).
Wireless LAN is the most well-known member of the IEEE 802 family and is more widely known as WLAN or Wi-Fi.
4. Network Defense
While nodes, links, and protocols form the foundation of a network, a modern network cannot exist without its defenses.
Security is critical when unprecedented amounts of data are generated, moved, and processed across networks. A few
examples of network defense tools include firewall, intrusion detection systems (IDS), intrusion prevention systems (IPS),
network access control (NAC), content filters, proxy servers, anti-DDoS devices, and load balancers.
Types of Computer Networks
Computer networks can be classified based on several criteria, such as the transmission medium, the network size, the
topology, and organizational intent. Based on a geographical scale, the different types of networks are:
1. Nanoscale networks: These networks enable communication between minuscule sensors and actuators.
2. Personal area network (PAN): PAN refers to a network used by just one person to connect multiple devices, such
as a laptop to a scanner or printer.
3. Local area network (LAN): The local area network connects devices within a limited geographical area, such as
schools, hospitals, or office buildings.
4. Storage area network (SAN): SAN is a dedicated network that facilitates block-level data storage. This is used in
storage devices such as disk arrays and tape libraries.
5. Campus area network (CAN): Campus area networks are a collection of interconnected LANs. They are used by
larger entities such as universities and governments.
6. Metropolitan area network (MAN): MAN is a large computer network that spans across a city.
7. Wide area network (WAN): Wide area networks cover larger areas such as large cities, states, and even countries.
8. Enterprise private network (EPN): An enterprise private network is a single network that a large organization
uses to connect its multiple office locations.
9. Virtual private network (VPN): VPN is an overlay private network stretched on top of a public network.
10. Cloud network: Technically, a cloud network is a WAN whose infrastructure is delivered via cloud services.
Based on organizational intent, networks can be classified as:
1. Intranet: Intranet is a set of networks that is maintained and controlled by a single entity. It is generally the most
secure type of network, with access to authorized users alone. An intranet usually exists behind the router in a local
area network.
2. Internet: The internet (or the internetwork) is a collection of multiple networks connected by routers and layered
by networking software. This is a global system that connects governments, researchers, corporates, the public, and
individual computer networks.
3. Extranet: An extranet is similar to the intranet but with connections to particular external networks. It is generally
used to share resources with partners, customers, or remote employees.
4. Darknet: The darknet is an overlay network that runs on the internet and can only be accessed by specialized
software. It uses unique, customized communication protocols.
Key Objectives of Creating and Deploying a Computer Network
There is no industry—education, retail, finance, tech, government, or healthcare—that can survive without well-designed
computer networks. The bigger an organization, the more complex the network becomes. Before taking on the onerous task
of creating and deploying a computer network, here are some key objectives that must be considered.
2. Resource availability & reliability
A network ensures that resources are not present in inaccessible silos and are available from multiple points. The high
reliability comes from the fact that there are usually multiple sources of supply. Important resources must be backed
up across multiple machines so that they remain accessible in case of incidents such as hardware outages.
3. Performance management
A company’s workload only increases as it grows. When one or more processors are added to the network, it improves the
system’s overall performance and accommodates this growth. Saving data in well-architected databases can drastically
improve lookup and fetch times.
4. Cost savings
Huge mainframe computers are an expensive investment, and it makes more sense to add processors at strategic points in
the system. This not only improves performance but also saves money. Since it enables employees to access information in
seconds, networks save operational time, and subsequently, costs. Centralized network administration also means that fewer
investments need to be made for IT support.
5. Increased storage capacity
Network-attached storage devices are a boon for employees who work with high volumes of data. For example, every
member in the data science team does not need individual data stores for the huge number of records they crunch.
Centralized repositories get the job done in an even more efficient way. With businesses seeing record levels of customer
data flowing into their systems, the ability to increase storage capacity is necessary in today’s world.
6. Streamlined collaboration & communication
Networks have a major impact on the day-to-day functioning of a company. Employees can share files, view each other’s
work, sync their calendars, and exchange ideas more effectively. Every modern enterprise runs on internal messaging
systems such as Slack for the uninhibited flow of information and conversations. However, emails are still the formal mode
of communication with clients, partners, and vendors.
7. Reduction of errors
Networks reduce errors by ensuring that all involved parties acquire information from a single source, even if they are
viewing it from different locations. Backed-up data provides consistency and continuity. Standard versions of customer and
employee manuals can be made available to a large number of people without much hassle.
8. Secured remote access
Computer networks promote flexibility, which is important in uncertain times like now when natural disasters and
pandemics are ravaging the world. A secure network ensures that users have a safe way of accessing and working on
sensitive data, even when they’re away from the company premises. Mobile handheld devices registered to the network
even enable multiple layers of authentication to ensure that no bad actors can access the system.
Advantages of Computer Networks
Communication speed
A network enables us to communicate over it in a fast and efficient manner. For example, we can do video
conferencing, email messaging, etc. over the internet. Therefore, a computer network is a great way to share our
knowledge and ideas.
File sharing
File sharing is one of the major advantages of a computer network. A computer network allows us to share files with
each other.
Back up and Roll back is easy
Since files are stored on a main server, which is centrally located, it is easy to take a backup from the
main server.
Software and Hardware sharing
We can install the applications on the main server, therefore, the user can access the applications centrally. So, we do not
need to install the software on every machine. Similarly, hardware can also be shared.
Security
A network provides security by ensuring that each user has the right to access only certain files and applications.
Scalability
Scalability means that we can add new components to the network. A network must be scalable so that we can extend it
by adding new devices. However, doing so decreases the connection speed and the data transmission rate, which
increases the chance of errors occurring. This problem can be overcome by using routing or switching
devices.
Reliability
A computer network can use an alternative source for data communication in case of any hardware failure.
Every computer connected to the internet follows a set of rules (protocols), including:
TCP (Transmission Control Protocol): It is responsible for dividing messages into packets on the source computer and
reassembling the received packet at the destination or recipient computer. It also makes sure that the packets have the
information about the source of the message data, the destination of the message data, the sequence in which the message
data should be re-assembled, and checks if the message has been sent correctly to the specific destination.
IP (Internet Protocol): It is a protocol, or set of rules, for routing and addressing packets of data so that they can travel
across networks and arrive at the correct destination. IP is responsible for handling the address of the destination computer
so that each packet is sent to its proper destination.
User Datagram Protocol (UDP): It is a communications protocol for time-sensitive applications like gaming, playing
videos, or Domain Name System (DNS) lookups. UDP results in speedier communication because it does not spend time
forming a firm connection with the destination before transferring the data.
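To make the contrast concrete, here is a minimal Python sketch (the hostnames and ports are placeholders chosen for illustration) of how an application opens a TCP connection versus firing off a UDP datagram:

```python
import socket

# TCP: connection-oriented; a handshake sets up the connection first.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("example.com", 80))     # three-way handshake happens here
tcp.sendall(b"hello over TCP")       # delivery and ordering are guaranteed
tcp.close()

# UDP: connectionless; the datagram is sent with no prior handshake,
# which is faster but offers no delivery guarantee.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello over UDP", ("example.com", 53))
udp.close()
```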
Simplex Communication: It is one-way (unidirectional) communication, in which one device only receives and another
device only sends data, and the sending device uses the entire channel capacity for transmission. For example,
IoT sensors, entering data using a keyboard, listening to music through a speaker, etc.
Half Duplex communication: It is a two-way communication, or we can say that it is a bidirectional communication in
which both the devices can send and receive data but not at the same time. When one device is sending data then another
device is only receiving and vice-versa. For example, walkie-talkie.
Full-duplex communication: It is a two-way communication or we can say that it is a bidirectional communication in which
both the devices can send and receive data at the same time. For example, mobile phones, landlines, etc.
Communication Channels
Communication channels are the medium that connects two or more workstations. Workstations can be connected by either
wired media or wireless media. It is also known as a transmission medium. The transmission medium or channel is a link
that carries messages between two or more devices. We can group the communication media into two categories:
• Guided media transmission
• Unguided media transmission
1. Guided Media: In this transmission medium, a physical link is created using wires or cables between two or more
computers or devices, and the data is then transmitted over these cables in the form of signals. Guided media is of
the following types:
1. Twisted pair cable: It is the most common form of wire used in communication. In a twisted-pair cable, two
identical wires are wrapped together in a double helix. The twisting of the wires reduces crosstalk, the leaking
of a signal from one wire to another, which can corrupt the signal and cause network errors. The twisting
protects the wire from internal crosstalk as well as external forms of signal interference.
2. Unguided Media: In this transmission medium, no physical link is created; data is transmitted through the air in the
form of electromagnetic signals. Unguided media is of the following types:
1. Microwave: Microwave offers communication without the use of cables. Microwave signals are just like radio
and television signals. It is used in long-distance communication. Microwave transmission consists of a transmitter,
a receiver, and the atmosphere. In microwave communication, parabolic antennas mounted on
towers send a beam to another antenna. The higher the tower, the greater the range.
2. Radio wave: When communication is carried out by radio frequencies, then it is termed radio waves transmission.
It offers mobility. It consists of the transmitter and the receiver. Both use antennas to radiate and capture the radio
signal.
3. Infrared: It is used for short-distance communication and cannot pass through obstacles. It is generally used in TV
remotes, wireless mice, etc.
Network Architecture and Its Types
Computer Network Architecture is defined as the physical and logical design of the software, hardware, protocols, and
media for the transmission of data. Simply put, it describes how computers are organized and how tasks are allocated
among them.
Two types of network architecture are used:
o Peer-To-Peer network
o Client/Server network
1. Peer-To-Peer network
o Peer-to-Peer network is a network in which all the computers are linked together with equal privilege and
responsibilities for processing the data.
o Peer-to-Peer networks are useful for small environments, usually up to 10 computers.
o Peer-To-Peer network has no dedicated server.
o Special permissions are assigned to each computer for sharing the resources, but this can lead to a problem if the
computer with the resource is down.
2. Client/Server Network
o Client/Server network is a network model designed for the end users called clients, to access the resources such as
songs, video, etc. from a central computer known as Server.
o The central controller is known as a server while all other computers in the network are called clients.
o A server performs all the major operations such as security and network management.
o A server is responsible for managing all the resources such as files, directories, printer, etc.
o All the clients communicate with each other through a server. For example, if client1 wants to send some data to
client 2, then it first sends the request to the server for the permission. The server sends the response to the client 1
to initiate its communication with the client 2.
1. Personal Area Network (PAN)
A PAN is a network arranged around a single person, allowing that person's devices to communicate and share data with
each other. PAN offers a network range of 1 to 100 meters from person to device. Its transmission speed is very high,
with very easy maintenance and very low cost. It
uses Bluetooth, IrDA, and Zigbee as technology. Examples of PAN are USB, computer, phone, tablet, printer, PDA, etc.
Advantages of CAN
• Speed: Communication within a CAN takes place over a local area network, so the data transfer rate between
systems is somewhat faster than over the Internet.
• Security: The campus network administrators look after the network through continuous monitoring, tracking, and
access control. To protect the network from unauthorized access, a firewall is placed between the network and the internet.
• Cost-effective: With a little effort and maintenance, the network works well, providing a fast data transfer rate with
multi-departmental network access. It can be enabled wirelessly, so wiring and cabling costs can be contained.
Working within a campus using a CAN is therefore cost-effective in terms of performance.
4. Metropolitan Area Network (MAN)
A MAN is larger than a LAN but smaller than a WAN. This is the type of computer network that connects computers over
a geographical distance through a shared communication path over a city, town, or metropolitan area. This network mainly
uses FDDI, CDDI, and ATM as the technology with a range from 5km to 50km. Its transmission speed is average. It is
difficult to maintain and it comes with a high cost. Examples of MAN are networking in towns, cities, a single large city, a
large area within multiple buildings, etc.
Parameter          | PAN                     | LAN                | CAN                 | MAN                       | WAN
Full Name          | Personal Area Network   | Local Area Network | Campus Area Network | Metropolitan Area Network | Wide Area Network
Technology         | Bluetooth, IrDA, Zigbee | Ethernet & Wi-Fi   | Ethernet            | FDDI, CDDI, ATM           | Leased Line, Dial-Up
Range              | 1-100 m                 | Up to 2 km         | 1-5 km              | 5-50 km                   | Above 50 km
Transmission Speed | Very High               | Very High          | High                | Average                   | Low
Ownership          | Private                 | Private            | Private             | Private or Public         | Private or Public
Maintenance        | Very Easy               | Easy               | Moderate            | Difficult                 | Very Difficult
Protocol Layering
A protocol is a set of rules and standards that primarily define a language that devices use to communicate. A wide
range of protocols is in extensive use in networking, and they are usually implemented in a number of layers.
What does a protocol tell us?
• Syntax of a message: what fields does it contain, and in what format?
• Semantics of a message: what does the message mean? For example, a not-OK message means the receiver got a
corrupted file.
• Actions to take on receipt of a message: for example, on receiving a not-OK message, retransmit the entire file.
A protocol provides a communication service that processes use to exchange messages. When the communication is
simple, we can use only one simple protocol.
When the communication is complex, we must divide the task between different layers and follow a protocol at
each layer; this technique is called protocol layering. Layering allows us to separate the services from the
implementation.
Each layer receives a set of services from the layer below it and provides services to the layer above it. A modification
made in any one layer does not affect the other layers.
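To see what layering means in practice, here is a small, hedged Python sketch; the layer names and header strings are invented for illustration and do not correspond to any real protocol. Each layer wraps the data from the layer above with its own header on the way down, and the receiver peels the headers off in reverse order on the way up:

```python
# Illustrative encapsulation: each layer only adds or removes its own
# header, so changing one layer's format does not disturb the others.

def send(message: str) -> str:
    segment = f"TRANSPORT|{message}"    # transport layer adds its header
    packet = f"NETWORK|{segment}"       # network layer adds its header
    frame = f"LINK|{packet}"            # data link layer adds its header
    return frame                        # what goes onto the wire

def receive(frame: str) -> str:
    packet = frame.removeprefix("LINK|")          # link layer strips its header
    segment = packet.removeprefix("NETWORK|")     # network layer strips its header
    message = segment.removeprefix("TRANSPORT|")  # transport layer strips its header
    return message

assert receive(send("hello")) == "hello"   # the original message is recovered
```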
Scenarios
Let us develop two simple scenarios to better understand the need for protocol layering.
First Scenario
In the first scenario, communication is so simple that it can occur in only one layer. Assume Maria and Ann are neighbors
with a lot of common ideas. Communication between Maria and Ann takes place in one layer, face to face, in the same
language.
• Layering of protocols provides well-defined interfaces between the layers, so that a change in one layer does not
affect an adjacent layer.
• The protocols of a network are extremely complicated and designing them in layers makes their implementation
more feasible.
Advantages
The advantages of layered protocols are as follows −
• Assists in protocol design, because protocols that operate at a particular layer have defined data that they
work on and a defined interface to the layers above and below them.
• Fosters competition, because products from completely different vendors can work together.
• Prevents technology or capability changes in one layer from affecting the layers above and below it.
• Provides a common language to describe networking functions and capabilities.
Disadvantages
The disadvantages of layered protocols are as follows −
• The main disadvantage of layered systems is the overhead, both in computation and in message
headers, caused by the abstraction barriers between layers. Because a message typically has to pass through several
(10 or more) protocol layers, the overhead of these boundaries is often more than the computation being done.
• The upper-level layers cannot see what is inside the lower layers, implying that an application cannot determine where
in a connection a problem lies or precisely what the problem is.
• The higher-level layers cannot control all aspects of the lower layers, so they cannot modify the transfer system
where that would be helpful (like controlling windowing, header compression, CRC/parity checking, et cetera), nor
can they specify routing; they must rely on the lower protocols working, and cannot specify alternatives when there are issues.
The OSI model itself is not implemented as such; instead, specific protocols and technologies are designed based on the
principles outlined in the OSI model to facilitate efficient data transmission and networking operations.
The OSI model, created in 1984 by ISO, is a reference framework that explains the process of transmitting data between
computers. It is divided into seven layers that work together to carry out specialised network functions, allowing for a
more systematic approach to networking.
OSI Model
Data Flow In OSI Model
When we transfer information from one device to another, it travels through the 7 layers of the OSI model. Data first
travels down the 7 layers at the sender's end and then climbs back up the 7 layers at the receiver's end.
Data flows through the OSI model in a step-by-step process:
• Application Layer: Applications create the data.
• Presentation Layer: Data is formatted and encrypted.
• Session Layer: Connections are established and managed.
• Transport Layer: Data is broken into segments for reliable delivery.
• Network Layer: Segments are packaged into packets and routed.
• Data Link Layer: Packets are framed and sent to the next device.
• Physical Layer: Frames are converted into bits and transmitted physically.
Each layer adds specific information to ensure the data reaches its destination correctly, and these steps are reversed upon
arrival.
Consider the example of an email travelling from a sender to a receiver (Zoro):
Step 1: The sender composes the email in a mail application. (This happens in Layer 7: Application Layer)
Step 2: The mail application prepares the data for transmission, e.g., encrypting it and formatting it for transmission.
(This happens in Layer 6: Presentation Layer)
Step 3: There is a connection established between the sender and receiver on the internet. (This happens in Layer 5:
Session Layer)
Step 4: Email data is broken into smaller segments. It adds sequence number and error-checking information to maintain
the reliability of the information. (This happens in Layer 4: Transport Layer)
Step 5: Addressing of packets is done in order to find the best route for transfer. (This happens in Layer 3: Network
Layer)
Step 6: Data packets are encapsulated into frames, then MAC address is added for local devices and then it checks for
error using error detection. (This happens in Layer 2: Data Link Layer)
Step 7: Lastly, frames are transmitted in the form of electrical/optical signals over a physical network medium like an
Ethernet cable or Wi-Fi. (This happens in Layer 1: Physical Layer)
After the email reaches the receiver i.e. Zoro, the process will reverse and decrypt the e-mail content. At last, the email
will be shown on Zoro’s email client.
• Transmission Mode: Physical layer also defines how the data flows between the two connected devices. The
various transmission modes possible are Simplex, half-duplex and full-duplex.
Note:
• Hub, Repeater, Modem, and Cables are Physical Layer devices.
• Network Layer, Data Link Layer, and Physical Layer are also known as Lower Layers or Hardware Layers.
Note:
• Packet in the Data Link layer is referred to as Frame.
• Data Link layer is handled by the NIC (Network Interface Card) and device drivers of host machines.
• Switch & Bridge are Data Link Layer devices.
Note: The sender needs to know the port number associated with the receiver’s application.
Generally, this destination port number is configured, either by default or manually. For example, when a web browser
sends a request to a web server, it typically uses port number 80, because this is the default port assigned to web (HTTP)
applications. Many applications have default ports assigned.
• At the receiver’s side: Transport Layer reads the port number from its header and forwards the Data which it has
received to the respective application. It also performs sequencing and reassembling of the segmented data.
Note:
• Data in the Transport Layer is called Segments.
• Transport layer is operated by the Operating System. It is a part of the OS and communicates with the Application
Layer by making system calls.
• The transport layer is called the heart of the OSI model.
• Protocols used: TCP, UDP, NetBIOS, PPTP.
Note: The OSI model acts as a reference model and is not implemented on the Internet because of its late invention. The
current model being used is the TCP/IP model.
• Not Practical: In real-life networking, most systems use a simpler model called the Internet protocol suite
(TCP/IP), so the OSI Model isn’t always directly applicable.
• Slow Adoption: When it was introduced, the OSI Model was not quickly adopted by the industry, which preferred
the simpler and already-established TCP/IP model.
• Overhead: Each layer in the OSI Model adds its own set of rules and operations, which can make the process more
time-consuming and less efficient.
• Theoretical: The OSI Model is more of a theoretical framework, meaning it’s great for understanding concepts but
not always practical for implementation.
TCP/IP Model
• TCP/IP stands for Transmission Control Protocol/Internet Protocol and is a suite of communication protocols used
to interconnect network devices on the internet. TCP/IP is also used as a communications protocol in a private
computer network -- an intranet or extranet.
• The entire IP suite -- a set of rules and procedures -- is commonly referred to as TCP/IP. TCP and IP are the two
main protocols, though others are included in the suite. The TCP/IP protocol suite functions as an abstraction layer
between internet applications and the routing and switching fabric.
• TCP/IP specifies how data is exchanged over the internet by providing end-to-end communications that identify
how it should be broken into packets, addressed, transmitted, routed and received at the destination. TCP/IP requires
little central management and is designed to make networks reliable with the ability to recover automatically from
the failure of any device on the network.
• Internet Protocol Version 4 (IPv4) is the primary version used on the internet today. However, due to a limited
number of addresses, a newer protocol known as IPv6 was developed in 1998 by the Internet Engineering Task
Force (IETF). IPv6 expands the pool of available addresses from IPv4 significantly and is progressively being
embraced.
TCP
• It ensures a reliable and orderly delivery of packets across networks.
• TCP is a higher-level smart communications protocol that still uses IP as a way to transport data packets, but it also
connects computers, applications, web pages and web servers.
• TCP understands holistically the entire stream of data that these assets require to operate and it ensures the entire
volume of data needed is sent the first time.
• TCP defines how applications can create channels of communication across a network.
• It manages how a message is assembled into smaller packets before they're transmitted over the internet and
reassembled in the right order at the destination address.
• TCP operates at Layer 4, the transport layer, of the Open Systems Interconnection (OSI) model.
• TCP is a connection-oriented protocol, which means it establishes a connection between the sender and the receiver
before delivering data to ensure reliable delivery.
• As it does its work, TCP can also control the size and flow rate of data. It ensures that networks are free of any
congestion that could block the receipt of data. An example is an application that wants to send a large amount of
data over the internet. If the application only used IP, the data would have to be broken into multiple IP packets.
This would require multiple requests to send and receive data, as IP requests are issued per packet.
• With TCP, only a single request to send an entire data stream is needed; TCP handles the rest.
• TCP runs checks to ensure data is delivered. It can detect problems that arise in IP and request retransmission of
any data packets that were lost.
• TCP can reorganize packets so they're transmitted in the proper order. This minimizes network congestion by
preventing network bottlenecks caused by out-of-order packet delivery.
IP
• IP is a low-level internet protocol that facilitates data communications over the internet.
• IP delivers packets of data that consist of a header, which contains routing information such as the source and
destination of the data, and the data payload itself.
• It defines how to address and route each packet to ensure it reaches the right destination. Each gateway computer
on the network checks this IP address to determine where to forward the message.
• IP is limited by the amount of data it can send in a single packet. The IPv4 header alone is between 20 and 60 bytes,
and the maximum size of a whole IP packet, including both the header and the data, is 65,535 bytes. This means that
longer streams of data must be broken into multiple data packets that have to be sent independently and then
reorganized into the correct order.
• It provides the mechanism for delivering data from one network node to another.
• IP operates at Layer 3, the network layer, of the OSI model.
• IP is a connection-less protocol, which means it doesn't guarantee delivery nor does it provide error checking and
correction.
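As an illustration of the header/payload structure, here is a hedged Python sketch that packs and unpacks a fixed 20-byte IPv4 header (no options) using the standard struct module; all field values and addresses are made up for the example:

```python
import socket
import struct

# Pack a minimal 20-byte IPv4 header; the field values are examples only.
version_ihl = (4 << 4) | 5                # version 4, header length 5 * 4 = 20 bytes
header = struct.pack(
    "!BBHHHBBH4s4s",
    version_ihl,                          # version + IHL
    0,                                    # type of service
    20 + 13,                              # total length: header + 13-byte payload
    0x1234,                               # identification
    0,                                    # flags + fragment offset
    64,                                   # time to live
    socket.IPPROTO_TCP,                   # protocol carried inside (TCP = 6)
    0,                                    # header checksum (left at 0 in this sketch)
    socket.inet_aton("192.0.2.1"),        # source address
    socket.inet_aton("198.51.100.7"),     # destination address
)

# Unpack it again and recover the routing information.
fields = struct.unpack("!BBHHHBBH4s4s", header)
print("version:", fields[0] >> 4)                 # 4
print("header bytes:", (fields[0] & 0x0F) * 4)    # 20
print("src:", socket.inet_ntoa(fields[8]))        # 192.0.2.1
print("dst:", socket.inet_ntoa(fields[9]))        # 198.51.100.7
```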
TCP/IP is highly scalable and, as a routable protocol, can determine the most efficient path through the network. It's widely
used in current internet architecture.
OSI vs TCP/IP
Why Does The OSI Model Matter?
Even though the modern Internet doesn’t strictly use the OSI Model (it uses a simpler Internet protocol suite), the OSI
Model is still very helpful for solving network problems. Whether it’s one person having trouble getting their laptop online,
or a website being down for thousands of users, the OSI Model helps to identify the problem. If you can narrow down the
issue to one specific layer of the model, you can avoid a lot of unnecessary work.
Imperva Application Security
Imperva security solutions protect your applications at different levels of the OSI model. They use DDoS mitigation to
secure the network layer and provide web application firewall (WAF), bot management, and API security to protect the
application layer.
To secure applications and networks across the OSI stack, Imperva offers multi-layered protection to ensure websites and
applications are always available, accessible, and safe. The Imperva application security solution includes:
• DDoS Mitigation: Protects the network layer from Distributed Denial of Service attacks.
• Web Application Firewall (WAF): Shields the application layer from threats.
• Bot Management: Prevents malicious bots from affecting the application.
• API Security: Secures APIs from various vulnerabilities and attacks.
Introduction to Sockets
A socket is one endpoint of a two-way communication link between two programs running on the network. The socket
mechanism provides a means of inter-process communication (IPC) by establishing named contact points between which
the communication takes place.
Just as a pipe is created using the ‘pipe’ system call, a socket is created using the ‘socket’ system call. A socket provides
a bidirectional FIFO communication facility over the network. A socket connecting to the network is created at each end
of the communication. Each socket has a specific address, composed of an IP address and a port number.
Sockets are generally employed in client-server applications. The server creates a socket, attaches it to a network port
address, and then waits for the client to contact it. The client creates a socket and then attempts to connect to the server
socket. When the connection is established, transfer of data takes place.
Types of Sockets: There are two types of Sockets: the datagram socket and the stream socket.
• Datagram Socket: This is a type of network socket that provides a connectionless point for sending and receiving
packets. It is similar to a mailbox. The letters (data) posted into the box are collected and delivered (transmitted) to a
letterbox (receiving socket).
• Stream Socket: In a computer operating system, a stream socket is a type of inter-process communications socket
or network socket that provides a connection-oriented, sequenced, and unique flow of data without record
boundaries with well-defined mechanisms for creating and destroying connections and for detecting errors. It is
similar to a phone. A connection is established between the phones (two ends) and a conversation (transfer of data)
takes place.
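The following is a minimal Python sketch of a stream-socket (TCP) echo server and client on the loopback address; the port number 50007 is an arbitrary choice for this example:

```python
import socket

HOST, PORT = "127.0.0.1", 50007   # loopback address; port chosen arbitrarily

def run_server() -> None:
    # Server: create a socket, attach it to an address, and wait for a client.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, addr = srv.accept()      # blocks until a client connects
        with conn:
            data = conn.recv(1024)     # receive up to 1024 bytes
            conn.sendall(data)         # echo the data back

def run_client() -> None:
    # Client: create a socket and attempt to connect to the server socket.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"hello, socket")
        print(cli.recv(1024))          # b'hello, socket'
```

run_server() would be started first (for example, in a separate process or thread), after which run_client() connects and the transfer of data takes place.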
In computer networks, application layer protocols are a set of standards and rules that govern the communication between
end-user applications over a network. Specific services and functionality are provided by these protocols to support various
types of application-level communication, such as file transfers, email, remote terminal connections, and web browsing.
Here is the list of commonly used application layer protocols in computer networks
1) HTTP
HTTP is an application-level protocol that is widely used for transmitting data over the internet. It is used by the World
Wide Web, and it is the foundation of data communication for the web.
HTTP defines a set of rules and standards for transmitting data over the internet. It allows clients, such as web browsers, to
send requests to servers, such as web servers, and receive responses. HTTP requests contain a method, a URI, and a set of
headers, and they can also contain a payload, which is the data being sent. HTTP responses contain a status code, a set of
headers, and a payload, which is the data being returned.
HTTP has several important features that make it a popular choice for transmitting data over the internet. For example, it is
stateless, which means that each request and response are treated as separate transactions, and the server does not retain any
information about previous requests. This makes it simple to implement, and it allows for better scalability. HTTP is also
extensible, which means that new headers and methods can be added to accommodate new requirements as they arise.
HTTP is used by a wide range of applications and services, including websites, APIs, and streaming services. It is a reliable
and efficient way to transmit data, and it has proven to be a flexible and scalable solution for the growing demands of the
internet.
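As a sketch of the request/response exchange described above, the following uses Python's standard http.client module to issue a GET request; example.com is simply a placeholder host:

```python
import http.client

# The request carries a method, a URI, and a set of headers.
conn = http.client.HTTPSConnection("example.com")
conn.request("GET", "/", headers={"User-Agent": "demo-client"})

# The response carries a status code, headers, and a payload.
resp = conn.getresponse()
print(resp.status, resp.reason)           # e.g. 200 OK
print(resp.getheader("Content-Type"))     # e.g. text/html; charset=UTF-8
body = resp.read()                        # the payload being returned
print(len(body), "bytes received")
conn.close()
```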
2) FTP
FTP, or File Transfer Protocol, is a standard network protocol used for the transfer of files from one host to another over a
TCP-based network, such as the Internet. FTP is widely used for transferring large files or groups of files, as well as for
downloading software, music, and other digital content from the Internet.
FTP operates in a client-server architecture, where a client establishes a connection to an FTP server and can then upload
or download files from the server. The client and server exchange messages to initiate transfers, manage data transfers, and
terminate the connection. FTP supports both active and passive modes, which determine the way the data connection is
established between the client and the server.
FTP is generally considered an insecure protocol, as it transmits login credentials and file contents in cleartext, which
makes it vulnerable to eavesdropping and tampering. For this reason, it’s recommended to use FTPS (FTP over SSL/TLS)
or SFTP (the SSH File Transfer Protocol), both of which encrypt the data transfer.
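A hedged sketch of an FTP session using Python's standard ftplib; the host, credentials, and file name are placeholders:

```python
from ftplib import FTP

# The client establishes a connection to the FTP server and authenticates.
ftp = FTP("ftp.example.com")              # placeholder host
ftp.login("user", "password")             # sent in cleartext over plain FTP!

ftp.retrlines("LIST")                     # list the current remote directory

# Download a file over the data connection.
with open("report.txt", "wb") as f:       # placeholder file name
    ftp.retrbinary("RETR report.txt", f.write)

ftp.quit()                                # terminate the connection
```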
3) SMTP
SMTP (Simple Mail Transfer Protocol) is a standard protocol for transmitting electronic mail (email) messages from one
server to another. It’s used by email clients (such as Microsoft Outlook, Gmail, Apple Mail, etc.) to send emails and by mail
servers to receive and store them.
SMTP is responsible for the actual transmission of email messages, which includes the following steps (a code sketch
follows the list):
• The client sends the recipient’s email address to the server and specifies the message to be sent.
• The server checks if the recipient’s email address is valid and if the sender has the proper authorization to send
emails.
• The server forwards the message to the recipient’s email server, which stores the message in the recipient’s inbox.
• The recipient’s email client retrieves the message from the server and displays it to the user.
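A minimal Python sketch of the client side using the standard smtplib module; the server name, addresses, and password are placeholders:

```python
import smtplib
from email.message import EmailMessage

# Compose the message to be sent.
msg = EmailMessage()
msg["From"] = "sender@example.com"        # placeholder addresses
msg["To"] = "recipient@example.com"
msg["Subject"] = "Hello"
msg.set_content("Sent via SMTP.")

# The client hands the recipient address and the message to the SMTP
# server, which relays it toward the recipient's mail server.
with smtplib.SMTP("smtp.example.com", 587) as server:   # placeholder server
    server.starttls()                     # upgrade to an encrypted connection
    server.login("sender@example.com", "app-password")
    server.send_message(msg)
```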
4) DNS
DNS stands for "Domain Name System," and it is an essential component of the internet that translates domain names into
IP addresses. A domain name is a human-readable string of characters, such as "google.com," that can be easily remembered,
while an IP address is a set of numbers and dots that computers use to communicate with each other over the internet.
The DNS system is a hierarchical, distributed database that maps domain names to IP addresses. When you enter a domain
name into your web browser, your computer sends a query to a DNS server, which then returns the corresponding IP address.
The browser can then use that IP address to send a request to the server hosting the website you’re trying to access.
DNS has several benefits. It makes it possible for humans to access websites and other internet resources using easy-to-
remember domain names, rather than having to remember IP addresses. It also allows website owners to change the IP
address of their server without affecting the domain name, making it easier to maintain and update their website.
DNS is maintained by a network of servers around the world, and it is constantly being updated and maintained to ensure
that it is accurate and up-to-date. This system of servers is organized into a hierarchy, with the root DNS servers at the top
and local DNS servers at the bottom. When a DNS query is made, it is passed from one server to another until the correct
IP address is found.
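The query/answer behaviour can be observed with Python's standard socket module, which asks the system's configured DNS resolver to translate a name into an address:

```python
import socket

# Forward lookup: domain name -> IP address. The resolver walks the
# DNS hierarchy on our behalf and returns the answer.
print(socket.gethostbyname("google.com"))      # e.g. 142.250.80.46

# A name may map to several addresses; getaddrinfo returns all of them.
for *_, sockaddr in socket.getaddrinfo("google.com", 80, proto=socket.IPPROTO_TCP):
    print(sockaddr[0])
```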
5) Telnet
Telnet is a protocol that was widely used in the past for accessing remote computer systems over the internet. It allows a
user to log in to a remote system and access its command line interface as if they were sitting at the remote system’s
keyboard. Telnet was one of the first widely used remote access protocols, and it was particularly popular in the days of
mainframe computers and timesharing systems.
Telnet operates on the Application Layer of the OSI model and uses a client-server architecture. The client program, which
is typically run on a user’s computer, establishes a connection to a Telnet server, which is running on the remote system.
The user can then send commands to the server and receive responses.
While Telnet was widely used in the past, it has largely been replaced by more secure protocols such as SSH (Secure Shell).
Telnet is not considered a secure protocol, as it sends all data, including passwords, in plain text. This makes it vulnerable
to eavesdropping and interception. In addition, Telnet does not provide any encryption for data transmission, which makes
it vulnerable to man-in-the-middle attacks.
Today, Telnet is primarily used for debugging and testing network services, and it is not typically used for accessing remote
systems for daily use. Instead, most users access remote systems using protocols such as SSH, which provide stronger
security and encryption.
6) SSH
SSH (Secure Shell) is a secure network protocol used to remotely log into and execute commands on a computer. It’s
commonly used to remotely access servers for management and maintenance purposes, but it can also be used for secure
file transfers and tunneling network connections.
With SSH, you can securely connect to a remote computer and execute commands as if you were sitting in front of it. All
data transmitted over the network is encrypted, which provides a high level of security for sensitive information. This makes
it a useful tool for securely accessing servers, especially over an unsecured network like the internet.
SSH can be used on a variety of platforms, including Windows, Linux, macOS, and UNIX. It’s widely used by system
administrators, developers, and other IT professionals to securely manage remote servers and automate tasks.
In addition to providing secure access to remote computers, SSH can also be used to securely tunnel network connections,
which allows you to securely connect to a remote network through an encrypted channel. This can be useful for accessing
resources on a remote network or bypassing network restrictions.
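As an illustration, here is a hedged sketch using the third-party paramiko library (one common way to script SSH from Python, not the only one; the hostname and credentials are placeholders):

```python
import paramiko   # third-party library: pip install paramiko

# Connect to a remote host and run a command over an encrypted channel.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # demo only; verify host keys in production
client.connect("server.example.com", username="admin", password="secret")   # placeholders

stdin, stdout, stderr = client.exec_command("uptime")   # run a remote command
print(stdout.read().decode())                           # its output

client.close()
```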
7) NFS
NFS stands for "Network File System," and it is a protocol that allows a computer to share files and directories over a
network. NFS was developed by Sun Microsystems in the 1980s and is now an open standard maintained through the
Internet Engineering Task Force (IETF).
NFS enables a computer to share its file system with another computer over the network, allowing users on the remote
computer to access files and directories as if they were local to their own computer. This makes it possible for users to work
with files and directories on remote systems as if they were on their own computer, without having to copy the files back
and forth.
NFS operates on the Application Layer of the OSI model and uses a client-server architecture. The computer sharing its file
system is the NFS server, and the computer accessing the shared files is the NFS client. The client sends requests to the
server to access files and directories, and the server sends back responses with the requested information.
NFS is widely used in enterprise environments and has been implemented on many operating systems, including Linux,
Unix, and macOS. It provides a simple and efficient way for computers to share files over a network and is particularly
useful for environments where multiple users need to access the same files and directories.
8) SNMP
SNMP (Simple Network Management Protocol) is a standard protocol used for managing and monitoring network devices,
such as routers, switches, servers, and printers. It provides a common framework for network management and enables
network administrators to monitor and manage network devices from a central location.
SNMP allows network devices to provide information about their performance and status to a network management system
(NMS), which can then use this information to monitor the health and performance of the network. This information can
also be used to generate reports, identify trends, and detect problems.
SNMP operates using a client-server model, where the network management system acts as the client and the network
devices act as servers. The client sends SNMP requests to the servers, which respond with the requested information. The
information is stored in a management information base (MIB), which is a database of objects that can be monitored and
managed using SNMP.
SNMP provides a flexible and scalable way to manage and monitor large networks, and it’s supported by a wide range of
network devices and vendors. It’s an essential tool for network administrators and is widely used in enterprise networks and
service provider networks.
9) DHCP
DHCP stands for "Dynamic Host Configuration Protocol," and it is a network protocol used to dynamically assign IP
addresses to devices on a network. DHCP is used to automate the process of assigning IP addresses to devices, eliminating
the need for a network administrator to manually assign IP addresses to each device.
DHCP operates on the Application Layer of the OSI model and uses a client-server architecture. The DHCP server is
responsible for managing a pool of available IP addresses and assigning them to devices on the network as they request
them. The DHCP client, typically built into the network interface of a device, sends a broadcast request for an IP address
when it joins the network. The DHCP server then assigns an IP address to the client and provides it with information about
the network, such as the subnet mask, default gateway, and DNS servers.
The DHCP protocol provides several benefits. It reduces the administrative overhead of managing IP addresses, as the
DHCP server automatically assigns and manages IP addresses. It also provides a flexible way to manage IP addresses, as
the DHCP server can easily reassign IP addresses to different devices if needed. Additionally, DHCP provides a way to
centrally manage IP addresses and network configuration, making it easier to make changes to the network configuration.
DHCP is widely used in most networks today and is supported by many operating systems, including Windows, Linux, and
macOS. It is an essential component of most IP networks and is typically used in conjunction with other network protocols,
such as TCP/IP and DNS, to provide a complete solution for network communication.
10) RIP
RIP (Routing Information Protocol) is a distance-vector routing protocol that is used to distribute routing information within
a network. It’s one of the earliest routing protocols developed for use in IP (Internet Protocol) networks, and it’s still widely
used in small to medium-sized networks.
RIP works by exchanging routing information between routers in a network. Each router periodically sends its routing table,
which lists the network destinations it knows about and the distance (measured in hop count) to each destination. Routers
use this information to update their own routing tables and determine the best path to a particular destination.
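To illustrate the distance-vector idea, here is a small, hedged Python sketch of the core update rule: a router merges a neighbour's advertised table into its own, counting one extra hop for the link to that neighbour. The network names and hop counts are invented for the example:

```python
# A routing table maps destination network -> distance in hops.
def merge_neighbor_table(own: dict, neighbor: dict) -> dict:
    """Distance-vector update: a route via the neighbor costs the
    neighbor's distance plus 1 hop; keep whichever path is shorter."""
    updated = dict(own)
    for dest, hops in neighbor.items():
        via_neighbor = hops + 1
        if dest not in updated or via_neighbor < updated[dest]:
            updated[dest] = via_neighbor
    return updated

# Invented example: router A processes an advertisement from neighbor B.
router_a = {"net1": 1, "net2": 4}
router_b = {"net2": 1, "net3": 2}
print(merge_neighbor_table(router_a, router_b))
# {'net1': 1, 'net2': 2, 'net3': 3}  -- net2 is now cheaper via B
```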
RIP has a simple and straightforward operation, which makes it easy to understand and configure. However, it also has
some limitations, such as its slow convergence time and limited scalability. In large networks, RIP can become slow and
inefficient, which is why it’s often replaced by more advanced routing protocols such as OSPF (Open Shortest Path First)
or EIGRP (Enhanced Interior Gateway Routing Protocol).
Despite its limitations, RIP is still widely used in small and medium-sized networks because of its simplicity and
compatibility with a wide range of networking devices. It’s also commonly used as a backup routing protocol in case of
failure of the primary routing protocol.
11. IMAP
The Internet Message Access Protocol (IMAP) is a protocol for receiving email. Protocols standardize technical processes
so computers and servers can connect with each other regardless of whether or not they use the same hardware or software.
A key feature of IMAP is that it allows users to access their emails from any device. This is because IMAP acts as an
intermediary between email servers and email clients, rather than downloading emails from the server onto the email client.
Compare this aspect of IMAP to the differences between using Microsoft Word and Google Docs. Microsoft Word
documents are saved locally to a computer and can be transported via email attachments or USB drives, but they do not
update dynamically. If, for example, Sally makes changes to their Word document, those modifications are only saved to
Sally's computer (and not to the version Linda might have on her computer).
By comparison, Google Docs can be accessed via the Internet on different devices, and update dynamically when a user
makes changes to a file. In this scenario, any change Sally makes to a shared file would be visible to Linda, even if they use
different computers to access the same document. Similarly, using IMAP, users can access their email accounts from
different devices without any differences in experience, and do not necessarily need to be on the device where they originally
read the email.
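A hedged sketch of reading mail in place with Python's standard imaplib; the server, credentials, and mailbox contents are placeholders, and the message itself stays on the server:

```python
import imaplib

# Connect to the mail server over SSL and open the inbox.
M = imaplib.IMAP4_SSL("imap.example.com")       # placeholder server
M.login("user@example.com", "app-password")     # placeholder credentials
M.select("INBOX")

typ, data = M.search(None, "ALL")               # message numbers in the mailbox
latest = data[0].split()[-1]                    # assumes the mailbox is not empty
typ, msg_data = M.fetch(latest, "(RFC822)")     # fetch the raw message
print(msg_data[0][1][:200])                     # first 200 bytes of the email

M.logout()                                      # the email remains on the server
```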
12. POP3
POP (Post Office Protocol) is an application layer protocol with which email can be retrieved from a server and delivered
to a local client. Users can manage and read email messages locally on their devices using POP. POP3 is the most recent
iteration of POP. For unencrypted transmission, it uses TCP port 110, and for encrypted communication, it uses port 995.
IMAP downloads a copy of the email and leaves the original on the server, whereas POP3 downloads and deletes it from
the server. POP3 is a text-based protocol that involves requests from the client and answers from the server.
The POP3 protocol has commands such as USER, PASS, LIST, RETR, and QUIT.
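A hedged sketch with Python's standard poplib; the server and credentials are placeholders. Behind the scenes the library issues the USER, PASS, LIST/RETR, and QUIT commands mentioned above:

```python
import poplib

# Connect over the encrypted POP3 port (995) and authenticate.
p = poplib.POP3_SSL("pop.example.com", 995)     # placeholder server
p.user("user@example.com")                      # USER command
p.pass_("app-password")                         # PASS command

count, size = p.stat()                          # number of messages and total size
print(count, "messages,", size, "bytes")

if count:
    resp, lines, octets = p.retr(1)             # RETR 1: download the first message
    print(b"\n".join(lines)[:200])              # start of the raw message

p.quit()                                        # QUIT command
```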
13. MIME
MIME stands for Multipurpose Internet Mail Extension. This protocol is designed to extend the capabilities of the existing
Internet email protocol like SMTP. MIME allows non-ASCII data to be sent via SMTP. It allows users to send/receive
various kinds of files over the Internet, like audio, video, programs, etc. MIME is not a standalone protocol; it works in
collaboration with other protocols to extend their capabilities.
MIME transforms non-ASCII data at the sender's side into NVT 7-bit data and delivers it to the client SMTP. On the
receiver's side, the message is transformed back into the original data. This is also how we can send video and audio
using MIME: it transfers them as 7-bit ASCII data as well.
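A sketch of how MIME structure looks from Python's standard email library; the addresses and the attached file are placeholders:

```python
from email.message import EmailMessage

# Build a message carrying non-ASCII text plus a binary attachment.
# MIME headers and transfer encodings let it travel over 7-bit SMTP.
msg = EmailMessage()
msg["From"] = "sender@example.com"              # placeholder addresses
msg["To"] = "recipient@example.com"
msg["Subject"] = "MIME demo"
msg.set_content("Non-ASCII text: héllo wörld")  # encoded automatically

with open("clip.mp3", "rb") as f:               # placeholder audio file
    msg.add_attachment(f.read(), maintype="audio", subtype="mpeg",
                       filename="clip.mp3")

# The generated header shows how MIME describes the combined parts.
print(msg["Content-Type"])                      # multipart/mixed; boundary="..."
```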
***********************