DCN Sanjivani
The TCP/IP protocol suite, which stands for Transmission Control Protocol/Internet Protocol, is a
set of networking protocols that are used to establish and maintain communication between
devices on the internet. It is the foundation of the modern internet and is responsible for the
reliable transmission of data across different networks.
TCP/IP specifies how data is exchanged over the internet by providing end-to-end communications
that identify how it should be broken into packets, addressed, transmitted, routed and received at
the destination. TCP/IP requires little central management and is designed to make networks
reliable with the ability to recover automatically from the failure of any device on the network.
The two main protocols in the IP suite serve specific functions. TCP defines how applications can
create channels of communication across a network. It also manages how a message is assembled
into smaller packets before they are then transmitted over the internet and reassembled in the
right order at the destination address.
IP defines how to address and route each packet to make sure it reaches the right destination. A
subnet mask tells a computer, or other network device, what portion of the IP address is used to
represent the network and what part is used to represent hosts, or other
computers, on the network.
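For illustration, Python's standard ipaddress module makes this network/host split explicit (the address and mask below are illustrative):

    import ipaddress

    # A /24 mask (255.255.255.0) marks the first three octets as the network
    # part and the last octet as the host part.
    iface = ipaddress.ip_interface("192.168.1.10/24")
    print(iface.network)    # 192.168.1.0/24 -- the network portion
    print(iface.netmask)    # 255.255.255.0
    print(iface.ip)         # 192.168.1.10   -- this host on that network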
• Hypertext Transfer Protocol (HTTP) handles the communication between a web server and
a web browser.
• HTTP Secure handles secure communication between a web server and a web browser.
• File Transfer Protocol handles transmission of files between computers.
❖ Packet switching is a switching technique in which the message is not sent in one go;
instead, it is divided into smaller pieces, and these pieces are sent individually.
❖ The message splits into smaller pieces known as packets and packets are given a unique
number to identify their order at the receiving end.
❖ Every packet contains some information in its headers such as source address, destination
address and sequence number.
❖ Packets travel across the network, each taking the shortest available path.
❖ All the packets are reassembled at the receiving end in correct order.
❖ If any packet is missing or corrupted, a message is sent back asking the sender to resend that packet.
❖ If the correct order of the packets is reached, then the acknowledgment message will be
sent.
❖ There are two approaches to Packet Switching: 1.Datagram Packet switching 2.Virtual
Circuit Switching.
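A toy Python sketch of this splitting and in-order reassembly (the payload size is an assumed value, and no real network is involved):

    import random

    message = "HELLO, PACKET-SWITCHED WORLD"
    PAYLOAD = 5  # assumed payload bytes per packet

    # Split the message into sequence-numbered packets (the "header" here is
    # just the sequence number).
    packets = [(seq, message[i:i + PAYLOAD])
               for seq, i in enumerate(range(0, len(message), PAYLOAD))]

    random.shuffle(packets)  # packets may arrive out of order via different paths

    # The receiver reassembles by sorting on the sequence number.
    reassembled = "".join(payload for _, payload in sorted(packets))
    assert reassembled == message
    print(reassembled)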
The basis of network classification depends on several factors, including the geographical scope,
the purpose of the network, and the method of connection. Here are the primary criteria for
network classification:
❖ Geographical Scope:
• Local Area Network (LAN): A LAN is a network that covers a small geographic area,
typically within a single building or campus.
• Metropolitan Area Network (MAN): A MAN covers a larger geographical area, such
as a city or a metropolitan area.
• Wide Area Network (WAN): A WAN spans across vast distances, connecting devices
across cities, countries, or even continents.
❖ Method of Connection:
• Wired Network: A wired network uses physical cables, such as Ethernet cables, to
connect devices.
• Wireless Network: A wireless network utilizes wireless signals, such as Wi-Fi, to
connect devices without the need for physical cables.
Network classification helps in understanding the scale, purpose, and infrastructure requirements
of a particular network. It also influences the design, implementation, and management of
network systems to meet specific needs and ensure efficient communication and resource sharing
among devices.
• What are the various types of Transmission media in use?
Transmission media is a means of establishing a communication channel to send and receive
information in the form of electromagnetic signal waves. Because the medium consists of physical
elements, it is considered to sit beneath the physical layer and is driven by the physical layer above
it. A Local Area Network (LAN), which contains both the transmitter and the receiver, operates over
such a transmission medium. The electrical or optical signals are transmitted through either
copper-based or fiber-based transmission media.
Guided Media:
Guided media is also referred to as wired media. It is sometimes also called bounded media
because the signal is bounded within a specific physical path in the communication network. In
guided media, the transmission signal properties are controlled and focused in a fixed, constricted
channel, implemented with physically connected conductors. One of the most prominent aspects
of guided media is its fast transmission velocity. Other reasons why users choose guided media
over unguided media include transmission security and the ability to manage the network within a
limited geographical area.
Advantages:
• The cost of guided media is very low, and it is easily available.
• It is flexible and lightweight.
• It is very easy to set up and install.
• It provides high transmission speed.
Disadvantages:
• It can be less secure (for example, susceptible to physical tapping).
• Bandwidth can be limited.
Unguided Media:
Unguided media is described as a wireless transmission medium without a physical link to the
network's nodes or servers.
• How do Network losses and delays occur? What are the different types of Network
delay? Explain
Network losses and delays can occur due to various factors in a computer network. Here are some
common reasons for network losses and delays:
1. Congestion: When there is high traffic or congestion on a network, it can lead to packet
loss and delays. Congestion occurs when the network's capacity is exceeded, causing
packets to be dropped or delayed in transmission.
2. Network Errors: Network errors can occur due to issues in the physical network
infrastructure, such as faulty cables, connectors, or network interface cards. These errors
can result in packet loss or delays.
3. Latency: Latency refers to the time it takes for data to travel from the source to the
destination. It can be caused by various factors, including the physical distance between
devices, the network's routing protocols, and the processing time at network devices.
4. Jitter: Jitter is the variation in packet delay, causing unevenness in the delivery of packets.
It can occur due to congestion, varying network conditions, or inconsistent network device
performance. Jitter can lead to poor audio and video quality in real-time applications.
5. Queuing Delays: When packets arrive at a network device faster than they can be
transmitted, they are placed in a queue. Queuing delays occur when packets have to wait
in the queue before being transmitted, leading to increased latency.
6. Processing Delays: Network devices, such as routers and switches, need to process packets
before forwarding them to the next hop. Processing delays can occur due to factors like
packet inspection, routing table lookups, and other processing tasks performed by the
network device.
• Transmission Delay: This is the time taken to transmit a packet over a link and is
determined by the packet size and the link's transmission rate.
• Propagation Delay: Propagation delay is the time it takes for a packet to travel from the
source to the destination, influenced by the physical distance between the devices and the
speed of the signal propagation.
• Processing Delay: Processing delay occurs when a network device needs to perform tasks
like packet header analysis, error checking, and routing table lookups before forwarding
the packet.
• Queueing Delay: Queueing delay happens when packets have to wait in a queue at a
network device before being transmitted, caused by congestion or limited resources.
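Putting the four components together: the total nodal delay is d_nodal = d_proc + d_queue + d_trans + d_prop, where d_trans = L/R for a packet of L bits on a link of rate R bits/s, and d_prop = d/s for a link of length d and signal propagation speed s. A small worked sketch in Python, using assumed, illustrative values:

    packet_bits = 1000 * 8        # L: a 1000-byte packet
    link_rate_bps = 1_000_000     # R: a 1 Mbps link
    distance_m = 1_000_000        # d: 1000 km of fiber
    propagation_mps = 2e8         # s: roughly 2x10^8 m/s in fiber

    d_trans = packet_bits / link_rate_bps   # 0.008 s  (8 ms)
    d_prop = distance_m / propagation_mps   # 0.005 s  (5 ms)
    d_proc = 20e-6                          # assumed 20 us of processing
    d_queue = 1e-3                          # assumed 1 ms of queueing

    d_nodal = d_proc + d_queue + d_trans + d_prop
    print(f"total nodal delay: {d_nodal * 1e3:.2f} ms")   # ~14.02 ms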
Understanding these different types of delays helps network administrators and engineers identify
and troubleshoot performance issues in the network, optimize network design, and implement
measures to minimize losses and delays.
Chapter 2 :
User Device
|
DNS Resolver
|
Recursive DNS Resolver
|
Root Server
|
TLD Server
|
Destination IP Address
Working of DNS:
• DNS is a client/server network communication protocol. DNS clients send requests to the
server, while DNS servers send responses to the clients.
• A request containing a name that is converted into an IP address is known as a forward
DNS lookup, while a request containing an IP address that is converted into a name is
known as a reverse DNS lookup.
• DNS implements a distributed database to store the name of all the hosts available on the
internet.
• If a client such as a web browser sends a request containing a hostname, a piece of
software called the DNS resolver sends a request to the DNS server to obtain the IP address
for that hostname. If the DNS server does not hold the IP address associated with the
hostname, it forwards the request to another DNS server. Once the IP address arrives at the
resolver, the resolver returns it to the client, which then completes the request over the
Internet Protocol.
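For illustration, both lookup directions can be tried with Python's standard socket module (this requires network access, www.example.com is just an example hostname, and the reverse lookup succeeds only if a PTR record exists):

    import socket

    # Forward DNS lookup: hostname -> IP address.
    ip = socket.gethostbyname("www.example.com")
    print("forward lookup:", ip)

    # Reverse DNS lookup: IP address -> hostname.
    try:
        host, _aliases, _addrs = socket.gethostbyaddr(ip)
        print("reverse lookup:", host)
    except socket.herror:
        print("no reverse (PTR) mapping found")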
In file distribution activities, two common architectures used are Client-Server and Peer-to-Peer
(P2P). Let's explore how each architecture works:
• Client-Server Architecture:
In a Client-Server architecture, there are two main components: the client and the server.
❖ Client: The client is the user's device that requests and receives files from the server. It
typically runs client software or applications to interact with the server.
❖ Server: The server is a powerful computer or a network of computers that store files
and respond to client requests. It runs server software responsible for managing file
distribution.
Working:
In client-server architecture, the server acts as a centralized entity, managing file storage and
distribution. Clients rely on the server to access and retrieve files. This architecture provides
centralized control, efficient management, and scalability. However, it may face limitations in
handling high traffic and can be a single point of failure if the server becomes unavailable.
• Peer-to-Peer (P2P) Architecture:
In a P2P architecture, there is no dedicated central server; each participating device (peer)
can act as both a client and a server, requesting files from other peers and serving files to
them.
Working:
❖ Peers join the P2P network and make their files available for sharing.
❖ When a peer requests a file, it searches the network for other peers hosting the
desired file.
❖ Once the requesting peer identifies other peers with the file, it establishes direct
connections to download the file from multiple sources simultaneously.
❖ As the file downloads, the requesting peer may also become a source and share the
downloaded parts with other peers.
In P2P architecture, each peer contributes resources (storage, bandwidth) and helps distribute
files. This decentralized approach offers advantages such as scalability, fault tolerance, and load
balancing. P2P networks can handle high traffic and are resilient to individual peer failures.
However, managing security, ensuring fairness, and maintaining efficient file discovery can be
challenging in large P2P networks.
Both architectures have their strengths and weaknesses, and their suitability depends on the
specific requirements of the file distribution activity. Client-Server architecture is well-suited for
centralized control and management, while P2P architecture excels in scalability and decentralized
resource sharing.
• What is the difference between persistent HTTP with pipelining and without pipelining?
Persistent HTTP, also known as HTTP keep-alive, is a feature that allows multiple HTTP requests
and responses to be sent over a single TCP connection. This helps reduce the overhead of
establishing and tearing down connections for each request. Pipelining, on the other hand, is a
technique that allows sending multiple HTTP requests without waiting for the corresponding
responses.
Here are the main differences between persistent HTTP with pipelining and without pipelining:
With pipelining:
• Multiple HTTP requests are sent over a single TCP connection without waiting for the
corresponding responses; the requests are sent in a pipelined manner, one after the other.
• The server processes the requests sequentially and sends the responses back in the same
order in which the requests were received.
Without pipelining:
• The client issues a new request over the persistent connection only after the response to
the previous request has arrived, so request-response cycles are serialized even though the
connection is reused.
In both cases persistence avoids repeated connection establishment and teardown; pipelining
additionally improves overall performance by allowing concurrent request-response cycles over a
single connection.
The main benefit of persistent HTTP with pipelining is that it enables more efficient utilization of
the TCP connection by overlapping request and response transmission. It helps reduce latency and
improves throughput by minimizing the impact of connection setup and teardown.
It's worth noting that while pipelining can offer performance benefits, it may also introduce some
challenges. For example, if a single request in the pipeline encounters an error or is slow to
process, it can block the processing of subsequent requests, causing a phenomenon known as the
"head-of-line blocking." This limitation has led to the decreased usage of pipelining in favor of
newer techniques like HTTP/2 and HTTP/3, which offer more advanced multiplexing and
concurrency features.
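As an illustration, here is a minimal raw-socket sketch that pipelines two GET requests over one persistent connection. The host example.com is only an example, and note that many modern servers ignore or reject pipelined requests:

    import socket

    # Two back-to-back GET requests; the second asks the server to close the
    # connection afterwards so the read loop below terminates.
    request = (
        "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
        "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
    )

    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(request.encode("ascii"))
        response = b""
        while chunk := sock.recv(4096):   # read until the server closes
            response += chunk

    # Both responses arrive, in order, over the single TCP connection.
    print(response.count(b"HTTP/1.1"), "status lines received")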
• What are the various standard services extended by Application Layer Protocol?
The application layer protocol extends various standard services that facilitate communication
between applications or services running on different devices. Some of the commonly provided
standard services by application layer protocols include:
1. File Transfer: Application layer protocols like FTP (File Transfer Protocol) and TFTP (Trivial
File Transfer Protocol) provide services for transferring files between systems. They enable
uploading, downloading, and management of files on remote servers.
2. Email Services: SMTP (Simple Mail Transfer Protocol) is a widely used application layer
protocol for sending and receiving email messages. It handles the transmission of email
between mail servers and client applications.
3. Web Services: HTTP (Hypertext Transfer Protocol) is the fundamental application layer
protocol for accessing and retrieving resources on the World Wide Web. It enables web
browsers and web servers to communicate and exchange information, allowing users to
browse websites, submit forms, and retrieve web pages.
4. Domain Name System (DNS): DNS is an application layer protocol used for translating
domain names (e.g., www.example.com) into IP addresses. It provides services for domain
name resolution, allowing devices to locate and communicate with specific servers on the
internet.
5. Remote Login and Execution: Protocols like SSH (Secure Shell) and Telnet provide services
for remotely logging into and executing commands on remote systems. They establish
secure or non-secure remote terminal sessions for administrative purposes.
6. Voice and Video Communication: Protocols such as SIP (Session Initiation Protocol) and
H.323 enable real-time voice and video communication over IP networks. They provide
services for establishing, managing, and terminating multimedia sessions between
participants.
7. Network Time Synchronization: Protocols like NTP (Network Time Protocol) provide
services for synchronizing the clocks of network devices. They enable accurate
timekeeping, which is essential for various applications that rely on synchronized time,
such as logging and authentication.
These are just a few examples of the standard services extended by application layer protocols.
There are many other protocols catering to different communication needs, ranging from remote
access to database queries, streaming media, and more. The application layer protocols play a
crucial role in enabling different applications and services to interact and communicate effectively
over networks.
• Compare SMTP and HTTP protocols.
SMTP (Simple Mail Transfer Protocol) and HTTP (Hypertext Transfer Protocol) are both application
layer protocols used for communication over computer networks. However, they serve different
purposes and have distinct features. Let's compare SMTP and HTTP:
Purpose:
• SMTP: SMTP is primarily used for sending and receiving email messages between mail
servers. It handles the transmission and delivery of emails over the internet.
• HTTP: HTTP is used for accessing and retrieving resources on the World Wide Web. It
facilitates the communication between web browsers and web servers to retrieve web
pages, submit forms, and interact with web applications.
Protocol Type:
• SMTP: SMTP is primarily a push protocol; the sending mail server pushes the message to
the receiving mail server. It typically runs over TCP port 25.
• HTTP: HTTP is primarily a pull protocol; the client pulls resources from the server. It
typically runs over TCP port 80 (443 for HTTPS).
Communication Model:
• SMTP: SMTP operates in a client-server model, where the client (sending mail server)
initiates the communication by connecting to the server (receiving mail server) and
delivering the email message. The server processes the email and delivers it to the
recipient's mailbox.
• HTTP: HTTP also follows a client-server model. The client (web browser) sends HTTP
requests to the server (web server) to fetch web pages or interact with web services. The
server processes the request and sends back an HTTP response containing the requested
resource or information.
Operations:
• SMTP: SMTP provides commands for mail transfer, such as HELO/EHLO (greeting),
MAIL FROM (specifying the sender), RCPT TO (specifying the recipient), DATA
(transmitting the email content), and QUIT (ending the session).
• HTTP: HTTP defines methods for various operations, including GET (retrieving
resources), POST (submitting data to a server), PUT (uploading resources), DELETE
(removing resources), and more. It also supports header fields for additional
information and control.
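As a sketch, Python's standard smtplib library drives exactly this SMTP command sequence; the mail server address and the e-mail addresses below are hypothetical placeholders:

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.com"     # placeholder sender
    msg["To"] = "bob@example.com"         # placeholder recipient
    msg["Subject"] = "Hello"
    msg.set_content("Testing SMTP.")

    # Assumes an SMTP server is reachable at this hypothetical address.
    with smtplib.SMTP("mail.example.com", 25) as server:
        server.ehlo()                 # EHLO greeting
        server.send_message(msg)      # issues MAIL FROM, RCPT TO and DATA
    # Leaving the 'with' block issues QUIT and ends the session.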
Security:
SMTP: SMTP does not provide built-in encryption and is susceptible to eavesdropping and
tampering. However, secure variants like SMTPS (SMTP over SSL/TLS) and STARTTLS (SMTP
with opportunistic encryption) can be used for secure email transmission.
HTTP: HTTP does not inherently provide security. However, HTTPS (HTTP Secure) uses
SSL/TLS encryption to secure the communication between the client and the server,
ensuring confidentiality and integrity of data.
• Explain: Cookies
A web server transmits certain messages to a web browser so that the web server can monitor the
user’s activity on a particular website; these messages are known as cookies. A cookie is a small
piece of information that a website stores on your computer and uses during your later
interactions with that website. When you visit the website again, your browser sends this
information back to the site.
• The web browser stores the message/information in a text file; the information is then
sent back to the server each time the browser requests a page from the server.
• The main aim of cookies is to identify users and perhaps prepare customized Web pages
for them.
• The name cookie is derived from UNIX objects called magic cookies. These are the tokens
that are attached to a user or program and switch depending on the areas entered by the
user or program.
• Cookies do not act maliciously on computer systems; they are only text files that can be
deleted at any time. They are not plug-ins, nor are they programs.
• Cookies cannot be used to spread viruses, and they cannot access your hard drive.
Uses of Cookies :
• Session management – Cookies let websites recognize users and recall their individual login
information and preferences.
• Tracking – E-commerce sites use cookies to track items users previously viewed, allowing
the sites to suggest other goods that may interest them.
• Personalization – Customized advertising is the main way cookies are used to personalize
your sessions.
• Explain: Web Caching
The activity of saving data for reuse, such as a copy of a web page supplied by a web server, is
known as web caching.
• A page is cached the first time a user accesses it, and the cache delivers the saved copy the
next time the same page is requested, preventing the origin server from becoming
overwhelmed.
• Web caching techniques dramatically improve page delivery speed and reduce the amount
of work required of the backend server.
• Caching can help protect against total outages by delivering content that has already been
cached while servers are unavailable.
• Varnish is a widely used web-caching solution (an open-source HTTP accelerator, also
available with commercial support) that provides robust web caching.
Chapter 3 :
• What do you mean by pipelined protocol? Explain: i. selective repeat ii. GBN
A pipelined protocol is a method used in computer networks to improve the efficiency of data
transmission by allowing multiple packets to be in transit simultaneously. It enables the sender to
transmit multiple packets without waiting for an acknowledgment for each individual packet
before sending the next one. The receiver also acknowledges multiple packets at once, rather than
acknowledging each packet separately.
i. Selective Repeat:
Selective Repeat is a specific pipelined protocol used in network communications. In this protocol,
the sender divides the data into a sequence of packets and assigns a unique sequence number to
each packet. The sender then sends the packets to the receiver, which buffers them until their
arrival is acknowledged.
When a packet is received correctly at the receiver, an acknowledgment (ACK) is sent back to the
sender to confirm its successful delivery. However, if a packet is lost or damaged during
transmission, the receiver discards the packet and does not send an ACK for that particular packet.
The key feature of Selective Repeat is that the sender keeps track of the acknowledgments
received from the receiver. If the sender doesn't receive an ACK for a specific packet within a
timeout period, it retransmits only that particular packet. The receiver, on the other hand, discards
duplicate packets and stores out-of-order packets until the missing packets arrive.
This selective retransmission mechanism allows for efficient retransmission of only the lost or
damaged packets, reducing unnecessary retransmissions and optimizing network utilization.
ii. Go-Back-N (GBN):
In Go-Back-N, the sender can transmit a continuous stream of packets without waiting for individual
acknowledgments. It maintains a window of packets that have been sent but not yet
acknowledged by the receiver. The receiver buffers the received packets and sends cumulative
acknowledgments (ACK) to the sender, indicating the highest sequence number received
successfully.
If a packet is lost or damaged during transmission, the receiver discards the packet and stops
further processing until the missing or damaged packet is retransmitted. The sender, upon
receiving an ACK, moves its window forward, discarding the acknowledged packets and sending
the next set of packets.
However, unlike Selective Repeat, Go-Back-N protocol requires the sender to retransmit all the
unacknowledged packets in its window if a timeout or error occurs. This means that the sender has
to go back to the beginning of the window and retransmit all packets from that point onwards.
While Go-Back-N is relatively simple to implement, it can lead to inefficient retransmissions and
limited network utilization, especially if there are frequent packet losses or errors. Nevertheless, it
is still used in certain scenarios where simplicity is prioritized over efficiency.
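A toy single-process simulation of this go-back behaviour (the window size and loss pattern are assumed for illustration; no real network is involved):

    WINDOW = 4
    TOTAL = 8
    base = 0                 # oldest unacknowledged packet
    next_seq = 0             # next packet to send
    lost_once = {2}          # assume packet 2 is lost on its first transmission
    retransmitted = set()

    while base < TOTAL:
        # Send everything the window allows.
        while next_seq < base + WINDOW and next_seq < TOTAL:
            print("send", next_seq)
            next_seq += 1
        if base in lost_once and base not in retransmitted:
            retransmitted.add(base)
            print("timeout -> go back to", base)
            next_seq = base          # resend ALL unacknowledged packets
        else:
            print("cumulative ACK", base)
            base += 1

Running this shows packets 3, 4 and 5 being retransmitted along with the lost packet 2, which is exactly the inefficiency Go-Back-N trades for its simplicity.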
• Explain Principle of Reliable Data Transfer Service Model.
The principle of the Reliable Data Transfer (RDT) service model is to ensure the reliable delivery of
data between a sender and a receiver over an unreliable communication channel. It provides
mechanisms to guarantee that data sent by the sender is received correctly by the receiver, even in
the presence of errors, packet loss, or delays.
The key elements of the RDT service model include 1. Acknowledgment 2. Retransmission 3.
Sequence Numbers 4. Error Detection and Correction 5. Flow Control
• It’s the responsibility of a reliable data transfer protocol to implement the service
abstraction of providing a reliable data transfer.
o This can be hard because the layers below it might be unreliable.
• We will use the terminology “packet” rather than transport-layer “segment”.
• We will only discuss unidirectional data transfer, that is, data transfer from the sending
side to the receiving side, not bidirectional (full-duplex) data transfer.
• With the help of FSM, explain rdt2.0 “operation with error” scenario.
To explain the "operation with error" scenario in the RDT 2.0 (Reliable Data Transfer) protocol
using a Finite State Machine (FSM), let's consider the basic components involved: a sender (S)
and a receiver (R) communicating over an unreliable channel.
The sender FSM for rdt2.0 has two states: "Wait for call from above" and "Wait for ACK or NAK";
the receiver has a single state in which it waits for a packet from below.
Here's how the "operation with error" scenario can be illustrated:
1. Initial State:
• S is in the "Wait for call from above" state, ready to accept data from the application;
R is waiting for a packet from below.
2. Error Scenario:
• S sends a packet (data plus a checksum) and moves to the "Wait for ACK or NAK" state.
• The packet is corrupted during transmission.
• R computes the checksum, detects the corruption, and replies with a NAK (negative
acknowledgment).
• S, upon receiving the NAK, retransmits the same packet and again waits for an ACK or NAK.
This cycle of NAK and retransmission continues until the packet is received uncorrupted, at which
point R delivers the data and replies with an ACK, and S returns to the "Wait for call from above"
state.
• With the help of FSM, explain TCP’s Congestion Control Mechanism: i. Congestion
Avoidance ii. Fast Recovery
TCP (Transmission Control Protocol) employs a congestion control mechanism to regulate the rate at which data is
transmitted in order to prevent network congestion. Two important components of TCP's congestion control are
Congestion Avoidance and Fast Recovery. Let's illustrate these mechanisms using a Finite State Machine (FSM):
i. Congestion Avoidance:
1) Initial State:
Sender (S): Normal transmission
2) Sender (S) Actions:
S sends packets at the maximum allowed rate without congestion.
3) Congestion Occurs:
The network becomes congested due to increased traffic or limited capacity.
4) Sender (S) Actions:
S detects congestion based on indications like packet loss or explicit congestion notification (ECN) feedback.
S reduces its sending rate to alleviate congestion.
5) Sender (S) State Transition:
S enters Congestion Avoidance state.
6) Sender (S) Actions:
S reduces the sending rate by decreasing the congestion window (cwnd).
S continues sending data, but at a slower rate to avoid further congestion.
7) Congestion Eases:
The network congestion decreases, indicating improved conditions.
8) Sender (S) Actions:
S increases the sending rate gradually to utilize the available capacity.
9) Sender (S) State Transition:
S returns to the Normal transmission state.
The Congestion Avoidance mechanism in TCP ensures that the sender's transmission rate is responsive to network
congestion. When congestion is detected, the sender reduces its sending rate, allowing the network to recover and
prevent further congestion. As the congestion eases, the sender gradually increases its rate to efficiently utilize the
network capacity.
ii. Fast Recovery:
1) Initial State:
Sender (S): Normal transmission
2) Sender (S) Actions:
S sends packets at the maximum allowed rate without congestion.
3) Packet Loss Occurs:
A packet sent by the sender is lost in transit.
4) Sender (S) Actions:
S detects the packet loss by receiving three duplicate acknowledgments (ACKs) for the segment preceding the lost
one (the fast retransmit trigger).
5) Sender (S) State Transition:
S enters Fast Recovery state.
6) Sender (S) Actions:
S reduces its congestion window (cwnd) by half to decrease the sending rate.
S retransmits the lost packet immediately.
7) Receiver (R) Actions:
While the lost packet is still missing, R sends a duplicate ACK for each subsequent out-of-order packet it
receives.
8) Sender (S) Actions:
S interprets the duplicate ACKs as a sign that packets are still being delivered, so the congestion is not severe.
For each additional duplicate ACK, S temporarily inflates cwnd by one segment instead of reducing it further.
9) Lost Packet Arrives:
The retransmitted packet finally arrives at the receiver, which responds with a new cumulative ACK.
10) Sender (S) Actions:
S receives the cumulative ACK for the lost packet, deflates cwnd back to the halved value, and transitions to the
Normal transmission (congestion avoidance) state.
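A toy per-RTT trace of the congestion window under this scheme (a Reno-style sketch; the initial values and loss rounds are assumed for illustration):

    # cwnd is measured in MSS units. Slow start doubles cwnd until ssthresh;
    # congestion avoidance then adds one per RTT; on a triple-duplicate-ACK
    # loss, ssthresh = cwnd / 2 and transmission resumes from ssthresh.
    cwnd, ssthresh = 1.0, 16.0
    loss_rounds = {8, 14}                 # rounds where a loss is detected

    for rtt in range(1, 17):
        print(f"RTT {rtt:2d}: cwnd = {cwnd:.1f}")
        if rtt in loss_rounds:
            ssthresh = cwnd / 2           # multiplicative decrease
            cwnd = ssthresh               # Reno: fast recovery, no slow start
        elif cwnd < ssthresh:
            cwnd *= 2                     # slow start: exponential growth
        else:
            cwnd += 1                     # congestion avoidance: additive increase

The printed trace shows the characteristic AIMD "sawtooth" shape of TCP's congestion window.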
• What is Multiplexing and De-multiplexing? Explain: Connection-less and Connection-
oriented de-multiplexing.
Multiplexing and Demultiplexing services are provided in almost every protocol architecture ever designed. UDP and TCP
perform the demultiplexing and multiplexing jobs by including two special fields in the segment headers: the source port
number field and the destination port number field.
• Multiplexing –
Gathering data from multiple application processes of the sender, enveloping that data with a header, and
sending them as a whole to the intended receiver is called multiplexing.
• Demultiplexing –
Delivering received segments at the receiver side to the correct app layer processes is called
demultiplexing.
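As a concrete illustration of port-based demultiplexing, two UDP sockets bound to different ports on the same host each receive only the datagrams addressed to their own port (a localhost sketch using Python's standard socket module):

    import socket

    # Two UDP sockets bound to different ports on the same host: the OS
    # demultiplexes incoming datagrams purely by destination port.
    sock_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock_a.bind(("127.0.0.1", 9001))
    sock_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock_b.bind(("127.0.0.1", 9002))

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"for A", ("127.0.0.1", 9001))
    sender.sendto(b"for B", ("127.0.0.1", 9002))

    print(sock_a.recvfrom(1024)[0])   # b'for A'
    print(sock_b.recvfrom(1024)[0])   # b'for B'

    for s in (sock_a, sock_b, sender):
        s.close()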
Connection-less De-multiplexing:
In UDP, a socket is identified by a two-tuple: the destination IP address and the destination port
number. When a UDP segment arrives, the host examines the destination port number and directs
the segment to the socket bound to that port, regardless of the segment's source.
Connection-oriented De-multiplexing:
In TCP, a socket is identified by a four-tuple: source IP address, source port number, destination IP
address, and destination port number. An arriving TCP segment is directed to the socket whose
four-tuple matches all four values, so two segments with the same destination port but different
sources are delivered to different sockets. The source and destination port fields in the TCP
segment header make this demultiplexing possible, alongside the fields used for establishing
connections, ensuring reliable data delivery, managing flow control, and handling various control
functions within the TCP protocol.
• Explain TCP’s Flow Control and Congestion Control Mechanism.
TCP (Transmission Control Protocol) utilizes two mechanisms to manage data transmission in a
reliable and efficient manner: flow control and congestion control.
Flow Control:
• Flow control ensures that the sender does not overwhelm the receiver with more data
than it can handle, preventing packet loss and buffer overflow at the receiver's end.
• The receiver controls the flow of data by specifying the receive window size in the TCP
header.
• The receiver advertises its available buffer space to the sender through the window size
field in TCP segments.
• The sender limits its transmission rate to match the receiver's window size, avoiding data
overflow at the receiver.
• As the receiver processes and frees up space in its buffer, it advertises an increased
window size, allowing the sender to send more data.
Congestion Control:
• Congestion control prevents network congestion by regulating the rate at which data is
transmitted into the network.
• It ensures that the sender's transmission rate is aligned with the network's capacity and
avoids overwhelming the network with excessive data.
• TCP employs several techniques for congestion control, including:
• Slow Start: Upon establishing a connection or recovering from a congestion event, the
sender starts with a small congestion window and doubles it each round-trip time, quickly
probing the network's capacity.
• Congestion Avoidance: Once the congestion window passes a threshold known as ssthresh
(the slow start threshold), the sender increases its sending rate only linearly, gently probing
for any additional capacity.
• Fast Retransmit and Recovery: When the sender receives duplicate acknowledgments
(indicating packet loss), it assumes network congestion and performs fast retransmission
by retransmitting the missing packet without waiting for a timeout. It then enters a
recovery phase to reduce congestion window size.
• Explicit Congestion Notification (ECN): TCP can use ECN to detect network congestion
signals from network devices. ECN allows routers to mark packets with an explicit
congestion indication, and the receiver informs the sender about congestion through TCP
flags.
• Timeout-based Retransmission: If the sender does not receive an acknowledgment within
a specified timeout period, it assumes packet loss due to congestion and retransmits the
packet.
By combining flow control and congestion control mechanisms, TCP ensures reliable and efficient
data transmission over networks. Flow control prevents overwhelming the receiver, while
congestion control prevents network congestion and optimizes data flow based on network
conditions. These mechanisms contribute to TCP's robustness and ability to adapt to varying
network environments.
Chapter 4 :
DHCP (Dynamic Host Configuration Protocol) is a network protocol used to automatically assign IP addresses
and other network configuration parameters to devices on a network. It simplifies the process of IP address
allocation and ensures efficient utilization of available IP addresses. The interaction proceeds as follows:
DHCP Discover (Client):
When a DHCP client connects to a network, it sends a DHCP Discover message as a broadcast.
The DHCP Discover message is used to locate available DHCP servers on the network.
1) DHCP Offer (Server):
Upon receiving the DHCP Discover message, DHCP servers respond with a DHCP Offer message.
The DHCP Offer message contains an available IP address, lease duration, subnet mask, and other
network configuration parameters.
2) DHCP Request (Client):
The client receives multiple DHCP Offer messages from different servers and chooses one offer.
The client then sends a DHCP Request message to the chosen DHCP server, indicating its acceptance
of the offered IP address.
3) DHCP Acknowledgment (Server):
The DHCP server, upon receiving the DHCP Request message, sends a DHCP Acknowledgment
message to the client.
The DHCP Acknowledgment message confirms the allocation of the requested IP address to the
client.
The message may also include additional configuration parameters, such as default gateway, DNS
server addresses, and domain name.
4) IP Address Assignment (Client):
The DHCP client receives the DHCP Acknowledgment message and configures its network interface
with the allocated IP address and other configuration parameters received from the server.
The client starts using the assigned IP address to communicate on the network.
5) Lease Renewal:
DHCP leases have a finite duration, after which the IP address lease expires.
To maintain network connectivity, the client must renew the lease before it expires.
The client can initiate the lease renewal process by sending a DHCP Request message to the server
that originally assigned the IP address.
The server responds with a DHCP Acknowledgment message, either renewing the lease or providing
a new IP address if the original IP address is unavailable.
6) Lease Release:
When a client disconnects from the network or no longer requires an IP address, it can release the
IP address lease.
The client sends a DHCP Release message to the DHCP server, indicating that it no longer needs the
IP address.
The DHCP server can then reclaim the IP address and make it available for other clients.
This DHCP client-server interaction enables automatic IP address assignment, simplifies network
configuration, and allows efficient management of IP addresses within a network environment. The DHCP
protocol streamlines the process of network connectivity for devices, reducing manual configuration efforts
and facilitating dynamic allocation of IP addresses.
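A toy trace of this Discover-Offer-Request-Acknowledgment ("DORA") exchange, with purely illustrative addresses and lease values:

    pool = ["192.168.1.100", "192.168.1.101"]   # illustrative address pool
    leases = {}

    def dhcp_handshake(client_mac):
        print(f"{client_mac} -> broadcast : DISCOVER")
        offer = pool.pop(0)
        print(f"server -> {client_mac} : OFFER {offer} (lease 3600 s)")
        print(f"{client_mac} -> server : REQUEST {offer}")
        leases[client_mac] = offer
        print(f"server -> {client_mac} : ACK {offer}")

    dhcp_handshake("aa:bb:cc:dd:ee:ff")
    print(leases)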
• Explain NAT protocol. What is the need and advantage of it?
NAT (Network Address Translation) is a protocol used to translate IP addresses between two
different network domains. It allows devices on a local network to communicate with devices on
external networks, such as the internet, by translating private IP addresses used within the local
network into public IP addresses used on the external network.
• NAT Protocol:
NAT operates at the network layer (Layer 3) of the OSI model.
It translates IP addresses and port numbers in IP packets between private and public
networks.
• Need for NAT:
Limited IPv4 Address Space: NAT addresses the limitation of available IPv4 addresses by
allowing multiple devices to share a single public IP address.
Private Network Addressing: Private IP addresses are reserved for use within private
networks.
However, these private IP addresses are not routable on the internet. NAT enables devices
with private IP addresses to communicate with the internet using a single public IP
address.
Security and Privacy: NAT acts as a barrier between the public internet and private
network, concealing the private IP addresses of devices from external networks. This adds
a layer of security and privacy, as external entities only see the public IP address assigned
by NAT.
• NAT Types:
Static NAT: Maps a private IP address to a specific public IP address. This type of NAT
provides a one-to-one mapping.
Dynamic NAT: Maps multiple private IP addresses to a pool of public IP addresses. The
mapping is dynamically assigned based on demand.
Network Address Port Translation (NAPT): Also known as Port Address Translation (PAT) or
Overloaded NAT, it maps multiple private IP addresses to a single public IP address by using
different port numbers to distinguish between multiple connections. This type of NAT is
commonly used in home networks and small office environments.
• Advantages of NAT:
IP Address Conservation: NAT allows multiple devices within a private network to share a
single public IP address, effectively conserving the limited IPv4 address space.
Security and Privacy: NAT acts as a firewall by hiding the private IP addresses of devices
from external networks, making it more difficult for unauthorized access to the devices.
Simplified Network Configuration: NAT simplifies the network configuration process by
eliminating the need to assign unique public IP addresses to each device within a private
network. This reduces the administrative overhead and complexity of IP address
management.
Seamless Integration of Private Networks: NAT enables the integration of private networks
with the public internet, allowing devices with private IP addresses to access internet
resources without requiring public IP addresses for each device.
Overall, NAT provides a practical solution for overcoming the limitations of IPv4 address space,
securing private networks, and simplifying network configuration. It allows for efficient utilization
of IP addresses, enhances network security, and facilitates seamless communication between
private and public networks.
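A toy NAPT translation table in Python (the public address and starting port number are illustrative):

    PUBLIC_IP = "203.0.113.5"       # assumed public address of the NAT box

    table = {}                      # (private IP, private port) -> public port
    reverse = {}                    # public port -> (private IP, private port)
    next_port = 5000

    def translate_out(priv_ip, priv_port):
        """Rewrite an outbound packet's source to the shared public address."""
        global next_port
        if (priv_ip, priv_port) not in table:
            table[(priv_ip, priv_port)] = next_port
            reverse[next_port] = (priv_ip, priv_port)
            next_port += 1
        return PUBLIC_IP, table[(priv_ip, priv_port)]

    def translate_in(pub_port):
        """Map an inbound reply back to the originating private host."""
        return reverse[pub_port]    # KeyError: unsolicited packet, dropped

    print(translate_out("192.168.1.10", 3345))   # ('203.0.113.5', 5000)
    print(translate_out("192.168.1.11", 3345))   # ('203.0.113.5', 5001)
    print(translate_in(5001))                    # ('192.168.1.11', 3345)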
• Draw and explain IPv4 & IPv6 Datagram structure.
IPv6 Datagram Structure:
1) Version (4 bits):
Specifies the IP protocol version, which is 6 for IPv6.
2) Traffic Class (8 bits):
Similar to the Type of Service field in IPv4, it represents the quality of service and
prioritization of traffic.
3) Flow Label (20 bits):
Used to identify and categorize packets belonging to the same flow or stream of data.
4) Payload Length (16 bits):
Indicates the length of the IPv6 payload (data) in bytes.
5) Next Header (8 bits):
Identifies the type of the next header or protocol following the IPv6 header.
6) Hop Limit (8 bits):
Replaces the Time to Live (TTL) field in IPv4, specifying the maximum number of hops the
packet can traverse.
7) Source IP Address (128 bits):
Specifies the IP address of the sender.
8) Destination IP Address (128 bits):
Indicates the IP address of the intended recipient.
9) Extension Headers (variable length, if present):
Optional headers that can be added to the IPv6 datagram for specific purposes, such as
fragmentation, security, or routing.
10) Data (variable length):
Contains the payload or data being transmitted.
The IPv6 datagram structure is more simplified compared to IPv4 and includes support for larger IP
addresses, improved header efficiency, and enhanced functionality through extension headers.
IPv6 was developed to address the limitations of IPv4 and accommodate the growing
requirements of modern networks.
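As a sketch, the fixed 40-byte IPv6 header can be built and parsed with Python's standard struct module; the field values below are synthetic:

    import struct

    src = bytes(15) + b"\x01"                     # ::1 (synthetic source)
    dst = bytes(15) + b"\x02"                     # ::2 (synthetic destination)
    # First 32 bits pack version (4), traffic class (8) and flow label (20).
    first_word = (6 << 28) | (0 << 20) | 0xABCDE
    header = struct.pack("!IHBB16s16s", first_word, 24, 17, 64, src, dst)

    word, payload_len, next_header, hop_limit, s, d = struct.unpack(
        "!IHBB16s16s", header)
    print("version      :", word >> 28)           # 6
    print("traffic class:", (word >> 20) & 0xFF)  # 0
    print("flow label   :", hex(word & 0xFFFFF))  # 0xabcde
    print("payload len  :", payload_len)          # 24 bytes
    print("next header  :", next_header)          # 17 = UDP
    print("hop limit    :", hop_limit)            # 64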
• Explain IP data fragmentation & Re-assembly mechanism.
IP data fragmentation and reassembly are mechanisms used in the Internet Protocol (IP) to handle
the transmission of data packets that exceed the maximum transmission unit (MTU) size of a
network. Let's explore these mechanisms in detail:
1) IP Data Fragmentation:
• Maximum Transmission Unit (MTU):
The MTU refers to the maximum size of a data packet that can be transmitted over a
particular network.
Different networks have different MTU sizes based on factors like link-layer protocols,
network infrastructure, and technology.
• Fragmentation:
When a data packet's size exceeds the MTU of a network along its path, IP fragmentation is
performed to break the packet into smaller fragments.
Each fragment is a separate IP packet that can be transmitted across the network without
exceeding the MTU size.
• Fragmentation Fields:
Identification: Each fragment carries the same identification value to identify the original IP
packet.
Flags: The "Don't Fragment" (DF) flag indicates whether the packet can be fragmented
further, while the "More Fragments" (MF) flag indicates if more fragments follow.
Fragment Offset: Indicates the position of the fragment within the original IP packet.
• Fragmentation Process:
When a router encounters an IP packet larger than the MTU of the outgoing interface, it
performs fragmentation.
The router creates multiple fragments by dividing the original packet into smaller pieces,
each fitting within the MTU.
The fragments are individually transmitted to the destination.
2) IP Data Reassembly:
• Reassembly Fields:
Identification: Used to match and identify fragments belonging to the same original IP
packet.
Flags: The DF and MF flags are checked during reassembly to determine if fragmentation is
allowed and if more fragments are expected.
Fragment Offset: Used to order and reassemble the fragments correctly.
• Reassembly Process:
The destination or an intermediate router collects and buffers the incoming fragments.
Fragments with the same identification value are identified as belonging to the same
original packet.
The fragments are ordered and reassembled based on their fragment offset.
The "More Fragments" flag is checked to determine if all fragments have arrived.
If all fragments have arrived, the original packet is reconstructed by concatenating the
fragments in the correct order.
The reassembled packet is then passed up the protocol stack for further processing.
It's important to note that IP fragmentation and reassembly add additional overhead and can introduce
potential delays and increased processing overhead. Therefore, minimizing fragmentation is generally
preferred, and network protocols such as Path MTU Discovery (PMTUD) are used to determine the maximum
MTU along the path to avoid fragmentation whenever possible.
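The classic worked example, sketched in Python: a 4000-byte IPv4 datagram (20-byte header) crossing a link with a 1500-byte MTU is split into three fragments, with offsets expressed in 8-byte units:

    # Per-fragment data must be a multiple of 8 bytes (except in the last
    # fragment), because the Fragment Offset field counts 8-byte units.
    def fragment(total_len, mtu, header_len=20):
        data_len = total_len - header_len
        max_data = (mtu - header_len) // 8 * 8       # 1480 bytes for MTU 1500
        frags, offset = [], 0
        while data_len > 0:
            chunk = min(max_data, data_len)
            data_len -= chunk
            frags.append({"data": chunk, "offset": offset // 8,
                          "MF": int(data_len > 0)})
            offset += chunk
        return frags

    for f in fragment(4000, 1500):
        print(f)
    # {'data': 1480, 'offset': 0,   'MF': 1}
    # {'data': 1480, 'offset': 185, 'MF': 1}
    # {'data': 1020, 'offset': 370, 'MF': 0}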
• Explain how the Link State Algorithm works?
The Link State Algorithm, which is based on Dijkstra's shortest-path algorithm, is used by routing
protocols to calculate the shortest path between nodes in a network. It is typically used in
link-state routing protocols like OSPF (Open Shortest Path First) and IS-IS (Intermediate System to
Intermediate System).
Each router measures the cost of its directly attached links and floods this information, as
link-state advertisements, to every other router, so that all routers build an identical map of the
complete network topology. Each router then independently runs Dijkstra's algorithm on this map:
starting from itself, it repeatedly selects the not-yet-visited node with the smallest known distance
and relaxes the distances of that node's neighbours, until the least-cost path to every destination is
known. The resulting shortest-path tree populates the router's forwarding table, as sketched below.
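A minimal Python sketch of Dijkstra's algorithm over an adjacency map of link costs (the six-node topology below is illustrative):

    import heapq

    def dijkstra(graph, source):
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                      # stale heap entry, skip
            for v, cost in graph[u].items():
                nd = d + cost
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd              # relax the edge (u, v)
                    heapq.heappush(heap, (nd, v))
        return dist

    graph = {
        "u": {"v": 2, "w": 5, "x": 1},
        "v": {"u": 2, "w": 3, "x": 2},
        "w": {"u": 5, "v": 3, "x": 3, "y": 1, "z": 5},
        "x": {"u": 1, "v": 2, "w": 3, "y": 1},
        "y": {"w": 1, "x": 1, "z": 2},
        "z": {"w": 5, "y": 2},
    }
    print(dijkstra(graph, "u"))   # least cost from u to every other node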
Chapter 5 :
• Taking-Turns Protocol:
The Taking-Turns protocol, also known as the Round-Robin protocol, is a communication
protocol that allows multiple participants to take turns transmitting data on a shared
channel. The protocol ensures fairness by providing each participant an equal opportunity
to transmit.
When it is a participant's turn, they have exclusive access to the channel and can send their
data without interference from other participants. The other participants must wait until
their turn arrives. This process continues in a cyclical manner until all participants have
completed their transmissions.
The Taking-Turns protocol is commonly used in situations where fairness and equal
opportunity for all participants are important. It ensures that no single participant
dominates the channel and prevents collisions between transmissions.
• Slotted ALOHA Protocol:
In the Slotted ALOHA protocol, time is divided into discrete slots, and each slot
corresponds to a fixed unit of time. The length of a slot is typically equal to the time
required to transmit the smallest unit of data. All devices in the network are synchronized
to these time slots.
When a device wants to transmit data, it waits for the beginning of the next time slot. If
the channel is idle during that slot, meaning no other device is transmitting, the device can
proceed to transmit its data without interference. However, if multiple devices attempt to
transmit simultaneously and a collision occurs, the data sent by those devices becomes
corrupted.
After a collision, each device waits for a random amount of time before attempting to
transmit again. This random backoff helps to reduce the chances of subsequent collisions.
The devices continue this process of waiting for a free slot and retransmitting until their
data is successfully transmitted.
The Slotted ALOHA protocol improves the efficiency of data transmission by reducing
collisions compared to the original ALOHA protocol. By dividing time into slots and
introducing synchronization, it allows devices to access the channel in a coordinated
manner, increasing the overall throughput of the network.
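A small Monte-Carlo sketch of this behaviour: with N nodes each transmitting in a slot with probability p = 1/N, the fraction of successful slots approaches the protocol's theoretical maximum efficiency of 1/e (about 0.37):

    import random

    # A slot succeeds only when exactly one node transmits in it.
    def efficiency(n_nodes, p, slots=100_000):
        successes = sum(
            1 for _ in range(slots)
            if sum(random.random() < p for _ in range(n_nodes)) == 1
        )
        return successes / slots

    N = 20
    print(efficiency(N, 1 / N))   # close to 1/e, roughly 0.37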
• Explain with example: i. CRC ii. Parity Check
i. CRC (Cyclic Redundancy Check):
Suppose we have a data sequence that we want to transmit: 101001. To generate the CRC
checksum, we use a CRC polynomial, which is a predetermined binary pattern used by both the
sender and receiver.
Let's use a simple CRC polynomial: 1101 (representing x^3 + x^2 + 1). We append three zeros
(equal to the degree of the polynomial, that is, one less than the number of bits in the divisor)
to the original data sequence, giving 101001000.
To calculate the CRC checksum, we perform bitwise modulo-2 (XOR) division: we divide the
appended data sequence by the CRC polynomial, and the remainder of this division is the CRC
checksum. Here the remainder works out to 001, so the transmitted sequence is the original data
followed by the checksum: 101001001. A sketch of this computation appears below.
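A minimal sketch of this modulo-2 division in plain Python:

    # Modulo-2 (XOR) long division: returns the CRC remainder as a bit string.
    def crc_remainder(data_bits, generator_bits):
        bits = list(data_bits)
        g = generator_bits
        for i in range(len(bits) - len(g) + 1):
            if bits[i] == "1":
                for j, gb in enumerate(g):
                    bits[i + j] = "0" if bits[i + j] == gb else "1"
        return "".join(bits[-(len(g) - 1):])

    data = "101001"
    rem = crc_remainder(data + "000", "1101")   # append degree-many zeros
    print(rem)                                  # 001
    print(data + rem)                           # transmitted: 101001001

    # Receiver check: dividing the received sequence leaves a zero remainder.
    print(crc_remainder(data + rem, "1101"))    # 000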
ii. Parity Check:
Suppose we want to transmit a 4-bit data sequence: 1011. We can use even parity, which means
the parity bit is set to make the total number of ones in the data (including the parity bit) an even
number.
To calculate the parity bit, we count the number of ones in the data sequence. In this case, there
are three ones. Since three is an odd number, we set the parity bit to 1, making the total number
of ones even.
So, the transmitted data sequence becomes: 10111, where the last bit is the parity bit.
At the receiver's end, the received data sequence is checked for errors by recalculating the parity
bit based on the received data. If the recalculated parity bit matches the received parity bit, it
suggests that no errors occurred during transmission. However, if he parity bits do not match, it
indicates that an error has occurred, and the data is considered corrupted.
Parity check is a simple and efficient technique to detect single-bit errors. However, it is not
suitable for detecting multiple bit errors or correcting errors
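A small even-parity sketch matching the example above:

    # Even parity: the parity bit makes the total count of ones even.
    def even_parity_bit(bits):
        return "1" if bits.count("1") % 2 else "0"

    data = "1011"
    sent = data + even_parity_bit(data)   # three ones -> parity 1 -> '10111'
    print("transmitted:", sent)

    # Receiver: the total number of ones must be even if no error occurred.
    print("ok" if sent.count("1") % 2 == 0 else "error detected")

    # A single flipped bit makes the count odd and is detected.
    corrupted = "00111"
    print("ok" if corrupted.count("1") % 2 == 0 else "error detected")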
• What are the various techniques for detection and correction?
There are several techniques for error detection and correction in communication networks and
storage systems. Here are some commonly used techniques:
• Parity Check: Parity check is a simple error-detection technique that uses a single
additional bit (parity bit) to check for errors. It can detect single-bit errors but cannot
correct them.
• Checksums: Checksums involve the generation of a checksum value based on the data
being transmitted. The receiver recalculates the checksum and compares it with the
received checksum to detect errors. Checksums can detect errors but generally cannot
correct them.
• Hamming Codes: Hamming codes are a class of error-correcting codes that add redundant
bits to the data to detect and correct errors. The number of redundant bits depends on the
size of the data being transmitted. Hamming codes can correct single-bit errors and detect
some multiple-bit errors.
• Reed-Solomon Codes: Reed-Solomon codes are widely used error-correcting codes that
can correct multiple-bit errors and detect and correct burst errors. They are commonly
used in applications such as data storage systems and digital communication.
• Forward Error Correction (FEC): FEC is an error-correction technique that adds redundant
information to the transmitted data. This redundant information allows the receiver to
detect and correct errors without the need for retransmission. FEC is used in various
communication systems, including satellite communications and digital television.
These are some of the commonly used techniques for error detection and correction. The choice of
technique depends on factors such as the type of errors expected, the required level of error
detection and correction, and the constraints of the specific application or system.
• Explain MAC addressing and Address Resolution Protocol (ARP)
• MAC Addressing:
MAC (Media Access Control) addressing is a unique identifier assigned to network interface
controllers (NICs) at the hardware level. It is a 48-bit address that is typically represented
in hexadecimal format and consists of six groups of two hexadecimal digits separated by
colons or hyphens (e.g., 00:1A:2B:3C:4D:5E).
Every device that connects to a network has a MAC address, including computers, routers,
switches, and other network-enabled devices. MAC addresses are used at the data link
layer of the OSI model to provide a globally unique identifier for each network device.
MAC addresses are assigned by the manufacturer of the network interface card and are
burned into the card's firmware. They are used for addressing and identification purposes
in local area networks (LANs) and are essential for communication between devices within
the same network segment.
• Address Resolution Protocol (ARP):
When a device wants to communicate with another device on the local network, it needs
to know the MAC address of the destination device. ARP helps in this process by resolving
the IP address to its corresponding MAC address.
1) Device A wants to send a packet to Device B, but it only knows the IP address of B.
2) Device A checks its ARP cache (a local table that stores recent IP-to-MAC mappings). If
Device B's IP address is found in the cache, the corresponding MAC address is
retrieved.
3) If the IP-to-MAC mapping is not found in the cache, Device A sends an ARP request
packet (broadcast) to all devices on the local network, asking, "Who has IP address X?
Please send me your MAC address."
4) All devices on the network receive the ARP request packet, but only the device with
the requested IP address (Device B) responds with its MAC address.
5) Device A receives the ARP reply packet containing Device B's MAC address.
6) Device A updates its ARP cache with the IP-to-MAC mapping for future use.
7) Device A can now encapsulate the packet with the MAC address of Device B and send
it over the network.
ARP is a stateless protocol, meaning it does not maintain long-term mappings between IP and MAC
addresses. Instead, devices periodically refresh their ARP caches by sending ARP requests for
known IP addresses to ensure accurate mappings.
ARP is essential for the proper functioning of local networks, enabling devices to communicate
with each other using IP addresses and their corresponding MAC addresses.
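A toy sketch of this lookup flow, using a dictionary as the ARP cache (all addresses are illustrative):

    arp_cache = {}                                  # IP -> MAC
    lan = {"192.168.1.20": "00:1a:2b:3c:4d:5e"}     # who actually owns which IP

    def resolve(ip):
        if ip in arp_cache:                         # step 2: cache hit
            return arp_cache[ip]
        print(f"broadcast: who has {ip}? tell 192.168.1.10")   # step 3
        mac = lan[ip]                               # step 4: the owner replies
        arp_cache[ip] = mac                         # step 6: cache the mapping
        return mac

    print(resolve("192.168.1.20"))   # triggers an ARP request
    print(resolve("192.168.1.20"))   # answered from the ARP cache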
• Draw and explain the Ethernet frame structure.
An Ethernet frame starts with a header, which contains the source and destination MAC addresses,
among other data. The middle part of the frame is the actual data. The frame ends with a field
called Frame Check Sequence (FCS).
The Ethernet frame structure is defined in the IEEE 802.3 standard. Here is a graphical
representation of an Ethernet frame and a description of each field in the frame:
Preamble – informs the receiving system that a frame is starting and enables synchronisation.
SFD (Start Frame Delimiter) – signifies that the Destination MAC Address field begins with the next
byte.
Destination MAC – identifies the receiving system.
Source MAC – identifies the sending system.
Type – defines the type of protocol inside the frame, for example IPv4 or IPv6.
Data and Pad – contains the payload data. Padding data is added to meet the minimum length
requirement for this field (46 bytes).
FCS (Frame Check Sequence) – contains a 32-bit Cyclic Redundancy Check (CRC) which allows
detection of corrupted data.
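For illustration, the 14-byte Ethernet header (destination MAC, source MAC, type) can be packed and parsed with Python's standard struct module. The frame below is synthetic, and the preamble, SFD and FCS are omitted because they are normally handled by the hardware:

    import struct

    dst = bytes.fromhex("001a2b3c4d5e")
    src = bytes.fromhex("665544332211")
    frame = struct.pack("!6s6sH", dst, src, 0x0800) + b"payload..."

    d, s, ethertype = struct.unpack("!6s6sH", frame[:14])

    def fmt(mac):
        return ":".join(f"{b:02x}" for b in mac)

    print("destination :", fmt(d))          # 00:1a:2b:3c:4d:5e
    print("source      :", fmt(s))          # 66:55:44:33:22:11
    print("type        :", hex(ethertype))  # 0x800 = IPv4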
The switch table, also known as the MAC address table or forwarding table, is a critical component
of network switches. It plays a significant role in forwarding Ethernet frames within a local area
network (LAN). Here's a brief explanation of the significance of the switch table:
MAC Address Learning: The switch table dynamically learns MAC addresses by examining the
source MAC address of incoming frames. When a switch receives a frame, it records the source
MAC address along with the incoming port in the switch table. This learning process allows the
switch to build a database of MAC address-to-port mappings.
MAC Address Forwarding: The switch table is used to determine the outgoing port for each
incoming frame based on the destination MAC address. When a switch receives a frame, it looks
up the destination MAC address in the switch table. If a matching entry is found, the switch
forwards the frame only to the port associated with that MAC address. This process is known as
unicast forwarding.
Broadcast and Unknown Unicast Flooding: If the destination MAC address is not found in the
switch table or if the frame is a broadcast frame, the switch floods the frame to all ports except the
incoming port. This ensures that the frame reaches all devices in the network segment. Unknown
unicast frames are also flooded to all ports to handle situations where the MAC address of the
destination device is not yet known to the switch.
Efficient Traffic Forwarding: By maintaining MAC address mappings in the switch table, switches
can forward frames directly to the intended destination device, improving network efficiency.
Switches avoid unnecessary flooding of frames to all ports, reducing network congestion and
enhancing performance.
Enhancing Security: The switch table helps improve network security by allowing switches to
implement features like VLANs (Virtual Local Area Networks) and port security. VLANs enable
network segmentation, isolating traffic between different groups of devices. Port security restricts
unauthorized devices by associating MAC addresses with specific switch ports.
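A toy sketch of the learning-and-forwarding behaviour described above (MAC addresses and port numbers are illustrative):

    table = {}                              # MAC address -> switch port
    PORTS = {1, 2, 3, 4}

    def handle_frame(src, dst, in_port):
        table[src] = in_port                # MAC address learning
        out = table.get(dst)
        if out is None:
            return PORTS - {in_port}        # flood unknown destination
        if out == in_port:
            return set()                    # filter: already on that segment
        return {out}                        # unicast forwarding

    print(handle_frame("aa:aa", "bb:bb", 1))   # unknown -> flood {2, 3, 4}
    print(handle_frame("bb:bb", "aa:aa", 2))   # aa:aa learned -> {1}
    print(handle_frame("aa:aa", "bb:bb", 1))   # bb:bb learned -> {2}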
Chapter 6 :
• Explain and draw SNMP protocol data unit format.
The SNMP (Simple Network Management Protocol) protocol data unit (PDU) format specifies the
structure of the messages exchanged between SNMP managers and agents for network
management purposes. The SNMP PDU format varies depending on the SNMP version being used.
Here is an explanation of the SNMPv2c PDU format along with a simplified diagram:
PDU Type: This field specifies the type of SNMP message and can be one of the following:
GetRequest, GetNextRequest, GetBulkRequest, SetRequest, Response, InformRequest, or
SNMPv2-Trap.
Request ID: This field carries a number used by the manager to match each response with its
outstanding request.
Error Status: This field indicates the status of the PDU. It is used to report any errors or
exceptions encountered during the processing of the SNMP request. Common error status
values include "noError" for successful requests and various error codes for specific error
conditions.
Error Index: This field provides additional information about the specific SNMP variable that
caused the error. It indicates the index or position of the variable in the variable binding list.
Diagram:
Below is a simplified diagram representing the SNMPv2c PDU format:
+-------------------+
| PDU Type |
+-------------------+
| Request ID |
+-------------------+
| Error Status |
+-------------------+
| Error Index |
+-------------------+
| Variable Bindings |
+-------------------+
In the diagram, each field is represented by a block. The "Variable Bindings" field is a list of
variable bindings that contain the SNMP variables and their corresponding values.
• Explain the principle of Cryptography. What is Symmetric Key & Asymmetric Key
Encryption?
Cryptography is the practice and study of techniques used to secure communication and
information from unauthorized access or modification. It involves transforming plaintext (original
message) into ciphertext (encrypted message) using mathematical algorithms and keys. The
principle of cryptography is to ensure confidentiality, integrity, authentication, and non-
repudiation of data.
Symmetric key encryption, also known as secret key encryption or private key encryption, is a
type of encryption where the same key is used for both the encryption and decryption
processes. The sender and receiver share a secret key that is kept confidential. The key is used
to scramble the plaintext into ciphertext and then unscramble the ciphertext back to plaintext.
In symmetric key encryption, the sender encrypts the message using the secret key and sends
the ciphertext to the receiver. The receiver, possessing the same secret key, decrypts the
ciphertext to obtain the original plaintext.
Symmetric key encryption algorithms are typically fast and efficient, making them suitable for
encrypting large amounts of data. However, the challenge lies in securely distributing and
managing the secret key, especially when multiple parties are involved.
Common symmetric key encryption algorithms include Advanced Encryption Standard (AES),
Data Encryption Standard (DES), and Triple DES (3DES).
Asymmetric key encryption, also known as public key encryption, uses a mathematically
related pair of keys: a public key that can be shared openly and a private key that is kept
secret.
In asymmetric key encryption, the public key is used for encryption, and the private key is used
for decryption. The sender encrypts the plaintext using the receiver's public key and sends the
ciphertext. Only the receiver, who possesses the corresponding private key, can decrypt the
ciphertext to obtain the original plaintext.
Asymmetric key encryption offers several advantages, including secure key exchange, digital
signatures, and authentication. It eliminates the need for a shared secret key, making it
suitable for scenarios where secure key distribution is challenging.
A widely used symmetric algorithm is AES (Advanced Encryption Standard). Its key characteristics
are:
Symmetric Key Algorithm: AES is a symmetric key algorithm, meaning the same key is used for
both encryption and decryption.
Block Cipher: AES operates on fixed-size blocks of data, where the block size is 128 bits (16 bytes).
It encrypts and decrypts data in chunks of 128 bits.
Variable Key Size: AES supports three different key sizes: 128 bits, 192 bits, and 256 bits. The key
size determines the number of rounds performed during the encryption and decryption processes.
Substitution-Permutation Network: AES employs a combination of substitution and permutation
operations to achieve confusion and diffusion of data. These operations are performed in multiple
rounds to ensure the security of the encrypted data.
Multiple Rounds: The number of rounds performed during AES encryption and decryption depends
on the key size. AES-128 performs 10 rounds, AES-192 performs 12 rounds, and AES-256 performs
14 rounds.
AES provides a high level of security, efficiency, and flexibility, which makes it suitable for a wide
range of applications. It has undergone extensive analysis and testing by cryptographic experts and
has stood the test of time as a robust encryption algorithm.
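As a hedged illustration, AES-128 encryption and decryption can be performed with the third-party Python "cryptography" package (pip install cryptography); the key and nonce below are generated fresh on each run:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)      # 128-bit secret key, shared by sender and receiver
    nonce = os.urandom(16)    # CTR-mode nonce; must never be reused with a key

    cipher = Cipher(algorithms.AES(key), modes.CTR(nonce))

    enc = cipher.encryptor()
    ciphertext = enc.update(b"attack at dawn") + enc.finalize()

    dec = cipher.decryptor()
    print(dec.update(ciphertext) + dec.finalize())   # b'attack at dawn'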
• Explain the various network management tools.
Network management tools are software applications or utilities used to monitor, control, and
troubleshoot computer networks. These tools provide network administrators with insights into
network performance, device status, and traffic analysis. Here are some common network
management tools:
Network Monitoring Tools: Network monitoring tools continuously monitor network devices,
interfaces, and services to detect and report any issues or anomalies. They provide real-time
visibility into network performance, bandwidth usage, device availability, and response times.
Examples of network monitoring tools include Nagios, Zabbix, PRTG Network Monitor, and
SolarWinds Network Performance Monitor.
Network Configuration Management Tools: These tools automate the management of network
device configurations. They help administrators maintain consistency across network devices,
deploy configuration changes efficiently, and revert to previous configurations if needed. Tools like
Cisco Prime Infrastructure, SolarWinds Network Configuration Manager, and Ansible are used for
network configuration management.
Network Performance Analysis Tools: Network performance analysis tools help administrators
identify bottlenecks, analyze traffic patterns, and optimize network performance. They provide
insights into network latency, packet loss, bandwidth utilization, and application performance.
Wireshark, SolarWinds NetFlow Traffic Analyzer, and Cisco Performance Monitor are examples of
network performance analysis tools.
Network Traffic Monitoring and Analysis Tools: These tools capture and analyze network traffic to
gain visibility into data packets, protocols, and application behavior. They assist in troubleshooting
network issues, detecting security threats, and optimizing network performance. Tools like
tcpdump, Wireshark, and SolarWinds Network Performance Monitor provide packet-level analysis
and traffic monitoring capabilities.
Network Security Tools: Network security tools are used to monitor and protect network
infrastructure against security threats, vulnerabilities, and attacks. They include firewalls, intrusion
detection systems (IDS), intrusion prevention systems (IPS), virtual private networks (VPNs), and
antivirus/antimalware solutions. Examples include Cisco ASA, Snort, Symantec Endpoint
Protection, and Palo Alto Networks Next-Generation Firewall.
Network Discovery and Mapping Tools: These tools automatically discover network devices, map
network topology, and provide visual representations of the network infrastructure. They assist in
network documentation, asset management, and identifying connectivity issues. Tools like
SolarWinds Network Topology Mapper, Nmap, and Cisco Discovery Protocol (CDP) help in network
discovery and mapping.
Bandwidth Monitoring and Traffic Shaping Tools: Bandwidth monitoring tools measure network
bandwidth usage, track application usage patterns, and help optimize bandwidth allocation. They
assist in identifying bandwidth-hungry applications, enforcing Quality of Service (QoS) policies,
and shaping traffic. Examples include SolarWinds Bandwidth Analyzer Pack, NetFlow Analyzer, and
PRTG Network Monitor.
These are just a few examples of network management tools available in the market. The specific
tools used by an organization may vary depending on their network infrastructure, requirements,
and budget.
• Explain the following: i. Digital Signatures ii. Message Digest
i. Digital Signatures:
Digital signatures are cryptographic mechanisms used to verify the authenticity, integrity, and non-
repudiation of digital documents or messages. They provide a way to ensure that a message or
document has not been tampered with and that it originates from the claimed sender.
• Message Digest: The original message is processed through a hash function (such as SHA-
256) to generate a fixed-length string of characters called a message digest or hash value.
The message digest is unique to the input message and serves as a digital fingerprint of its
content.
• Private Key Encryption: The sender uses their private key to encrypt the message digest.
This creates the digital signature, which is essentially a cryptographic representation of the
message digest encrypted with the sender's private key.
• Attach Signature: The digital signature is attached to the original message, forming a
signed message.
ii. Message Digest:
A message digest, also known as a hash value or checksum, is a fixed-length alphanumeric string
generated by applying a hash function to a message or data. It is a unique representation of the
input message and acts as a digital fingerprint.
• Hash Function: A hash function takes an input message of any length and processes it to
produce a fixed-length output called a message digest. Commonly used hash functions
include MD5 (Message Digest Algorithm 5), SHA-1 (Secure Hash Algorithm 1), and SHA-256
(Secure Hash Algorithm 256).
• Unique Output: A well-designed hash function ensures that even a small change in the
input message results in a significantly different message digest. It should be
computationally infeasible to find two different messages with the same message digest
(collision resistance).
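For illustration, Python's standard hashlib module computes such digests; note how changing a single character of the input completely changes the SHA-256 hash (the avalanche effect), while the digest length stays fixed:

    import hashlib

    print(hashlib.sha256(b"hello world").hexdigest())
    print(hashlib.sha256(b"hello worle").hexdigest())   # one character differs

    # The digest is always 256 bits (64 hex characters), whatever the input size.
    print(len(hashlib.sha256(b"x" * 10_000).hexdigest()))   # 64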