CN Unit 4 imp q
TCP is a connection-oriented protocol that establishes a reliable communication channel between two
devices. The process involves three main phases: connection establishment, data transfer, and connection
termination. Below is a detailed explanation of each phase:
A. Connection Establishment
The TCP connection establishment process uses a method called 3-Way Handshaking. This ensures that
both the sender and receiver are synchronized before data transmission begins. The steps involved are:
1. SYN (Synchronize)
o The client initiates the connection by sending a SYN packet to the server. This packet
includes the client's initial sequence number (ISN).
o Packet Example:
Flags: SYN
Sequence Number: x
2. SYN-ACK (Synchronize-Acknowledgment)
o The server responds to the client's SYN with a SYN-ACK packet. This packet contains:
The server's initial sequence number (ISN).
An acknowledgment of the client's SYN (incremented by 1).
o Packet Example:
Flags: SYN, ACK
Sequence Number: y
Acknowledgment Number: x + 1
3. ACK (Acknowledgment)
o The client sends an ACK packet back to the server, acknowledging the server’s SYN-ACK.
This packet contains the next expected byte from the server.
o Packet Example:
Flags: ACK
Sequence Number: x + 1
Acknowledgment Number: y + 1
At this point, the TCP connection is established, and data transfer can begin.
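The sequence/acknowledgment arithmetic of these three steps can be sketched as a small simulation (the ISN values 1000 and 5000 are arbitrary examples; real TCP stacks choose ISNs pseudo-randomly):

```python
# Minimal sketch of the 3-way handshake's sequence/acknowledgment arithmetic.
def three_way_handshake(client_isn, server_isn):
    # Step 1: client -> server, SYN carrying the client's ISN (x)
    syn = {"flags": "SYN", "seq": client_isn}
    # Step 2: server -> client, SYN-ACK: server's ISN (y), ack = x + 1
    syn_ack = {"flags": "SYN,ACK", "seq": server_isn, "ack": syn["seq"] + 1}
    # Step 3: client -> server, ACK: seq = x + 1, ack = y + 1
    ack = {"flags": "ACK", "seq": syn["seq"] + 1, "ack": syn_ack["seq"] + 1}
    return syn, syn_ack, ack

syn, syn_ack, ack = three_way_handshake(client_isn=1000, server_isn=5000)
print(syn_ack)  # {'flags': 'SYN,ACK', 'seq': 5000, 'ack': 1001}
print(ack)      # {'flags': 'ACK', 'seq': 1001, 'ack': 5001}
```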
B. Data Transfer
Once the TCP connection is established, data transfer can occur. The features of data transfer in TCP
include:
1. Reliable Delivery
o TCP ensures that all data segments are delivered reliably and in the correct order. Each
segment sent is acknowledged by the receiver.
2. Flow Control
o TCP uses a sliding window mechanism to manage flow control. The sender is allowed to
send multiple segments before needing an acknowledgment, up to a specified window size.
This prevents the sender from overwhelming the receiver.
3. Error Control
o Each TCP segment includes a checksum field to verify the integrity of the data. If a segment
is found to be corrupted, it is discarded, and the sender must retransmit it.
4. Segmenting Data
o Data from the application layer is divided into manageable segments, each with a TCP
header. The maximum segment size (MSS) is negotiated during connection establishment to
optimize performance.
5. Congestion Control
o TCP implements algorithms like slow start, congestion avoidance, and fast recovery to
manage network congestion and ensure efficient data transfer.
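The sliding-window idea from the flow-control point above can be illustrated with a toy sender (a simplified sketch: the window here is counted in segments, whereas real TCP counts bytes):

```python
# Toy sliding-window sender: at most `window` unacknowledged segments in flight.
def sliding_window_send(num_segments, window):
    in_flight = []          # sent but not yet acknowledged
    acked = []
    next_seg = 0
    while len(acked) < num_segments:
        # Send while the window allows more unacknowledged segments
        while next_seg < num_segments and len(in_flight) < window:
            in_flight.append(next_seg)
            next_seg += 1
        # Receiver acknowledges the oldest outstanding segment,
        # sliding the window forward (cumulative-ACK idea)
        acked.append(in_flight.pop(0))
    return acked

order = sliding_window_send(num_segments=8, window=3)
print(order)  # [0, 1, 2, 3, 4, 5, 6, 7] - delivered in order
```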
C. Connection Termination
TCP connections are terminated using a 4-Way Handshaking process (FIN, ACK, FIN, ACK), ensuring that
both sides agree to close the connection gracefully. The steps involved are:
1. FIN (Finish)
o The device that wants to close the connection (the client or server) sends a FIN packet. This
indicates that it has finished sending data.
o Packet Example:
Flags: FIN
Sequence Number: u
2. ACK (Acknowledgment)
o The receiving device responds with an ACK packet, acknowledging the receipt of the FIN
packet. The acknowledgment number is incremented by 1.
o Packet Example:
Flags: ACK
Sequence Number: v
Acknowledgment Number: u + 1
3. FIN (Finish)
o The receiving device sends its own FIN packet, indicating that it is also done sending data.
o Packet Example:
Flags: FIN
Sequence Number: v + 1
4. Final ACK
o The original sender of the FIN packet responds with a final ACK, acknowledging the receipt
of the FIN.
o Packet Example:
Flags: ACK
Sequence Number: u + 1
Acknowledgment Number: v + 1
After these steps, the TCP connection is fully closed. Both devices can now release the resources associated
with the connection.
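In practice, the FIN/ACK exchange is carried out by the operating system. A local loopback sketch using Python's socket API shows the half-close behavior: shutdown(SHUT_WR) sends a FIN ("done sending") while the reverse direction stays open until the peer closes too:

```python
import socket

# Loopback sketch of TCP half-close; the OS performs the FIN/ACK exchange.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
srv.listen(1)

cli = socket.socket()
cli.connect(srv.getsockname())  # 3-way handshake happens here
conn, _ = srv.accept()

cli.shutdown(socket.SHUT_WR)    # client sends its FIN: "I am done sending"
eof = conn.recv(1024)           # server sees end-of-stream: b""

conn.sendall(b"still open")     # server -> client direction still works
reply = cli.recv(1024)
print(eof, reply)               # b'' b'still open'

conn.close()                    # server's FIN closes the other direction
cli.close()
srv.close()
```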
Summary
Connection Establishment (3-Way Handshaking): Ensures synchronization and readiness between
sender and receiver before data transfer.
Data Transfer: Provides reliable, ordered, and error-checked communication using flow control and
congestion management.
Connection Termination (4-Way Handshaking): Ensures a graceful and orderly closure of the
TCP connection.
TCP uses congestion control mechanisms to prevent overwhelming the network with too much data, which
could cause packet loss, delays, and reduced performance. The three key mechanisms are Slow Start,
AIMD (Additive Increase, Multiplicative Decrease), and Fast Recovery. These techniques allow TCP to
dynamically adjust its transmission rate based on network conditions.
A. Slow Start
Purpose: To avoid sending too much data too quickly and overwhelming the network.
Mechanism: At the beginning of a connection, TCP starts by sending a small amount of data
(usually one segment) and gradually increases the sending rate.
How it works:
o TCP starts with an initial congestion window (cwnd) size of 1 MSS (Maximum Segment
Size).
o The congestion window grows by one MSS for each ACK received, which doubles the cwnd
every round-trip time (exponential growth).
o This continues until the Slow Start Threshold (ssthresh) is reached.
Example: If the initial cwnd is 1 MSS, after one round of ACKs, the cwnd becomes 2 MSS, then 4 MSS, 8
MSS, and so on.
B. AIMD (Additive Increase, Multiplicative Decrease)
Purpose: To ensure steady, controlled growth of the sending rate while reacting to congestion events
like packet loss.
Additive Increase:
o After reaching the ssthresh in slow start, TCP switches to additive increase, growing the
cwnd linearly by one MSS per round-trip time (RTT).
o This avoids sending data too aggressively once the network conditions are stable.
Multiplicative Decrease:
o When packet loss is detected (usually indicated by a timeout or triple duplicate ACKs), TCP
assumes there is network congestion and cuts the cwnd in half (multiplicative decrease).
o The new ssthresh is set to half of the current cwnd, and TCP enters the congestion avoidance
phase.
Example: If the cwnd is 16 MSS and packet loss occurs, TCP reduces the cwnd to 8 MSS and starts the
additive increase process again.
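The AIMD behavior can be sketched as a simple simulation (the loss event at RTT 3 is injected artificially for illustration):

```python
# AIMD sketch (cwnd in MSS units): additive increase of +1 MSS per RTT,
# multiplicative decrease (halving) when a loss event occurs.
def aimd(cwnd, rtts, loss_at):
    history = [cwnd]
    for t in range(1, rtts + 1):
        if t in loss_at:
            cwnd = max(cwnd // 2, 1)   # multiplicative decrease on packet loss
        else:
            cwnd += 1                  # additive increase: +1 MSS per RTT
        history.append(cwnd)
    return history

trace = aimd(cwnd=16, rtts=6, loss_at={3})
print(trace)  # [16, 17, 18, 9, 10, 11, 12]
```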
C. Fast Recovery
Purpose: To recover quickly from packet loss without reducing the sending rate drastically.
How it works:
o If TCP detects three duplicate ACKs, instead of waiting for a timeout, it assumes only one
packet was lost, not widespread congestion.
o TCP reduces the cwnd by half (similar to AIMD), but instead of entering slow start, it enters
fast recovery.
o In this phase, TCP continues to send data at the reduced cwnd rate while waiting for the lost
packet to be retransmitted.
o Once the retransmitted packet is acknowledged, TCP exits fast recovery and continues with
additive increase.
Example: If the cwnd was 12 MSS and three duplicate ACKs are received, the cwnd is reduced to 6 MSS,
but data transmission continues at this rate rather than starting from 1 MSS as in slow start.
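The two loss reactions can be contrasted in a small sketch (values in MSS units; the 2-MSS floor on ssthresh follows common practice but is an assumption here):

```python
# Contrast TCP's two loss reactions:
# - three duplicate ACKs -> fast recovery: cwnd halved, linear growth resumes;
# - timeout -> severe congestion assumed: cwnd falls back to 1 MSS (slow start).
def on_loss(cwnd, event):
    ssthresh = max(cwnd // 2, 2)       # new threshold: half the current cwnd
    if event == "triple_dup_ack":      # fast retransmit + fast recovery
        return ssthresh, ssthresh
    if event == "timeout":
        return 1, ssthresh
    raise ValueError(event)

print(on_loss(12, "triple_dup_ack"))  # (6, 6): continue near previous rate
print(on_loss(12, "timeout"))         # (1, 6): restart from slow start
```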
Summary
1. Slow Start: Gradually increases the sending rate at the beginning of a connection to avoid
overwhelming the network.
2. AIMD (Additive Increase, Multiplicative Decrease): Controls the sending rate by increasing it
steadily and decreasing it sharply when congestion is detected.
3. Fast Recovery: Recovers quickly from packet loss without drastically reducing the transmission
rate.
These mechanisms ensure that TCP adapts to network conditions efficiently, balancing the need for speed
with the prevention of congestion.
A TCP segment is the basic unit of data transmission in the Transmission Control Protocol (TCP). It
consists of two parts: the header (20 to 60 bytes) and the data (payload). The header contains important
control information to manage the connection and ensure reliable data transfer. The main header fields are:
Source Port and Destination Port (16 bits each): Identify the sending and receiving applications.
Sequence Number (32 bits): Byte number of the first data byte in this segment.
Acknowledgment Number (32 bits): The next byte number the receiver expects.
Header Length (4 bits): Size of the header in 32-bit words (5 to 15).
Control Flags (URG, ACK, PSH, RST, SYN, FIN): Manage connection setup, teardown, and data delivery.
Window Size (16 bits): Advertises the receive window for flow control.
Checksum (16 bits): Error detection over the header, data, and a pseudo-header.
Urgent Pointer (16 bits): Marks urgent data when the URG flag is set.
Options (0 to 40 bytes): Optional fields such as the maximum segment size (MSS).
Summary:
The TCP segment format is essential for reliable communication in TCP. Key fields like ports,
sequence/acknowledgment numbers, window size, and checksum enable functions like flow control, error
checking, and ordered data transfer. This ensures TCP's ability to provide connection-oriented and reliable
communication between devices across a network.
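As an illustration, the fixed 20-byte portion of the header can be packed and unpacked with Python's struct module (the port, sequence, and window values below are made up):

```python
import struct

# Fixed 20-byte TCP header: source port, destination port, sequence number,
# acknowledgment number, offset/flags, window, checksum, urgent pointer.
TCP_HEADER = struct.Struct("!HHIIHHHH")

header = TCP_HEADER.pack(
    12345,              # source port
    80,                 # destination port
    1001,               # sequence number
    5001,               # acknowledgment number
    (5 << 12) | 0x10,   # data offset = 5 words (20 bytes), flags = ACK
    65535,              # window size (flow control)
    0,                  # checksum (normally computed over pseudo-header + segment)
    0,                  # urgent pointer
)
print(len(header))      # 20

src, dst, seq, ack, off_flags, win, csum, urg = TCP_HEADER.unpack(header)
print(dst, seq, ack)    # 80 1001 5001
```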
The User Datagram Protocol (UDP) is a simple, connectionless communication protocol used in the
transport layer of the OSI model. Unlike TCP, UDP is faster but unreliable because it doesn’t provide
mechanisms for data integrity, flow control, or congestion control. It’s primarily used in real-time
applications where speed is essential, and minor data loss is acceptable.
1. Connectionless:
o UDP does not establish or terminate a connection between the sender and receiver, making it
a lightweight protocol. Each message is treated independently.
2. Unreliable:
o UDP does not guarantee the delivery of data, nor does it ensure data packets arrive in order or
without errors. There’s no acknowledgment of received packets.
3. No Flow Control or Congestion Control:
o UDP does not use flow control, so the sender can continue sending data regardless of the
receiver’s ability to handle it.
o Congestion control mechanisms are not implemented, meaning it doesn't adjust transmission
based on network conditions.
4. Fast and Efficient:
o UDP is suitable for applications that prioritize speed over reliability, such as video streaming,
VoIP (Voice over IP), online gaming, and DNS queries.
The UDP header is simple: only 8 bytes, containing just four fields (source port, destination port,
length, and checksum), compared to the more complex 20- to 60-byte TCP header.
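Building a UDP header is correspondingly trivial (a sketch; the port numbers and payload are example values):

```python
import struct

# The entire UDP header is four 16-bit fields (8 bytes):
# source port, destination port, length (header + data), checksum.
def udp_header(src_port, dst_port, payload, checksum=0):
    length = 8 + len(payload)           # length field covers header + data
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

hdr = udp_header(5000, 53, b"example query")   # e.g. toward a DNS server, port 53
print(len(hdr))                                # 8
print(struct.unpack("!HHHH", hdr))             # (5000, 53, 21, 0)
```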
UDP Services:
1. Simple Communication:
o UDP is used when simple, fast communication is needed between devices or applications
without the overhead of establishing and maintaining a connection.
2. Message-Oriented:
o Unlike TCP, which deals with streams of data, UDP preserves message boundaries. Each
UDP packet is treated as a separate, self-contained message.
3. Broadcast and Multicast:
o UDP supports broadcasting and multicasting, making it ideal for applications like live
video streaming or sending data to multiple devices simultaneously.
Applications of UDP:
Common applications include DNS queries, VoIP (Voice over IP), live audio/video streaming, and
online gaming.
Advantages of UDP:
1. Low Overhead:
o With a small header and no need for connection management, UDP has minimal transmission
overhead, resulting in faster data transfer.
2. Efficient for Small Messages:
o UDP is efficient when dealing with small messages or when error recovery is handled at the
application layer.
3. Suitable for Real-Time Applications:
o In applications where speed is crucial and occasional data loss is acceptable, such as video
conferencing or gaming, UDP is highly suitable.
Disadvantages of UDP:
1. Unreliable:
o Since UDP lacks error recovery, data loss or corruption may occur without any
retransmission.
2. No Flow or Congestion Control:
o This means UDP can lead to congestion or overflow at the receiving end if too much data is
sent too quickly.
Summary:
UDP is a fast, simple, and connectionless transport protocol suited for applications that prioritize speed and
low overhead over reliability, such as real-time audio/video streaming, DNS, and online gaming. While it
lacks the robust features of TCP (like error recovery and flow control), it’s an ideal choice for scenarios
where rapid transmission of data is more important than ensuring its complete accuracy.
DEFINE UDP, DISCUSS THE OPERATION OF UDP, AND EXPLAIN THE UDP CHECKSUM WITH
ONE EXAMPLE.
Definition of UDP
The User Datagram Protocol (UDP) is a connectionless and unreliable transport layer protocol used to
send messages, called datagrams, across networks. It is a simpler alternative to the more complex
Transmission Control Protocol (TCP). Unlike TCP, UDP does not establish a connection, ensure delivery,
or correct errors, making it faster but less reliable.
UDP is widely used in applications where speed is critical, and occasional data loss is acceptable, such as
video streaming, voice over IP (VoIP), and online gaming.
Operation of UDP
1. Connectionless Communication:
o UDP does not require setting up or tearing down a connection between the sender and
receiver. Data can be sent at any time without any formal handshake.
2. Message-Oriented:
o UDP maintains message boundaries. Each message sent is treated as a separate, independent
packet (datagram) with its own header.
3. No Acknowledgment:
o There is no acknowledgment or retransmission of lost or corrupted packets. Once the
message is sent, it’s up to the application to handle any errors or retransmissions if necessary.
4. No Flow Control or Congestion Control:
o UDP does not manage the amount of data being sent or check the receiving capacity, which
means it can flood the network or cause congestion under heavy load conditions.
The UDP header is a simple, fixed-size header of 8 bytes. The main fields are:
Source Port (16 bits): Identifies the sending application.
Destination Port (16 bits): Identifies the receiving application.
Length (16 bits): Total length of the datagram (header plus data).
Checksum (16 bits): Error detection over the pseudo-header, header, and data (optional in IPv4).
UDP Checksum
The UDP checksum is used to verify the integrity of the data and header. It provides basic error detection,
ensuring that the data received is not corrupted. The checksum is calculated by the sender before sending the
packet and verified by the receiver upon receiving it.
1. Create a pseudo-header from the source and destination IP addresses, protocol number, and length
field.
2. Concatenate the pseudo-header with the UDP header and data.
3. Calculate the one's complement sum of the entire packet.
4. Take the one's complement of the result and place it in the checksum field.
5. At the receiver, perform the same calculation. If the result is 0, the packet is considered uncorrupted.
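The five steps above can be implemented directly (a sketch; the IP addresses and port numbers are made-up example values):

```python
import struct

# Internet one's-complement checksum as used by UDP.
def ones_complement_sum(data):
    if len(data) % 2:                    # pad to an even number of bytes
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total > 0xFFFF:                # fold carries back in (end-around carry)
        total = (total & 0xFFFF) + (total >> 16)
    return total

def udp_checksum(src_ip, dst_ip, src_port, dst_port, payload):
    length = 8 + len(payload)            # UDP length = header + data
    # Step 1: pseudo-header = src IP, dst IP, zero byte, protocol 17 (UDP), length
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, length)
    # Step 2: UDP header with checksum field set to 0, concatenated with data
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    # Steps 3-4: one's-complement sum, then take its one's complement
    return (~ones_complement_sum(pseudo + header + payload)) & 0xFFFF

src, dst = bytes([192, 168, 0, 1]), bytes([192, 168, 0, 2])
csum = udp_checksum(src, dst, 5000, 53, b"hi")

# Step 5 (receiver): summing everything *including* the checksum yields all 1s
# (0xFFFF), whose one's complement is 0, so the packet is accepted.
pseudo = src + dst + struct.pack("!BBH", 0, 17, 10)
header = struct.pack("!HHHH", 5000, 53, 10, csum)
ok = ones_complement_sum(pseudo + header + b"hi") == 0xFFFF
print(hex(csum), ok)
```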
If the receiver performs the same calculation and gets a non-zero value, the packet is rejected due to errors.
Summary
User Datagram Protocol (UDP) is a lightweight, fast, and connectionless protocol used for real-time
applications where speed is crucial, and minor data loss can be tolerated. Its operation involves sending
independent datagrams without establishing a connection, while the UDP checksum ensures basic error
detection by verifying data integrity, but it does not correct any errors, making the protocol less reliable than
TCP.
One key feature of Transmission Control Protocol (TCP) that can be used by the sender to insert record
boundaries into the byte stream is the Push function, represented by the PSH (Push) flag in the TCP
header. Although TCP is a stream-oriented protocol where data is sent as a continuous stream of bytes, the
PSH flag provides a way to signal the importance of the data and ensures that it is transmitted immediately,
without waiting for additional data.
The original purpose of the PSH flag is to improve the efficiency of interactive applications, such as remote
logins or real-time systems. When set, it tells the receiver to process and deliver the data immediately rather
than buffering it for a more efficient (larger) transmission. This is particularly useful in scenarios where the
timely delivery of each message is critical.
Even though TCP does not inherently have the concept of record boundaries (since it sends a continuous
byte stream), the sender can utilize the PSH flag to:
1. Delimit messages or records within the byte stream by sending data with the PSH flag set.
2. The receiver will know that the data should be delivered immediately (without waiting for more
data).
This mechanism indirectly creates the concept of boundaries within the continuous stream.
Example Scenario:
In an application where distinct messages or records are being sent (like chat messages or log records), the
sender can set the PSH flag for each message. The receiver will treat the data marked with the PSH flag as a
complete record or message, ensuring that it is delivered without delay.
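The PSH flag itself is just one bit in the header's flags field; a short sketch shows how it combines with ACK in a typical data-carrying segment:

```python
# Standard TCP flag bit positions (low bits of the offset/flags field).
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

# A data segment whose sender wants immediate delivery: PSH + ACK set together.
flags = PSH | ACK
print(hex(flags))           # 0x18
print(bool(flags & PSH))    # True: receiver should push the data to the app now
```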
Summary
The PSH flag in TCP signals that data should be delivered to the receiving application immediately
(distinct from the URG flag, which marks urgent data).
While TCP is a stream-oriented protocol without built-in record boundaries, using the PSH flag allows the
sender to indicate when certain data (like a record or message) should be processed as a discrete unit.
Though originally designed for interactive applications, this feature can be used to mimic record boundaries
in the TCP byte stream.
Congestion control in TCP is essential to ensure efficient use of network resources and to prevent congestion
collapse. The primary concepts in TCP congestion control include the congestion window, congestion
detection, and congestion policies.
a. Congestion Window (cwnd)
The Congestion Window (cwnd) is a TCP state variable that limits the amount of data a sender can transmit
into the network before receiving an acknowledgment (ACK).
Purpose: It controls the sender's rate based on network congestion conditions, preventing overload.
Mechanism: TCP adjusts the size of the cwnd dynamically to match the network's capacity. When
the network is uncongested, the cwnd increases, allowing the sender to transmit more data. When
congestion is detected, the cwnd is reduced to decrease traffic.
Growth: In the slow start phase, the congestion window grows exponentially until it hits a threshold
(ssthresh), after which it increases linearly (congestion avoidance phase).
b. Congestion Detection
Congestion Detection refers to TCP’s mechanisms for identifying congestion in the network. It relies on
packet loss and round-trip time (RTT) measurements to detect congestion.
Indicators: The primary sign of congestion is the loss of packets, which is inferred by the absence of
an acknowledgment (ACK) or the receipt of duplicate ACKs. Additionally, increasing RTTs can
indicate impending congestion.
Actions on Detection: Upon detecting congestion (packet loss or triple duplicate ACKs), TCP
immediately reduces the cwnd to control the traffic load. This process is known as multiplicative
decrease in the AIMD (Additive Increase, Multiplicative Decrease) strategy.
c. Congestion Policies
Congestion Policies are strategies and algorithms employed by TCP to manage congestion effectively and
ensure fair usage of network resources. Key congestion control policies include:
1. Slow Start:
o When a new connection starts, TCP slowly increases the congestion window size, starting
with a small value. This avoids overwhelming the network with a sudden burst of traffic.
o Exponential growth until it reaches a threshold (ssthresh), after which it switches to a linear
increase (congestion avoidance).
2. Congestion Avoidance:
o After the slow start, TCP switches to a linear increase in cwnd to avoid congestion. This
gradual approach ensures that the sender probes the network's capacity without causing
congestion.
3. Fast Retransmit and Fast Recovery:
o Upon receiving three duplicate ACKs, TCP assumes a packet is lost and retransmits it
without waiting for the timeout.
o Fast Recovery: Instead of dropping cwnd drastically, it only reduces it by half and then
resumes linear growth to avoid further congestion.
Summary
Congestion Window (cwnd) controls how much data can be sent based on network conditions.
Congestion Detection uses packet loss and RTT to detect network congestion, prompting immediate
action.
Congestion Policies like Slow Start, Congestion Avoidance, and Fast Recovery adjust the sender's
behavior to maintain a stable and efficient network.
These mechanisms collectively ensure that TCP adapts to varying network conditions while preventing
congestion and maintaining data flow efficiency.
Comparison of TCP and SCTP:
Flow Control: TCP uses a sliding window (single-stream flow control); SCTP provides per-stream flow
control with independent flow management.
Application: TCP is commonly used for web browsing, email, and file transfer; SCTP is ideal for VoIP,
telecommunications, and real-time applications.
Resource Use: TCP uses fewer resources for a single stream; SCTP can use more resources due to its
multi-streaming capabilities.