CN - Unit3 - Notes GTU

Uploaded by

Shivang Parmar

UNIT - 3: Transport Layer

1. Introduction to Transport Layer Services and Protocols

● Purpose: The transport layer provides logical communication between application
processes running on different hosts.
● Transport Protocols in End Systems:
○ Sender Side: Breaks down application messages into segments, passes segments
to the network layer.
○ Receiver Side: Reassembles segments into complete messages and delivers them
to the application layer.
● Examples: TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).

2. Multiplexing and Demultiplexing

● Concept:
○ Multiplexing: Combining data from multiple applications for transport over a single
link.
○ Demultiplexing: Directing incoming data segments to the correct application at the
destination.
● Mechanism:
○ Each segment contains Source and Destination IP addresses and Port Numbers.
○ The transport layer at the destination uses IP and port numbers to forward the
segment to the correct application socket.

3. Connectionless and Connection-Oriented Demultiplexing

Connectionless Demultiplexing (UDP):

● Definition: Each segment is handled independently; no connection is established
before data transfer.
● Mechanism: The transport layer uses only the destination IP address and destination
port number to forward data to the correct application socket.
● Usage: Best for applications that don’t require reliability, such as DNS and
streaming.

Connection-Oriented Demultiplexing (TCP):


● Definition: A virtual connection is established between the sender and receiver
before data transfer begins.
● Mechanism: The transport layer uses the full 4-tuple (source IP address, source port,
destination IP address, destination port) to direct each segment to the correct socket.
● Usage: Ideal for applications requiring reliable, ordered data delivery, like HTTP and
file transfer.
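The two demultiplexing rules above can be sketched with plain dictionaries acting as socket tables; all names and addresses below are invented for illustration, not part of any real API.

```python
# Sketch of transport-layer demultiplexing (all names/addresses illustrative).
# A UDP socket is identified by a 2-tuple; a TCP socket by the full 4-tuple.

udp_sockets = {}  # (dst_ip, dst_port) -> socket name
tcp_sockets = {}  # (src_ip, src_port, dst_ip, dst_port) -> socket name

def demux_udp(segment):
    # Only the destination address and port identify the UDP socket.
    return udp_sockets.get((segment["dst_ip"], segment["dst_port"]))

def demux_tcp(segment):
    # All four values must match, so two clients talking to the same
    # server port are still directed to different sockets.
    key = (segment["src_ip"], segment["src_port"],
           segment["dst_ip"], segment["dst_port"])
    return tcp_sockets.get(key)

udp_sockets[("10.0.0.5", 53)] = "dns-socket"
seg = {"src_ip": "10.0.0.9", "src_port": 40000,
       "dst_ip": "10.0.0.5", "dst_port": 53}
print(demux_udp(seg))  # dns-socket
```

Note that the same incoming segment would not match any TCP socket here, since no 4-tuple entry exists for it.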

4. Connectionless Transport - UDP (User Datagram Protocol)

● Definition: UDP is a simple, connectionless protocol that provides best-effort delivery
without reliability or order guarantees.
● Usage: Ideal for applications that require low latency and can tolerate some data loss, such
as DNS, SNMP, and streaming multimedia.
● UDP Segment Structure:
○ Source Port: Identifies the sending application.
○ Destination Port: Identifies the receiving application.
○ Length: Length of the UDP header and data.
○ Checksum: Detects errors in transmitted segments.

● Characteristics of UDP:
○ No handshaking between sender and receiver.
○ No connection state tracking, meaning no retransmissions or acknowledgments.
○ Simple, with minimal overhead.
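The four header fields above map directly onto four 16-bit values; the sketch below hand-packs a segment with Python's struct module purely for illustration (the checksum is left at zero).

```python
import struct

def build_udp_segment(src_port, dst_port, payload, checksum=0):
    # UDP header: four 16-bit fields (source port, destination port,
    # length, checksum), followed by the payload. The header is 8 bytes,
    # and the length field covers header plus data.
    length = 8 + len(payload)
    return struct.pack("!HHHH", src_port, dst_port, length, checksum) + payload

seg = build_udp_segment(40000, 53, b"query")
print(len(seg))  # 13 (8-byte header + 5-byte payload)
```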

5. UDP Checksum

● Purpose: Detects errors that may occur during transmission.
● Process:
○ Sender: Treats segment contents (including header fields) as a series of 16-bit
integers, sums them up (one's complement sum), and includes this sum as a
checksum.
○ Receiver: Recomputes the checksum on the received segment and compares it
with the sent checksum.
■ Match: Indicates no errors.
■ Mismatch: Indicates an error in transmission.
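The sender and receiver steps above can be worked through in code; this is a minimal sketch of the 16-bit one's complement checksum, with invented sample bytes.

```python
def internet_checksum(data: bytes) -> int:
    # Pad to an even length, add the 16-bit words with end-around carry
    # (one's complement sum), then invert the result.
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

csum = internet_checksum(b"\x01\x02\x03\x04")
print(hex(csum))  # 0xfbf9

# Receiver side: checksumming the data together with the received checksum
# yields 0 when no bits were flipped (a match); any other value is a mismatch.
print(internet_checksum(b"\x01\x02\x03\x04" + csum.to_bytes(2, "big")))  # 0
```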

6. Principles of Reliable Data Transfer

● Reliable Data Transfer (rdt): Ensures accurate and in-order delivery of data.
● Unidirectional Data Transfer:
○ Data flows in one direction, but control information (such as ACKs) flows in both
directions.
○ Uses Finite State Machines (FSMs) to manage state changes in the sender and
receiver.

Reliable Data Transfer Protocol Versions:

1. rdt 1.0 - Reliable Transfer over a Perfectly Reliable Channel:

○ Purpose: Assumes a perfectly reliable network with no errors or packet loss.
○ How it Works:
■ Data is sent directly from sender to receiver without any additional checks.
■ The receiver simply delivers the data to the application.
○ Limitations: Only works on a perfect channel with no errors or data loss, which is
unrealistic in real-world networks.
○ Summary: Simple and reliable, but only for error-free channels.

2. rdt 2.0 - Reliable Transfer over a Channel with Bit Errors:

○ Purpose: Introduces error detection to handle channels that might flip bits (i.e.,
cause errors in packets).
○ How it Works:
■ Adds checksum to detect errors in packets.
■ Uses ACKs (Acknowledgments) and NAKs (Negative
Acknowledgments):
● ACK: Sent by the receiver to confirm a packet arrived without errors.
● NAK: Sent if the receiver detects an error in the packet.
■ When the sender receives an ACK, it sends the next packet. If it receives a
NAK, it retransmits the same packet.
○ Limitations: Still does not handle lost packets. Only suitable for channels with
occasional bit errors.
○ Summary: Introduces error detection with ACK/NAK feedback, but does not handle
lost packets.

3. rdt 2.1 - Reliable Transfer with Sequence Numbers:

○ Purpose: Enhances rdt 2.0 to handle duplicate packets caused by ACK/NAK errors
or re-transmissions.
○ How it Works:
■ Adds sequence numbers to packets (typically 0 and 1) to help the receiver
identify duplicates.
■ If an ACK or NAK is corrupted, the sender doesn’t know if the packet arrived
correctly, so it retransmits.
■ The receiver checks the sequence number:
● If it’s a new packet, it accepts and acknowledges it.
● If it’s a duplicate (same sequence number as before), it discards it
and sends an ACK for the last correctly received packet.
○ Limitations: Still has limited handling for packet loss. Extra logic is required to deal
with duplicate packets.
○ Summary: Adds sequence numbers to identify duplicates, improving reliability over
channels with occasional bit errors.

4. rdt 2.2 - Reliable Transfer with Only ACKs (No NAKs):

○ Purpose: Simplifies rdt 2.1 by removing the need for NAKs.
○ How it Works:
■ Uses only ACKs to acknowledge correctly received packets.
■ If a packet arrives corrupted or out of order, the receiver sends an ACK for
the last correctly received packet.
■ This approach causes the sender to retransmit the current packet if it
doesn’t get a new ACK, which effectively replaces NAKs.
○ Advantages: Eliminates NAKs, simplifying the protocol and making it more efficient.
○ Summary: Replaces NAKs with duplicate ACKs, improving efficiency while handling
duplicates and errors.
○ Like rdt 2.0 and 2.1, rdt 2.2 is a stop-and-wait protocol: the sender transmits one
packet and waits for its acknowledgment before sending the next.

5. rdt 3.0 - Reliable Transfer over a Channel with Packet Loss:

○ Purpose: Enhances rdt 2.2 to handle packet loss in addition to bit errors.
○ How it Works:
■ Adds a timer to the sender. If an ACK isn’t received within a specified time,
the sender assumes the packet was lost and retransmits it.
■ The receiver still uses sequence numbers to identify duplicates and accepts
only new packets.
■ The protocol is known as the Alternating-Bit Protocol, as it alternates
between sequence numbers (e.g., 0 and 1) for each packet.
○ Advantages: Handles both errors and packet loss, making it suitable for real-world,
unreliable channels.
○ Summary: Uses timers and retransmissions to handle both errors and lost packets,
making it the most robust version for unreliable networks.
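The timer-and-retransmit loop of rdt 3.0 can be simulated with a toy lossy channel; the class below is invented for illustration, and a dropped packet (no ACK returned) stands in for a timer expiry.

```python
import random

class LossyChannel:
    """Toy channel: drops each transmission with probability p_loss."""
    def __init__(self, p_loss=0.3, seed=42):
        self.rng = random.Random(seed)
        self.p_loss = p_loss
        self.expected = 0   # receiver's next expected sequence number
        self.delivered = []

    def send(self, pkt):
        seq, data = pkt
        if self.rng.random() < self.p_loss:
            return None              # packet lost: the sender's timer fires
        if seq == self.expected:     # new packet: deliver, flip expectation
            self.delivered.append(data)
            self.expected = 1 - self.expected
        return seq                   # ACK (a duplicate is re-ACKed, not re-delivered)

def rdt30_send(messages, ch):
    seq = 0
    for data in messages:
        while ch.send((seq, data)) != seq:
            pass                     # timeout: retransmit the same packet
        seq = 1 - seq                # alternating-bit sequence number

ch = LossyChannel()
rdt30_send(["a", "b", "c"], ch)
print(ch.delivered)  # ['a', 'b', 'c']
```

Despite the losses, every message arrives exactly once and in order, which is the whole point of the protocol.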

7. Pipelined Protocols

Concept: Allows sending multiple packets without waiting for an acknowledgment for each one.

1. Go-Back-N (GBN) Protocol

○ Concept: The sender can send multiple packets (up to a specific number, called the
"window size") without waiting for an acknowledgment for each one. If a packet is
lost or an error is detected, the sender goes back and retransmits all packets
starting from the lost or erroneous packet.
○ How It Works:
■ The sender maintains a window of packets it can send without waiting for
an acknowledgment.
■ For example, if the window size is 4, the sender can send packets 1, 2, 3, and
4 before waiting for an acknowledgment.
■ Each time a packet is sent, it is held in a buffer until it’s acknowledged by the
receiver.
■ The receiver sends cumulative ACKs for the last correctly received packet
in order. If packets 1 and 2 are received correctly, the receiver sends an ACK
for packet 2, acknowledging both packets 1 and 2.
■ If the sender doesn’t receive an acknowledgment for a packet within a
specific time (due to packet loss or error), it retransmits all packets in the
window starting from the unacknowledged packet.
○ Example:
■ Let’s say the window size is 4, and the sender sends packets 1, 2, 3, and 4.
■ If packet 2 is lost, the receiver acknowledges only up to packet 1.
■ The sender, after detecting the missing acknowledgment, will go back and
retransmit packets 2, 3, and 4, even if packets 3 and 4 were received
successfully.
○ Pros and Cons:
■ Pros: Simple and works well when errors are infrequent.
■ Cons: Can be inefficient if errors are common, as the sender may retransmit
many packets unnecessarily.
○ Summary: Go-Back-N allows multiple packets to be sent but retransmits a large
block of packets starting from the last unacknowledged packet if an error occurs.
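The sender-side bookkeeping described above fits in a few lines; GBNSender and its fields are invented names for this sketch, not a real networking API.

```python
WINDOW = 4  # illustrative window size

class GBNSender:
    """Toy Go-Back-N sender bookkeeping (illustrative sketch)."""
    def __init__(self, n_packets):
        self.base = 0        # oldest unacknowledged packet
        self.next_seq = 0    # next packet number to send
        self.n = n_packets

    def sendable(self):
        # Packets that fit inside the window right now.
        return list(range(self.next_seq, min(self.base + WINDOW, self.n)))

    def on_ack(self, ack):
        # Cumulative ACK: everything up to and including `ack` is confirmed,
        # so the window slides forward.
        self.base = max(self.base, ack + 1)

    def on_timeout(self):
        # Go back: every sent-but-unacknowledged packet is retransmitted.
        return list(range(self.base, self.next_seq))

s = GBNSender(10)
for _ in s.sendable():
    s.next_seq += 1      # "send" packets 0, 1, 2, 3
s.on_ack(1)              # cumulative ACK covering packets 0 and 1
print(s.sendable())      # [4, 5]: the window slid forward by two
print(s.on_timeout())    # [2, 3]: a timeout resends everything from base
```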

2. Selective Repeat (SR) Protocol

○ Concept: In Selective Repeat, the sender also sends multiple packets at once, but it
only retransmits individual packets that are lost or contain errors rather than
retransmitting the entire window.
○ How It Works:
■ Similar to Go-Back-N, the sender has a window size and can send several
packets without waiting for an acknowledgment.
■ The receiver sends an acknowledgment (ACK) for each packet it receives,
even if packets are received out of order.
■ If a packet is received out of order, the receiver buffers it until it can fill in any
gaps (e.g., if it receives packets 1 and 3 but is missing packet 2, it waits for
packet 2 and keeps packet 3 in the buffer).
■ If a packet is lost or contains an error, the sender only retransmits the
specific packet that was not acknowledged, rather than the entire window.
○ Example:
■ Suppose the window size is 4, and the sender sends packets 1, 2, 3, and 4.
■ If packet 2 is lost but packets 1, 3, and 4 are received correctly, the receiver
will acknowledge packets 1, 3, and 4 individually.
■ The sender, noticing the missing acknowledgment for packet 2, will only
retransmit packet 2.
■ Once packet 2 is received, the receiver can deliver the packets in the correct
order (1, 2, 3, 4).
○ Pros and Cons:
■ Pros: More efficient than Go-Back-N because only the lost or erroneous
packets are retransmitted.
■ Cons: More complex to implement since the receiver must be able to handle
out-of-order packets and buffer them.
○ Summary: Selective Repeat allows the sender to retransmit only specific lost
packets, which reduces unnecessary retransmissions and increases efficiency.
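The receiver-side buffering described above can be sketched as follows; SRReceiver and its packet labels are invented for illustration.

```python
class SRReceiver:
    """Toy Selective Repeat receiver that buffers out-of-order packets."""
    def __init__(self):
        self.expected = 1   # next in-order sequence number to deliver
        self.buffer = {}    # seq -> data held while earlier packets are missing
        self.delivered = []

    def receive(self, seq, data):
        self.buffer[seq] = data
        # Deliver any contiguous run starting at `expected`.
        while self.expected in self.buffer:
            self.delivered.append(self.buffer.pop(self.expected))
            self.expected += 1
        return seq          # individual (selective) ACK for this packet

r = SRReceiver()
r.receive(1, "p1")
r.receive(3, "p3")   # out of order: buffered, not yet delivered
r.receive(4, "p4")
print(r.delivered)   # ['p1']
r.receive(2, "p2")   # gap filled: 2, 3 and 4 come out together
print(r.delivered)   # ['p1', 'p2', 'p3', 'p4']
```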

8. Connection-Oriented Transport - TCP (Transmission Control Protocol)

● TCP Characteristics:
○ Reliable, connection-oriented, and provides in-order data delivery.
○ Establishes a virtual connection between sender and receiver before data transfer
begins.
● TCP Segment Structure:

○ Source Port Number (16 bits): Identifies the port number of the application sending
the data on the source device.
○ Destination Port Number (16 bits): Identifies the port number of the application
receiving the data on the destination device.
○ Sequence Number (32 bits): Used to ensure data is received in the correct order. It
indicates the position of the first byte of data in this segment within the entire data
stream.
○ Acknowledgment Number (32 bits): If the ACK flag is set, this field contains the
next sequence number that the sender of the segment expects to receive.
○ Header Length (4 bits): Specifies the length of the TCP header in 32-bit words.
This is also known as the Data Offset field.
○ Reserved (6 bits): Reserved for future use and should be set to zero.
○ Flags (6 bits):
■ URG: Urgent pointer field is significant.
■ ACK: The acknowledgement field is significant.
■ PSH: Push function; data should be sent to the receiving application
immediately.
■ RST: Reset the connection.
■ SYN: Synchronize sequence numbers to initiate a connection.
■ FIN: No more data from the sender (finish the connection).
○ Window Size (16 bits): Specifies the size of the receive window, which is the buffer
space available for incoming data. It helps with flow control.
○ TCP Checksum (16 bits): Used for error-checking the header and data.
○ Urgent Pointer (16 bits): Points to the sequence number of the byte following
urgent data, if the URG flag is set.
○ Options (Variable length): Optional settings for TCP, such as maximum segment
size or window scaling.
○ Data (Variable length): Contains the actual data being transmitted. This is optional,
as a TCP segment could consist solely of control information.
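The fixed 20-byte portion of the header can be unpacked field by field; the segment below is hand-built with invented values purely to illustrate the layout.

```python
import struct

def parse_tcp_header(raw: bytes) -> dict:
    # Fixed 20-byte portion of the TCP header, following the field
    # layout listed above (network byte order).
    (src_port, dst_port, seq, ack,
     off_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", raw[:20])
    return {
        "src_port": src_port,
        "dst_port": dst_port,
        "seq": seq,
        "ack": ack,
        "header_len": (off_flags >> 12) * 4,  # data offset counts 32-bit words
        "flags": off_flags & 0x3F,            # URG/ACK/PSH/RST/SYN/FIN bits
        "window": window,
        "checksum": checksum,
        "urgent": urgent,
    }

# A hand-built SYN segment from port 40000 to port 80 (values invented):
raw = struct.pack("!HHIIHHHH", 40000, 80, 100, 0, (5 << 12) | 0x02, 65535, 0, 0)
hdr = parse_tcp_header(raw)
print(hdr["header_len"], hdr["flags"])  # 20 2  (20-byte header, SYN bit set)
```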

9. Flow Control

● Purpose: Flow control is a mechanism to prevent a fast sender from overwhelming a slow
receiver.
● How it works: The receiver informs the sender about the amount of data it can handle
through the Receive Window (rwnd) field in the TCP header. This value tells the sender the
size of the receiver's buffer, allowing the sender to control the rate at which it sends data.
● Mechanism:
○ Every time the receiver sends an acknowledgment (ACK), it updates the rwnd to
show how much buffer space is left.
○ The sender respects this rwnd and only sends data that fits within the advertised
buffer space.
○ If the rwnd is zero, the sender pauses until the receiver signals that it has space
available again.
● Example: Imagine a sender that can send data very fast, but the receiver can only process
it slowly. Flow control ensures the sender only sends data at a rate the receiver can handle,
preventing data loss due to buffer overflow.
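The accounting behind the example can be sketched with a few invented byte counts: the sender's usable window is the advertised rwnd minus the bytes already in flight.

```python
# Toy rwnd accounting; all byte counts are invented for illustration.
rwnd = 4096        # receiver's advertised free buffer space (bytes)
in_flight = 3000   # bytes sent but not yet acknowledged

usable = rwnd - in_flight
print(usable)      # 1096: the most the sender may transmit right now

# The receiver ACKs all 3000 bytes, but its application has only read 2000,
# so 1000 bytes still occupy the buffer and the advertised window shrinks:
in_flight = 0
rwnd = 4096 - 1000
print(rwnd - in_flight)  # 3096 bytes may now be sent
```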

10. Congestion Control in TCP


● Definition: Congestion control prevents sources from sending data into the network
faster than the network can handle.
● Goal: Ensure the network can carry all the traffic without excessive delays or data loss.

Why Congestion Control is Needed

1. Buffer Overflow: When a buffer (at a router or the receiver) fills up, incoming packets are dropped and data is lost.
2. Internal Network Congestion: Even if the receiver has enough buffer space, the network's
internal carrying capacity may still be exceeded, causing congestion within the network.

Scenarios of Congestion Causes and Costs

1. Scenario 1: Ideal Conditions

● Setup: Two senders, two receivers, and one router with infinite buffer capacity (no
data loss).
● Result: Since there is no buffer limit, no packets are lost, but queuing delays grow
very large as the arrival rate approaches the link capacity.

2. Scenario 2: Realistic Conditions

● Setup: One router with finite buffer space, which may fill up and cause packet loss.
● Key Points:
○ Original Data (λin): Data input directly from the application layer.
○ Transport Layer Input (λ'in): Includes original data plus any retransmitted
data due to timeouts or packet loss.
○ Outcome: When buffer space runs out, new packets cannot be stored, and
data loss occurs.
● Variations within Scenario 2:
○ Idealization with Known Buffer Space:
■ If the sender knows the exact available buffer space, it can avoid
sending too much data, which prevents buffer overflow.
■ Ideal Case: No congestion issues as the sender adjusts its rate
based on buffer availability.
○ Known Packet Loss:
■ When buffers are full, packets are dropped.
■ The sender only resends data if it knows which packets were lost.
■ This approach minimizes unnecessary retransmissions but still
suffers from data loss if congestion persists.
○ Realistic Condition with Duplicate Packets:
■ Packets may be dropped due to full buffers, causing the sender to
timeout and resend packets prematurely.
■ This results in duplicate packets, increasing congestion as both
copies are eventually delivered when buffer space becomes
available.

Approaches towards Congestion Control

Two broad approaches towards congestion control

1. End-to-end congestion control

○ No explicit feedback from the network
○ Congestion inferred from loss and delay observed at the end systems
○ Approach taken by TCP

2. Network-assisted congestion control

○ Routers provide feedback to end systems
○ Single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
○ Explicit rate at which the sender may send

11. TCP Slow Start Overview

● TCP Slow Start is an algorithm designed to manage the speed of data transmission in a
network, helping to prevent congestion. It’s one of the initial steps in TCP's congestion
control process.
● The main goal of TCP Slow Start is to find a balance between:
○ Congestion Window (cwnd): The amount of data the sender is allowed to send.
○ Receiver Window (rwnd): The amount of data the receiver is capable of handling.

How TCP Slow Start Works


1. Initial Transmission:
○ The sender begins by sending a small amount of data, set by a small initial
congestion window (cwnd), typically a few maximum segment sizes (MSS).
2. Acknowledgment from Receiver:
○ The receiver sends back an acknowledgment (ACK) to confirm it received the data
and indicates how much more it can receive (its window size).
3. Increasing the Congestion Window:
○ If the sender receives an ACK, it assumes the network can handle more data. It then
increases the congestion window, allowing more data to be sent in the next round.
This growth continues exponentially, doubling with each round-trip as long as ACKs
are received.
4. Checking for Network Limit:
○ The window size keeps increasing until one of these happens:
■ cwnd reaches the slow-start threshold (ssthresh), after which growth
becomes linear (congestion avoidance).
■ A loss is detected, indicating the network's carrying capacity has been
reached.
■ The sender hits the receiver's advertised window (rwnd) limit.
5. Handling No Response:
○ If the receiver does not acknowledge the data, it signals possible congestion, so the
sender slows down or stops sending until it’s safe to resume.
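The exponential-then-linear growth can be sketched as below; cwnd is counted in MSS units, and the ssthresh value and round count are invented for illustration.

```python
# Slow-start growth sketch (values illustrative, not from the notes).
def slow_start(cwnd=1, ssthresh=16, rounds=6):
    history = []
    for _ in range(rounds):
        history.append(cwnd)
        if cwnd < ssthresh:
            cwnd *= 2          # slow start: doubles each round-trip
        else:
            cwnd += 1          # congestion avoidance: linear growth
    return history

print(slow_start())  # [1, 2, 4, 8, 16, 17]
```

The jump from doubling to +1 per round at cwnd = 16 marks the crossover from slow start into congestion avoidance.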
