NOTES 6

TCP

TCP stands for Transmission Control Protocol. It is a transport layer protocol that
facilitates the transmission of packets from source to destination. It is a connection-
oriented protocol, which means it establishes a connection before communication
occurs between the computing devices in a network. This protocol is used together
with the IP protocol, so the pair is referred to as TCP/IP.

The main functionality of TCP is to take data from the application layer, divide it
into several packets, number these packets, and finally transmit them to the
destination. TCP on the receiving side reassembles the packets and passes them to
the application layer. Because TCP is a connection-oriented protocol, the connection
remains established until communication between the sender and the receiver is
complete.

Features of TCP protocol


The following are the features of a TCP protocol:

o Transport Layer Protocol

TCP is a transport layer protocol as it is used in transmitting the data from the sender
to the receiver.

o Full duplex

TCP is full duplex, which means that data can be transferred in both directions at the
same time.

o Stream-oriented

TCP is a stream-oriented protocol as it allows the sender to send the data in the form
of a stream of bytes and also allows the receiver to accept the data in the form of a
stream of bytes. TCP creates an environment in which both the sender and receiver
are connected by an imaginary tube known as a virtual circuit. This virtual circuit
carries the stream of bytes across the internet.
Need of Transmission Control Protocol
In the layered architecture of a network model, the whole task is divided into smaller
tasks, and each task is assigned to a particular layer that processes it. The TCP/IP
model has five layers: the application layer, transport layer, network layer, data link
layer, and physical layer. The transport layer has a critical role in providing end-to-end
communication directly to application processes. It creates 65,535 ports so that
multiple applications can be accessed at the same time. It takes the data from the upper layer,
divides it into smaller packets, and then transmits them to the network layer.

Working of TCP
In TCP, the connection is established by using three-way handshaking. The client
sends a segment with its initial sequence number. The server, in return, sends its own
segment with its own sequence number as well as an acknowledgement number,
which is one more than the client's sequence number. When the client receives the
acknowledgment of its segment, it sends an acknowledgment back to the server. In
this way, the connection is established between the client and the server.
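To see this from an application's point of view, here is a minimal Python sketch (the loopback address and port are hypothetical); it shows that the operating system carries out the three-way handshake when connect() and accept() are called, so the application never handles SYN or ACK segments itself.

```python
# Minimal sketch: the OS performs SYN, SYN+ACK, ACK under the hood.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 5000))     # hypothetical loopback address and port
server.listen(1)                     # passive open: willing to accept SYNs

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 5000))  # active open: three-way handshake happens here

conn, addr = server.accept()         # returns once the handshake has completed
print("connection established with", addr)

client.close()
conn.close()
server.close()
```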
Advantages of TCP

o It provides a connection-oriented reliable service, which means that it


guarantees the delivery of data packets. If the data packet is lost across the
network, then the TCP will resend the lost packets.
o It provides a flow control mechanism using a sliding window protocol.
o It provides error detection by using a checksum and error control by using ARQ
schemes such as Go-Back-N.
o It eliminates the congestion by using a network congestion avoidance
algorithm that includes various schemes such as additive
increase/multiplicative decrease (AIMD), slow start, and congestion window.

Disadvantage of TCP
It adds a significant amount of overhead, since each segment carries its own TCP
header, and fragmentation by routers increases this overhead further.

TCP Header format


o Source port: It defines the port of the application, which is sending the data.
So, this field contains the source port address, which is 16 bits.
o Destination port: It defines the port of the application on the receiving side.
So, this field contains the destination port address, which is 16 bits.
o Sequence number: This field contains the sequence number of data bytes in
a particular session.
o Acknowledgment number: When the ACK flag is set, then this contains the
next sequence number of the data byte and works as an acknowledgment for
the previous data received. For example, if the receiver receives the segment
number 'x', then it responds 'x+1' as an acknowledgment number.
o HLEN: It specifies the length of the header in 4-byte (32-bit) words. The size
of the header lies between 20 and 60 bytes; therefore, the value of this field
lies between 5 and 15.
o Reserved: It is a 6-bit field reserved for future use, and by default, all bits are
set to zero.
o Flags
There are six control bits or flags:
1. URG: It represents an urgent pointer. If it is set, then the data is
processed urgently.
2. ACK: If the ACK is set to 0, then it means that the data packet does not
contain an acknowledgment.
3. PSH: If this field is set, then it requests the receiving device to push the
data to the receiving application without buffering it.
4. RST: If it is set, then it requests that the connection be reset (aborted).
5. SYN: It is used to establish a connection between the hosts.
6. FIN: It is used to release a connection, and no further data exchange
will happen.

o Window size
It is a 16-bit field. It contains the size of data that the receiver can accept. This
field is used for the flow control between the sender and receiver and also
determines the amount of buffer allocated by the receiver for a segment. The
value of this field is determined by the receiver.
o Checksum
It is a 16-bit field. This field is optional in UDP, but in the case of TCP, it is
mandatory.
o Urgent pointer
It is a pointer that points to the urgent data byte if the URG flag is set to 1. It
defines a value that will be added to the sequence number to get the
sequence number of the last urgent byte.
o Options
It provides additional options. The options field is expressed in multiples of
32 bits; if the options occupy less than a multiple of 32 bits, padding is added
to fill the remaining bits.
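As an illustration of the field layout just described, the hedged Python sketch below (the function name and return format are my own) unpacks the fixed 20-byte part of a TCP header with the struct module; any options follow these 20 bytes.

```python
# Sketch: parse the fixed 20-byte TCP header described above (no options handling).
import struct

def parse_tcp_header(segment: bytes) -> dict:
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    header_len = (offset_flags >> 12) * 4     # HLEN is counted in 4-byte words
    flags = offset_flags & 0x3F               # URG, ACK, PSH, RST, SYN, FIN
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack,
        "header_len": header_len,
        "URG": bool(flags & 0x20), "ACK": bool(flags & 0x10),
        "PSH": bool(flags & 0x08), "RST": bool(flags & 0x04),
        "SYN": bool(flags & 0x02), "FIN": bool(flags & 0x01),
        "window": window, "checksum": checksum, "urgent_ptr": urgent,
    }
```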

Transport Layer protocols


o The transport layer is represented by two protocols: TCP and UDP.
o The IP protocol in the network layer delivers a datagram from a source host to
the destination host.
o Nowadays, operating systems support multiuser and multiprocessing
environments; an executing program is called a process. When a host sends a
message to another host, it means that a source process is sending a message to a
destination process. The transport layer protocols define some connections to
individual ports known as protocol ports.
o An IP protocol is a host-to-host protocol used to deliver a packet from source
host to the destination host while transport layer protocols are port-to-port
protocols that work on the top of the IP protocols to deliver the packet from
the originating port to the IP services, and from IP services to the destination
port.
o Each port is defined by a positive integer address, and it is of 16 bits.

UDP
o UDP stands for User Datagram Protocol.
o UDP is a simple protocol and it provides nonsequenced transport
functionality.
o UDP is a connectionless protocol.
o This type of protocol is used when reliability and security are less important
than speed and size.
o UDP is an end-to-end transport level protocol that adds transport-level
addresses, checksum error control, and length information to the data from
the upper layer.
o The packet produced by the UDP protocol is known as a user datagram.

User Datagram Format


The user datagram has an 8-byte header, which is shown below:

Where,

o Source port address: It defines the address of the application process that
has delivered the message. The source port address is a 16-bit field.
o Destination port address: It defines the address of the application process
that will receive the message. The destination port address is a 16-bit
field.
o Total length: It defines the total length of the user datagram in bytes. It is a
16-bit field.
o Checksum: The checksum is a 16-bit field which is used in error detection.
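For comparison with the TCP header sketch above, this small Python sketch (names are illustrative) unpacks the 8-byte UDP header made up of exactly these four fields.

```python
# Sketch: parse the 8-byte UDP header (source port, destination port, length, checksum).
import struct

def parse_udp_header(datagram: bytes):
    src_port, dst_port, length, checksum = struct.unpack("!HHHH", datagram[:8])
    payload = datagram[8:length]   # the total length field includes the 8-byte header
    return src_port, dst_port, length, checksum, payload
```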

Disadvantages of UDP protocol

o UDP provides only the basic functions needed for the end-to-end delivery of a
transmission.
o It does not provide any sequencing or reordering functions and does not
specify the damaged packet when reporting an error.
o UDP can discover that an error has occurred, but it does not specify which
packet has been lost as it does not contain an ID or sequencing number of a
particular data segment.

TCP
o TCP stands for Transmission Control Protocol.
o It provides full transport layer services to applications.
o It is a connection-oriented protocol, which means a connection is established
between both ends of the transmission. For creating the connection, TCP
generates a virtual circuit between sender and receiver for the duration of the
transmission.

Features Of TCP protocol

o Stream data transfer: TCP transfers data in the form of a contiguous
stream of bytes. TCP groups the bytes into TCP segments and then passes
them to the IP layer for transmission to the destination. TCP itself
segments the data and forwards it to IP.
o Reliability: TCP assigns a sequence number to each byte transmitted and
expects a positive acknowledgement from the receiving TCP. If ACK is not
received within a timeout interval, then the data is retransmitted to the
destination.
The receiving TCP uses the sequence number to reassemble the segments if
they arrive out of order or to eliminate the duplicate segments.
o Flow Control: The receiving TCP sends an acknowledgement back to the
sender indicating the number of bytes it can receive without overflowing its
internal buffer. This number is sent in the ACK in the form of the highest
sequence number that it can receive without any problem. This mechanism is
also referred to as the window mechanism.
o Multiplexing: Multiplexing is a process of accepting the data from different
applications and forwarding to the different applications on different
computers. At the receiving end, the data is forwarded to the correct
application. This process is known as demultiplexing. TCP transmits the packet
to the correct application by using the logical channels known as ports.
o Logical Connections: The combination of sockets, sequence numbers, and
window sizes, is called a logical connection. Each connection is identified by
the pair of sockets used by sending and receiving processes.
o Full Duplex: TCP provides Full Duplex service, i.e., the data flow in both the
directions at the same time. To achieve Full Duplex service, each TCP should
have sending and receiving buffers so that the segments can flow in both the
directions. TCP is a connection-oriented protocol. Suppose the process A
wants to send and receive the data from process B. The following steps occur:
o Establish a connection between two TCPs.
o Data is exchanged in both the directions.
o The Connection is terminated.
TCP Segment Format

Where,

o Source port address: It is used to define the address of the application


program in a source computer. It is a 16-bit field.
o Destination port address: It is used to define the address of the application
program in a destination computer. It is a 16-bit field.
o Sequence number: A stream of data is divided into two or more TCP
segments. The 32-bit sequence number field represents the position of the
data in an original data stream.
o Acknowledgement number: A 32-bit acknowledgement number
acknowledges data from the other communicating device. If the ACK flag is set
to 1, then this field specifies the sequence number that the receiver is expecting
to receive.
o Header Length (HLEN): It specifies the size of the TCP header in 32-bit
words. The minimum size of the header is 5 words, and the maximum size of
the header is 15 words. Therefore, the maximum size of the TCP header is 60
bytes, and the minimum size of the TCP header is 20 bytes.
o Reserved: It is a six-bit field which is reserved for future use.
o Control bits: Each bit of a control field functions individually and
independently. A control bit defines the use of a segment or serves as a
validity check for other fields.

There are a total of six flags in the control field:


o URG: The URG field indicates that the data in a segment is urgent.
o ACK: When ACK field is set, then it validates the acknowledgement number.
o PSH: The PSH field requests that the data be pushed to the receiving
application as soon as possible, so that, if possible, data is delivered with higher
throughput.
o RST: The reset bit is used to reset the TCP connection when there is
confusion in the sequence numbers.
o SYN: The SYN field is used to synchronize the sequence numbers in three
types of segments: connection request, connection confirmation ( with the
ACK bit set ), and confirmation acknowledgement.
o FIN: The FIN field is used to inform the receiving TCP module that the sender
has finished sending data. It is used in connection termination in three types
of segments: termination request, termination confirmation, and
acknowledgement of termination confirmation.
o Window Size: The window is a 16-bit field that defines the size of the
window.
o Checksum: The checksum is a 16-bit field used in error detection.
o Urgent pointer: If the URG flag is set to 1, then this 16-bit field is an offset
from the sequence number indicating the last urgent data byte.
o Options and padding: It defines the optional fields that convey the
additional information to the receiver.

Differences b/w TCP & UDP

Definition: TCP establishes a virtual circuit before transmitting the data, whereas UDP
transmits the data directly to the destination computer without verifying whether the
receiver is ready to receive it or not.

Connection type: TCP is a connection-oriented protocol; UDP is a connectionless
protocol.

Speed: TCP is slow; UDP is fast.

Reliability: TCP is a reliable protocol; UDP is an unreliable protocol.

Header size: TCP has a 20-byte (minimum) header; UDP has an 8-byte header.

Acknowledgement: TCP waits for the acknowledgement of data and can resend lost
packets; UDP neither takes acknowledgement nor retransmits a damaged frame.

Transport Layer responsibilities



Transport Layer is the second layer in the TCP/IP model and the fourth layer
in the OSI model. It is an end-to-end layer used to deliver messages to a
host. It is termed an end-to-end layer because it provides a point-to-point
connection, rather than hop-to-hop, between the source host and
destination host to deliver the services reliably. The unit of data
encapsulation in the Transport Layer is a segment.

Working of Transport Layer:

The transport layer takes services from the Network layer and provides
services to the Application layer.
At the sender’s side: The transport layer receives data (a message) from the
Application layer, performs segmentation (divides the message into
segments), adds the source and destination port numbers to the header of
each segment, and transfers the segments to the Network layer.
At the receiver’s side: The transport layer receives data from the Network
layer, reassembles the segmented data, reads its header, identifies the port
number, and forwards the message to the appropriate port in the Application
layer.
Responsibilities of a Transport Layer:

Process to process delivery:


While Data Link Layer requires the MAC address (48 bits address contained
inside the Network Interface Card of every host machine) of source-
destination hosts to correctly deliver a frame and the Network layer requires
the IP address for appropriate routing of packets, in a similar way Transport
Layer requires a Port number to correctly deliver the segments of data to the
correct process amongst the multiple processes running on a particular host.
A port number is a 16-bit address used to identify any client-server program
uniquely.
End-to-end Connection between hosts:
The transport layer is also responsible for creating the end-to-end
connection between hosts, for which it mainly uses TCP and UDP. TCP is a
reliable, connection-oriented protocol that uses a handshake to
establish a robust connection between two end hosts. TCP ensures reliable
delivery of messages and is used in various applications. UDP, on the other
hand, is a stateless and unreliable protocol that provides best-effort delivery.
It is suitable for applications that have little concern for flow or error control
and that require sending bulk data, such as video conferencing. It is often used
in multicasting protocols.
Multiplexing and Demultiplexing:
Multiplexing allows simultaneous use of different applications over a network
that is running on a host. The transport layer provides this mechanism which
enables us to send packet streams from various applications simultaneously
over a network. The transport layer accepts these packets from different
processes differentiated by their port numbers and passes them to the
network layer after adding proper headers. Similarly, demultiplexing is
required at the receiver side to obtain the data coming from various
processes. The transport layer receives segments of data from the network layer
and delivers them to the appropriate process running on the receiver’s machine.
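A rough Python sketch of demultiplexing by port number follows; the two port numbers and the single-process layout are purely illustrative (in practice each "application" would be a separate process).

```python
# Sketch: two UDP "applications" on one host, distinguished only by their port numbers.
import socket

app_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
app_a.bind(("127.0.0.1", 6001))            # hypothetical port for application A
app_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
app_b.bind(("127.0.0.1", 6002))            # hypothetical port for application B

# the sending host multiplexes two messages onto the same network path
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"query for A", ("127.0.0.1", 6001))
client.sendto(b"log entry for B", ("127.0.0.1", 6002))

# the receiving transport layer demultiplexes by destination port number
for sock in (app_a, app_b):
    data, addr = sock.recvfrom(1024)
    print("port", sock.getsockname()[1], "received", data, "from", addr)
```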
Congestion Control:
Congestion is a situation in which too many sources over a network attempt
to send data and the router buffers start overflowing due to which loss of
packets occur. As a result retransmission of packets from the sources
increases the congestion further. In this situation, the Transport layer
provides congestion control in different ways. It uses open-loop congestion
control to prevent congestion and closed-loop congestion control to
remove congestion from the network once it has occurred. TCP provides
AIMD (additive increase/multiplicative decrease) and the leaky bucket
technique for congestion control.
Data integrity and Error correction:
The transport layer checks for errors in the messages coming from the
application layer by using error-detection codes and computing checksums; it
checks whether the received data is corrupted, uses ACK and NACK services to
inform the sender whether the data has arrived, and thereby checks the
integrity of the data.
Flow control:
The transport layer provides an end-to-end flow control mechanism between the
sender and the receiver. TCP prevents data loss due to a fast sender
and a slow receiver by imposing flow control techniques. It uses the
sliding window protocol, in which the receiver sends a window back to the
sender informing it of the amount of data it can receive.

Protocols of Transport Layer:

 TCP(Transmission Control Protocol)


 UDP (User Datagram Protocol)
 SCTP (Stream Control Transmission Protocol)
 DCCP (Datagram Congestion Control Protocol)
 ATP (AppleTalk Transaction Protocol)
 FCP (Fibre Channel Protocol)
 RDP (Reliable Data Protocol)
 RUDP (Reliable User Data Protocol)
 SST (Structured Stream Transport)
 SPX (Sequenced Packet Exchange)

Congestion Control in Computer


Networks
What is congestion?
Congestion is a state occurring in the network layer when the message traffic is so
heavy that it slows down the network response time.

Effects of Congestion
 As delay increases, performance decreases.
 If delay increases, retransmission occurs, making situation worse.

Congestion control algorithms


 Congestion Control is a mechanism that controls the entry of data packets into the
network, enabling a better use of a shared network infrastructure and avoiding
congestive collapse.
 Congestive-Avoidance Algorithms (CAA) are implemented at the TCP layer as the
mechanism to avoid congestive collapse in a network.
 There are two congestion control algorithms, which are as follows:
 Leaky Bucket Algorithm
 The leaky bucket algorithm discovers its use in the context of network traffic
shaping or rate-limiting.
 A leaky bucket execution and a token bucket execution are predominantly used for
traffic shaping algorithms.
 This algorithm is used to control the rate at which traffic is sent to the network and
shape the burst traffic to a steady traffic stream.
 A disadvantage of the leaky-bucket algorithm is the inefficient use of
available network resources.
 Large amounts of network resources, such as bandwidth, may not be used
effectively.

Let us consider an example to understand

Imagine a bucket with a small hole in the bottom. No matter at what rate water enters
the bucket, the outflow is at a constant rate. When the bucket is full of water,
additional water entering spills over the sides and is lost.

Similarly, each network interface contains a leaky bucket and the following steps are
involved in leaky bucket algorithm:
1. When host wants to send packet, packet is thrown into the bucket.
2. The bucket leaks at a constant rate, meaning the network interface transmits
packets at a constant rate.
3. Bursty traffic is converted to a uniform traffic by the leaky bucket.
4. In practice the bucket is a finite queue that outputs at a finite rate.

 Token bucket Algorithm


 The leaky bucket algorithm has a rigid output design at an average rate
independent of the bursty traffic.
 In some applications, when large bursts arrive, the output is allowed to speed up.
This calls for a more flexible algorithm, preferably one that never loses
information. Therefore, a token bucket algorithm finds its uses in network traffic
shaping or rate-limiting.
 It is a control algorithm that indicates when traffic should be sent, based on the
presence of tokens in the bucket.
 The bucket contains tokens. Each token corresponds to a packet (or unit of data)
of predetermined size. Tokens are removed from the bucket when a packet is sent.
 If tokens are present, a flow is allowed to transmit traffic; if there are no tokens,
the flow cannot send its packets. Hence, a flow can transmit traffic up to its peak
burst rate as long as there are enough tokens in the bucket.

Need of token bucket Algorithm:-

The leaky bucket algorithm enforces output pattern at the average rate, no matter how
bursty the traffic is. So in order to deal with the bursty traffic we need a flexible
algorithm so that the data is not lost. One such algorithm is token bucket algorithm.

Steps of this algorithm can be described as follows:

1. At regular intervals, tokens are added to the bucket.
2. The bucket has a maximum capacity.
3. If there is a ready packet, a token is removed from the bucket, and the packet is
sent.
4. If there is no token in the bucket, the packet cannot be sent.

Let’s understand with an example,

In figure (A) we see a bucket holding three tokens, with five packets waiting to be
transmitted. For a packet to be transmitted, it must capture and destroy one token. In
figure (B) we see that three of the five packets have gotten through, but the other two
are stuck waiting for more tokens to be generated.

Ways in which the token bucket is superior to the leaky bucket: The leaky bucket
algorithm controls the rate at which packets are introduced into the network, but it is
very conservative in nature. Some flexibility is introduced in the token bucket
algorithm. In the token bucket algorithm, tokens are generated at each tick (up to a
certain limit). For an incoming packet to be transmitted, it must capture a token, and
the transmission then takes place. Hence, bursty packets can be transmitted as long as
tokens are available, which introduces some amount of flexibility into the system. A
minimal implementation sketch follows below.
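Here is a minimal Python sketch of a token bucket, assuming one token corresponds to one byte; the rate, capacity, and packet sizes are illustrative values, not from the notes.

```python
# Sketch: token bucket with tokens counted in bytes; illustrative parameters only.
import time

class TokenBucket:
    def __init__(self, rate_bps: float, capacity: float):
        self.rate = rate_bps          # tokens (bytes) added per second
        self.capacity = capacity      # maximum tokens the bucket can hold
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, packet_size: int) -> bool:
        now = time.monotonic()
        # refill tokens for the elapsed time, up to the bucket capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size    # a conforming packet consumes tokens
            return True
        return False                      # no tokens: the packet must wait or be queued

bucket = TokenBucket(rate_bps=1000, capacity=4000)
for size in (200, 400, 450, 3000, 3000):
    print(size, "sent" if bucket.allow(size) else "held back")
```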

Formula: M * s = C + ρ * s, where
s – burst time (time taken)
M – maximum output rate
ρ – token arrival rate
C – capacity of the token bucket in bytes
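As a quick worked example (the numbers here are illustrative, not from the notes): if C = 1 Mbit, ρ = 2 Mbps, and M = 10 Mbps, then rearranging M * s = C + ρ * s gives s = C / (M - ρ) = 1 Mbit / 8 Mbps = 0.125 s, so the bucket can sustain a maximum-rate burst of about 125 ms.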
Computer Network | Leaky bucket
algorithm
In the network layer, before the network can make Quality of service guarantees, it
must know what traffic is being guaranteed. One of the main causes of congestion is
that traffic is often bursty.
To understand this concept, we first have to know a little about traffic shaping. Traffic
shaping is a mechanism to control the amount and the rate of the traffic sent to the
network; it is an approach to congestion management. Traffic shaping helps to
regulate the rate of data transmission and reduces congestion.
There are 2 types of traffic shaping algorithms:
1. Leaky Bucket
2. Token Bucket
Suppose we have a bucket in which we are pouring water, at random points in time,
but we have to get water at a fixed rate, to achieve this we will make a hole at the
bottom of the bucket. This will ensure that the water coming out is at some fixed rate,
and also if the bucket gets full, then we will stop pouring water into it.
The input rate can vary, but the output rate remains constant. Similarly, in networking,
a technique called leaky bucket can smooth out bursty traffic. Bursty chunks are
stored in the bucket and sent out at an average rate.

In the above figure, we assume that the network has committed a bandwidth of 3
Mbps for a host. The use of the leaky bucket shapes the input traffic to make it
conform to this commitment. In the above figure, the host sends a burst of data at a
rate of 12 Mbps for 2s, for a total of 24 Mbits of data. The host is silent for 5 s and
then sends data at a rate of 2 Mbps for 3 s, for a total of 6 Mbits of data. In all, the host
has sent 30 Mbits of data in 10 s. The leaky bucket smooths out the traffic by sending
out data at a rate of 3 Mbps during the same 10 s.
Without the leaky bucket, the beginning burst may have hurt the network by
consuming more bandwidth than is set aside for this host. We can also see that the
leaky bucket may prevent congestion.
A simple leaky bucket algorithm can be implemented using FIFO queue. A FIFO
queue holds the packets. If the traffic consists of fixed-size packets (e.g., cells in ATM
networks), the process removes a fixed number of packets from the queue at each tick
of the clock. If the traffic consists of variable-length packets, the fixed output rate
must be based on the number of bytes or bits.
The following is an algorithm for variable-length packets:
1. Initialize a counter to n at the tick of the clock.
2. Repeat until n is smaller than the packet size of the packet at the head of the queue.
1. Pop a packet out of the head of the queue, say P.
2. Send the packet P, into the network
3. Decrement the counter by the size of packet P.
3. Reset the counter and go to step 1.
Note: In the below examples, the head of the queue is the rightmost position and the
tail of the queue is the leftmost position.
Example: Let n = 1000.
Packets waiting in the queue (sizes, head first): 200, 400, 450.

Since n > size of the packet at the head of the Queue, i.e. n > 200
Therefore, n = 1000-200 = 800
Packet size of 200 is sent into the network.

Now, again n > size of the packet at the head of the Queue, i.e. n > 400
Therefore, n = 800-400 = 400
Packet size of 400 is sent into the network.

Since, n < size of the packet at the head of the Queue, i.e. n < 450
Therefore, the procedure is stopped.
Initialise n = 1000 on another tick of the clock.
This procedure is repeated until all the packets are sent into the network.
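The byte-counting procedure above can be sketched in Python as follows; the queue contents and the budget n = 1000 mirror the example, and the function name is my own. Note that the sketch keeps the head of the queue at the leftmost position, unlike the figures above.

```python
# Sketch: variable-length leaky bucket; n bytes may be drained from the FIFO per tick.
from collections import deque

def leaky_bucket_tick(queue: deque, n: int) -> list:
    """Send packets from the head of the queue until the remaining
    byte budget n is smaller than the packet at the head."""
    sent = []
    while queue and queue[0] <= n:
        size = queue.popleft()       # pop packet P from the head of the queue
        n -= size                    # decrement the counter by the size of P
        sent.append(size)            # "send" P into the network
    return sent

queue = deque([200, 400, 450])       # head of the queue is the leftmost element here
print(leaky_bucket_tick(queue, 1000))   # -> [200, 400]; 450 waits for the next tick
print(leaky_bucket_tick(queue, 1000))   # -> [450]
```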
Difference between Leaky and Token buckets –

Leaky Bucket:
1. When the host has to send a packet, the packet is thrown into the bucket.
2. The bucket leaks at a constant rate.
3. Bursty traffic is converted into uniform traffic by the leaky bucket.
4. In practice, the bucket is a finite queue that outputs at a finite rate.

Token Bucket:
1. The bucket holds tokens generated at regular intervals of time.
2. The bucket has a maximum capacity.
3. If there is a ready packet, a token is removed from the bucket and the packet is sent.
4. If there is no token in the bucket, the packet cannot be sent.

Some advantage of token Bucket over leaky bucket


 If the bucket is full in the token bucket, tokens are discarded, not packets, while
in the leaky bucket, packets are discarded.
 The token bucket can send large bursts at a faster rate, while the leaky bucket
always sends packets at a constant rate.

TCP 3-Way Handshake Process


This can also be seen as the way a TCP connection is established.
Before getting into the details, let us look at some basics. TCP stands
for Transmission Control Protocol, which indicates that it does something
to control the transmission of the data in a reliable way.
The process of communication between devices over the internet happens
according to the current TCP/IP suite model (a stripped-down version of the OSI
reference model). The application layer sits at the top of the TCP/IP stack;
from there, network applications like web browsers on the client side establish a
connection with the server. From the application layer, the information is
transferred to the transport layer, where our topic comes into the picture. The
two important protocols of this layer are TCP and UDP (User Datagram Protocol),
of which TCP is prevalent (since it provides reliability for the connection
established). However, you can find an application of UDP in querying the DNS
server to get the IP address corresponding to the domain name used for the
website.
TCP provides reliable communication with something called Positive
Acknowledgement with Re-transmission (PAR). The Protocol Data
Unit (PDU) of the transport layer is called a segment. A device using
PAR resends the data unit until it receives an acknowledgement. If the data
unit received at the receiver’s end is damaged (the receiver checks the data with
the checksum functionality of the transport layer, which is used for error
detection), the receiver discards the segment, so the sender has to resend the
data unit for which a positive acknowledgement was not received. You can see
from this mechanism that three segments are exchanged between the
sender (client) and the receiver (server) for a reliable TCP connection to be
established. Let us delve into how this mechanism works:

 Step 1 (SYN): In the first step, the client wants to establish a connection
with a server, so it sends a segment with SYN (Synchronize Sequence
Number) set, which informs the server that the client intends to start
communication and with what sequence number its segments will start.
 Step 2 (SYN + ACK): The server responds to the client request with the SYN
and ACK bits set. ACK signifies the response to the segment it received, and
SYN signifies with what sequence number the server will start its segments.
 Step 3 (ACK): In the final part client acknowledges the response of the
server and they both establish a reliable connection with which they will
start the actual data transfer.

TCP 3-Way Handshake Process


TCP is a connection-oriented protocol and every connection-oriented protocol needs
to establish a connection in order to reserve resources at both the communicating
ends.
Connection Establishment –
1. Sender starts the process with the following:
 Sequence number (Seq=521): contains the random initial sequence number
generated at the sender side.
 Syn flag (Syn=1): request the receiver to synchronize its sequence number with
the above-provided sequence number.
 Maximum segment size (MSS=1460 B): the sender tells its maximum segment size,
so that the receiver sends datagrams that won’t require any fragmentation. The MSS
field is present inside the Options field in the TCP header.
 Window size (window=14600 B): the sender advertises its buffer capacity, in which
it has to store messages from the receiver.

2. TCP is a full-duplex protocol so both sender and receiver require a window for
receiving messages from one another.
 Sequence number (Seq=2000): contains the random initial sequence number
generated at the receiver side.
 Syn flag (Syn=1): request the sender to synchronize its sequence number with the
above-provided sequence number.
 Maximum segment size (MSS=500 B): the receiver tells its maximum segment size,
so that the sender sends datagrams that won’t require any fragmentation. The MSS
field is present inside the Options field in the TCP header.
Since MSS(receiver) < MSS(sender), both parties agree on the minimum MSS, i.e., 500 B,
to avoid fragmentation of packets at both ends.
Therefore, the receiver can send a maximum of 14600/500 = 29 packets.
This is the receiver's sending window size.
 Window size (window=10000 B): the receiver advertises its buffer capacity, in which
it has to store messages from the sender.
Therefore, the sender can send a maximum of 10000/500 = 20 packets.
This is the sender's sending window size.
 Acknowledgement Number (Ack no.=522): Since sequence number 521 was
received by the receiver, it requests the next sequence number with
Ack no.=522, which is the next byte expected by the receiver, since the SYN flag
consumes one sequence number.
 ACK flag (ACK=1): tells that the acknowledgement number field contains the
next sequence number expected by the receiver.
3. Sender makes the final reply for connection establishment in the following way:
 Sequence number (Seq=522): since sequence number = 521 in 1st step and SYN
flag consumes one sequence number hence, the next sequence number will be 522.
 Acknowledgement Number (Ack no.=2001): since the sender is acknowledging
the SYN=1 packet from the receiver with sequence number 2000, the next sequence
number expected is 2001.
 ACK flag (ACK=1): tells that the acknowledgement number field contains the
next sequence number expected by the sender.

Since the connection establishment phase of TCP makes use of 3 packets, it is also
known as 3-way Handshaking (SYN, SYN + ACK, ACK).
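For illustration only, the sketch below uses the scapy library to hand-craft the first two steps of the handshake with the example values above (seq = 521, MSS = 1460 B, window = 14600 B); the destination address and ports are hypothetical, sending raw segments requires root privileges, and the local kernel (which knows nothing about this SYN) will normally answer the SYN+ACK with its own RST unless that is filtered.

```python
# Hedged sketch using scapy; values follow the example above, host is hypothetical.
from scapy.all import IP, TCP, sr1, send

syn = IP(dst="192.0.2.10") / TCP(
    sport=40000, dport=80, flags="S",
    seq=521, window=14600, options=[("MSS", 1460)])

synack = sr1(syn, timeout=2)                 # step 2: server's SYN+ACK
if synack is not None and synack.haslayer(TCP):
    print("server ISN :", synack[TCP].seq)   # e.g. 2000 in the example above
    print("server ack :", synack[TCP].ack)   # 522 = 521 + 1 (SYN consumes one number)
    ack = IP(dst="192.0.2.10") / TCP(
        sport=40000, dport=80, flags="A",
        seq=522, ack=synack[TCP].seq + 1)    # step 3: final ACK
    send(ack)
```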
TCP Connection Termination
In the TCP 3-way handshake process we studied how connections are established
between client and server in the Transmission Control Protocol (TCP) using SYN
segments. In this section, we will study how TCP closes a connection between client
and server. Here we will also need to send segments to the server in which the FIN
bit is set to 1.
TCP supports two types of connection releases like most connection-oriented transport
protocols:

1. Graceful connection release –


In the Graceful connection release, the connection is open until both parties have
closed their sides of the connection.
2. Abrupt connection release –
In an Abrupt connection release, either one TCP entity is forced to close the
connection or one user closes both directions of data transfer.
Abrupt connection release :
An abrupt connection release is carried out when an RST segment is sent. An RST
segment can be sent for the below reasons:
1. When a non-SYN segment was received for a non-existing TCP connection.

2. In an open connection, some TCP implementations send an RST segment when a


segment with an invalid header is received. This will prevent attacks by closing the
corresponding connection.

3. When some implementations need to close an existing TCP connection, they send
an RST segment. They will close an existing TCP connection for the following
reasons:
 Lack of resources to support the connection

 The remote host is now unreachable and has stopped responding.

When a TCP entity sends an RST segment, the sequence number should be 0 if the
segment does not belong to any existing connection; otherwise, it should contain the
current value of the sequence number for the connection, and the acknowledgment
number should be set to the next expected in-sequence number on this connection.
Graceful Connection Release :
The common way of terminating a TCP connection is by using the TCP header’s FIN
flag. This mechanism allows each host to release its own side of the connection
individually.
How mechanism works In TCP :

1. Step 1 (FIN From Client) –


Suppose that the client application decides it wants to close the connection. (Note
that the server could also choose to close the connection). This causes the client to
send a TCP segment with the FIN bit set to 1 to the server and to enter
the FIN_WAIT_1 state. While in the FIN_WAIT_1 state, the client waits for a
TCP segment from the server with an acknowledgment (ACK).
2. Step 2 (ACK From Server) –
When the server receives the FIN segment from the sender (client), the server
immediately sends an acknowledgement (ACK) segment back to the sender (client).
3. Step 3 (Client waiting) –
While in the FIN_WAIT_1 state, the client waits for a TCP segment from the
server with an acknowledgment. When it receives this segment, the client enters
the FIN_WAIT_2 state. While in the FIN_WAIT_2 state, the client waits for
another segment from the server with the FIN bit set to 1.
4. Step 4 (FIN from Server) –
The server sends its own FIN segment to the sender (client) some time after
sending the ACK segment (because of its own closing process in the
server).
5. Step 5 (ACK from Client) –
When the Client receives the FIN bit segment from the Server, the client
acknowledges the server’s segment and enters the TIME_WAIT state.
The TIME_WAIT state lets the client resend the final acknowledgment in case
the ACK is lost. The time spent by clients in the TIME_WAIT state depends on
their implementation, but their typical values are 30 seconds, 1 minute, and 2
minutes. After the wait, the connection formally closes and all resources on the
client-side (including port numbers and buffer data) are released.
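From an application's perspective, this FIN exchange is triggered by closing or shutting down the socket; the Python sketch below (hypothetical address and port) shows a client performing a graceful release: shutdown(SHUT_WR) causes the kernel to send the FIN, and a zero-length read indicates that the server's FIN has arrived.

```python
# Sketch: graceful release seen from the client application.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("127.0.0.1", 5000))    # hypothetical server, as in the earlier sketch
sock.sendall(b"last request")
sock.shutdown(socket.SHUT_WR)        # send FIN: "I have finished sending data"

while True:
    data = sock.recv(1024)           # keep reading until the server's FIN arrives
    if not data:                     # b"" means the peer closed its side
        break
sock.close()                         # release local resources; TIME_WAIT is handled by the OS
```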
The figures below illustrate the series of states visited by the server side and also
the client side, assuming the client begins the connection tear-down. In these two state-
transition figures, we have only shown how a TCP connection is normally established
and shut down.
TCP states visited by ClientSide –

TCP states visited by ServerSide –


Here we have not described what happens in certain scenarios, such as when both sides
of a connection want to initiate or shut down at the same time. If you are interested in
learning more about this and other advanced issues concerning TCP, you are
encouraged to see Stevens' comprehensive book.

User Datagram Protocol (UDP)


User Datagram Protocol (UDP) is a transport layer protocol. UDP is a part of the
Internet Protocol suite, referred to as the UDP/IP suite. Unlike TCP, it is an unreliable
and connectionless protocol, so there is no need to establish a connection prior to
data transfer. UDP helps to establish low-latency and loss-tolerating connections
over the network, and it enables process-to-process communication.
Though Transmission Control Protocol (TCP) is the dominant transport layer protocol
used with most Internet services and provides assured delivery, reliability, and much
more, all these services cost additional overhead and latency. Here, UDP comes into
the picture. For real-time services like computer gaming, voice or video
communication, and live conferences, we need UDP. Since high performance is
needed, UDP permits packets to be dropped instead of processing delayed packets.
There is no error recovery in UDP, so it also saves bandwidth.
User Datagram Protocol (UDP) is more efficient in terms of both latency and
bandwidth.
UDP Header –
The UDP header is a fixed, simple 8-byte header, while the TCP header may vary from
20 bytes to 60 bytes. The first 8 bytes contain all necessary header information and the
remaining part consists of data. UDP port number fields are each 16 bits long; therefore
the range for port numbers is defined from 0 to 65535, with port number 0 reserved.
Port numbers help to distinguish different user requests or processes.

1. Source Port: Source Port is a 2 Byte long field used to identify the port number of
the source.
2. Destination Port: It is a 2 Byte long field, used to identify the port of the destined
packet.
3. Length: Length is the length of UDP including the header and the data. It is a 16-
bits field.
4. Checksum: Checksum is 2 Bytes long field. It is the 16-bit one’s complement of
the one’s complement sum of the UDP header, the pseudo-header of information
from the IP header, and the data, padded with zero octets at the end (if necessary)
to make a multiple of two octets.
Notes – Unlike TCP, the checksum calculation is not mandatory in UDP. No error
control or flow control is provided by UDP; hence UDP depends on IP and ICMP for
error reporting. Also, UDP provides port numbers so that it can differentiate between
users' requests.
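The checksum rule quoted above can be illustrated with the following Python sketch (function names are my own): it builds the IPv4 pseudo-header, pads the data to an even number of octets, and returns the 16-bit one's complement of the one's complement sum. The checksum field inside the segment must be zero while the sum is computed.

```python
# Sketch: UDP checksum over pseudo-header + header + data (RFC 768 style).
import socket
import struct

def ones_complement_sum16(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                              # pad to an even number of octets
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)     # fold any carry back in
    return total

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    # pseudo-header: source IP, destination IP, zero byte, protocol 17, UDP length
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    return (~ones_complement_sum16(pseudo + udp_segment)) & 0xFFFF

# usage with a tiny illustrative segment (checksum field packed as zero)
seg = struct.pack("!HHHH", 5353, 5353, 8 + 5, 0) + b"hello"
print(hex(udp_checksum(socket.inet_aton("192.0.2.1"),
                       socket.inet_aton("192.0.2.2"), seg)))
```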
Applications of UDP:
 Used for simple request-response communication when the size of data is less and
hence there is lesser concern about flow and error control.
 It is a suitable protocol for multicasting as UDP supports packet switching.
 UDP is used for some routing update protocols like RIP(Routing Information
Protocol).
 Normally used for real-time applications which can not tolerate uneven delays
between sections of a received message.
 The following implementations use UDP as a transport layer protocol:
 NTP (Network Time Protocol)
 DNS (Domain Name Service)
 BOOTP, DHCP.
 NNP (Network News Protocol)
 Quote of the day protocol
 TFTP, RTSP, RIP.
 The application layer can do some of the tasks through UDP-
 Trace Route
 Record Route
 Timestamp
 UDP takes a datagram from Network Layer, attaches its header, and sends it to the
user. So, it works fast.
 Actually, UDP is a null protocol if you remove the checksum field. UDP is
preferred in the following situations:
1. To reduce the requirement of computer resources.
2. When using multicast or broadcast to transfer data.
3. For the transmission of real-time packets, mainly in multimedia applications.
