NOTES 6
TCP stands for Transmission Control Protocol. It is a transport layer protocol that
facilitates the transmission of packets from source to destination. It is a connection-
oriented protocol, which means it establishes a connection before communication
occurs between the computing devices in a network. This protocol is used together
with IP (the Internet Protocol), which is why the pair is commonly referred to as TCP/IP.
The main functionality of TCP is to take the data from the application layer, divide it
into several packets, number these packets, and finally transmit them to the
destination. The TCP on the other side reassembles the packets and delivers them to
the application layer. Because TCP is a connection-oriented protocol, the connection
remains established until the communication between the sender and the receiver is
complete.
TCP is a transport layer protocol as it is used in transmitting the data from the sender
to the receiver.
o Full duplex
TCP is full duplex, which means that data can be transferred in both directions at the
same time.
o Stream-oriented
TCP is a stream-oriented protocol, as it allows the sender to send data as a stream of
bytes and the receiver to accept data as a stream of bytes. TCP creates an environment
in which the sender and receiver are connected by an imaginary tube known as a
virtual circuit. This virtual circuit carries the stream of bytes across the internet.
Need for Transmission Control Protocol
In the layered architecture of a network model, the whole task is divided into smaller
tasks. Each task is assigned to a particular layer that processes the task. In the TCP/IP
model, TCP works at the transport layer and is responsible for reliable end-to-end
delivery of data between processes.
Working of TCP
In TCP, the connection is established by using three-way handshaking. The client
sends the segment with its sequence number. The server, in return, sends its segment
with its own sequence number as well as the acknowledgement sequence, which is
one more than the client sequence number. When the client receives the
acknowledgment of its segment, then it sends the acknowledgment to the server. In
this way, the connection is established between the client and the server.
Advantages of TCP
o It is reliable: lost or corrupted segments are detected and retransmitted.
o It delivers data in order and detects errors using a checksum.
o It provides flow control between sender and receiver.
o It is full duplex, so data can flow in both directions at the same time.
Disadvantages of TCP
TCP adds a large amount of overhead, since each segment carries its own TCP header;
fragmentation by routers increases this overhead further.
o Window size
It is a 16-bit field. It contains the size of data that the receiver can accept. This
field is used for the flow control between the sender and receiver and also
determines the amount of buffer allocated by the receiver for a segment. The
value of this field is determined by the receiver.
o Checksum
It is a 16-bit field. This field is optional in UDP, but in the case of TCP, it is
mandatory.
o Urgent pointer
It is a pointer that points to the urgent data byte if the URG flag is set to 1. It
defines a value that will be added to the sequence number to get the
sequence number of the last urgent byte.
o Options
It provides additional options. Options are expressed in multiples of 32 bits; if an
option occupies less than a full 32-bit word, padding is added to fill the remaining
bits.
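To make the layout of these fields concrete, here is a minimal Python sketch (illustrative only, not part of the original notes) that unpacks the fixed 20-byte TCP header and exposes the window size, checksum, urgent pointer, and any options:

    import struct

    def parse_tcp_header(segment: bytes):
        # Fixed 20-byte TCP header: ports, sequence/ack numbers,
        # data offset + flags, window size, checksum, urgent pointer.
        (src_port, dst_port, seq, ack,
         offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
        header_len = (offset_flags >> 12) * 4   # data offset is counted in 32-bit words
        options = segment[20:header_len]        # options, padded to a 32-bit boundary
        return {"src_port": src_port, "dst_port": dst_port, "seq": seq, "ack": ack,
                "window": window, "checksum": checksum, "urgent_pointer": urgent,
                "options": options}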
UDP
o UDP stands for User Datagram Protocol.
o UDP is a simple protocol and it provides nonsequenced transport
functionality.
o UDP is a connectionless protocol.
o This type of protocol is used when reliability and security are less important
than speed and size.
o UDP is an end-to-end transport level protocol that adds transport-level
addresses, checksum error control, and length information to the data from
the upper layer.
o The packet produced by the UDP protocol is known as a user datagram.
The user datagram contains the following fields:
o Source port address: It defines the address of the application process that
has delivered the message. The source port address is a 16-bit field.
o Destination port address: It defines the address of the application process
that will receive the message. The destination port address is a 16-bit field.
o Total length: It defines the total length of the user datagram in bytes. It is a
16-bit field.
o Checksum: The checksum is a 16-bit field which is used in error detection.
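A minimal UDP exchange illustrates these fields in practice. In the sketch below (the address 127.0.0.1:9999 and the message are assumptions for illustration), the operating system fills in the source port, destination port, total length, and checksum of each user datagram:

    import socket

    # Connectionless UDP: one datagram out, optionally one datagram back.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    sock.sendto(b"hello", ("127.0.0.1", 9999))   # no connection is established first
    try:
        data, addr = sock.recvfrom(4096)          # addr carries the peer's IP and port
        print("received", data, "from", addr)
    except socket.timeout:
        print("no reply (UDP gives no delivery guarantee)")
    finally:
        sock.close()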
TCP
o TCP stands for Transmission Control Protocol.
o It provides full transport layer services to applications.
o It is a connection-oriented protocol, meaning a connection is established
between both ends of the transmission. For creating the connection, TCP
generates a virtual circuit between sender and receiver for the duration of the
transmission.
o Stream data transfer: TCP transfers the data in the form of a contiguous
stream of bytes. TCP groups the bytes into TCP segments and then passes them
to the IP layer for transmission to the destination. TCP itself segments the data
and forwards it to IP.
o Reliability: TCP assigns a sequence number to each byte transmitted and
expects a positive acknowledgement from the receiving TCP. If ACK is not
received within a timeout interval, then the data is retransmitted to the
destination.
The receiving TCP uses the sequence number to reassemble the segments if
they arrive out of order or to eliminate the duplicate segments.
o Flow Control: The receiving TCP sends an acknowledgement back to the
sender indicating the number of bytes it can receive without overflowing its
internal buffer. The number of bytes is sent in the ACK in the form of the highest
sequence number that it can receive without any problem. This mechanism is
also referred to as a window mechanism.
o Multiplexing: Multiplexing is the process of accepting data from different
applications and forwarding it to different applications on different
computers. At the receiving end, the data is forwarded to the correct
application; this process is known as demultiplexing. TCP delivers segments
to the correct application by using logical channels known as ports.
o Logical Connections: The combination of sockets, sequence numbers, and
window sizes, is called a logical connection. Each connection is identified by
the pair of sockets used by sending and receiving processes.
o Full Duplex: TCP provides full-duplex service, i.e., data flows in both
directions at the same time. To achieve full-duplex service, each TCP endpoint
must have sending and receiving buffers so that segments can flow in both
directions. TCP is a connection-oriented protocol. Suppose process A wants to
send and receive data from process B. The following steps occur:
o Establish a connection between two TCPs.
o Data is exchanged in both the directions.
o The Connection is terminated.
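From an application's point of view, these three steps map directly onto the socket calls shown in the sketch below (a minimal illustration that assumes an echo-style server is listening at 127.0.0.1:8080; this address is not part of the original notes):

    import socket

    # Step 1: establish a connection between the two TCPs
    # (the three-way handshake is performed by the OS inside connect()).
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(("127.0.0.1", 8080))

    # Step 2: data is exchanged in both directions over the same connection.
    sock.sendall(b"request from process A")
    reply = sock.recv(4096)

    # Step 3: the connection is terminated (FIN segments are exchanged).
    sock.close()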
TCP Segment Format
[Figure of the TCP segment format omitted.] The header fields (window size,
checksum, urgent pointer, and options) are described above.
The Transport Layer is the second layer from the top in the TCP/IP model and the
fourth layer in the OSI model. It is an end-to-end layer used to deliver messages to a
host. It is termed an end-to-end layer because it provides a point-to-point
connection, rather than a hop-to-hop one, between the source host and
destination host to deliver the services reliably. The unit of data
encapsulation in the Transport Layer is a segment.
The transport layer takes services from the Network layer and provides
services to the Application layer.
At the sender’s side: The transport layer receives data (a message) from the
Application layer, performs segmentation by dividing the message into segments,
adds the source and destination port numbers to the header of each segment, and
transfers the segments to the Network layer.
At the receiver’s side: The transport layer receives data from the Network
layer, reassembles the segmented data, reads its header, identifies the port
number, and forwards the message to the appropriate port in the Application
layer.
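The segmentation performed at the sender's side can be sketched as follows (a simplified illustration: the 4-byte port header and the 1000-byte segment size are assumptions, not the real TCP segment format):

    def segment_message(message: bytes, src_port: int, dst_port: int, size: int = 1000):
        # Split the application message into fixed-size pieces and prepend a
        # simplified header carrying the source and destination port numbers.
        segments = []
        for i in range(0, len(message), size):
            header = src_port.to_bytes(2, "big") + dst_port.to_bytes(2, "big")
            segments.append(header + message[i:i + size])
        return segments   # each segment is then handed to the Network layer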
Responsibilities of a Transport Layer:
o Process-to-process delivery of the message
o Segmentation and reassembly
o Multiplexing and demultiplexing using port numbers
o Flow control and error control
o Congestion control
Effects of Congestion
As delay increases, performance decreases.
If delay increases, retransmission occurs, making the situation worse.
Imagine a bucket with a small hole in the bottom. No matter at what rate water enters
the bucket, the outflow is at a constant rate. When the bucket is full, additional water
entering it spills over the sides and is lost.
Similarly, each network interface contains a leaky bucket and the following steps are
involved in leaky bucket algorithm:
1. When a host wants to send a packet, the packet is thrown into the bucket.
2. The bucket leaks at a constant rate, meaning the network interface transmits
packets at a constant rate.
3. Bursty traffic is converted to a uniform traffic by the leaky bucket.
4. In practice the bucket is a finite queue that outputs at a finite rate.
The leaky bucket algorithm enforces an output pattern at the average rate, no matter
how bursty the traffic is. So, in order to deal with bursty traffic, we need a more
flexible algorithm so that data is not lost. One such algorithm is the token bucket
algorithm.
In figure (A) we see a bucket holding three tokens, with five packets waiting to be
transmitted. For a packet to be transmitted, it must capture and destroy one token. In
figure (B) We see that three of the five packets have gotten through, but the other two
are stuck waiting for more tokens to be generated.
Ways in which the token bucket is superior to the leaky bucket: The leaky bucket
algorithm controls the rate at which packets are introduced into the network, but it is
very conservative in nature. Some flexibility is introduced in the token bucket
algorithm. In the token bucket algorithm, tokens are generated at each tick (up to a
certain limit). For an incoming packet to be transmitted, it must capture a token, and
the transmission takes place at the same rate. Hence some of the bursty packets are
transmitted at the same rate if tokens are available, which introduces some amount
of flexibility into the system.
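The behaviour described above can be simulated with a short Python sketch (the tick length, token rate, bucket capacity, and packet sizes are assumptions chosen purely for illustration): tokens accumulate at each tick up to a limit, and a packet is transmitted only when enough tokens are available, so idle periods build up credit that later absorbs a burst.

    def token_bucket(arrivals, rate, capacity):
        # arrivals: packet size arriving at each tick (0 means no arrival)
        # rate: tokens generated per tick; capacity: maximum tokens the bucket can hold
        tokens, sent = 0, []
        for size in arrivals:
            tokens = min(capacity, tokens + rate)   # generate tokens at each tick
            if size and size <= tokens:             # enough tokens: packet is transmitted
                tokens -= size
                sent.append(size)
            else:                                   # otherwise the packet must wait
                sent.append(0)
        return sent

    # Three idle ticks build up 300 tokens, which lets part of the later burst through.
    print(token_bucket([0, 0, 0, 300, 300, 300], rate=100, capacity=1000))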
In the network layer, before the network can make quality-of-service guarantees, it
must know what traffic is being guaranteed. One of the main causes of congestion is
that traffic is often bursty.
To understand this concept, we first have to know a little about traffic shaping. Traffic
shaping is a mechanism to control the amount and the rate of the traffic sent to the
network; this approach to congestion management helps to regulate the rate of data
transmission and reduces congestion.
There are 2 types of traffic shaping algorithms:
1. Leaky Bucket
2. Token Bucket
Suppose we have a bucket into which we are pouring water at random points in time,
but we want to get water out at a fixed rate. To achieve this, we make a hole at the
bottom of the bucket. This ensures that the water coming out flows at a fixed rate,
and if the bucket gets full, we stop pouring water into it.
The input rate can vary, but the output rate remains constant. Similarly, in networking,
a technique called leaky bucket can smooth out bursty traffic. Bursty chunks are
stored in the bucket and sent out at an average rate.
In the above figure, we assume that the network has committed a bandwidth of 3
Mbps for a host. The use of the leaky bucket shapes the input traffic to make it
conform to this commitment. In the above figure, the host sends a burst of data at a
rate of 12 Mbps for 2s, for a total of 24 Mbits of data. The host is silent for 5 s and
then sends data at a rate of 2 Mbps for 3 s, for a total of 6 Mbits of data. In all, the host
has sent 30 Mbits of data in 10 s. The leaky bucket smooths out the traffic by sending
out data at a rate of 3 Mbps during the same 10 s.
Without the leaky bucket, the beginning burst may have hurt the network by
consuming more bandwidth than is set aside for this host. We can also see that the
leaky bucket may prevent congestion.
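The arithmetic of this example can be restated in a couple of lines (purely a check of the numbers quoted above):

    burst = 12 * 2        # 12 Mbps for 2 s  -> 24 Mbits
    tail = 2 * 3          # 2 Mbps for 3 s   -> 6 Mbits
    total = burst + tail  # 30 Mbits sent in 10 s
    print(total / 10)     # leaky bucket output: 3.0 Mbps over the same 10 s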
A simple leaky bucket algorithm can be implemented using FIFO queue. A FIFO
queue holds the packets. If the traffic consists of fixed-size packets (e.g., cells in ATM
networks), the process removes a fixed number of packets from the queue at each tick
of the clock. If the traffic consists of variable-length packets, the fixed output rate
must be based on the number of bytes or bits.
The following is an algorithm for variable-length packets:
1. Initialize a counter to n at the tick of the clock.
2. Repeat until n is smaller than the packet size of the packet at the head of the queue:
   a. Pop a packet out of the head of the queue, say P.
   b. Send the packet P into the network.
   c. Decrement the counter by the size of packet P.
3. Reset the counter and go to step 1.
Note: In the below examples, the head of the queue is the rightmost position and the
tail of the queue is the leftmost position.
Example: Let n=1000
Packets in the queue: ..., 450, 400, 200 (the head of the queue, holding the 200-byte
packet, is at the rightmost position)
Since n > size of the packet at the head of the Queue, i.e. n > 200
Therefore, n = 1000-200 = 800
Packet size of 200 is sent into the network.
Now, again n > size of the packet at the head of the Queue, i.e. n > 400
Therefore, n = 800-400 = 400
Packet size of 400 is sent into the network.
Since, n < size of the packet at the head of the Queue, i.e. n < 450
Therefore, the procedure is stopped.
Initialise n = 1000 on another tick of the clock.
This procedure is repeated until all the packets are sent into the network.
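A direct translation of this algorithm into Python might look as follows (a sketch only; the queue contents 200, 400, 450 and the per-tick budget n = 1000 are taken from the example above, with the head of the queue as the first element of the deque):

    from collections import deque

    def leaky_bucket_tick(queue: deque, n: int = 1000):
        # One clock tick: send packets from the head while the budget allows.
        counter = n
        while queue and queue[0] <= counter:   # stop when the head packet no longer fits
            size = queue.popleft()             # pop packet P from the head of the queue
            counter -= size                    # decrement the counter by the size of P
            print(f"Packet of size {size} sent, remaining budget {counter}")

    queue = deque([200, 400, 450])             # head first, as in the example
    leaky_bucket_tick(queue)                   # sends 200 and 400, then stops at 450
    # On the next tick the counter is reset to 1000 and the procedure repeats.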
Difference between Leaky and Token buckets –
Leaky bucket:
o When the host has to send a packet, the packet is thrown into the bucket.
o Bursty traffic is converted into uniform traffic by the leaky bucket.
o In practice, the bucket is a finite queue that outputs at a finite rate.
Token bucket:
o The bucket holds tokens generated at regular intervals of time.
o If there is a ready packet, a token is removed from the bucket and the packet is sent.
o If there is no token in the bucket, the packet cannot be sent.
Step 1 (SYN): In the first step, the client wants to establish a connection
with a server, so it sends a segment with SYN (Synchronize Sequence
Number), which informs the server that the client is likely to start
communication and with what sequence number it will start its segments.
Step 2 (SYN + ACK): The server responds to the client request with the SYN
and ACK signal bits set. Acknowledgement (ACK) signifies the response to the
segment it received, and SYN signifies with what sequence number it is
likely to start its segments.
Step 3 (ACK): In the final part, the client acknowledges the response of the
server, and they both establish a reliable connection with which they will
start the actual data transfer.
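A compact way to visualize the three segments is the sketch below (the initial sequence numbers x and y are arbitrary placeholders; the SYN flag consumes one sequence number on each side):

    x, y = 100, 300                                               # placeholder ISNs
    syn     = {"flags": "SYN",     "seq": x}                      # step 1: client -> server
    syn_ack = {"flags": "SYN+ACK", "seq": y, "ack": x + 1}        # step 2: server -> client
    ack     = {"flags": "ACK",     "seq": x + 1, "ack": y + 1}    # step 3: client -> server
    for segment in (syn, syn_ack, ack):
        print(segment)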
2. TCP is a full-duplex protocol so both sender and receiver require a window for
receiving messages from one another.
Sequence number (Seq=2000): contains the random initial sequence number
generated at the receiver side.
Syn flag (Syn=1): requests the sender to synchronize its sequence number with the
above-provided sequence number.
Maximum segment size (MSS=500 B): the receiver tells its maximum segment size,
so that the sender sends datagrams which won't require any fragmentation. The MSS
field is carried inside the Options field of the TCP header.
Since the MSS of the receiver < the MSS of the sender, both parties agree on the
minimum MSS, i.e., 500 B, to avoid fragmentation of packets at both ends.
Therefore, the receiver can send a maximum of 14600/500 = 29 packets.
This is the receiver's sending window size.
Window size (window=10000 B): the receiver tells about its buffer capacity, in which
it has to store messages from the sender.
Therefore, the sender can send a maximum of 10000/500 = 20 packets.
This is the sender's sending window size.
Acknowledgement Number (Ack no.=522): Since sequence number 521 has been
received by the receiver, it requests the next sequence number with
Ack no.=522, which is the next byte expected by the receiver, since the SYN flag
consumes one sequence number.
ACK flag (ACK=1): tells that the acknowledgement number field contains the
next sequence number expected by the receiver.
3. The sender makes the final reply for connection establishment in the following way:
Sequence number (Seq=522): since the sequence number was 521 in the 1st step and
the SYN flag consumes one sequence number, the next sequence number will be 522.
Acknowledgement Number (Ack no.=2001): since the sender is acknowledging the
SYN=1 packet from the receiver with sequence number 2000, the next sequence
number expected is 2001.
ACK flag (ACK=1): tells that the acknowledgement number field contains the
next sequence number expected by the sender.
Since the connection establishment phase of TCP makes use of 3 packets, it is also
known as 3-way Handshaking (SYN, SYN + ACK, ACK).
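The window arithmetic used in this example can be reproduced with a short sketch (the 14600 B figure is taken from the 14600/500 computation above and presumably corresponds to the window advertised in the first SYN segment, which is not shown in these notes):

    mss = 500                                 # agreed MSS: minimum of the two sides' values
    receiver_window_packets = 14600 // mss    # window advertised by the sender -> 29 packets
    sender_window_packets = 10000 // mss      # window advertised by the receiver -> 20 packets
    print(receiver_window_packets, sender_window_packets)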
TCP Connection Termination
In the TCP 3-way handshake process, we studied how connections are established
between client and server in the Transmission Control Protocol (TCP) using SYN
segments. Here, we will study how TCP closes the connection between client and
server, which requires sending segments in which the FIN bit is set to 1.
TCP supports two types of connection releases, like most connection-oriented transport
protocols:
1. Graceful connection release, in which the connection is closed using segments with
the FIN flag set.
2. Abrupt connection release, in which the connection is closed by sending an RST
segment.
When an implementation needs to abruptly close an existing TCP connection, it sends
an RST segment. It will close an existing TCP connection for reasons such as:
Lack of resources to support the connection
When a TCP entity sends an RST segment, its sequence number field should contain 0
if the segment does not belong to any existing connection; otherwise, it should contain
the current value of the sequence number for the connection, and the acknowledgment
number should be set to the next expected in-sequence number on this connection.
Graceful Connection Release :
The common way of terminating a TCP connection is by using the TCP header’s FIN
flag. This mechanism allows each host to release its own side of the connection
individually.
How the mechanism works in TCP: each side terminates its own half of the connection
by sending a segment with the FIN flag set, which the other side acknowledges. When
both sides have sent and acknowledged a FIN segment, the connection is fully closed.
UDP Header Format
The UDP header consists of the following fields:
1. Source Port: Source Port is a 2 Byte long field used to identify the port number of
the source.
2. Destination Port: It is a 2 Byte long field, used to identify the port of the destined
packet.
3. Length: Length is the length of UDP including the header and the data. It is a
16-bit field.
4. Checksum: Checksum is 2 Bytes long field. It is the 16-bit one’s complement of
the one’s complement sum of the UDP header, the pseudo-header of information
from the IP header, and the data, padded with zero octets at the end (if necessary)
to make a multiple of two octets.
Notes – Unlike TCP, the checksum calculation is not mandatory in UDP. No error
control or flow control is provided by UDP; hence, UDP depends on IP and ICMP for
error reporting. UDP does, however, provide port numbers so that it can differentiate
between users' requests.
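The one's complement checksum described above can be computed with the following sketch (a generic 16-bit one's complement sum over the pseudo-header, UDP header, and data; packing the pseudo-header from 4-byte IP addresses is simplified here for illustration):

    import struct

    def ones_complement_sum(data: bytes) -> int:
        # 16-bit one's complement of the one's complement sum of 16-bit words.
        if len(data) % 2:
            data += b"\x00"                            # pad with a zero octet if necessary
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
        return ~total & 0xFFFF

    def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
        # Pseudo-header: source IP, destination IP, zero byte, protocol (17), UDP length.
        pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
        return ones_complement_sum(pseudo + udp_segment)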
Applications of UDP:
Used for simple request-response communication when the size of data is less and
hence there is lesser concern about flow and error control.
It is a suitable protocol for multicasting as UDP supports packet switching.
UDP is used for some routing update protocols like RIP(Routing Information
Protocol).
Normally used for real-time applications which can not tolerate uneven delays
between sections of a received message.
The following implementations use UDP as a transport layer protocol:
NTP (Network Time Protocol)
DNS (Domain Name Service)
BOOTP, DHCP.
NNP (Network News Protocol)
Quote of the day protocol
TFTP, RTSP, RIP.
The application layer can do some of the tasks through UDP:
Trace Route
Record Route
Timestamp
UDP takes a datagram from the Network Layer, attaches its header, and sends it to the
user. So, it works fast.
In fact, UDP is practically a null protocol if you remove the checksum field.
UDP is typically preferred in the following situations:
1. To reduce the requirement of computer resources.
2. When using multicast or broadcast to transfer data.
3. For the transmission of real-time packets, mainly in multimedia applications.