
CHAPTER 13

THE TRANSPORT LAYER PROTOCOLS


13.1 About This Chapter
The Internet Protocol family includes the User Datagram Protocol (UDP) and the
Transmission Control Protocol (TCP). Transport protocols play an important role in the IP
suite, where they provide data delivery services for most application protocols and for
a large number of control protocols. This chapter introduces some of the concepts of
transport before describing each of the two transport protocols.
A transport layer allows communication between network stations. Data is
handed down to this layer from an upper-level application. The transport layer then
envelops the data with its headers and gives it to the IP layer for transmission onto
the network. There are two transport-layer protocols in TCP/IP: UDP and TCP.
The Internet protocol (IP) provides for end-to-end carriage of data through the
routers and across the hops of an internetwork, but it is the role of a transport protocol
to control and manage the end-to-end communication (between the hosts). The transport
protocol provides a transport service to the software application running in the host—
ensuring that data is delivered to the right application in the destination device, that
individual packets are in the right order and that none have gone missing en route.
This chapter will begin by studying the services that a transport layer protocol can
provide to network applications, including multiplexing/demultiplexing function for
communicating processes. We will see that a transport layer protocol can provide reliable
data transfer even if the underlying network layer is unreliable.
We will then go on to describe in detail UDP, which provides a connectionless
service, and TCP, which provides a connection-oriented service.
Flow control and congestion control algorithms will be examined in this chapter.
Without flow and congestion control, a network can easily become gridlocked, with little
or no data being transported end-to-end.
Reliable data transfer will be covered in this chapter while we take a close look
at TCP. We will learn that TCP is complex, involving connection management, flow
control, roundtrip time estimation, as well as reliable data transfer.
Mobile systems require modified TCP and UDP protocols. We include a
short description of these protocols and of how they are modified to accommodate the
requirements of mobility.
If a client on one host wants to reliably send data to a server on another host, it
simply opens a TCP socket to the server and then pumps data into that socket. The client-
server application is oblivious to all of TCP's complexity. In the final section of this
chapter we will briefly cover the principles of communication between processes using
client and server sockets.

13.2 Learning Outcome


After this chapter, you should be able to:
1. Be familiar with the services provided by the transport layer protocols.
2. Understand the TCP structure, services and functions.
3. Understand the UDP structure, services and functions.
4. Explain the flow control mechanism and be familiar with its algorithms.
5. Explain the principles of congestion control and be familiar with its algorithms.
6. Make the right decision about where to use TCP and where to use UDP.
7. Understand the transport protocol for mobility.
8. Be familiar with the socket concepts.

13.3 The Transport Layer Services


While the function of the network protocol is to provide for the actual carriage
of the data across the network, the function of a transport protocol is to manage the
end-to-end communication between applications in the two end devices (hosts) which
are interconnected by means of a data network. The transport protocol ensures that all
the individual packets of information making up the application's message to its peer arrive
at the destination and are presented in the right order.
• A transport layer protocol provides for logical communication between
application processes running on different hosts. Application processes use the
logical communication provided by the transport layer to send messages to each
other, free from worrying about the details of the physical infrastructure used to
carry these messages.
As the router is a network layer device, transport layer protocols are implemented
in the end systems but not in network routers.
Data is handed down to this layer from an upper-level application. The transport
layer then envelops the data with its headers and passes the resulting data unit
down to the network layer. At the receiving side, the transport layer receives these
data units, reassembles them and passes them to a receiving application process.
A computer network can make more than one transport layer protocol
available to network applications. For example, the Internet has two such
protocols, TCP and UDP. Each of these protocols provides a different set of
transport layer services to the invoking application.
As mentioned, the network layer provides only one communication path
between the source network interface and the destination network interface, while
all transport layer protocols provide an application multiplexing/demultiplexing
service. The transport protocol, as Figure 13.1 illustrates, allows multiple
communication sessions to be in progress at the same time, all using different
protocols for email, network management, the Network File System (NFS), the
World Wide Web (WWW) or other applications.

Figure 13.1: Multiplexing function of TCP and UDP transport protocols (the protocol stacks of Host A and Host B: Application, TCP/UDP, IP, Data link, Physical).


Whereas a transport layer protocol provides logical communication between
processes running on different hosts, a network layer protocol provides logical
communication between hosts.

13.4 Application Multiplexing and Demultiplexing


The job of the transport layer's application multiplexing and demultiplexing service
is to provide process-to-process delivery.
At the destination host, the transport layer receives segments from the network layer
just below. The transport layer has the responsibility of delivering the data in these
segments to the appropriate application process running in the host. Suppose you are
downloading Web pages while running one FTP session and two Telnet sessions; you
therefore have four network application processes running. The question is: how will the
transport layer in your computer direct the received data to the correct one of these four
processes?
Each transport-layer segment has a field that contains information that is used to
determine the process to which the segment's data is to be delivered.
At the sending end, the transport layer protocol gathers data at the source host from
different application processes, enveloping the data with header information including the
value of the mentioned field. This value indicates a certain application. The process of
creating segments and passing them to the network layer is called multiplexing.
At the receiving end, the transport layer can then examine this field to determine the
receiving process, and then direct the segment to that process. This job of delivering
the data in a transport-layer segment to the correct application process is called
demultiplexing.
UDP and TCP perform the demultiplexing and multiplexing jobs by including two
special fields in the segment headers: the source port number field and the destination port
number field. When taken together, the fields uniquely identify an application process
running on the destination host.

Figure 13.2: Multiplexing of two client applications using the same port numbers (client hosts A and B, server host C).
In order to demultiplex segments when two sessions have exactly the same port
number pair, the server also uses the IP addresses in the IP datagrams carrying these
segments. The situation is illustrated in Figure 13.2, in which host A initiates two FTP
sessions to host C, and host B initiates one FTP session to host C. Hosts A, B and C each
has its own unique IP address. Host A assigns two different source port (SP) numbers
(x and y) to the two FTP connections emanating from host A. But because host B chooses
source port numbers independently of A, it can also assign SP=x to its FTP connection.
Nevertheless, host C is still able to demultiplex the two connections, since they have
different source IP addresses.

When a destination host receives data from the network layer, the triplet (source IP address, source port number, destination port number) is used to direct the data to the appropriate application process.
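As an illustration of port-based demultiplexing (a minimal Python sketch, not part of the original text; the port numbers 9001 and 9002 and the loopback address are arbitrary choices), two UDP sockets bound to different ports on the same host each receive only the datagrams whose destination port field matches their own:

    import socket

    # Two "application processes" on one host, each identified by its own port number.
    proc1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    proc1.bind(("127.0.0.1", 9001))      # process 1 receives datagrams addressed to port 9001

    proc2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    proc2.bind(("127.0.0.1", 9002))      # process 2 receives datagrams addressed to port 9002

    # A datagram whose destination port is 9001 is delivered only to proc1.
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"hello, process 1", ("127.0.0.1", 9001))

    data, (src_ip, src_port) = proc1.recvfrom(1024)
    print(data, src_ip, src_port)        # the source address identifies the sending process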

13.5 The Transmission Control Protocol (TCP)


TCP is a connection-oriented transport protocol that sends data as an unstructured
stream of bytes. By using sequence numbers and acknowledgment messages, TCP can
provide a sending node with delivery information about packets transmitted to a
destination node. Where data has been lost in transit from source to destination, TCP can
retransmit the data until either a timeout condition is reached or until successful delivery
has been achieved. TCP can also recognize duplicate messages and will discard them
appropriately. If the sending computer is transmitting too fast for the receiving computer,
TCP can employ flow control mechanisms to slow data transfer. TCP can also
communicate delivery information to the upper-layer protocols and applications it
supports.

The main services provided by TCP are:


1. Establishing, maintaining, and terminating connections between two processes.
2. Reliable packet delivery, through an acknowledgment process.
3. Sequencing of packets and reliable transfer of data.
4. Congestion control.
5. A mechanism for controlling errors.
6. The ability to allow multiple connections with different processes inside a particular
source or destination host through the use of ports.
7. Data exchange using full-duplex operations.

13.5.1 TCP Segment


As defined earlier, a TCP segment is a TCP session packet containing part of a TCP
byte stream in transit. The fields of the TCP segment header are shown in Figure 13.3. The
TCP segment contains a minimum of 20 bytes of fixed fields and a variable-length options
field. The details of the fields are as follows:
• The source port field: specifies the application software in the transmitting host.
It takes a value in the range 0-65,535.
• The destination port field: specifies the intended destination application
software in the receiving host. It takes a value in the range 0-65,535.
Table 10.3 lists some of the most popular port numbers.

Figure 13.3: TCP segment structure (source port, destination port, sequence number, acknowledgment number, header length, reserved, control bits, window size, checksum, urgent pointer).


• Sequence number: is a 32-bit field that TCP assigns to the first data byte in each
segment. The sequence number commences at the initial sequence number (ISN),
which is chosen randomly by the transmitter, and each subsequent data
segment has a sequence number accordingly greater than the previous segment's.
The increment in the sequence number depends upon the number of octets in the
previous segment. The sequence number thus counts octets, but its value also
uniquely identifies a particular segment. The sequence number restarts from 0
after reaching 2^32 - 1.
Suppose that a process in host A wants to send a stream of data to a process in host B
over a TCP connection. The TCP in host A will implicitly number each byte in the
data stream. Suppose that the data stream consists of a file consisting of 100,000 bytes,
that the MSS is 500 bytes, and that the first byte of the data stream is numbered
100. As shown in Figure 13.4, TCP constructs 200 segments out of the data stream. The
first segment gets assigned sequence number 100, the second segment gets
assigned sequence number 600, the third segment gets assigned sequence number
1100, and so on. Each sequence number is inserted in the sequence number field in
the header of the appropriate TCP segment.

Figure 13.4: TCP constructs segments out of the data stream (bytes 100-599, 600-1099, 1100-1599, ..., 99600-100099).
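A small sketch of the numbering used in this example (the file size, MSS and first byte number are those given above):

    FILE_SIZE = 100_000    # bytes in the data stream
    MSS = 500              # maximum segment size, in bytes
    FIRST_BYTE = 100       # number assigned to the first byte of the stream

    # The sequence number of each segment is the number of its first data byte.
    seq_numbers = [FIRST_BYTE + i * MSS for i in range(FILE_SIZE // MSS)]

    print(len(seq_numbers))    # 200 segments
    print(seq_numbers[:3])     # [100, 600, 1100]
    print(seq_numbers[-1])     # 99600; that segment carries bytes 99600 to 100099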


• Acknowledgment number: specifies the sequence number of the next byte that the
receiver expects, acknowledging receipt of all bytes up to this sequence number.
If the SYN field is set, the acknowledgment number refers to the initial sequence
number (ISN).
• Header length (HL): is a 4-bit field indicating the length of the header in 32-bit
words.
• Control bits: 6 flag bits that identify the functions of the message.
  o Urgent (URG): a 1-bit field implying that the urgent-pointer field is
    applicable.
  o Acknowledgment (ACK): shows the validity of an acknowledgment.
  o Push (PSH): if set, directs the receiver to immediately forward the data to the
    destination application.
  o Reset (RST): if set, directs the receiver to abort the connection.
  o Synchronize (SYN): a 1-bit field used as a connection request to
    synchronize the sequence numbers.
  o Finished (FIN): a 1-bit field indicating that the sender has finished sending
    the data.
• Window size: specifies the advertised window size.
• Checksum: is used to check the validity of the received packet.
• Urgent pointer: used when the URG flag is set; the receiver adds the value of the
  urgent-pointer field to the sequence number field to determine the last byte number
  of the data to be delivered urgently to the destination application.

• Options: is a variable-length field that specifies functions that are not
available as part of the basic header. A receiver can use this option to specify the
maximum segment size it can receive. It can also use this field to scale the
advertised window beyond the 2^16 - 1 limit imposed by the 16-bit window field;
the advertised window can be scaled by a factor of up to 2^14.
• Data (variable): this field may contain one segment of an information sequence
generated by an application layer protocol.

Both the sequence number and the acknowledgement number can be incremented only up
to the value 2^32 - 1, after which they wrap around to 0.
The reuse of sequence numbers can cause a problem if there is any chance that two
different segments carrying the same sequence number could exist in the network at the
same time.
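As a sketch of how the fixed 20-byte header described above is laid out, the following Python fragment (a hypothetical helper, not from the text) unpacks its fields with the struct module:

    import struct

    def parse_tcp_header(segment: bytes) -> dict:
        """Unpack the fixed 20-byte part of a TCP header."""
        (src_port, dst_port, seq, ack,
         off_and_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
        return {
            "source_port": src_port,
            "destination_port": dst_port,
            "sequence_number": seq,
            "acknowledgment_number": ack,
            "header_length": (off_and_flags >> 12) * 4,   # 32-bit words converted to bytes
            "URG": bool(off_and_flags & 0x20),
            "ACK": bool(off_and_flags & 0x10),
            "PSH": bool(off_and_flags & 0x08),
            "RST": bool(off_and_flags & 0x04),
            "SYN": bool(off_and_flags & 0x02),
            "FIN": bool(off_and_flags & 0x01),
            "window_size": window,
            "checksum": checksum,
            "urgent_pointer": urgent,
        }

    # A hand-built SYN segment: ports 1234 -> 80, sequence number 100, header length 5 words.
    sample = struct.pack("!HHIIHHHH", 1234, 80, 100, 0, (5 << 12) | 0x02, 65535, 0, 0)
    print(parse_tcp_header(sample))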

13.5.2 TCP Connection Setup


As a connection-oriented protocol, TCP requires an explicit connection set-up
phase. Connection is set up using a three-step mechanism, as shown in Figure 13.5.
Assume that host A is a sender and host B a destination.
Step 1: The sender sends a TCP connection request to the destination. This special segment
contains no application-layer data. It carries the initial sequence number,
indicated by seq(Ai), with the SYN bit set to 1.
Step 2: The destination receives the connection request, extracts the TCP segment from the
datagram, allocates the TCP buffers and variables to the connection, and sends a
connection-granted segment to the sender. This connection-granted segment also
contains no application-layer data. The destination sends an acknowledgment,
ack(Ai + 1), back to the source, indicating that the destination is waiting for
the next byte. The destination also sends a request comprising its own sequence
number, seq(Bj), with the SYN bit set to 1.
Step 3: Upon receiving the connection-granted segment, the sender also allocates buffers
and variables to the connection and returns an acknowledgment segment, ack(Bj +
1), specifying that it is waiting for the next byte. The sequence number of this
segment is seq(Ai + 1). This process establishes a connection between the sender
and the receiver. The SYN bit is set to 0, since the connection is established.
Once these three steps have been completed, hosts A and B can send segments
containing data to each other. In each of these future segments, the SYN bit will be set to
zero. The TCP connection establishment procedure is often referred to as a three-way
handshake because three packets are sent between the two hosts in order to establish the
connection.
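The exchange can be summarized schematically (a sketch; Ai and Bj stand for the two randomly chosen initial sequence numbers):

    import random

    # Schematic three-way handshake; no real networking is performed.
    A_isn = random.randrange(2**32)      # host A's initial sequence number, seq(Ai)
    B_isn = random.randrange(2**32)      # host B's initial sequence number, seq(Bj)

    step1 = {"SYN": 1, "seq": A_isn}                                   # A -> B: connection request
    step2 = {"SYN": 1, "seq": B_isn, "ACK": 1, "ack": A_isn + 1}       # B -> A: connection granted
    step3 = {"SYN": 0, "seq": A_isn + 1, "ACK": 1, "ack": B_isn + 1}   # A -> B: acknowledgment

    for packet in (step1, step2, step3):
        print(packet)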

When one of the hosts wants to abort the connection, it sends a segment with the RST
bit set to 1. When the application has no more data to transmit, the sender closes its side
of the connection by sending a segment with the FIN bit set to 1. The receiver
acknowledges receipt of this segment by responding with an ACK and notifies its
application that the connection is being terminated. Now the flow from the sender to the
receiver is terminated. However, the flow from the receiver to the sender is still open. The
receiver then sends a segment with the FIN bit set to 1. Once the sender acknowledges this
by responding with an ACK, the connection is terminated at both ends.

Figure 13.5: TCP connection establishment.

13.5.3 TCP Flow Control


Each peer machine in a TCP session has the capability to control the flow of
data that is streaming into its physical input (receive) buffers. The sending host stops
sending data when it suspects that the receiver might not have received one
of the segments, but it need not do this. It could continue sending data, but that might lead
to the receiving buffer becoming swamped with data that it cannot deliver to its application
and must store. The receiver could choose to throw away excess data and force the
sender to retransmit, but ideally it needs a way to apply back-pressure that slows the
sender down.
When the TCP connection receives bytes that are correct and in sequence, it places
the data in the receive buffer. When the application is relatively slow at reading data from
the receive buffer, the sender can overflow the connection's receive buffer by sending data
faster than the receiving application can read it. In such a case, TCP provides a flow
control service (a speed-matching service). You are familiar with the stop-and-wait and
sliding window algorithms used to provide flow control for frames at the data link layer.
The same algorithms can be used here to handle flow control of segments at the transport
layer.

13.5.3.1 Sliding Window TCP Flow Control


TCP provides flow control by having the sender maintain a variable called the
receive window (to give the sender an idea about how much free buffer space is available
at the receiver). The receiver uses the receive window field to control the amount of data
beyond the last acknowledged byte that the sender may transmit. The receive window
effectively grants the sender permission to send a certain number of additional bytes.
Figure 13.6 illustrates the use of the window size.
Figure 13.6: Illustration of the flow control algorithm (sending host A and receiving host B).


The sending and receiving applications are shown with a send and a receive buffer,
respectively. The sending application places data in the send buffer for the sending TCP
entity to transmit to the receiving TCP entity, which places the data in the receive buffer
from where the receiving application can retrieve it. The receive window is dynamic,
i.e., it changes throughout a connection's lifetime.
The window size is communicated between the source and destination machines via the
TCP header. Host B informs host A of how much spare room it has in the connection
buffer by placing its current value of RcvWindow in the window field of every segment it
sends to A. When host B becomes backlogged with incoming data and has no more
spare room, it may throttle back the rate at which the transmitting machine can transmit,
simply by informing that machine of its new window size. If a machine's buffers fill
up completely, it will send an acknowledgment of the last received data segment with a
new window size of 0. This effectively halts transmission until host B can clear its
buffers. Each segment that it processes must be acknowledged and, with this
acknowledgment, comes another opportunity to restart transmission by re-establishing
a window size greater than 0.

Now suppose the application process at B empties the buffer. Since B has nothing to send
back, TCP at B does not send new segments with updated RcvWindow values to host A.
Therefore host A is never informed that some space has opened up in host B's receive
buffer: host A is blocked and can transmit no more data!
To solve this problem, the TCP specification requires host A to continue to send
segments with one data byte when B's receive window is zero. These segments will be
acknowledged by the receiver. Eventually the buffer will begin to empty and the
acknowledgements will contain a non-zero RcvWindow.
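The sender-side bookkeeping can be sketched as follows (the variable names are hypothetical; the quantities are those described above):

    def usable_window(rcv_window: int, last_byte_sent: int, last_byte_acked: int) -> int:
        """Bytes the sender may still transmit: the advertised receive window
        minus the amount of unacknowledged data already in flight."""
        in_flight = last_byte_sent - last_byte_acked
        return max(0, rcv_window - in_flight)

    # The receiver advertised 8000 bytes of free buffer; 5000 bytes are still unacknowledged.
    print(usable_window(rcv_window=8000, last_byte_sent=15000, last_byte_acked=10000))   # 3000

    # A zero advertised window halts normal transmission (only 1-byte probe segments are sent).
    print(usable_window(rcv_window=0, last_byte_sent=15000, last_byte_acked=15000))      # 0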
13.5.3.2 Round Trip Time and Timeout
TCP uses timing mechanisms for several critical functions. Each time a segment is
transmitted, a timer is set. If that timer expires (that is, decrements to 0) before an
acknowledgment is received, the segment is assumed to be lost and is consequently
retransmitted. In theory, transmission of segments is throttled back until timeouts cease
occurring. The timeout (the time from when the timer is started until it expires) should be
larger than the connection's round-trip time. But it should not be much larger than the
round-trip time either; otherwise, when a segment is lost, TCP would be slow to retransmit
it, thereby introducing significant data transfer delays into the application.
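The chapter does not spell out how the round-trip time is estimated. A common approach in TCP implementations, shown here as a sketch with the usual smoothing weights, keeps an exponentially weighted moving average of the RTT samples and sets the timeout a little above it:

    ALPHA = 0.125    # weight of a new sample in the smoothed RTT estimate
    BETA = 0.25      # weight of a new sample in the deviation estimate

    def update_rtt(estimated_rtt: float, dev_rtt: float, sample_rtt: float):
        """One update step of the smoothed round-trip time, its deviation and the timeout."""
        estimated_rtt = (1 - ALPHA) * estimated_rtt + ALPHA * sample_rtt
        dev_rtt = (1 - BETA) * dev_rtt + BETA * abs(sample_rtt - estimated_rtt)
        timeout = estimated_rtt + 4 * dev_rtt    # larger than the RTT, but not by much
        return estimated_rtt, dev_rtt, timeout

    est, dev = 0.100, 0.010                      # starting estimates, in seconds
    for sample in (0.110, 0.095, 0.150):
        est, dev, timeout = update_rtt(est, dev, sample)
        print(round(est, 4), round(dev, 4), round(timeout, 4))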

13.5.4 TCP Congestion Control


Congestion is the situation when too many packets are present in a part of the
subnet, causing the performance to degrade. As long as packets are sent at a rate which
does not exceed the path capacity, they are all delivered, except for a few that are afflicted
with transmission errors. However, as traffic increases too far, the routers are no longer
able to cope and they begin losing packets.
Congestion can be brought on by several factors. If, all of a sudden, streams of
packets begin arriving on three or four input lines and all need the same output line, a
queue will build up. If there is insufficient memory to hold all of them, packets will be
lost. Long waits in memory make the problem worse, because the sender will mark these
packets as timed out and will send duplicates of them, increasing the load all the way to
the destination.
Another cause of congestion is the inability of a node to process incoming
packets as fast as needed. Similarly, low-bandwidth lines can also cause congestion.

Congestion control has to do with making sure the subnet is able to carry the offered
traffic. It is a global issue, involving the behavior of all the hosts and routers.
Flow control, in contrast, relates to the point-to-point traffic between a given sender
and a given receiver.

TCP attempts to achieve this goal by dynamically manipulating the window size.
When a connection is established, a suitable window size has to be chosen, based
on the buffer size of the receiver. If the sender exceeds this window size, congestion will
occur due to buffer overflow at the receiving end (Figure 13.7 (a)). But even if the sender
sticks to this window size, congestion may still occur due to internal congestion within the
network (Figure 13.7 (b)).

Figure 13.7: Illustration of congestion. (a) A fast network feeding a low-capacity receiver.
(b) A slow network feeding a high-capacity receiver.
The Internet solution is to realize that two potential problems exist:
• network capacity
• receiver capacity
Each sender maintains two windows, each of which reflects the number of bytes the
sender may transmit:
1. the window the receiver has granted
2. the congestion window
The number of bytes that may be sent is the minimum of the two windows.
Thus, the effective window is the minimum of what the sender thinks is all right and
what the receiver thinks is all right.
13.5.4.1 Slow Start Congestion Control Algorithm
When a connection is established:
• the sender initializes the congestion window to the size of the maximum segment
in use on the connection;
• it sends one maximum-size segment;
• if this segment is acknowledged before the timer goes off, it doubles the
congestion window to two maximum-size segments and sends two segments;
• as each of these segments is acknowledged, the congestion window is increased by
one maximum segment size; in effect, each acknowledged burst doubles the
congestion window;
• the congestion window keeps growing exponentially until either a timeout occurs
or the receiver's window is reached.
The idea is that if bursts of size, say, n, 2n, and 4n bytes work fine but a burst of 8n
bytes gives a timeout, the congestion window should be set to 4n to avoid congestion. As
long as the congestion window remains at 4n, no bursts longer than that will be sent,
no matter how much window space the receiver grants.
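A sketch of this growth rule (the maximum segment size of 1,000 bytes and the receiver's window of 64,000 bytes are assumed values for illustration):

    MSS = 1000                 # maximum segment size, in bytes (assumed)
    receiver_window = 64_000   # window granted by the receiver, in bytes (assumed)

    cwnd = MSS                 # start with one maximum-size segment
    history = [cwnd]
    while cwnd < receiver_window:
        cwnd = min(cwnd * 2, receiver_window)   # each acknowledged burst doubles the window
        history.append(cwnd)

    # Growth would also stop earlier if a timeout occurred.
    print(history)             # [1000, 2000, 4000, ..., 64000]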
13.5.4.2 Internet Congestion Control Algorithm
This algorithm differs slightly from the previous one in the way it accommodates the
data flow. It uses a third parameter, the threshold, initially 64 KB, in
addition to the receiver and congestion windows, and works as follows (see Figure 13.8):

Figure 13.8: Illustration of the Internet congestion control algorithm (congestion window versus transmission number; the threshold is halved after a timeout).


• When a timeout occurs, the threshold is set to half of the current congestion
window, and the congestion window is reset to one maximum segment.
• Slow start is then used to determine what the network can handle, except that
exponential growth stops when the threshold is hit.
• From that point on, successful transmissions grow the congestion window linearly
(by one maximum segment for each burst) instead of one per segment.

In effect, this algorithm is guessing that it is probably acceptable to cut the congestion
window in half, and then it gradually works its way up from there.
If no more timeouts occur, the congestion window will continue to grow up to the
size of the receiver's window. At that point, it will stop growing and remain constant as
long as there are no more timeouts and the receiver's window does not change size.
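The rules above can be put together in a short sketch (the maximum segment size and the receiver's window are assumed values; the threshold starts at 64 KB as in the text):

    MSS = 1000                  # maximum segment size, in bytes (assumed)
    receiver_window = 128_000   # window granted by the receiver, in bytes (assumed)

    cwnd = MSS                  # congestion window
    threshold = 64_000          # initial threshold

    def on_burst_acknowledged():
        """Grow the window: exponentially below the threshold, then linearly
        (one maximum segment per burst), never past the receiver's window."""
        global cwnd
        if cwnd < threshold:
            cwnd = min(cwnd * 2, threshold)
        else:
            cwnd = min(cwnd + MSS, receiver_window)

    def on_timeout():
        """A timeout halves the threshold and restarts slow start."""
        global threshold, cwnd
        threshold = cwnd // 2
        cwnd = MSS

    for _ in range(8):
        on_burst_acknowledged()
    print(cwnd, threshold)      # 66000 64000: past the threshold, now growing linearly

    on_timeout()
    print(cwnd, threshold)      # 1000 33000: back to one segment, threshold halved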

Other transport protocols are sometimes used in a misguided attempt to handle some of the security issues associated with TCP.

13.6 User Datagram Protocol


The user datagram protocol (UDP) is another transport-layer protocol that is placed
on top of the network layer. UDP is a connectionless protocol, as no handshaking between
sending and receiving points occurs before sending a segment. UDP does not provide a
reliable service. The UDP/IP protocol combination is simply a means of delivering a
datagram from one program to another with a simple one-to-one correspondence between
IP datagrams and UDP datagrams. That is, each UDP datagram is carried by a single
IP datagram. Hence, UDP shares all of the unreliability of the IP protocol. It is up to
higher layers of software to deal with this unreliability as needed. As shown in
Figure 12.6, application data is encapsulated with a UDP header.
The enhancements provided by UDP over IP are its ability to check the integrity of
the flowing packets and its ability to deliver them to a particular application. IP is capable
of delivering a packet to its destination host but stops short of delivering it to an
application. UDP fills this gap by providing a mechanism to differentiate among multiple
applications and deliver a packet to the desired application. UDP can perform error
detection to a certain extent, but not to the level that TCP can.
UDP has the following characteristics:
1. UDP is a connectionless, unreliable transport protocol. Unlike TCP, which is
connection oriented, UDP operates in the datagram mode. UDP makes no attempt to
create a connection. Data is sent by encapsulating it in a UDP header and passing the
data to the IP Layer. The IP Layer sends the UDP packet in a single IP datagram
unless fragmentation is required.
2. UDP does not provide acknowledgment to the sender upon the receipt of data.
3. UDP does not attempt to provide sequencing of data; therefore, it is possible for data
to arrive in a different order from which it was sent. Applications that need sequencing
services must either build their own sequencing mechanism as part of the application
or use TCP instead of UDP. In many LAN environments, the chance of data being
received out of sequence is small because of small predictable delays and simple
network topology.
4. UDP may lose packets or duplicate them without issuing an error message to the sender.
5. UDP tends to run faster than TCP, with less overhead. UDP is useful in applications
that are command/response oriented and in which the commands and responses can be
sent in a single datagram. There is no overhead involved in opening and then closing a
connection just to send a small amount of data.

13.6.1 UDP Segment


The format of the UDP segment is shown in Figure 13.9. The segment starts
with the source port, followed by the destination port. These port numbers are used to
identify the ports of applications at the source or the destination, respectively. The
source port identifies the application that is sending the data. The destination port
helps UDP to demultiplex the packet and directs it to the right application. The UDP
length field indicates the length of the UDP segment, including both the header and
the data. The UDP checksum field carries the checksum computed when transmitting the
packet from the host; if no checksum is computed, this field contains all zeroes. When the
segment is received at the destination, the checksum is recomputed; if there is an error, the
packet is discarded.

Figure 13.9: UDP segment structure (source port, destination port, UDP length, UDP checksum, followed by the data).
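A sketch of the 8-byte header layout (the port numbers are arbitrary; the checksum is left at zero, meaning "not computed"):

    import struct

    def build_udp_segment(src_port: int, dst_port: int, payload: bytes) -> bytes:
        """Build a UDP segment: four 16-bit header fields followed by the data."""
        length = 8 + len(payload)      # header plus data, in bytes
        checksum = 0                   # all zeroes: no checksum computed
        return struct.pack("!HHHH", src_port, dst_port, length, checksum) + payload

    segment = build_udp_segment(5000, 53, b"example query")
    print(len(segment), segment[:8].hex())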

13.7 Choosing Between UDP and TCP


TCP is designed for reliable transmission of data. If data is lost or damaged in
transmission, TCP ensures that the data is resent; if packets of data arrive out of order,
TCP puts them back in the correct order; if the data is coming too fast for the connection,
TCP throttles the speed back so that packets won't be lost. A program never needs to
worry about receiving data that is out of order or incorrect. However, this reliability comes
at a price. That price is speed. Establishing and tearing down TCP connections can take a
fair amount of time.
The User Datagram Protocol (UDP) is an alternative protocol for sending data over
IP that is very quick, but not reliable. That is, when you send UDP data, you have no way
of knowing whether it arrived, much less whether different pieces of data arrived in the
order in which you sent them. However, the pieces that do arrive generally arrive quickly.
Surely, if you have data worth sending, you care about whether the data arrives
correctly? Clearly, UDP isn’t a good match for applications like FTP that require reliable
transmission of data over potentially unreliable networks. However, there are many kinds
of applications in which raw speed is more important than getting every bit right. For
example, in real-time audio or video, lost or swapped packets of data simply appear as
static. Static is tolerable, but awkward pauses in the audio stream, when TCP requests a
retransmission or waits for a wayward packet to arrive, are unacceptable. In other
applications, reliability tests can be implemented in the application layer. For example, if a
client sends a short UDP request to a server, it may assume that the packet is lost if no
response is returned within an established period of time; this is one way the Domain
Name System (DNS) works. (DNS can also operate over TCP.) In fact, you could
implement a reliable file transfer protocol using UDP, and many people have: Network
File System (NFS), Trivial FTP (TFTP), and FSP, a more distant relative of FTP, all
use UDP. (The latest version of NFS can use either UDP or TCP.) In these protocols,
the application is responsible for reliability; UDP doesn't take care of it. That is, the
application must handle missing or out-of-order packets. This is a lot of work, but there's
no reason it can't be done, although if you find yourself writing this code, think carefully
about whether you might be better off with TCP. Table 13.1 lists popular Internet
applications and the transport protocols that they use.

The correct amount of data to put into one packet depends on the situation, including how reliable the network is.

Application Transport Protocol


electronic mail (SMTP) TCP
remote terminal access (Telnet) TCP
Web (HTTP) TCP
file transfer (FTP) TCP
remote file server (NFS) typically UDP
streaming multimedia typically UDP
Internet telephony typically UDP
Network Management (SNMP) typically UDP
Routing Protocol (RIP) typically UDP
Name Translation (DNS) typically UDP
Table 13.1: Some popular Internet applications and their transport protocols.
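As an illustration of the point above that reliability can be handled in the application layer, the sketch below sends a short request over UDP and retransmits it if no reply arrives within a fixed period (the server address, message and timing values are hypothetical):

    import socket

    def udp_request(server, message: bytes, timeout: float = 2.0, retries: int = 3) -> bytes:
        """Send a short UDP request and retry if no reply arrives in time.
        Reliability is handled here, in the application, not by UDP itself."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        try:
            for _ in range(retries):
                sock.sendto(message, server)
                try:
                    reply, _ = sock.recvfrom(2048)
                    return reply
                except socket.timeout:
                    continue              # assume the request or the reply was lost; retry
            raise TimeoutError("no response after %d attempts" % retries)
        finally:
            sock.close()

    # Usage (hypothetical server address):
    # print(udp_request(("192.0.2.10", 9000), b"status?"))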

13.8 Transport Protocols for Mobility


The Internet was born in an era when no mobile networking equipment was
available. Therefore, all the basic protocols were designed under the tacit assumption that
the end points of a communication would stay fixed all along a session. With the arrival of
modern communications equipment that allows these end-points to change their position,
new protocols for handling mobility have been proposed.
In order to retain transport layer connections, a mobile host's address must be
preserved regardless of its point of attachment to the network. The problem with a
transport layer protocol such as TCP is that a TCP connection is identified by source
IP address, source TCP port, destination IP address and destination TCP port. So, if
neither host moves, all of these elements remain fixed and the TCP connection can be
preserved. However, if either end of the connection moves, one of the following problems
will take place:

• If the mobile host acquires a new IP address, then its associated TCP connection
identifier also changes. This causes all TCP connections involving the mobile host
to break.
• If the mobile host retains its address, then the routing system cannot forward
packets to its new location.

In wireless mobile networks, both UDP and TCP have their own applications.
However, some modifications are needed in these protocols to become appropriate for
wireless networks.

13.8.1 TCP for Mobility


Mobile computing systems are characterized by poor link quality, which typically causes
TCP data segments to be lost and leads to possible timeouts. While TCP provides reliable
data delivery owing to its connection-oriented nature, the most challenging aspect of
providing its services to a mobile host is preventing the disruption caused by poor wireless
link quality, where losses on the wireless link are mistaken for losses due to congestion.
One option for solving this problem is to disallow the sender from shrinking its congestion
window when packets are lost for any reason. If a wireless channel soon recovers from a
disconnection, the mobile host then begins to receive data immediately.
The Indirect Transmission Control Protocol (I-TCP) and the Fast Retransmit scheme,
which are the focus of our discussion, are two other approaches used to solve this problem.
13.8.1.1 Indirect TCP
Suppose two hosts, a mobile host A and a fixed host B, are trying to establish an I-TCP
connection, as shown in Figure 13.10. The protocol first splits the connection into two
separate connections: a wireless link is established between the mobile host and the
mobile switching center (MSC), and a fixed link between the MSC and the fixed host.
Figure 13.10: Indirect TCP for mobile hosts (a fixed portion of the TCP connection across the fixed network and a mobile portion across the mobile IP network, joined at the base station/MSC).


This resembles two different TCP connections linked together. Note that if, for any
reason, the mobile host disconnects the communication on the wireless portion, the sender
may not become aware of the disconnection, as the wired portion is still intact and the
base station still accepts the TCP segments destined for the mobile host. Consequently, the
sender of segments may not know whether its segments are actually being delivered to the
mobile host. A TCP connection on the wireless link can separately support disconnections
and user mobility, in addition to wired TCP features such as notification to higher layers
on changes in the available bandwidth. Also, the flow control and congestion control
mechanisms on the wireless link remain separated from those on the wired link. In the
I-TCP scheme, the TCP acknowledgments are separate for the wireless and the wired
links of the connection.
13.8.1.2 Fast Retransmit Mobile TCP
This scheme does not split the TCP connection into wireless and wired connections.
Fast Retransmit improves the connection throughput, especially during a cell handoff.
While two wireless cells' MSCs hand off the switching function for a mobile host, the
mobile host temporarily stops receiving TCP segments. The sender may interpret this as a
sign of congestion, leading it to invoke congestion control measures such as reducing the
window size or retransmitting. It may also result in a long timeout, causing the mobile host
to wait a long period of time. With Fast Retransmit TCP, the mobile host retransmits the
last old acknowledgment in triplicate as soon as it finishes a handoff. This results in a
significant reduction of the congestion window while avoiding the long timeout.

13.8.2 UDP for Mobility


UDP is used in wireless mobile IP networks because a mobile host needs to
register with a foreign agent. This process starts with the foreign agent propagating
advertisements over UDP. Since traditional UDP does not use
acknowledgments and does not perform flow control, it is not a preferred choice of
transport protocol. One way of handling this situation is to stop sending datagrams to a
mobile host once it reports fading, but this method may not be practical when the quality
of the connection is poor.

13.9 Communicating Processes Using Sockets


Any network application involves at least two processes in two different hosts
communicating with each other over the network. These processes use sockets to send and
receive messages while they communicate with each other. A process's socket can be
thought of as a gate through which the process sends and receives messages. The
process assumes that there is a transportation infrastructure on the other side of the
gate that will transport the message to the gate of the destination process. Figure 13.11
illustrates socket communication between two processes that communicate over the
network.

Sockets provide a mechanism for building distributed network applications such as
client/server applications.

Figure 13.11: Socket communication between two processes over the network using TCP (a logical connection between the sockets carried over the physical connection).
The socket serves as an interface between the application layer and the transport
layer within a host. It is a logical endpoint for communication between two hosts on a
TCP/IP network. A socket is an application programming interface (API) for establishing,
maintaining, and tearing down communication between TCP/IP hosts. Sockets were first
developed as a way of providing support for creating virtual connections between different
processes.
The socket is uniquely identified by three attributes:
• The host's IP address.
• The type of service needed and, consequently, the transport layer protocol to use. If
an application needs to guarantee the delivery of data, the socket chooses the
connection-oriented service (TCP). If the application does not need to guarantee
data delivery, the socket chooses the connectionless service (UDP). Once the
application developer chooses a transport protocol, the application is built using
the transport-layer services offered by that protocol.
• The application or service that will use this socket. The application is identified by
the port number used by this application or service running on the host.
So the socket can perform the following basic operations (a server-side sketch follows this list):
• Connect to a remote machine
• Send data
• Receive data
• Close a connection
• Bind to a port
• Listen for incoming data
• Accept connections from remote machines on the bound port
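The sketch below shows these operations on a server socket (a minimal Python example; the port number 9090 is an arbitrary choice):

    import socket

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # a TCP (connection-oriented) socket
    server.bind(("0.0.0.0", 9090))      # bind to a port
    server.listen(5)                    # listen for incoming connection requests

    conn, client_addr = server.accept() # accept a connection from a remote machine
    data = conn.recv(1024)              # receive data
    conn.sendall(data)                  # send data (echo it back)
    conn.close()                        # close the connection
    server.close()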
We can refer to Figure 13.11 to illustrate how the sockets work:
1. The client process passes a stream of data through the socket.
2. TCP directs this data to the connection's send buffer.
3. From time to time, TCP will "grab" chunks of data (of up to the Maximum Segment
Size, MSS) from the send buffer.
4. TCP encapsulates each chunk of client data with a TCP header, thereby forming TCP
segments.
5. The segments are passed down to the network layer, where they are separately
encapsulated within network-layer IP datagrams.
6. The IP datagrams are then sent into the network.
7. At the receiving host, the IP datagrams are received from the network.
8. The datagrams are passed up to the network layer, where they are decapsulated into
separate TCP segments.
9. TCP decapsulates each TCP segment, recovering the chunks of application data.
10. Each chunk is placed in the TCP connection's receive buffer.
11. The chunks in the TCP connection's receive buffer form a stream of data.
12. The application reads the stream of data from this buffer.
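From the client application's point of view, the twelve steps above reduce to a few socket calls (a sketch matching the server above; the address and port are illustrative):

    import socket

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", 9090))   # the three-way handshake happens here

    client.sendall(b"a stream of data")   # step 1: the process passes data through the socket;
                                          # TCP segments, numbers and transmits it (steps 2-6)
    reply = client.recv(1024)             # steps 7-12: received segments are reassembled into
                                          # the receive buffer and read back as a stream
    print(reply)
    client.close()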
The application developer has control of everything on the application-layer side of
the socket but has little control of the transport-layer side of the socket. The only control
that the application developer has on the transport-layer side is
• the choice of transport protocol, and
• the ability to fix a few transport-layer parameters, such as the maximum buffer and
maximum segment sizes.
Sockets are a nearly standard programming interface to IP and IP transport protocols
that allow applications to be written in a portable way and run on different systems, getting
the same level of access to the IP transport. Sockets implementations themselves provide a
level of queuing of messages and buffering of data that is of great help to an
application implementer.

Note that sockets implementations exist to provide access to UDP and TCP. Direct access to IP (without a transport-layer protocol) is also provided on some systems through raw sockets.

Although sockets are a roughly standardized solution, it should be noted that they
are not part of the specification of IP or the IP transport protocols but are only a means of
access to them. Many application implementations choose to use sockets because of their
convenience or because they provide the only access to the IP or IP transport support
in the systems in which they will run.
The sockets API deviates slightly from one implementation to another, with the
result that unless an application is to be run on a single well-known platform, it is usually
constrained to a subset of the API to ensure that it can be ported.

13.10 Quick Review


• The transport protocol manages the end-to-end communication between applications in the two end devices (hosts) interconnected by a data network.

• Each transport-layer segment has fields that help to perform the demultiplexing
and multiplexing jobs.
• TCP is a connection-oriented transport protocol that sends data as an unstructured
stream of bytes using a three-step handshake. It provides reliable packet
delivery, congestion control, error control, and full-duplex data exchange.
• Sliding window flow control is achieved by maintaining a variable called the receive
window, which gives the sender an idea of how much free buffer space is available at
the receiver.
• Congestion is the situation when too many packets are present in a part of the
subnet, causing the performance to degrade.
• The slow start congestion control algorithm and the Internet congestion control
algorithm are used to prevent congestion from occurring.
• UDP is a connectionless, unreliable transport protocol: no handshaking between
sending and receiving points occurs before sending a segment, and it provides neither
a reliable service nor acknowledgments. But it tends to run faster than TCP, with less
overhead. UDP is useful in applications that are command/response oriented and in
which the commands and responses can be sent in a single datagram.
• In order to retain transport layer connections, a mobile host's address must be
preserved regardless of its point of attachment to the network. If either end of the
connection moves, problems will take place.
• Mobile computing systems are characterized by poor link quality, which typically
causes TCP data segments to be lost and leads to possible timeouts.
• Disallowing a sender to shrink its congestion window when packets are lost for
any reason serves as one option to solve this problem.
• The Indirect Transmission Control Protocol (I-TCP) and the Fast Retransmit scheme
are two other approaches used to solve this problem.
• Since traditional UDP does not use acknowledgments and does not perform
flow control, it is modified to accommodate the process of registering with a
foreign agent.
• Any network application involves at least two processes in two different hosts
communicating with each other over the network using sockets.
• The socket serves as an interface between the application layer and the transport
layer within a host. It is a logical endpoint for communication between two hosts
on a TCP/IP network.

13.11 Self Test Questions


A- Answer the following questions
1. Explain how the transport layer protocol provides logical communication between
processes running on different hosts.
2. What are the services provided by the transport layer?
3. What are the three parameters used when a destination host receives data from the
network layer, in order to forward this data to the appropriate process?

4. What is the role of sequence and acknowledge field numbers contained in the TCP
data unit?
5. List the main services provided by TCP?
6. Explain the structure of TCP data unit.
7. How does TCP setup the connection between the source and the destination?
8. How can the flow control be distinguished from the congestion control?
9. What are the main aspects of the sliding window algorithm?
10. How does the receive window effectively grant the sender permission to send a
certain number of additional bytes?
11. How does the round trip time affect the Timeout of the timer?
12. What are the reasons that may cause congestion?
13. What does TCP do to try to prevent congestion from occurring in the first place?
14. What are the differences between slow start congestion and the Internet congestion
control algorithm?
15. List the main characteristics of the UDP.
16. What are the main fields in the UDP data unit?
17. Where are the UDP and TCP applicable?
18. How is the correct amount of data to put into one packet chosen?
19. What are the problems that will take place if either end of the connection moves?
20. What are the differences between the Indirect and the fast transmit Transmission
Control Protocols?
21. How is the UDP modified to accommodate mobility?
22. What are the differences between the server socket and the client socket?
23. How does the server socket work?
24. How does the client socket work?

B- Identify the choice that best completes the statement or answers the
question.
1. Which two protocols carried within IP datagrams operate at the transport layer of the
OSI model?
I. ICMP    II. TCP    III. UDP
IV. IGMP    V. ARP    VI.
a. I, II and III    c. II and III
b. II, III and IV    d. V, VI
2. TCP is
a. a data-link layer protocol
b. an application layer protocol
c. a transport layer protocol
d. a network layer protocol
3. Which of the following protocols uses a handshake to establish a connection before
sending data?
a. Connection-oriented protocol
b. Routing protocol
c. Connectionless protocol
d. File Transfer Protocol

4. The Data Offset field in the TCP header specifies:


a. The length of the TCP header
b. The location of the current segment in the sequence
c. The length of the Data field
d. The checksum value used for error detection
5. ______ is a service that the UDP protocol provides.
a. Flow control c. Error detection
b. Guaranteed delivery d. None of the above
6. ______ is responsible for managing the three-way handshake.
a. IP c. TCP
b. UDP d. ARP
7. The ______ are a kind of Transport layer protocols that are more useful in situations
where data must be transferred quickly.
a. connectionless protocols
b. SYN-oriented protocols
c. connection-oriented protocols
d. ACK-oriented protocols
8. What is the transport layer protocol that establishes a connection with another node
before they begin transmitting data?
a. connectionless protocols
b. SYN-oriented protocols
c. connection-oriented protocols
d. ACK-oriented protocols
9. TCP/IP
a. comprises several subprotocols
b. comprises only one protocol
c. has been replaced by ARP
d. has been replaced by IPX/SPX
10. TCP/IP has grown extremely popular because
a. It is expensive
b. It cannot be routable
c. Its private nature made its programming code secure
d. Its open nature
11. TCP operates on the ______ layer of the OSI Model.
a. Physical c. Data Link
b. Session d. Transport
12. TCP
a. is a connectionless protocol
b. is a connection-oriented protocol
c. does not use checksums
d. does not ensure reliable data delivery
13. TCP
a. does not use checksums
b. is a connectionless protocol
c. provides flow control
d. does not ensure reliable data delivery
14. What is the address on a host where an application makes itself available to incoming
or outgoing data?
a. IP address c. MAC address
b. NIC address d. Port
15. The ______ field indicates how many bytes the sender can issue to a receiver while
acknowledgment for this segment is outstanding.
a. Acknowledge number c. Window
b. Reserved d. Checksum
16. The ______ field allows the receiving node to determine whether the TCP segment became
corrupted during transmission.
a. Checksum c. Flags
b. TCP header length d. Padding
17. ______ is the only TCP/IP core protocol that runs at the Transport layer of the OSI
Model.
a. UDP c. IP
b. TCP d. ARP
18. ______ is a connectionless transport service.
a. TCP c. UDP
b. IP d. HTTP
19. UDP
a. is less efficient than TCP    c. is more reliable than TCP
b. also uses checksums    d. produces less transmission overhead than TCP
20. UDP contains ______ header fields.
a. 4 c. 10
b. 13 d. 15
21. The client uses ______ to broadcast a DHCP discover packet.
a. ARP c. TCP
b. IP d. UDP
22. ______ port numbers indicate which application service is desired.
a. IP    c. PVC
b. TCP    d. WAP
23. The Secure Sockets Layer is an additional layer of software added between the
______ layer and the ______ layer.
a. transport (TCP); physical
b. data link; network
c. presentation; network
d. application; transport (TCP)


24. Which protocol is used by network-monitoring applications that do not require the
same level of reliability as offered by TCP?
a. IP c. FTP
b. UDP d. NCP
25. Which TCP/IP protocol is responsible for reliable delivery of data?
a. SPX c. TCP
b. UDP d. FTP
26. ______ is found in a TCP header but not in a UDP header.
a. Source port c. Destination port
b. Window size d. Checksum
27. ______ is found in a TCP header and in a UDP header.
a. Sequence number    c. Acknowledgment number
b. Window size    d. Checksum
28. Which of the following services use UDP?
I. DHCP II. SMTP III. SNMP
IV. FTP V. HTTP VI. TFTP
a. I, III and VI    c. II, IV and V
b. I, II and V    d. III, IV and VI
29. Which of the following services use TCP?
I. DHCP II. SMTP III. SNMP
IV. FTP V. HTTP VI. TFTP
a. I, III and VI    c. II, IV and V
b. I, II and V    d. III, IV and VI
30. Which TCP/IP protocol is responsible for reliable delivery of data?
a. SPX c. TCP
b. UDP d. FTP
31. Which of the following uses TCP as its transport protocol?
a. NFS c. TFTP
b. RTP d. HTTP
32. An I-TCP connection is composed of
a. one wireless connection and one fixed connection
b. one wireless connection
c. one fixed connection
d. two wireless connections
33. A Fast Retransmit Mobile TCP connection is composed of
a. one wireless connection and one fixed connection
b. one wireless connection
c. one fixed connection
d. two wireless connections


34. ______ is not a transport socket attribute.
a. The host’s IP address
b. The type of service
c. The application
d. The Maximum Segment Size MSS
35. On the transport-layer side, the application developer does not have control over
a. the choice of transport protocol
b. the ability to fix the maximum buffer size
c. the ability to fix the maximum segment size.
d. the choice of error checking method
36. One of the following is NOT true about transport sockets:
a. Any network application involves at least two processes in two different hosts
communicating with each other over the network using sockets.
b. The socket is a physical endpoint for communication between two hosts on a
TCP/IP network.
c. The socket serves as an interface between the application layer and the transport
layer within a host.
d. The socket is a logical endpoint for communication between two hosts on a
TCP/IP network.
