
Computer Networks

Chapter 6
Transport Layer

Prof. M. Sreenivasa Rao


The Weeks Ahead
Mar 11 Chapter 5.1: Network Layer
Mar 13 Chapter 5.1
Mar 18 EXAM 2
Mar 20 Chapter 5.1:
Mar 21 LAB – You should have several tests running.
Mar 25 Chapter 5.2: More Network Layer
Mar 27 Chapter 5.2:
Apr 1 Chapter 5.2
Apr 3 Chapter 6.1: Transport Layer
Apr 8 Chapter 6.1:
Apr 10 EXAM 3
Apr 15 Chapter 6.1:
Apr 17 Chapter 6.1:
Apr 22 Chapter 6.1:
Apr 24 Chapter 6.1:
Apr 25 LAB – Drop Dead Date!!
May 3 Final Exam – 8:00 – 10:00

Chapter Overview

The Transport Layer is concerned with getting packets from source to destination in a reliable, cost-effective manner.

6.1 The Transport Service – Overview of this layer
6.2 Elements of Transport Protocols – Jobs done in this layer
6.4 TCP and UDP – Description of popular protocols

Overview: The Transport Service

6.1 The Transport Service – This section gives an overview of properties in this layer.
6.2 Elements of Transport Protocols
6.4 TCP and UDP
The Transport Service: Unique Tasks For Transport

The transport level provides end-to-end communication between processes executing on different
machines. The services provided by a transport protocol are similar to those provided by a data link
layer protocol.
 
There are several important differences between the transport and lower layers:
 
1. User Oriented.
 
Application programmers interact directly with the transport layer; from the programmer's perspective,
the transport layer is the `network'. The transport layer should be oriented towards user services rather
than simply reflecting what the underlying layers happen to provide, and it presents an API.
 
2. Negotiation of Quality and Type of Services.
 
The user and transport protocol may need to negotiate as to the quality or type of service to be
provided. A user may want to negotiate such options as: throughput, delay, protection, priority,
reliability, etc.
 
3. Guaranteed Service.
 
The transport layer may have to overcome service deficiencies of the lower layers (e.g. providing
reliable service over an unreliable network layer).

The Transport Service: Unique Tasks For Transport

4. Address Resolution.
 
Naming and addressing become significant issues. In the transport layer, the user must deal with
them. For instance, how does a user connect to `the mail server process at Clark'?
 
Two solutions:
 
a) Use well known addresses that rarely if ever change;
i. allows programs to `wire in' addresses.
ii. This works for services that are well established (mail, or Telnet);
iii. it doesn't allow a user to easily experiment with new services.

b) Use a name server.


i. Servers register services with the name server;
ii. clients contact the server to find the transport address of a given service.
iii. Our Oracle is an excellent example.
 

The Transport Service: Unique Tasks For Transport

In both these solutions, we need a mechanism for mapping

        high-level service names ---> low-level encoding

that can be used within packet headers of the network protocols.

One way to simplify this complex problem is to split the address into two parts: a transport address is a
(machine address, local process) pair describing a service on a particular machine.
 
5. Storage capacity of the subnet.
 
Assumptions valid at the data link layer do not necessarily hold at the Transport Layer.

The subnet may buffer messages for a potentially long time, and an `old' packet may arrive at a
destination at unexpected times.

The Transport Service: Unique Tasks For Transport

6. Dynamic flow control.

The data link layer solution of pre-allocating buffers is inappropriate because a machine may have
hundreds of connections sharing a single physical link.

Appropriate settings for the flow control parameters depend on the communicating end points (e.g.,
Cray supercomputers vs. PCs), not on the protocol used.
 
The network layer/data link layer solution of simply not acknowledging frames for which the receiver
has no space is unacceptable.

For the DLL, the line is not being used for anything else; so retransmission is inexpensive.

For the transport level, end-to-end retransmissions are needed; this wastes resources by sending
the same packet over the same links multiple times.

If the receiver has no buffer space, the sender should be prevented from sending data.
 
7. Congestion control.

In connectionless internets, transport protocols have the job of congestion control. When the network
becomes congested, they must reduce the rate at which they insert packets into the subnet, because the
subnet has no way to prevent itself from becoming overloaded.

The Transport Service: Unique Tasks For Transport

8. Connection establishment.

Transport level protocols go through three phases: establishing, using, and terminating a connection.
Issues include:
 
• For datagram-oriented protocols, opening a connection simply allocates and initializes data
structures in the operating system kernel.

• Connection oriented protocols do handshaking that negotiates options with the remote peer at
the time a connection is opened.

• Establishing a connection may be tricky because of the possibility of old or duplicate packets.

• Terminating a connection presents interesting subtleties. For instance, both ends of the
connection must be sure that all the data in their queues have been delivered to the remote
application.
 
We'll look at these issues and more as we examine TCP and UDP. We will not spend much time on
OSI.

The Transport Service: Services to Upper Layers
The transport layer makes use of the services provided by the network layer in order to provide connection-
oriented and/or connection-less service.
 
It provides many of the same services as the network layer:

a) You can get the services in either place.

b) The issue is one of "control".

c) The network layer is part of the subnet and so users have no control over it.

d) If that subnet provides connections but not reliability, then users need to provide a transport
layer that gives them the reliability.
 
Features placed in the transport layer allow the same user interface and same quality of service to be
provided, independent of the underlying subnet.

The transport layer provides isolation between user and subnet.


 
 

The Transport Service: Quality of Service

The Transport Layer enhances Quality of Service. Its parameters might be described as:
 
Connection Establishment Delay: Time between request for a connection and its acknowledgment.
Becoming increasingly important for web-related activity.
 
Connection Establishment Failure Probability: Chance that the connection won't be established within
a defined time. (A congestion issue.)
 
Throughput: Number of bytes of data transferred per time unit. This depends on host CPU time (on
benchmark tests) and on congestion, window size, and number of hops (in real life).
 
Transit Delay: Time between when data is sent and when it's received (user to user). Again, this
depends on host CPU time and subnet congestion.
 
Residual Error Ratio: What fraction of packets are lost or garbled. This is from the point of view of the
user - hopefully there are none, but in practice there is some non-zero probability of error.
 
Protection: The ability to provide security against a third party reading or modifying the transmitted data.
 
Priority: A way of designating that when congestion occurs, some messages should be given more
favorable treatment than others.
 
Resilience: The probability that the transport layer itself will mess up and terminate a connection. It also
relates to the ability of a transport layer to respond to problems and give graceful recovery.
 
The Transport Service: Transport Service Primitives
See Figure 6.3 for a minimum set of user-Transport Layer interfaces.
 

(Note: A TPDU is a Transport Protocol Data Unit, the "packet" sent from one transport layer to another.
There is no "right" word here.)
 
Figure 6.4 shows the context between TPDU (transport) and PACKETS (network) and FRAMES (DLL ).
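
The minimal primitives of Figure 6.3 (in the textbook treatment: LISTEN, CONNECT, SEND, RECEIVE,
DISCONNECT) correspond closely to the Berkeley socket API that real systems expose. A minimal sketch in
Python, with a loopback address and an arbitrary port as placeholder values:

# Minimal sketch: mapping the transport primitives onto Berkeley sockets.
# HOST and PORT are placeholder values.
import socket

HOST, PORT = "127.0.0.1", 6000

# Server side: LISTEN for incoming connection requests.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(1)                      # LISTEN

# Client side: CONNECT, SEND, RECEIVE, DISCONNECT.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((HOST, PORT))          # CONNECT
conn, addr = server.accept()          # server obtains a connected socket

client.sendall(b"hello")              # SEND
data = conn.recv(1024)                # RECEIVE

client.close()                        # DISCONNECT
conn.close()
server.close()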

Overview: Elements of Transport Protocols

6.1 The Transport Service
6.2 Elements of Transport Protocols – What jobs are done by the Transport Layer?
6.4 TCP and UDP

Elements of Transport Protocols
 
The entity that implements the Transport Layer performs many of the same functions as does the Data Link
Layer. But there are some essential differences:
 
 
PROPERTY             DATA LINK LAYER                        TRANSPORT LAYER

ROUTING              Over a physical channel                Over the entire subnet

ADDRESSING           Destination router determined by       Explicit addresses required
                     the outgoing line

INITIAL CONNECTION   Other end is always there              Many complicated possibilities

STORAGE CAPACITY     Packet either arrives or doesn't       Packet may bounce around for a long while

BUFFERING            Concerned with only one connection     Must support many connections

Elements of Transport Protocols: Addressing

The transport layer uses a TSAP (Transport Service Access Point). In the Internet, this corresponds to
an (IP address, local port) pair. We have seen this in gory detail in Project 1.
 
There are four (at least!) methods for determining the location of a remote service:
 
1. That service has advertised globally that it is on a particular port. It's been on this port for years!
FTP is port 21.

2. It advertises its service locally - once you get to that machine, you can find its service name in
/etc/services or the network equivalent of this. Our oracle server could be found in this fashion
using getservbyname.

3. A process server (inetd in the Internet) watches for a connection requesting a particular service.
Upon a request, it spawns a process to provide the service.

4. A local name server has handled registration of services. A process requesting a connection can ask
this name server for the port of the service. Our Oracle fits in this category.
 
Addresses can be hierarchical (they specify everything you need, to get from some remote machine to the
target service).
 
Addresses can be flat ( the address assumes that the requester can find the right machine, and provides
only within-the-module information.) A port number would fit this category.
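
The slide mentions getservbyname; a minimal sketch of method 2, looking up well-known services in the
local services database (the example names assume a standard /etc/services is present):

# Minimal sketch: resolving well-known service names to port numbers via
# the local services database, as in method 2 above.
import socket

print(socket.getservbyname("ftp", "tcp"))     # typically 21
print(socket.getservbyname("telnet", "tcp"))  # typically 23
print(socket.getservbyname("domain", "udp"))  # typically 53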

Elements of Transport Protocols: Establishing A Connection

If all goes well, establishing a connection is easy. BUT, problems occur when portions of the handshake
are lost or duplicated - duplicated is the REAL problem.
 
Imagine this scenario:
 
a) A connection is established,
b) A transaction performed and
c) The connection closed.
d) Various duplicate frames arrive in order and do the whole thing again!!
 
Ways to avoid this include:
 
1. Use a different transport address each time; but then well-known ports will no longer work.

2. Use a sequence number for each connection.


a) When a connection is closed, the server updates a number representing the next-expected
connection.
b) But this requires state in every server about every connection it's ever gotten.
c) Such state won't survive across a crash.
 

Elements of Transport Protocols: Establishing A Connection
3. Devise a way to kill off aged packets; ensure that no packet can survive longer than some known time.
Ways of doing this include:
 
a) Restricted subnet design: Prevent packets from looping; limit the time used by congestion
delay.

b) Use a hop count: throw away any packet that's been forwarded more than N times.

c) Time stamp packets:


• Routers compare the current time against the packet's creation time and discard packets
older than some time T. (Requires clock synchronization.)

• We also need to ensure that ACKs of these packets are also gone; do this by waiting
some multiple of T.

Other useful tools include:


 
a) Each machine has a clock.
• Time survives even across system crashes.
• The clocks don't need synchronization but need granularity finer than the sequence
numbers in the packets.

b) No identically numbered packets will be outstanding at the same time as a result of a restart;
this can be assured by using a large sequence number space.

Elements of Transport Protocols: Establishing A Connection
Many connection problems can be solved with the
Three Way Handshake.
 
In Figure 6.11 , note the use of sequence numbers
for all requests. Each host attaches its own
sequence number and ACKs the sequence
number of the request it just received.
 
Here the possible frame types are CR ==
Connection Request, ACK, DATA, REJECT.
 
We’ll see Three Way Handshake again as used by
TCP.

Elements of Transport Protocols: Releasing A Connection

There are potential problems here, just as with establishing connections.
 
One problem is shown in Figure 6.12 . Here a disconnect is issued by the receiver such that data in
transit from the sender is lost. Note that data and the disconnect request (DR) cross each other -
Host 2 thinks it's done.
 

Elements of Transport Protocols: Releasing A Connection
The issue is: How does one side
know that the other side has
seen its disconnect message?

To be ABSOLUTELY sure is
impossible, but Figure 6.14
shows a number of methods
that work in practice, though
they become increasingly more
"iffy".

Elements of Transport Protocols: Flow Control & Buffering

The issues involved here are:

1. For an unreliable subnet, the sender must maintain the data until it's acknowledged. And at the
Transport layer, the assumption is that the subnet IS unreliable!

2. The receiver needs to maintain buffers

• because there may be several packets in transit simultaneously; the receiving application may
not be ready for them.

• because the data may arrive out of order - and selective repeat is a good thing.

3. Many channels open and many buffers per channel leads to dramatic amounts of memory usage.

4. For bursty data, allocation of buffers at each request is a good thing (memory isn't grabbed for long
periods of time). For long, big-buffer transactions (like file transfer) holding on to buffers saves
allocation time.
 

Elements of Transport Protocols: Flow Control & Buffering

ACKs, as used in sliding window protocols,

a) say that a frame has arrived, and
b) govern the amount of outstanding data.

An alternative to this second method is:


 
1. At connection time, and at times during transmission, the sender and receiver reach an agreement
on the size of the receive buffer.

2. The sender, knowing how much it has sent, and knowing how many ACKs it's received, knows the
number of buffers available on the receiver. It won't send more than is available.
 
See how this works in Figure 6.16
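
An illustrative model of the buffer-agreement scheme above: the sender keeps a count of the receive
buffers it may still fill and stops when the count reaches zero. This is a toy model, not TCP's actual
window code:

# Illustrative model of credit-based flow control: the sender may have at
# most `credits` unacknowledged segments outstanding at the receiver.
class Sender:
    def __init__(self, credits):
        self.credits = credits        # buffers the receiver agreed to hold

    def can_send(self):
        return self.credits > 0

    def on_send(self):
        assert self.can_send(), "no receive buffers available"
        self.credits -= 1             # one more buffer in use at the receiver

    def on_ack(self, freed_buffers):
        self.credits += freed_buffers # receiver freed buffers and said so

s = Sender(credits=4)
for _ in range(4):
    s.on_send()
print(s.can_send())                   # False: window closed
s.on_ack(freed_buffers=2)
print(s.can_send())                   # True: two buffers available again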

TCP & UDP: Overview

6.1 The Transport Service
6.2 Elements of Transport Protocols
6.4 TCP and UDP – We now look at specific mechanisms used by TCP and UDP.

What is TCP

• Transport layer protocol for data communication.
• Properties:
  – Reliable data transfer
  – Virtual circuit connection
  – Full duplex connection
  – Unstructured stream
  – Buffered transfer
  – Multiplexing
Reliability

Networks can
– deliver packets out of order
– lose packets
– deliver duplicates
– corrupt packets due to noise
 
TCP provides protection against above
errors.
 

Multiplexing
Problem: When a packet arrives at a host,

– Several network applications are running on a host.


– How to identify a specific application process.
 
Solution:
– Use Port numbers.
– Endpoint of a connection is defined as (host, port) pair.
– A connection is identified by
(lhost, lport, rhost, rport).
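
A minimal sketch that prints the four values identifying a live connection; example.com and port 80 are
placeholder values, and the snippet needs network access:

# Minimal sketch: the four values that identify a TCP connection.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("example.com", 80))
lhost, lport = s.getsockname()   # local endpoint chosen by the OS
rhost, rport = s.getpeername()   # remote endpoint we connected to
print((lhost, lport, rhost, rport))
s.close()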

Reserved TCP Port Numbers
 
• For commonly used applications, port
numbers have been reserved. For example,
– 21 File Transfer Protocol
– 23 Telnet
– 25 Simple Mail Transfer Protocol
– 53 Domain Name Service
– 79 Finger
– 119 Network News Transfer Protocol

User Datagram Protocol

• Provides unreliable, connectionless, end-to-end delivery service using IP.

• Adds ability to distinguish among multiple destinations within a given host.

• Optional checksum to ensure integrity of data.
Port Numbers
• Similar to TCP ports.
• 16-bit identifier to select process attached to a
communication.
• Port numbers 0-1023 are reserved for use by the
system.
• Well known ports (0-511) are assigned to
important applications. For example,
53 -- Domain name server
123 -- Network time protocol
161 -- SNMP server
• Other applications can bind a port on demand.
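
A minimal sketch of binding a port on demand: requesting port 0 asks the operating system for any free,
unprivileged port:

# Minimal sketch: binding a UDP port on demand.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 0))                 # port 0 == "pick any free port"
print("bound to port", sock.getsockname()[1])
sock.close()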

TCP & UDP: Overview

OVERVIEW OF TRANSMISSION CONTROL PROTOCOL


 
TCP provides reliable, full-duplex, byte stream-oriented service. It resides directly above IP (and adjacent
to UDP), and uses acknowledgments with retransmissions to achieve reliability.
 
TCP differs from the sliding window protocols we have studied so far in the following ways:
 
1. Applications treat the data sent and received as an arbitrary byte stream.
a) The sending TCP module frames the byte stream into "segments", and sends individual
segments within an IP datagram.
b) TCP decides where segment boundaries start and end (the application does not!).
c) In contrast, individual packets are handed to the data link protocols.

2. The TCP sliding window operates at the byte level rather than the packet (or segment) level.

"Sequence numbers" count bytes rather than packets/segments.


 

TCP & UDP: Overview

OVERVIEW OF TRANSMISSION CONTROL PROTOCOL

3. Segment boundaries may change at any time. TCP is free to retransmit two adjacent segments each
containing 200 bytes of data as a single segment of 400 bytes.
 
4. The sizes of the send and receive windows change dynamically. In particular, acknowledgments
contain two pieces of information:
 
a) A conventional ACK indicating what has been received, and

b) The receiver's current window size;


i. The number of bytes of data the receiver is willing to accept.
ii. This flow control at the transport level allows a slow receiver to shut down a fast sender.
iii. For example, a PC can direct a supercomputer to stop sending additional data until it has
processed the data it already has.


TCP & UDP: The TCP Service Model

Attributes of TCP include:


 
1. Socket addresses made up of IP address + Port Number.

• Port numbers 0-255 are reserved for well-known ports.


• For such universal services as mail, Telnet, and FTP.
• Well-known ports are administered by a central authority.
• Also ports for services specific to UNIX machines (/etc/services).

2. Sites are free to assign the remaining ports any way they wish.

3. TCP connections are full-duplex and point to point (a socket connects to a socket.)

4. The connection is a byte stream. Messages are not maintained end to end.

5. Data may be bunched together for efficiency, not being sent out until more data is to be sent or until
a timer expires. The sender can use the PUSH flag to override this.

6. Can send data in Urgent mode that in effect causes a signal on the remote process.
 

TCP & UDP: The TCP Protocol

Properties of TCP include:


 
1. Segments are sent with 20 byte headers and 0 or more bytes of data.

2. In theory a segment can be as large as the 65,515-byte maximum IP payload; in practice, most
networks limit the size to several Kbytes.

3. Uses sliding window. The receiver sends back an ACK containing the sequence number of the
data it NEXT expects to receive. The sender maintains a timer to alert to lost transmissions.

TCP & UDP: The TCP Segment Header

TCP segments contain a 20-byte TCP header, followed by header options (if any), followed by user
data (if any). TCP headers contain the following fields (see Figure 6.24):
 

TCP & UDP: The TCP Segment Header

 
1. - 2. Source and destination port ( 16 bits each):
 
TCP port numbers of the sender and receiver.
• TCP and UDP ports are essentially the same, but are assigned separately.
• Thus, TCP port 54 may refer to a different service than UDP port 54.
• Each protocol manages its own ports.
 
3. Sequence number (32 bits):
 
The sequence number of the first byte of data in the data portion of the segment.
 

TCP & UDP: The TCP Segment Header
4. Acknowledgment number (32 bits):
 
a) The next byte expected.
• The receiver has received up to and including every byte prior to the acknowledgment.
• Transport protocols must always consider the possibility of delayed datagrams arriving
unexpectedly.

b) Suppose we use a sequence number space of 16 bits (0-65535).


• The time required for 65 Kbytes is only a small part of a second;
• the sequence numbers would wrap very quickly and there would be a high likelihood of old data
popping up on the subnet and being confused for the original data.

c) Ensuring that the sequence number space is large enough to detect old (invalid) datagrams depends
on two factors:
 
• The amount of wall-clock time that a datagram can remain in the network.

• The amount of time that elapses before a given sequence number becomes reused.
 
We use 32 bit sequence numbers. We also use the IP TTL field to ensure that datagrams don't stay in the
network for too long.

5. TCP Header Length (4 bits)


 
The number of 32-bit words in the TCP header. Used to locate the start of the data section (if any).


TCP & UDP: The TCP Segment Header
6. Various Flags (6 bits)
 
Urgent pointer (URG)
 
If set, the urgent pointer field contains a valid pointer.
 
Acknowledgment valid (ACK bit)
 
• Set when the acknowledgment field is valid.
• In practice, the only time that the ACK bit is not set is during the 3-way handshake at the start of
the connection.
• Otherwise it ACKs what's next expected, even if the last packet ACK'd the same thing.
 
Reset (RST)
 
The reset flag is used to abort connections quickly. It is used to signal errors rather than the normal
termination of a connection when both sides have no more data to send. Upon receipt of a RST
segment, TCP aborts the connection and informs the application of the error.

Synchronization (SYN)
 
Used to initiate a new connection. (Described below.)
 
Finish (FIN)
 
Used to close a connection. (Described below.)

TCP & UDP: The TCP Segment Header
6. Various Flags (6 bits)
 
Push (PSH)
 
Flush any data buffered in the sender or receiver's queues and hand it to the remote application.

The PSH bit is requested by the sending application; it is not generated by TCP itself.
 
TCP decides where segment boundaries start and end.
 
The sending TCP is free to delay sending a segment in the hope that the application will generate
more data shortly.

This performance optimization allows an application to (inefficiently) write one byte at a time, but have
TCP package many bytes into a single segment.
 
A client may send data, then wait for the server's response. If TCP (either the sender or receiver) is
buffering the request, the server application won't have received the request, and the client will
wait a long time.
 
The client sets the PSH flag when it sends the last byte of a complete request. The PSH directs TCP
to flush the data to the remote application.

TCP & UDP: The TCP Segment Header
7. Flow control window size (16 bits)
• The size of the receive window, relative to the acknowledgment field.
• The sender is not allowed to send any data that extends beyond the right edge of the receiver's
receive window.
• If the receiver cannot accept any more data, it advertises a flow-control window of zero.

8. Checksum (16 bits)


• Checksum of TCP header, pseudo-header, and data.
 
9. Urgent data (16 bits)
• If the Urgent data flag is set, this field indicates the byte position (relative to the current
sequence number) where urgent data will be found.
• It allows the sending application to indicate the presence of high-priority data that the receiver
should process immediately.
 
10. Options (variable length)
• Similar to IP options, but for TCP-specific options.
• One interesting case is the maximum segment size option, which allows the sender and receiver
to agree on how large segments can be.
• This allows a small machine with few resources to prevent a large machine from sending
segments that are too large for the small machine to handle.
• On the other hand, larger segments are more efficient, so they should be used when
appropriate.

TCP & UDP: The TCP Segment Header
11. PSEUDOHEADER:
 
The information in the pseudo header comes from the IP datagram header. This is data passed down to the
IP Layer and is included in the checksum.
 
IP source address (4 bytes): Sending machine.
 
IP destination address (4 bytes): Destination machine.
 
TCP Length (2 bytes): Length of TCP segment.
 
Protocol (1 byte): Protocol field of the IP header; should be 6 (for TCP).
 
Zero (1 byte): One byte pad containing zero.
 
Note: the use of a pseudo header is a strong violation of our goal of layering. However, the decision is a
compromise based on pragmatics. Using the IP address as part of the transport address greatly
simplifies the problem of mapping between transport level addresses and machine addresses.


TCP & UDP: TCP Connection Management

Establishing a Connection:

TCP uses a Three Way Handshake to initiate a connection. The handshake serves two functions:
 
1. It ensures that both sides are ready to transmit data, and that both ends know that the other end is
ready before transmission actually starts.

2. It allows both sides to pick the initial sequence number to use.


 
• When opening a new connection, do not simply use an initial sequence number of 0.

• If connections are of short duration, exchanging only a small number of segments, we may reuse low
sequence numbers too quickly.

• Thus, each side that wants to send data must be able to choose its initial sequence number.


TCP & UDP: TCP Connection Management
The Three Way Handshake proceeds as follows: See Figure 6.26
1. TCP A picks an initial sequence number (A_SEQ) and sends a segment to B containing:

2. When TCP B receives the SYN, it chooses its initial sequence number (B_SEQ) and sends a TCP
segment to A containing:

3. When A receives B 's response, it acknowledges B 's choice of an initial sequence number by
sending a data-less third segment containing:

4. Data transfer may now begin.

Figure 6.26 shows (a) the normal case and (b) a call collision.

TCP & UDP: TCP Connection Management

a) TCP A picks an initial sequence number (A_SEQ) and sends a segment to B containing:

SYN_FLAG = 1, ACK_FLAG = 0, and SEQ = A_SEQ.


 
b) When TCP B receives the SYN, it chooses its initial sequence number (B_SEQ) and sends a TCP
segment to A containing:

SYN_FLAG = 1, ACK_FLAG = 1, ACK = (A_SEQ + 1), SEQ = B_SEQ.

c) When A receives B 's response, it acknowledges B 's choice of an initial sequence number by
sending a dataless third segment containing:

SYN_FLAG = 0, ACK_FLAG = 1, ACK = (B_SEQ + 1),


SEQ = A_SEQ+1 (data length was 0).

d) Data transfer may now begin.


 
Note: The sequence numbers used in SYN segments are actually part of the sequence number space.
• That is why the third segment that A sends contains SEQ=(A_SEQ+1).
• This is required so that we don't get confused by old SYNs that we have already seen.
• To insure that old segments are ignored, TCP ignores any segments that refer to a sequence
number outside of its receive window.
• This includes segments with the SYN bit set.
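
An illustrative trace of the sequence-number bookkeeping in steps (a)-(c), using the slide's A_SEQ and
B_SEQ names with arbitrary initial values:

# Illustrative trace of the three-way handshake sequence numbers.
A_SEQ, B_SEQ = 1000, 5000

syn     = dict(SYN=1, ACK=0, seq=A_SEQ)                     # A -> B
syn_ack = dict(SYN=1, ACK=1, seq=B_SEQ, ack=A_SEQ + 1)      # B -> A
ack     = dict(SYN=0, ACK=1, seq=A_SEQ + 1, ack=B_SEQ + 1)  # A -> B, data may follow

for name, seg in [("SYN", syn), ("SYN+ACK", syn_ack), ("ACK", ack)]:
    print(name, seg)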

TCP & UDP: TCP Connection Management
Terminating Connections
 
An application sets the FIN bit when it has no more data to send.

Then the remote TCP refuses to accept any more new data (data whose sequence number is greater than
that indicated by the FIN segment).
 
Closing a connection is complicated because receipt of a FIN doesn't mean we are done.
 
        We may not have received all the data leading up to the FIN (e.g., some segments may have been
lost), and we must make sure that we have received all the data in the window.
 
        Also, FINs refer to only 1/2 of the connection. If we send a FIN, we cannot send any more new data,
but must continue accepting data sent by the peer. The connection closes only after both sides have
sent FIN segments.
 
        Finally, even after we have sent and received a FIN, we are not completely done! We must wait
around long enough to be sure that our peer has received an ACK for its FIN. If it has not, and we
terminate the connection (deleting a record of its existence), we will return a RST segment when the
peer retransmits the FIN, and the peer will abort the connection.

TCP & UDP: TCP Transmission Policy

TCP tries to achieve good performance. To do this, it delays sending small amounts of data or ACKs in
the hope of gathering them into larger segments. Algorithms include:
 
1.   Delay of ACKs and window updates for
some amount of time in the hopes
that they can be piggybacked on
some data. Generally 200
milliseconds.

TCP & UDP: TCP Transmission Policy
2.      Nagle's Algorithm: if the sender is transmitting one byte at a time, send the first byte, then
buffer the following bytes until the first is ACK'd; then send everything gathered so far in one
segment, and repeat (see the sketch after this list).
 
3.      Prevention of the following situation (the "silly window syndrome"):
a) A receiver's TCP-level buffer is full, so
it advertises its current window size
(zero).
b) The receiving application now takes a
byte out of that buffer.
c) The receiver, seeing some available
space, could tell the sender to transmit
one byte.
d) Instead, the receiver doesn't reopen the
window until there is a worthwhile amount
of space ( "worthwhile" == min( max
segment size, half the buffer size ) ).
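
A minimal, illustrative sketch of Nagle's algorithm as described in item 2 above; it ignores the
full-segment case (real TCP also sends immediately once a maximum-size segment's worth of data has
accumulated) and every other detail of real TCP:

# Illustrative sketch of Nagle's algorithm: small writes are coalesced
# while earlier data is still unacknowledged.
class NagleSender:
    def __init__(self):
        self.unacked = False     # is previously sent data still unacknowledged?
        self.pending = b""       # small writes buffered while waiting for an ACK

    def write(self, data):
        if not self.unacked:
            self.transmit(data)          # nothing outstanding: send at once
        else:
            self.pending += data         # hold small data until the ACK arrives

    def on_ack(self):
        self.unacked = False
        if self.pending:
            self.transmit(self.pending)  # send everything gathered so far
            self.pending = b""

    def transmit(self, data):
        print("segment:", data)
        self.unacked = True

s = NagleSender()
for byte in b"hello":                    # application writes one byte at a time
    s.write(bytes([byte]))
s.on_ack()                               # 'h' ACKed; "ello" goes out as one segment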

TCP & UDP: TCP Congestion Control
Transport protocols operating across connection-less networks must implement congestion control. This
means reducing the offered load on the network when it becomes congested.
 
So let's understand what factors govern the rate at which TCP sends segments.
 
1.     The current window size specifies the amount of data that can be in transmission at any one time.
Small windows imply little data, large windows imply a large amount of data.
 
2.     If our retransmit timer is too short, TCP retransmits segments that have been delayed, but not lost,
increasing congestion at a time when the network is probably already congested!
 
Both of these factors are discussed in the following subsections

TCP & UDP: TCP Congestion Control
Window Size and Slow-Start TCP
 
There is a congestion control mechanism in TCP that adjusts the size of the sending window to match the
current ability of the network to deliver segments:
 
1.      If the send window is small, and the network is idle, TCP will make inefficient use of the available
links.
 
2.      If the send window is large, and the network is congested, most segments will be using gateway
buffer space waiting for links to become available.
 
3.      Even in an unloaded network, the optimal window size depends on network topology: To keep all
links busy simultaneously, exactly one segment should be in transmission on each link along the
path. Thus, the optimal window size depends on the actual path and varies dynamically.

TCP & UDP: TCP Congestion Control
WINDOWS:
 
A congestion window keeps track of the appropriate send
window relative to network load.
 
The congestion window is not related to the flow-control
(sender/receiver - coordination) window.
 
The actual send window in use at any one time will be the
smaller of the two windows.

There are two parts to TCP's congestion control mechanism. See Figure 6.32
 
1. Increase the sender's window to take advantage of any additional bandwidth that becomes available.
 
• This case is part of congestion avoidance and is handled by slowly, but continually, increasing the size
of the send window.

• We want to slowly take advantage of available resources, but not so fast that we overload the network.

• We want to increase so slowly that we will get feedback from the network or remote end of the
connection before we've increased the level of congestion significantly.
 
2. Decrease the sender's window suddenly and significantly in response to congestion. This case is known as
congestion control, reacting after the network becomes overloaded.

TCP & UDP: TCP Congestion Control
AN EXAMPLE OF CONGESTION CONTROL:
 
1. Assume that TCP is transmitting at just the right level for
current conditions. During the congestion avoidance
phase, TCP is sending data at the proper rate for current
conditions.

2. To make use of any additional capacity that becomes


available, the sender slowly and linearly increases the
size of its send window. It can do this as long as it
receives a positive indication that the data it is
transmitting is reaching the remote end. None of the
data is getting lost, so there must not be much (if any)
congestion.

3. Because the send window continually increases, the network will eventually become congested. The
sender fails to receive an ACK for a segment it just sent. This congestion indicator causes the sender
to reduce its window to 1.
 
• During slow start, the sender increases the window size by one on every new ACK. In each time
period there will be twice as many sends as in the previous time period.

• When the current window reaches half the previous maximum, congestion avoidance takes over and the
window resumes its linear increase.
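
An illustrative sketch of the window behavior described above, counting the congestion window in
maximum-size segments: exponential growth during slow start, roughly linear growth during congestion
avoidance, and collapse to one segment on a timeout. The threshold value is a placeholder and real TCP
works in bytes with more cases:

# Illustrative sketch of slow start / congestion avoidance.
cwnd = 1          # congestion window, in segments
ssthresh = 16     # slow-start threshold (placeholder value)

def on_ack():
    global cwnd
    if cwnd < ssthresh:
        cwnd += 1                       # slow start: window doubles each round trip
    else:
        cwnd += 1.0 / cwnd              # congestion avoidance: about +1 per round trip

def on_timeout():
    global cwnd, ssthresh
    ssthresh = max(int(cwnd) // 2, 1)   # remember half the window that caused trouble
    cwnd = 1                            # start over from one segment

for _ in range(40):
    on_ack()
on_timeout()
print(cwnd, ssthresh)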

TCP & UDP: TCP Timer Management

The issue is determining the value to set the retransmission timer:


 
• If the value is too short, we will retransmit prematurely, even though the original segment has not been
lost.

• If our value is too long, the connection will remain idle for a long period of time after a lost segment,
while we wait for the timer to go off.

• Ideally, we want our timer to be close to the true Round Trip Time (RTT). Because the actual round
trip delay varies dynamically (unlike in the data link layer), using a fixed timer is inadequate.
 

TCP & UDP: TCP Timer Management

To cope with widely varying delays, TCP maintains a dynamic estimate of the current RTT calculated this
way:
 
1. When sending a segment, the sender starts a timer.

2. Upon receipt of an acknowledgment, stop the timer and record the actual elapsed delay, M,
between sending the segment and receiving its ACK.

3. Whenever a new value, M, for the current RTT is measured, it is averaged into a smoothed RTT
depending on the last measurement as follows:
 
RTT = α * RTT + ( 1 - α ) * M

α is known as a smoothing factor, and it determines how much weight the new measurement carries.
When α is 0, we simply use the new value; when α is 1, we ignore the new value. Typical values
for α lie between .8 and .9. Use of the above formula causes us to change our RTT estimate
slowly, so that we don't overreact to wild fluctuations in the delay time.

TCP & UDP: TCP Timer Management

Because the RTT is only an estimate of the actual delay, which varies from packet to packet, set the
actual retransmission timeout (RTO) for a segment to depend on the standard deviation of RTT. By
knowing the standard deviations of RTT, we can base the RTO on a real probability.
 
TCP maintains an estimate of the mean deviation (D) of the RTT. D is the difference between the
measured and expected RTT and provides a close approximation to the standard deviation. Its
computation is as follows:
 
D = α * D + ( 1 - α ) * | RTT - M |
 
Finally, when transmitting a segment, set its retransmission timer to RTO:
 
RTO = RTT + 4 * D.
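
A minimal sketch of the estimator defined by the three formulas above; the α value and the sample
measurements fed in are placeholders:

# Minimal sketch of the RTT/RTO estimator from the formulas above.
ALPHA = 0.875        # smoothing factor: weight given to the old estimate

rtt = 1.0            # smoothed round-trip-time estimate (seconds)
dev = 0.0            # smoothed mean deviation of the RTT

def on_measurement(m):
    global rtt, dev
    dev = ALPHA * dev + (1 - ALPHA) * abs(rtt - m)   # D = a*D + (1-a)*|RTT - M|
    rtt = ALPHA * rtt + (1 - ALPHA) * m              # RTT = a*RTT + (1-a)*M
    return rtt + 4 * dev                             # RTO = RTT + 4*D

for sample in [0.9, 1.2, 0.8, 2.0, 1.1]:             # made-up measurements
    print("RTO =", round(on_measurement(sample), 3))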

TCP & UDP: User Datagram Protocol (UDP)

UDP provides unreliable datagram service. UDP is a "thin" layer sitting on top of IP and this thinness
makes it much less expensive. It uses the raw datagram service of IP and has no
acknowledgments or retransmissions.
 
Value added by UDP is that it provides delivery to a process where IP only gives delivery to a machine.
 
Although it is convenient to think of transport service between processes, this leads to some problems
that can be solved by UDP:
 
Processes are identified differently on different machines; we don't want to have machine or operating
system dependencies in our protocol.
 
Processes may terminate and restart. If a machine reboots, we don't want to have to tell other machines
about it.
 
Associating a single process with a connection makes it difficult to have multiple processes servicing
client requests (e.g. file server processes on a file server).
 
The solution is to add a level of indirection. Transport level addresses refer to services without regard to
who actually provides that service. In most cases, a transport service maps to a single process, but
UDP allows alternatives.

TCP & UDP: User Datagram Protocol (UDP)

Like all packets we've seen, UDP datagrams consist of a UDP header and some data. The UDP header
contains the following fields:
 
Source port (16 bits): Port number of the sender.
 
Destination port (16 bits): Port number of the intended recipient. UDP software uses this number to
demultiplex a datagram to the appropriate higher-layer software (e.g. a specific connection).
 
Length (16 bits): Length of the entire UDP datagram, including header and data.
 
Checksum (16 bits): Checksum of entire datagram (including data and pseudo header).
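
A minimal sketch of packing and parsing the four header fields just listed; the ports and payload are
placeholders, and the checksum is left at zero (which UDP permits, meaning no checksum was computed):

# Minimal sketch: building and parsing the 8-byte UDP header described above.
import struct

payload = b"hello"
src_port, dst_port = 40000, 53
length = 8 + len(payload)                    # header (8 bytes) + data
header = struct.pack("!HHHH", src_port, dst_port, length, 0)

datagram = header + payload
sp, dp, ln, ck = struct.unpack("!HHHH", datagram[:8])
print(sp, dp, ln, ck, datagram[8:])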
 

Summary: Putting Together The Packets From All The Layers

(Figures: the Ethernet frame, the IP packet, and the TCP segment.)
