CET_EDU_CN Lecture Notes

 The transport layer provides communication between two processes running on

different hosts.
 A process is an instance of a program that is running on a host.
 There may be multiple processes communicating between two hosts – for example,
there could be an FTP session and a Telnet session between the same two hosts.

Transport services and protocols


 provide logical communication between application processes running on different hosts
 transport protocols run in end systems
 send side: breaks application messages into segments, passes them to the network layer
 receive side: reassembles segments into messages, passes them to the application layer
 more than one transport protocol is available to applications
 Internet: TCP and UDP
Transport Layer Protocols:
 reliable, in-order delivery (TCP)
 congestion control
 flow control
 connection setup
 unreliable, unordered delivery: UDP
 no-frills extension of “best-effort” IP
 services not available:
 delay guarantees
 bandwidth guarantees
UDP: User Datagram Protocol
 “no frills,” “bare bones” Internet transport protocol
 “best effort” service, UDP segments may be:
 lost
 delivered out of order to app
 connectionless:
 no handshaking between UDP sender, receiver
 each UDP segment handled independently of others
 often used for streaming multimedia apps
 loss tolerant
 rate sensitive
 other UDP uses
 DNS
 SNMP
 reliable transfer over UDP: add reliability at application layer
 application-specific error recovery!
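
To make the connectionless behaviour concrete, here is a minimal Python sketch (the loopback address and port 9999 are arbitrary choices for illustration): there is no handshake, no connection state, and delivery is best effort.

```python
# Minimal sketch of connectionless UDP messaging.
import socket

# Receiver: bind and wait -- no accept(), no handshake.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 9999))

# Sender: no connect() needed; each datagram is handled independently.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello", ("127.0.0.1", 9999))

data, addr = recv_sock.recvfrom(2048)   # in general may never arrive: UDP is best effort
print(data, "from", addr)
```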

TCP Service Model


 TCP Service is obtained by having both the sender and receiver create end points,
called sockets. Each socket has a socket number (address) consisting of the IP address
of the host and a 16-bit number local to that host, called a port.
 To obtain TCP service, a connection must be explicitly established between a socket on
the sending machine and a socket on the receiving machine.
 A socket may be used for multiple connections at the same time. In other words, two or
more connections may terminate at the same socket.
 Port numbers below 1024 are called well-known ports and are reserved for standard
services. For example, any process wishing to establish a connection to a host to
transfer a file using FTP can connect to the destination host’s port 21 to contact its FTP
daemon/service. Similarly, to establish a remote login session using TELNET, port 23 is
used. Port 80 is used for HTTP, port 443 is used for SSL, etc.
 Ports between 1024 and 5000 were traditionally used as ephemeral ports and are free to use
(not reserved). The client’s socket typically uses such a port.
 All TCP connections are full-duplex and point-to-point. Full duplex means that traffic can
go in both directions at the same time. Point-to-point means that each connection has
exactly two end points. TCP does not support multicasting or broadcasting.
 A TCP connection is a byte stream, not a message stream. Message boundaries are not
preserved end to end.
 For example, if the sending process does four 512-byte writes to a TCP stream, these
data may be delivered to the receiving process as four 512-byte chunks, or two 1024-
byte chunks, or one 2048-byte chunk, or some other way.
 When an application passes data to TCP, TCP may send it immediately or buffer it (in
order to collect a larger amount to send at once), at its discretion.
 Every byte on a TCP connection has its own 32-bit sequence number.
 The sending and receiving TCP entities exchange data in the form of segments. A
segment consists of a fixed 20-byte header (plus an optional part) followed by 0 or more
data bytes. The TCP software decides how big segments should be. It can accumulate
data from several writes into one segment or split data from one write over multiple
segments.
 Two limits restrict the segment size:
 Each segment, including the TCP header, must fit in the 65,515-byte IP payload.
 Each network has a maximum transfer unit (MTU), and each segment must fit in the MTU.
 TCP uses a sliding window mechanism for flow control.
 The sender maintains 3 pointers for each connection (a sketch follows below):
 Pointer to bytes sent and acknowledged
 Pointer to bytes sent, but not yet acknowledged
 Pointer to bytes that cannot yet be sent
 The sender window includes the bytes sent but not yet acknowledged.
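
The toy Python sketch below (not real TCP; the class and field names are invented) shows one way a sender could maintain these three pointers and slide the window on a cumulative ACK.

```python
# Toy sketch of the three sender-side pointers used in sliding-window flow control.
class SendWindow:
    def __init__(self, advertised_window):
        self.acked_up_to = 0        # pointer 1: bytes [0, acked_up_to) sent and ACKed
        self.next_to_send = 0       # pointer 2: bytes [acked_up_to, next_to_send) sent, unACKed
        self.window = advertised_window

    def send_limit(self):
        # Pointer 3: first byte that cannot yet be sent.
        return self.acked_up_to + self.window

    def on_send(self, nbytes):
        assert self.next_to_send + nbytes <= self.send_limit()
        self.next_to_send += nbytes          # these bytes are now in flight

    def on_ack(self, ack_no):
        # Cumulative ACK: everything before ack_no is acknowledged,
        # so the whole window slides forward.
        self.acked_up_to = max(self.acked_up_to, ack_no)
```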

Port Numbers:
• Port numbers are 16-bit integers (0 to 65,535)
 Servers use well-known ports; ports 0-1023 are privileged
 Clients use ephemeral (short-lived) ports
• The Internet Assigned Numbers Authority (IANA) maintains the list of port number assignments
 Well-known ports (0-1023): controlled and assigned by IANA
 Registered ports (1024-49151): IANA registers and lists the use of these ports as a
convenience (49152 is ¾ of 65536)
 Dynamic ports (49152-65535): ephemeral ports
Socket Addressing
• Process-to-process delivery needs two identifiers
 IP address and Port number
 Combination of IP address and port number is called a socket address (a socket
is a communication endpoint)
 Client socket address uniquely identifies client process
 Server socket address uniquely identifies server process
• Transport-layer protocol needs a pair of socket addresses
 Client socket address
 Server socket address
 For example, the socket pair for a TCP connection is a 4-tuple:
 local IP address, local port, and
 foreign IP address, foreign port (see the sketch below)
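
As a hedged illustration, the Python snippet below reads the 4-tuple from a live socket; example.com on port 80 is just a convenient public endpoint chosen for the example.

```python
# After a TCP connection is set up, the 4-tuple that identifies it can be
# read directly from the socket.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("example.com", 80))

local_ip, local_port = sock.getsockname()    # client side: ephemeral port
remote_ip, remote_port = sock.getpeername()  # server side: well-known port 80
print((local_ip, local_port, remote_ip, remote_port))  # the socket pair
sock.close()
```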
Multiplexing and Demultiplexing:
Multiplexing
The sender side may have several processes that need to send packets (albeit only one transport-layer
protocol).
Demultiplexing
At the receiver side, after error checking and removal of the header, the transport layer delivers each
message to the appropriate process.
• Flow Control
The receiver tells its peer exactly how many bytes it is willing to accept (the advertised window),
so the sender cannot overflow the receiver's buffer.
 Sender window: includes bytes sent but not acknowledged
 Receiver window: the number of empty locations in the receiver buffer
 The receiver advertises its window size in ACKs
 Sender window <= receiver window (flow control)
 Sliding sender window (without a change in the receiver’s advertised window)
 Expanding sender window (the receiving process consumes data faster than it receives it,
so the receiver window size increases)
 Shrinking sender window (the receiving process consumes data more slowly than it
receives it, so the receiver window size decreases)
 Closing sender window (the receiver advertises a window of zero)
• Error Control
 Mechanisms for detecting corrupted segments, lost segments, out-of-order
segments, and duplicated segments
 Tools: checksum (corruption), ACK, and time-out (one time-out counter per segment)
 A lost segment and a corrupted segment are the same situation: the segment is
retransmitted after the time-out (there is no NACK in TCP)
 Duplicate segment: the destination discards it
 Out-of-order segment: the destination does not acknowledge it until it receives
all segments that precede it
 Lost ACK: the loss of an ACK is usually irrelevant, since the ACK mechanism is cumulative

• Congestion Control
 TCP assumes that the cause of a lost segment is congestion in the network
 If the cause of the lost segment is congestion, retransmission of the segment does
not remove the problem; it actually aggravates it
 The network needs to tell the sender to slow down (this affects the sender window size
in TCP)
 Actual window size = min (receiver window size, congestion window size)
 The congestion window is flow control imposed by the sender
 The advertised window is flow control imposed by the receiver

TCP Segment Header:


 Source port and Destination port – identify the local end points of the connection.
 Sequence number and acknowledgement number (the acknowledgement number specifies
the next sequence number expected).
 TCP header length – tells how many 32-bit words are contained in the TCP header
(needed because the Options field is of variable length).
 Next comes a 6-bit field that is not used.
 Next come six 1-bit flags:
• URG is set to 1 if the Urgent pointer is in use. The Urgent pointer is used to
indicate a byte offset (from the current sequence number) at which urgent data is
located.
• ACK is set to 1 to indicate that the acknowledgement number field is valid.
If it is set to 0, the segment does not contain an acknowledgement.

• PSH indicates PUSHed data. The receiver is hereby kindly requested to deliver the
data to the application upon arrival and not buffer it (done for efficiency).
• RST bit is used to reset a connection that has become confused due to a host
crash or some other reason. It is also used to reject an invalid segment or refuse
an attempt to open a connection.
• SYN is used to establish connections: SYN=1 and ACK=0 is a connection request;
SYN=1 and ACK=1 is a connection accepted.
• FIN is used to release a connection. It specifies that the sender has no more
data to transmit.
 Window size field tells how many bytes may be sent starting at the byte acknowledged.

 A Checksum is also provided for extra reliability – it checksums the header and the data.
 Options field was designed to provide a way to add extra facilities not covered by the
regular header. For example, allow each host to specify the maximum TCP payload it is
willing to accept. (using large segments is more efficient than using small ones)
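
As an illustration of this layout, the following Python sketch unpacks the 20-byte fixed header; raw is assumed to hold the start of a TCP segment captured elsewhere.

```python
# Unpacking the 20-byte fixed TCP header with the struct module
# (network byte order).
import struct

def parse_tcp_header(raw: bytes):
    (src_port, dst_port, seq, ack, offset_reserved,
     flags, window, checksum, urg_ptr) = struct.unpack("!HHIIBBHHH", raw[:20])
    header_len = (offset_reserved >> 4) * 4   # header length field is in 32-bit words
    return dict(src=src_port, dst=dst_port, seq=seq, ack=ack,
                header_len=header_len, window=window,
                checksum=checksum, urgent_pointer=urg_ptr,
                flags=dict(URG=bool(flags & 0x20), ACK=bool(flags & 0x10),
                           PSH=bool(flags & 0x08), RST=bool(flags & 0x04),
                           SYN=bool(flags & 0x02), FIN=bool(flags & 0x01)))
```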

TCP Connection Establishment


• TCP uses a three-way handshake to open a connection:
(1) ACTIVE OPEN: Client sends a segment with
– SYN bit set *
– port number of client
– initial sequence number (ISN) of client
(2) PASSIVE OPEN: Server responds with a segment with
– SYN bit set *
– initial sequence number of server
– ACK for ISN of client
(3) Client acknowledges by sending a segment with:
– ACK for the ISN of server (* a SYN counts as one byte in the sequence space)

Fig. TCP connection establishment (SYN: Synchronize, ACK: Acknowledge)

TCP Connection Termination


• Each end of the data flow must be shut down independently (“half-close”).
• When one end is done, it sends a FIN segment, meaning that it will send no more data.
• Four steps involved:
(1) X sends a FIN to Y (active close)
(2) Y ACKs the FIN,
(at this time: Y can still send data to X)
(3) and Y sends a FIN to X (passive close)
(4) X ACKs the FIN.

• FIN: Finish
• Step 1 can be sent with data
• Steps 2 and 3 can be combined into 1 segment

UDP
 The Internet protocol suite also supports a connectionless transport protocol, UDP (User
Datagram Protocol).
 UDP provides a way for applications to send encapsulated IP datagrams without having to
establish a connection.
 Many client-server applications that have one request and one response use UDP rather than
go to the trouble of establishing and later releasing a connection.
 A UDP segment consists of an 8-byte header followed by the data.

UDP Header
 The two ports serve the same function as they do in TCP: to identify the end points
within the source and destination machines.
 The UDP length field includes the 8-byte header and the data.
 The UDP checksum is used to verify the integrity of the header and the data (it is computed over
a pseudo-header, the UDP header, and the data).
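
The following Python sketch (addresses, ports, and payload are caller-supplied placeholders) builds an 8-byte UDP header and computes the checksum over the pseudo-header, the header, and the data, as described above.

```python
# Building an 8-byte UDP header with its Internet checksum
# (16-bit one's-complement sum).
import struct
import socket

def inet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                            # pad to a 16-bit boundary
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF

def udp_segment(src_ip, dst_ip, src_port, dst_port, payload: bytes) -> bytes:
    length = 8 + len(payload)                      # header + data
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    pseudo = struct.pack("!4s4sBBH", socket.inet_aton(src_ip),
                         socket.inet_aton(dst_ip), 0, 17, length)  # 17 = UDP
    # Note: real UDP transmits a computed checksum of 0 as 0xFFFF;
    # that corner case is omitted here for brevity.
    csum = inet_checksum(pseudo + header + payload)
    return struct.pack("!HHHH", src_port, dst_port, length, csum) + payload
```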

Congestion Control and Quality of service


DATA TRAFFIC
The main focus of congestion control and quality of service is data traffic. In congestion control we try
to avoid traffic congestion. In quality of service, we try to create an appropriate environment for the
traffic.
Traffic Descriptor
Traffic descriptors are quantitative values that represent a data flow. The figure shows a traffic flow
with some of these values.

Average Data Rate


The average data rate is the number of bits sent during a period of time, divided by the
number of seconds in that period:

Average data rate = amount of data / time
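For example, a flow that sends 10,000,000 bits in 5 seconds has an average data rate of
10,000,000 / 5 = 2,000,000 bps, or 2 Mbps.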
Peak Data Rate
The peak data rate defines the maximum data rate of the traffic. The peak data rate is a very
important measurement because it indicates the peak bandwidth that the network needs for traffic
to pass through without changing its data flow.
Maximum Burst Size
Although the peak data rate is a critical value for the network, it can usually be ignored
if the duration of the peak is very short. The maximum burst size normally refers to the maximum
length of time the traffic is generated at the peak rate.
Effective Bandwidth
The effective bandwidth is the bandwidth that the network needs to allocate for the
flow of traffic. The effective bandwidth is a function of three values: average data rate,
peak data rate, and maximum burst size.
Traffic Profiles
For our purposes, a data flow can have one of the following traffic profiles: constant bit
rate, variable bit rate, or bursty, as shown in the figure below.

Constant Bit Rate


A constant-bit-rate (CBR), or fixed-rate, traffic model has a data rate that does not
change. In this type of flow, the average data rate and the peak data rate are the same.
Variable Bit Rate
In the variable-bit-rate (VBR) category, the rate of the data flow changes in time, with
the changes smooth instead of sudden and sharp.
Bursty
In the bursty data category, the data rate changes suddenly in a very short time. It
may jump from zero, for example, to 1 Mbps in a few microseconds and vice versa. Bursty traffic
is one of the main causes of congestion in a network.

CONGESTION
Congestion in a network may occur if the load on the network (the number of packets sent to the
network) is greater than the capacity of the network (the number of packets the network can handle).
Congestion control refers to the mechanisms and techniques used to control congestion and keep
the load below the capacity.
Congestion in a network or internetwork occurs because routers and switches have queues –
buffers that hold the packets before and after processing.

Network Performance
Congestion control involves two factors that measure the performance of a network: delay
and throughput. The figure shows these two performance measures as a function of the load.

Delay versus Load


When the load is much less than the capacity of the network, the delay is at a minimum. This
minimum delay is composed of propagation delay and processing delay, both of which are
negligible. However, when the load reaches the network capacity, the delay increases sharply.
Throughput versus Load
We can define throughput in a network as the number of packets passing through the network in
a unit of time. From the figure above, it can be seen that when the load is below the capacity of
the network, the throughput increases proportionally with the load. We would expect the throughput
to remain constant after the load reaches the capacity, but instead it declines sharply.

CONGESTION CONTROL
Congestion control refers to techniques and mechanisms that can either prevent congestion
before it happens or remove congestion after it has happened. We can divide congestion control
mechanisms into two broad categories: open-loop congestion control (prevention) and closed-loop
congestion control (removal), as shown in the figure.

Open-Loop Congestion Control


In open-loop congestion control, policies are applied to prevent congestion before it happens.
Retransmission Policy
Retransmission is sometimes unavoidable. If the sender feels that a sent packet is lost or
corrupted, the packet needs to be retransmitted. Retransmission in general may increase
congestion in the network. However, a good retransmission policy can prevent congestion.
Window Policy
The type of window at the sender may also affect congestion. The Selective Repeat
window is better than the Go-Back-N window for congestion control. The Selective Repeat
window tries to send the specific packets that have been lost or corrupted instead of sending
several packets.
Acknowledgment Policy
The acknowledgment policy imposed by the receiver may also affect congestion. If the receiver
does not acknowledge every packet it receives, it may slow down the sender and help prevent
congestion. Sending fewer acknowledgments means imposing less load on the network.
Discarding Policy
A good discarding policy by the routers may prevent congestion and at the same time may not
harm the integrity of the transmission.
Admission Policy
An admission policy, which is a quality-of-service mechanism, can also prevent congestion in
virtual-circuit networks.
Closed-Loop Congestion Control
Closed-loop congestion control mechanisms try to alleviate congestion after it happens. Several
mechanisms are described below.
Backpressure
The technique of backpressure refers to a congestion control mechanism in which a congested
node stops receiving data from the immediate upstream node or nodes. This may cause the
upstream node or nodes to become congested, and they, in turn, reject data from their own
upstream nodes. Backpressure is node-to-node congestion control that starts at a congested node
and propagates, in the direction opposite to the data flow, back to the source. The backpressure
technique can be applied only to virtual-circuit networks.

Choke Packet
A choke packet is a packet sent by a node to the source to inform it of congestion. In
backpressure, the warning is from one node to its upstream node, although the warning may
eventually reach the source station. In the choke packet method, the warning is from the router,
which has encountered congestion, to the source station directly. The intermediate nodes through
which the packet has travelled are not warned.

Implicit Signaling
In implicit signaling, there is no communication between the congested node or nodes and the
source. The source guesses that there is congestion somewhere in the network from other symptoms.
Explicit Signaling
The node that experiences congestion can explicitly send a signal to the source or destination.
The explicit signaling method, however, is different from the choke packet method: in the choke
packet method, a separate packet is used for this purpose, whereas in explicit signaling the
signal is included in the packets that carry data. Explicit signaling can occur in either the forward
or the backward direction.
Backward Signaling A bit can be set in a packet moving in the direction opposite to the congestion.
This bit can warn the source that there is congestion and that it needs to slow down to avoid the
discarding of packets.
Forward Signaling A bit can be set in a packet moving in the direction of the congestion. This bit
can warn the destination that there is congestion. The receiver in this case can use policies, such
as slowing down the acknowledgments, to alleviate the congestion.

Congestion Control in TCP


• TCP's general policy for handling congestion is based on three phases: slow start,
congestion avoidance, and congestion detection.
• In the slow-start phase, the sender starts with a small congestion window but
increases its rate rapidly (exponentially) until it reaches a threshold.
• When the threshold is reached, the rate of growth is reduced to a linear (additive)
increase to avoid congestion.
• Finally, if congestion is detected, the sender goes back to the slow-start or congestion
avoidance phase, depending on how the congestion was detected.

Slow Start: Exponential Increase


• The source starts with cwnd = 1.
• Every time an ACK arrives, cwnd is incremented.
 cwnd is effectively doubled per RTT “epoch”.
• There are two slow-start situations:
 At the very beginning of a connection (cold start).
 When the connection goes dead waiting for a timeout to occur (i.e., the
advertised window goes to zero).
 However, in the second case the source has more information: the current
value of cwnd can be saved as a congestion threshold.
 This is also known as the “slow-start threshold” ssthresh.
 In the slow-start algorithm, the size of the congestion window increases
exponentially until it reaches the threshold.

Congestion Avoidance: Additive Increase
If we start with the slow-start algorithm, the size of the congestion window increases
exponentially. To avoid congestion before it happens, this exponential growth must be slowed
down. TCP defines another algorithm, called congestion avoidance, which uses an additive
increase instead of an exponential one. When the size of the congestion window reaches the
slow-start threshold, the slow-start phase stops and the additive phase begins. In this algorithm,
each time the whole window of segments is acknowledged (one round), the size of the congestion
window is increased by 1.

In the congestion avoidance algorithm, the size of the congestion window increases additively
until congestion is detected.
Congestion Detection: Multiplicative Decrease
If congestion occurs, the congestion window size must be decreased. The only way the sender
can guess that congestion has occurred is by the need to retransmit a segment. Retransmission
occurs in one of two cases: when a timer times out or when three duplicate ACKs are received.
In both cases, the threshold is dropped to one-half of the current window size, a multiplicative
decrease. An implementation reacts to congestion detection in one of the following ways:
❏ If detection is by time-out, a new slow-start phase starts.
❏ If detection is by three duplicate ACKs, a new congestion avoidance phase starts.

Fig. TCP congestion policy summary
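
The toy simulation below (a deliberate simplification, not a real TCP implementation; the event names are invented) traces cwnd and ssthresh through the three phases just summarized.

```python
# Simplified trace of slow start (exponential), congestion avoidance
# (additive increase), and multiplicative decrease. Units are segments.
def simulate_tcp(events, ssthresh=16):
    """events: 'ack' (one full window acknowledged), 'timeout', or '3dupack'."""
    cwnd = 1
    for ev in events:
        if ev == "ack":
            if cwnd < ssthresh:
                cwnd = min(cwnd * 2, ssthresh)   # slow start: exponential growth
            else:
                cwnd += 1                        # congestion avoidance: additive
        elif ev == "timeout":
            ssthresh = max(cwnd // 2, 1)         # multiplicative decrease of threshold
            cwnd = 1                             # and restart slow start
        elif ev == "3dupack":
            ssthresh = max(cwnd // 2, 1)
            cwnd = ssthresh                      # go straight to congestion avoidance
        print(f"{ev:8s} -> cwnd={cwnd:3d} ssthresh={ssthresh:3d}")

simulate_tcp(["ack"] * 6 + ["timeout"] + ["ack"] * 4 + ["3dupack"] + ["ack"] * 2)
```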


Congestion Control in Frame Relay
Congestion in a Frame Relay network decreases throughput and increases delay. High
throughput and low delay are the main goals of the Frame Relay protocol. Frame Relay does not
have flow control, and it allows the user to transmit bursty data. This means that a Frame Relay
network has the potential to become seriously congested with traffic, and thus requires
congestion control.
Congestion Avoidance
For congestion avoidance, the Frame Relay protocol uses 2 bits in the frame to explicitly warn
the source and the destination of the presence of congestion.
BECN The backward explicit congestion notification (BECN) bit warns the sender of congestion
in the network. One might ask how this is accomplished, since the frames are traveling away from
the sender. In fact, there are two methods: the switch can use response frames from the receiver
(full-duplex mode), or else the switch can use a predefined connection (DLCI = 1023) to send
special frames for this specific purpose. The sender can respond to this warning by simply
reducing the data rate.

FECN The forward explicit congestion notification (FECN) bit is used to warn the receiver of
congestion in the network. It might appear that the receiver cannot do anything to relieve the
congestion. However, the Frame Relay protocol assumes that the sender and receiver are
communicating with each other and are using some type of flow control at a higher level.
When two endpoints are communicating over a Frame Relay network, four situations may
occur with regard to congestion. The figure shows these four situations and the values of the
FECN and BECN bits.

QUALITY OF SERVICE
Quality of service (QoS) is an internetworking issue that has been discussed more than defined. We
can informally define quality of service as something a flow seeks to attain.
Flow Characteristics
Traditionally, four types of characteristics are attributed to a flow: reliability, delay, jitter, and bandwidth.

Reliability
Reliability is a characteristic that a flow needs. Lack of reliability means losing a packet or
acknowledgment, which entails retransmission.
Delay
Source-to-destination delay is another flow characteristic.
Jitter
Jitter is the variation in delay for packets belonging to the same flow.
Bandwidth
Different applications need different bandwidths.

TECHNIQUES TO IMPROVE QoS


We briefly discuss four common methods: scheduling, traffic shaping, admission control, and
resource reservation.

Scheduling
Packets from different flows arrive at a switch or router for processing. A good scheduling
technique treats the different flows in a fair and appropriate manner. Several scheduling
techniques are designed to improve the quality of service. We discuss three of them here:
FIFO queuing, priority queuing, and weighted fair queuing.
FIFO Queuing
In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node (router or switch)
is ready to process them. If the average arrival rate is higher than the average processing rate,
the queue will fill up and new packets will be discarded.

Priority Queuing
In priority queuing, packets are first assigned to a priority class. Each priority class has its own
queue. The packets in the highest-priority queue are processed first. Packets in the lowest-priority
queue are processed last.

Weighted Fair Queuing


A better scheduling method is weighted fair queuing. In this technique, the packets are still
assigned to different classes and admitted to different queues. The queues, however, are
weighted based on the priority of the queues; higher priority means a higher weight. The system
processes packets in each queue in a round-robin fashion with the number of packets selected
from each queue based on the corresponding weight.
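
The following toy Python sketch illustrates this weight-proportional round-robin service; the queue contents and weights are made up for the example.

```python
# Toy weighted fair queuing: serve queues round-robin, taking a number
# of packets per round proportional to each queue's weight.
from collections import deque

def weighted_fair_dequeue(queues, weights):
    """queues: list of deques of packets; weights: packets served per round."""
    served = []
    while any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q:
                    served.append(q.popleft())
    return served

high = deque(["H1", "H2", "H3", "H4"])   # weight 3: higher priority
low = deque(["L1", "L2", "L3", "L4"])    # weight 1: lower priority
print(weighted_fair_dequeue([high, low], weights=[3, 1]))
# -> ['H1', 'H2', 'H3', 'L1', 'H4', 'L2', 'L3', 'L4']
```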

Traffic Shaping
• Traffic shaping controls the rate at which packets are sent (not just how many)
• At connection set-up time, the sender and carrier negotiate a traffic pattern (shape)
• Two traffic shaping algorithms are:
– Leaky Bucket
– Token Bucket
The Leaky Bucket Algorithm
• The leaky bucket algorithm is used to control the rate in a network. It is implemented as a
single-server queue with constant service time. If the bucket (buffer) overflows, packets are discarded.
• The leaky bucket enforces a constant output rate regardless of the burstiness of the input. It does
nothing when the input is idle.
• The host injects one packet per clock tick onto the network. This results in a uniform flow of packets,
smoothing out bursts and reducing congestion.
• When packets are all the same size (as with ATM cells), one packet per tick is fine. For variable-length
packets, it is better to allow a fixed number of bytes per tick.

A simple leaky bucket implementation is shown in Figure below. A FIFO queue holds the packets. If the
traffic consists of fixed-size packets the process removes a fixed number of packets from the queue at
each tick of the clock. If the traffic consists of variable-length packets, the fixed output rate must be
based on the number of bytes or bits.
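
A byte-based leaky bucket along these lines might be sketched as follows (the capacity and rate parameters are illustrative assumptions):

```python
# Byte-based leaky bucket: a bounded FIFO buffer, a fixed byte budget
# drained per clock tick, and overflow discarded.
from collections import deque

class LeakyBucket:
    def __init__(self, capacity_bytes, rate_bytes_per_tick):
        self.capacity = capacity_bytes
        self.rate = rate_bytes_per_tick
        self.queue = deque()              # packet sizes waiting in the bucket
        self.level = 0                    # bytes currently buffered

    def arrive(self, packet_size):
        if self.level + packet_size > self.capacity:
            return False                  # bucket overflow: packet discarded
        self.queue.append(packet_size)
        self.level += packet_size
        return True

    def tick(self):
        budget = self.rate                # constant output rate per tick
        sent = []
        while self.queue and self.queue[0] <= budget:
            size = self.queue.popleft()
            budget -= size
            self.level -= size
            sent.append(size)
        return sent                       # packets released this tick
```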

Token Bucket Algorithm


• In contrast to the leaky bucket, the token bucket (TB) algorithm allows the output rate to vary,
depending on the size of the burst.
• In the TB algorithm, the bucket holds tokens. To transmit a packet, the host must capture and
destroy one token.
• Tokens are generated by a clock at the rate of one token every t sec.
• Idle hosts can capture and save up tokens (up to the maximum size of the bucket) in order to
send larger bursts later.
• The token bucket allows bursty traffic at a regulated maximum rate.
Token bucket operation
• The TB accumulates fixed-size tokens in a token bucket.
• It transmits a packet (from the data buffer, if any are waiting, or an arriving packet) if the tokens
in the bucket add up to the packet size.
• More tokens are added to the bucket periodically (one every t sec). Tokens that arrive when the
bucket is full are discarded.
Token bucket properties
• It does not bound the peak rate of small bursts, because the bucket may contain enough tokens
to cover a complete burst.
• Performance depends only on the sum of the data buffer size and the token bucket size.
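
The Python sketch below follows the byte-counting description above; all parameter values are illustrative.

```python
# Token bucket: tokens accumulate at a fixed rate up to the bucket size;
# a packet may be sent only when enough tokens are present.
class TokenBucket:
    def __init__(self, bucket_size, tokens_per_tick):
        self.bucket_size = bucket_size
        self.rate = tokens_per_tick
        self.tokens = 0

    def tick(self):
        # Tokens that would overfill the bucket are discarded.
        self.tokens = min(self.bucket_size, self.tokens + self.rate)

    def try_send(self, packet_size):
        if packet_size <= self.tokens:
            self.tokens -= packet_size    # capture (destroy) the tokens
            return True
        return False                      # not enough tokens yet: wait

bucket = TokenBucket(bucket_size=500, tokens_per_tick=100)
for _ in range(3):
    bucket.tick()                         # an idle host saves up 300 tokens
print(bucket.try_send(250), bucket.try_send(250))  # True, then False (50 left)
```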

Name servers
The DNS name space is divided into nonoverlapping zones. A zone normally has one primary name
server, which gets its information from a file on its disk, and one or more secondary name servers,
which get their information from the primary name server.
When a resolver has a query about a domain name, it passes the query to one of the local name
servers.
 If the domain being sought falls under the jurisdiction of the name server, it returns the authoritative
records (which are always correct).
 Once these records get back to the local name server, they are entered into a cache there (controlled
by a timer).
SNMP - Simple Network Management Protocol
The SNMP model
The SNMP model of a managed network consists of four components:
1. Managed nodes
2. Management stations
3. Management information
4. A management protocol
Network management is done from management stations: general-purpose computers with a
graphical user interface.
ASN.1 - Abstract Syntax Notation 1
The heart of the SNMP model is the set of objects managed by the agents and read and written by the
management station.
To make multivendor communication possible, it is essential that these objects be defined in a standard
and vendor-neutral way.
Furthermore, a standard way is needed to encode them for transfer over a network.
A standard object definition language, along with encoding rules, is needed. The one used by SNMP is
taken from OSI and called ASN.1 (Abstract Syntax Notation One), defined in International Standard
8824.
The rules for encoding ASN.1 data structures to a bit stream for transmission are given in International
Standard 8825. The format of the bit stream is called the transfer syntax.
The basic idea:
 The users first define the data structure types in their applications in ASN.1 notation.
 When an application wants to transmit a data structure, it passes the data structure to the presentation
layer (in the OSI model), along with the ASN.1 definition of the data structure.
 Using the ASN.1 definition as a guide, the presentation layer then knows what the types and sizes of
the fields in the data structure are, and thus knows how to encode them for transmission according to
the ASN.1 transfer syntax.
 Using the ASN.1 transfer syntax as a guide, the receiving presentation layer is able to do any necessary
conversions from the external format used on the wire to the internal format used by the receiving
computer, and pass a semantically equivalent data structure to the application layer.
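
As a small concrete illustration of a transfer syntax, the sketch below encodes a single ASN.1 type, INTEGER, in BER's tag-length-value form (short-form lengths only; a simplified sketch rather than a full BER encoder).

```python
# BER transfer syntax for ASN.1 INTEGER: tag byte 0x02, a length byte
# (short form, for values shorter than 128 bytes), then the value in
# big-endian two's complement.
def ber_encode_integer(value: int) -> bytes:
    nbytes = max(1, (value.bit_length() + 8) // 8)  # +8 leaves room for the sign bit
    body = value.to_bytes(nbytes, byteorder="big", signed=True)
    return bytes([0x02, len(body)]) + body

print(ber_encode_integer(5).hex())    # 020105
print(ber_encode_integer(300).hex())  # 0202012c
```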
