CET_EDU_CN Lecture Notes
The transport layer provides communication between processes running on different hosts.
A process is an instance of a program that is running on a host.
There may be multiple processes communicating between two hosts; for example,
there could be an FTP session and a Telnet session between the same two
hosts.
• Congestion Control
TCP assumes that the cause of a lost segment is congestion in the network
If the cause of the lost segment is congestion, retransmitting the segment does
not remove the problem; it actually aggravates it
The network needs to tell the sender to slow down (this affects the sender window size
in TCP)
Actual window size = Min (receiver window size, congestion window size)
The congestion window is flow control imposed by the sender
The advertised window is flow control imposed by the receiver
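The window rule above is simple enough to state directly as code. A minimal sketch in Python (the function name is ours; window sizes in bytes):

```python
def effective_window(receiver_window: int, congestion_window: int) -> int:
    """Actual number of bytes the sender may have outstanding:
    the minimum of the receiver-advertised window (flow control
    imposed by the receiver) and the congestion window (flow
    control imposed by the sender)."""
    return min(receiver_window, congestion_window)

# Example: the receiver advertises 64 KB, but cwnd is only 8 KB,
# so the sender is limited to 8 KB in flight.
print(effective_window(65535, 8192))  # 8192
```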
• PSH bit indicates PUSHed data. The receiver is hereby requested to deliver the
data to the application upon arrival rather than buffer it (done for efficiency)
• RST bit is used to reset a connection that has become confused due to a host
crash or some other reason. It is also used to reject an invalid segment or refuse
an attempt to open a connection.
• SYN bit is used to establish connections. SYN=1 and ACK=0 – connection request,
SYN=1 and ACK=1 – connection accepted.
• FIN bit is used to release a connection. It specifies that the sender has no more
data to transmit.
The Window size field tells how many bytes may be sent, starting at the byte acknowledged.
A Checksum is also provided for extra reliability; it covers both the header and the
data.
The Options field was designed to provide a way to add extra facilities not covered by the
regular header. For example, it allows each host to specify the maximum TCP payload it is
willing to accept (using large segments is more efficient than using small ones).
• SYN: Synchronize
• ACK: Acknowledge
• FIN: Finish
• Step 1 can be sent with data
• Steps 2 and 3 can be combined into one segment
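The handshake flag exchange described above can be sketched as a list of segments. This is an illustration only (the flag dictionaries and function name are ours; real TCP carries these flags and sequence numbers in the segment header):

```python
def three_way_handshake(client_isn: int, server_isn: int):
    """Return the three segments exchanged when opening a TCP connection.
    SYN=1, ACK=0 is a connection request; SYN=1, ACK=1 accepts it,
    combining steps 2 and 3 of the server's reply into one segment."""
    return [
        # Step 1: client's connection request (may carry data)
        {"dir": "C->S", "SYN": 1, "ACK": 0, "seq": client_isn},
        # Steps 2+3 combined: server accepts and acknowledges the SYN
        {"dir": "S->C", "SYN": 1, "ACK": 1, "seq": server_isn,
         "ack": client_isn + 1},
        # Final ACK from the client completes the connection
        {"dir": "C->S", "SYN": 0, "ACK": 1, "seq": client_isn + 1,
         "ack": server_isn + 1},
    ]

segments = three_way_handshake(client_isn=100, server_isn=300)
print(segments[1]["ack"])  # 101 (acknowledges the client's SYN)
```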
UDP
The Internet protocol suite also supports a connectionless transport protocol, UDP (User
Datagram Protocol).
UDP provides a way for applications to send encapsulated raw IP datagrams
without having to establish a connection.
Many client-server applications that have 1 request and 1 response use UDP rather than
go to the trouble of establishing and later releasing a connection.
A UDP segment consists of an 8-byte header followed by the data.
UDP Header
The two ports serve the same function as they do in TCP: to identify the end points
within the source and destination machines.
The UDP length field includes the 8-byte header and the data.
The UDP checksum is used to verify the integrity of the header and the data.
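UDP's connectionless request/response pattern is easy to see with Python's socket API: one datagram out, one back, and no setup or teardown. A minimal loopback echo sketch (port 0 lets the OS pick a free port):

```python
import socket
import threading

def udp_echo_server(sock: socket.socket) -> None:
    """Handle exactly one request: receive a datagram, echo it back."""
    data, addr = sock.recvfrom(2048)
    sock.sendto(data, addr)

# Bind a server socket on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
threading.Thread(target=udp_echo_server, args=(server,), daemon=True).start()

# The client sends a single datagram: no connection establishment or release.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2)
client.sendto(b"ping", server.getsockname())
reply, _ = client.recvfrom(2048)
print(reply)  # b'ping'
```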
CONGESTION
Congestion in a network may occur if the load on the network (the number of packets sent to the
network) is greater than the capacity of the network (the number of packets a network can handle).
Congestion control refers to the mechanisms and techniques used to control congestion and keep
the load below the capacity.
Congestion in a network or internetwork occurs because routers and switches have queues:
buffers that hold the packets before and after processing.
Network Performance
Congestion control involves two factors that measure the performance of a network: delay
and throughput. The figure shows these two performance measures as a function of load.
CONGESTION CONTROL
Congestion control refers to techniques and mechanisms that can either prevent congestion
before it happens or remove congestion after it has happened. We can divide congestion control
mechanisms into two broad categories: open-loop congestion control (prevention) and closed-
loop congestion control (removal), as shown in the figure.
Choke Packet
A choke packet is a packet sent by a node to the source to inform it of congestion. In
backpressure, the warning is from one node to its upstream node, although the warning may
eventually reach the source station. In the choke packet method, the warning is from the router,
which has encountered congestion, to the source station directly. The intermediate nodes through
which the packet has travelled are not warned.
Implicit Signaling
In implicit signaling, there is no communication between the congested node or nodes and the
source. The source guesses that there is congestion somewhere in the network from other
symptoms.
Explicit Signaling
The node that experiences congestion can explicitly send a signal to the source or destination.
The explicit signaling method, however, is different from the choke packet method. In the choke
packet method, a separate packet is used for this purpose; in the explicit signaling method, the
signal is included in the packets that carry data. Explicit signaling can occur in either the forward
or the backward direction.
Backward Signaling A bit can be set in a packet moving in the direction opposite to the congestion.
This bit can warn the source that there is congestion and that it needs to slow down to avoid the
discarding of packets.
Forward Signaling A bit can be set in a packet moving in the direction of the congestion. This bit
can warn the destination that there is congestion. The receiver in this case can use policies, such
as slowing down the acknowledgments, to alleviate the congestion.
Congestion Avoidance: Additive Increase If we start with the slow-start algorithm, the size of
the congestion window increases exponentially. To avoid congestion before it happens, one must
slow down this exponential growth. TCP defines another algorithm called congestion avoidance,
which undergoes an additive increase instead of an exponential one. When the size of the
congestion window reaches the slow-start threshold, the slow-start phase stops and the additive
phase begins. In this algorithm, each time the whole window of segments is acknowledged (one
round), the size of the congestion window is increased by 1.
In the congestion avoidance algorithm, the size of the congestion window increases additively
until congestion is detected.
Congestion Detection: Multiplicative Decrease If congestion occurs, the congestion window
size must be decreased. The only way the sender can guess that congestion has occurred is by
the need to retransmit a segment. However, retransmission can occur in one of two cases: when
a timer times out or when three duplicate ACKs are received. In both cases, the threshold is
dropped to one-half of the current window size, a multiplicative decrease.
An implementation reacts to congestion detection in one of the following ways:
❏ If detection is by time-out, a new slow-start phase starts.
❏ If detection is by three duplicate ACKs, a new congestion avoidance phase starts.
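The slow-start, additive-increase, and multiplicative-decrease rules above can be combined into one transition function. A sketch in Python, counting window sizes in whole segments (real TCP works in bytes and updates per ACK; the function and event names are ours):

```python
def next_cwnd(cwnd: int, ssthresh: int, event: str):
    """One round of TCP congestion control (sizes in segments).

    event: 'ack'      - whole window of segments acknowledged
           'timeout'  - retransmission timer expired
           '3dupack'  - three duplicate ACKs received
    Returns (new_cwnd, new_ssthresh)."""
    if event == "ack":
        if cwnd < ssthresh:
            return min(cwnd * 2, ssthresh), ssthresh  # slow start: exponential
        return cwnd + 1, ssthresh                     # congestion avoidance: additive
    new_ssthresh = max(cwnd // 2, 1)                  # multiplicative decrease
    if event == "timeout":
        return 1, new_ssthresh                        # restart with slow start
    return new_ssthresh, new_ssthresh                 # 3 dup ACKs: congestion avoidance

# cwnd grows 1 -> 2 -> 4 -> 8 (slow start), then 8 -> 9 (additive increase)
cwnd, ssthresh = 1, 8
for _ in range(4):
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, "ack")
print(cwnd)  # 9
```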
FECN The forward explicit congestion notification (FECN) bit is used to warn the receiver of
congestion in the network. It might appear that the receiver cannot do anything to relieve the
congestion. However, the Frame Relay protocol assumes that the sender and receiver are
communicating with each other and are using some type of flow control at a higher level.
When two endpoints are communicating using a Frame Relay network, four situations may
occur with regard to congestion. Figure shows these four situations and the values of FECN and
BECN.
QUALITY OF SERVICE
Quality of service (QoS) is an internetworking issue that has been discussed more than defined. We
can informally define quality of service as something a flow seeks to attain.
Flow Characteristics
Traditionally, four types of characteristics are attributed to a flow: reliability, delay, jitter, and bandwidth.
Reliability
Reliability is a characteristic that a flow needs. Lack of reliability means losing a packet or
acknowledgment, which entails retransmission.
Delay
Source-to-destination delay is another flow characteristic.
Jitter
Jitter is the variation in delay for packets belonging to the same flow.
Bandwidth
Different applications need different bandwidths.
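Jitter can be quantified as, for example, the mean absolute difference between consecutive packet delays. A small sketch of that measure (one common choice; RFC 3550 defines a smoothed estimator for RTP instead):

```python
def jitter(delays):
    """Jitter as the variation in delay for packets of the same flow:
    mean absolute difference between consecutive packet delays."""
    diffs = [abs(b - a) for a, b in zip(delays, delays[1:])]
    return sum(diffs) / len(diffs)

# Delays in ms for four packets of the same flow:
print(jitter([100, 105, 95, 100]))  # (5 + 10 + 5) / 3, roughly 6.67 ms
```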
Scheduling
Packets from different flows arrive at a switch or router for processing. A good scheduling
technique treats the different flows in a fair and appropriate manner. Several scheduling
techniques are designed to improve the quality of service. We discuss three of them here:
FIFO queuing, priority queuing, and weighted fair queuing.
FIFO Queuing
In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node (router or switch)
is ready to process them. If the average arrival rate is higher than the average processing rate,
the queue will fill up and new packets will be discarded.
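The drop-tail behaviour of FIFO queuing (new packets are discarded once the buffer fills) can be sketched in a few lines (the function name and packet labels are ours):

```python
from collections import deque

def fifo_enqueue(queue: deque, packet, capacity: int) -> bool:
    """Drop-tail FIFO: accept the packet if the buffer has room,
    otherwise discard it, as happens when the average arrival rate
    exceeds the average processing rate."""
    if len(queue) < capacity:
        queue.append(packet)
        return True
    return False  # queue full: the new packet is discarded

q = deque()
accepted = [fifo_enqueue(q, f"pkt{i}", capacity=3) for i in range(5)]
print(accepted)  # [True, True, True, False, False]
print(list(q))   # ['pkt0', 'pkt1', 'pkt2']
```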
Priority Queuing
In priority queuing, packets are first assigned to a priority class. Each priority class has its own
queue. The packets in the highest-priority queue are processed first. Packets in the lowest-priority
queue are processed last.
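Priority queuing can be sketched with a heap keyed on (priority class, arrival order), so the highest-priority class is always served first and each class stays FIFO internally. A Python sketch (the class and method names are ours):

```python
import heapq
import itertools

class PriorityQueueScheduler:
    """Sketch of priority queuing: lower class number = higher priority;
    within a class, ties are broken by arrival order (FIFO)."""

    def __init__(self):
        self._heap = []
        self._arrival = itertools.count()  # preserves FIFO order within a class

    def enqueue(self, priority_class: int, packet) -> None:
        heapq.heappush(self._heap, (priority_class, next(self._arrival), packet))

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

sched = PriorityQueueScheduler()
sched.enqueue(2, "bulk-1")
sched.enqueue(1, "voice-1")
sched.enqueue(2, "bulk-2")
print(sched.dequeue())  # voice-1 (highest-priority queue processed first)
print(sched.dequeue())  # bulk-1  (then FIFO within the lower class)
```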
Traffic Shaping
• Traffic shaping controls the rate at which packets are sent (not just how many)
• At connection set-up time, the sender and carrier negotiate a traffic pattern (shape)
• Two traffic shaping algorithms are:
– Leaky Bucket
– Token Bucket
The Leaky Bucket Algorithm
• The Leaky Bucket Algorithm is used to control the rate in a network. It is implemented as a single-server
queue with a constant service time. If the bucket (buffer) overflows, packets are discarded.
• The leaky bucket enforces a constant output rate regardless of the burstiness of the input. It does
nothing when the input is idle.
• The host injects one packet per clock tick onto the network. This results in a uniform flow of packets,
smoothing out bursts and reducing congestion.
• When packets are the same size (as in ATM cells), sending one packet per tick is fine. For variable-length
packets, however, it is better to allow a fixed number of bytes per tick.
A simple leaky bucket implementation is shown in Figure below. A FIFO queue holds the packets. If the
traffic consists of fixed-size packets the process removes a fixed number of packets from the queue at
each tick of the clock. If the traffic consists of variable-length packets, the fixed output rate must be
based on the number of bytes or bits.
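The byte-counting variant just described can be sketched as follows (the names and per-tick bookkeeping are ours; arrivals that would overflow the bucket are dropped):

```python
def leaky_bucket(arrivals, capacity: int, rate: int):
    """Byte-counting leaky bucket: at each clock tick up to `rate` bytes
    leave the bucket; arrivals that would exceed `capacity` are discarded.
    `arrivals[t]` is the number of bytes arriving at tick t.
    Returns the number of bytes sent at each tick."""
    level = 0
    sent = []
    for burst in arrivals:
        level = min(level + burst, capacity)  # overflow is dropped
        out = min(level, rate)                # constant output rate
        level -= out
        sent.append(out)
    return sent

# A 10-byte burst is smoothed into a steady 3 bytes per tick.
print(leaky_bucket([10, 0, 0, 0], capacity=10, rate=3))  # [3, 3, 3, 1]
```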
Name servers
The DNS name space is divided up into nonoverlapping zones. A zone normally has one primary name
server, which gets its information from a file on its disk, and one or more secondary name servers,
which get their information from the primary name server.
When a resolver has a query about a domain name, it passes the query to one of the local name
servers.
If the domain being sought falls under the jurisdiction of the name server, it returns the authoritative
records (always correct).
Once these records get back to the local name server, they will be entered into a cache there (timer
controlled).
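The timer-controlled cache at the local name server can be sketched as a dictionary of records with expiry times (the class name, record format, and TTL value here are illustrative):

```python
import time

class NameServerCache:
    """Sketch of a local name server's cache: records received from a
    remote server are stored with a time-to-live and expire after it,
    forcing a fresh query for up-to-date (authoritative) data."""

    def __init__(self):
        self._cache = {}

    def store(self, name: str, record: str, ttl: float, now=None) -> None:
        now = time.monotonic() if now is None else now
        self._cache[name] = (record, now + ttl)

    def lookup(self, name: str, now=None):
        now = time.monotonic() if now is None else now
        entry = self._cache.get(name)
        if entry is None or now > entry[1]:
            return None  # miss or expired: must re-query a name server
        return entry[0]

cache = NameServerCache()
cache.store("example.com", "93.184.216.34", ttl=300, now=0)
print(cache.lookup("example.com", now=100))  # 93.184.216.34
print(cache.lookup("example.com", now=400))  # None (record expired)
```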
SNMP - Simple Network Management Protocol
The SNMP model
The SNMP model of a managed network consists of four components
1. Managed nodes.
2. Management stations.
3. Management information.
4. A management protocol.
Network management is done from management stations: general-purpose
computers with a graphical user interface.
ASN.1 - Abstract Syntax Notation One
The heart of the SNMP model is the set of objects managed by the agents and read and written by the
management station.
To make multivendor communication possible, it is essential that these objects be defined in a standard
and vendor-neutral way.
Furthermore, a standard way is needed to encode them for transfer over a network.
A standard object definition language, along with encoding rules, is needed. The one used by SNMP is
taken from OSI and called ASN.1 (Abstract Syntax Notation One), defined in International Standard
8824.
The rules for encoding ASN.1 data structures to a bit stream for transmission are given in International
Standard 8825. The format of the bit stream is called the transfer syntax.
The basic idea:
The users first define the data structure types in their applications in ASN.1 notation.
When an application wants to transmit a data structure, it passes the data structure to the presentation
layer (in the OSI model), along with the ASN.1 definition of the data structure.
Using the ASN.1 definition as a guide, the presentation layer then knows what the types and sizes of
the fields in the data structure are, and thus knows how to encode them for transmission according to
the ASN.1 transfer syntax.
Using the ASN.1 transfer syntax as a guide, the receiving presentation layer is able to do any necessary
conversions from the external format used on the wire to the internal format used by the receiving
computer, and pass a semantically equivalent data structure to the application layer.
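As a concrete taste of a transfer syntax, here is a minimal sketch of BER encoding for one ASN.1 type, INTEGER, as a tag-length-value triple (non-negative values and short-form lengths only; real BER, defined in International Standard 8825 / X.690, covers many more types and cases):

```python
def ber_encode_integer(value: int) -> bytes:
    """BER-encode a non-negative ASN.1 INTEGER: tag octet 0x02,
    a short-form length octet, then big-endian content octets.
    The extra bit in the length computation keeps the leading
    (sign) bit zero, as BER requires for positive values."""
    if value == 0:
        content = b"\x00"
    else:
        length = (value.bit_length() + 8) // 8
        content = value.to_bytes(length, "big")
    return bytes([0x02, len(content)]) + content

print(ber_encode_integer(5).hex())    # 020105
print(ber_encode_integer(300).hex())  # 0202012c
```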