UNIT 2 - Connectionless and Connection Oriented Protocol PDF
Connectionless service
[Comparison table: Sr. No. | Parameters | Connection oriented | Connectionless]
Checksum calculation example (one's complement addition of two 16-bit words):

               1110011001100110
             + 1101010101010101
             -------------------
 sum          11011101110111011   (17 bits: the addition overflows)
 wraparound    1011101110111100   (the carry is added back into the low-order bit)
 checksum      0100010001000011   (one's complement of the wraparound sum)
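The same computation can be sketched in Python. Only the two 16-bit words come from the example above; the function itself is an illustrative sketch of the 16-bit one's complement checksum used by IP, UDP and TCP.

```python
def ones_complement_checksum(words):
    """One's complement sum of 16-bit words, wrapped around, then complemented."""
    total = 0
    for w in words:
        total += w
        # Wrap any carry out of bit 16 back into the low-order bit.
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF      # one's complement of the wraparound sum

# The two 16-bit words from the example above.
w1 = 0b1110011001100110
w2 = 0b1101010101010101
print(format(ones_complement_checksum([w1, w2]), "016b"))  # -> 0100010001000011
```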
Fields and their descriptions:

iph_ver: 4 bits, the version of IP currently used; the IP version here is 4 (the other version is IPv6).

iph_ihl: 4 bits, the IP header (datagram) length in 32-bit words, pointing to the beginning of the data. The minimum value for a correct header is 5, which means 20 bytes (5 * 4).

iph_tos: 8 bits, type of service; controls the priority of the packet.

iph_len: 16 bits, total length; must contain the total length of the IP datagram (header and data) in bytes. This includes the IP header, the ICMP, TCP or UDP header, and the payload size in bytes.

iph_ident: 16 bits; the identification field is mainly used for reassembly of fragmented IP datagrams.

iph_flag: consists of a 3-bit field of which the two low-order (least-significant) bits control fragmentation. The low-order bit specifies whether the packet can be fragmented. The middle bit specifies whether the packet is the last fragment in a series of fragmented packets. The third or high-order bit is not used.

ihp_offset: the fragment offset, used for reassembly of fragmented datagrams. The first 3 bits of this 16-bit word are the fragment flags: the first is always 0, the second is the do-not-fragment bit (set by ihp_offset = 0x4000) and the third is the more-flag or more-fragments-following bit (ihp_offset = 0x2000).

iph_ttl: 8 bits, time to live; the number of hops (routers to pass) before the packet is discarded and an ICMP error message is returned. The maximum is 255.

iph_protocol: 8 bits, the transport layer protocol. It can be TCP (6), UDP (17), ICMP (1), or whatever protocol follows the IP header.

iph_chksum: 16 bits, a checksum computed over the IP header only, not the data.

iph_source: 32 bits, source IP address. It is converted to long format.

iph_dest: 32 bits, destination IP address, converted to long format, e.g. by inet_addr().

Options: Variable.

Padding: Variable. The internet header padding is used to ensure that the internet header ends on a 32-bit boundary.
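To make this layout concrete, here is a minimal Python sketch that unpacks the fixed 20-byte IPv4 header into the fields listed above. The dictionary keys reuse the field names from the table; the parser itself is an illustrative assumption, not code from the original material.

```python
import struct

def parse_ip_header(raw: bytes):
    """Parse the fixed 20-byte IPv4 header described above (options are not parsed)."""
    ver_ihl, tos, total_len, ident, flags_offset, ttl, proto, chksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "iph_ver": ver_ihl >> 4,              # high nibble: IP version (4)
        "iph_ihl": ver_ihl & 0x0F,            # low nibble: header length in 32-bit words
        "iph_tos": tos,
        "iph_len": total_len,                 # header + data, in bytes
        "iph_ident": ident,
        "iph_flag": flags_offset >> 13,       # top 3 bits: fragmentation flags
        "ihp_offset": flags_offset & 0x1FFF,  # low 13 bits: fragment offset
        "iph_ttl": ttl,
        "iph_protocol": proto,                # 6 = TCP, 17 = UDP, 1 = ICMP
        "iph_chksum": chksum,
        "iph_source": ".".join(str(b) for b in src),
        "iph_dest": ".".join(str(b) for b in dst),
    }
```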
Ports and sockets
• This section introduces the concepts of port and socket, which are
needed to determine which local process at a given host actually
communicates with which process, at which remote host, using
which protocol. If this sounds confusing, consider the following:
[Figure: stop-and-wait timeline — RDT_Send and Deliver_Data events on sender and receiver, with a timeout running until the ACK arrives.]
• Sender:
– Send a packet, then wait
– If an ACK is received, send the next packet
– If a timeout occurs, retransmit the same packet
• Receiver:
– When a packet is received correctly, send an ACK
• Stop-and-wait with ARQ: Automatic Repeat reQuest (ARQ), an error control method, is incorporated into the stop-and-wait flow control protocol.
– If an error is detected by the receiver, it discards the frame and sends a negative ACK (NAK), causing the sender to re-send the frame
– In case a frame never reaches the receiver, the sender has a timer: each time a frame is sent, the timer is set
→ If no ACK or NAK is received during the timeout period, the sender re-sends the frame
– The timer introduces a problem: suppose a timeout occurs and the sender retransmits a frame, but the receiver actually received the previous transmission → the receiver has duplicate copies
– To avoid receiving and accepting two copies of the same frame, frames and ACKs are alternately numbered 0 and 1: ACK0 acknowledges frame 1 (the receiver now expects frame 0), and ACK1 acknowledges frame 0
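As a sketch of the sender's side of this behaviour, the loop below uses alternating 0/1 sequence numbers and retransmits on timeout or NAK. The send_frame and wait_for_ack callables are placeholders assumed for illustration; they are not part of the original material.

```python
def stop_and_wait_send(frames, send_frame, wait_for_ack, timeout=1.0):
    """Send each frame with an alternating 0/1 sequence bit, retransmitting until ACKed."""
    seq = 0
    for payload in frames:
        while True:
            send_frame(seq, payload)          # transmit the frame with the current sequence bit
            ack = wait_for_ack(timeout)       # expected: ("ACK", n) or ("NAK", n); None on timeout
            if ack is not None and ack[0] == "ACK" and ack[1] == 1 - seq:
                break                         # ACK names the other bit: receiver expects it next
            # timeout, NAK, or duplicate ACK: re-send the same frame
        seq = 1 - seq                         # alternate 0, 1, 0, 1, ...
```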
An important link parameter, a, is defined by

a = propagation time / transmission time = (d / V) / (L / R) = Rd / (VL)

where R is the data rate (bps), d is the link distance (m), V is the propagation velocity (m/s) and L is the frame length (bits).
In the error-free case, the efficiency or maximum link utilization of stop-and-wait with ARQ is:

U = 1 / (1 + 2a)
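As an illustration (these numbers are assumed, not from the original): a 1 km link with V = 2 × 10^8 m/s, R = 10 Mbps and L = 1000-bit frames gives a = (1000 / (2 × 10^8)) / (1000 / 10^7) = 0.05, so U = 1 / (1 + 0.1) ≈ 0.91; on a link where a >> 1 (long distance or very high rate), U falls toward zero, which is why stop-and-wait becomes inefficient.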
Recovering from Error
[Figure: recovering from error — three stop-and-wait timelines showing a lost packet, a lost ACK, and an early (premature) timeout, each followed by retransmission when the timeout expires.]
Performance of Stop and Wait
• Can only send one packet per round trip
• network protocol limits use of physical resources!
[Figure: stop-and-wait timeline — the sender transmits the first packet bit at t = 0 and the last packet bit at t = L / R; the ACK returns roughly one RTT later.]

U_sender = (L / R) / (RTT + L / R) = 0.008 / 30.008 = 0.00027, i.e. about 0.027% utilization
Pipelining: Increasing Utilization
• Pipelining: the sender allows multiple “in-flight”, yet-to-be-acknowledged packets, without waiting for the first to be ACKed, in order to keep the pipe full
– Capacity of the Pipe = RTT * BW
[Figure: pipelined timeline — the sender transmits the first packet bit at t = 0 and the last bit at t = L / R, with three packets in flight before the first ACK returns.]

Increase utilization by a factor of 3:

U_sender = 3 * (L / R) / (RTT + L / R) = 0.024 / 30.008 = 0.0008
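These two utilization figures can be reproduced with a few lines of Python. The packet size, link rate and RTT below are the values the figures appear to assume (an 8,000-bit packet on a 1 Gbps link with a 30 ms RTT); they are stated as assumptions, not taken from the original.

```python
L = 8_000            # packet length in bits (assumed: 1 KB packet)
R = 1_000_000_000    # link rate in bits per second (assumed: 1 Gbps)
RTT = 0.030          # round-trip time in seconds (assumed: 30 ms)

t_trans = L / R                                    # 8 microseconds to push the packet onto the link
u_stop_and_wait = t_trans / (RTT + t_trans)        # ~0.00027, as above
u_pipelined = 3 * t_trans / (RTT + t_trans)        # ~0.0008, three packets in flight
print(u_stop_and_wait, u_pipelined, u_pipelined / u_stop_and_wait)  # last value is 3.0
```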
Sliding Window Protocols
• Reliable, in-order delivery of packets
• Sender can send a “window” of up to N consecutive unACKed packets
• Receiver makes sure that the packets are
delivered in-order to the upper layer
• 2 Generic Versions
– Go-Back-N
– Selective Repeat
• For large link parameter a, stop and wait protocol is inefficient.
• A universally accepted flow control procedure is the sliding
window protocol.
– Frames and acknowledgements are numbered using sequence
numbers
– Sender maintains a list of sequence numbers (frames) it is allowed
to transmit, called sending window
– Receiver maintains a list of sequence numbers it is prepared to
receive, called receiving window
– A sending window of size N means that sender can send up to N
frames without the need for an ACK
– A window size of N implies buffer space for N frames
– For an n-bit sequence number, we have 2^n numbers: 0, 1, ..., 2^n − 1, but the maximum window size is N = 2^n − 1 (not 2^n)
– ACK3 means that receiver has received frame 0 to frame 2
correctly, ready to receive frame 3 (and rest of N frames within
window)
• In the error-free case, the efficiency or maximum link utilization of the sliding window protocol is:

U = 1 if N ≥ 2a + 1, and U = N / (1 + 2a) if N < 2a + 1
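For example (values assumed for illustration): with a = 10, a window of N = 7 gives U = 7 / 21 ≈ 0.33, while any window of N ≥ 2a + 1 = 21 keeps the link fully utilized (U = 1).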
[Figure: sliding window state — the sender tracks Last ACK Received (LAR) and Last Packet Sent (LPS), with regions "Sent & Acked" and "Sent Not Acked"; the receiver tracks Next Packet Expected (NPE) and Last Packet Acceptable (LPA), with regions "Received & Acked" and "Acceptable Packet".]
Sliding Window- Receiver Side
• The receiver maintains 3 variables
– Receiver Window Size (RWS)
• Upper bound on the number of buffered packets
– Last Packet Acceptable (LPA)
– Next Packet Expected (NPE)
– We want LPA – NPE + 1 <= RWS
[Figure: receiver window — the interval from NPE to LPA spans at most RWS packets.]
Go-back-n ARQ
• The basic idea of go-back-n error control is: If
frame i is damaged, receiver requests
retransmission of all frames starting from frame i
• An example:
• Notice that all possible cases of damaged frame
and ACK / NAK must be taken into account
• For an n-bit sequence number, the maximum window size is N = 2^n − 1, not N = 2^n
• To see why, consider n = 3 with N = 2^n = 8; here is what may happen:
• Suppose that sender transmits frame 0 and gets an ACK1
• It then transmits frames 1,2,3,4,5,6,7,0 (this is allowed, as
they are within the sending window of size 8) and gets
another ACK1
• This could mean that all eight frames were received correctly
• It could also mean that all eight frames were lost, and
receiver is repeating its previous ACK1
• With N = 7, this confusing situation is avoided
[Figure: sender and receiver windows — with Go-Back-N the sender uses SWS = N and the receiver uses RWS = 1 packet; with Selective Repeat, SWS = N and RWS = N. The sender tracks LAR/LPS ("Sent & Acked", "Sent Not Acked"); the receiver tracks NPE/LPA ("Received & Acked", "Acceptable Packet").]
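A minimal sketch of the Go-Back-N sender logic described above (SWS = N, RWS = 1) is shown below. The send_frame callable is a placeholder assumed for illustration, and sequence numbers are left unbounded for clarity; a real implementation numbers them modulo 2^n.

```python
class GoBackNSender:
    """Go-Back-N sender: at most N unacknowledged frames; on timeout, resend them all."""

    def __init__(self, n, send_frame):
        self.n = n                    # sending window size (at most 2**bits - 1)
        self.send_frame = send_frame  # placeholder for the actual link-layer send
        self.base = 0                 # oldest unacknowledged sequence number
        self.next_seq = 0             # next sequence number to assign
        self.buffer = {}              # seq -> payload, kept for possible retransmission

    def send(self, payload):
        if self.next_seq - self.base >= self.n:
            return False              # window full: caller must wait for an ACK
        self.buffer[self.next_seq] = payload
        self.send_frame(self.next_seq, payload)
        self.next_seq += 1
        return True

    def on_ack(self, ack_num):
        # Cumulative ACK: the receiver expects ack_num next, so all earlier frames are delivered.
        while self.base < ack_num:
            self.buffer.pop(self.base, None)
            self.base += 1

    def on_timeout(self):
        # "Go back N": retransmit every frame that is still unacknowledged.
        for seq in range(self.base, self.next_seq):
            self.send_frame(seq, self.buffer[seq])
```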
EVENT DIAGRAM
[Figure: event diagram — the sender transmits Message 1, Message 2 and Message 3; the receiver returns an acknowledgement for message 1; time runs downward on both sides.]
– Multiplexing:
– Achieved through the use of ports, just as with UDP.
– Logical Connections:
– The reliability and flow control mechanisms described
above require that TCP initializes and maintains
certain status information for each data stream.
– The combination of this status, including sockets,
sequence numbers and window sizes, is called a
logical connection.
– Each connection is uniquely identified by the pair of
sockets used by the sending and receiving processes.
– Full Duplex:
– TCP provides for concurrent data streams in both
directions.
TCP segment format
• Source Port: The 16-bit source port number, used by the receiver to
reply.
• Destination Port: The 16-bit destination port number.
• Sequence Number: The sequence number of the first data byte in
this segment. If the SYN control bit is set, the sequence number is the
initial sequence number (n) and the first data byte is n+1.
• Acknowledgment Number: If the ACK control bit is set, this field
contains the value of the next sequence number that the receiver is
expecting to receive.
• Data Offset: The number of 32-bit words in the TCP header. It
indicates where the data begins.
• Reserved: Six bits reserved for future use; must be zero.
• URG: Indicates that the urgent pointer field is significant in this
segment.
• ACK: Indicates that the acknowledgment field is significant in this
segment.
• PSH: Push function.
• RST: Resets the connection.
• SYN: Synchronizes the sequence numbers.
• FIN: No more data from sender.
• Window: Used in ACK segments. It specifies the number of data bytes, beginning with the one indicated in the acknowledgment number field, that the receiver is willing to accept.
• Checksum: The 16-bit one's complement of the one's
complement sum of all 16-bit words in a pseudo-header, the
TCP header, and the TCP data.
• Urgent Pointer: Points to the first data octet following the
urgent data. Only significant when the URG control bit is set.
• Options: Just as in the case of IP datagram options, options
can be either:
– A single byte containing the option number
– A variable length option
• Padding: All zero bytes are used to fill up the TCP header to a
total length that is a multiple of 32 bits.
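As with the IP header fields earlier, this layout can be made concrete with a small parser. The sketch below unpacks the fixed 20-byte TCP header (options are ignored); it is an illustrative assumption, not code from the original.

```python
import struct

def parse_tcp_header(raw: bytes):
    """Unpack the fixed 20-byte TCP header described above (options are not parsed)."""
    src, dst, seq, ack, off_res_flags, window, chksum, urg = \
        struct.unpack("!HHIIHHHH", raw[:20])
    flags = off_res_flags & 0x3F             # low 6 bits: URG, ACK, PSH, RST, SYN, FIN
    return {
        "source_port": src,
        "dest_port": dst,
        "sequence_number": seq,
        "acknowledgment_number": ack,
        "data_offset": off_res_flags >> 12,  # header length in 32-bit words
        "URG": bool(flags & 0x20), "ACK": bool(flags & 0x10), "PSH": bool(flags & 0x08),
        "RST": bool(flags & 0x04), "SYN": bool(flags & 0x02), "FIN": bool(flags & 0x01),
        "window": window,
        "checksum": chksum,
        "urgent_pointer": urg,
    }
```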
The window principle
• A trivial transport protocol is:
– send a packet and then wait for an ACK from the
receiver before sending the next packet;
– if the ACK is not received within a certain amount of
time, retransmit the packet.
• Modern TCP implementations extend this simple scheme with four intertwined congestion control algorithms:
– Slow start
– Congestion avoidance
– Fast retransmit
– Fast recovery
Slow start
• Slow Start, a requirement for TCP software implementations, is a mechanism used by the sender to control the transmission rate, otherwise known as sender-based flow control.
• The rate of acknowledgements returned by the receiver determines the rate at which the sender can transmit data.
• When a TCP connection first begins, the Slow Start
algorithm initializes a congestion window to one
segment, which is the maximum segment size (MSS)
initialized by the receiver during the connection
establishment phase.
• When acknowledgements are returned by the receiver, the
congestion window increases by one segment for each
acknowledgement returned.
• Thus, the sender can transmit up to the minimum of the congestion window and the advertised window of the receiver, which is simply called the transmission window.
• For example, the first successful transmission and
acknowledgement of a TCP segment increases the window to
two segments.
• After successful transmission of these two segments and
acknowledgements completes, the window is increased to four
segments.
• Then eight segments, then sixteen segments and so on,
doubling from there on out up to the maximum window size
advertised by the receiver or until congestion finally does occur.
• At some point the congestion window may
become too large for the network or network
conditions may change such that packets may be
dropped.
• Packets lost will trigger a timeout at the sender.
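A toy illustration of this doubling behaviour is sketched below: each ACK returned in a round trip grows the congestion window by one segment, so the window doubles per RTT until it reaches the receiver's advertised window. The advertised window and the number of round trips are assumed values, not taken from the original.

```python
MSS = 1           # measure both windows in segments, for simplicity
rwnd = 64         # receiver's advertised window in segments (assumed)
cwnd = 1 * MSS    # slow start begins with a congestion window of one segment

for rtt in range(8):
    txwnd = min(cwnd, rwnd)                      # transmission window for this round trip
    print(f"RTT {rtt}: cwnd={cwnd}, send {txwnd} segment(s)")
    # Each of the txwnd ACKs returned grows cwnd by one segment,
    # so cwnd doubles every round trip while below the advertised window.
    cwnd = min(cwnd + txwnd * MSS, rwnd)
```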
Choke Packets
• A more direct way of telling the source to
slow down.
• A choke packet is a control packet
generated at a congested node and
transmitted to restrict traffic flow.
• The source, on receiving the choke packet
must reduce its transmission rate by a
certain percentage.
• An example of a choke packet is the ICMP
Source Quench Packet.
Hop-by-Hop Choke Packets
• Over long distances or at high speeds choke
packets are not very effective.
• A more efficient method is to send the choke packets hop-by-hop.
• This requires each hop to reduce its transmission even before the choke packet arrives at the source.
Load Shedding
• When buffers become full, routers simply discard
packets.
• Which packet is chosen to be the victim depends on
the application and on the error strategy used in the
data link layer.
• For a file transfer, for example, older packets cannot be discarded, since this would cause a gap in the received data.
• For real-time voice or video it is probably better to
throw away old data and keep new packets.
• Get the application to mark packets with discard
priority.
Random Early Discard (RED)
• This is a proactive approach in which the
router discards one or more packets before
the buffer becomes completely full.
• Each time a packet arrives, the RED
algorithm computes the average queue
length, avg.
• If avg is lower than some lower threshold,
congestion is assumed to be minimal or
non-existent and the packet is queued.
RED, cont.
• If avg is greater than some upper
threshold, congestion is assumed to be
serious and the packet is discarded.
• If avg is between the two thresholds, this
might indicate the onset of congestion. The
probability of congestion is then calculated.
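The queueing decision described in the last two sections can be sketched as follows. The thresholds, the averaging weight and the maximum drop probability are illustrative assumptions, not values from the original.

```python
import random

def red_enqueue(queue, packet, avg, min_th=5, max_th=15, weight=0.002, max_p=0.1):
    """Decide whether to enqueue one arriving packet under a simple RED policy.

    Returns (enqueued, new_avg)."""
    # Exponentially weighted moving average of the queue length.
    avg = (1 - weight) * avg + weight * len(queue)

    if avg < min_th:
        queue.append(packet)        # congestion minimal or non-existent: queue the packet
        return True, avg
    if avg >= max_th:
        return False, avg           # serious congestion: discard the packet
    # Between the thresholds: possible onset of congestion, drop with increasing probability.
    p_drop = max_p * (avg - min_th) / (max_th - min_th)
    if random.random() < p_drop:
        return False, avg
    queue.append(packet)
    return True, avg
```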
Traffic Shaping
• Another method of congestion control is to
“shape” the traffic before it enters the
network.
• Traffic shaping controls the rate at which
packets are sent (not just how many).
• Used in ATM and Integrated Services
networks.
• At connection set-up time, the sender and
carrier negotiate a traffic pattern (shape).
• Two traffic shaping algorithms are:
– Leaky Bucket
– Token Bucket
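Of the two, the token bucket is the simpler to sketch: tokens accumulate at a fixed rate, and a packet may be sent only if enough tokens are available. The rate and bucket capacity below are assumed for illustration only.

```python
import time

class TokenBucket:
    """Token-bucket traffic shaper: tokens accrue at `rate` units/second up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, packet_size):
        now = time.monotonic()
        # Add the tokens earned since the last call, but never beyond the bucket size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_size <= self.tokens:
            self.tokens -= packet_size   # spend tokens: the packet conforms, send it now
            return True
        return False                     # not enough tokens: delay or drop the packet

# Example: shape traffic to 1000 bytes/second with bursts of at most 4000 bytes (assumed values).
shaper = TokenBucket(rate=1000, capacity=4000)
```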
Piggybacking
• Piggybacking is a bi-directional data
transmission technique in the network layer (OSI
model).
• In all practical situations, the transmission of data needs to be bi-directional. This is called full-duplex transmission.
• We can achieve this full-duplex transmission by having two separate channels: one for forward data transfer and the other for the reverse transfer, i.e. for acknowledgements.
• A better solution would be to use each
channel (forward & reverse) to transmit
frames both ways, with both channels having
the same capacity.
• If A and B are two users, then the data frames from A to B are intermixed with the acknowledgements from A to B.
• One more improvement that can be made is
piggybacking.
• The concept is explained as follows:
• In two-way communication, whenever a data frame is received, the receiver waits and does not send the control frame (acknowledgement) back to the sender immediately.
• The receiver waits until its network layer passes it the next data packet. The delayed acknowledgement is then attached to this outgoing data frame.
• This technique of temporarily delaying the acknowledgement so that it can be hooked onto the next outgoing data frame is known as piggybacking.
• The major advantage of piggybacking is better
use of available channel bandwidth.
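In code, a piggybacked frame simply carries both a sequence number for its own data and an acknowledgement number for traffic flowing the other way. The sketch below is an illustrative assumption, not part of the original.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    seq: int        # sequence number of the data carried in this frame
    ack: int        # piggybacked acknowledgement: next frame expected from the peer
    payload: bytes

def build_frame(outgoing_payload, my_seq, next_expected_from_peer):
    """Build an outgoing data frame that also acknowledges the peer's traffic."""
    # Instead of sending a separate ACK-only frame, the acknowledgement rides on the data frame.
    return Frame(seq=my_seq, ack=next_expected_from_peer, payload=outgoing_payload)
```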