UNIT 4 & 5th Computer Network Cs 6th Sem Notes
Uploaded by lodhimayank899

Computer Network Notes

UNIT-04

Network Layer Need:- The network layer is considered the backbone of the OSI Model. It selects
and manages the best logical path for data transfer between nodes. This layer contains hardware
devices such as routers, bridges, firewalls and switches, but it actually creates a logical image of the
most efficient communication route and implements it with a physical medium. Network layer
protocols exist in every host or router. The router examines the header fields of all the IP packets that
pass through it. Internet Protocol and Netware IPX/SPX are the most common protocols associated
with the network layer. In the OSI model, the network layer responds to requests from the layer
above it (transport layer) and issues requests to the layer below it (data link layer).
Network Layer Services:- The network layer translates logical network addresses into physical addresses.
1. Routers and gateways operate at the network layer, which provides the mechanism for routing packets to their final destination.
2. Connection services are provided, including flow control, error control and packet sequence control.
3. Larger packets are broken into smaller packets.
There are two types of service that the network layer can provide:
1. An unreliable connectionless service.
2. A connection-oriented service, which may be reliable or unreliable.
Network Layer Design issues:-
a) Store-and-Forward Packet Switching
b) Services Provided to the Transport Layer
c) Implementation of Connectionless Service
d) Implementation of Connection-Oriented Service
a) Store-and-Forward Packet Switching
A host with a packet to send transmits it to the nearest router, either on its own LAN or over a point-
to-point link to the carrier. The packet is stored there until it has fully arrived so the checksum can be
verified. Then it is forwarded to the next router along the path until it reaches the destination host,
where it is delivered. This mechanism is store-and-forward packet switching.
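The store-and-forward mechanism above can be sketched in a few lines of Python. The router names, the toy checksum, and the packet layout are all illustrative assumptions, not part of any real router implementation:

```python
# Minimal sketch of store-and-forward packet switching: each router buffers the
# COMPLETE packet, verifies its checksum, and only then forwards it on.

def simple_checksum(payload: bytes) -> int:
    """Toy checksum: sum of bytes modulo 256 (real networks use stronger checks)."""
    return sum(payload) % 256

def store_and_forward(packet: dict, path: list) -> bool:
    """Carry `packet` hop by hop along `path`; drop it if the checksum fails."""
    for router in path:
        buffered = dict(packet)  # store: the whole packet arrives before checking
        if simple_checksum(buffered["payload"]) != buffered["checksum"]:
            return False         # corrupted packet is discarded at this hop
    return True                  # delivered to the destination host

pkt = {"payload": b"hello", "checksum": simple_checksum(b"hello")}
assert store_and_forward(pkt, ["R1", "R2", "R3"])
```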

Fig:- Store and Forward Packet Switching

BY PROF. SACHIN CHOURASIA GSCE, SAGAR



b) Services Provided to the Transport Layer:-


The network layer services have been designed with the following goals:
1. The services should be independent of the router technology.
2. The transport layer should be shielded from the number, type, and topology of the routers present.
3. The network addresses made available to the transport layer should use a uniform numbering plan, even across LANs and WANs.
c) Implementation of Connectionless Service:-
If connectionless service is offered, packets are injected into the subnet individually and routed
independently of each other. No advance setup is needed. In this context, the packets are frequently
called datagrams and the subnet is called a datagram subnet.

Fig:- Connectionless Service


d) Implementation of Connection-Oriented Service:-
If connection-oriented service is used, a path from the source router to the destination router must be
established before any data packets can be sent. This connection is called a VC (virtual circuit) and
the subnet is called a virtual-circuit subnet.
The process is completed in three phases:
1. Establishment Phase.
2. Data transfer Phase.
3. Connection release Phase.

Fig:- Connection-Oriented Service


Routing algorithms:
A routing algorithm is a set of step-by-step operations used to direct Internet traffic efficiently. When
a packet of data leaves its source, there are many different paths it can take to its destination. The
routing algorithm is used to determine mathematically the best path to take.
Properties of routing algorithm:
Correctness: The routing should be done properly and correctly so that the packets may reach their
proper destination.
Simplicity: The routing should be done in a simple manner so that the overhead is as low as
possible. With increasing complexity of the routing algorithms the overhead also increases.
Robustness: Once a major network becomes operative, it may be expected to run continuously for
years without any failures. The algorithms designed for routing should be robust enough to handle
hardware and software failures and should be able to cope with changes in the topology and traffic
without requiring all jobs in all hosts to be aborted and the network rebooted every time some router
goes down.
Stability: The routing algorithms should be stable under all possible circumstances.
Fairness: Every node connected to the network should get a fair chance to transmit its packets. This is generally done on a first-come, first-served basis.
Optimality: The routing algorithms should be optimal in terms of throughput and minimizing mean packet delay. There is a trade-off here, and one has to choose depending on the requirements.
Routing can be grouped into two categories
1. Adaptive Routing Algorithm: These algorithms change their routing decisions to reflect changes
in the topology and in traffic as well. These get their routing information from adjacent routers or
from all routers. The optimization parameters are the distance, number of hops and estimated transit
time. This can be further classified as follows:
1. Centralized: In this type some central node in the network gets entire information about the
network topology, about the traffic and about other nodes. This then transmits this information to the
respective routers. The advantage of this is that only one node is required to keep the information.
The disadvantage is that if the central node goes down the entire network is down, i.e. single point of
failure.
2. Isolated: In this method the node decides the routing without seeking information from other
nodes. The sending node does not know the status of a particular link. The disadvantage is that the
packet may be sent through a congested route, resulting in delay. Some examples of this type of
routing algorithm are:
a. Hot Potato: When a packet comes to a node, it tries to get rid of it as fast as it can, by putting it on
the shortest output queue without regard to where that link leads. A variation of this algorithm is to
combine static routing with the hot potato algorithm. When a packet arrives, the routing algorithm
takes into account both the static weights of the links and the queue lengths.
b. Backward Learning: In this method the routing tables at each node get modified by information
from the incoming packets. One way to implement backward learning is to include the identity of the
source node in each packet, together with a hop counter that is incremented on each hop. When a
node receives a packet on a particular line, it notes the number of hops the packet has taken to reach
it from the source node. If the previous hop count stored at the node is better than the current


one, nothing is done; but if the current value is better, the stored value is updated for future use. The
problem with this is that when the best route goes down, the node cannot recall the second-best route
to a particular destination. Hence all the nodes have to forget the stored information periodically and
start all over again.
3. Distributed: In this method the node receives information from its neighbouring nodes and then
decides which way to send the packet. The disadvantage is that if the network changes in the interval
between receiving the information and sending the packet, the packet may be delayed.
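The backward-learning update described above amounts to keeping the smallest hop count ever observed toward each source. A minimal sketch (node names and table layout are illustrative assumptions):

```python
# Backward learning: routing-table entries are updated from the hop counters
# carried by incoming packets.

def backward_learn(table: dict, source: str, hops: int) -> None:
    """Keep the best (smallest) observed hop count towards `source`."""
    if source not in table or hops < table[source]:
        table[source] = hops

table = {}
backward_learn(table, "A", 5)   # first observation is always stored
backward_learn(table, "A", 3)   # a better route replaces it
backward_learn(table, "A", 7)   # a worse one is ignored
assert table["A"] == 3
```

Note that this table has no way to recover the second-best route when the best one fails, which is exactly why periodic forgetting is needed.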
2. Non-Adaptive Routing Algorithm: These algorithms do not base their routing decisions on
measurements and estimates of the current traffic and topology. Instead the route to be taken in going
from one node to the other is computed in advance, off-line, and downloaded to the routers when the
network is booted. This is also known as static routing. This can be further classified as:
1. Flooding: In flooding, every incoming packet is sent out on every outgoing line except the one on
which it arrived. One problem with this method is that packets may loop; as a result, a node may
receive several copies of a particular packet, which is undesirable. Some techniques adopted to
overcome these problems are as follows:
a. Sequence Numbers: Every packet is given a sequence number. When a node receives a packet,
it checks the source address and sequence number. If the node finds that it has already forwarded
the same packet earlier, it does not transmit the packet again and simply discards it.
b. Hop Count: Every packet has a hop count associated with it. This is decremented (or
incremented) by one by each node which sees it. When the hop count becomes zero (or a maximum
possible value) the packet is dropped.
c. Spanning Tree: The packet is sent only on those links that lead to the destination, by constructing
a spanning tree rooted at the source. This avoids loops in transmission but is possible only when all
the intermediate nodes have knowledge of the network topology.
Flooding is not practical for most kinds of applications, but in cases where a high degree of
robustness is desired, such as in military applications, flooding is of great help.
2. Random Walk: In this method a packet is sent by the node to one of its neighbors randomly. This
algorithm is highly robust. When the network is highly interconnected, this algorithm has the
property of making excellent use of alternative routes. It is usually implemented by sending the
packet onto the least queued link.
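The flooding-with-duplicate-suppression idea above can be sketched as follows. The topology, node names, and packet identifier are made-up illustrations; a real network would track (source, sequence number) pairs per node:

```python
# Flooding with duplicate suppression: each node remembers which packets it has
# already seen and discards repeats, which breaks forwarding loops.

def flood(topology: dict, start: str, packet_id) -> list:
    """Deliver a packet from `start` to every reachable node exactly once."""
    seen = set()
    deliveries = []
    frontier = [start]
    while frontier:
        node = frontier.pop()
        if (node, packet_id) in seen:
            continue                     # duplicate copy: discard it
        seen.add((node, packet_id))
        deliveries.append(node)
        frontier.extend(topology.get(node, []))  # send on every outgoing link
    return deliveries

topo = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
assert sorted(flood(topo, "A", ("A", 1))) == ["A", "B", "C"]
```

Even though the topology contains cycles, every node receives the packet exactly once.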
Shortest Path Algorithm (Least Cost Routing algorithm):-

• In this the path length between each node is measured as a function of distance, Bandwidth,
average traffic, communication cost, mean queue length, measured delay etc.

• By changing the weighing function, the algorithm then computes the shortest path measured
according to any one of a number of criteria or a combination of criteria.

• For this a graph of the subnet is drawn, with each node of the graph representing a router and each
arc representing a communication link. Each link has a cost associated with it.
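The shortest-path computation over such a weighted graph is classically done with Dijkstra's algorithm. A minimal sketch, with an illustrative three-router graph and made-up link costs:

```python
# Dijkstra's shortest-path algorithm over a graph of routers and weighted links.

import heapq

def dijkstra(graph: dict, source: str) -> dict:
    """Return the least-cost distance from `source` to every reachable node."""
    dist = {source: 0}
    pq = [(0, source)]                    # priority queue of (cost, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip
        for v, cost in graph.get(u, []):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd              # found a cheaper path to v
                heapq.heappush(pq, (nd, v))
    return dist

g = {"A": [("B", 2), ("C", 5)], "B": [("C", 1)], "C": []}
assert dijkstra(g, "A") == {"A": 0, "B": 2, "C": 3}
```

Changing the weighting function (delay, queue length, cost) changes only the link costs, not the algorithm.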


UNIT IV (IP ADDRESSES)


Network Addressing: Network Addressing is one of the major responsibilities of the network layer.
Network addresses are always logical, i.e., software-based addresses. A host, also known as an end
system, has one link to the network; the boundary between the host and the link is known as an
interface, so such a host has only one interface. A router differs from a host in that it has two or more
links connected to it. When a router forwards a datagram, it forwards the packet onto one of its links.
The boundary between the router and each link is also known as an interface, and the router has one
interface for each of its links. Each interface is capable of sending and receiving IP packets, so IP
requires each interface to have its own address.

IP Addresses: Each IP address is 32 bits long and is written in "dot-decimal notation": each of the
four bytes is written in decimal form, separated by periods. An IP address looks like 193.32.216.9,
where 193 is the decimal value of the first 8 bits of the address, 32 is the decimal value of the second
8 bits, and so on.
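The conversion between the 32-bit value and dot-decimal notation can be sketched directly (the address used is the one from the text):

```python
# Dot-decimal notation is just the four bytes of a 32-bit address in decimal.

def to_dotted(addr32: int) -> str:
    """Render a 32-bit integer address in dot-decimal notation."""
    return ".".join(str((addr32 >> shift) & 0xFF) for shift in (24, 16, 8, 0))

def to_int(dotted: str) -> int:
    """Parse dot-decimal notation back into a 32-bit integer."""
    a, b, c, d = (int(x) for x in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

assert to_dotted(to_int("193.32.216.9")) == "193.32.216.9"
assert to_int("193.32.216.9") >> 24 == 193   # first byte is the first 8 bits
```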

In the above figure, a router has three interfaces, labeled 1, 2 and 3, and each router interface has its
own IP address. Each host also has its own interface and IP address. All the interfaces attached to
LAN 1 have an IP address of the form 223.1.1.xxx, and the interfaces attached to LAN 2 and LAN 3
have addresses of the form 223.1.2.xxx and 223.1.3.xxx respectively. Each IP address consists of two
parts: the first part (here, the first three bytes) specifies the network, and the second part (the last
byte) specifies the host within that network.

Class Addressing
An IP address is 32 bits long and is divided into sub-classes:
➢ Class A
➢ Class B
➢ Class C
➢ Class D
➢ Class E


An IP address is divided into two parts:


Network ID: It identifies the network.
Host ID: It identifies the host within the network.

Each class has a specific range of IP addresses. The class of an IP address determines how many bits
are used for the network ID and the host ID, and hence how many networks and hosts are available
in that class.
Class A:
In Class A, an IP address is assigned to those networks that contain a large number of hosts.
o The network ID is 8 bits long.
o The host ID is 24 bits long.
In Class A, the highest-order bit of the first octet is always set to 0 and the remaining 7 bits
determine the network ID. The other 24 bits determine the host ID within the network.
The total number of networks in Class A = 2^7 = 128 network addresses
The total number of hosts per network in Class A = 2^24 - 2 = 16,777,214 host addresses

Class B:- In Class B, an IP address is assigned to those networks that range from small-sized to
large-sized networks.
o The Network ID is 16 bits long.
o The Host ID is 16 bits long.
In Class B, the two highest-order bits of the first octet are always set to 10, and the remaining 14 bits
determine the network ID. The other 16 bits determine the host ID.
The total number of networks in Class B = 2^14 = 16,384 network addresses
The total number of hosts per network in Class B = 2^16 - 2 = 65,534 host addresses


Class C:- In Class C, an IP address is assigned to only small-sized networks.


o The Network ID is 24 bits long.
o The host ID is 8 bits long.
In Class C, the three highest-order bits of the first octet are always set to 110, and the remaining 21
bits determine the network ID. The 8 bits of the host ID determine the host within a network.
The total number of networks in Class C = 2^21 = 2,097,152 network addresses
The total number of hosts per network in Class C = 2^8 - 2 = 254 host addresses

Class D:- In Class D, IP addresses are reserved for multicast. Class D has no network/host split and
no subnetting. The four highest-order bits of the first octet are always set to 1110, and the remaining
28 bits identify the multicast group.

Class E:- Class E addresses are reserved for future use and for research and development purposes.
Class E has no subnetting. The four highest-order bits of the first octet are always set to 1111.
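The class of an address follows directly from the leading bits of the first octet, as described above. A small classifier sketch (the sample octets are illustrative):

```python
# Determine the class of an IPv4 address from the leading bits of its first octet.

def ip_class(first_octet: int) -> str:
    if first_octet >> 7 == 0b0:    return "A"   # leading bit  0
    if first_octet >> 6 == 0b10:   return "B"   # leading bits 10
    if first_octet >> 5 == 0b110:  return "C"   # leading bits 110
    if first_octet >> 4 == 0b1110: return "D"   # leading bits 1110
    return "E"                                  # leading bits 1111

assert ip_class(10) == "A"
assert ip_class(172) == "B"
assert ip_class(193) == "C"
assert ip_class(224) == "D"
assert ip_class(250) == "E"
```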

Rules for assigning Host ID:-


The Host ID is used to determine the host within any network. The Host ID is assigned based on the
following rules:
o The Host ID must be unique within a network.
o A Host ID with all bits set to 0 cannot be assigned, as it is used to represent the network ID
of the IP address.
o A Host ID with all bits set to 1 cannot be assigned, as it is reserved as the directed broadcast
address of the network.


Rules for assigning Network ID:-


If hosts are located within the same local network, they are assigned the same network ID. The
following are the rules for assigning a Network ID:
o The network ID cannot start with 127, as the 127.x.x.x block is reserved for loopback
addresses.
o A Network ID with all bits set to 0 cannot be assigned, as it is used to refer to a host on the
local ("this") network.
o A Network ID with all bits set to 1 cannot be assigned, as it is reserved for broadcast.

Classful Network Architecture:-


IP header format:-
Unlike the post office, a router or computer cannot determine the size of a package without
additional information. A person can look at a letter or box and determine how big it is, but a router
cannot. Therefore, additional information is required at the IP layer, in addition to the source and
destination IP addresses.

Figure shows a logical representation of the information that is used at the IP layer to enable the
delivery of electronic data. This information is called a header, and is analogous to the addressing
information on an envelope. A header contains the information required to route data on the Internet,
and has the same format regardless of the type of data being sent. This is the same for an envelope
where the address format is the same regardless of the type of letter being sent.
The fields in the IP header and their descriptions are
• Version - A 4-bit field that identifies the IP version being used. The version described here is
4, referred to as IPv4.
• Length - A 4-bit field containing the length of the IP header in 32-bit words. The minimum
length of an IP header is 20 bytes, or five 32-bit words, and the maximum is 60 bytes, or
fifteen 32-bit words. Therefore, the header length field contains a value from 5 to 15.
• Type of Service (ToS) - An 8-bit field originally split into 3 bits of IP Precedence and 4 ToS
bits, with the last bit unused. The 4-bit ToS subfield, although defined, was never widely
used.
• IP Precedence - A 3-bit field used to identify the level of service a packet receives in the
network.
• Differentiated Services Code Point (DSCP) - A 6-bit field that redefines the same byte to
identify the level of service a packet receives in the network. DSCP extends the 3-bit IP
Precedence to 6 bits by reclaiming the old ToS bits.


• Total Length - Specifies the length of the IP packet, including the IP header and the user
data. The length field is 2 bytes, so the maximum size of an IP packet is 2^16 - 1, or 65,535
bytes.
• Identifier, Flags, and Fragment Offset - As an IP packet moves through the Internet, it
might need to cross a network that cannot handle the size of the packet. The packet is then
divided, or fragmented, into smaller packets and reassembled later. These fields are used to
fragment and reassemble packets.
• Time to Live (TTL) - It is possible for an IP packet to roam aimlessly around the Internet. If
there is a routing problem or a routing loop, then you don't want packets to be forwarded
forever. A routing loop is when a packet is continually routed through the same routers over
and over. The TTL field is initially set to a number and decremented by every router that is
passed through. When TTL reaches 0 the packet is discarded.
• Protocol - In the layered protocol model, the layer that determines which application the data
is from or which application the data is for is indicated using the Protocol field. This field
does not identify the application, but identifies a protocol that sits above the IP layer that is
used for application identification.
• Header Checksum - A value calculated based on the contents of the IP header. Used to
determine if any errors have been introduced during transmission.
• Source IP Address - 32-bit IP address of the sender.
• Destination IP Address - 32-bit IP address of the intended recipient.
• Options and Padding - A field that varies in length from 0 to a multiple of 32-bits. If the
option values are not a multiple of 32-bits, 0s are added or padded to ensure this field
contains a multiple of 32 bits.
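The header checksum mentioned above is the 16-bit one's complement of the one's-complement sum of the header's 16-bit words. A sketch of that computation over a hand-built 20-byte header; the field values (total length, TTL, addresses) are illustrative:

```python
# Internet checksum: one's-complement sum of 16-bit words, folded and inverted.

import struct

def checksum16(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                           # pad to a 16-bit boundary
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return ~total & 0xFFFF

# A 20-byte IPv4 header with the checksum field zeroed for computation:
# version/IHL, ToS, total length, ID, flags/frag, TTL, protocol, checksum, src, dst
header = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0, 40, 1, 0, 64, 6, 0,
                     bytes([193, 32, 216, 9]), bytes([10, 0, 0, 1]))
csum = checksum16(header)
# With the checksum inserted, the whole header checksums to zero:
patched = header[:10] + struct.pack("!H", csum) + header[12:]
assert checksum16(patched) == 0
```

This "checksums to zero" property is exactly how a receiver verifies the header.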

Packet Forwarding:- Packet forwarding is the basic method for sharing information across
systems on a network. Packets are transferred between a source interface and a destination
interface, usually on two different systems. When you issue a command or send a message to a
nonlocal interface, your system forwards those packets onto the local network. The interface with
the destination IP address that is specified in the packet headers then retrieves the packets from the
local network. If the destination address is not on the local network, the packets are then
forwarded to the next adjacent network, or hop.

Fragmentation and reassembly:-


Fragmentation is the process of breaking a packet into smaller pieces so that they will fit into the
frames of the underlying network. The receiving system reassembles the pieces into the original
packets. The term MTU (maximum transmission unit) refers to the maximum amount of data that
can travel in a frame. Different networks have different MTU sizes, so packets may need to be
fragmented in order to fit within the frames of the network that they transit. Internetworking
protocols such as IP use fragmentation because each of the networks that a packet may travel over
could have a different frame size. Fragmentation occurs at routers that connect two networks with
different MTUs. While it is possible to design an internal network with the same MTU size, this is
not an option on the Internet, which includes thousands of independently managed interconnected
networks.
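A router's fragmentation step can be sketched as below. The MTU and payload size are illustrative; the one IPv4 detail kept is that fragment offsets are expressed in 8-byte units, so every fragment except the last must carry a multiple of 8 bytes of data:

```python
# Split a payload into MTU-sized fragments, IPv4-style: offsets in 8-byte units,
# a More Fragments (MF) flag on every fragment except the last.

def fragment(payload: bytes, mtu: int, header_len: int = 20):
    """Return (offset_in_8_byte_units, more_fragments_flag, data) tuples."""
    max_data = (mtu - header_len) // 8 * 8    # fragment data must be 8-aligned
    frags = []
    offset = 0
    while offset < len(payload):
        chunk = payload[offset:offset + max_data]
        more = (offset + len(chunk)) < len(payload)
        frags.append((offset // 8, more, chunk))
        offset += len(chunk)
    return frags

frags = fragment(b"x" * 4000, mtu=1500)
assert [f[0] for f in frags] == [0, 185, 370]        # offsets in 8-byte units
assert [f[1] for f in frags] == [True, True, False]  # MF flag clears on the last
assert sum(len(f[2]) for f in frags) == 4000         # no data lost
```

The receiver reassembles by placing each chunk at offset × 8 and waiting until the fragment with MF = False (and all gaps filled) arrives.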

Reassembly:- In a packet-switched telecommunication network, segmentation and reassembly


(SAR, sometimes just referred to as segmentation) is the process of breaking a packet into smaller
units before transmission and reassembling them into the proper order at the receiving end of the
communication.

The Internet Control Message Protocol (ICMP):-


The operation of the Internet is monitored closely by the routers. When something unexpected occurs
during packet processing at a router, the event is reported to the sender by the ICMP (Internet
Control Message Protocol). ICMP is also used to test the Internet. About a dozen types of ICMP
messages are defined. Each ICMP message type is carried encapsulated in an IP packet.

Fig:- The principal ICMP message types


Difference Between IPv4 and IPv6:

1. Address length: IPv4 addresses are 32 bits long; IPv6 addresses are 128 bits long.
2. Configuration: IPv4 supports manual and DHCP address configuration; IPv6 supports
auto-configuration and renumbering.
3. End-to-end connection integrity: unachievable in IPv4; achievable in IPv6.
4. Address space: IPv4 can provide about 4.29×10^9 addresses; IPv6 can provide about
3.4×10^38 addresses.
5. Security: in IPv4, security depends on the application protocol; in IPv6, IPSec is a built-in
feature.
6. Address representation: IPv4 addresses are written in decimal; IPv6 addresses are written in
hexadecimal.
7. Fragmentation: in IPv4, performed by the sender and by forwarding routers; in IPv6,
performed only by the sender.
8. Packet flow identification: not available in IPv4; available in IPv6 via the Flow Label field
in the header.
9. Checksum: IPv4 has a header checksum field; IPv6 does not.
10. Transmission schemes: IPv4 has broadcast message transmission; IPv6 uses multicast and
anycast instead.
11. Encryption and authentication: not provided in IPv4; provided in IPv6.
12. Header size: the IPv4 header is 20 to 60 bytes; the IPv6 header is a fixed 40 bytes.
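The address-length and notation differences above can be observed directly with Python's standard `ipaddress` module (the sample addresses are illustrative; 2001:db8::/32 is the documentation prefix):

```python
# IPv4 vs IPv6 address lengths and notations via the stdlib ipaddress module.

import ipaddress

v4 = ipaddress.ip_address("193.32.216.9")
v6 = ipaddress.ip_address("2001:db8::1")

assert v4.version == 4 and v4.max_prefixlen == 32    # 32-bit address
assert v6.version == 6 and v6.max_prefixlen == 128   # 128-bit address
assert str(v6) == "2001:db8::1"                      # hexadecimal, compressed
assert int(v4) < 2**32 and int(v6) < 2**128          # the two address spaces
```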


UNIT 5 (Transport Layer)


Transport Layer: Design Issues-
• Accept data from the session layer, split it into segments, and hand them to the network layer.
• Ensure correct delivery of data with efficiency.
• Isolate the upper layers from technological changes.
• Provide error control and flow control.
Functions of Transport Layer
1. Service Point Addressing: The transport layer header includes the service point address, i.e.
the port address. This layer delivers the message to the correct process on the computer,
unlike the network layer, which delivers each packet to the correct computer.
2. Segmentation and Reassembly: A message is divided into segments, each carrying a
sequence number that enables this layer to reassemble the message. The message is
reassembled correctly upon arrival at the destination, and packets lost in transmission are
identified and retransmitted.
3. Connection Control: It includes 2 types:
o Connectionless Transport Layer : Each segment is considered as an independent
packet and delivered to the transport layer at the destination machine.
o Connection Oriented Transport Layer : Before delivering packets, connection is made
with transport layer at the destination machine.
4. Flow Control: In this layer, flow control is performed end to end.
5. Error Control: Error Control is performed end to end in this layer to ensure that the
complete message arrives at the receiving transport layer without any error. Error Correction
is done through retransmission.
User Datagram Protocol (UDP)

User Datagram Protocol (UDP) is a transport layer protocol and part of the Internet protocol suite,
referred to as the UDP/IP suite. Unlike TCP, it is an unreliable, connectionless protocol, so there is
no need to establish a connection prior to data transfer.
Although Transmission Control Protocol (TCP) is the dominant transport layer protocol for most
Internet services, providing assured delivery, reliability and much more, all of these services come
at the cost of additional overhead and latency. Here UDP comes into the picture. For real-time
services like computer gaming, voice or video communication and live conferences, UDP is
preferred. Since high performance is needed, UDP permits packets to be dropped rather than
processing delayed packets. There is no error recovery in UDP, so it also saves bandwidth. UDP is
therefore more efficient than TCP in terms of both latency and bandwidth.


UDP Header Format:- The UDP header is a simple, fixed 8-byte header, whereas the TCP header
varies from 20 to 60 bytes. The first 8 bytes contain all the necessary header information and the
remaining part consists of data. The UDP port number fields are each 16 bits long, so port numbers
range from 0 to 65535; port number 0 is reserved. Port numbers help to distinguish different user
requests or processes.

1. Source Port: a 2-byte field used to identify the port number of the source.
2. Destination Port: a 2-byte field used to identify the destination port of the packet.
3. Length: a 16-bit field giving the length of the UDP datagram, including the header and the
data.
4. Checksum: a 2-byte field. It is the 16-bit one's complement of the one's complement sum of
the UDP header, a pseudo-header of information from the IP header, and the data, padded
with zero octets at the end (if necessary) to make a multiple of two octets.
Unlike in TCP, checksum calculation is not mandatory in UDP. No error control or flow control is
provided by UDP; hence UDP depends on IP and ICMP for error reporting.
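The fixed 8-byte header described above (source port, destination port, length, checksum, each 16 bits) can be packed with the standard `struct` module. The port numbers and payload below are illustrative, and the checksum is left at zero, which UDP permits:

```python
# Pack the 8-byte UDP header: four 16-bit fields in network byte order.

import struct

def build_udp_header(src_port: int, dst_port: int, payload: bytes,
                     checksum: int = 0) -> bytes:
    length = 8 + len(payload)             # header (8 bytes) + data
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

hdr = build_udp_header(53124, 53, b"query")
assert len(hdr) == 8                      # the header is always exactly 8 bytes
src, dst, length, csum = struct.unpack("!HHHH", hdr)
assert (src, dst, length) == (53124, 53, 13)   # 8-byte header + 5-byte payload
```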
Applications of UDP:
❖ Used for simple request response communication when size of data is less and hence there is
lesser concern about flow and error control.
❖ It is a suitable protocol for multicasting, as UDP is connectionless.
❖ UDP is used for some routing update protocols like RIP(Routing Information Protocol).
❖ Normally used for real-time applications which cannot tolerate uneven delays between
sections of a received message.
❖ Following implementations uses UDP as a transport layer protocol:
o NTP (Network Time Protocol)
o DNS (Domain Name Service)
o BOOTP, DHCP.
o NNP (Network News Protocol)
o Quote of the day protocol
o TFTP, RTSP, RIP, OSPF.
❖ Application layer can do some of the tasks through UDP-
o Trace Route
o Record Route


o Time stamp
❖ UDP simply attaches its header to the application data and hands it to the network layer
(and strips the header on receipt), so it works fast.
❖ In fact, if you remove the checksum field, UDP is essentially a null protocol.

Per-Segment Checksum
A checksum is a simple error detection mechanism used to determine the integrity of data
transmitted over a network. Protocols such as IP, TCP and UDP implement this scheme to determine
whether the received data was corrupted in transit. The sender of an IPv4 datagram computes a
checksum over the header and embeds it in the datagram. The receiver also computes the checksum
locally and, from the result, ascertains the data integrity. Similarly, the TCP or UDP data that forms
the payload of the IP datagram has its own checksum computed and embedded as part of the TCP or
UDP segment.

Fig:- UDP Per-Segment Checksum


In an attempt to improve performance and to assist drivers in ensuring data integrity, checksum
computation is increasingly being done in hardware. The checksum offload feature can be
implemented as a combination of hardware and software functions: the hardware assists the driver
in completing the checksum computation. This functionality can be enabled in the ASICs and
disabled in the existing drivers (TCP/IP protocol stack) easily.
Unicast/Multicast Real-Time Traffic
Data is transported over a network by three simple methods i.e. Unicast, Broadcast, and Multicast.
❖ Unicast: from one source to one destination i.e. One-to-One
❖ Broadcast: from one source to all possible destinations i.e. One-to-All
❖ Multicast: from one source to multiple destinations stating an interest in receiving the traffic i.e.
One-to-Many.
Unicast
1. Traffic is sent from one host to another. A separate copy of each packet in the data stream goes to
every host that requests it.
2. Unicast applications are fairly easy to implement, since they use well-established IP protocols;
however, they are particularly inefficient when many-to-many communication is needed.

In that case, all packets in the data stream must be sent separately to every host requesting access to
the data stream. This type of transmission is inefficient in terms of both network and server
resources, and it presents obvious scalability issues.
3. Unicast is a one-to-one connection between the client and the server. It uses IP delivery
techniques such as TCP (Transmission Control Protocol) and UDP (User Datagram Protocol), which
are session-based protocols. Once a Windows Media Player client connects via unicast to a Windows
Media server, that client gets a direct connection to the server. Every unicast client that connects to
the server takes up additional bandwidth. For instance, if you have 10 clients all receiving 100 Kbps
(kilobits per second) streams, those clients together take up 1,000 Kbps; but if you have a single
client using a 100 Kbps stream, only 100 Kbps is used.
Multicast
Multicast lets servers send single copies of data streams that are then replicated and routed to the
hosts that request them. Rather than sending thousands of copies of a streaming event, the server
streams a single flow that routers on the network direct to the hosts that have indicated they want to
receive the stream. This removes the need to send redundant traffic over the network and also tends
to reduce CPU load on systems that are not using the multicast stream, yielding important efficiency
gains for both server and network.

TCP Connection Management: -


TCP is a unicast connection-oriented protocol. Before either end can send data to the other, a
connection must be established between them. TCP detects and repairs essentially all the data
transfer problems that may be introduced by packet loss, duplication, or errors at the IP layer (or
below).
Because of its management of connection state (information about the connection kept by both
endpoints), TCP is a considerably more complicated protocol than UDP. UDP is a connectionless
protocol that involves no connection establishment or termination. One of the major differences
between the two is the amount of detail required to handle the various TCP states properly: when
connections are created, terminated normally, and reset without warning.
During connection establishment, several options can be exchanged between the two endpoints
regarding the parameters of the connection. Some options are allowed to be sent only when the
connection is established, and others can be sent later. The TCP header has a limited space for
holding options (40 bytes).
BY PROF. SACHIN CHOURASIA GSCE, SAGAR
TCP Connection Establishment and Termination:-
A TCP connection is defined to be a 4-tuple consisting of two IP addresses and two port numbers. It
is a pair of endpoints or sockets where each endpoint is identified by an (IP address, port number)
pair.
A connection typically goes through three phases:
1. Setup
2. Data transfer (called established)
3. Teardown (closing).
TCP Connection Establishment:
1. To establish a connection, one side, say, the server, passively waits for an incoming
connection by executing the LISTEN and ACCEPT primitives, either specifying a specific
source or nobody in particular.
2. The other side, say, the client, executes a CONNECT primitive, specifying the IP address and
port to which it wants to connect, the maximum TCP segment size it is willing to accept, and
optionally some user data (e.g., a password).
3. The CONNECT primitive sends a TCP segment with the SYN bit on and ACK bit off and
waits for a response.
4. When this segment arrives at the destination, the TCP entity there checks to see if there is a
process that has done a LISTEN on the port given in the Destination port field.
5. If not, it sends a reply with the (Reset) RST bit on to reject the connection.
6. If some process is listening to the port, that process is given the incoming TCP segment.
7. It can then either accept or reject the connection. Normal case is shown in Figure (a).
8. In the event that two hosts simultaneously attempt to establish a connection between the same
two sockets, the sequence of events is as illustrated in Figure (b).
9. The result of these events is that just one connection is established, not two because
connections are identified by their end points.
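The LISTEN/ACCEPT/CONNECT primitives above can be sketched with Python's socket module over the loopback interface; the kernel carries out the actual SYN, SYN+ACK, ACK exchange when connect() reaches a listening socket. The payload and thread coordination below are illustrative choices, not part of the protocol.

```python
import socket
import threading

# Loopback demo of LISTEN / ACCEPT / CONNECT. The kernel performs the
# SYN, SYN+ACK, ACK handshake when connect() meets a listening socket.

def run_server(ready, result):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
    srv.listen(1)                         # LISTEN primitive
    ready["port"] = srv.getsockname()[1]
    ready["event"].set()
    conn, addr = srv.accept()             # ACCEPT primitive: blocks for a SYN
    result["data"] = conn.recv(64)
    conn.close()
    srv.close()

ready = {"event": threading.Event()}
result = {}
t = threading.Thread(target=run_server, args=(ready, result))
t.start()
ready["event"].wait()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", ready["port"]))  # CONNECT primitive: sends the SYN
cli.sendall(b"hello")
cli.close()
t.join()
print(result["data"])                      # b'hello'
```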

TCP Connection Release:
1. TCP connections are full duplex.
2. Each simplex connection is released independently of its sibling.
3. To release a connection, either party can send a TCP segment with the FIN bit set, which
means that it has no more data to transmit.
4. When the FIN is acknowledged, that direction is shut down for new data.
5. Data may continue to flow indefinitely in the other direction, however.
6. When both directions have been shut down, the connection is released.
7. Normally, four TCP segments are needed to release a connection: one FIN and one ACK for
each direction.
8. However, it is possible for the first ACK and the second FIN to be contained in the same
segment, reducing the total count to three.
9. To avoid the two-army problem, timers are used.
10. If a response to a FIN is not forthcoming within two maximum packet lifetimes, the sender of
the FIN releases the connection.
11. The other side will eventually notice that nobody seems to be listening to it anymore and will
time out as well.
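The independent release of each direction (steps 3-5) can be demonstrated with Python's socket.shutdown() over loopback: after the client sends its FIN, data can still flow from the server back to the client. The payloads below are made up for illustration.

```python
import socket
import threading

# Loopback demo of TCP half-close: the client's shutdown(SHUT_WR) sends a
# FIN, closing only its sending direction; the server can still reply.

def run_server(ready, result):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    ready["port"] = srv.getsockname()[1]
    ready["event"].set()
    conn, _ = srv.accept()
    chunks = []
    while True:                       # read until the client's FIN arrives
        data = conn.recv(64)
        if not data:                  # recv() returns b"" at end of stream
            break
        chunks.append(data)
    result["got"] = b"".join(chunks)
    conn.sendall(b"reply after FIN")  # this direction is still open
    conn.close()
    srv.close()

ready = {"event": threading.Event()}
result = {}
t = threading.Thread(target=run_server, args=(ready, result))
t.start()
ready["event"].wait()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", ready["port"]))
cli.sendall(b"last data")
cli.shutdown(socket.SHUT_WR)          # FIN: "no more data from me"
reply = b""
while True:                           # but we can still receive
    chunk = cli.recv(64)
    if not chunk:
        break
    reply += chunk
cli.close()
t.join()
print(result["got"], reply)
```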
Principle Of Reliable Data Transfer:-

Transport layer protocols are a central piece of layered architectures: they provide logical
communication between application processes. The processes use this logical communication to hand
data down from the application to the network, and this transfer of data should be reliable and
secure. The data is transferred in the form of packets, but the difficulty lies in transferring it
reliably.

The problem of reliable transfer arises not only at the transport layer, but also at the application
layer and the link layer. It occurs whenever a reliable service must run on top of an unreliable one.
For example, TCP (Transmission Control Protocol) is a reliable data transfer protocol implemented
on top of an unreliable layer: IP (Internet Protocol), the end-to-end network layer protocol.
Fig:- study of Reliable data transfer

In this model, we design the sender and receiver sides of a protocol over a reliable channel. In
reliable data transfer, the layer receives a message from the layer above, breaks it into segments,
puts a header on each segment, and transfers them. The layer below receives the segments, removes
the header from each, and passes the reassembled message up.

With a reliable channel, no transferred data bits are corrupted or lost, and all are delivered in the
same sequence in which they were sent. This is the service model that TCP offers to the Internet
applications that invoke it.

Similarly, over an unreliable channel we design the sending and receiving sides. The sending side
of the protocol is invoked from the layer above by a call to rdt_send(); it passes the data that is
to be delivered to the application layer at the receiving side (here rdt_send() is the function for
sending data, where rdt stands for reliable data transfer and _send() denotes the sending side).

Fig:- study of Unreliable data transfer

On the receiving side, rdt_rcv() (the function for receiving data, where _rcv() denotes the
receiving side) will be called when a packet arrives from the receiving side of the unreliable
channel. When the rdt protocol wants to deliver data to the application layer, it does so by calling
deliver_data() (the function for delivering data to the upper layer).

In this reliable data transfer protocol, we consider only the case of unidirectional data transfer,
that is, transfer of data from the sending side to the receiving side. The bidirectional (full
duplex) case is conceptually more difficult. Even when considering only unidirectional data
transfer, it is important to note that the sending and receiving sides of our protocol still need to
transmit packets in both directions, as shown in the figure above.

In order to exchange the packets carrying the data to be transferred, both the sending and
receiving sides of rdt also need to exchange control packets in both directions (i.e., back and
forth); each side sends packets to the other by calling udt_send() (the function for sending data
to the other side, where udt stands for unreliable data transfer).
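The rdt_send()/udt_send() interplay can be sketched as a toy stop-and-wait protocol over an in-memory lossy channel. This is a simplified simulation, not TCP: the channel class, the loss rate, and the alternating-bit scheme are illustrative assumptions.

```python
import random

# Toy stop-and-wait rdt over a lossy in-memory "channel": udt_send() may
# drop a packet; the sender retransmits until it sees the matching ACK.

random.seed(1)  # fixed seed so the simulation is repeatable

class LossyChannel:
    def __init__(self, loss_rate=0.3):
        self.loss_rate = loss_rate
        self.packet = None
    def udt_send(self, pkt):               # unreliable transfer: may drop
        self.packet = None if random.random() < self.loss_rate else pkt
    def udt_recv(self):
        return self.packet

def rdt_transfer(data, forward, backward):
    delivered = []
    expected = 0                           # receiver's expected sequence bit
    for seq, byte in enumerate(data):
        bit = seq % 2                      # alternating-bit sequence number
        while True:
            forward.udt_send((bit, byte))  # rdt_send: data + sequence bit
            pkt = forward.udt_recv()
            if pkt is not None:
                if pkt[0] == expected:     # new data: deliver exactly once
                    delivered.append(pkt[1])   # deliver_data()
                    expected ^= 1
                backward.udt_send(("ACK", pkt[0]))   # (re)send the ACK
                ack = backward.udt_recv()
                if ack is not None and ack[1] == bit:
                    break                  # ACK arrived: move to next byte
            # data or ACK lost: the sender times out and retransmits
    return delivered

fwd, bwd = LossyChannel(), LossyChannel()
out = rdt_transfer(b"rdt", fwd, bwd)
print(bytes(out))  # b'rdt'
```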

(TCP Congestion Control)
Congestion in Network-
• Congestion is an important issue that can arise in Packet Switched Network.
• Congestion leads to the loss of packets in transit.
• So, it is necessary to control the congestion in network.
• It is not possible to completely avoid the congestion.
Congestion Control- Congestion control refers to techniques and mechanisms that can either
prevent congestion before it happens or remove congestion after it has happened.
TCP Congestion Control- TCP reacts to congestion by reducing the sender window size. The size
of the sender window is determined by the following two factors-
1. Receiver window size
2. Congestion window size
1. Receiver Window Size- Receiver window size is an advertisement of how much data (in bytes) the
receiver can receive without acknowledgement. The sender should not send more data than the receiver
window size; otherwise, TCP segments are dropped, which causes TCP retransmission. So the sender
should always send data less than or equal to the receiver window size. The receiver dictates its
window size to the sender through the TCP header.

2. Congestion Window- Sender should not send data greater than congestion window size.
Otherwise, it leads to dropping the TCP segments which causes TCP Retransmission. So, sender
should always send data less than or equal to congestion window size. Different variants of TCP use
different approaches to calculate the size of congestion window. Congestion window is known only
to the sender and is not sent over the links.
So always
Sender window size = Minimum (Receiver window size, Congestion window size)
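A minimal sketch of the rule above (the window sizes used in the calls are hypothetical byte counts):

```python
def sender_window(receiver_window, congestion_window):
    # TCP may not outrun either the receiver (flow control) or the
    # network (congestion control), so it takes the smaller of the two.
    return min(receiver_window, congestion_window)

print(sender_window(65535, 11680))  # network-limited: 11680
print(sender_window(4096, 11680))   # receiver-limited: 4096
```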
TCP Congestion Policy:- TCP’s general policy for handling congestion consists of following three
phases-
1. Slow Start
2. Congestion Avoidance
3. Congestion Detection
1. Slow Start Phase- Initially, sender sets congestion window size = Maximum Segment Size (1
MSS). After receiving each acknowledgment, sender increases the congestion window size by 1
MSS. In this phase, the size of congestion window increases exponentially.
The followed formula is-
Congestion window size = Congestion window size + Maximum segment size
This is shown below-
• After 1 round trip time, congestion window size = 2^1 = 2 MSS
• After 2 round trip times, congestion window size = 2^2 = 4 MSS
• After 3 round trip times, congestion window size = 2^3 = 8 MSS, and so on.
This phase continues until the congestion window size reaches the slow start threshold.
Threshold = Maximum number of TCP segments that receiver window can accommodate / 2
= (Receiver window size / Maximum Segment Size) / 2
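The exponential growth up to the threshold can be sketched as a short simulation. The window is measured in MSS units, and the threshold and round count are made-up values for illustration:

```python
MSS = 1  # measure the congestion window in MSS units

def slow_start(threshold, rounds):
    # Each RTT acknowledges the whole current window, and every ACK adds
    # 1 MSS, so the window doubles per round until it hits the threshold.
    cwnd = 1 * MSS
    trace = [cwnd]
    for _ in range(rounds):
        cwnd = min(cwnd * 2, threshold)
        trace.append(cwnd)
    return trace

print(slow_start(threshold=8, rounds=4))  # [1, 2, 4, 8, 8]
```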
2. Congestion Avoidance Phase- After reaching the threshold, Sender increases the congestion
window size linearly to avoid the congestion. On receiving each acknowledgement, sender
increments the congestion window size by 1.
The followed formula is- Congestion window size = Congestion window size + 1
This phase continues until the congestion window size becomes equal to the receiver window size.
3. Congestion Detection Phase-
When sender detects the loss of segments, it reacts in different ways depending on how the loss is
detected-
Case-01: Detection On Time Out- Time Out Timer expires before receiving the acknowledgement
for a segment. This case suggests the stronger possibility of congestion in the network. There are
chances that a segment has been dropped in the network.
Reaction- In this case, sender reacts by-
• Setting the slow start threshold to half of the current congestion window size.
• Decreasing the congestion window size to 1 MSS.
• Resuming the slow start phase.

Case-02: Detection On Receiving 3 Duplicate Acknowledgements- The sender receives 3 duplicate
acknowledgements for a segment. This case suggests a weaker possibility of congestion in the
network: a segment may have been dropped, but a few segments sent later have reached the receiver.
Reaction- In this case, sender reacts by-
• Setting the slow start threshold to half of the current congestion window size.
• Decreasing the congestion window size to slow start threshold.
• Resuming the congestion avoidance phase.
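Both reactions can be summarised in one hypothetical helper (windows are in MSS units; the function name and return shape are assumptions for illustration, not a real API):

```python
def on_congestion_event(cwnd, ssthresh, event):
    # Mirrors the two cases above: returns (new cwnd, new ssthresh, phase).
    if event == "timeout":            # stronger sign of congestion
        return 1, cwnd // 2, "slow start"
    if event == "3-dup-acks":         # weaker sign of congestion
        return cwnd // 2, cwnd // 2, "congestion avoidance"
    return cwnd, ssthresh, "no change"

print(on_congestion_event(16, 32, "timeout"))     # (1, 8, 'slow start')
print(on_congestion_event(16, 32, "3-dup-acks"))  # (8, 8, 'congestion avoidance')
```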
(TCP Header Format & Timer Management)
TCP Header-
The following diagram represents the TCP header format-

1. Source Port-
• Source Port is a 16 bit field.
• It identifies the port of the sending application.
2. Destination Port-
• Destination Port is a 16 bit field.
• It identifies the port of the receiving application.
It is important to note that a TCP connection is uniquely identified by using Combination of port
numbers and IP Addresses of sender and receiver. IP Addresses indicate which systems are
communicating. Port numbers indicate which end to end sockets are communicating.
3. Sequence Number-
• Sequence number is a 32 bit field.
• TCP assigns a unique sequence number to each byte of data contained in the TCP segment.
• This field contains the sequence number of the first data byte.
4. Acknowledgement Number-
• Acknowledgment number is a 32 bit field.
• It contains sequence number of the data byte that receiver expects to receive next from the
sender.
• It is always sequence number of the last received data byte incremented by 1.
5. Header Length-
• Header length is a 4 bit field.
• It contains the length of TCP header.
• It helps in knowing from where the actual data begins.
The length of TCP header always lies in the range of [20 bytes , 60 bytes].

6. Reserved Bits-
• The 6 bits are reserved.
• These bits are not used
7. URG Bit-
URG bit is used to treat certain data on an urgent basis.
When URG bit is set to 1,
• It indicates the receiver that certain amount of data within the current segment is urgent.
• Urgent data is pointed out by evaluating the urgent pointer field.
• The urgent data has to be prioritized.
• Receiver forwards urgent data to the receiving application on a separate channel.
8. ACK Bit-
ACK bit indicates whether acknowledgement number field is valid or not.
• When ACK bit is set to 1, it indicates that acknowledgement number contained in the TCP
header is valid.
• For all TCP segments except request segment, ACK bit is set to 1.
• Request segment is sent for connection establishment during Three Way Handshake.
9. PSH Bit-
PSH bit is used to push the entire buffer immediately to the receiving application.
When PSH bit is set to 1,
• All the segments in the buffer are immediately pushed to the receiving application.
• No wait is done for filling the entire buffer.
• This makes the entire buffer to free up immediately.
It is important to note that unlike the URG bit, the PSH bit does not prioritize the data. It just
causes all the segments in the buffer to be pushed immediately to the receiving application, in the
same order in which they arrived. Setting PSH bit = 1 routinely is not good practice, because it
interrupts the receiving application and forces it to act immediately.
10. RST Bit-

RST bit is used to reset the TCP connection.
When RST bit is set to 1,
• It indicates the receiver to terminate the connection immediately.
• It causes both the sides to release the connection and all its resources abnormally.
• The transfer of data ceases in both the directions.
• It may result in the loss of data that is in transit.
This is used only when-
• There are unrecoverable errors.
• There is no chance of terminating the TCP connection normally.
11. SYN Bit-
SYN bit is used to synchronize the sequence numbers.
When SYN bit is set to 1,
• It indicates the receiver that the sequence number contained in the TCP header is the initial
sequence number.
• Request segment sent for connection establishment during Three way handshake contains
SYN bit set to 1.

12. FIN Bit-
FIN bit is used to terminate the TCP connection.
When FIN bit is set to 1,
• It indicates the receiver that the sender wants to terminate the connection.
• FIN segment sent for TCP Connection Termination contains FIN bit set to 1.

13. Window Size-
• Window size is a 16 bit field.
• It contains the size of the receiving window of the sender.
• It advertises how much data (in bytes) the sender can receive without acknowledgement.
• Thus, window size is used for Flow Control.
It is important to note that The window size changes dynamically during data transmission. It usually
increases during TCP transmission up to a point where congestion is detected. After congestion is
detected, the window size is reduced to avoid having to drop packets.
14. Checksum-
• Checksum is a 16 bit field used for error control.
• It verifies the integrity of the data in the TCP payload.
• The sender computes the checksum and places it in the checksum field before sending the data.
• The receiver rejects data that fails the checksum verification.


15. Urgent Pointer-
• Urgent pointer is a 16 bit field.
• It indicates how much data in the current segment counting from the first data byte is urgent.
• Urgent pointer added to the sequence number indicates the end of urgent data byte.
• This field is considered valid and evaluated only if the URG bit is set to 1.
16. Options-
• Options field is used for several purposes.
• The size of options field vary from 0 bytes to 40 bytes.
Options field is generally used for the following purposes-
1. Time stamp
2. Window size extension
3. Parameter negotiation
4. Padding
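The fixed 20-byte header layout described above can be parsed with Python's struct module. The sample segment below is hand-built for illustration; a real segment would come from a packet capture.

```python
import struct

# Parse the fixed 20-byte TCP header. Field order: source port, destination
# port (16 bits each), sequence, acknowledgement (32 bits each), data
# offset + reserved + flags (16 bits), window, checksum, urgent pointer.

def parse_tcp_header(raw):
    src, dst, seq, ack, off_flags, window, checksum, urg = struct.unpack(
        "!HHIIHHHH", raw[:20])
    return {
        "src_port": src, "dst_port": dst, "seq": seq, "ack": ack,
        "header_len": (off_flags >> 12) * 4,  # data offset is in 32-bit words
        "flags": {
            "URG": bool(off_flags & 0x20), "ACK": bool(off_flags & 0x10),
            "PSH": bool(off_flags & 0x08), "RST": bool(off_flags & 0x04),
            "SYN": bool(off_flags & 0x02), "FIN": bool(off_flags & 0x01),
        },
        "window": window, "checksum": checksum, "urgent_ptr": urg,
    }

# A hand-built SYN segment: port 12345 -> 80, seq 1000, 20-byte header.
raw = struct.pack("!HHIIHHHH", 12345, 80, 1000, 0, (5 << 12) | 0x02,
                  65535, 0, 0)
info = parse_tcp_header(raw)
print(info["src_port"], info["dst_port"], info["flags"]["SYN"],
      info["header_len"], info["window"])
# 12345 80 True 20 65535
```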

TCP Timer Management:-
Timers used by TCP to avoid excessive delays during communication are called as TCP Timers.
The 4 important timers used by a TCP implementation are-
1. Time Out Timer
2. Time Wait Timer
3. Keep Alive Timer
4. Persistent Timer

Time Out Timer- TCP uses a time out timer for retransmission of lost segments.

• Sender starts a time out timer after transmitting a TCP segment to the receiver.
• If sender receives an acknowledgement before the timer goes off, it stops the timer.
• If the sender does not receive any acknowledgement and the timer goes off, then TCP
retransmission occurs.
• Sender retransmits the same segment and resets the timer.
• The value of time out timer is dynamic and changes with the amount of traffic in the network.
• Time out timer is also called as Retransmission Timer.

Time Wait Timer- TCP uses a time wait timer during connection termination.
• Sender starts the time wait timer after sending the ACK for the second FIN segment.
• It allows to resend the final acknowledgement if it gets lost.

• It prevents the just closed port from reopening again quickly to some other application.
• It ensures that all the segments heading towards the just closed port are discarded.
• The value of time wait timer is usually set to twice the lifetime of a TCP segment.

Keep Alive Timer- TCP uses a keep alive timer to prevent long idle TCP connections.
• Each time server hears from the client, it resets the keep alive timer to 2 hours.
• If server does not hear from the client for 2 hours, it sends 10 probe segments to the client.
• These probe segments are sent at a gap of 75 seconds.
• If server receives no response after sending 10 probe segments, it assumes that the client is
down.
• Then, server terminates the connection automatically.

Persistent Timer-
• TCP uses a persistent timer to deal with a zero-window-size deadlock situation.
• It keeps the window size information flowing even if the other end closes its receiver
window.
• Sender starts the persistent timer on receiving an ACK from the receiver with a zero window
size.
• When persistent timer goes off, sender sends a special segment to the receiver.
• This special segment is called as probe segment and contains only 1 byte of new data.
• Response sent by the receiver to the probe segment gives the updated window size.
• If the updated window size is non-zero, it means data can be sent now.
If the updated window size is still zero, the persistent timer is set again and the cycle repeats.
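The probe cycle can be sketched as a toy simulation. The list of advertised window sizes stands in for the receiver's replies to successive probe segments and is made up for illustration:

```python
# Toy model of the persist timer: the sender probes a zero window with
# 1-byte segments until the receiver advertises a non-zero window.

def probe_zero_window(advertised_windows):
    # advertised_windows: window size the receiver reports to each probe
    probes = 0
    for window in advertised_windows:
        probes += 1                   # persist timer fires: send a probe
        if window > 0:
            return probes, window     # deadlock broken: resume sending
    return probes, 0                  # window still closed after all probes

print(probe_zero_window([0, 0, 4096]))  # (3, 4096)
```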
Application Layer
The application layer in the OSI model is the layer closest to the end user, which means that the
application layer and the end user interact directly with the software application. Application
layer programs are based on the client/server model.
Application layer functions: -

• Identifying communication partners: The application layer identifies the availability of
communication partners for an application with data to transmit.
• Determining resource availability: The application layer determines whether sufficient
network resources are available for the requested communication.
• Synchronizing communication: All the communications occur between the applications
requires cooperation which is managed by an application layer.
Services of Application Layers:-

• Network Virtual terminal: An application layer allows a user to log on to a remote host. To
do so, the application creates a software emulation of a terminal at the remote host. The user's
computer talks to the software terminal, which in turn, talks to the host. The remote host
thinks that it is communicating with one of its own terminals, so it allows the user to log on.
• File Transfer, Access, and Management (FTAM): An application allows a user to access
files in a remote computer, to retrieve files from a computer and to manage files in a remote
computer. FTAM defines a hierarchical virtual file in terms of file structure, file attributes
and the kind of operations performed on the files and their attributes.
• Addressing: To achieve communication between client and server, addressing is needed. When a
client makes a request to the server, the request contains the server address and its own
address. The server's response to the client contains the destination address, i.e., the client
address. To achieve this kind of addressing, DNS is used.
• Mail Services: An application layer provides Email forwarding and storage.
• Directory Services: An application contains a distributed database that provides access for
global information about various objects and services.
• Authentication: It authenticates the sender or receiver's message or both.
World Wide Web: -
World Wide Web, which is also known as a Web, is a collection of websites or web pages stored in
web servers and connected to local computers through the internet. These websites contain text
pages, digital images, audios, videos, etc. Users can access the content of these sites from any part of
the world over the internet using their devices such as computers, laptops, cell phones, etc. The
WWW, along with internet, enables the retrieval and display of text and media to your device.
HTTP:-
HTTP stands for Hyper Text Transfer Protocol. It is a protocol used to access data on the World
Wide Web (WWW). The HTTP protocol can be used to transfer data in the form of plain text,
hypertext, audio, video, and so on. The protocol is called Hyper Text Transfer Protocol because its
efficiency allows it to be used in a hypertext environment, where there are rapid jumps from one
document to another.
HTTP is similar to FTP in that it also transfers files from one host to another. But HTTP is
simpler than FTP, as HTTP uses only one connection, i.e., no separate control connection is used to
transfer the files.
HTTP is used to carry the data in the form of MIME (Multipurpose Internet Mail Extensions)-like
format. HTTP is similar to SMTP as the data is transferred between client and server. The HTTP
differs from the SMTP in the way the messages are sent from the client to the server and from server

to the client. SMTP messages are stored and forwarded while HTTP messages are delivered
immediately.
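A sketch of the plain-text request/response format HTTP uses on the wire. The host name and the canned response are hypothetical examples:

```python
# Build a request and parse a response by hand to show the wire format.

def build_get_request(host, path="/"):
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n"
            "\r\n")                    # blank line ends the header block

def parse_status_line(response):
    status_line = response.split("\r\n", 1)[0]
    version, code, reason = status_line.split(" ", 2)
    return version, int(code), reason

req = build_get_request("example.com", "/index.html")
print(req.splitlines()[0])             # GET /index.html HTTP/1.1

canned = "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html></html>"
print(parse_status_line(canned))       # ('HTTP/1.1', 200, 'OK')
```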

FTP (File transfer protocol): -

FTP is a standard internet protocol provided by TCP/IP used for transmitting the files from one host
to another. It is mainly used for transferring the web page files from their creator to the computer that
acts as a server for other computers on the internet. It is also used for downloading the files to
computer from other servers.
It provides the sharing of files. It is used to encourage the use of remote computers. It transfers the
data more reliably and efficiently.
There are two types of connections in FTP:

• Control Connection: The control connection uses very simple rules for communication.
Through control connection, we can transfer a line of command or line of response at a time. The
control connection is made between the control processes. The control connection remains
connected during the entire interactive FTP session.
• Data Connection: The Data Connection uses very complex rules as data types may vary. The
data connection is made between data transfer processes. The data connection opens when a
command comes for transferring the files and closes when the file is transferred.
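As a sketch of how the client learns where to open the data connection in passive mode: the server's 227 reply encodes the IP address and data port as six comma-separated numbers, with the port split into two bytes. The reply string below is a made-up example.

```python
import re

# Parse a 227 "Entering Passive Mode" reply into the (host, port) pair
# the client must connect to for the separate data connection.

def parse_pasv_reply(reply):
    nums = re.search(r"\((\d+,\d+,\d+,\d+,\d+,\d+)\)", reply)
    h1, h2, h3, h4, p1, p2 = map(int, nums.group(1).split(","))
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2   # port = high*256 + low

print(parse_pasv_reply("227 Entering Passive Mode (192,168,1,2,20,21)."))
# ('192.168.1.2', 5141)
```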
Advantages of FTP:

• Speed: One of the biggest advantages of FTP is speed. FTP is one of the fastest ways to
transfer files from one computer to another.
• Efficient: It is more efficient as we do not need to complete all the operations to get the entire
file.
• Security: To access the FTP server, we need to login with the username and password.
Therefore, we can say that FTP is more secure.
• Back & forth movement: FTP allows us to transfer the files back and forth. Suppose you are a
manager of the company, you send some information to all the employees, and they all send
information back on the same server.
Disadvantages of FTP:

• The standard requirement of the industry is that all FTP transmissions should be encrypted.
However, not all FTP providers are equal and not all of them offer encryption. So we have to look
out for FTP providers that provide encryption.
• FTP serves two operations, i.e., sending and receiving large files on a network. However, the
size limit on a file that can be sent is 2 GB. It also doesn't allow you to run simultaneous
transfers to multiple receivers.
• Passwords and file contents are sent in clear text that allows unwanted eavesdropping. So, it is
quite possible that attackers can carry out the brute force attack by trying to guess the FTP
password.
• It is not compatible with every system.


SSH Protocol: -
SSH stands for Secure Shell or Secure Socket Shell. It is a cryptographic network protocol that
allows two computers to communicate and share the data over an insecure network such as the
internet. It is used to login to a remote server to execute commands and data transfer from one
machine to another machine.
The SSH protocol was developed by SSH communication security Ltd to safely communicate with
the remote machine.
Secure communication provides a strong password authentication and encrypted communication
with a public key over an insecure channel. It is used to replace unprotected remote login protocols
such as Telnet, rlogin, etc., and insecure file transfer protocol FTP.
Its security features are widely used by network administrators for managing systems and
applications remotely. The SSH protocol protects the network from various attacks such as DNS
spoofing, IP source routing, and IP spoofing.
A simple example can be understood, such as suppose you want to transfer a package to one of your
friends. Without SSH protocol, it can be opened and read by anyone. But if you will send it using
SSH protocol, it will be encrypted and secured with the public keys, and only the receiver can open
it.