UNIT 4 & 5th Computer Network Cs 6th Sem Notes
UNIT-04
Network Layer Need:- The network layer is considered the backbone of the OSI Model. It selects
and manages the best logical path for data transfer between nodes. This layer contains hardware
devices such as routers, bridges, firewalls and switches, but it actually creates a logical image of the
most efficient communication route and implements it with a physical medium. Network layer
protocols exist in every host or router. The router examines the header fields of all the IP packets that
pass through it. Internet Protocol and Netware IPX/SPX are the most common protocols associated
with the network layer. In the OSI model, the network layer responds to requests from the layer
above it (transport layer) and issues requests to the layer below it (data link layer).
Network Layer Services :-
1. It translates logical network addresses into physical addresses.
2. Routers and gateways operate in the network layer. The network layer provides the mechanism for routing packets to their final destination.
3. Connection services are provided, including flow control, error control and packet sequence control.
4. It breaks larger packets into smaller packets.
There are two types of service that can be provided by the network layer:-
1. An unreliable connectionless service.
2. A connection-oriented service, which may be reliable or unreliable.
Network Layer Design issues:-
a) Store-and-Forward Packet Switching
b) Services Provided to the Transport Layer
c) Implementation of Connectionless Service
d) Implementation of Connection-Oriented Service
a) Store-and-Forward Packet Switching
A host with a packet to send transmits it to the nearest router, either on its own LAN or over a point-
to-point link to the carrier. The packet is stored there until it has fully arrived so the checksum can be
verified. Then it is forwarded to the next router along the path until it reaches the destination host,
where it is delivered. This mechanism is store-and-forward packet switching.
Routing algorithms:
A routing algorithm is a set of step-by-step operations used to direct Internet traffic efficiently. When
a packet of data leaves its source, there are many different paths it can take to its destination. The
routing algorithm is used to determine mathematically the best path to take.
Properties of routing algorithm:
Correctness: The routing should be done properly and correctly so that the packets may reach their
proper destination.
Simplicity: The routing should be done in a simple manner so that the overhead is as low as
possible. With increasing complexity of the routing algorithms the overhead also increases.
Robustness: Once a major network becomes operative, it may be expected to run continuously for
years without any failures. The algorithms designed for routing should be robust enough to handle
hardware and software failures and should be able to cope with changes in the topology and traffic
without requiring all jobs in all hosts to be aborted and the network rebooted every time some router
goes down.
Stability: The routing algorithms should be stable under all possible circumstances.
Fairness: Every node connected to the network should get a fair chance of transmitting its packets.
This is generally done on a first-come, first-served basis.
Optimality: The routing algorithm should be optimal in terms of throughput and minimizing mean
packet delays. There is a trade-off between these goals, and one has to choose depending on the requirements.
Routing can be grouped into two categories
1. Adaptive Routing Algorithm: These algorithms change their routing decisions to reflect changes
in the topology and in traffic as well. These get their routing information from adjacent routers or
from all routers. The optimization parameters are the distance, number of hops and estimated transit
time. This can be further classified as follows:
1. Centralized: In this type, some central node in the network gets the entire information about the
network topology, the traffic and the other nodes. This central node then transmits the information to the
respective routers. The advantage of this is that only one node is required to keep the information.
The disadvantage is that if the central node goes down the entire network is down, i.e. a single point of
failure.
2. Isolated: In this method the node decides the routing without seeking information from other
nodes. The sending node does not know about the status of a particular link. The disadvantage is that
the packet may be sent through a congested route, resulting in a delay. Some examples of this type of
routing algorithm are:
a. Hot Potato: When a packet comes to a node, it tries to get rid of it as fast as it can, by putting it on
the shortest output queue without regard to where that link leads. A variation of this algorithm is to
combine static routing with the hot potato algorithm. When a packet arrives, the routing algorithm
takes into account both the static weights of the links and the queue lengths.
b. Backward Learning: In this method the routing tables at each node get modified by information
from the incoming packets. One way to implement backward learning is to include the identity of the
source node in each packet, together with a hop counter that is incremented on each hop. When a
node receives a packet on a particular line, it notes down the number of hops the packet has taken to reach it
from the source node. If the previous hop count stored in the node is better than the current
one then nothing is done, but if the current value is better then the stored value is updated for future use. The
problem with this is that when the best route goes down, the node cannot recall the second best route to
a particular destination. Hence all the nodes have to forget the stored information periodically and start all
over again.
3. Distributed: In this method the node receives information from its neighbouring nodes and then takes the
decision about which way to send the packet. The disadvantage is that if the topology or traffic changes
in the interval between receiving the information and sending the packet, the packet may be delayed.
2. Non-Adaptive Routing Algorithm: These algorithms do not base their routing decisions on
measurements and estimates of the current traffic and topology. Instead the route to be taken in going
from one node to the other is computed in advance, off-line, and downloaded to the routers when the
network is booted. This is also known as static routing. This can be further classified as:
1. Flooding: Flooding adopts the technique in which every incoming packet is sent out on every
outgoing line except the one on which it arrived. One problem with this method is that packets may
go in a loop; as a result, a node may receive several copies of a particular packet, which is
undesirable. Some techniques adopted to overcome these problems are as follows:
a. Sequence Numbers: Every packet is given a sequence number. When a node receives the packet
it sees its source address and sequence number. If the node finds that it has sent the same packet
earlier then it will not transmit the packet and will just discard it.
b. Hop Count: Every packet has a hop count associated with it. This is decremented (or
incremented) by one by each node which sees it. When the hop count becomes zero (or a maximum
possible value) the packet is dropped.
c. Spanning Tree: The packet is sent only on those links that lead to the destination, by constructing
a spanning tree rooted at the source. This avoids loops in transmission but is possible only when all
the intermediate nodes have knowledge of the network topology.
Flooding is not practical for general kinds of applications, but in cases where a high degree of
robustness is desired, such as in military applications, flooding is of great help.
2. Random Walk: In this method a packet is sent by the node to one of its neighbors randomly. This
algorithm is highly robust. When the network is highly interconnected, this algorithm has the
property of making excellent use of alternative routes. It is usually implemented by sending the
packet onto the least queued link.
Shortest Path Algorithm (Least Cost Routing algorithm):-
• In this the path length between each node is measured as a function of distance, Bandwidth,
average traffic, communication cost, mean queue length, measured delay etc.
• By changing the weighing function, the algorithm then computes the shortest path measured
according to any one of a number of criteria or a combination of criteria.
• For this a graph of subnet is drawn. With each node of graph representing a router and each arc of
the graph representing a communication link. Each link has a cost associated with it.
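As an illustration of least-cost routing on such a graph, the following sketch applies Dijkstra's algorithm to a small hypothetical subnet (the router names and link costs are made up for the example; Python is used only for illustration).

```python
import heapq

def shortest_paths(graph, source):
    """Dijkstra's algorithm: graph maps node -> {neighbour: link cost}."""
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        for v, cost in graph.get(u, {}).items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd            # found a cheaper path to v
                prev[v] = u             # remember the previous hop
                heapq.heappush(heap, (nd, v))
    return dist, prev

# Example subnet: routers A..D with symmetric link costs (hypothetical values).
graph = {
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 1},
    "D": {"B": 4, "C": 1},
}
print(shortest_paths(graph, "A")[0])    # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```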
IP Addresses: Each IP address is 32 bits long and is represented in "dot-decimal
notation", where each byte is written in decimal form and the bytes are separated by periods. An IP
address looks like 193.32.216.9, where 193 is the decimal notation of the first 8 bits of the
address, 32 is the decimal notation of the second 8 bits, and so on.
In the above figure, a router has three interfaces labeled 1, 2 & 3, and each router interface
has its own IP address. Each host also has its own interface and IP address. All the interfaces
attached to LAN 1 have an IP address of the form 223.1.1.xxx, and the interfaces attached
to LAN 2 and LAN 3 have IP addresses of the form 223.1.2.xxx and 223.1.3.xxx respectively.
Each IP address consists of two parts: the first part (the first three bytes of the IP address in this example) specifies the
network, and the second part (the last byte) specifies the host in the network.
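A short sketch of how dot-decimal notation maps to the underlying 32-bit value, using Python's standard socket and struct modules (the address 193.32.216.9 is the one from the text):

```python
import socket
import struct

def ip_to_int(dotted: str) -> int:
    """Convert dot-decimal notation into the 32-bit integer it represents."""
    return struct.unpack("!I", socket.inet_aton(dotted))[0]

def int_to_ip(value: int) -> str:
    """Convert a 32-bit integer back into dot-decimal notation."""
    return socket.inet_ntoa(struct.pack("!I", value))

print(ip_to_int("193.32.216.9"))        # 3240155145
print(int_to_ip(3240155145))            # 193.32.216.9
print(bin(ip_to_int("193.32.216.9")))   # 0b11000001001000001101100000001001
```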
Class Addressing
An IP address is 32 bits long and is divided into the following sub-classes:
➢ Class A
➢ Class B
➢ Class C
➢ Class D
➢ Class E
In the above diagram, we observe that each class has a specific range of IP addresses. The class of an
IP address determines the number of bits used for the network and host parts, and hence the number of networks and hosts
available in that class.
Class A:
In Class A, an IP address is assigned to those networks that contain a large number of hosts.
o The network ID is 8 bits long.
o The host ID is 24 bits long.
In Class A, the first (highest-order) bit of the first octet is always set to 0, and the remaining 7 bits
determine the network ID. The remaining 24 bits determine the host ID in any network.
The total number of networks in Class A = 2^7 = 128 network addresses
The total number of hosts in Class A = 2^24 - 2 = 16,777,214 host addresses
Class B:- In Class B, an IP address is assigned to those networks that range from small-sized to
large-sized networks.
o The Network ID is 16 bits long.
o The Host ID is 16 bits long.
In Class B, the higher-order bits of the first octet are always set to 10, and the remaining 14 bits
determine the network ID. The other 16 bits determine the Host ID.
The total number of networks in Class B = 2^14 = 16,384 network addresses
The total number of hosts in Class B = 2^16 - 2 = 65,534 host addresses
Class C:- In Class C, an IP address is assigned to small-sized networks.
o The Network ID is 24 bits long.
o The Host ID is 8 bits long.
In Class C, the higher-order bits of the first octet are always set to 110, and the remaining 21 bits
determine the network ID. The other 8 bits determine the Host ID.
The total number of networks in Class C = 2^21 = 2,097,152 network addresses
The total number of hosts in Class C = 2^8 - 2 = 254 host addresses
Class D:- In Class D, an IP address is reserved for multicast addresses. It does not possess
subnetting. The higher-order bits of the first octet are always set to 1110, and the remaining bits
identify the multicast group.
Class E:- In Class E, an IP address is reserved for future use or for research and development
purposes. It does not possess any subnetting. The higher-order bits of the first octet are always set to
1111, and the remaining bits are not assigned to networks or hosts.
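A small sketch that classifies an address by the leading bits of its first octet, following the class ranges described above (the sample addresses are arbitrary):

```python
def ip_class(address: str) -> str:
    """Classify an IPv4 address by the leading bits of its first octet."""
    first = int(address.split(".")[0])
    if first < 128:     # leading bit 0
        return "A"
    if first < 192:     # leading bits 10
        return "B"
    if first < 224:     # leading bits 110
        return "C"
    if first < 240:     # leading bits 1110 (multicast)
        return "D"
    return "E"          # leading bits 1111 (reserved)

print(ip_class("10.0.0.1"), ip_class("172.16.5.4"), ip_class("224.0.0.5"))  # A B D
```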
IP header format:-
Unlike the post office, a router or computer cannot determine the size of a package without
additional information. A person can look at a letter or box and determine how big it is, but a router
cannot. Therefore, additional information is required at the IP layer, in addition to the source and
destination IP addresses.
Figure shows a logical representation of the information that is used at the IP layer to enable the
delivery of electronic data. This information is called a header, and is analogous to the addressing
information on an envelope. A header contains the information required to route data on the Internet,
and has the same format regardless of the type of data being sent. This is the same for an envelope
where the address format is the same regardless of the type of letter being sent.
The fields in the IP header and their descriptions are
• Version - A 4-bit field that identifies the IP version being used. The current version is 4, and
this version is referred to as IPv4.
• Length - A 4-bit field containing the length of the IP header in 32-bit words. The
minimum length of an IP header is 20 bytes, or five 32-bit words, so this field contains at
least 5. Because the field is 4 bits wide, its maximum value is 15, which corresponds to a
maximum header length of 60 bytes when options are present.
• Type of Service (ToS) - The 8-bit ToS uses 3 bits for IP Precedence, 4 bits for ToS with the
last bit not being used. The 4-bit ToS field, although defined, has never been used.
• IP Precedence - A 3-bit field used to identify the level of service a packet receives in the
network.
• Differentiated Services Code Point (DSCP) - A 6-bit field used to identify the level of
service a packet receives in the network. DSCP is a 3-bit expansion of IP precedence with the
elimination of the ToS bits.
• Total Length - Specifies the length of the IP packet that includes the IP header and the user
data. The length field is 2 bytes, so the maximum size of an IP packet is 2^16 - 1, or 65,535
bytes.
• Identifier, Flags, and Fragment Offset - As an IP packet moves through the Internet, it
might need to cross a network that cannot handle the size of the packet. The packet will be
divided, or fragmented, into smaller packets and reassembled later. These fields are used to
fragment and reassemble packets.
• Time to Live (TTL) - It is possible for an IP packet to roam aimlessly around the Internet. If
there is a routing problem or a routing loop, then you don't want packets to be forwarded
forever. A routing loop is when a packet is continually routed through the same routers over
and over. The TTL field is initially set to a number and decremented by every router that is
passed through. When TTL reaches 0 the packet is discarded.
• Protocol - In the layered protocol model, the layer that determines which application the data
is from or which application the data is for is indicated using the Protocol field. This field
does not identify the application, but identifies a protocol that sits above the IP layer that is
used for application identification.
• Header Checksum - A value calculated based on the contents of the IP header. Used to
determine if any errors have been introduced during transmission.
• Source IP Address - 32-bit IP address of the sender.
• Destination IP Address - 32-bit IP address of the intended recipient.
• Options and Padding - A field that varies in length from 0 to a multiple of 32-bits. If the
option values are not a multiple of 32-bits, 0s are added or padded to ensure this field
contains a multiple of 32 bits.
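A minimal sketch showing how the fixed 20-byte IPv4 header described above can be decoded from raw bytes with Python's struct module (the sample header values are made up for illustration):

```python
import struct

def parse_ipv4_header(packet: bytes) -> dict:
    """Decode the fixed 20-byte portion of an IPv4 header (network byte order)."""
    ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, cksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", packet[:20])
    return {
        "version": ver_ihl >> 4,              # upper 4 bits of the first byte
        "header_len": (ver_ihl & 0x0F) * 4,   # IHL is counted in 32-bit words
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,                    # 6 = TCP, 17 = UDP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

# A hand-built sample header: version 4, IHL 5, total length 40, TTL 64, protocol TCP.
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 0, 0, 64, 6, 0,
                     bytes([192, 168, 1, 10]), bytes([10, 0, 0, 1]))
print(parse_ipv4_header(sample))
```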
Packet Forwarding:- Packet forwarding is the basic method for sharing information across
systems on a network. Packets are transferred between a source interface and a destination
interface, usually on two different systems. When you issue a command or send a message to a
nonlocal interface, your system forwards those packets onto the local network. The interface with
the destination IP address that is specified in the packet headers then retrieves the packets from the
local network. If the destination address is not on the local network, the packets are then
forwarded to the next adjacent network, or hop.
Fragmentation is the process of breaking a packet into smaller pieces so that they will fit into the
frames of the underlying network. The receiving system reassembles the pieces into the original
packets. The term MTU (maximum transmission unit) refers to the maximum amount of data that
can travel in a frame. Different networks have different MTU sizes, so packets may need to be
fragmented in order to fit within the frames of the network that they transit. Internetworking
protocols such as IP use fragmentation because each of the networks that a packet may travel over
could have a different frame size. Fragmentation occurs at routers that connect two networks with
different MTUs. While it is possible to design an internal network with the same MTU size, this is
not an option on the Internet, which includes thousands of independently managed interconnected
networks.
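A small worked sketch of fragmentation, assuming a hypothetical 4000-byte datagram with a 20-byte header crossing a link with a 1500-byte MTU (fragment offsets are expressed in 8-byte units, as in the IP header):

```python
def fragment_sizes(total_length: int, mtu: int, header: int = 20):
    """Work out IPv4 fragment payload sizes, offsets and MF flags."""
    payload = total_length - header            # data bytes to be carried
    max_data = (mtu - header) // 8 * 8         # data per fragment, multiple of 8
    fragments, offset = [], 0
    while payload > 0:
        size = min(max_data, payload)
        more = payload > size                  # MF flag: 1 on all but the last fragment
        fragments.append((size, offset // 8, int(more)))
        offset += size
        payload -= size
    return fragments

# Hypothetical case: a 4000-byte datagram crossing a link with a 1500-byte MTU.
print(fragment_sizes(4000, 1500))   # [(1480, 0, 1), (1480, 185, 1), (1020, 370, 0)]
```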
IPv4 vs IPv6:
• IPv4 has a 32-bit address length; IPv6 has a 128-bit address length.
• IPv4 supports manual and DHCP address configuration; IPv6 supports auto-configuration and renumbering of addresses.
• In IPv4, end-to-end connection integrity is unachievable; in IPv6, end-to-end connection integrity is achievable.
• IPv4 has a broadcast message transmission scheme; in IPv6, multicast and anycast message transmission schemes are available instead.
• IPv4 has a header of 20-60 bytes; IPv6 has a fixed 40-byte header.
User Datagram Protocol (UDP) is a Transport Layer protocol. UDP is a part of Internet Protocol
suite, referred to as the UDP/IP suite. Unlike TCP, it is an unreliable, connectionless protocol, so there is
no need to establish a connection prior to data transfer.
Though Transmission Control Protocol (TCP) is the dominant transport layer protocol used with
most Internet services and provides assured delivery, reliability and much more, all these services
cost us additional overhead and latency. Here, UDP comes into the picture. For real-time
services like computer gaming, voice or video communication and live conferences, we need UDP.
Since high performance is needed, UDP permits packets to be dropped instead of processing delayed
packets. There is no error checking in UDP, so it also saves bandwidth. User Datagram Protocol
(UDP) is more efficient in terms of both latency and bandwidth.
UDP Header Format:- The UDP header is a simple, fixed 8-byte header, while the TCP header may vary
from 20 bytes to 60 bytes. The first 8 bytes contain all the necessary header information and the remaining part
consists of data. The UDP port number fields are each 16 bits long, so port numbers range
from 0 to 65535; port number 0 is reserved. Port numbers help to distinguish different user
requests or processes.
1. Source Port : Source Port is 2 Byte long field used to identify port number of source.
2. Destination Port : It is 2 Byte long field, used to identify the port of destined packet.
3. Length : Length is the length of the UDP datagram, including the header and the data. It is a 16-bit field.
4. Checksum : Checksum is 2 Bytes long field. It is the 16-bit one’s complement of the one’s
complement sum of the UDP header, pseudo header of information from the IP header and
the data, padded with zero octets at the end (if necessary) to make a multiple of two octets.
Unlike TCP, Checksum calculation is not mandatory in UDP. No Error control or flow control is
provided by UDP. Hence UDP depends on IP and ICMP for error reporting.
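A minimal sketch that packs the four 16-bit UDP header fields described above (the port numbers and payload are arbitrary; a checksum of 0 means "not computed", which IPv4 permits):

```python
import struct

def build_udp_header(src_port: int, dst_port: int, payload: bytes,
                     checksum: int = 0) -> bytes:
    """Pack the four 16-bit UDP header fields; length covers header + data."""
    length = 8 + len(payload)          # 8-byte header plus the payload
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

header = build_udp_header(5000, 53, b"example query")
print(struct.unpack("!HHHH", header))  # (5000, 53, 21, 0)
```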
Applications of UDP:
❖ Used for simple request response communication when size of data is less and hence there is
lesser concern about flow and error control.
❖ It is a suitable protocol for multicasting as UDP supports packet switching.
❖ UDP is used for some routing update protocols like RIP(Routing Information Protocol).
❖ Normally used for real time applications which can not tolerate uneven delays between
sections of a received message.
❖ The following implementations use UDP as a transport layer protocol:
o NTP (Network Time Protocol)
o DNS (Domain Name Service)
o BOOTP, DHCP.
o NNP (Network News Protocol)
o Quote of the day protocol
o TFTP, RTSP, RIP, OSPF.
❖ Application layer can do some of the tasks through UDP-
o Trace Route
o Record Route
o Time stamp
❖ UDP takes a datagram from the network layer, attaches its header and sends it to the user, so it
works fast.
❖ In fact, UDP is almost a null protocol if you remove the checksum field.
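A minimal request/response sketch over UDP sockets, illustrating that no connection establishment is needed (both ends run in one script for brevity; 127.0.0.1 and port 9999 are arbitrary choices):

```python
import socket

# "Server": bind to a local port and wait for datagrams.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 9999))

# "Client": no handshake is needed; just send a datagram.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"what time is it?", ("127.0.0.1", 9999))

request, addr = server.recvfrom(1024)   # datagram plus the sender's address
server.sendto(b"12:00", addr)           # reply goes straight back
print(client.recvfrom(1024)[0])         # b'12:00'
```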
Per-Segment Checksum
Checksum is a simple error detection mechanism to determine the integrity of the data transmitted
over a network. Communication protocols like TCP/IP/UDP implement this scheme in order to
determine whether the received data is corrupted along the network. The sender of an IPv4 datagram
would compute the checksum value based on the data and embed it in the frame. The receiver would
also compute the checksum locally and based on the result ascertain the data integrity. Similarly the
TCP/UDP data which forms the payload for the IP datagram would have its checksum computed and
embedded as a part of the TCP/UDP frame.
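A sketch of the 16-bit one's complement checksum computation described above (RFC 1071 style carry folding; in practice the input would be the header or pseudo-header plus data, padded to an even number of octets):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's complement of the one's complement sum of 16-bit words."""
    if len(data) % 2:                  # pad with a zero octet to an even length
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF

print(hex(internet_checksum(b"\x45\x00\x00\x28\x00\x00\x40\x00\x40\x06")))
```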
Unicast
In unicast streaming, all packets in the data stream must be sent separately to every host requesting
access to the data stream. This type of transmission is inefficient in terms of both network and server
resources and presents obvious scalability issues.
Unicast is a one-to-one connection between the client and the server. Unicast uses IP delivery
techniques such as TCP (Transmission Control Protocol) and UDP (User Datagram Protocol), which
are session-based protocols. Once a Windows Media Player client connects via unicast to a Windows
Media server, that client gets a direct connection to the server. Every unicast client that connects to
the server takes up extra bandwidth. For instance, if you have 10 clients all performing 100 Kbps
(kilobits per second) streams, those clients take up 1,000 Kbps; but if you have a single
client using the 100 Kbps stream, only 100 Kbps is being used.
Multicast
Multicast lets servers direct single copies of data streams that are then replicated and routed to the hosts
that request them. Hence, rather than sending thousands of copies of a streaming event, the server
instead streams a single flow that is then directed by routers on the network to the hosts that have
indicated that they want to receive the stream. This removes the requirement to send redundant traffic
over the network and is also likely to reduce CPU load on systems which are not using the multicast
system, yielding important efficiency gains for both server and network.
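A hedged sketch of multicast delivery at the socket level: the receiver joins a group and the sender transmits one copy to the group address (the group 224.1.1.1 and port 5007 are hypothetical, and whether loopback delivery works can depend on the host's configuration):

```python
import socket
import struct

GROUP, PORT = "224.1.1.1", 5007   # hypothetical multicast group and port

# Receiver: join the group so routers forward the single stream to this host.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# Sender: one sendto() reaches every host that has joined the group.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
tx.sendto(b"stream chunk", (GROUP, PORT))

print(rx.recvfrom(1024)[0])   # b'stream chunk'
```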
TCP Connection Release:-
1. Although TCP connections are full duplex, to understand how connections are released it is
best to think of them as a pair of simplex connections.
2. Each simplex connection is released independently of its sibling.
3. To release a connection, either party can send a TCP segment with the FIN bit set, which
means that it has no more data to transmit.
4. When the FIN is acknowledged, that direction is shut down for new data.
5. Data may continue to flow indefinitely in the other direction, however.
6. When both directions have been shut down, the connection is released.
7. Normally, four TCP segments are needed to release a connection: one FIN and
one ACK for each direction.
8. However, it is possible for the first ACK and the second FIN to be contained in the same
segment, reducing the total count to three.
9. To avoid the two-army problem, timers are used.
10. If a response to a FIN is not forthcoming within two maximum packet lifetimes, the sender of
the FIN releases the connection.
11. The other side will eventually notice that nobody seems to be listening to it anymore and will
time out as well.
Principle Of Reliable Data Transfer:-
Transport layer protocols are a central piece of layered architectures; they provide the logical
communication between application processes. These processes use this logical communication to
transfer data from the transport layer to the network layer, and this transfer of data should be reliable and
secure. The data is transferred in the form of packets, but the problem lies in transferring the data reliably.
The problem of transferring the data reliably occurs not only at the transport layer, but also at the application
layer as well as in the link layer. This problem occurs whenever a reliable service must run on an unreliable
service. For example, TCP (Transmission Control Protocol) is a reliable data transfer protocol that is
implemented on top of an unreliable end-to-end network layer protocol, the Internet Protocol (IP).
In this model, we design the sender and receiver sides of a protocol, first over a reliable channel. In
reliable data transfer, the sending layer receives data from the layer above, breaks the message into
segments, puts a header on each segment and transfers them; the receiving layer receives the segments,
removes the header from each segment and passes the reassembled data up.
With a reliable data transfer protocol, none of the transferred data bits are corrupted or lost, and all are
delivered in the same sequence in which they were sent. This service model is offered by TCP to the
Internet applications that invoke this transfer of data.
Similarly, over an unreliable channel we design the sending and receiving sides. On the sending side,
the layer above calls rdt_send() and passes in the data that is to be delivered to the application layer at
the receiving side (here rdt_send() is the function for sending data, where rdt stands for reliable data
transfer protocol and _send() indicates the sending side).
On the receiving side, rdt_rcv() (the function for receiving data, where _rcv() indicates the receiving
side) will be called when a packet arrives from the receiving side of the unreliable channel. When the
rdt protocol wants to deliver data to the application layer, it does so by calling deliver_data() (the
function for delivering data to the upper layer).
In the reliable data transfer protocol, we only consider the case of unidirectional data transfer, that is,
transfer of data from the sending side to the receiving side (i.e. only in one direction). The bidirectional
(full duplex) case, with data transfer on both sides, is conceptually more difficult. Although we only
consider unidirectional data transfer, it is important to note that the sending and receiving sides of our
protocol will still need to transmit packets in both directions, as shown in the above figure.
In addition to exchanging packets containing the data that needs to be transferred, both the sending
and receiving sides of rdt also need to exchange control packets in both directions (i.e., back and
forth). Both sides of rdt send packets to the other side by a call to udt_send() (udt_send() is a
function used for sending data to the other side, where udt stands for unreliable data transfer protocol).
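A minimal sketch of these interfaces over a perfectly reliable channel, using the function names from the text (rdt_send(), udt_send(), rdt_rcv(), deliver_data()); the in-memory list standing in for the channel is an assumption for illustration only:

```python
channel = []                     # stands in for the channel below rdt (assumption)

def udt_send(packet):
    """Hand the packet to the layer below (the 'unreliable' channel)."""
    channel.append(packet)

def make_pkt(data):
    """Attach a (trivial) header to the data."""
    return {"header": "rdt", "data": data}

def rdt_send(data):
    """Called from the layer above on the sending side."""
    udt_send(make_pkt(data))

def deliver_data(data):
    """Hand the data up to the application layer on the receiving side."""
    print("delivered:", data)

def rdt_rcv(packet):
    """Called when a packet arrives on the receiving side."""
    deliver_data(packet["data"])

rdt_send("hello")
rdt_rcv(channel.pop(0))          # delivered: hello
```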
2. Congestion Window- Sender should not send data greater than congestion window size.
Otherwise, it leads to dropping the TCP segments which causes TCP Retransmission. So, sender
should always send data less than or equal to congestion window size. Different variants of TCP use
different approaches to calculate the size of congestion window. Congestion window is known only
to the sender and is not sent over the links.
So always
Sender window size = Minimum (Receiver window size, Congestion window size)
TCP Congestion Policy:- TCP’s general policy for handling congestion consists of following three
phases-
1. Slow Start
2. Congestion Avoidance
3. Congestion Detection
1. Slow Start Phase- Initially, sender sets congestion window size = Maximum Segment Size (1
MSS). After receiving each acknowledgment, sender increases the congestion window size by 1
MSS. In this phase, the size of congestion window increases exponentially.
The followed formula is-
Congestion window size = Congestion window size + Maximum segment size
This is shown below-
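A small numerical sketch of this growth, assuming a hypothetical slow-start threshold of 16 MSS; beyond the threshold the window grows additively, as in the congestion avoidance phase:

```python
MSS = 1          # congestion window measured in segments (1 unit = 1 MSS)
ssthresh = 16    # hypothetical slow-start threshold

cwnd = 1 * MSS
for rtt in range(1, 7):
    print(f"RTT {rtt}: cwnd = {cwnd} MSS")
    if cwnd < ssthresh:
        cwnd *= 2    # slow start: one MSS per ACK doubles cwnd every RTT
    else:
        cwnd += 1    # congestion avoidance: additive increase per RTT
```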
TCP Header Format:-
1. Source Port-
• Source Port is a 16 bit field.
• It identifies the port of the sending application.
2. Destination Port-
• Destination Port is a 16 bit field.
• It identifies the port of the receiving application.
It is important to note that a TCP connection is uniquely identified by using Combination of port
numbers and IP Addresses of sender and receiver. IP Addresses indicate which systems are
communicating. Port numbers indicate which end to end sockets are communicating.
3. Sequence Number-
• Sequence number is a 32 bit field.
• TCP assigns a unique sequence number to each byte of data contained in the TCP segment.
• This field contains the sequence number of the first data byte.
4. Acknowledgement Number-
• Acknowledgment number is a 32 bit field.
• It contains sequence number of the data byte that receiver expects to receive next from the
sender.
• It is always sequence number of the last received data byte incremented by 1.
5. Header Length-
• Header length is a 4 bit field.
• It contains the length of TCP header.
• It helps in knowing from where the actual data begins.
The length of TCP header always lies in the range of [20 bytes , 60 bytes].
6. Reserved Bits-
• The 6 bits are reserved.
• These bits are not used
7. URG Bit-
URG bit is used to treat certain data on an urgent basis.
When URG bit is set to 1,
• It indicates to the receiver that a certain amount of data within the current segment is urgent.
• Urgent data is pointed out by evaluating the urgent pointer field.
• The urgent data has to be prioritized.
• Receiver forwards urgent data to the receiving application on a separate channel.
8. ACK Bit-
ACK bit indicates whether acknowledgement number field is valid or not.
• When ACK bit is set to 1, it indicates that acknowledgement number contained in the TCP
header is valid.
• For all TCP segments except request segment, ACK bit is set to 1.
• Request segment is sent for connection establishment during Three Way Handshake.
9. PSH Bit-
PSH bit is used to push the entire buffer immediately to the receiving application.
When PSH bit is set to 1,
• All the segments in the buffer are immediately pushed to the receiving application.
• No wait is done for filling the entire buffer.
• This makes the entire buffer to free up immediately.
It is important to note that unlike URG bit, PSH bit does not prioritize the data. It just causes all the
segments in the buffer to be pushed immediately to the receiving application. The same order is
maintained in which the segments arrived. It is not a good practice to set PSH bit = 1. This is because
it disrupts the working of receiver’s CPU and forces it to take an action immediately.
10. RST Bit-
RST bit is used to reset the TCP connection.
When RST bit is set to 1, it indicates to the receiver that the connection should be terminated
immediately, for example when an invalid segment arrives or a connection request is refused.
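A minimal sketch that extracts the header length and the six control bits discussed above from a raw TCP segment (byte offsets follow the standard TCP header layout; the sample segment is hand-built for illustration):

```python
import struct

def parse_tcp_flags(segment: bytes) -> dict:
    """Pull the header length and control bits out of a raw TCP segment."""
    src, dst, seq, ack, offset_flags = struct.unpack("!HHIIH", segment[:14])
    header_len = (offset_flags >> 12) * 4    # data offset is counted in 32-bit words
    flags = offset_flags & 0x3F              # low 6 bits: URG ACK PSH RST SYN FIN
    return {
        "src_port": src, "dst_port": dst, "header_len": header_len,
        "URG": bool(flags & 0x20), "ACK": bool(flags & 0x10),
        "PSH": bool(flags & 0x08), "RST": bool(flags & 0x04),
        "SYN": bool(flags & 0x02), "FIN": bool(flags & 0x01),
    }

# Example: a SYN segment from port 51000 to 80 (data offset 5, only SYN set).
sample = struct.pack("!HHIIH", 51000, 80, 1000, 0, (5 << 12) | 0x02) + b"\x00" * 6
print(parse_tcp_flags(sample))
```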
TCP Timers:- TCP uses the following timers-
Time Out Timer- TCP uses a time out timer for retransmission of lost segments.
• Sender starts a time out timer after transmitting a TCP segment to the receiver.
• If sender receives an acknowledgement before the timer goes off, it stops the timer.
• If sender does not receive any acknowledgement and the timer goes off, then TCP
Retransmission occurs.
• Sender retransmits the same segment and resets the timer.
• The value of time out timer is dynamic and changes with the amount of traffic in the network.
• Time out timer is also called as Retransmission Timer.
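The time out timer itself lives inside the TCP implementation in the operating system, so it cannot be shown directly from application code; the following sketch only imitates the idea at application level over UDP, with a hypothetical echo service on 127.0.0.1:9000 and a fixed 1-second timeout:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)                        # start the "retransmission timer"
segment, server = b"segment-1", ("127.0.0.1", 9000)   # hypothetical echo service

for attempt in range(3):                    # retransmit at most 3 times
    sock.sendto(segment, server)
    try:
        ack, _ = sock.recvfrom(1024)        # acknowledgement arrived in time
        print("ACK received:", ack)
        break
    except socket.timeout:                  # timer went off: retransmit
        print("timeout, retransmitting (attempt", attempt + 1, ")")
```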
Time Wait Timer- TCP uses a time wait timer during connection termination.
• Sender starts the time wait timer after sending the ACK for the second FIN segment.
• It allows to resend the final acknowledgement if it gets lost.
• It prevents the just closed port from reopening again quickly to some other application.
• It ensures that all the segments heading towards the just closed port are discarded.
• The value of time wait timer is usually set to twice the lifetime of a TCP segment.
Keep Alive Timer- TCP uses a keep alive timer to prevent long idle TCP connections.
• Each time server hears from the client, it resets the keep alive timer to 2 hours.
• If server does not hear from the client for 2 hours, it sends 10 probe segments to the client.
• These probe segments are sent at a gap of 75 seconds.
• If server receives no response after sending 10 probe segments, it assumes that the client is
down.
• Then, server terminates the connection automatically.
Persistent Timer-
• TCP uses a persistent timer to deal with a zero-window-size deadlock situation.
• It keeps the window size information flowing even if the other end closes its receiver
window.
• Sender starts the persistent timer on receiving an ACK from the receiver with a zero window
size.
• When persistent timer goes off, sender sends a special segment to the receiver.
• This special segment is called as probe segment and contains only 1 byte of new data.
• Response sent by the receiver to the probe segment gives the updated window size.
• If the updated window size is non-zero, it means data can be sent now.
• If the updated window size is still zero, the persistent timer is set again and the cycle repeats.
Application Layer
The application layer in the OSI model is the closest layer to the end user which means that the
application layer and end user can interact directly with the software application. The application
layer programs are based on client and servers.
Application layer functions: -
• Network Virtual terminal: An application layer allows a user to log on to a remote host. To
do so, the application creates a software emulation of a terminal at the remote host. The user's
computer talks to the software terminal, which in turn, talks to the host. The remote host
thinks that it is communicating with one of its own terminals, so it allows the user to log on.
• File Transfer, Access, and Management (FTAM): An application allows a user to access
files in a remote computer, to retrieve files from a computer and to manage files in a remote
computer. FTAM defines a hierarchical virtual file in terms of file structure, file attributes
and the kind of operations performed on the files and their attributes.
• Addressing: To obtain communication between client and server, there is a need for
addressing. When a client makes a request to the server, the request contains the server
address and its own address. When the server responds to the client's request, the response contains
the destination address, i.e., the client address. To achieve this kind of addressing, DNS is used.
• Mail Services: An application layer provides Email forwarding and storage.
• Directory Services: An application contains a distributed database that provides access for
global information about various objects and services.
• Authentication: It authenticates the sender or receiver's message or both.
World Wide Web: -
World Wide Web, which is also known as the Web, is a collection of websites or web pages stored in
web servers and connected to local computers through the internet. These websites contain text
pages, digital images, audio, video, etc. Users can access the content of these sites from any part of
the world over the internet using their devices such as computers, laptops, cell phones, etc. The
WWW, along with the internet, enables the retrieval and display of text and media on your device.
HTTP:-
HTTP stands for Hyper Text Transfer Protocol. It is a protocol used to access the data on the World
Wide Web (WWW). The HTTP protocol can be used to transfer the data in the form of plain text,
hypertext, audio, video, and so on. This protocol is known as Hyper Text Transfer Protocol because
its efficiency allows us to use it in a hypertext environment where there are rapid jumps from
one document to another document.
HTTP is similar to the FTP as it also transfers the files from one host to another host. But, HTTP is
simpler than FTP as HTTP uses only one connection, i.e., no control connection to transfer the files.
HTTP is used to carry the data in the form of MIME (Multipurpose Internet Mail Extensions)-like
format. HTTP is similar to SMTP as the data is transferred between client and server. The HTTP
differs from the SMTP in the way the messages are sent from the client to the server and from server
to the client. SMTP messages are stored and forwarded while HTTP messages are delivered
immediately.
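A minimal sketch of an HTTP request issued over a plain TCP socket (example.com is used as a neutral test host; "Connection: close" makes the server end the response so the read loop terminates):

```python
import socket

# Open a TCP connection to port 80 and send a hand-written HTTP/1.1 GET request.
with socket.create_connection(("example.com", 80)) as s:
    s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    response = b""
    while chunk := s.recv(4096):        # read until the server closes the connection
        response += chunk

print(response.split(b"\r\n")[0].decode())   # e.g. HTTP/1.1 200 OK
```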
FTP:-
FTP is a standard internet protocol provided by TCP/IP used for transmitting the files from one host
to another. It is mainly used for transferring the web page files from their creator to the computer that
acts as a server for other computers on the internet. It is also used for downloading the files to
computer from other servers.
It provides the sharing of files. It is used to encourage the use of remote computers. It transfers the
data more reliably and efficiently.
There are two types of connections in FTP:
• Control Connection: The control connection uses very simple rules for communication.
Through control connection, we can transfer a line of command or line of response at a time. The
control connection is made between the control processes. The control connection remains
connected during the entire interactive FTP session.
• Data Connection: The Data Connection uses very complex rules as data types may vary. The
data connection is made between data transfer processes. The data connection opens when a
command comes for transferring the files and closes when the file is transferred.
Advantages of FTP:
• Speed: One of the biggest advantages of FTP is speed. FTP is one of the fastest ways to
transfer files from one computer to another.
• Efficient: It is more efficient as we do not need to complete all the operations to get the entire
file.
• Security: To access the FTP server, we need to login with the username and password.
Therefore, we can say that FTP is more secure.
• Back & forth movement: FTP allows us to transfer the files back and forth. Suppose you are a
manager of the company, you send some information to all the employees, and they all send
information back on the same server.
Disadvantages of FTP:
• The standard requirement of the industry is that all FTP transmissions should be encrypted.
However, not all FTP providers are equal and not all providers offer encryption. So, we
will have to look out for FTP providers that provide encryption.
• FTP serves two operations, i.e., sending and receiving large files on a network. However, the size
of a file that can be sent is limited to 2 GB. It also doesn't allow you to run simultaneous transfers to
multiple receivers.
• Passwords and file contents are sent in clear text that allows unwanted eavesdropping. So, it is
quite possible that attackers can carry out the brute force attack by trying to guess the FTP
password.
• It is not compatible with every system.
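A hedged sketch of an FTP download using Python's standard ftplib module; the host name, credentials and file name below are placeholders, not real services:

```python
from ftplib import FTP

ftp = FTP("ftp.example.com")              # control connection (placeholder host)
ftp.login("user", "password")             # authentication (sent in clear text)
ftp.cwd("/pub")                           # commands travel on the control connection
with open("notes.txt", "wb") as f:
    ftp.retrbinary("RETR notes.txt", f.write)   # data connection carries the file
ftp.quit()
```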
SSH Protocol: -
SSH stands for Secure Shell or Secure Socket Shell. It is a cryptographic network protocol that
allows two computers to communicate and share the data over an insecure network such as the
internet. It is used to login to a remote server to execute commands and data transfer from one
machine to another machine.
The SSH protocol was developed by SSH Communications Security Ltd to safely communicate with
the remote machine.
Secure communication provides a strong password authentication and encrypted communication
with a public key over an insecure channel. It is used to replace unprotected remote login protocols
such as Telnet, rlogin, etc., and insecure file transfer protocol FTP.
Its security features are widely used by network administrators for managing systems and
applications remotely. The SSH protocol protects the network from various attacks such as DNS
spoofing, IP source routing, and IP spoofing.
A simple example: suppose you want to send a package to one of your friends. Without the SSH
protocol, it can be opened and read by anyone, but if you send it using the SSH protocol, it will be
encrypted and secured with public keys, and only the receiver can open it.
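A hedged sketch of remote command execution over SSH using the third-party paramiko library (an assumption: paramiko is not part of the Python standard library, and the host and credentials shown are placeholders):

```python
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # accept unknown host keys (demo only)
client.connect("server.example.com", username="user", password="secret")

stdin, stdout, stderr = client.exec_command("uname -a")   # run a command on the remote machine
print(stdout.read().decode())                             # output travels over the encrypted channel
client.close()
```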