Computer Networks: Unit 4

The transport layer is responsible for segmentation, connection control, flow control, and error control in data transmission. It utilizes protocols such as UDP, which is connectionless and unreliable, and TCP, which is connection-oriented and reliable, ensuring proper data transfer through mechanisms like flow and congestion control. TCP employs a three-way handshake for connection establishment and a four-segment process for termination, while also managing data flow to prevent buffer overflow at the receiver.


TRANSPORT LAYER

Functions of Transport Layer


1. Segmentation and Reassembly: A message is divided into segments; each segment
contains a sequence number, which enables this layer to reassemble the message. The
message is reassembled correctly upon arrival at the destination, and packets that were
lost in transmission are replaced.
2. Connection Control: It is of two types:
○ Connectionless Transport Layer: Each segment is treated as an independent
packet and delivered to the transport layer at the destination machine.
○ Connection-Oriented Transport Layer: Before delivering packets, a connection is
established with the transport layer at the destination machine.
3. Flow Control: In this layer, flow control is performed end to end.
4. Error Control: Error Control is performed end to end in this layer to ensure that the
complete message arrives at the receiving transport layer without any error. Error
Correction is done through retransmission.

Working of Transport Layer


The transport layer receives services from the network layer and provides services to the
session layer.
At the sender’s side: The transport layer collects data (the message) from the application layer,
performs segmentation to divide the message into segments, adds the source and destination port
numbers in the header, and passes the segments to the network layer.
At the receiver’s side: The transport layer collects data from the network layer, reassembles the
segmented data, and identifies the port number by reading the header so that the message can be
delivered to the appropriate port in the session layer.
Transport Layer Protocols
● UDP (User Datagram Protocol)
● TCP (Transmission Control Protocol)

UDP

● Connectionless protocol
● Unreliable protocol
● UDP stands for User Datagram Protocol.
● UDP is one of the simplest transport layer protocols; it provides non-sequenced data
transmission.
● UDP is considered a connectionless transport layer protocol.
● This type of protocol is preferred when speed and small size are more important than
reliability and security.
● It is an end-to-end transport-level protocol that adds transport-level addresses, checksum
error control, and length information to the data received from the upper layer.
● The packet constructed by the UDP protocol is called a user datagram.

Format of User Datagram

A user datagram has a fixed-size header of 8 bytes, which is divided into four fields:
Source port address: It defines the source port number; it is a 16-bit field.
Destination port address: It defines the destination port number; it is a 16-bit field.
Total length: This 16-bit field defines the total length of the user datagram, which is the sum of
the header and data lengths in bytes.
Checksum: The checksum is also a 16-bit field; it carries the optional error-detection data.
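As a rough illustration of this 8-byte layout, the following Python sketch packs the four 16-bit
fields with the struct module; the port numbers and payload are made-up values, and the checksum
is left at 0 (it is optional in UDP over IPv4).

import struct

def build_udp_header(src_port: int, dst_port: int, payload: bytes, checksum: int = 0) -> bytes:
    """Pack the fixed 8-byte UDP header: source port, destination port,
    total length (header + data), and checksum, all 16-bit big-endian fields."""
    total_length = 8 + len(payload)          # header is always 8 bytes
    return struct.pack("!HHHH", src_port, dst_port, total_length, checksum)

header = build_udp_header(5000, 53, b"example query")
print(struct.unpack("!HHHH", header))        # (5000, 53, 21, 0)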

UDP Services

● Process to Process Communication


● Connectionless Service
● Fast delivery of message
● Checksum

Disadvantages

● UDP delivers only the basic functions required for the end-to-end transmission of data.
● It does not use any sequencing and does not identify the damaged packet while reporting
an error.
● UDP can detect that an error has happened, but it does not identify which packet has
been lost.
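A minimal sketch of connectionless transmission using Python's standard socket module; the
loopback address, port 9999, and the message are arbitrary choices, and nothing here guarantees
delivery, which is exactly the trade-off described above.

import socket

# Receiver: bind to a local port and wait for one datagram (no connection setup).
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9999))
receiver.settimeout(2.0)

# Sender: just send; UDP performs no handshake and gives no delivery guarantee.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello over UDP", ("127.0.0.1", 9999))

data, addr = receiver.recvfrom(1024)   # may raise socket.timeout if the datagram is lost
print(data, "from", addr)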

TCP

● Connection-oriented protocol
● Reliable protocol
● Provides error and flow control
● TCP stands for Transmission Control Protocol.
● TCP is a connection-oriented transport layer protocol.
● TCP explicitly defines connection establishment, data transfer, and connection teardown
phases to provide a connection-oriented service for data transmission.
● TCP is the most commonly used transport layer protocol.

Features Of TCP protocol

● Stream data transfer


● Reliability
● Flow Control
● Error Control
● Multiplexing
● Logical Connections
● Full Duplex

TCP Segment Format


Refer to the image below to see the header of TCP Segment.

● Source port address is a 16-bit field that defines the port number of the application
program that is sending the segment.
● Destination port address is a 16-bit field that defines the port number of the application
program that is receiving the segment.
● Sequence number is a 32-bit field that defines the number assigned to the first byte of
data contained in the segment.
● Acknowledgement number is a 32-bit field that defines the byte number that the receiver
expects to receive next from the sender.
● Header Length (HLEN) is a 4-bit field that specifies the number of 4-byte words in the
TCP header. The TCP header length can be between 20 and 60 bytes.
● Reserved is a 6-bit field reserved for future use.
● Control is a field of 6 independent control bits or flags.
● The six flags in the control field are:
○ URG: Urgent pointer is valid
○ ACK: Acknowledgement number is valid
○ PSH: Push request
○ RST: Reset the connection
○ SYN: Synchronize sequence numbers
○ FIN: Terminate the connection
● Window Size is a 16-bit field that defines the size of the sending TCP’s window in bytes.
● Checksum is a 16-bit field that contains the checksum, used for error detection.
● Urgent pointer is a 16-bit field that is valid only when the URG flag is set; it is used when
the segment contains urgent data.
● Options and padding: up to 40 bytes of optional information in the TCP header.

TCP Connection

TCP is a connection-oriented service.

TCP Connection Establishment


To make the transport services reliable, TCP hosts must establish a connection-oriented session
with one another. Connection establishment is performed using the three-way handshake
mechanism. A three-way handshake synchronizes both ends of a connection by enabling both
sides to agree upon initial sequence numbers.

This mechanism also ensures that both sides are ready to transmit data and that each side knows
the other is available to communicate. This is essential so that packets are not transmitted or
retransmitted during session establishment or after session termination. Each host randomly
selects a sequence number used to track bytes within the stream it is sending and receiving.

The three-way handshake proceeds in the manner shown in the figure below −
The requesting end (Host A) sends a SYN segment specifying the server's port number to which
the client wants to connect and the client's initial sequence number (x).

The server (Host B) responds with its own SYN segment, containing the server's initial sequence
number (y). The server also acknowledges the client's SYN by acknowledging the client's
sequence number plus one (x + 1).

A SYN consumes one sequence number. The client acknowledges this SYN from the server by
acknowledging the server's sequence number plus one (SEQ = x + 1, ACK = y + 1). This is how a
TCP connection is established.
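The handshake itself is carried out by the operating system's TCP stack; the hedged sketch below
only shows how it is triggered from an application by connect(). The loopback address and port
8080 are placeholders, and socket.create_server requires Python 3.8 or later.

import socket

# Opening a TCP connection; the OS performs the three-way handshake
# (SYN, SYN+ACK, ACK) underneath connect().
server = socket.create_server(("127.0.0.1", 8080))          # listening end (Host B)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # requesting end (Host A)
client.connect(("127.0.0.1", 8080))    # blocks until the handshake completes

conn, addr = server.accept()           # connection is now established on both sides
print("connected to", addr)
client.close()                         # the FIN exchange starts the four-segment teardown
conn.close()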

Connection Termination Protocol (Connection Release)


While it takes three segments to establish a connection, it takes four segments to terminate a
connection. Because a TCP connection is full-duplex (that is, data flows in each direction
independently of the other direction), each direction must be shut down independently.

The termination procedure for each host is shown in the figure. The rule is that either end can
send a FIN when it has finished sending data.

When a TCP receives a FIN, it must notify the application that the other end has terminated
that direction of data flow. The sending of a FIN is usually the result of the application issuing a
close.

The receipt of a FIN only means that there will be no more data flowing in that direction. A
TCP can still send data after receiving a FIN. The end that first issues the close (i.e., sends the
first FIN) performs the active close. The other end (which receives this FIN) performs the
passive close.

Segment
A packet in TCP is called a segment. The format of a segment is shown in the following figure.
The segment consists of a 20- to 60-byte header, followed by data from the application program.
The header is 20 bytes if there are no options and up to 60 bytes if it contains options. The
different sections of the Header are as follows.

Source port address:


This is a 16-bit field that defines the port number of the application program in the host that is
sending the segment.
Destination port address:
This is a 16-bit field that defines the port number of the application program in the host that is
receiving the segment.
Sequence number:
This 32-bit field defines the number assigned to the first byte of data contained in this segment.
TCP is a stream transport protocol. To ensure connectivity, each byte to be transmitted is
numbered. The sequence number tells the destination which byte in this sequence comprises the
first byte in the segment.
During connection establishment, each party uses a random number generator to create an initial
sequence number (ISN), which is usually different in each direction.
Acknowledgment number:
This 32-bit field defines the byte number that the receiver of the segment is expecting to receive
from the other party. If the receiver of the segment has successfully received byte number x from
the other party, it defines x + 1 as the acknowledgment number. Acknowledgment and data can
be piggybacked together.
Header length:
This 4-bit field indicates the number of 4-byte words in the TCP header. The length of the header
can be between 20 and 60 bytes. Therefore, the value of this field can be between 5 (5 x 4 =20)
and 15 (15 x 4 =60).
Reserved: This is a 6-bit field reserved for future use.
Control:
This field defines 6 different control bits or flags, as shown in the following figure. One or
more of these bits can be set at a time. These bits enable flow control, connection establishment
and termination, connection abortion, and the mode of data transfer in TCP. A brief description
of each bit is as follows:
URG - The value of the urgent pointer field is valid.
ACK - The value of the acknowledgment field is valid.
PSH - Push the data.
RST - Reset the connection.
SYN - Synchronize sequence numbers during connection.
FIN - Terminate the connection.
Window size:
This field defines the size of the window, in bytes, that the other party must maintain. The length
of this field is 16 bits, which means that the maximum size of the window is 65,535 bytes. This
value is normally referred to as the receiving window (rwnd) and is determined by the receiver.
The sender must obey the dictation of the receiver in this case.
Checksum:
This 16-bit field contains the checksum. The calculation of the checksum for TCP follows the
same procedure as the one described for UDP. However, the inclusion of the checksum in the
UDP datagram is optional, whereas the inclusion of the checksum for TCP is mandatory. The
same pseudo header, serving the same purpose, is added to the segment.
For the TCP pseudo header, the value for the protocol field is 6.
Urgent pointer: This 16-bit field, which is valid only if the urgent flag is set, is used when the
segment contains urgent data. It defines the number that must be added to the sequence number
to obtain the number of the last urgent byte in the data section of the segment.
Options: There can be up to 40 bytes of optional information in the TCP header.
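A small sketch of how the fixed 20-byte part of this header could be unpacked with Python's
struct module, following the field layout described above; the synthetic SYN segment at the end
is only for demonstration, and options are not handled.

import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Unpack the fixed 20-byte part of a TCP header (options not handled)."""
    (src_port, dst_port, seq, ack,
     offset_reserved_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    header_len = (offset_reserved_flags >> 12) * 4       # HLEN is counted in 4-byte words
    flags = offset_reserved_flags & 0x3F                  # URG, ACK, PSH, RST, SYN, FIN
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack, "header_len": header_len,
        "SYN": bool(flags & 0x02), "ACK": bool(flags & 0x10), "FIN": bool(flags & 0x01),
        "window": window, "checksum": checksum, "urgent_pointer": urgent,
    }

# Build a synthetic SYN segment header (HLEN = 5 words, SYN flag set) to demonstrate.
syn = struct.pack("!HHIIHHHH", 40000, 80, 1000, 0, (5 << 12) | 0x02, 65535, 0, 0)
print(parse_tcp_header(syn))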

TCP Congestion Control


TCP uses a congestion window and a congestion policy that avoid congestion. Previously, we
assumed that only the receiver can dictate the sender’s window size. We ignored another entity
here, the network. If the network cannot deliver the data as fast as it is created by the sender, it
must tell the sender to slow down. In other words, in addition to the receiver, the network is a
second entity that determines the size of the sender’s window.
Congestion policy in TCP –
1. Slow Start Phase: starts slowly; the increment is exponential up to a threshold.
2. Congestion Avoidance Phase: after reaching the threshold, the increment is additive (by 1).
3. Congestion Detection Phase: the sender goes back to the slow start phase or the congestion
avoidance phase.

Slow Start Phase: exponential increment – In this phase, after every RTT the congestion
window size increases exponentially.
In the slow start phase, the sender sets the congestion window size to one maximum segment size
(1 MSS) at the initial stage. The sender increases the size of the congestion window by 1 MSS
after receiving each ACK (acknowledgment). The size of the congestion window therefore grows
exponentially per RTT in this phase. The formula for determining the size of the congestion
window is: Congestion window size = Congestion window size + Maximum segment size
Congestion Avoidance Phase: additive increment – This phase starts after the threshold value,
also denoted as ssthresh, is reached. The size of cwnd (congestion window) increases additively:
after each RTT, cwnd = cwnd + 1.
In this phase, after the threshold is reached, the size of the congestion window is increased by the
sender linearly in order to avoid congestion. Each time an acknowledgment is received, the
sender increments the size of the congestion window by 1.
The formula for determining the size of the congestion window in this phase is Congestion
window size = Congestion window size + 1

This phase continues until the size of the window becomes equal to that of the receiver window
size.

3. Congestion Detection Phase: multiplicative decrement – If congestion occurs, the
congestion window size is decreased. The only way a sender can guess that congestion
has occurred is the need to retransmit a segment. Retransmission is needed to recover a
missing packet that is assumed to have been dropped by a router due to congestion.
Retransmission can occur in one of two cases: when the RTO timer times out or when
three duplicate ACKs are received.
● Case 1 : Retransmission due to Timeout – In this case congestion possibility is
high.
(a) ssthresh is reduced to half of the current window size.
(b) set cwnd = 1
(c) start with slow start phase again.
● Case 2 : Retransmission due to 3 Acknowledgement Duplicates – In this case
congestion possibility is less.
(a) ssthresh value reduces to half of the current window size.
(b) set cwnd= ssthresh
(c) start with congestion avoidance phase
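A toy, hedged simulation of the three phases above (values in MSS units); the initial ssthresh of
8 and the event sequence are illustrative choices, not values taken from the text.

def react_to_event(cwnd: float, ssthresh: float, event: str):
    """Return (cwnd, ssthresh) after one RTT of ACKs or one loss event."""
    if event == "ack_rtt":                 # one RTT of successful ACKs
        if cwnd < ssthresh:
            cwnd *= 2                      # slow start: exponential growth per RTT
        else:
            cwnd += 1                      # congestion avoidance: additive increase
    elif event == "timeout":               # congestion detected via RTO expiry
        ssthresh = max(cwnd / 2, 1)
        cwnd = 1                           # restart from slow start
    elif event == "triple_dup_ack":        # congestion detected via 3 duplicate ACKs
        ssthresh = max(cwnd / 2, 1)
        cwnd = ssthresh                    # resume in congestion avoidance
    return cwnd, ssthresh

cwnd, ssthresh = 1, 8
for ev in ["ack_rtt"] * 4 + ["triple_dup_ack", "ack_rtt", "timeout", "ack_rtt"]:
    cwnd, ssthresh = react_to_event(cwnd, ssthresh, ev)
    print(f"{ev:15s} cwnd={cwnd:5.1f} ssthresh={ssthresh:4.1f}")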

TCP Flow Control


In a communication network, in order for two network hosts to communicate with each other,
one has to send a packet while another host has to receive it. It might happen that both the hosts
have different hardware and software specifications and accordingly their processors might
differ. If the receiver host has a fast processor that can consume messages at the higher rate sent
by the sender, then the communication works well and no problem will occur. But have you ever
wondered what would happen if the receiver has a slower processor? Well, in this case, the
incoming messages will keep arriving and will be added to the receiver’s queue. Once the
receiver’s queue is filled, messages will start being dropped, wasting channel capacity. In order
to overcome this issue of the slow receiver and fast sender, the concept of flow control comes
into the picture.

For the slow sender and fast receiver, no flow control is required. Whereas for the fast sender and
slow receiver, flow control is important.
In the diagram given, there is a fast sender and a slow receiver. Here are the following points to
understand how the message will overflow after a certain interval of time.

● In the diagram, the receiver is receiving the message sent by the sender at the rate of 5
messages per second while the sender is sending the messages at the rate of 10 messages
per second.
● When the sender sends the message to the receiver, it gets into the network queue of the
receiver.
● Once the user reads the message from the application, the message gets clear from the
queue and the space gets free.
● According to the mentioned speeds of the sender and receiver, the receiver queue will keep
growing and the free buffer space will shrink at the rate of 5 messages/second. Since the
receiver buffer can accommodate 200 messages, the receiver buffer will become full in 40
seconds.
● So, after 40 seconds, the messages will start dropping as there will be no space remaining
for the incoming messages.

This is why flow control becomes important for TCP protocol while data transfer and
communication purposes.

How does Flow Control in TCP Work?

When the data is sent on the network, this is what normally happens in the network layer.
The sender writes the data to a socket and sends it to the transport layer which is TCP in this
case. The transport layer will then wrap this data and will send it to the network layer which will
route it to the receiving node. If you look at the diagram closely, you will notice this part of the
diagram.

The TCP stores the data that needs to be sent in the send buffer and the data to be received in the
receive buffer. Flow control makes sure that no more packets are sent by the sender once the
receiver’s buffer is full as the messages will be dropped and the receiver won’t be able to handle
them. In order to control the amount of data sent by the TCP, the receiver will create a buffer
which is also known as Receive Window.
TCP sends an ACK every time it receives a data packet, acknowledging that the packet was
received successfully, and along with this ACK it sends the value of the current receive window
so that the sender knows how much data it may send.

The Sliding Window

The sliding window is used in TCP to control the number of bytes a channel can accommodate.
It is the number of bytes that have been sent but not yet acknowledged. This is done in TCP by
using a window in which packets are sent in sequence and then acknowledged. When the sending
host receives the ACK from the receiving host about the packets, the window slides forward in
order to allow new packets to be sent. There are several techniques used with the window,
including go-back-N and selective repeat, but the fundamentals of the communication remain
the same.

Receive window

TCP flow control is maintained through the receive window, which is tracked on the sender side.
It represents the amount of space left vacant inside the buffer on the receiver side. The figure
below shows the receive window.
The receive window is calculated as receiveWindow = receiveBuffer − (lastByteReceived −
lastByteRead). The window is constantly updated and reported via the window field of the TCP
header. Some of the important terms in the TCP receive window are receiveBuffer,
receiverWindow, lastByteRead, and lastByteReceived. Whenever the receive buffer is full, the
receiver sends receiveWindow = 0 to the sender. Once the sender has used up the advertised
window and there is nothing left to acknowledge, the receiver's application may drain the buffer
without the sender ever learning that space has become free. To solve this problem, TCP has the
sender continue to send a segment carrying a single byte of data to the receiver periodically.
This minimizes strain on the network while maintaining a constant check on the status of the
buffer at the receiving side. In this way, as soon as buffer space frees up, an ACK is sent.
In this, a control mechanism is adopted to ensure that the rate of incoming data is not greater
than its consumption. This mechanism relies on the window field of the TCP header and
provides reliable data transport over a network.
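A small sketch of the receiver-side bookkeeping behind the advertised window, assuming the
formula given above; the buffer size and byte counters are illustrative values only.

RECEIVE_BUFFER = 65535            # total receive buffer size in bytes (illustrative)

def receive_window(last_byte_received: int, last_byte_read: int) -> int:
    """rwnd = receiveBuffer - (lastByteReceived - lastByteRead):
    the free space left in the buffer, advertised back to the sender."""
    unread = last_byte_received - last_byte_read
    return RECEIVE_BUFFER - unread

print(receive_window(last_byte_received=50_000, last_byte_read=20_000))  # 35535 bytes free
print(receive_window(last_byte_received=80_000, last_byte_read=14_465))  # 0 -> sender must pause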
The Persist Timer

From the above condition, there is a possibility of deadlock. After the receiver advertises a
zero window, if the window-update ACK from the receiver is lost, the sender will never know
when to start sending data again. This situation, in which the sender is waiting for a message to
start sending data and the receiver is waiting for more incoming data, is called a deadlock
condition in flow control. To solve this problem, whenever TCP receives a zero-window
message, it starts a persist timer that sends small probe segments to the receiver periodically.
These probes are also called window probes.

Or

The sliding window protocol

In the sliding window protocol method, when a connection is established between sender and
receiver, two buffers are created: one is assigned to the sender, called the sending window, and
the other to the receiver, called the receiving window.

When the sender sends data to the receiver, the receiving window reports back the remaining
receive buffer space. As a result, the sender cannot send more data than the available receive
buffer space. We’ll understand the concept better once we take a look at the illustration below:
Explanation

In this example, the sending window sends data to the receiving window. The receiving window
sends the acknowledgment after receiving the data and then the sending window sends another
data frame.
However, this time, along with the received acknowledgment, the receiving window also sends
another message saying that the available memory is full.
The sending window pauses the transmission of data until it gets an acknowledgment from the
receiving window that space has been released and it can continue the transmission process.
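A toy model of the sender-side view of this sliding window; the byte counters and the advertised
window below are made-up numbers used only to show when the sender must pause.

def sendable(next_byte_to_send: int, last_byte_acked: int, advertised_window: int) -> int:
    """Bytes the sender may still transmit without overrunning the receiver's buffer."""
    in_flight = next_byte_to_send - last_byte_acked      # sent but not yet acknowledged
    return max(advertised_window - in_flight, 0)

print(sendable(next_byte_to_send=3000, last_byte_acked=1000, advertised_window=4000))  # 2000
print(sendable(next_byte_to_send=5000, last_byte_acked=1000, advertised_window=4000))  # 0 -> pause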

Error Control in TCP


TCP protocol has methods for finding out corrupted segments, missing segments, out-of-order
segments and duplicated segments.

Error control in TCP is mainly done through the use of three simple techniques :

1. Checksum – Every segment contains a checksum field which is used to find


corrupted segments. If the segment is corrupted, then that segment is discarded by the
destination TCP and is considered lost.
2. Acknowledgement – TCP has another mechanism called acknowledgement to confirm
that the data segments have been delivered. Control segments that contain no data but
have sequence numbers are acknowledged as well, but ACK segments are not
acknowledged.
3. Retransmission – When a segment is missing, delayed in reaching the receiver, or found
corrupted when checked by the receiver, that segment is retransmitted.
Segments are retransmitted only on two events: when the sender receives three
duplicate acknowledgements (ACKs) or when a retransmission timer expires.
○ Retransmission after RTO: TCP maintains one retransmission
time-out (RTO) timer for all sent but not-yet-acknowledged segments. When
the timer expires, the earliest outstanding segment is retransmitted. No
timer is set for acknowledgements. In TCP, the RTO value is dynamic and
is updated using the round-trip time (RTT) of segments. RTT is the time
needed for a segment to reach the receiver and for an acknowledgement to
be received by the sender.
○ Retransmission after Three duplicate ACK segments: RTO method
works well when the value of RTO is small. If it is large, more time is
needed to get confirmation about whether a segment has been delivered or
not. Sometimes one segment is lost and the receiver receives so many
out-of-order segments that they cannot be saved. In order to solve this
situation, three duplicate acknowledgement method is used and missing
segment is retransmitted immediately instead of retransmitting already
delivered segment. This is a fast retransmission because it makes it
possible to quickly retransmit lost segments instead of waiting for timer to
end.
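As a sketch of how the dynamic RTO mentioned above can be computed from RTT samples, the
following uses the standard smoothed-RTT approach (RFC 6298 style, with the usual constants
alpha = 1/8 and beta = 1/4); these constants are not taken from this text.

class RtoEstimator:
    def __init__(self):
        self.srtt = None      # smoothed round-trip time
        self.rttvar = None    # round-trip time variation

    def update(self, rtt_sample: float) -> float:
        """Feed one RTT measurement (seconds) and return the new RTO."""
        if self.srtt is None:                      # first measurement
            self.srtt = rtt_sample
            self.rttvar = rtt_sample / 2
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt_sample)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt_sample
        return max(self.srtt + 4 * self.rttvar, 1.0)   # RTO, floored at 1 second

est = RtoEstimator()
for sample in (0.120, 0.150, 0.110, 0.300):
    print(f"RTT sample {sample:.3f}s -> RTO {est.update(sample):.3f}s")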

World Wide Web


● The World Wide Web or Web is basically a collection of information that is linked
together from points all over the world. It is also abbreviated as WWW.
● World wide web provides flexibility, portability, and user-friendly features.
● It mainly consists of a worldwide collection of electronic documents (i.e, Web Pages).
● It is basically a way of exchanging information between computers on the Internet.
● The WWW is mainly a network of pages consisting of images, text, and sound on the
Internet, which can be viewed in a browser using browser software.
● It was invented by Tim Berners-Lee.

Architecture of WWW
The WWW is mainly a distributed client/server service where a client using the browser can
access the service using a server. The Service that is provided is distributed over many different
locations commonly known as sites/websites.
Each website holds one or more documents that are generally referred to as web pages.
Where each web page contains a link to other pages on the same site or at other sites.
These pages can be retrieved and viewed by using browsers

In the above case, the client needs some information that belongs to site A. It sends a request
through its browser (a program that is used to fetch documents on the web). The request also
contains other information such as the address of the site and the web page (URL). The server at
site A finds the document and sends it to the client. Afterwards, the user (the client) finds in
that document a reference to another document, a web page at site B.
The reference contains the URL of site B, and the client is interested in looking at this document
too. The client then sends a request to the new site, and the new page is retrieved.
The World Wide Web (WWW) is a collection of documents and other web resources which are
identified by URLs, interlinked by hypertext links, and can be accessed and searched by
browsers via the Internet.
World Wide Web is also called the Web and it was invented by Tim Berners-Lee in 1989.
Website is a collection of web pages belonging to a particular organization.
The pages can be retrieved and viewed by using browser.
Let us go through the scenario shown in the figure above.
The client wants to see some information that belongs to site 1.
It sends a request through its browser to the server at site 1.
The server at site 1 finds the document and sends it to the client.

Client (Browser):
A web browser is a program which is used to communicate with a web server on the Internet.
● Each browser consists of three parts: a controller, client protocols, and interpreters.
● The controller receives input from the input device and uses the client programs to access
the documents.
● After accessing the document, the controller uses one of the interpreters to display the
document on the screen.
● An interpreter can be for HTML, Java, or JavaScript, depending on the type of the
document.
● The client protocol can be FTP, HTTP, or TELNET.

Server:

The computer that stores the network resources and provides services to other computers upon
request is generally known as the server.
● The web pages are mainly stored on the server.
● Whenever a request from the client arrives, the corresponding document is sent to the
client.
● The connection between the client and the server is TCP.
● The server can become more efficient through multithreading or multiprocessing, because
in this case it can answer more than one request at a time.

URL

URL is an abbreviation of Uniform Resource Locator.

● It is basically a standard used for specifying any kind of information on the Internet.
● In order to access any page, the client generally needs an address.
● To facilitate the access of documents throughout the world, HTTP makes use of locators.

A URL mainly defines four things:

● Protocol: the client/server program that is used to retrieve the document. A commonly
used protocol is HTTP.
● Host Computer: the computer on which the information is located. The name given here
can be an alias for any computer that hosts the web page.
● Port: the URL can optionally contain the port number of the server. If the port number is
included, it is inserted between the host and the path and is separated from the host by a
colon.
● Path: it indicates the pathname of the file where the information is located.
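A small sketch showing the four parts being extracted from a URL with Python's urllib.parse;
the URL itself is just an illustration, not an address from the text.

from urllib.parse import urlsplit

url = "http://www.example.com:8080/docs/unit4/notes.html"
parts = urlsplit(url)

print("Protocol:", parts.scheme)     # http
print("Host    :", parts.hostname)   # www.example.com
print("Port    :", parts.port)       # 8080 (optional; None if omitted)
print("Path    :", parts.path)       # /docs/unit4/notes.html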

Features of WWW
Given below are some of the features provided by the World Wide Web:

● Provides a system for Hypertext information


● Open standards and Open source
● Distributed.
● Mainly makes the use of Web Browser in order to provide a single interface for many
services.
● Dynamic
● Interactive
● Cross-Platform

Advantages of WWW
Given below are the benefits offered by WWW:
● It mainly provides all the information for free.
● It provides a rapid, interactive way of communication.
● It is accessible from anywhere.
● It has become the Global source of media.
● It mainly facilitates the exchange of a huge volume of data.

Disadvantages of WWW
There are some drawbacks of the WWW and these are as follows;
● It is difficult to prioritize and filter some information.
● There is no guarantee of finding what a person is looking for.
● There is a danger of information overload.
● There is no quality control over the available data.
● There is no regulation.

Electronic Mail
Electronic mail, commonly known as email, is a method of exchanging messages over the
internet. Here are the basics of email:
1. An email address: This is a unique identifier for each user, typically in the format of
username@domainname.com.
2. An email client: This is a software program used to send, receive and manage emails,
such as Gmail, Outlook, or Apple Mail.
3. An email server: This is a computer system responsible for storing and forwarding
emails to their intended recipients.

To send an email:
1. Compose a new message in your email client.
2. Enter the recipient’s email address in the “To” field.
3. Add a subject line to summarize the content of the message.
4. Write the body of the message.
5. Attach any relevant files if needed.
6. Click “Send” to deliver the message to the recipient’s email server.
7. Emails can also include features such as cc (carbon copy) and bcc (blind carbon copy)
to send copies of the message to multiple recipients, and reply, reply all, and forward
options to manage the conversation.
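A minimal sketch of these steps using Python's standard smtplib and email modules; the server
address, port, credentials, and mail addresses below are placeholders, not real values.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Unit 4 notes"                 # subject line summarizing the message
msg.set_content("Please find the transport layer notes attached.")

with smtplib.SMTP("smtp.example.com", 587) as server:    # connect to the mail server
    server.starttls()                                     # upgrade to an encrypted channel
    server.login("sender@example.com", "app-password")    # placeholder credentials
    server.send_message(msg)                              # hand the mail to the MTA for delivery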
Electronic Mail (e-mail) is one of the most widely used services of the Internet. This service
allows an Internet user to send a message in a formatted manner (mail) to other Internet users in
any part of the world. A mail message not only contains text; it can also contain images, audio,
and video data. The person who sends the mail is called the sender and the person who receives
it is called the recipient. It is just like the postal mail service. Components of E-Mail System: The
basic components of an email system are: User Agent (UA), Message Transfer Agent (MTA),
Mailbox, and Spool file. These are explained below.
1. User Agent (UA): The UA is normally a program which is used to send and receive
mail. Sometimes, it is called a mail reader. It accepts a variety of commands for
composing, receiving, and replying to messages, as well as for manipulating the
mailboxes.
2. Message Transfer Agent (MTA): The MTA is actually responsible for the transfer of
mail from one system to another. To send mail, a system must have a client MTA and a
system MTA. It transfers mail to the mailboxes of recipients if they are connected to the
same machine. It delivers mail to a peer MTA if the destination mailbox is on another
machine. The delivery from one MTA to another MTA is done by the Simple Mail
Transfer Protocol (SMTP).
3. Mailbox: It is a file on the local hard drive that collects mails. Delivered mails are present
in this file. The user can read or delete them according to his/her requirement. To use the
e-mail system, each user must have a mailbox. Access to the mailbox is restricted to the
owner of the mailbox.
4. Spool file: This file contains mails that are to be sent. The user agent appends outgoing
mails to this file using SMTP. The MTA extracts pending mail from the spool file for
delivery. E-mail allows one name, an alias, to represent several different e-mail
addresses; this is known as a mailing list. Whenever a user sends a message, the system
checks the recipient’s name against the alias database. If a mailing list is present for the
alias, separate messages, one for each entry in the list, are prepared and handed to the
MTA. If no mailing list is present for the alias, the name itself becomes the destination
address and a single message is delivered to the mail transfer entity.
Services provided by E-mail system :

● Composition – Composition refers to the process of creating messages and replies.
Any kind of text editor can be used for composition.
● Transfer – Transfer means the sending procedure of the mail, i.e., from the sender to the
recipient.
● Reporting – Reporting refers to confirmation of delivery of the mail. It helps the user
check whether their mail was delivered, lost, or rejected.
● Displaying – It refers to presenting the mail in a form that the user can understand.
● Disposition – This step concerns what the recipient does after receiving the mail, i.e.,
save the mail, delete it before reading, or delete it after reading.

Advantages of email:
1. Convenient and fast communication with individuals or groups globally.
2. Easy to store and search for past messages.
3. Ability to send and receive attachments such as documents, images, and videos.
4. Cost-effective compared to traditional mail and fax.
5. Available 24/7.
Disadvantages of email:
1. Risk of spam and phishing attacks.
2. Overwhelming amount of emails can lead to information overload.
3. Can lead to decreased face-to-face communication and loss of personal touch.
4. Potential for miscommunication due to lack of tone and body language in written
messages.
5. Technical issues, such as server outages, can disrupt email service.
6. It is important to use email responsibly and effectively, for example, by keeping the
subject line clear and concise, using proper etiquette, and protecting against security
threats.

Domain Name System (DNS)


DNS is a hostname-to-IP-address translation service. DNS is a distributed database implemented
in a hierarchy of name servers. It is an application layer protocol for message exchange between
clients and servers.
Requirement: Every host is identified by its IP address, but remembering numbers is very
difficult for people, and IP addresses are not static. Therefore a mapping is required to translate
the domain name to the IP address. DNS is used to convert the domain name of a website to its
numerical IP address.
Domain: There are various kinds of domains:
1. Generic domains: .com (commercial), .edu (educational), .mil (military), .org (non-profit
organization), .net (similar to commercial); all these are generic domains.
2. Country domains: .in (India), .us, .uk.
3. Inverse domain: used if we want to know the domain name of a website from its IP
address (IP-to-domain-name mapping). DNS can provide both mappings; for example, to
find the IP address of facebook.com we can type nslookup www.facebook.com.
It is very difficult to find the IP address associated with a website because there are
millions of websites, and for all of them we should be able to obtain the IP address
immediately; there should not be a lot of delay, so the organization of the database is
very important.
DNS record: the domain name, the IP address, the validity, the time to live (TTL), and all other
information related to that domain name. These records are stored in a tree-like structure.
Namespace: the set of possible names, flat or hierarchical. The naming system maintains a
collection of bindings of names to values; given a name, a resolution mechanism returns the
corresponding value.

Name server: It is an implementation of the resolution mechanism. DNS (Domain Name
System) is the name service of the Internet. A zone is an administrative unit; a domain is a
subtree.
Name to Address Resolution:
The host requests the DNS name server to resolve the domain name, and the name server returns
the IP address corresponding to that domain name to the host, so that the host can then connect
to that IP address.
Hierarchy of Name Servers
Root name servers: A root server is contacted by name servers that cannot resolve the name. It
contacts an authoritative name server if the name mapping is not known, then gets the mapping
and returns the IP address to the host.
Top level domain (TLD) server: It is responsible for com, org, edu etc and all top level country
domains like uk, fr, ca, in etc. They have info about authoritative domain servers and know the
names and IP addresses of each authoritative name server for the second-level domains.
Authoritative name servers are the organization’s DNS server, providing authoritative
hostName to IP mapping for organization servers. It can be maintained by an organization or
service provider. In order to reach cse.dtu.in, we have to ask the root DNS server, which then
points to the top-level domain server, which in turn points to the authoritative name server that
actually contains the mapping. The authoritative name server then returns the associated IP
address.
Domain Name Server
The client machine sends a request to the local name server which, if it does not find the
address in its database, sends a request to the root name server, which in turn routes the
query to a top-level domain (TLD) or authoritative name server. The root name server can also
contain some hostname-to-IP-address mappings. The top-level domain (TLD) server always
knows who the authoritative name server is. Finally, the IP address is returned to the local
name server, which in turn returns the IP address to the host.
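A small sketch of name-to-address resolution from an application's point of view using Python's
socket module; the hostname is illustrative, and the machine's configured resolver and name
servers carry out the recursive lookup described above.

import socket

hostname = "www.example.com"
ip_address = socket.gethostbyname(hostname)          # forward lookup: name -> IP
print(f"{hostname} resolves to {ip_address}")

# Reverse (inverse-domain) lookup: IP -> name, where a PTR record exists.
name, aliases, addresses = socket.gethostbyaddr("8.8.8.8")
print(f"8.8.8.8 maps back to {name}")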

Quality of Service(QoS)
Quality of Service (QoS) is basically the ability to provide different priority to different
applications, users, or data flows, or to guarantee a certain level of performance to a data
flow.
QoS describes the overall performance of the computer network, mainly as that performance is
seen by the users of the network.
Flow Characteristics

Given below are four types of characteristics that are mainly attributed to the flow and these are
as follows:
● Reliability
● Delay
● Jitter
● Bandwidth

Reliability

It is one of the main characteristics that the flow needs. If there is a lack of reliability then it
simply means losing any packet or losing an acknowledgement due to which retransmission is
needed.
Reliability becomes more important for electronic mail, file transfer, and for internet access.
Delay
Another characteristic of the flow is the delay in transmission between the source and
destination. During audio conferencing, telephony, video conferencing, and remote conferencing
there should be a minimum delay.
Jitter
It is basically the variation in delay for packets belonging to the same flow. Thus jitter is
the variation in packet delay. A high jitter value means the variation in delay is large, and low
jitter means the variation is small.
Bandwidth
The different applications need different bandwidth.
How to achieve Quality of Service?

Let's get into some details. Say your organization wants to achieve Quality of Service; this
can be done by using some tools and techniques, like a jitter buffer and traffic shaping.
Jitter buffer
This is a temporary storage buffer used to store incoming data packets. It is used in
packet-based networks to ensure that the continuity of the data streams does not get disturbed;
it does this by smoothing out packet arrival times during periods of network congestion.
Traffic shaping
This technique, also known as packet shaping, is a congestion control or management
technique that helps regulate network data transfer by delaying the flow of the least important
or least necessary data packets.
QoS is included in the service-level agreement when an organization signs it with its network
service provider which guarantees the selected performance level.
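One common way traffic shaping is implemented is a token bucket; the sketch below is a
simplified illustration, and the rate and bucket size are arbitrary values, not something
prescribed by the text.

import time

class TokenBucket:
    def __init__(self, rate_bytes_per_s: float, bucket_size: float):
        self.rate = rate_bytes_per_s          # long-term allowed sending rate
        self.capacity = bucket_size           # maximum burst size
        self.tokens = bucket_size
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        """Refill tokens, then decide whether the packet may be sent now or must wait."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False                          # delay the packet: shaping in action

shaper = TokenBucket(rate_bytes_per_s=10_000, bucket_size=5_000)
for i in range(5):
    print(f"packet {i}: {'send' if shaper.allow(1_500) else 'queue for later'}")
    time.sleep(0.05)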

Integrated Services

The Integrated Services (IntServ) model is also known as the hard QoS model. It is a model based
on flows, i.e., source and destination IP addresses and ports.

With the IntServ model, applications ask the network for an explicit resource reservation per
flow. Network devices keep track of all the flows traversing the nodes, checking whether new
packets belong to an existing flow and whether there are enough network resources available to
accept the packet.
By reserving resources on the network for each flow, applications obtain resource guarantees
and predictable behaviour from the network.
IntServ model performs deterministic Admission Control (AC) based on resources requests vs.
available resources.
The implementation of this model requires the presence of IntServ-capable routers in the network
and uses RSVP for end-to-end resource reservation. RSVP enables a host to establish a
connection over the connectionless IP Internet:

1. Applications request some level of service from the network before sending data.
2. The network admits or rejects the reservation (per flow) based on available resources.
3. Once cleared, the network expects the application to remain within the requested traffic
profile.

The scalability of this model is limited by the high resource consumption on network nodes
caused by per-flow processing and the associated state. Remember that network nodes need to
maintain reservation state for each flow traversing the node.

The fact that RSVP is a soft-state protocol with a continuous signaling load only aggravates the
scalability problem.

IntServ advantages
● Good solution for managing flows in small networks.
● Intserv enables hosts to request per-flow, quantifiable resources, along end-to-end data
paths and to obtain feedback regarding admissibility of these requests.

IntServ disadvantages
● Poor scalability.
● High resource consumption on the network nodes.
● Per flow processing (CPU): signaling & processing load.
● Per flow state (memory): to keep track of every flow traversing the node.
● Continuous signaling (RSVP is a soft state protocol).
● It’s very difficult to implement.
