Dr. Suad El-Geder, EC 433, Computer Engineering Department
The following parameters are used to measure network performance:
1) Bandwidth
2) Throughput
3) Latency
4) Packet Loss
5) Jitter
Bandwidth:
Network bandwidth is defined as the maximum transfer capacity of a network. It is a measure of how much data can be sent and received at a time. Bandwidth is measured in bits, megabits, or gigabits per second.
It refers to the maximum amount of data that can be passed from one point to another.
Monitoring bandwidth availability lets the network administrator confirm there is enough capacity to meet demand.
Throughput:
Throughput, in contrast, is the actual measurement of the data being moved through the medium at any particular time.
Using throughput to measure network speed is good for troubleshooting because it can root out the exact cause of a slow network and alert administrators to problems, specifically with regard to packet loss.
It can easily be affected by changes in interference, network traffic, network devices, transmission errors, and a host of other factors.
Example 1: Throughput
A network with a bandwidth of 15 Mbps can pass only an average of 1,000 packets per second, with each packet carrying an average of 1,500 bits. The throughput is therefore 1,000 x 1,500 = 1.5 Mbps.
Throughput / Bandwidth = 1.5 / 15 = 0.1, which is equal to 1/10 of the bandwidth.
Example 2:
A network with a bandwidth of 10 Mbps can pass only an average of 12,000 frames per minute, where each frame carries an average of 10,000 bits. What will be the throughput for this network?
Answer:
We can calculate the throughput as: Throughput = (12,000 x 10,000) / 60 = 2,000,000 bps = 2 Mbps
The throughput in this case is equal to one-fifth of the bandwidth.
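Both worked examples follow the same formula, throughput = total bits transferred / time taken. A short script (using the figures from the two examples above, with Mbps meaning 10^6 bits per second) checks the arithmetic:

```python
def throughput_bps(units, bits_per_unit, seconds):
    """Average throughput in bits per second."""
    return units * bits_per_unit / seconds

# Example 1: 1,000 packets/s, 1,500 bits each, on a 15 Mbps link.
t1 = throughput_bps(1_000, 1_500, 1)     # 1,500,000 bps = 1.5 Mbps
print(t1 / 15_000_000)                   # fraction of bandwidth: 0.1

# Example 2: 12,000 frames/minute, 10,000 bits each, on a 10 Mbps link.
t2 = throughput_bps(12_000, 10_000, 60)  # 2,000,000 bps = 2 Mbps
print(t2 / 10_000_000)                   # fraction of bandwidth: 0.2
```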
Latency:
In a network, latency refers to the measure of time it takes for data to reach its destination across the network.
Network latency is usually measured as a round-trip delay, in milliseconds (ms), taking into account the time it takes for the data to get to its destination and then back again to its source.
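As a minimal sketch of how round-trip measurements are summarized (the sample values below are illustrative, since a real measurement would require sending probes over a live network):

```python
# Round-trip time (RTT) samples in milliseconds, e.g. from repeated pings.
rtt_samples_ms = [24.1, 25.3, 23.8, 30.2, 24.6]

avg_rtt = sum(rtt_samples_ms) / len(rtt_samples_ms)
one_way_estimate = avg_rtt / 2  # rough estimate; assumes a symmetric path

print(f"average RTT: {avg_rtt:.1f} ms, one-way estimate: {one_way_estimate:.1f} ms")
```

Note that halving the RTT is only an approximation: the forward and return paths may have different delays.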
Packet Loss:
Packet loss refers to the number of data packets that were successfully sent out from one point
in a network, but were dropped during data transmission and never reached their destination.
It is important for the IT team to measure packet loss, so they know how many packets are being dropped across the network and can take steps to ensure that data is transmitted as it should be. Knowing how to measure packet loss provides a metric for determining good or poor network performance.
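The metric itself is simple: the fraction of sent packets that never arrived, expressed as a percentage. A small sketch (hypothetical counts):

```python
def packet_loss_percent(sent, received):
    """Percentage of packets sent that never reached the destination."""
    if sent == 0:
        return 0.0
    return (sent - received) * 100 / sent

# e.g. 10,000 packets sent, 9,850 confirmed delivered -> 1.5% loss
print(packet_loss_percent(10_000, 9_850))
```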
Jitter:
Network jitter is the network transmission’s biggest enemy when using real-time apps such as
IP telephony, video conferencing, and virtual desktop infrastructure.
Jitter is defined as a variation in delay, otherwise known as a disruption that occurs while data packets travel across the network.
There are many factors that can cause jitter, and many of these factors are the same as those
that cause delay. One difficult thing about jitter is that it doesn’t affect all network traffic in the
same way.
Jitter can be caused by network congestion. Network congestion occurs when network devices
are unable to send the equivalent amount of traffic they receive, so their packet buffer fills up
and they start dropping packets. If there is no disturbance on the network at an endpoint, every
packet arrives. However, if the endpoint buffer becomes full, packets arrive later and later.
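Since jitter is the variation in delay, one simple way to quantify it is the average difference between consecutive packet delays. The sketch below uses that simplified definition with illustrative values (the RTP standard, RFC 3550, instead uses a smoothed running estimate):

```python
# Per-packet one-way delays in milliseconds (illustrative values).
delays_ms = [20.0, 22.0, 21.0, 25.0, 20.5]

# Differences between consecutive delays, then their average.
diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
jitter = sum(diffs) / len(diffs)

print(f"jitter: {jitter:.3f} ms")  # 0 would mean perfectly steady delay
```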
A communication subsystem is a complex piece of hardware and software. Early
attempts at implementing the software for such subsystems were based on a
single, complex, unstructured program with many interacting components.
The resultant software was very difficult to test and modify. To overcome this
problem, the ISO (short for International Organization for Standardization)
developed a layered approach. In a layered approach, the networking task is
divided into several layers, and each layer is assigned a particular function.
Therefore, we can say that networking tasks are carried out by the cooperation of these layers.
The main aim of the layered architecture is to divide the design into small pieces.
Each lower layer adds its services to the higher layer, so that together the layers provide a full set of services to manage communications and run the applications.
It provides modularity and clear interfaces, i.e., well-defined interaction between subsystems.
It ensures independence between layers by providing services from a lower layer to a higher layer without
defining how those services are implemented. Therefore, any modification in one layer will not affect the other
layers.
The number of layers, their functions, and the contents of each layer vary from network to network. However, the
purpose of every layer is to provide services from a lower to a higher layer while hiding from the higher
layers the details of how the services are implemented.
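This independence can be illustrated with a toy sketch (hypothetical class and method names, not from any real networking library): each layer uses only the send interface of the layer below it, so any layer's implementation could be swapped without touching its neighbours.

```python
class PhysicalLayer:
    def send(self, data: str) -> str:
        # Lowest layer: "transmit" the data as bits.
        return f"<bits>{data}</bits>"

class DataLinkLayer:
    def __init__(self, lower):
        self.lower = lower  # only the interface of the layer below is used
    def send(self, data: str) -> str:
        return self.lower.send(f"<frame>{data}</frame>")

class NetworkLayer:
    def __init__(self, lower):
        self.lower = lower
    def send(self, data: str) -> str:
        return self.lower.send(f"<packet>{data}</packet>")

# Build the stack bottom-up and send a message down through it.
stack = NetworkLayer(DataLinkLayer(PhysicalLayer()))
print(stack.send("hello"))
# <bits><frame><packet>hello</packet></frame></bits>
```

Each layer wraps the data it receives and hands it to the layer below, which is exactly the service relationship described above.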
No layering: each new application has to be re-implemented for every network technology.
Layering: intermediate layer(s) provide a unique abstraction for various network technologies.
The basic elements of layered architecture are services, protocols, and interfaces.
Service: the set of operations that a layer provides to the layer above it.
Protocol: a set of rules that a layer uses to exchange information with its peer
entity. These rules concern both the contents and the order of the messages
used.
Interface: the way through which a message is transferred from one layer to another
layer.
Protocol: a set of rules that govern data communication between peer entities.
Syntax: concerns the format of the data blocks; it indicates how to read the bits (field outlining).
The OSI model consists of seven layers, and each layer performs a particular network function.
The OSI model was developed by the International Organization for Standardization (ISO) in 1984, and it is now
considered an architectural model for inter-computer communications.
OSI model divides the whole task into seven smaller and manageable tasks. Each layer is assigned a
particular task.
Each layer is self-contained, so that the task assigned to each layer can be performed independently.
The OSI model is divided into two groups of layers: upper layers and lower
layers.
The upper layers of the OSI model mainly deal with application-related
issues, and they are implemented only in software. The
application layer is closest to the end user. Both the end user and the
application layer interact with the software applications. An upper
layer refers to the layer just above another layer.
The lower layers of the OSI model deal with data-transport issues.
The data link layer and the physical layer are implemented in both
hardware and software. The physical layer is the lowest layer of the OSI
model and is closest to the physical medium. The physical layer is
mainly responsible for placing the information on the physical
medium.
PDU
A PDU is a specific block of information transferred over a network. It is often used in reference
to the OSI model, since it describes the different types of data that are transferred from each
layer. The PDU for each layer of the OSI model is listed below.
1. Physical layer – raw bits (1s and 0s) transmitted physically via the hardware
2. Data link layer – a frame that carries the hardware (MAC) addresses of the source and destination
3. Network layer – a packet that contains the source and destination logical (IP) addresses
4. Transport layer – a segment that carries the port numbers identifying the sending and receiving applications
5.–7. Session, presentation, and application layers – the data passed to and from the application
The PDU defines the state of the data as it moves from one layer to the next.
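As a quick reference, the conventional PDU name at each OSI layer can be written down as a simple lookup table (a sketch using the standard terminology; layer-name strings are my own spelling):

```python
# Conventional PDU name for each OSI layer, top of the stack first.
PDU_BY_LAYER = {
    "application":  "data",
    "presentation": "data",
    "session":      "data",
    "transport":    "segment",
    "network":      "packet",
    "data link":    "frame",
    "physical":     "bits",
}

# Print the encapsulation order as data moves down the stack.
for layer, pdu in PDU_BY_LAYER.items():
    print(f"{layer:>12} layer -> {pdu}")
```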