
Dr. Suad El-Geder
EC 433
Computer Engineering Department
The following parameters are used to measure network performance:
1) Bandwidth

2) Throughput

3) Latency

4) Packet Loss

5) Jitter
 Network bandwidth is defined as the maximum transfer throughput capacity of a network. It is a measure of how much data can be sent and received at a time. Bandwidth is measured in bits, megabits, or gigabits per second.

 It refers to the maximum amount of data that can be passed from one point to another.

 It is not affected by physical obstruction because it is, to some extent, a theoretical unit.

 Monitoring bandwidth availability ensures that the network administrator knows whether enough theoretical bandwidth is available when it is needed.


 Network throughput refers to how much data can be transferred from source to destination within a given timeframe. Throughput therefore measures how many packets arrive at their destinations successfully. For the most part, throughput capacity is measured in bits per second (bps).

 In other words, it is the actual measurement of the data being moved through the media at any particular time.

 Using throughput to measure network speed is useful for troubleshooting because it can root out the exact cause of a slow network and alert administrators to problems, specifically with regard to packet loss.

 It can easily be affected by interference, network traffic, network devices, transmission errors, and a host of other factors.
Example 1: Throughput
 A network with a bandwidth of 15 Mbps can pass only an average of 1000 packets per second, with each packet carrying an average of 1500 bits.

 What is the throughput of this network?

Throughput = (number of packets × average packet size) / time

Throughput = (1000 × 1500) / 1 s = 1,500,000 bps = 1.5 Mbps

The throughput is 1.5 / 15 = 0.1, which is equal to 1/10 of the bandwidth.

Example 2:
A network with a bandwidth of 10 Mbps can pass only an average of 12,000 frames per minute, where each frame carries an average of 10,000 bits. What will be the throughput for this network?
Answer:
We can calculate the throughput as: Throughput = (12,000 × 10,000) / 60 = 2,000,000 bps = 2 Mbps
The throughput is one-fifth of the bandwidth in this case.
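To make the two worked examples easy to reproduce, here is a minimal Python sketch of the same arithmetic (the function name and layout are my own, not from the lecture):

def throughput_bps(num_packets, avg_packet_bits, seconds):
    # Actual data moved per unit time: (packets x bits per packet) / time
    return num_packets * avg_packet_bits / seconds

# Example 1: 1000 packets/s of 1500 bits each on a 15 Mbps link
t1 = throughput_bps(1000, 1500, 1)        # 1,500,000 bps = 1.5 Mbps
print(t1 / 15e6)                          # 0.1 -> one tenth of the bandwidth

# Example 2: 12,000 frames/minute of 10,000 bits each on a 10 Mbps link
t2 = throughput_bps(12_000, 10_000, 60)   # 2,000,000 bps = 2 Mbps
print(t2 / 10e6)                          # 0.2 -> one fifth of the bandwidth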
Comparison of Bandwidth and Throughput

1. Definition
Bandwidth: the maximum amount of data that can be passed from one point to another.
Throughput: the actual measurement of the data being moved through the media at any particular time.

2. Dependence
Bandwidth: does not depend on latency.
Throughput: depends on latency.

3. Effect
Bandwidth: not affected by physical obstruction because it is, to some extent, a theoretical unit.
Throughput: easily affected by interference, network traffic, network devices, transmission errors, and a host of other factors.
 Latency:

 In a network, latency refers to the time it takes for data to reach its destination across the network.

 Network latency is usually measured as a round-trip delay, in milliseconds (ms), taking into account the time it takes for the data to get to its destination and then back again to its source.
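As a rough illustration (not part of the lecture material), the sketch below estimates round-trip latency by timing a TCP connection setup; the host name is only a placeholder, and real tools such as ping measure this more precisely:

import socket
import time

def tcp_rtt_ms(host, port=80, timeout=2.0):
    # Rough round-trip latency estimate: time a TCP handshake, in milliseconds
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

# 'example.com' is just a placeholder host; any reachable server will do
print(f"Approximate RTT: {tcp_rtt_ms('example.com'):.1f} ms")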

 Packet Loss:

 Packet loss refers to the number of data packets that were successfully sent out from one point in a network but were dropped during transmission and never reached their destination.

 It is important for the IT team to measure packet loss, to know how many packets are being dropped across the network and to be able to take steps to ensure that data is transmitted as it should be. Knowing how to measure packet loss provides a metric for determining good or poor network performance.
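A minimal sketch of the metric described above, expressed as the percentage of sent packets that never arrived (the counter values are hypothetical):

def packet_loss_percent(packets_sent, packets_received):
    # Share of packets that were sent but never reached the destination
    if packets_sent == 0:
        return 0.0
    return (packets_sent - packets_received) / packets_sent * 100

# Hypothetical counters, e.g. read from interface or probe statistics
print(packet_loss_percent(10_000, 9_870))   # 1.3 (percent of packets lost)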
 Jitter:
 Network jitter is the network transmission’s biggest enemy when using real-time applications such as IP telephony, video conferencing, and virtual desktop infrastructure.

 Jitter is defined as a variation in delay, otherwise known as a disruption that occurs while data packets travel across the network (a small numerical sketch follows this list).

 There are many factors that can cause jitter, and many of them are the same as those that cause delay. One difficult thing about jitter is that it does not affect all network traffic in the same way.

 Jitter can be caused by network congestion. Network congestion occurs when network devices are unable to send out the same amount of traffic they receive, so their packet buffers fill up and they start dropping packets. If there is no disturbance on the network at an endpoint, every packet arrives. However, if the endpoint buffer becomes full, packets arrive later and later.
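To make "variation in delay" concrete, here is a small sketch that estimates jitter as the average difference between consecutive packet delays (one common convention, in the spirit of RFC 3550; the delay values are made up):

def mean_jitter_ms(delays_ms):
    # Jitter as the average absolute difference between consecutive packet delays
    if len(delays_ms) < 2:
        return 0.0
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

# Hypothetical one-way delays observed for five packets, in milliseconds
print(mean_jitter_ms([30.0, 32.5, 29.0, 41.0, 30.5]))   # about 7.1 ms of jitter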
 A communication subsystem is a complex piece of hardware and software. Early attempts at implementing the software for such subsystems were based on a single, complex, unstructured program with many interacting components.

 The resulting software was very difficult to test and modify. To overcome this problem, the ISO (short for International Organization for Standardization) developed a layered approach. In a layered approach, the networking concept is divided into several layers, and each layer is assigned a particular task. Therefore, we can say that networking tasks are divided among the layers.
 The main aim of the layered architecture is to divide the design into small pieces.

 Each lower layer adds its services to the higher layer to provide a full set of services to manage communications and run the applications.

 It provides modularity and clear interfaces, i.e., it provides interaction between subsystems.

 It ensures independence between layers by providing services from a lower layer to a higher layer without defining how the services are implemented. Therefore, any modification in one layer will not affect the other layers.

 The number of layers, the functions, and the contents of each layer vary from network to network. However, the purpose of each layer is to provide a service from a lower layer to a higher layer while hiding from the higher layers the details of how the services are implemented.
 No Layering: Each new application has to be re-implemented for every network technology.

 Layering: Intermediate layer(s) provide a unique abstraction for various network technologies.
 The basic elements of layered architecture are services, protocols, and interfaces.

 Service: A set of actions that a layer provides to the higher layer.

 Protocol: A set of rules that a layer uses to exchange information with its peer entity. These rules mainly concern both the content and the order of the messages used.

 Interface: The way through which a message is transferred from one layer to another layer.
 Protocol: Set of rules that govern data communication between peer entities.

 Layer-n peer processes communicate by exchanging Protocol Data Units (PDUs).

Key Features of a Protocol

 Syntax: Concerns the format of the data blocks; indicates how to read the bits (field outlining), as illustrated in the sketch after this list.

 Semantics: Includes control information for coordination and error handling.

 Timing: Includes speed matching and sequencing.
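As an illustration of "syntax" as field outlining, here is a sketch of my own with a made-up header layout (not a real protocol): it packs and unpacks a toy header with fixed-width fields so that both peers read the bits the same way.

import struct

# Toy header layout, assumed only for illustration:
#   2-byte source port, 2-byte destination port, 4-byte sequence number
HEADER_FMT = "!HHI"   # network byte order: two unsigned shorts, one unsigned int

def build_header(src_port, dst_port, seq):
    # Syntax: lay the fields out in an agreed order and width
    return struct.pack(HEADER_FMT, src_port, dst_port, seq)

def parse_header(raw):
    # The peer applies the same field outline to read the bits back
    return struct.unpack(HEADER_FMT, raw[:struct.calcsize(HEADER_FMT)])

header = build_header(5000, 80, 42)
print(parse_header(header))   # (5000, 80, 42)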


 One problem is decomposed into a number of smaller, more manageable sub-problems, which gives more flexibility in designing, modifying, and evolving computer networks.

 A common functionality of a lower layer can be shared by many upper layers.

 OSI stands for Open System Interconnection. It is a reference model that describes how information from a software application in one computer moves through a physical medium to a software application in another computer.

 OSI consists of seven layers, and each layer performs a particular network function.

 The OSI model was developed by the International Organization for Standardization (ISO) in 1984, and it is now considered an architectural model for inter-computer communications.

 The OSI model divides the whole task into seven smaller and more manageable tasks. Each layer is assigned a particular task.

 Each layer is self-contained, so the task assigned to each layer can be performed independently.
 The OSI model is divided into two parts: upper layers and lower layers.

 The upper layers of the OSI model mainly deal with application-related issues, and they are implemented only in software. The application layer is closest to the end user. Both the end user and the application layer interact with software applications. An upper layer refers to the layer just above another layer.

 The lower layers of the OSI model deal with data-transport issues. The data link layer and the physical layer are implemented in hardware and software. The physical layer is the lowest layer of the OSI model and is closest to the physical medium. The physical layer is mainly responsible for placing information on the physical medium.
PDU
 A PDU is a specific block of information transferred over a network. The term is often used in reference to the OSI model, since the model describes the different types of data that are transferred at each layer. The PDU for each layer of the OSI model is listed below.
1. Physical layer – raw bits (1s or 0s) transmitted physically via the hardware

2. Data Link layer – a frame (or series of bits)

3. Network layer – a packet that contains the source and destination address

4. Transport layer – a segment that includes a TCP header and data

5. Session layer – the data passed to the network connection

6. Presentation layer – the data formatted for presentation

7. Application layer – the data received or transmitted by a software application

 The PDU defines the state of the data as it moves from one layer to the next.
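To tie the PDU names to the encapsulation idea, here is a simplified toy sketch (plain-text stand-ins for real headers, not an actual protocol stack) showing each layer wrapping the PDU it receives from the layer above before passing it down:

def application_data(message):
    return message.encode()                     # Application-layer data

def transport_segment(data, src_port=5000, dst_port=80):
    header = f"TCP {src_port}->{dst_port}|".encode()
    return header + data                        # Segment = transport header + data

def network_packet(segment, src="10.0.0.1", dst="10.0.0.2"):
    header = f"IP {src}->{dst}|".encode()
    return header + segment                     # Packet adds source and destination addresses

def datalink_frame(packet):
    return b"FRAME|" + packet + b"|FCS"         # Frame wraps the packet for the link

frame = datalink_frame(network_packet(transport_segment(application_data("hello"))))
print(frame)   # these are the bits the physical layer would transmit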
