NETWORK PERFORMANCE ANALYSIS WITHIN A LOCAL AREA NETWORK
BY
(COMPUTER SCIENCE)
OCTOBER, 2015
ABSTRACT
In this research work, we analyze the performance of a local area network in both its wireless and wired forms. The traditional wired network suffers from constraints such as limited mobility and expensive cabling, whereas wireless communication is a flexible data communication system implemented as an extension to, or as an alternative for, wired communication. The bandwidth and the services provided by wireless communication networks are similar to those provided by wired networks. Evaluating the viability and performance of computer networks in the real world can be a very expensive task. In this research work, the performance of wireless and wired networks, as well as a comparison between them, is evaluated using the OPNET (Optimized Network Engineering Tools) simulation tool. For the wired network, collision count, traffic received, delay, and throughput are studied, while for the wireless network, data dropped, traffic received, media access delay, and throughput are studied. For the comparison of the two networks, throughput is the performance parameter investigated.
CHAPTER ONE
INTRODUCTION
transmission media.
In the last two decades, the technology and market of computing have moved from large, centralized mainframe computers with terminals toward smaller, distributed systems. Without interconnection, these smaller resources become isolated and are not efficiently utilized. On the other hand, communication is becoming more and more essential for business, scientific, and other tasks. The need for resource sharing and communication capabilities was behind the evolution of computer networks. One important and commonly used type of computer network is the Local Area Network (LAN) (Fraser, 1999). LANs are characterized by limited range, private usage, and high speeds. A local area network is categorized as either wired or wireless, depending on the transmission medium used between connected pairs of systems.

Rapid advances have taken place in the field of wired and wireless networks. Several network models have been built by various researchers, using network simulators, to find out the most feasible ones. Investigations of these network models have been performed using simulation techniques that reduce the cost of prediction, estimation and implementation of the network models. Among the various network simulators available, like NetSim, NS-2 and GloMoSim, OPNET provides the industry's leading environment for network modelling and simulation of networks, devices, protocols and applications with flexibility and scalability. It provides an object-oriented modelling approach and graphical editors that mirror the structure of actual networks and network components. It provides support for modelling both wired and wireless LANs.
Though wired networks have provided high-speed connectivity, drawbacks like extensive cabling and immobility have given the WLAN momentum (Vikas, 2010). Computer networks today are not only wired but wireless too, depending on the circumstances of use. The IEEE 802 standards split the Data Link Control (DLC) layer into Logical Link Control (LLC) and Medium Access Control (MAC) sub-layers. The LLC layer is independently specified for all 802 LANs, wireless or wired. Like IEEE 802.3 (Ethernet) and IEEE 802.5 (Token Ring), the IEEE 802.11 (WLAN) standard also focuses on these two layers. Our study has focused on the performance analysis of IEEE 802.3 (Ethernet) based wired LANs and IEEE 802.11b based wireless LANs.
Technological advancements affect the world in general and network settings in particular, and the choice of network is a major challenge when there is a need to set up a local area network. People have ready access to more information than at any time in the past. Network users face the challenge of acquiring the abilities required to participate in a technology- and knowledge-driven global order. Because of the differences in performance between wired and wireless local area networks, network analysis is needed so that users can make the right choice when setting up a local area network. To address this, there is a need for the analysis of a local area network in order to make an informed choice. This research work is aimed at analyzing network performance within a local area network.
c. To determine the performance of a local area network in terms of speed,
This project work analyzes the performance of a local area network using OPNET (Optimized Network Engineering Tools), which provides the industry's leading environment for network modeling and simulation of networks, devices, protocols, and applications with flexibility and scalability. It provides an object-oriented modeling approach and graphical editors that mirror the structure of actual networks and network components.

Networks have grown significantly over the past few decades, changing the pace at which network resources are accessed. For example, the use of the Internet is gaining importance with the adoption of network technologies for purposes like education, business, banking and defense. Hence this project work will be relevant in several ways, as it will provide room for choice of network and show how to analyze network performance.

ii. The OPNET simulation technique will be employed to analyze the performance of the networks.

iii. Simulation details and a discussion of results are encapsulated in the subsequent chapters.
1.6 Limitations of the Study
This project covers the performance analysis of both wired and wireless networks.
i. Time constraints: due to time constraints, this work contains only a description of the network analysis.
ii. Availability of internet facilities and computer literacy: this research work cannot be carried out where there are no network facilities or computer literacy.
CHAPTER TWO
LITERATURE REVIEW
A computer is a machine that can communicate information much more quickly and accurately than a person could (Bennington, 2002). The first ancient version of a computer, the abacus, was a device used to represent numbers. It consisted of stones that were strung on threads in a wooden frame. More recently, a model for a mechanical machine that was used to do computations, and which bore some similarity to modern-day computers, was developed. The first general-purpose electronic computer, the Electronic Numerical Integrator and Calculator (ENIAC), became operational on 14 February 1946. Since the development of the first modern computer there have been many significant advancements in computing technology.

The first modern-day computers were constructed using electronic tubes. By the late 1950s this technology had been displaced by discrete transistors, which were smaller, faster and cheaper, and produced far less heat than previous technologies. In the mid-1960s discrete transistors gave way to integrated circuits and other components on a silicon "chip". During the 1970s, the mainstream electronics industry began to produce innovative products such as video games, calculators, and digital watches (Campbell-Kelly & Aspray, 1996).
Desktop computers
The Apple II, launched in 1977, established the paradigm of the personal computer, namely a central processing unit equipped with a keyboard and screen, and a floppy disk drive for program and data storage (Campbell-Kelly & Aspray, 1996). Historically, desktop standalone computers used the Microsoft Disk Operating System (MS-DOS) (Freese, 1992). During the 1980s, MS-DOS became the standard operating system for personal computers as many computer manufacturers adopted it, and thousands of programs were based upon this system (Campbell-Kelly & Aspray, 1996).

The next step in the development of personal computers was the adoption of Microsoft Windows (MS Windows) as the operating system for personal computers. This operating system was characterized by a graphical user interface. Various versions of this operating system have since appeared on the market (Campbell-Kelly & Aspray, 1996). The multitasking and large-memory features of MS Windows paved the way for more demanding applications.

The next step in the development of computer technology was the establishment of multimedia computers, which combine text, graphics, sound and video that are integrated, controlled and delivered by the computer. Multimedia
computers have a high degree of interactivity (Collin, 1996; Dodd, 1995; Joos, Whitman, Smith & Nelson, 1996). Multimedia computer programmes can be stored on a CD-ROM. The acronym ROM stands for "read only memory", which means that one can read or copy the information on the disc, but one cannot change it (Wright, 1996). Multimedia technology supports games, simulations and other interactive applications (Dodd, 1995).

Many multimedia programmes make use of hypertext links (Comer, 1994). A hypertext link is a special word, button or picture that provides a link to another page, a piece of text, a sound file, an animation or a video clip. It is used to show more detail about a particular topic or to provide interactive experiences with the content. The user activates a hypertext link by clicking on it with a mouse (Collin, 1996).
2.2 Networks
The linking of computers formed the basis for the establishment of computer networks. A computer network comprises any number of computers that are linked together. A network can be confined to a single building, utilising data cables as linking devices. Where greater distances are involved, the computers that constitute a network are linked by means of satellite links, telephone lines or fibre-optic cables (Meyer & Cilliers, 2002). When computers are linked together, information can be moved between them swiftly and efficiently. The information moves directly between computers rather than through a human intermediary. A network also allows for information to be backed up at a central location; without such backups, computers can fail and important information can be lost by mistake (Chellis et al., 2000). A network can also link the various organisations forming one institution (Chellis et al., 2000). This allows for data transfer between them.
In the late 1960s the United States Defence Department created a network that linked military computers together. This eventually gave rise to the establishment of the Internet (Maran et al., 1997). The Internet links together millions of computers that are scattered around the world. The backbone of the Internet is a set of high-speed data lines that connect major networks all over the world. This enables many millions of computer users to globally share and exchange information, as a computer user is linked with computer users on the other side of the world.
Network topology is the pattern of interconnection used among the various nodes of the network. The most general topology is an unconstrained graph structure, with nodes connected together in an arbitrary pattern. This general structure is the one normally associated with a packet-switched network; its advantage is that the arrangement of the communication links can be based on the network traffic (Mohammed, 2009). This generality is a tool for optimizing the use of costly transmission media, an idea which is not germane to local area networks. Further, this generality introduces the unavoidable cost of making a routing decision at each node a message traverses. A message arriving at a node cannot be blindly transmitted out of all the other links connected to that node, for that would result in a message that multiplied at every node and propagated forever in the network (Sameh, 2006). Thus each node must decide, as it receives a message, on which link or links it should be forwarded. Because this general topology offers no significant advantage in a local area network, and does imply a degree of complexity at every node, local area network designers have identified simpler topologies better suited to local networks. We shall consider three such topologies: the star, the ring, and the bus.
Star Network: A star network eliminates the need for each network node to make routing decisions by localizing all message routing in one central node (Alborz, 2007). This leads to a particularly simple structure for each of the other network nodes. This topology suits traffic patterns in which several secondary nodes communicate with one primary node. For example, the star is an obvious topology to support a number of terminals communicating with a time-sharing system, in which case the central node is the time-sharing system itself.

If, however, the normal pattern of communication is not between one primary node and several secondary nodes, but is instead more general communication among all of the nodes, then reliability appears as a possible disadvantage of the star net. Clearly, the operation of the network depends on the correct operation of the central node, which performs all of the routing functions and must have sufficient capacity to cope with all simultaneous conversations (Mike, 2000). For these reasons, the central node may be a fairly large computer. The cost and difficulty of making the central node sufficiently reliable may more than offset any benefit derived from the simplicity of the other nodes.
Ring and Bus Networks: The ring and bus topologies attempt to eliminate the central node on the network without sacrificing the simplicity of the other nodes (Song, 2008). While the elimination of the central node does imply a certain complexity at the other nodes of the net, a decentralized network can be constructed with a surprisingly simple structure at each node. In the ring topology, a message is passed from node to node along unidirectional links. There are no routing decisions to be made in this topology; the sending node simply transmits its message to the next node in the ring, and the message passes around the ring, one node at a time, until it reaches the node for which it is intended. The only routing requirement placed on each node is that it be able to recognize, from the address in the message, those messages intended for it. Similarly, in the bus structure, there are no routing decisions required by any of the nodes (Sarah, 2008). A message flows away from the originating node in both directions to the ends of the bus. The destination node reads the message as it passes by. Again, a node must only be able to recognize the messages addressed to it.
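The ring-forwarding rule described above can be sketched in a few lines. The node labels, frame fields and hop counting below are purely illustrative assumptions, not part of any cited system:

```python
# Sketch of ring forwarding: each node passes a frame to its successor
# until the frame's destination address matches the node's own address.
# All names here are illustrative, not drawn from OPNET or any standard.

def ring_deliver(ring, src, dst, payload):
    """Forward a frame around a unidirectional ring; return the hop count."""
    n = len(ring)
    hops = 0
    pos = ring.index(src)
    while True:
        pos = (pos + 1) % n          # unidirectional link to the next node
        hops += 1
        if ring[pos] == dst:         # the only routing test a node performs
            return hops
        if hops > n:                 # destination is not on the ring
            raise ValueError(f"{dst} not reachable on ring")

ring = ["A", "B", "C", "D"]
print(ring_deliver(ring, "A", "C", "hello"))  # 2 hops: A -> B -> C
```

Note that no node holds a routing table; the address comparison at each hop is the entire routing logic, which is exactly why ring nodes can be so simple.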
2.4 Wireless Network
Wireless technology has evolved from wired technology over a span of time and is a rapidly growing segment of the communications industry, enabling information exchange between portable devices located anywhere in the world (Hasham, 2008). Wireless Local Area Networks (WLANs) have been developed to provide users in a limited geographical area with high bandwidth and services similar to those supported by the wired Local Area Network (LAN). Unlike wired networks, WLANs use the IEEE 802.11 standards to transmit and receive radio waves through the air medium between a wireless client and an Access Point (AP), as well as among two or more wireless clients within a certain range of each other (Song, 2008). A WLAN basically consists of one or more wireless devices connected to each other in a peer-to-peer manner or through APs, which in turn are connected to the backbone network, providing wireless connectivity to the covered area.

In Mohammed (2006), the authors worked on improving the performance of WLANs using access points. They investigated and estimated the traffic load on an access point, which can help determine the number of access points to be deployed to serve the network stations; the number of PCF stations that can be deployed per access point was also investigated. Correctly setting the number of PCF stations will help tune the performance of these nodes as well as the overall network performance. The authors also introduced a wireless LAN design framework for optimal placement of access points at suitable locations to satisfy the coverage and capacity requirements of the users. Optimal planning of WLANs can result in improved quality of service and efficient use of resources. Optimization algorithms were used to solve the AP placement and channel allocation problems, considering coverage, traffic, redundancy, channel interference and wiring cost.
WLANs, however, still lag behind wired local area networks, which provide a larger bandwidth. This limitation is due to the error-prone physical medium (air) (Hua, 2008). Methods like tuning the physical-layer parameters, tuning the IEEE 802.11 parameters and using an enhanced link-layer (media access control) protocol have been proposed to overcome it.

The IEEE 802.11 standard often operates far from its theoretical throughput limit, and this depends on the window size of the IEEE 802.11 back-off algorithm. The main reason why the capacity of the standard protocol is often far from the theoretical limit is that a station may suffer numerous collisions before its window reaches a size which gives a low collision probability (Cali, 2000). It was cited that proper tuning of the back-off algorithm can drive the IEEE 802.11 protocol close to the theoretical throughput limit.
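The effect described here, a contention window growing through repeated collisions until the collision probability becomes low, can be illustrated with a toy slotted-contention model. This is a simplified sketch of binary exponential backoff, not the actual 802.11 DCF nor the analysis in Cali (2000); all parameter values are assumptions:

```python
import random

def csma_backoff_sim(stations=20, cw_min=16, cw_max=1024, slots=100_000):
    """Toy slotted contention model: each station holds a backoff counter
    drawn uniformly from [0, cw). A slot with exactly one transmitter is a
    success; two or more is a collision, and each collider doubles its
    window (binary exponential backoff). Returns the fraction of slots
    that carried a successful frame."""
    cw = [cw_min] * stations
    backoff = [random.randrange(cw_min) for _ in range(stations)]
    successes = 0
    for _ in range(slots):
        ready = [i for i in range(stations) if backoff[i] == 0]
        if len(ready) == 1:
            i = ready[0]
            successes += 1
            cw[i] = cw_min                      # reset the window on success
            backoff[i] = random.randrange(cw[i])
        elif len(ready) > 1:
            for i in ready:                     # collision: double windows
                cw[i] = min(2 * cw[i], cw_max)
                backoff[i] = random.randrange(cw[i])
        backoff = [b - 1 if b > 0 else b for b in backoff]
    return successes / slots

print(csma_backoff_sim(stations=20))
```

Running the sketch with different `cw_min` values shows the trade-off the literature describes: a small initial window wastes slots on collisions under load, while an overly large one wastes slots on idle backoff.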
The authors of another study proposed a Rician fading channel model to highlight the fading effect in a Radio Frequency Identification (RFID) system, using the statistics of Bit Error Rate (BER) and Signal-to-Noise Ratio (SNR). This model was employed in addition to the existing RFID system and was used to calculate the identification time. The results showed that the fading channel effect increased the identification time as the BER varied. It was also observed that the wireless channel has a strong effect on the identification time.
Wireless data connections are a challenge for the users: they have high bit error rates, low bandwidth and long delays. The physical and MAC layers were fine-tuned to improve the performance of WLANs. Parameters like slot time were focused upon to reduce collisions and media access delay. Hence an increase in throughput and channel utilization occurs, which can improve the performance of wireless networks under heavy-load conditions (high BER values). The performance of IEEE 802.11 based wireless local area networks (WLANs) using OPNET was also evaluated. The impact of parameters like throughput, packet loss rate, round-trip time (RTT) for packets, retransmission rate and collision count on delay was presented (Balaji, 2009). It was cited that the handshake mechanism is useful where the hidden node problem exists, but the unnecessary use of RTS/CTS degrades the performance of a wireless LAN. The authors also proposed wireless network models, and the results indicated that fine-tuning these parameters can help to improve performance. The impact of load, number of nodes, RTS/CTS, FTS and data rate on performance metrics like end-to-end throughput and average delay was also studied. The authors simulated a wireless LAN using OPNET IT Guru Academic Edition 9.1 for improvement in the throughput by fine-tuning attributes like the fragmentation threshold and the RTS threshold.
A wired network, on the other hand, transmits signals via the use of cables (Bansal, 2010). The following analyses address issues with wired networks. In order to deal with burst data transmission, 100 Mbps Ethernet is preferable to ensure communication performance. The features of conventional protection systems, including current differential protection and distance protection, were analysed by the authors. The investigation of three Wide Area Protection System (WAPS) architectures, i.e. centralized, distributed and networked, using OPNET revealed that the networked structure is considered the best due to its fast response time in terms of lower delay or transfer time.

The load on the network server increases with an increase in user activity. An increased number of users increases the network load and degrades the performance (2009). The effect of variation in attributes like traffic load on the performance was studied.
The increase in traffic load affects network performance. A network model with switched Ethernet subnets and a Gigabit Ethernet backbone was modeled and simulated under typical load conditions and also for time-sensitive applications such as voice over IP. The simulations were carried out to study the impact of an increase in traffic load on performance metrics like delay. Utilization, FTP download response time and normalized delivered traffic were also analyzed using the OPNET simulator; the results compared ATM and MPLS backbones. A comparison of Gigabit Ethernet and ATM network technologies using modeling and simulation was also done, with real-time voice and video-conferencing traffic used to compare the two. The performance of ATM is very good, but it does not keep up with Gigabit Ethernet's small delay time.
Hence Gigabit Ethernet provides better performance than ATM as a backbone network. A study of TCP/IP over wireless networks was also presented, in which the use of link-sharing schedulers with just two queues (an ACK queue and a packet queue) was investigated to reduce packet loss.
Under various queuing disciplines, parameters like end-to-end delay and traffic received for live streaming video were presented. The choice of network connecting devices also plays an important role: network scenarios were designed by changing devices like the hub, switch and Ethernet router, and the network was analysed using various performance metrics like delay, collision count, traffic sink, traffic source and packet size. It was observed that throughput improved and collisions decreased when the packet size was increased.
A comparative study of two network simulators, OPNET Modeler and NS-2, for packet-level analysis was also presented, in which both discrete-event and analytical simulation methods were combined to check the performance of the simulators in terms of speed while maintaining accuracy. For performance testing of the network, different types of traffic like CBR (Constant Bit Rate) and FTP (File Transfer Protocol) were generated and simulated. Though both simulators provide similar results, the freeware version of NS-2 makes it more attractive to a researcher, but OPNET Modeler modules gain an edge by providing more features. So OPNET can be of use in academia, i.e. advanced networking education. Various scenarios like VoIP, WLAN or video streaming were designed, simulated and also analyzed analytically to check accuracy. This illustrated the broader insight the OPNET software can offer into networking technologies and simulation.
Simulation usually requires fewer simplifying assumptions, since almost every possible detail of a system can be represented. When the system is rather large and complex, a straightforward mathematical formulation may not be feasible; in this case, the simulation approach is usually preferred to the analytical approach. On the other hand, simulation results lack a convincing argument for generalization, and due to this they are usually considered not as strong as analytical results. The simulation tools include the following:
i. GloMoSim
GloMoSim is a scalable simulation environment for wireless network systems (Lee, 2000). Assigning a separate simulation entity to each network node inherently limits scalability, because the memory requirements increase dramatically for a model with a large number of nodes. With node aggregation, a single entity can simulate several network nodes in the system. The node aggregation technique implies that the number of nodes in the system can be increased while maintaining the same number of entities in the simulation; the network nodes which a particular entity represents are then handled within that single entity.
Nodes can move according to a model that is generally referred to as the "random waypoint" model. A node chooses a random destination within the simulated terrain and moves to that location at the speed specified in the configuration file. After reaching its destination, the node pauses for a duration that is also specified in the configuration file. The other mobility model in GloMoSim is referred to as the "random drunken" model, in which a node periodically moves to a position chosen randomly from its neighbouring positions.
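A minimal sketch of the random-waypoint behaviour described above can clarify the move/pause cycle. The terrain size, speed and pause values stand in for what GloMoSim would read from its configuration file; they are illustrative assumptions:

```python
import math
import random

def random_waypoint(steps, terrain=(1000.0, 1000.0), speed=5.0, pause=10):
    """Toy random-waypoint trace: pick a random destination in the terrain,
    move toward it at a fixed speed (one step = one time unit), pause on
    arrival, then pick a new destination. Returns the list of positions."""
    x, y = random.uniform(0, terrain[0]), random.uniform(0, terrain[1])
    dest = (random.uniform(0, terrain[0]), random.uniform(0, terrain[1]))
    wait = 0
    trace = []
    for _ in range(steps):
        if wait > 0:                          # pausing at a waypoint
            wait -= 1
        else:
            dx, dy = dest[0] - x, dest[1] - y
            dist = math.hypot(dx, dy)
            if dist <= speed:                 # arrived: pause, new waypoint
                x, y = dest
                wait = pause
                dest = (random.uniform(0, terrain[0]),
                        random.uniform(0, terrain[1]))
            else:                             # move one step toward dest
                x += speed * dx / dist
                y += speed * dy / dist
        trace.append((x, y))
    return trace
```

Because each leg moves along a straight segment between two points inside the terrain, the node never leaves the simulated area, which mirrors the bounded-terrain assumption in GloMoSim's model.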
Use of GloMoSim
The <input file> contains the configuration parameters for the simulation (an example of such a file is CONFIG.IN). A file called GLOMO.STAT is produced at the end of the simulation and contains the output statistics.

ii. NS2
NS2 is a discrete-event simulation tool that has proved useful in studying the dynamic nature of communication networks. Simulation of wired as well as wireless network functions and protocols (e.g., routing algorithms, TCP, UDP) can be done using NS2 (Lawrence, 1992). In general, NS2 provides users with a way of specifying such network protocols and simulating their corresponding behaviors.
Due to its flexibility and modular nature, NS2 has gained constant popularity in the networking research community since its birth in 1989. Ever since, several revolutions and revisions have marked the growing maturity of the tool, thanks to substantial contributions from players in the field. Among these are the University of California and Cornell University, who developed the REAL network simulator, the foundation on which NS is built. Since 1995 the Defense Advanced Research Projects Agency (DARPA) has supported the development of NS through the Virtual InterNetwork Testbed (VINT) project. Currently the National Science Foundation (NSF) has joined in its development. Last but not least, a wide community of researchers and developers continues to maintain and extend the tool.
iii. Netsim
The Netsim simulator is a single-process discrete-event simulator with which a variety of network configurations can be investigated (Thomas, 1992). This flexibility makes Netsim an ideal tool for open-ended investigations of a network. The program simulates a finite-population network, which gives a more accurate picture of network behavior than simulations based on assumptions similar to those used in infinite-population modeling studies. The program does not model packet buffering at individual stations; each station is assumed to generate a new arrival only after its previous packet has been successfully transmitted.
This chapter reviewed the related literature on the performance analysis of a local area network. Several authors were reviewed and their various observations discussed, and the issues associated with local area networks were itemized. The chapter also discussed the types of network simulation tools and how these tools can be effective in analyzing network performance.
CHAPTER THREE
Networks (wired and wireless) have grown rapidly over the past few decades, and there is a need for an accurate and reliable generic platform on which to analyze them. Wired networks provide a secure and faster means of connectivity. The performance of wired Ethernet is very sensitive to the number of users, the offered load and the transmission links, while wireless is also very sensitive to the number of users and the offered load, as well as to physical characteristics, data rate, packet size and so on. We can compare wired and wireless networks in these areas. As networks are being installed and upgraded from scratch all over the world, network planning is becoming most important. Evaluating the viability and performance of networks in the real world can be a very expensive and painstaking task. To ease the process of estimating and predicting a network's behaviour, modeling and simulation techniques are widely used and put into practice. The method currently used for the performance analysis of a local area network is not effective enough because of the following challenges:
During this project research work, data needed for the project was gathered from various sources. In gathering and collecting the necessary data and information needed for the system analysis, two major fact-finding techniques were used in this work; they are:

a. Primary Source:
This refers to the source of collecting original data, in which the researcher made use of direct observation and oral interviews.

b. Secondary Source:
The secondary data were obtained by the researcher from magazines, journals, newspapers, library sources and internet downloads. The data collected from these means had already been gathered by other researchers. Oral interviews were conducted between the researcher and the staff of Delta State University.
A variety of simulation tools like GloMoSim, NS2, Netsim and OPNET were studied for a better understanding of the purpose of modeling and simulation, but the choice of a simulator depends upon the features available and the requirements of the network application. OPNET is the registered commercial trademark and the name of the product presented by OPNET Technologies, and it was among the leading commercial network simulators by the end of 2008. Because it has been used for a long time in industry, it has become mature and has occupied a big market share. Among the various network simulators, OPNET provides the industry's leading environment for network modeling and simulation of networks, devices, protocols, and applications with flexibility and scalability. It provides an object-oriented modeling approach and graphical editors that mirror the structure of actual networks and network components. The analysis helped to estimate and optimize the performance of wired and wireless networks using the proposed optimization techniques.
OPNET supports the modeling of communication networks, devices, protocols, and applications. Being a commercial software provider, OPNET offers relatively powerful visual and graphical support for its users. The graphical editor interface can be used to build the network topology and entities, and to create the mapping from the graphical design to the implementation of the real systems. An example of the graphical GUI of OPNET can be seen in figure 3.1. The topology configuration and simulation results can be presented very intuitively and visually, and the parameters can be adjusted and the experiments repeated easily.
CHAPTER FOUR
The OPNET simulator is a tool to simulate the behavior and performance of any type of network. The main difference from other simulators lies in its power and versatility. This simulator makes it possible to work with the OSI model, from layer 7 down to the modification of the most essential physical parameters. This section describes the performance analysis of wireless and wired computer networks using simulation. The simulation was done using OPNET.
In Case 4.2, first of all, a comparison was done by varying the types of transmission links (Ethernet links) used in the wired networks for communication between the server and the clients. Secondly, a load-balancing mechanism was used to balance the traffic load in the wired network, with different load-balancing policies applied. Investigations were done to find the policy with which the traffic sent and received can be balanced to improve performance. The performance metrics evaluated are collision count, traffic received, delay and throughput.

In Case 4.3, the performance analysis of wireless computer networks is illustrated by tuning the wireless local area network parameters (such as physical characteristics and buffer size, among many other parameters). The performance metrics evaluated are data dropped, traffic received, media access delay and throughput.
Wired local area networks include several technologies like Ethernet, token ring, token bus, Fiber Distributed Data Interface and Asynchronous Transfer Mode local area networks. Ethernet has largely replaced competing wired LAN technologies. The Ethernet is a working example of the more general Carrier Sense Multiple Access with Collision Detect (CSMA/CD) network, meaning that a set of nodes sends and receives frames over a shared link. When two devices transmit at the same time, a collision can occur. This collision generates a jam signal that causes all nodes on the segment to stop sending data, which informs all the devices that a collision has occurred. Carrier sense means that all the nodes can distinguish between an idle and a busy link. Collision detect means that a node listens as it transmits and can therefore detect when a frame it is transmitting has interfered (collided) with a frame transmitted by another node. The Ethernet is said to be a 1-persistent protocol because an adaptor with a frame to send transmits with probability 1 as soon as it senses that the link is idle.

The performance analysis was done by varying the types of transmission links (Ethernet links) used in the networks for communication between the server and the clients in a wired local area network.
Figure 4.1 shows the wired network being modeled and simulated for performance analysis using OPNET. The comparison was made for the same number of users but different types of links: 10BaseT, 100BaseT and 1000BaseX. The analysis of performance metrics like Ethernet delay illustrates that the maximum delay occurs for 10BaseT. The performance analysis shows the impact of varying the types of links on the Ethernet traffic received. The traffic received using 100BaseT and 1000BaseX is maximum.
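Part of the gap between these link types is visible from serialization delay alone. The short sketch below assumes an illustrative 1500-byte frame and deliberately ignores propagation delay, queuing and protocol overhead:

```python
def tx_delay_us(frame_bytes, rate_mbps):
    """Serialization delay of one frame, in microseconds."""
    return frame_bytes * 8 / rate_mbps  # bits / (Mbit/s) -> microseconds

frame = 1500  # illustrative Ethernet payload size in bytes
for name, rate in [("10BaseT", 10), ("100BaseT", 100), ("1000BaseX", 1000)]:
    print(f"{name}: {tx_delay_us(frame, rate):.1f} us per frame")
# 10BaseT:   1200.0 us per frame
# 100BaseT:   120.0 us per frame
# 1000BaseX:   12.0 us per frame
```

Each tenfold increase in link rate cuts the per-frame serialization time tenfold, which is consistent with the simulated result that 10BaseT shows the largest Ethernet delay.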
When the load balancer receives a packet from a client machine, it must choose the appropriate server to handle the request. The load balancer will use the load-balancing policy to determine which server is most appropriate. The following load-balancing policies can be used:
Random: The load balancer chooses one of the candidate servers at random.
Round-Robin: The load balancer cycles through the list of candidate servers.
Server Load: The load balancer chooses the candidate server with the lowest CPU load.
Number of Connections: The load balancer keeps track of the number of connections it
has assigned to each server. When a new request is made, it chooses the server with the
fewest connections.
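Three of the four policies above can be sketched directly; the server-load policy is omitted because it would need a live CPU-utilization feed. The server names and the connection-count bookkeeping are illustrative assumptions, not an OPNET interface:

```python
import random
from itertools import cycle

class LoadBalancer:
    """Toy dispatcher implementing random, round-robin and
    fewest-connections selection over a fixed server pool."""
    def __init__(self, servers):
        self.servers = list(servers)
        self._rr = cycle(self.servers)
        self.connections = {s: 0 for s in self.servers}

    def pick_random(self):
        return random.choice(self.servers)

    def pick_round_robin(self):
        return next(self._rr)

    def pick_fewest_connections(self):
        # choose the server currently holding the fewest open connections
        server = min(self.servers, key=lambda s: self.connections[s])
        self.connections[server] += 1
        return server

lb = LoadBalancer(["web1", "web2", "web3"])
print([lb.pick_round_robin() for _ in range(4)])  # web1, web2, web3, web1
```

The fewest-connections policy is the only one of the three that needs per-server state, which is why real load balancers implementing it must also decrement the count when a connection closes (omitted here for brevity).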
The performance analysis has been done for networks with and without load
balancing policy. When no load balancing policy is used, the number of users is varied to
vary the network load. The performance analysis was then done by comparing two
networks: one with the maximum network load and no load balancing policy, and the
other with the same maximum network load and a load balancer implementing the random
load balancing policy. As the number of users increases, more traffic is generated.
IEEE 802.11 is a recent standard developed for wireless local area networks
(WLANs). IEEE 802.11 is a multiple access protocol in which stations in the network
must compete for access to the shared communications medium to transmit data. If two or
more stations in the network transmit at the same time (i.e., a collision occurs), the
stations retransmit their data after random periods of time, as in Ethernet. The IEEE 802.11
standard for WLAN is the popular wireless networking technology that uses radio waves
to provide wireless high-speed Internet and network connections. The 802.11 data link
layer is divided into two sublayers: Logical Link Control (LLC) and Media Access Control
(MAC). The LLC is the same as in other 802 LANs, allowing for very simple bridging from
wireless to wired networks. The MAC, however, is specific to WLANs. The first access
method in the MAC is the CSMA with collision avoidance (CSMA/CA) protocol, which asks
each station to listen before acting: if the channel is busy, the station has to wait
until the channel is free. Another method in the MAC is the optional RTS/CTS handshake,
which reserves the medium before a data frame is transmitted.
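The listen-before-talk behaviour of CSMA/CA can be sketched in a few lines. This is a simplified illustration; the contention-window constants and the backoff rule are assumptions in the spirit of 802.11, not the standard's exact procedure.

```python
import random

# Illustrative contention-window bounds in slots; the real 802.11 values
# depend on the PHY in use, so treat these as assumptions.
CW_MIN, CW_MAX = 15, 1023

def backoff_slots(retry, rng=random):
    """Binary exponential backoff: the window doubles per retry, capped."""
    cw = min(CW_MAX, (CW_MIN + 1) * (2 ** retry) - 1)
    return rng.randint(0, cw)

def transmit(frame, channel_idle, max_retries=7):
    """Listen before talk: defer while busy, back off, then send."""
    for retry in range(max_retries + 1):
        while not channel_idle():              # carrier sense: wait out a busy medium
            pass
        for _ in range(backoff_slots(retry)):  # count down a random backoff
            if not channel_idle():             # medium became busy: back off again
                break
        else:
            return f"sent:{frame}"             # backoff expired with an idle medium
    return "dropped"
```

A station that repeatedly finds the medium busy doubles its contention window, which spreads retransmissions out in time and reduces the chance of colliding again.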
The investigations show that the network attains the maximum throughput using the
infrared (IR) physical layer, and the worst results are achieved when the IEEE 802.11
protocol uses the FHSS physical layer. An important thing to note, however, is that the
throughput may vary according to the type of network modelled; the network objects may
vary in terms of the number of stations, the data rate, and the type of network load,
among other parameters.
Buffer size (bits) specifies the maximum size of the higher layer data buffer in
bits. Once the buffer limit is reached, data packets arriving from the higher layer will be
discarded until some packets are removed from the buffer, so that the buffer has some
free space to store these new packets. The optimum size of buffer can stabilize the queue
size, the packet drop probability, and hence the packet loss rate. The benefit of
stabilizing queues in a network is high resource utilization. When the queue buffer
appears to be congested, the packet discard probability increases; on the other hand,
buffer overflow can be used as a signal to manage congestion. The performance analysis
has been done for varying buffer sizes. The buffer configuration defines the buffer
size and the maximum space allocated per flow. If a single flow is
bursty, then it is possible for the entire buffer space to be filled by this single flow and
other flows will not be serviced until the buffer is emptied. If the buffer size is increased,
then the number of retransmission attempts would be reduced. Also, the queue size
decreases for a larger buffer, because fewer retransmissions mean the queue does not
build up continuously; this shows the reduction in delay. Packet loss, in turn, changes
the queue length.
When packet loss is relatively low, packets usually can be transmitted without
retransmission, so the queue length may be relatively small. But when the packet loss is
high, MAC packet retransmissions will prolong the packet delay. So, a small buffer
size can increase packet drop rate and hence change the queue length and thus impact the
throughput and delay. Hence, the performance has been improved by increasing the
buffer size. The increase of packet discard rate can lead to the decrease of throughput.
This happens due to frequent retransmissions of the MAC layer data packets when the
packet loss rate increases, and such packet loss may itself be caused by a small buffer
size. The analysis shows that if the buffer size is increased, the retransmission
attempts are reduced, since the queue builds up less for a large buffer and the time to
deliver the packets decreases. The throughput increases monotonically with the buffer
size until it saturates above a threshold buffer size.
Similarly, the performance analysis can be done for varying data rates, RTS threshold,
and fragmentation threshold. Thus the throughput can be increased by increasing the
buffer size, because a larger buffer reduces packet drops.
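The effect of buffer size on packet drops can be illustrated with a toy single-queue model. The arrival and service probabilities below are invented parameters for the sketch, not values taken from the OPNET scenarios.

```python
from collections import deque
import random

def simulate(buffer_size, arrival_p=0.5, service_p=0.6, slots=10000, seed=1):
    """Toy slotted queue: each slot a packet arrives with probability
    arrival_p and one departs with probability service_p; an arrival that
    finds the buffer full is dropped.  Returns the drop rate."""
    rng = random.Random(seed)
    queue = deque()
    arrivals = drops = 0
    for _ in range(slots):
        if rng.random() < arrival_p:          # a packet arrives this slot
            arrivals += 1
            if len(queue) < buffer_size:
                queue.append(1)               # buffered for transmission
            else:
                drops += 1                    # buffer full: packet discarded
        if queue and rng.random() < service_p:
            queue.popleft()                   # one packet transmitted
    return drops / arrivals if arrivals else 0.0
```

Running the model for increasing buffer sizes shows the drop rate falling toward zero once the buffer can absorb the arrival bursts, matching the trend described above.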
Collision count: Total number of collisions encountered by this station during packet
transmissions.
Data Dropped: Total number of bits that are sent by wireless node but never received by
another node.
Delay: This statistic represents the end-to-end delay of all packets received by all the
stations in the network.
Load: Total number of bits received from the higher layer. Packets arriving from the
higher layer are stored in the higher layer queue. It may be measured in bits/sec or
packets/sec.
Media access delay: Total time (in seconds) that a packet spends in the higher layer
queue, from its arrival to the point when it is removed from the queue for transmission.
Queue Size: Represents the total number of packets in the MAC's transmission queue(s)
(in 802.11e capable MACs, there is a separate transmission queue for each access
category).
Throughput: Total number of bits sent to the higher layer from the MAC layer. The data
packets received at the physical layer are sent to the higher layer if they are destined for
this destination.
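As a sketch, the statistics defined above could be computed from a per-packet trace. The trace format here is a hypothetical illustration with made-up numbers, not OPNET's actual output format.

```python
# Hypothetical per-packet trace: (size_bits, sent_at_s, received_at_s),
# with received_at_s = None when the packet was dropped.
packets = [
    (12000, 0.00, 0.02),
    (12000, 0.01, 0.05),
    (12000, 0.02, None),   # dropped packet
    (12000, 0.03, 0.08),
]

received = [p for p in packets if p[2] is not None]
duration = max(r for _, _, r in received) - min(s for _, s, _ in packets)

load_bits = sum(b for b, _, _ in packets)                  # Load: all bits offered
data_dropped = sum(b for b, _, r in packets if r is None)  # Data Dropped
throughput_bps = sum(b for b, _, _ in received) / duration # Throughput
avg_delay = sum(r - s for _, s, r in received) / len(received)  # Delay
```

Load counts every bit offered to the MAC, while throughput counts only the bits actually delivered, so the gap between the two reflects the dropped data.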
In this field of research, there exist some simulators to develop and test the effect of
changes in network configuration. The modelled networks have been investigated for their
performance comparison by varying the attributes of network objects, such as traffic
load, file size, and RTS/CTS, and by customizing the physical characteristics to vary
the BER, slot time, SIFS time, or the contention window, in order to determine their
impact on throughput and delay. It is observed that as the number of users increases,
more traffic is generated. Also, the throughput can be increased by increasing the
buffer size, because a larger buffer reduces packet drops.
CHAPTER FIVE
5.1 SUMMARY
This research work compares wired and wireless network performance. For a small number
of users, the wireless network performs better than the wired network for the same
packet data rate, while as the number of users increases, performance and throughput
degradation occurs for the wireless network relative to the wired network at the same
speed, due to the transmission limit, the signal-to-noise ratio (SNR), and the bandwidth
of the received signal. As the number of users, and hence the load, increases beyond
some limit on the wireless network, collisions occur among the packets sent by the
users, and retransmissions result.
5.2 CONCLUSION
The impact of various network configurations on the network performance was
analyzed using the network simulator OPNET. It has been shown that the performance
of wired networks is good if high-speed Ethernet links are used under heavy network
loads. The mechanism of load balancing also improves performance by distributing the
load equally among multiple servers, which lowers the response time to access a server.
In addition, performance analysis of wireless computer networks has been
done for improving the performance of wireless LAN. The investigations of physical
characteristics reveal that the infrared type is best in terms of throughput. The variation in
buffer size varies the queue size and hence optimizes the throughput.
5.3 RECOMMENDATION
This research work was carried out under thorough supervision. To improve the overall
performance of the system, the researcher recommends using a hybrid network, which is a
combination of both wired and wireless networks.
REFERENCES
Hesham M. El et al. (2008), Performance Evaluation of the IEEE 802.11 Wireless LAN
Standards, Proceedings of the World Congress on Engineering 2008, vol. I, July 2-4.
Sameh H. Ghwanmeh (2000), Wireless network performance optimisation using Opnet
Modeler, Information Technology Journal, vol. 5, no. 1, pp. 18-24.
Ranjan Kaparti, OPNET IT Guru: A tool for networking education, Regis University.
OPNET Modeler Manual, available at http://www.opnet.com
Velmurugan et al. (2009), Comparison of Queuing Disciplines for Differentiated Services
using OPNET, IEEE ARTCom, pp. 744-746.
Yang Dondkai and Liu Wenli (2009), The Wireless Channel Modeling for RFID System
with OPNET, in the Proceedings of the IEEE communications society sponsored
5th International Conference on Wireless communications, networking and
mobile computing, Beijing, China.
Schreiber M. K. et al. (2005), Performance of video and video conferencing over ATM
and Gigabit Ethernet backbone networks, Res. Lett. Inf. Math. Sci., vol. 7, pp. 19-27.
Chang X. (1999), Network simulations with OPNET, Proceedings of the 1999 Winter
Simulation Conference, pp. 307-314.
Shufang Wu et al. (2004), OPNET Implementation of the Megaco/H.248 protocol: multi-
call and multi-connection scenarios, OPNETWORK 2004, Washington, DC.
Yan Huang et al. (2002), OPNET Simulation of a multi-hop self-organizing Wireless
Sensor Network, Proceedings of the OPNETWORK 2002 conference, Washington, D.C.
Gilberto Flores Lucio (2003), OPNET Modeler and NS-2: Comparing the accuracy of
network simulators for packet-level analysis using a network test bed, WSEAS
Transactions on Computers, vol. 2, no. 3, pp. 700-707.
Bansal R. K. (2010), Performance analysis of wired and wireless LAN using soft
computing techniques: a review, Global Journal of Computer Science and Technology,
vol. 10, issue 8, ver. 1.0.
Ikram Ud Din et al. (2009), Performance Evaluation of Different Ethernet LANs
Connected by Switches and Hubs, European Journal of Scientific Research, vol. 37,
no. 3, pp. 461-470.
Jia Wang et al. (1999), Efficient and Accurate Ethernet Simulation, Proc. of the 24th
Conference on Local Computer Networks (LCN'99), pp. 182-19.
Mohammad Hussain et al. (2009), Simulation Study of 802.11b DCF using OPNET
Simulator, Eng. & Tech. Journal, vol. 27, no. 6, pp. 1108-1117.