
NETWORK PERFORMANCE ANALYSIS WITHIN A LOCAL AREA

NETWORK

BY

KUTEYI AYODELE JACOB

DEPARTMENT OF MATHEMATICS AND COMPUTER SCIENCE,

(COMPUTER SCIENCE)

DELTA STATE UNIVERSITY ABRAKA

OCTOBER, 2015

ABSTRACT

In this research work, we analyze the performance of a local area network in both its wired and wireless forms. The traditional wired network suffers from constraints such as limited mobility and expensive cabling, whereas wireless communication is a flexible data communication system implemented as an extension to, or as an alternative for, wired communication. The bandwidth and services provided by wireless communication networks are similar to those provided by wired networks. Evaluating the viability and performance of computer networks on real hardware can be a very expensive task. In this research work, the performance of wireless and wired networks, as well as their comparison, is evaluated using the OPNET (Optimized Network Engineering Tools) simulation tool. For the wired network, collision count, traffic received, delay and throughput are studied, while for the wireless network, data dropped, traffic received, media access delay and throughput are studied. For the comparison of the wired and wireless networks, the performance parameter throughput is investigated.

CHAPTER ONE

INTRODUCTION

1.1 Background of the Study

A local area network is a data communication network, typically a packet communication network, limited in geographic scope (Bhupendra, 2002). A local area network generally provides high-bandwidth communication over inexpensive transmission media.

Over the last two decades, computing technology and the computing market have moved from large, centralized resources (mainframe computers with terminals) to smaller, more distributed resources (personal computers). As a result of this movement, resources are becoming isolated and not efficiently utilized. At the same time, communication is becoming more and more essential for business, scientific, and other tasks. The need for resource sharing and communication capabilities was behind the evolution of computer networks. One important and commonly used type of computer network is the Local Area Network (LAN) (Freaser, 1999). LANs are characterized by limited range, private usage, and high speeds. A local area network is categorized as either a wired or a wireless network, depending on the interconnection (Hussani, 2001).

Such an interconnected set of computer systems permits interactive resource sharing between connected pairs of systems. Rapid advances have taken place in the field of wired and wireless networks. Several network models have been built by various researchers, using network simulators, to find out the most feasible ones. Investigations of these network models have been performed using simulation techniques, which reduce the cost of prediction, estimation and implementation of the network models. Among the various network simulators available, such as NetSim, NS-2 and GloMoSim, OPNET provides the industry's leading environment for network modelling and simulation. It allows users to design and study communication networks, devices, protocols, and applications with flexibility and scalability. It provides an object-oriented modelling approach and graphical editors that mirror the structure of actual networks and network components. It supports modelling of both wired and wireless LANs.

Though wired networks have provided high-speed connectivity, drawbacks such as extensive cabling and lack of mobility have allowed WLANs to gain momentum (Vikas, 2010). Computer networks today are not only wired but wireless too, depending on circumstances such as the need for mobility, rough terrain, or secure networks.

The Open Systems Interconnection (OSI) reference model's data link layer is divided into the Logical Link Control (LLC) and Medium Access Control (MAC) sublayers. The LLC layer is independently specified for all 802 LANs, wireless or wired. Like IEEE 802.3 (Ethernet) and IEEE 802.5 (Token Ring), the IEEE 802.11 (WLAN) standard also focuses on these two layers. Our study focuses on the performance analysis of IEEE 802.3 (Ethernet) based wired LANs and IEEE 802.11b based wireless LANs using soft computing techniques such as network simulators.

1.2 Statement of the Problem

Technological advancement affects the world in general and network settings in particular. Constant and revolutionary changes associated with technological and scientific innovations have resulted in an increasingly complex global order. Society is driven by knowledge, and access to information is becoming a major competitive weapon. The choice of network is a major challenge when there is a need to set up a local area network. People have ready access to more information than at any time in the past. Network users are faced with the challenge of acquiring the abilities required to participate in a technology- and knowledge-driven global order. Because the performance of wired and wireless local area networks differs, network analysis is needed so that users can make the right choice when setting up a local area network. The major challenges include:

i. The choice of number of users

ii. Retransmission attempt

iii. Collision attempt

iv. Throughput, etc.

To overcome these challenges, there is a need for the analysis of a local area network in order to make

the right choice.

1.3 Aim and Objectives of the Study

This research work is aimed at analyzing network performance within a local area

network. The objectives of the study are stated below:

a. To review related literature on local area networks;

b. To examine the issues associated with wired and wireless networks;

c. To determine the performance of a local area network in terms of speed, throughput, retransmission attempts, etc.

1.4 Scope of the Study

This project work analyzes the performance of a local area network using OPNET (Optimized Network Engineering Tools), which provides the industry's leading environment for network modeling and simulation. It allows users to design and study communication networks, devices, protocols, and applications with flexibility and scalability. It provides an object-oriented modeling approach and graphical editors that mirror the structure of actual networks and network components.

1.5 Significance of the Study

Networks have grown significantly over the past few decades, increasing the pace at which network resources can be accessed. For example, the use of the Internet is gaining importance with the adoption of network technologies for purposes like education, business, banking and defense. Hence this project work will be relevant in several ways, as it will guide the choice of network and show how to analyze network performance.

1.6 Research Methodology

This research will be done using the following methodology:

i. Study of related literature on performance analysis within a local area network.

ii. The OPNET simulation technique will be employed to analyze the performance of both wired and wireless local area networks.

iii. The simulation, detailed results and discussion are presented in the subsequent chapters.

1.7 Limitations of the Study

This project covers the performance analysis of both wired and wireless networks. However, the following are the limitations:

i. Time constraints: Due to time constraints, this work contains only a description of the network analysis.

ii. Availability of internet facilities and computer literacy: This research work cannot be carried out where there are no network facilities and no computer literacy.

CHAPTER TWO

LITERATURE REVIEW

2.1 History of Computers

A computer is referred to as an electronic device for storing and processing data

according to instructions given to it in a variable program (Reader’s Digest Oxford

Complete Wordfinder, 1996). The contribution of computers is to process and

communicate information much more quickly and accurately than a person could

(Bennington, 2002). The first ancient version of a computer, the abacus, was a device

used to represent numbers. It consisted of stones that were strung on threads in a wooden

frame. More recently a model for a mechanical machine, that was utilized to do

computations and which bore some similarity to modern day computers, was developed.

The first general-purpose electronic computer, the Electronic Numerical Integrator and Computer (ENIAC), became operational on 14 February 1946. Since the development of the first

modern computer there have been many significant advancements in the development of

computer technology (Strydom, 2000).

The first modern day computers were constructed using electronic tubes. By the

late 1950’s this technology had been displaced by discrete transistors, which were

smaller, faster and cheaper, and produced far less heat than previous technologies. In the

mid 1960’s discrete transistors gave way to integrated circuits and other components on a

silicon “chip”. During the 1970’s, the mainstream electronic industry began to

appropriate new digital electronics and integrated circuits, producing a stream of

innovative products such as video games, calculators, and digital watches (Campbell-

Kelly & Aspray, 1996).

Desktop computers

The Apple II, launched in 1977, established the paradigm of the personal

computer, namely a central processing unit equipped with a keyboard and screen, and a

floppy disk drive for program and data storage (Campbell-Kelly & Aspray, 1996).

Historically, desktop standalone computers used the Microsoft Disk Operating System

(MS DOS) (Freese, 1992). During the 1980's, MS DOS became

the standard operating system for personal computers as many computer manufacturers

adopted this system. Thousands of programs were based upon this system (Campbell-

Kelly & Aspray, 1996: 263; Freese, 1992).

The next step in the development of personal computers was the adoption of the

Microsoft Windows (MS Windows) as the operating system for personal computers. This

operating system was characterized by a graphical user interface. Various versions of this

operating system have since appeared on the market (Campbell-Kelly & Aspray, 1996).

The multitasking and large memory features of MS Windows paved the way for

concurrent communications and networking operations, using the personal computer

(Jordan & Churchill, 1992).

The next step in the development of computer technology was the establishment of

the multimedia computer. Multimedia integrates audiovisual technology with computing.

Multimedia is referred to as various combinations of text, graphics, animation, sound and

video that are integrated, controlled and delivered by the computer. Multimedia

computers have a high degree of interactivity (Collin 1996; Dodd 1995; Joos, Whitman,

Smith & Nelson 1996). Multimedia computer programmes can be stored on a CD-ROM.

The acronym ROM stands for “read only memory” which means that one can read or

copy the information on the disc, but one cannot change it (Wright, 1996). Multimedia

technology supports games, simulations and other interactive applications (Dodd, 1995).

Multimedia applications are made up of pages, each containing a screen full of

information. Hypertext links contain embedded references to other pages of information

(Comer, 1994). A hypertext link is a special word, button or picture that provides a link

to another page, a piece of text, a sound file, an animation or a video clip. It’s used to

show more detail about a particular topic, provide interactive experiences with the

information on a topic, or to enable users to navigate between electronic pages or files.

The user activates a hypertext link by clicking on it with a mouse (Collin, 1996).

2.2 Networks

Developments towards the establishment of computer networks complemented the

stand-alone computer (Chellis, Perkins & Strebe, 2000). Traditional stand-alone

computers formed the basis for the establishment of computer networks. A computer

network comprises any number of computers that are linked together. A network can be

confined to a single building, utilising data cables as linking devices. Where greater

distances are involved, the computers that constitute a network are linked by means of

satellite links, telephone lines or fibre optic cables (Meyer & Cilliers, 2002). When

computers are linked together, information can be moved between them swiftly and

efficiently. The information moves directly between computers rather than through a

human intermediary. A network also allows for information to be backed up at a central

electronic location. It is difficult to maintain regular back-ups on a number of stand-alone

computers and important information can be lost by mistake (Chellis et al. 2000).

Local area networks

A local area network (LAN) is a number of computers connected to each other by

a cable in a single location such as a single healthcare organisation or group of

organisations forming one institution (Chellis et al. 2000). This allows for data transfer

and communication within an organisation or institution.

The Internet and the world wide web

In the late 1960’s the United States Defence Department created a network that

linked military computers together. This eventually gave rise to the establishment of the

Internet (Maran, Maran, Maran, Maran, Maran, Maran & Maran, 1997). The Internet

consists of a series of relationships forming a system of communications that can rest on

a number of underlying technologies (Libicki, 1995). It is a super network that joins

together millions of computers, which are scattered around the world. The backbone of

the Internet is a set of high-speed data lines that connect major networks all over the

world. This enables many millions of computer users to globally share and exchange

information, as a computer user is linked with computer users on the other side of the

world.

2.3 Network Topology

Network topology is the pattern of interconnection used among the various nodes

of the network. The most general topology is an unconstrained graph structure, with

nodes connected together in an arbitrary pattern; this general structure is the one normally

associated with a packet-switched network; its advantage is that the arrangement of the

communication links can be based on the network traffic, (Mohammed, 2009). This

generality is a tool for optimizing the use of costly transmission media, an idea which is

not germane to local area networks. Further, this generality introduces the unavoidable

cost of making a routing decision at each node a message traverses. A message arriving at

a node cannot be blindly transmitted out of all the other links connected to that node, for

that would result in a message that multiplied at every node and propagated forever in the

network, (Sameh, 2006). Thus each node must decide, as it receives a message, on which

link it is to be forwarded, which implies a substantial computation at every node. Since

this general topology is of no significant advantage in a local area network, and does

imply a degree of complexity at every node, local area network designers have identified

a variety of constrained topologies with attributes particularly suited to local area

networks. We shall consider three such topologies: the star, the ring, and the bus.

The Star Network: A star network eliminates the need for each network node to make routing decisions by localizing all message routing in one central node (Alborz, 2007).

This leads to a particularly simple structure for each of the other network nodes. This

topology is an obvious choice if the normal pattern of communication in the network

conforms to its physical topology, with a number of secondary nodes communicating

with one primary node. For example, the star is an obvious topology to support a number

of terminals communicating with a time-sharing system, in which case the central node

might be the time-sharing machine itself.

If, however, the normal pattern of communication is not between one primary

node and several secondary nodes, but is instead more general communication among all

of the nodes, then reliability appears as a possible disadvantage of the star net. Clearly,

the operation of the network depends on the correct operation of the central node, which

performs all of the routing functions, and must have capacity sufficient to cope with all

simultaneous conversations, (Mike, 2000). For these reasons, the central node may be a

fairly large computer. The cost and difficulty of making the central node sufficiently

reliable may more than offset any benefit derived from the simplicity of the other nodes.

Ring and Bus Networks: The ring and bus topologies attempt to eliminate the central

node on the network, without sacrificing the simplicity of the other nodes, (Song, 2008).

While the elimination of the central node does imply a certain complexity at the other

nodes of the net, a decentralized network can be constructed with a surprisingly simple

structure of the nodes. In the ring topology, a message is passed from node to node along

unidirectional links. There are no routing decisions to be made in this topology; the

sending node simply transmits its message to the next node in the ring, and the message

passes around the ring, one node at a time, until it reaches the node for which it is

intended. The only routing requirement placed on each node is that it be able to

recognize, from the address in the message, those messages intended for it. Similarly, in

the bus structure, there are no routing decisions required by any of the nodes, (Sarah,

2008). A message flows away from the originating node in both directions to the ends of

the bus. The destination node reads the message as it passes by. Again, a node must be

able to recognize messages intended for it.

2.4 Wireless Network

A wireless communication system is a flexible data communication system implemented as

an extension to or as an alternative for a wired communication. It has overshadowed the

wired technology over a span of time and is a rapidly growing segment of the

communications industry, with a potential to provide high-speed, high quality

information exchange between the portable devices located anywhere in the world,

(Hasham, 2008). Wireless Local Area Networks (WLANs) have been developed to

provide users in a limited geographical area with high bandwidth and similar services

supported by the wired Local Area Network (LAN). Unlike wired networks, WLANs use the IEEE 802.11 standards to transmit and receive radio waves through the air medium

between a wireless client and an Access Point (AP), as well as among two or more

wireless clients within a certain range of each other, (Song, 2008). A WLAN basically

consists of one or more wireless devices connected to each other in a peer-to-peer manner

or through APs, which in turn are connected to the backbone network providing wireless

connectivity to the covered area. Mohammed (2006) worked on improving the performance of WLANs using access points. The authors investigated and estimated the traffic load on an access point, which can help determine the number of access points to be

employed in a network. The effect of enabling Point Coordination Function (PCF) on

network stations and also the number of PCF stations that can be deployed per access

point was also investigated. Correctly setting the number of PCF stations will help tune

the performance of these nodes as well as the overall network performance. Also the

author introduced a wireless LAN design framework for optimal placements of access

points at suitable locations to satisfy the coverage and capacity requirements of the users.

Optimal planning of WLANs can result in improved Quality of Services, efficient use of

resources, minimizing interference and reduced deployment cost. The performance of

WLANs depends on the RF conditions in which they operate. Randomized optimization

algorithms were used, to solve the AP placement and channel allocation problems like

coverage, traffic, Redundancy, channel interference and wiring cost. Then the output of

this algorithm was validated using OPNET.

Another important issue is the Bandwidth of wireless networks. The

bandwidth of wireless local area networks is limited as compared to that of

wired local area networks which provide a large bandwidth. This limitation is

due to the error prone physical medium (air), (Hua, 2008). The methods like

tuning the physical layer related parameters, tuning the IEEE 802.11

parameters and using enhanced link layer (media access control) protocol were

used to improve the performance of WLANs.

The IEEE 802.11 standard operates far from its theoretical throughput limit

depending on the network configuration. An analytical model was proposed to

achieve maximum protocol capacity (theoretical throughput limit), by tuning

the window size of the IEEE 802.11 back-off algorithms. The main reason why

the capacity of the standard protocol is often far from theoretical limit is that

during the overload conditions, a station experiences a large number of

collisions before its window has a size which gives a low collision probability,

(Cali, 2000). It was cited that proper tuning of the back-off algorithm can drive the IEEE 802.11 protocol close to the theoretical throughput limit.
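
As a rough illustration of this effect (this is not the analytical model cited above, only a simplified slotted-contention estimate written for this report), the short Python sketch below assumes that each of n saturated stations transmits in a randomly chosen slot of a backoff window of size W and computes the probability that a given transmission collides. The station count and window sizes are arbitrary example values.

# Illustrative only: a simplified slotted-contention estimate, not the cited model.
# Assumption: each of n saturated stations picks one of W backoff slots uniformly at random.

def collision_probability(n_stations: int, window: int) -> float:
    """Probability that a tagged station's transmission overlaps with at least
    one of the other (n_stations - 1) stations choosing the same slot."""
    p_same_slot = 1.0 / window
    return 1.0 - (1.0 - p_same_slot) ** (n_stations - 1)

if __name__ == "__main__":
    for w in (16, 32, 64, 128, 256, 1024):
        p = collision_probability(n_stations=20, window=w)
        print(f"window={w:5d}  collision probability ~ {p:.3f}")

With 20 contending stations, a small window such as 16 gives a per-attempt collision probability of about 0.7, while a window of 1024 brings it below 0.02, which is consistent with the observation that tuning the back-off window toward the offered load moves the protocol closer to its capacity.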

The identification time is another critical indicator for the performance enhancement of RFID in wireless systems (Hafiz, 2007). The authors proposed a Rician fading channel model to highlight the fading effect in a Radio Frequency Identification (RFID) system, using the statistics of Bit Error Rate (BER) and Signal-to-Noise Ratio (SNR). This model was employed in addition to the existing RFID system and was used to calculate the identification time, to reflect the influence of channel conditions on tag identification. The simulation showed that the fading channel effect increased the identification time as the BER varied. It was also shown that the wireless channel has a strong effect on the identification time.

The throughput performance of WLANs is affected by the mobility of

the users. The wireless data connections have high bit error rates, low

bandwidth and long delays. The physical and MAC layer were fine tuned to

improve the performance of WLAN. The performance metrics like slot time,

short Inter-frame spacing (SIFS), minimum contention window (CWmin),

Fragmentation Threshold (FTS) and Request to send (RTS) thresholds were

focused upon to reduce collisions and media access delay. Hence an increase in

throughput and channel utilization occurs, which can improve the performance

of wireless networks under heavy load conditions (high BER values). The effectiveness of the optional RTS/CTS handshake mechanism on the performance of IEEE 802.11 based wireless local area networks (WLANs) was also evaluated using OPNET. The impact of parameters like throughput, packet loss

rate, round trip time (RTT) for packets, retransmission rate and collision count

on the performance metrics like retransmissions, throughput, media access

delay was presented, (Balaji, 2009). It was cited that handshake mechanism is

useful where hidden node problem exists, but the unnecessary use of RTS/CTS

mechanism increases the overhead of RTS/CTS packets. The parameters like

RTS/CTS threshold, fragmentation threshold and data rate impact the

performance of wireless LAN. Also the authors proposed the wireless network

performance optimization using OPNET Modeler. The model was simulated

and the results indicated that fine tuning of these parameters can help to

improve the performance of WLANs.

The impact of load, number of nodes, RTS/CTS, FTS and data rate on

performance metrics like end to end throughput and average delay was

analyzed by means of simulation. A simulation study of an IEEE 802.11b wireless LAN was carried out using OPNET IT Guru Academic Edition 9.1 to improve the throughput by fine-tuning attributes like the fragmentation threshold and the RTS threshold.

2.5 Wired Network

A wired network, on the other hand, transmits signals via cables (Bansal, 2010). The following analyses the issues with wired networks. In order to deal with bursty data transmission, 100 Mbps Ethernet is preferable to ensure communication performance. The features of conventional protection systems, including

current differential protection and distance protection were analysed by the author. The

disadvantages of complex power systems were pointed out. The comparative

investigation of three wide area protection System (WAPS) architectures, i.e. centralized,

distributed and networked using OPNET, revealed that networked structure is considered

to be best due to its fast response time in terms of lesser delay or transfer time. The

architecture and communication network of WAPS was investigated to utilize global

information instead of local information to achieve better performance, (Ikram, 2009).

The load on the network server increases with increase in the user

activity. An increased number of users increases the network load and degrades

the performance. An effort was made to improve the performance by load

balancing. Various probabilistic methods to study network performance had

been proposed during the research. The significance of using discrete-event

simulation, as a methodology to confront network design and fine-tuning its

parameters was also highlighted, (Jia, 1999).

Another major problem exists in the form of network congestion. To

overcome the problem of congestion, Fiber Distributed Data Interface and

Asynchronous Transfer Mode type high-performance networks along with the

bucket congestion control mechanism were modeled and simulated, (Ali,

2009). The effect of variation in attributes like traffic load on the performance

metrics like end-to-end delay and throughput was analyzed.

An increase in traffic load affects network performance. A network model with switched Ethernet subnets and a Gigabit Ethernet backbone, under typical load conditions and also for time-sensitive applications such as voice over IP, was modeled and simulated. The simulations were carried out to study the impact of an increase in traffic load on performance metrics like delay.

The type of routing technique used in the network is an important

consideration to study the network performance. Three technologies Internet

protocol (IP), Asynchronous Transfer Mode (ATM) and Multiprotocol Label

Switching (MPLS) were compared in terms of their routing capability.

Different performance metrics like end-to-end Delay, throughput, Channel

Utilization, FTP download response time and normalized delivered traffic were

analyzed using OPNET simulator. The results indicated that ATM and MPLS

outperform IP (without modification) in terms of delay and response time to

the exposed data. Another comparison of the performance of Gigabit Ethernet

and ATM network technologies using modeling and simulation was done.

Real-time voice and video conferencing type traffic were used to compare the

network technologies in terms of response times and packet end-to-end delays.

While ATM is a 53-byte frame connection-oriented technology, Gigabit

Ethernet is a 512-byte frame (minimum) connectionless technology. The

performance analysis indicated that the performance of the ATM network is still very good, but it does not keep up with Gigabit Ethernet's small delay time.

Hence Gigabit Ethernet provides better performance than ATM as a backbone

network, even in networks that require the transmission of delay sensitive

traffic such as video and voice.

A new operational model called the AMP model and an improved ack-regulation scheme called SAD were presented to explain and improve the performance of TCP/IP over wireless networks. The use of link-sharing schedulers with just two queues (ack and packet queues, with SAD implemented on the ack queue) to support bidirectional traffic was also proposed (Yang, 2009). The authors analyzed TCP performance in asymmetric networks, where throughput significantly depends on the reverse direction and packet loss.

The queuing disciplines are implemented for resource allocation

mechanisms. The queuing disciplines used are First-In-First-Out (FIFO) queuing, Priority Queuing (PQ) and Weighted Fair Queuing (WFQ). A

comparison of different queuing disciplines for different scenarios using

simulation was presented for performance evaluation. By varying the queuing

disciplines the parameters like End-to-End Delay and Traffic received for live

streaming video were presented. The use of network connecting devices plays

an important role in the network design. Various network scenarios were

designed by changing the network devices like Hub, Switch and Ethernet

cables using the network simulation software – OPNET. The performance of

the network was analysed using various performance metrics like Delay and

collision count, Traffic sink, Traffic source and packet size. It was observed

that the throughput improved and collisions decreased when the packet size is

reduced, (Chandra, 2009).

The choice of network simulator is very important for accurate simulation

analysis. A comparative study of two network simulators, OPNET Modeler and NS-2, for packet-level analysis was presented, in which both discrete-event and analytical simulation methods were combined to check the performance of the simulators in terms of speed while maintaining accuracy. For performance testing of the network, different types of traffic like CBR (Constant Bit Rate) and FTP (File Transfer Protocol) were generated and simulated. Though both simulators provide similar results, the freeware version of NS-2 makes it more attractive to a researcher, but OPNET Modeler modules gain an edge by providing more features. So, OPNET can be of use in academia, i.e. advanced networking education. Various scenarios like VoIP, WLAN or video streaming were designed, simulated and also analyzed analytically to check accuracy. This illustrated the broader insight that the OPNET software can offer into networking technologies, simulation techniques and the impact of applications on network performance.

2.6 Types of Computer Network Performance Analysis Tools

A computer network is usually defined as a collection of computers interconnected

for gathering, processing, and distributing information. Simulation recreates real-world

scenarios using computer programs, (Gerla, 1999). It is used in various applications

ranging from operations research, business analysis, manufacturing planning, and

biological experimentation, just to name a few. Compared to analytical modeling,

simulation usually requires fewer simplification assumptions, since almost every possible

detail of system specifications can be incorporated in a simulation model. When the

system is rather large and complex, a straightforward mathematical formulation may not

be feasible. In this case, the simulation approach is usually preferred to the analytical

approach. The essence of simulation is to perform extensive experiments and make a convincing argument for generalization. Due to this generalization, simulation results are usually considered not as strong as analytical results. The simulation tools include the following:

i. GloMoSim

Global Mobile Information System Simulator (GloMoSim) is a scalable

simulation environment for large wireless and wireline communication networks.

GloMoSim uses a parallel discrete-event simulation capability provided by Parsec (Sung-Lee, 2000).

The node aggregation technique is introduced into GloMoSim to give significant

benefits to the simulation performance. Initializing each node as a separate entity

inherently limits the scalability because the memory requirements increase dramatically

for a model with a large number of nodes. With node aggregation, a single entity can

simulate several network nodes in the system. Node aggregation technique implies that

the number of nodes in the system can be increased while maintaining the same number

of entities in the simulation. In GloMoSim, each entity represents a geographical area of

the simulation. Hence the network nodes which a particular entity represents are

determined by the physical position of the nodes.

Nodes can move according to a model that is generally referred to as the “random

waypoint” model. A node chooses a random destination within the simulated terrain and

moves to that location based on the speed specified in the configuration file. After

reaching its destination, the node pauses for a duration that is also specified in the

configuration file. The other mobility model in GloMoSim is referred to as the “random

drunken” model. A node periodically moves to a position chosen randomly from its

immediate neighboring positions.
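
The two mobility models just described are easy to sketch in code. The snippet below is an illustration written for this report rather than GloMoSim source code; the terrain size, speed range, pause time and time step are arbitrary example values standing in for the entries that would normally come from the configuration file.

import math
import random

def random_waypoint(steps, terrain=(1000.0, 1000.0),
                    speed_range=(1.0, 10.0), pause=5.0, dt=1.0):
    """Yield (time, x, y) samples for one node under the random waypoint model:
    pick a random destination, move toward it at a random speed, pause, repeat."""
    t = 0.0
    x = random.uniform(0.0, terrain[0])
    y = random.uniform(0.0, terrain[1])
    emitted = 0
    while emitted < steps:
        dest_x = random.uniform(0.0, terrain[0])
        dest_y = random.uniform(0.0, terrain[1])
        speed = random.uniform(*speed_range)
        dist = math.hypot(dest_x - x, dest_y - y)
        while dist > speed * dt and emitted < steps:
            # advance one time step along the straight line to the destination
            x += (dest_x - x) / dist * speed * dt
            y += (dest_y - y) / dist * speed * dt
            dist = math.hypot(dest_x - x, dest_y - y)
            t += dt
            yield (t, x, y)
            emitted += 1
        x, y = dest_x, dest_y
        t += pause  # pause at the destination before choosing the next one

if __name__ == "__main__":
    for sample in random_waypoint(steps=5):
        print("t=%.1f s  position=(%.1f, %.1f)" % sample)

The "random drunken" model would replace the destination choice with a move to a randomly chosen neighbouring position at each step.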

Use of the GloMoSim Simulator

After successfully installing GloMoSim, a simulation can be started by executing the following command in the BIN subdirectory:

./glomosim < inputfile >

The <input file> contains the configuration parameters for the simulation (an example of such a file is CONFIG.IN). A file called GLOMO.STAT is produced at the end of the

simulation and contains all the statistics generated.

ii. Network Simulator 2 (NS2)

Network Simulator (Version 2), widely known as NS2, is simply an event-driven

simulation tool that has proved useful in studying the dynamic nature of communication

networks. Simulation of wired as well as wireless network functions and protocols (e.g.,

routing algorithms, TCP, UDP) can be done using NS2, (Lawrence, 1992). In general,

NS2 provides users with a way of specifying such network protocols and simulating their

corresponding behaviors.

Due to its flexibility and modular nature, NS2 has gained constant popularity in the

networking research community since its birth in 1989. Ever since, several revolutions

and revisions have marked the growing maturity of the tool, thanks to substantial

contributions from the players in the field. Among these are the University of California

and Cornell University, who developed the REAL network simulator, the foundation on which NS was built. Since 1995 the Defense Advanced Research Projects Agency

(DARPA) supported the development of NS through the Virtual Inter Network Test bed

(VINT) project. Currently the National Science Foundation (NSF) has joined the ride in

development. Last but not the least, the group of researchers and developers in the

community are constantly working to keep NS2 strong and versatile.

iii. Netsim

The Netsim simulator is a single process discrete event simulator. The Netsim

system is highly flexible so that a wide variety of network configurations can be

investigated, (Thomas, 1992). This flexibility makes Netsim an ideal tool for open ended

investigations of the network. The program simulates a finite population network which

gives a more accurate picture of network behavior than simulations based on assumptions

similar to those used in infinite population modeling studies. The program does not model packet buffering at individual stations; each station is assumed to generate a new arrival only after its previous packet has been successfully transmitted.

2.7 Summary of Literature Reviewed

This chapter reviewed the related literature on the performance analysis of a local area network. Several authors were reviewed and their various observations discussed. The issues associated with local area networks were itemized. The chapter also discussed the types of network simulation tools and how these tools can be effective in analyzing network performance.

CHAPTER THREE

RESEARCH ANALYSIS AND METHODOLOGY

3.1 Analysis of the Existing Method of Network Analysis

Networks (wired and wireless) have grown rapidly over the past few decades, increasing the pace at which network resources can be accessed. It is therefore vital to have an accurate and reliable generic platform for network performance evaluation. Wired networks provide a secure and faster means of connectivity. The performance of wired Ethernet is very sensitive to the number of users, the offered load and the transmission links, while a wireless network is also very sensitive to the number of users and the offered load, as well as to the physical characteristics, data rate, packet size and so on. Wired and wireless networks can be compared in the areas of installation, cost, reliability, performance, security and mobility. As networks are being upgraded and built from scratch all over the world, network planning is becoming increasingly important. Computing the viability and performance of networks in the real world can be a very expensive and painstaking task. To ease the process of estimating and predicting network behaviour, simulation techniques are widely used and put into practice. The method currently used for the performance analysis of a local area network is not effective enough because of the following challenges:

i. The choice of number of users

ii. Retransmission attempt

iii. Collision attempt

iv. Throughput, etc.

3.2 Method of Data Collection

During this project research work, the data needed for the project were gathered from various sources. In gathering and collecting the necessary data and information needed for the system analysis, two major fact-finding techniques were used in this work, and they are:

a. Primary Source:

This refers to the collection of original data, in which the researcher made use of an empirical approach such as a personal interview.

b. Secondary Source:

The secondary data were obtained by the researcher from magazines, journals, newspapers, library sources and internet downloads. The data collected through these means have been covered in the literature review in chapter two.

3.2.1 Oral interview

This was done between the researcher and the staff of the Delta State University digital centre (Backlink Technology Solutions).

3.2.2 Study of Manuals

A variety of simulation tools like GloMoSim, NS2, NetSim and OPNET were studied for a better understanding of the purpose of modeling and simulation, but the choice of a simulator depends upon the features available and the requirements of the network application.

3.3 OPNET (Optimized Network Engineering Tools)

OPNET is a registered commercial trademark and the name of a product presented by OPNET Technologies Inc. It was one of the most famous and popular commercial network simulators by the end of 2008. Because it has been used for a long time in the industry, it has become mature and has occupied a big market share. Among

the various network simulators OPNET provides the industry’s leading environment for

network modeling and simulation. It allows users to design and study communication networks, devices, protocols, and applications with flexibility and scalability. It provides an object-oriented modeling approach and graphical editors that mirror the structure of

actual networks and network components. The analysis helped to estimate and optimize

the performance of wired and wireless networks using proposed optimization techniques.

3.4 Advantages of the OPNET Simulation Tool over Other Simulation Tools

OPNET's software environment is specialized for network research and

development. It can be flexibly used to study communication networks, devices,

protocols, and applications. Being a commercial software product, OPNET offers relatively powerful visual and graphical support for users. The

graphical editor interface can be used to build network topology and entities from the

application layer to the physical layer. An object-oriented programming technique is used to

create the mapping from the graphical design to the implementation of the real systems.

An example of the graphical user interface (GUI) of OPNET can be seen in Figure 3.1. The topology configuration and simulation results can be presented very intuitively and visually, and the parameters can be adjusted and the experiments repeated easily through the GUI.

Fig. 3.1: OPNET GUI

CHAPTER FOUR

4.1 OPNET Simulation of Local Area Network Performance Analysis

OPNET simulator is a tool to simulate the behavior and performance of any type

of network. The main difference with other simulators lies in its power and versatility.

This simulator makes it possible to work with the OSI model, from layer 7 down to the modification

of the most essential physical parameters. This section describes the performance analysis

of wireless and wired computer networks using simulation. The simulation was done

using the network simulator OPNET.

In Section 4.2, first of all, a comparison was done by varying the types of

transmission links (Ethernet links) used in the wired networks for communication

between the server and the clients. Secondly, a load balancing mechanism has been used

to balance traffic load in the wired network. In this, different load balancing policies were

used. Investigations were done to find the policy with which the traffic sent/received

can be balanced to improve the performance. The performance metrics evaluated are

delay, throughput, traffic sent and traffic received.

In Section 4.3, the performance analysis of the wireless computer networks has been

illustrated by tuning the wireless local area network parameters (such as the physical characteristics and buffer size, among many other parameters). The performance metrics

analyzed are delay and throughput for wireless networks.

4.2 OPNET Analysis of the Wired Network

Wired local area networks include several technologies like Ethernet, token ring,

token bus, Fiber distributed data interface and asynchronous transfer mode local area

networks. Ethernet has largely replaced competing wired LAN technologies. The

Ethernet is a working example of the more general Carrier Sense Multiple Access with Collision Detection (CSMA/CD) local area network technology. The Ethernet is a multiple-access

network, meaning that a set of nodes sends and receives frames over a shared link. When

two devices transmit at the same time, a collision can occur. This collision generates

a jam signal that causes all nodes on the segment to stop sending data, which informs all

the devices that a collision has occurred. The carrier sense means that all the nodes can

distinguish between an idle and a busy link. The collision detect means that a node

listens as it transmits and can therefore detect when a frame it is transmitting has

interfered (collided) with a frame transmitted by another node. The Ethernet is said to be

a 1-persistent protocol because an adaptor with a frame to send transmits with probability

1 whenever a busy line goes idle.
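
To make the 1-persistent behaviour concrete, the toy Python sketch below (an illustration written for this report, not an OPNET model; the frame duration, station count and backoff cap are arbitrary example values) has every station always ready to send: a station defers while the medium is busy, transmits as soon as it goes idle, and backs off for a random number of slots after a collision, doubling its range each time as in binary exponential backoff.

# Toy slotted CSMA/CD sketch with binary exponential backoff (illustration only).
import random

FRAME_SLOTS = 5          # assumed frame duration, in slots
MAX_BACKOFF_EXP = 10     # cap on the binary exponential backoff exponent

def simulate(n_stations=4, slots=10_000):
    backoff = [0] * n_stations       # slots each station must still wait
    attempts = [0] * n_stations      # consecutive collisions per station
    busy_until = -1                  # last slot occupied by a successful frame
    successes = collisions = 0
    for slot in range(slots):
        if slot <= busy_until:       # medium busy: 1-persistent stations simply wait
            continue
        ready = [i for i in range(n_stations) if backoff[i] == 0]
        for i in range(n_stations):  # deferring stations count down their backoff
            if backoff[i] > 0:
                backoff[i] -= 1
        if len(ready) == 1:          # exactly one transmitter: successful frame
            successes += 1
            attempts[ready[0]] = 0
            busy_until = slot + FRAME_SLOTS - 1
        elif len(ready) > 1:         # simultaneous transmissions: collision and jam
            collisions += 1
            for i in ready:
                attempts[i] = min(attempts[i] + 1, MAX_BACKOFF_EXP)
                backoff[i] = random.randint(0, 2 ** attempts[i] - 1)
    return successes, collisions

if __name__ == "__main__":
    ok, bad = simulate()
    print(f"successful frames: {ok}, collisions: {bad}")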

Analysis of Wired network

In the simulation scenario shown in Figure 4.1, a comparison has been done by varying the types of transmission links (Ethernet links) used in the networks for communication between the server and the clients in a wired local area network.

Fig. 4.1: Wired local area network model I

Figure 4.1 shows the wired network being modeled and simulated for performance

analysis using OPNET. The comparison was made for the same number of users but different types of links, namely 10BaseT, 100BaseT and 1000BaseX. The analysis of performance metrics like Ethernet delay illustrates that the maximum delay occurs for 10BaseT. The performance analysis shows the impact of varying types of links on the Ethernet traffic received. The traffic received using 100BaseT and 1000BaseX is maximum

because of the reduction in delay.
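
The delay ranking of the three link types already follows from the serialization time alone. The back-of-the-envelope calculation below (an illustration for this report, assuming a 1500-byte frame and ignoring propagation, queuing and protocol overhead) shows why 10BaseT lags the faster links.

# Serialization delay for one frame on each link type (illustration only).
# Assumptions: 1500-byte frame; propagation, queuing and protocol overhead ignored.
FRAME_BITS = 1500 * 8

for name, rate_bps in (("10BaseT", 10e6), ("100BaseT", 100e6), ("1000BaseX", 1e9)):
    delay_us = FRAME_BITS / rate_bps * 1e6
    print(f"{name:9s}: {delay_us:8.1f} microseconds per frame")

At these rates one frame takes roughly 1200, 120 and 12 microseconds respectively, which matches the observed ordering of the Ethernet delay statistic.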

When the load balancer receives a packet from a client machine, it must choose

the appropriate server to handle the request. The load balancer will use the load balancing

policy to determine which server is most appropriate. The following load balancing policies can be used (a simple sketch of these policies follows the list):

Random: The load balancer chooses one of the candidate servers at random.

Round-Robin: The load balancer cycles through the list of candidate servers.

Server Load: The load balancer chooses the candidate server with the lowest CPU load.

Number of Connections: The load balancer keeps track of the number of connections it

has assigned to each server. When a new request is made, it chooses the server with the

fewest connections.
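
The selection rule of each policy can be sketched in a few lines of Python. The snippet below is an illustration written for this report, not OPNET's load balancer object; the server names, CPU loads and connection counts are invented example data.

# Illustrative selection logic for the four load balancing policies (not OPNET code).
import itertools
import random

servers = ["server_a", "server_b", "server_c"]
cpu_load = {"server_a": 0.62, "server_b": 0.35, "server_c": 0.80}   # assumed loads
connections = {s: 0 for s in servers}                               # connections assigned so far
rr_cycle = itertools.cycle(servers)

def pick(policy: str) -> str:
    if policy == "random":          # any candidate server, chosen at random
        return random.choice(servers)
    if policy == "round-robin":     # cycle through the candidate list
        return next(rr_cycle)
    if policy == "server-load":     # candidate with the lowest CPU load
        return min(servers, key=cpu_load.get)
    if policy == "connections":     # candidate with the fewest assigned connections
        return min(servers, key=connections.get)
    raise ValueError(f"unknown policy: {policy}")

if __name__ == "__main__":
    for _ in range(6):              # six incoming client requests
        chosen = pick("connections")
        connections[chosen] += 1
        print("request sent to", chosen, connections)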

The performance analysis has been done for networks with and without load

balancing policy. When no load balancing policy is used, the number of users is varied to

vary the network load. Then the performance analysis was done by comparing the

networks: one with the maximum network load (without a load balancing policy) and the other network with the same maximum network load (with a load balancer implementing the random load balancing policy). As the number of users increases, more traffic is generated.

4.3 OPNET Analysis of the Wireless Network

IEEE 802.11 is a recent standard developed for wireless local area networks

(WLANs). IEEE 802.11 is a multiple access protocol in which stations in the network

must compete for access to the shared communications medium to transmit data. IEEE

802.11 uses a carrier sensing capability to determine if the communications medium is

currently being used. If two or more stations in the network transmit at the same time

(i.e., a collision occurs), stations retransmit their data after random periods of time as in

Ethernet. Wi-Fi (Wireless Fidelity) technology, referred to as the 802.11 communications standard for WLANs, is the popular wireless networking technology that uses radio waves to provide wireless high-speed Internet and network connections. The 802.11 data link layer is divided into two sublayers: Logical Link Control (LLC) and Media Access Control (MAC). The LLC is the same as in other 802 LANs, allowing for very simple bridging from wireless to wired networks. The MAC, however, is specific to WLANs. The first access method in the MAC is the CSMA with Collision Avoidance (CSMA/CA) protocol. This protocol requires each station to listen before transmitting. If the channel is busy, the station has to wait until the channel is free. Another method in the MAC is called RTS/CTS and is used to solve the hidden-node problem.
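
A compressed view of this medium-access rule is sketched below. It is a simplified illustration written for this report, not the full 802.11 state machine; the RTS threshold and contention window values are arbitrary examples. It shows listen-before-transmit, random backoff, and the optional RTS/CTS exchange for frames above the threshold.

# Simplified CSMA/CA decision logic (illustration only, not the 802.11 MAC).
import random

RTS_THRESHOLD = 256        # assumed: frames longer than this use the RTS/CTS exchange
CW_MIN = 15                # assumed minimum contention window, in slots

def send_frame(frame_len, channel_busy, cw=CW_MIN):
    """Return the list of MAC-level steps taken to send one frame."""
    steps = []
    while channel_busy():                    # listen before acting
        steps.append("channel busy: defer")
    backoff = random.randint(0, cw)          # random backoff once the medium is free
    steps.append(f"backoff for {backoff} slots")
    if frame_len > RTS_THRESHOLD:            # reserve the medium to avoid hidden nodes
        steps.append("send RTS, wait for CTS")
    steps.append("transmit data frame, wait for ACK")
    return steps

if __name__ == "__main__":
    busy_checks = iter([True, True, False])  # the medium is busy for the first two checks
    print(send_frame(frame_len=1500, channel_busy=lambda: next(busy_checks)))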

Analysis of Wireless Network

The investigations show that the network attains the maximum throughput using the infrared (IR) physical layer. The worst results are achieved when the IEEE 802.11 protocol uses the FHSS physical layer. An important thing to note, however, is that the throughput may vary according to the type of network modelled; the network objects may vary in terms of the number of stations, data rate and type of network load, among certain other parameters.

Buffer size (bits) specifies the maximum size of the higher layer data buffer in

bits. Once the buffer limit is reached, data packets arriving from the higher layer will be

discarded until some packets are removed from the buffer, so that the buffer has some

free space to store new packets. The optimum buffer size can stabilize the queue size, the packet drop probability and hence the packet loss rate. The benefit of stabilizing queues in a network is high resource utilization. When the queue buffer appears to be congested, the packet discard probability increases. On the other hand, buffer overflow can be used to manage congestion. The performance analysis has been done for buffer sizes of 256 kbits and 1024 kbits.

The buffer configuration defines the buffer size, the maximum allocated

bandwidth and minimum guaranteed bandwidth. If an incoming flow suddenly becomes

bursty, then it is possible for the entire buffer space to be filled by this single flow and

other flows will not be serviced until the buffer is emptied. If the buffer size is increased,

then the number of retransmission attempts would be reduced. Also, the size of the queue will be decreased for a larger buffer, due to the fact that the larger buffer will take less time to send the packets, so the queue size will not build up continuously for a larger buffer. This shows the reduction in delay. Packet loss also results in a change in queue length.

When packet loss is relatively low, packets usually can be transmitted without

retransmission, so the queue length may be relatively small. But when the packet loss is

high, MAC packet retransmissions will prolong the packet delay. So, a small buffer

size can increase packet drop rate and hence change the queue length and thus impact the

throughput and delay. Hence, the performance has been improved by increasing the

buffer size. The increase of packet discard rate can lead to the decrease of throughput.

This happens due to frequent retransmissions of the MAC layer data packets when the

packet loss rate increases. But the packet loss may happen due to low buffer size. The

analysis shows that if buffer size is increased then the retransmission attempts would be

reduced as the size of the queue is decreased for a large buffer size. The time to deliver

the packets decreases, due to large buffer size. The throughput always increases

monotonically with the buffer size, reaching a maximum above a threshold buffer size.

Similarly, the performance analysis can be done for varying data rates, RTS threshold

and fragmentation threshold. Thus the throughput can be increased by increasing the buffer size, because on increasing the buffer size the packet drop may be reduced.
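
The buffer-size effect described above can be reproduced with a very small queue model. The sketch below is an illustration written for this report, not the OPNET model; the arrival probability, service probability and buffer sizes are arbitrary example values. Packets that arrive at a full buffer are dropped, and the drop rate falls as the buffer grows.

# Toy finite-buffer queue: packets arriving at a full buffer are dropped (illustration only).
import random

def drop_rate(buffer_pkts, arrival_p=0.90, service_p=0.92, ticks=200_000):
    """Fraction of packets dropped when the buffer holds at most buffer_pkts packets."""
    queue = arrived = dropped = 0
    for _ in range(ticks):
        if random.random() < arrival_p:          # a packet arrives in this tick
            arrived += 1
            if queue < buffer_pkts:
                queue += 1
            else:
                dropped += 1                     # buffer full: the packet is lost
        if queue > 0 and random.random() < service_p:
            queue -= 1                           # one packet is transmitted
    return dropped / max(arrived, 1)

if __name__ == "__main__":
    for size in (8, 32, 128):
        print(f"buffer = {size:4d} packets  drop rate ~ {drop_rate(size):.4f}")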

4.4 Features Analyzed

Some of the performance metrics focused on in the literature review regarding wired and wireless LANs are listed below (a short worked example follows the list):

Collision count: Total number of collisions encountered by this station during packet

transmissions.

Data Dropped: Total number of bits that are sent by a wireless node but never received by

another node.

Delay: This statistic represents the end to end delay of all packets received by all the

stations and forwarded to the higher layer.

Load: Total number of bits received from the higher layer. Packets arriving from the

higher layer are stored in the higher layer queue. It may be measured in bits/sec or

packets/sec.

Media access delay: Total time (in Seconds) that the packet is in the higher layer queue,

from the arrival to the point when it is removed from the queue for transmission.

Queue Size: Represents the total number of packets in the MAC's transmission queue(s) (in

802.11e capable MACs, there will be a separate transmission queue for each access

category).

Throughput: Total number of bits sent to the higher layer from the MAC layer. The data

packets received at the physical layer are sent to the higher layer if they are destined for

this destination.
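
As a concrete illustration of how a few of these metrics relate to per-packet records, the short sketch below (written for this report, not an OPNET statistic; the packet trace is invented) computes throughput, average end-to-end delay and average media access delay from three packets.

# Computing three of the metrics above from a tiny invented packet trace.
# Each record: (enqueue_time, tx_start_time, rx_time, size_bits); times in seconds.
packets = [
    (0.000, 0.002, 0.010, 8000),
    (0.004, 0.012, 0.021, 8000),
    (0.009, 0.022, 0.030, 12000),
]

duration = max(rx for _, _, rx, _ in packets) - min(enq for enq, _, _, _ in packets)
throughput_bps = sum(bits for *_, bits in packets) / duration
avg_delay = sum(rx - enq for enq, _, rx, _ in packets) / len(packets)      # end-to-end delay
avg_mac_delay = sum(tx - enq for enq, tx, _, _ in packets) / len(packets)  # media access delay

print(f"throughput: {throughput_bps:.0f} bit/s")
print(f"average end-to-end delay: {avg_delay * 1000:.1f} ms")
print(f"average media access delay: {avg_mac_delay * 1000:.1f} ms")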

4.5 Findings and Discussion

Though wireless networks, in contrast to wired networks, are a relatively new field of research, there exist several simulators for developing and testing the effect of changes in the input and other attribute parameters on various performance metrics.

An extensive literature review on wireless and wired networks using simulation has been carried out for their performance comparison, by varying the attributes of network objects such as traffic load, file size and RTS/CTS, and by customizing the physical characteristics to vary BER, slot time, SIFS time or the contention window, in order to determine their impact on throughput and delay. It is observed that as the number of users increases, more traffic is generated. Also, throughput can be increased by increasing the buffer size, because on increasing the buffer size the packet drop may be reduced.

CHAPTER FIVE

5.0 SUMMARY, CONCLUSION AND RECOMMENDATION

5.1 SUMMARY

In this work, a set of simulation experiments was performed in the OPNET simulator to compare wired and wireless network performance. For a small number of users, the wireless network performs better than the wired network for the same packet data rate. As the number of users increases, performance and throughput degradation occur for the wireless network relative to the wired network at the same speed, due to the transmission limit, SNR (signal-to-noise ratio) and bandwidth of the received signal. As the number of users or the load increases beyond some limit on the wireless network, collisions occur among the packets sent by the users, and the resulting retransmissions in the wireless network degrade the performance.

5.2 CONCLUSION

The impact of various network configurations on the network performance was

analyzed using the network simulator OPNET. It has been found that the performance of wired networks is good if high-speed Ethernet links are used under heavy network loads. The mechanism of load balancing also improves the performance by reducing and balancing the load equally among multiple servers. This lowers the response time to access the server. In addition, performance analysis of wireless computer networks has been

done for improving the performance of wireless LAN. The investigations of physical

characteristics reveal that the infrared type is best in terms of throughput. The variation in

buffer size varies the queue size and hence optimizes the throughput.

5.3 RECOMMENDATION

This research work was carried out under thorough supervision. However, the researcher recommends that, to improve the overall performance of the system, it is better to use a hybrid network, which is a combination of both wired and wireless networks.

REFERENCES

Hesham M. El et al. (2008), Performance Evaluation of the IEEE 802.11 Wireless LAN Standards, in Proceedings of the World Congress on Engineering 2008, vol. I, July 2-4.

Sameh H. Ghwanmeh (2000), Wireless network performance optimisation using Opnet Modeler, Information Technology Journal, vol. 5, no. 1, pp. 18-24.

Ranjan Kaparti, OPNET IT Guru: A tool for networking education, Regis University.

OPNET Modeler Manual, available at http://www.opnet.com

Velmurugan et al. (2009), Comparison of Queuing Disciplines for Differentiated Services using OPNET, IEEE, ARTComm, pp. 744-746.

Yang Dondkai and Liu Wenli (2009), The Wireless Channel Modeling for RFID System with OPNET, in Proceedings of the IEEE Communications Society sponsored 5th International Conference on Wireless Communications, Networking and Mobile Computing, Beijing, China.

Ikram Ud Din et al. (2009), Performance Evaluation of Different Ethernet LANs Connected by Switches and Hubs, European Journal of Scientific Research, vol. 37, no. 3, pp. 461-470.

Schreiber M. K. et al. (2005), Performance of video and video conferencing over ATM and Gigabit Ethernet backbone networks, Res. Lett. Inf. Math. Sci., vol. 7, pp. 19-27.

Chang X. (1999), Network simulations with OPNET, Proceedings of the 1999 Winter Simulation Conference, pp. 307-314.

Shufang Wu et al. (2004), OPNET Implementation of the Megaco/H.248 protocol: multi-call and multi-connection scenarios, OPNETWORK 2004, Washington, DC.

Yan Huang et al. (2002), Opnet Simulation of a multi-hop self-organizing Wireless Sensor Network, in Proceedings of the OPNETWORK 2002 conference, Washington, D.C.

Gilberto Flores Lucio (2003), OPNET Modeler and NS-2: Comparing the accuracy of Network Simulators for packet level Analysis using a Network Test bed, WSEAS Transactions on Computers, pp. 700-707, 2-3.

Bansal R. K. (2010), Performance analysis of wired and wireless LAN using soft computing techniques - A review, Global Journal of Computer Science and Technology, vol. 10, issue 8, ver. 1.0.

Jia Wang et al. (1999), Efficient and Accurate Ethernet Simulation, Proc. of the 24th Conference on Local Computer Networks (LCN'99), pp. 182-19.

Mohammad Hussain et al. (2009), Simulation Study of 802.11b DCF using OPNET Simulator, Eng. & Tech. Journal, vol. 27, no. 6, pp. 1108-1117.
