
A Comparison of Multiprotocol Label Switching

(MPLS) and OpenFlow Communication Protocols


By Dariusz Terefenko

X00088029

A research paper submitted in partial fulfilment of the


requirements for the

MSc. in Distributed and Mobile Computing

Institute of Technology Tallaght


Dublin 24
April 2018
Abstract
The main reason behind this research was to compare the effectiveness of the Multiprotocol Label
Switching (MPLS) and OpenFlow (OF) protocols in computer networks. The paper discusses
examples of these protocols for network Traffic Engineering (TE) and Quality of
Service (QoS), as well as their scalability and interoperability, with an outlook for a performance
comparison. Test environments were created using Hardware (HW) routers and Hyper-V
technology, as well as a Mininet environment, for experiments with Software
Defined Networking (SDN). During the experiments, software routers were used with the Linux
operating system (with the addition of MPLS in Linux and FRRouting (FRR)), Cisco physical and
virtual routers, and a set of specially installed tools to generate traffic and capture results
for analysis, such as Wireshark, iPerf, and MGEN.

Word Count: 7900

2
1. Introduction
Nowadays it is hard to imagine a computer without a network connection. The most
common network, and one that is growing at a truly breathtaking speed, is the Internet, which, according
to Internet Live Stats (2017), has grown to more than 3 billion users from 1 billion in 2005. We
must highlight that users currently access the Internet not only from their Personal Computers
(PCs) but also from their mobile phones, tablets, cameras, and even household appliances and
cars.
This exponentially growing number of connections means that computer networks, in
the field of Information Technology (IT), have become integral to research and experimentation,
in order to keep pace with, and allow for, the Internet's rapid growth.

This thesis follows the trends of these studies: we conducted a series of
experiments and tests designed to examine the operational efficiency and
manageability of network traffic with MPLS and OpenFlow. These technologies are expected
to help solve problems such as the transmission of multimedia in real time, the provision of services that
meet Quality of Service (QoS) criteria, the construction of scalable virtual private
networks, and efficient and effective Traffic Engineering (TE).

Subsequent chapters describe in detail the scope of work performed and the lessons learned
from practical scenarios, and they also point to potential uses of these results. The next
sections contain a description of network protocols such as OpenFlow and MPLS, as well
as SDN and its operation within OpenFlow boundaries. They also explain why
we should use OpenFlow as an extension of IP technology, what the security risks are, and how
to deploy such a protocol. Chapter two follows with a concise description of
the environment used to perform the study of protocol effectiveness. This section also presents
a detailed walkthrough of the environment set-up based on the available Cisco Internetwork
Operating System (IOS) routers, Ubuntu with a software router implementation, and Hyper-V
virtualization technology, and how it can be used to test the performance and capabilities
of the MPLS and OpenFlow protocols, such as scalability, QoS, TE, and link failover. It also
includes an explanation of various OpenFlow topologies and SDN controllers to highlight their
operational differences. The third chapter is an accurate description of the experiments, with the
obtained results and a brief commentary on them. The last, fourth chapter is dedicated
to presenting the findings concluded from the work carried out, as well as the future work
proposed for further research. Due to the complexity of the research, it also includes a list of
appendices with the scripts and commands used to configure the devices.

1.1. Limitations of IP
While using IP technology, it is not possible to provide good performance for data
services with guaranteed quality (Smith and Accedes, 2008). This is a serious problem if we
want to deliver high-quality multimedia through a network while satisfying real-time
conditions. To solve this problem, many began to work on an approach for data, voice, and video to be
broadcast via telecommunications networks in a unified way, in the form of packets (Gupta,
2012). This, however, requires a modification of the network architecture, which is generally
referred to as a Next Generation Network (NGN).

Another issue which we cannot solve with the use of IP technology is the creation of
effective mechanisms to control and manage the movement of packets across the network,
so-called Traffic Engineering (TE). According to Mishra and Sahoo (2007), this is due to the
restrictions of dynamic routing protocols, e.g. Open Shortest Path First (OSPF), which
do not allow arbitrary data flow paths to be defined.
The problems described, however, can be solved using MPLS or OpenFlow.

1.2. MPLS
In principle, MPLS is not supposed to substitute for already-used communication
protocols, including the most common, IP; rather, it should extend them (Rosen et al., 2001). MPLS
can work with network technologies such as TCP/IP, ATM, Frame Relay, and
Synchronous Optical Networking (SONET).

MPLS is called a layer 2.5 protocol in the ISO Open Systems Interconnection (OSI)
model because it operates between the data link and network layers. According to the Requests for
Comments (RFCs), it combines the advantages of the data link layer, such as performance and
speed, with those of the network layer, such as scalability.

With MPLS we get a richer set of tools for network management and TE, which allows
packets to be transmitted through arbitrarily specified routes that cannot be defined
with classical routing protocols.

In the MPLS domain, IP routing is replaced by a label switching mechanism, which is
used by Label Edge Routers (LERs) and Label Switched Routers (LSRs) to forward groups of
packets belonging to the same Forwarding Equivalence Class (FEC) via a Label Switched Path (LSP).

According to Abinaiya and Jayageetha (2015), an MPLS label is attached by the LER
at the time the packet enters the network. A 32-bit label is added
between the second layer (Ethernet) header and the third layer (IP) header.
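As a sketch of this layout, the 32-bit label stack entry defined in RFC 3032 packs a 20-bit label value, a 3-bit traffic class, a 1-bit bottom-of-stack flag, and an 8-bit TTL. The following Python snippet (illustrative only, not part of the paper's test tooling) packs and unpacks these fields:

```python
import struct

def pack_mpls_label(label: int, tc: int, s: int, ttl: int) -> bytes:
    """Pack the four MPLS fields into the 32-bit label stack entry:
    label (20 bits), traffic class (3 bits), bottom-of-stack (1 bit), TTL (8 bits)."""
    word = (label << 12) | (tc << 9) | (s << 8) | ttl
    return struct.pack("!I", word)  # network byte order, 4 bytes

def unpack_mpls_label(data: bytes):
    """Reverse of pack_mpls_label: extract (label, tc, s, ttl)."""
    (word,) = struct.unpack("!I", data)
    return (word >> 12) & 0xFFFFF, (word >> 9) & 0x7, (word >> 8) & 0x1, word & 0xFF
```

For example, `pack_mpls_label(100, 0, 1, 64)` yields the 4 bytes that an LER would insert between the Ethernet and IP headers for label 100 at the bottom of the stack.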

MPLS technology allows us to build a stack of labels, where path switching operations
are always performed on the top-level VPN labels of the defined tunnels. It is possible to use TE
with MPLS because the list of hops through which the packet is routed is determined by the
LER at the time the packet enters the MPLS domain. This allows traffic to be routed differently than it
would be by classical routing protocols, as we can select a path with reserved resources
that meet the QoS requirements.

According to Partsenidis (2011), with MPLS we can also easily set up tunnels between
selected network nodes, which can be used to create VPNs and to guarantee logical separation
between different VPNs using one common network infrastructure.

1.2.1. MPLS in Linux


MPLS in Linux is an open source project which adds MPLS support to the Linux kernel (Leu,
2013). The creators of the project provided an upgrade to the Linux kernel which allows the use of
this protocol, transforming a PC into a software router with MPLS. Its advantage
was the stability of operation, and so in 2015 MPLS support was widely
introduced in Linux 4.1 and has been developed since then with added functionality (Kernel
Newbies, 2015).

1.2.2. Cisco and MPLS


Cisco has developed MPLS as a commercial technology, as opposed to the
free software solution presented above. The version of the protocol implemented by Cisco
can work as an extension to IP, ATM, Frame Relay, and Ethernet, technologies
which are also supported by a wide range of device manufacturers.
According to Cisco (2002), it caters for the scalability of VPNs and facilitates the use of
the shortest path for traffic flows, reducing congestion and making the best use of
network resources.

1.3. OpenFlow
1.3.1. OF Protocol
OpenFlow (2011) is an open source project which was developed in the first decade of
the twenty-first century at Stanford University and the University of California. Its first official
specification was announced in late 2009 and is designated version 1.0.0. Currently,
further work on this protocol is carried out by the Open Networking Foundation, and the latest
version, announced in March 2015, is 1.5.1 (Open Networking Foundation, 2015).

The use of OpenFlow provides benefits similar to those offered by MPLS. We receive a rich set
of tools that let us engineer traffic to optimize transmission, ensuring adequate
throughput, avoiding delays, and controlling the number of links through which packets are routed.

This protocol introduces the concept of traffic flows which caters for both network
virtualization and separation of traffic.

OpenFlow is a protocol operating in the second layer of the ISO OSI model, which
distinguishes it from the MPLS protocol, which works in both the data link and the network
layers.

According to Goransson and Black (2014), three components are needed to create a
network based on OpenFlow technology: a switch that supports the protocol, a controller, and
a communication channel through which the controller and switch communicate using the
OpenFlow protocol.

1.3.2. Switch Architecture


An OpenFlow switch is a network infrastructure component that operates in the second
layer of the ISO OSI model, holds flow tables in its memory, and has a communication channel
that can be used to communicate with the controller.

Flow tables consist of three major elements: header fields, which are created from the
packet header; counters, which hold statistical information such as the number of packets and
bytes sent and the time since the last packet matched the rule; and action fields, which specify the
way the packet is processed. Entries are added via the controller. They specify how the
switch should behave after receiving a packet that meets the matching condition: the switch
can send the data to an output port, reject it, or send it to the controller.

The communication channel is used to facilitate communication between the switches and the
controllers, which is crucial since the actual decisions about network traffic management are
made by the controller and must be propagated among the switches. The data sent through this
channel must conform to the OpenFlow specification and is usually encrypted using Transport
Layer Security (TLS).
Most controllers support OpenFlow version 1.3.2 rather than 1.4 or 1.5. An OF switch
might support one or more flow tables within the pipeline, but only needs to support one table
to be compliant with the standard.

The switch can be a Mininet virtual switch, Open vSwitch (OVS), or physical HW which uses the OF protocol to
communicate with an external controller via TCP or TLS to perform packet lookups for
forwarding decisions.
The controller is decoupled from the switch in the control plane, usually running on a Linux box,
and it manages the switch via OF to add, update, and delete flow entries in a reactive or proactive
manner.

Each switch can have up to 254 flow tables, and matching of packets starts at Flow Table 0.
A single table was originally supported in OF version 1.0, while later versions support multiple tables
in the pipeline by using goto-table instructions. Flow entries are matched in order of priority
from higher to lower when instructions are executed, and if no entries are matched then a
table-miss entry with a priority of 0 is used.
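The priority-ordered lookup described above can be sketched as follows. This is an illustrative model, not an actual switch implementation; the field names in the match dictionaries are assumptions for the example:

```python
from dataclasses import dataclass

@dataclass
class FlowEntry:
    priority: int
    match: dict    # header fields to match, e.g. {"ip_proto": 1} for ICMP
    actions: list  # an empty action list means "drop the packet"

def lookup(table, packet_headers):
    """Try entries from highest to lowest priority; the first entry whose
    match fields all agree with the packet's headers wins."""
    for entry in sorted(table, key=lambda e: e.priority, reverse=True):
        if all(packet_headers.get(k) == v for k, v in entry.match.items()):
            return entry
    return None

# A table-miss entry has priority 0 and an empty match, so it matches
# every packet that no higher-priority entry claimed.
table = [
    FlowEntry(priority=0, match={}, actions=["CONTROLLER"]),   # table-miss
    FlowEntry(priority=100, match={"ip_proto": 1}, actions=[]) # drop ICMP
]
```

With this table, an ICMP packet (`ip_proto` 1) hits the priority-100 drop entry, mirroring the blocking experiment described later, while any other packet falls through to the table-miss entry and would be sent to the controller.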

1.3.3. Flow Matching


To investigate the concept of flow tables, we used Mininet and the HPE Aruba
VAN SDN controller with a topology consisting of two switches and two hosts.
Observations allowed us to see the Datapath ID of each switch and the single flow table,
with all matches and actions, during a pingall request between the nodes. We also created a
flow entry for ICMP packets with a higher priority and no associated action, to block the
connectivity between the hosts.

During these test cases we saw that, under OF 1.0, we were only able to use
one table in the pipeline, while OF 1.3 and higher can support a larger number of flow tables,
starting from Flow Table 0.

1.3.4. Switch Ports


Switches connect to each other via OF ports, which are logical or hardware ports,
where physical ports map one-to-one to logical ports.

OF switches also have reserved ports which are defined by the specification and these
represent forwarding actions such as sending traffic to the controller, flooding packets out, or
forwarding with normal switch processing.

A logical port can be used as an ingress or egress port depending on the OF version, while the normal
port represents traditional routing.

1.3.5. Pure vs Hybrid


OpenFlow-only switches have no normal forwarding pipeline: they operate
only in the OpenFlow pipeline and rely on the intelligence of the controller to make decisions about
the forwarding of packets.

OpenFlow-hybrid switches support pure OF operations as well as normal switching
mechanisms such as L2 switching, L3 routing, ACLs, VLANs, and QoS. They provide a
classification mechanism, such as VLAN tagging or the input port, which directs traffic
either to the normal pipeline or to the OpenFlow pipeline with its flood and reserved ports.

In an OF pipeline, switching decisions are made by the controller, whereas in a traditional
mechanism they are made locally on the switch. It is also possible for the OF pipeline to hand
a packet over for traditional forwarding by sending it to the normal port for
traditional routing and switching.

1.3.6. Connection Interruption


When a switch does not receive a periodic echo reply message back, this means
there is a problem with the connection to the controller, and results in the switch going into fail
secure or fail standalone mode.

In fail secure mode, during the first connection attempt or in the event of a switch being
disconnected from the controller, packets destined for the controller are dropped, and flow entries
are deleted or expire as per their timeout settings. Hybrid switches, however,
can operate in fail standalone mode during a failure, so packets can be delivered using
a traditional forwarding method via the normal port.

To investigate the concepts of fail secure and fail standalone, we conducted test
cases using the HPE Aruba VAN SDN and OpenDaylight (ODL) controllers. This allowed
us to observe the behaviour of the flow entries as well as packets traversing between ports.

1.3.7. Real World


There is no requirement to replace most devices to support OF, as vendors such
as HP (HPE Support, 2012) and Cisco (Cisco Support, 2017) provide firmware upgrades. In this
way we can use hybrid switches in the core or access layer, or OVS in Mininet, and if OF is not
configured to operate with a controller, traditional routing and forwarding mechanisms will
be used.

1.3.8. Message Types


The OpenFlow version 1.3.2 (OF13) specification distinguishes three types of
communication (Open Networking Foundation, 2013).
In controller-to-switch communication, the controller sends a message to the switch
and possibly receives a reply, using one of eight main message types: Features Request,
Configuration, Modify-State, Read-State, Packet-Out, Barrier, Role-Request, or Asynchronous
Configuration.
Asynchronous messages are initiated by switches, and there are four basic types of
messages: a packet receipt (Packet-In), removal of an entry from the flow table (Flow-Removed),
a port status change (Port-Status), and an error message (Error).

Symmetric communication can be initiated by either side; messages sent in this way
are: Hello messages between switch and controller; Echo, which verifies the link and can be used to
measure latency; and a vendor-specific message reserved for future use (Experimenter).

To investigate the message types, we used ODL and Wireshark to inspect the
packets traversing between the switches and the centralized controller.
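All of these message types share a common OpenFlow header carrying the protocol version, message type, total length, and a transaction identifier. As a minimal illustration of the wire format one would see in such a Wireshark capture (the constants below follow the OF 1.3 specification; this snippet is not part of the paper's tooling):

```python
import struct

OFP_VERSION_13 = 0x04  # wire version for OpenFlow 1.3
OFPT_HELLO, OFPT_ECHO_REQUEST, OFPT_ECHO_REPLY = 0, 2, 3

def build_message(msg_type: int, xid: int, payload: bytes = b"") -> bytes:
    """Prepend the 8-byte OpenFlow header: version, type, length, xid."""
    return struct.pack("!BBHI", OFP_VERSION_13, msg_type, 8 + len(payload), xid) + payload

def parse_header(data: bytes):
    """Return (version, msg_type, length, xid) from the first 8 bytes."""
    return struct.unpack("!BBHI", data[:8])
```

For example, the symmetric Hello exchanged at connection setup is just this header with type 0 and no payload.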

1.4. SDN
1.4.1. SDN Considerations
ONF's definition of SDN states that it's a separation of the network control plane from
the forwarding plane where devices can be controlled via the control plane (ONF, 2017).

The SDN architecture consists of application, control, and infrastructure layers: switches reside in the
infrastructure layer; the control layer holds controllers such as OpenDaylight (ODL), Ryu, ONOS,
Floodlight, POX, or HPE's Aruba Virtual Application Networks (VAN); and applications talk to the
controller via the Northbound Interface (NBI).

OpenFlow itself isn't an SDN, it's a protocol used in the Southbound Interface (SBI)
between the controller and switches.

1.4.2. NBI vs SBI


In general, SDN separates the control plane from the data plane and provides interfaces
and APIs for centralized network management, rather than configuring individual distributed
devices.
Two interfaces are required for SDN (Russello, 2016). The NBI allows individual
components of the network to communicate with higher level components and vice versa. This
interface describes the area of communication between hardware controllers and applications
as well as higher layer systems. Its functions focus mainly on the management of automation
and the interchange of data between systems.

The SBI is implemented, for example, through OpenFlow. Its main function is to
support communications between the SDN controller and network nodes, both physical as well
as virtual. It's also responsible for the integration of a distributed network environment. With
this interface, devices can discover network Topology (Topo), define network flows and
implement API to forward requests from the northbound interface.

1.4.3. NFV
According to Chiosi et al. (2012), Network Functions Virtualization (NFV) refers to
functions which are typically delivered on dedicated HW but are instead deployed as Software (SW)
running in a virtual environment.

1.4.4. CORD
Central Office Re-architected as a Datacentre (CORD) replaces the traditional central
office approach: NFV and SDN are used to deploy VMs and virtual appliances in the cloud
with an agile approach, providing higher efficiency (OpenCORD, 2017).

1.4.5. Available Controllers


There are many open source controllers, such as Floodlight, LOOM, OpenContrail,
ODL, OpenMUL, ONOS, Ryu, POX, and Trema. We can also list a number of
commercial controllers developed by Hewlett Packard Enterprise (HPE), Brocade, Dell, Big
Switch, and many more.
The most important factor is that the SDN controller software must include drivers that
allow it to control the functions of the network devices running the system, as only then can it
act as a network management system. It should be noted that most modern switches and routers
are equipped with memory modules containing flow tables to control the flow of data; these
differ in structure, but there is a basic set of features supported by all network devices that
use the OpenFlow protocol.

1.4.6. White-box Switching


White-box switching is the disaggregation of the Network Operating System (NOS) from the
hardware. In the past, the switch and its OS were proprietary and integrated, but we have now moved
to white-box switches running an OS and apps on top, such as Quagga (2017), SnapRoute (2017),
FBOSS (Simpkins, 2015), or ICOS (BW-Switch, 2016); alternatively, we can get a Linux OS from
Cumulus Networks, Pica8 (2017), or Big Switch Networks.

According to Salisbury (2013), many of the commercially available switches in L2 and


L3 layers can function as so-called hybrid switches which support both classic switching and
packet routing functions as well as commands issued by the OpenFlow controller.

1.4.7. Software Defined WAN


SD-WAN technologies are used to control the traffic sent via MPLS networks while at
the same time dynamically sending parts of it via the Internet cloud, rather than using static VPNs
and Policy Based Routing (PBR). In this way the centralized controller can steer low-latency
application traffic via the MPLS domain while other packets are sent via the Internet, dynamically
forwarding traffic across different network segments.

1.4.8. Advantages and Disadvantages


Burgess (2014) states that centralization is one of the key determinants of the business
success of SDN, because it allows for significant reductions in Operating Expenditure (Opex) and Capital
Expenditure (Capex).

Unfortunately, according to Reber (2015), centralized control plane simplifies


architecture, but this approach does not work well in the face of the need for high scalability in
real applications.
Since the configurations of individual flows are very detailed and may also contain
application-layer parameters, which could be a potential security risk, any centralized
system in a large network could be overloaded by the propagation of millions of these flows.
At the same time, in situations like network failures, the need for new paths, and thus the number
of flows, can increase dramatically. In this situation, a distributed control layer located
locally in each switch manages and scales better than a fully centralized system.
Another important factor to be considered before deciding to implement SDN is a delay
in packet forwarding (O’Reilly, 2014).

1.4.9. Deployment Approaches


According to Vissicchio et al. (2014), a more practical solution, but one still allowing a rather
detailed level of control, is the hybrid approach. The local, distributed control plane is
responsible for network virtualization, failover mechanisms, and the provisioning of new flows.
However, some flows are subjected to more thorough analysis and reconfiguration at the
central point; the results are then returned to the switches and subsequent updates are made
to them.
Another indirect solution is to use more than one SDN controller depending on network
size. In this way, the controllers can be placed closer to the devices that they manage. This leads
to shorter delays and allows more efficient control of the work of the switches while transferring
requests to the central control plane.
McNickle (2014) states that SDN can also be deployed using common protocols and
interfaces such as Border Gateway Protocol (BGP), Network Configuration Protocol
(NETCONF), Extensible Messaging and Presence Protocol (XMPP), Open Virtual Switch
Database Management Protocol (OVSDB) and Multiprotocol Label Switching Transport
Profile (MPLS-TP) as well as with Command Line Interface (CLI) or SNMP.

SDN uses User Datagram Protocol (UDP) tunnels which are very similar to Generic
Routing Encapsulation (GRE) tunnels, except that they can be dynamically switched on and
off. According to Wang et al. (2017), the effect of using tunnelling is a lack of transparency
of network traffic, which entails significant consequences such as serious difficulties in
troubleshooting network problems.

1.4.10. Controller Security


Theoretically, a hacker could gain unauthorized physical or virtual network access or
break security on an endpoint device connected to the SDN and then try to escalate the attack
to destabilize other network elements. This may be, for example, a type of Denial of Service
(DoS) attack.
Attackers can also use the underlying protocols to add new entries to flow tables,
modifying flows to allow a specific type of network communication that was previously
excluded from the network. In this way it is possible to initiate a flow that bypasses the traffic
control mechanisms and allows network communication through the firewall. If it is
possible to manage the traffic so that it passes through preferred network links,
then this can be used to capture network traffic and perform Man in the Middle (MITM) attacks.
A hacker can also eavesdrop on communication between the controller and network devices to
see what kind of transmission takes place and what kind of traffic is allowed on the network,
using this information for reconnaissance.
According to Millman (2015), we should use TLS to authenticate and encrypt
communications between network devices and the controller. Using TLS helps to authenticate
the controller and network devices, which prevents eavesdropping on, or spoofing of, legitimate
communications.

Hogg (2014) stated that attacks on SDN-specific protocols are another attack vector,
due to APIs exposed via Python, Java, C, REST, XML, and JSON, which hackers can potentially
exploit through their vulnerabilities to take control of the SDN via the controller. If the
controller does not implement any security measures against attacks on its APIs, then it is
possible for an attacker to create their own SDN rules and thus take control of the SDN environment.
To test the security vulnerabilities, we used the SDN Toolkit discussed by Picket
(2015) with the Floodlight and ODL controllers; however, we were not able to retrieve flows
from the controllers. We also investigated the HPE Aruba VAN controller and its self-signed
TLS certificate, which is used as an authentication token.

1.4.11. Traditional vs OF Forwarding


In a traditional environment, each switch has its own MAC Address Table, so if PC1
with MAC address 00:00:00:00:00:01 wants to send a packet to PC2 with MAC address
00:00:00:00:00:02, it must first send it to the first switch, which will look at its MAC Address
Table and, if no association for PC2 is found, flood the packet out of all other ports.

In OpenFlow, however, when a packet arrives on a port, the switch sends it to the
controller as per its Flow Table. The controller, in turn, runs an app which learns all of the MAC
addresses and where to forward each of them. In this way all learning happens on
the controller, but as soon as it knows where the devices are, it updates the Flow Tables on
the switches so that they forward packets independently, while only unknown
traffic is sent to the controller.
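The controller-side learning described above can be sketched in a few lines. This is an illustrative model (class and constant names are our own, not from any controller's API): each Packet-In teaches the controller where a source MAC lives, and unknown destinations are flooded until learned.

```python
FLOOD = "FLOOD"  # placeholder for the flood action

class LearningController:
    """Controller-side MAC learning: Packet-In events teach the controller
    which port each source MAC sits behind; known destinations get a port,
    unknown destinations are flooded."""

    def __init__(self):
        self.mac_to_port = {}  # (switch datapath id, mac) -> port

    def packet_in(self, dpid, in_port, src_mac, dst_mac):
        # Learn where the sender is reachable on this switch.
        self.mac_to_port[(dpid, src_mac)] = in_port
        # Forward out of the learned port, or flood if the destination
        # has not been seen yet.
        return self.mac_to_port.get((dpid, dst_mac), FLOOD)
```

Replaying the PC1/PC2 example: PC1's first packet is flooded, but once PC2 replies, both directions are forwarded out of a single learned port, and the controller would install matching flow entries so the switches handle subsequent packets themselves.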

1.4.12. Reactive vs Proactive


With reactive flow entries, the controller learns the port and MAC address after an event,
such as a Packet-In message sent by the switch when a request arrives.

Proactive flow entries are pre-programmed rules which exist before any request will be
sent as the controller tells the switch what to do, so flow entries would contain matches to
devices, their actions, and instructions.

2. Test Environment
All testing of the performance and compatibility of communication using the selected
network protocols was performed using physical equipment, namely three Cisco 2801 routers,
Hyper-V Virtual Machines (VMs) as guests with Ubuntu OS, and a
Mininet environment on a Windows Server 2016 host.

2.1. Cisco 2801
The Cisco 2801 routers had at least three Local Area Network (LAN) ports required to
perform the experiments.

2.1.1. Hardware Routers


To fulfil the requirements for our three “test-bed” routers, we had to install additional
1-Port Fast Ethernet layer 3 (HWIC-1FE) cards and upgrade the memory to load an IOS released
in October 2016, named 2801-adventerprisek9-mz.151-4.M12a, with MPLS support (Cisco,
2017).

2.1.2. Hardware Configuration


To configure the routers, we used the 64-bit Windows PuTTY client, version 0.69, and a
USB-to-RJ45 console cable, which requires additional OS drivers (version 2.12.26) for 64-bit
Windows (FTDI Chip, 2017).

2.2. Traffic Generators


To check the effectiveness of OpenFlow and MPLS, we had to use software traffic
generators such as MGEN and iPerf, which are made available as freeware.

2.2.1. MGEN
This tool can generate TCP and UDP traffic and then allows the relevant
statistics to be saved (U.S. Naval Research Laboratory, 2017). Multi-Generator (MGEN) provides a very
wide range of possibilities in the type of traffic generated, with which we were
able to plan test scenarios that conform to actual conditions in a working environment.
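As an illustration of how such scenarios are described (the addresses, ports, and rates below are placeholders, not the paper's actual test values), an MGEN script expresses flows as timed ON/OFF events with a traffic pattern:

```
# Start flow 1 at t=0s: UDP to 192.168.1.2 port 5000,
# PERIODIC [rate size] = 100 packets/s of 1024 bytes each
0.0 ON 1 UDP SRC 5001 DST 192.168.1.2/5000 PERIODIC [100 1024]
# Stop flow 1 after 30 seconds
30.0 OFF 1
```

Varying the pattern and rate in such scripts is what makes it possible to approximate realistic traffic mixes during the experiments.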

2.2.2. iPerf
The program is written in C++ and works in a client-server architecture. For our
performance studies, we generated streams of TCP and UDP traffic to observe the
network throughput, jitter, and the number of lost packets (iPerf, 2017).

We used iPerf3 for the TCP-related tests and iPerf2, with default buffer sizes, for most
of the UDP scenarios, because iPerf2 supports multiple client connections (iPerf,
2017) and works in conjunction with jPerf2.
In addition to the iPerf tools, we also installed jPerf2, a Graphical User Interface (GUI)
for iPerf2, as per the instructions by Rao (2016).
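As a sketch of how these tools were driven (host addresses and durations below are illustrative, not the paper's recorded test parameters), a typical measurement pairs a server and a client invocation:

```shell
# Receiver side: start an iPerf3 server
iperf3 -s

# Sender side: 30-second TCP throughput test towards the server
iperf3 -c 192.168.1.2 -t 30

# UDP with iPerf2: a 10 Mbit/s stream, reporting jitter and packet loss
iperf -c 192.168.1.2 -u -b 10M -t 30
```

The UDP reports provide the jitter and loss figures referred to above, while the TCP runs measure achievable throughput.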

2.3. Hyper-V
We did not use a bare-metal type 1 hypervisor, but software in which one of the elements
is a type 2 hypervisor called a Virtual Machine Monitor (VMM), used to run VMs. Hyper-V
software was used to host the physical machines in the virtual environment. It allowed us to
run different operating systems at the same time on one physical server without interfering
with the existing OS or the need to create independent partitions on a physical disk.

2.4. Mininet
Mininet is a system for virtualizing computer networks on a PC (Lantz et al., 2010).
Mininet is an emulator that is great for exploring and testing the capabilities of the SDN
architecture. It helped us to create virtual networks via sudo mn and to test OpenFlow and
SDN.

For this purpose, a NATSwitch vNIC was created in Hyper-V to act as a gateway for the
Mininet bridge, and tests were performed with the HPE Aruba VAN SDN controller.
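As a sketch of this workflow (the controller address below is a placeholder, not the lab's actual address), a Mininet topology can be pointed at a remote SDN controller directly from the mn command line:

```shell
# One switch, three hosts, managed by a remote controller over OpenFlow 1.3
sudo mn --topo single,3 \
        --controller=remote,ip=192.168.137.1,port=6633 \
        --switch ovsk,protocols=OpenFlow13

# Inside the Mininet CLI: verify connectivity between all hosts
mininet> pingall
```

Swapping the --topo argument (e.g. linear, tree, torus) reproduces the alternative topologies compared later in this chapter.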

2.5. Wireshark
Wireshark is a graphical network traffic analyser, also known as a “network sniffer”,
which allowed us to capture packets transmitted over specific network interfaces within ISO OSI
layers 2-7, together with their data frame protocols (Wireshark, 2017).

2.6. Controllers with Mininet


In this section, we investigated different remote controllers and their integration
with Mininet via OF13 (Open Networking Foundation, 2013) and OpenFlow version 1.0
(OF10), as well as the various Topos and terminology covered in Chapter 1.3. The main purpose
of this section was to compare various SDN controllers and their features, as well as the different
topologies which can be used to create an OF network.

We investigated controllers such as OpenDaylight, Floodlight, HPE Aruba
VAN, ONOS, POX and Ryu, and topologies such as Linear, Single, Tree and Torus. This allowed
us to see the operation of flow tables, as well as the port and flow entries in them, both within
the different GUIs and in the CLIs. It also provided valuable feedback on OpenFlow versions
and on the operation of Spanning Tree Protocol (STP) and Bridge Protocol Data Units (BPDUs),
which led to the ability to create our own customized Datacentre Topo.
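
For reference, the built-in Topos discussed above can be launched against a remote controller directly from the mn command line; a sketch is shown below (the controller IP address is a placeholder, not our lab address):

```
# Single switch with 3 hosts, OVS speaking OpenFlow 1.3, remote controller
sudo mn --topo=single,3 --switch=ovsk,protocols=OpenFlow13 \
        --controller=remote,ip=192.168.0.10,port=6633

# The other Topos investigated here swap only the --topo argument:
#   --topo=linear,4              four switches in a row, one host each
#   --topo=tree,depth=2,fanout=2 binary tree of switches
#   --topo=torus,3,3             contains loops, so it requires STP
```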

2.7. Test Methodology


When determining the subject matter and scope of this work, the decision was
made that any attempt to study the effectiveness and possibilities of the described network
protocols would take a “hybrid” approach, using physical routers together with virtual
environments such as Mininet and Hyper-V.

2.7.1. Software Routers


In this work, we decided to install Linux kernel version 4.12 (Kernel Newbies,
2017) on Ubuntu version 16.04.2 (Ubuntu, 2017); Linux has supported MPLS since version
4.1, released in June 2015.
Since we were limited in terms of available HW resources, we also used VMs
with Cisco CSR 1000V Series IOS version 3.12.2S (csr1000v-universalk9.03.12.00.S.154-2.S-
std), installed with the vendor's minimal requirements (Cisco, 2017), to test the scalability and
interoperability of the protocols in the network, which would not have been possible with
only three physical routers.

2.7.2. Hardware Routers


The Cisco network environment was configured to use either mpls static binding, to
implement hop-by-hop forwarding for neighbours which do not use the Label Distribution
Protocol (LDP) (Cisco, 2007), or a dynamic method of distribution with the OSPF protocol,
which assigns labels to routes (Cisco, 2005).
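
As an illustrative sketch only (not the exact lab configuration; the interface name, addresses and label value are placeholders), the two approaches look roughly like this in IOS:

```
! Dynamic distribution: LDP assigns labels to routes learned via OSPF
mpls label protocol ldp
router ospf 1
 network 0.0.0.0 255.255.255.255 area 0
!
interface FastEthernet0/0
 ip address 10.0.0.1 255.255.255.252
 mpls ip
!
! Static alternative for a neighbour that does not run LDP
mpls static binding ipv4 192.16.10.0 255.255.255.0 output 10.0.0.2 16
```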

2.7.3. Software Switches


To use the PC as a set of switches with the OpenFlow protocol, we used Ubuntu
16.04.2 LTS, codename “Xenial Xerus” (Ubuntu, 2017), with Linux kernel version 4.12.

3. Performance and Compatibility Tests


The experiments, which are intended to show the advantages and disadvantages of the
above-mentioned solutions, were divided into six main groups:

1. Checking for MPLS interoperability between the MPLS implementation for Linux and
Cisco IOS Multiprotocol Label Switching.

2. Comparison of the efficiency of the computer network, relative to standard IP routing, for
the solutions MPLS in Linux, Cisco IOS MPLS and OpenFlow, using parameters such as
throughput, delay (jitter), packet loss and Round-Trip Time (RTT).

3. Scaling up the network by adding additional nodes to the MPLS and OF environments.

4. Tests of QoS approaches within various MPLS and OF topologies.

5. The use of TE with the described technologies.

6. Explaining the possibilities for responding to the failure of certain parts of the network
while using both protocols.

3.1. Compatibility between MPLS in Linux and Cisco MPLS


This test proved that the Kildare router acting as the LSR is not able to correctly
pop the outgoing label while forwarding packets on the LSP to VM1’s GW, and thus it
will not be able to forward packets to Dublin’s directly connected network 192.16.10.0/24.

It also proved that MPLS support in Linux is not fully compatible, as Linux software
nodes acting as LSRs cannot strip the label before forwarding the packet to the next hop
within the MPLS network.
Therefore, we decided that in the remaining tests we would only investigate
the scenario where the Dublin router acts as the LSR and the remaining Linux MPLS-
enabled routers are configured as LERs.

Figure 1: MPLS Compatibility Topologies.

3.2. IP Performance
To test the performance of MPLS and OpenFlow, we compared them against each
other, as well as against a P2P connection and IP Forwarding with static routing, using the
topologies below:

Figure 2: P2P Connection on Internal vSwitch in Hyper-V.

Figure 3: IP Forwarding with Three Cisco 2801 Routers and Static Routing.

Figure 4: MPLS with Kildare and Laois as Linux LERs and Dublin as Cisco 2801 LSR.

Figure 5: Cisco MPLS with Three Cisco 2801 Routers with Dublin as LSR

Figure 6: OpenFlow Performance Topology with S1, S2 and S3 in Mininet.
The results obtained in the throughput experiments are presented in Figure 7, which allows
us to state that the use of OpenFlow provides slightly higher throughput than the P2P link
between two VMs. This is possibly because the controller makes the forwarding decision based
on the network port number, while both VMs would use their routing tables, which involves
additional processing and, in turn, delay. We also showed that MPLS support in Linux
kernel 4.12 with IPRoute2 is not effective in terms of speed between LERs, given its lowest
throughput results and highest StDev. The arithmetic mean of the four measurements made
for each method shows that, apart from this, most of the results are very close to each other.
We can also assume that the results obtained with iPerf3 are reliable for a sample of four
measurements, since the Standard Deviation (StDev) of the random variable is small for the
remaining test cases. It also shows that this IP traffic control technology operates
unpredictably if we compare its throughput with OpenFlow.

IP Technology    Throughput (Mbps)  Throughput (Kbps)  StDev (Mbps)  StDev (Kbps)
P2P              99.947             12493.375          1.358         169.750
IP Forwarding    94.332             11791.500          1.563         195.375
MPLS in Linux    0.011              1.413              10.879        1359.875
Cisco MPLS       92.814             11601.750          1.102         137.750
OpenFlow         99.948             12493.500          1.376         172.000

Figure 7: Network Throughput Depending on the IP Technology.
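
The mean and StDev columns above summarize four iPerf runs per technology. As a sketch of that computation (the sample values below are illustrative, not the measured data):

```python
import statistics

# Four illustrative iPerf throughput samples in Mbps (placeholder values,
# not the figures measured in this work)
samples = [99.8, 100.1, 99.9, 100.0]

mean = statistics.mean(samples)    # arithmetic mean of the four runs
stdev = statistics.stdev(samples)  # sample standard deviation

print(f"mean={mean:.3f} Mbps, stdev={stdev:.3f} Mbps")
```

A small StDev relative to the mean, as in the table, is what justifies treating a sample of only four measurements as reliable.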

The results in Figure 8, obtained during the delay test cases, present a clear advantage for
OpenFlow over the tested technologies, except for P2P connections established between two
endpoints. Since none of our tests with MPLS in Linux provided a summary of sent
datagrams, we need to invalidate the 0 ms results within our sample. We can tell that all
the technologies work quite similarly while one connection is established, because the jitter
values are not high and the StDev is below 1 ms. An interesting observation is that increasing
the number of parallel transmissions causes a significant increase in the jitter value: for IP
Forwarding and Cisco MPLS, jitter grew roughly tenfold with ten parallel senders. However,
this is probably because the server accepts datagrams as a group-send to a given port, which
in turn results in irregularity in how packets reach the destination. In terms of performance,
we can say that the optimal results were given by OpenFlow, although the iPerf server's
response was ten times higher than for Cisco MPLS. As a benchmark, we can assume that the
acceptable jitter for video and voice transmissions over an IP network must be below 30 ms
(Lewis and Pickavance, 2006). The results presented in Figure 8 are at least seven times
below that threshold value, except for the iPerf server's response to multiple requests, which
is notable considering the small size of the network on which the tests were conducted.

IP Technology        Jitter Client (ms)  Jitter Server (ms)  StDev Client (ms)  StDev Server (ms)
P2P                  0.069               0.029               0.087              0.037
IP Forwarding (p1)   0.190               0.192               0.046              0.057
IP Forwarding (p10)  1.841               1.115               4.467              1.088
MPLS in Linux (p1)   0.000               0.000               0.000              0.000
MPLS in Linux (p10)  0.000               0.000               0.000              0.000
Cisco MPLS (p1)      0.389               0.260               0.331              0.098
Cisco MPLS (p10)     4.137               4.135               1.013              1.065
OpenFlow (p1)        0.279               0.363               0.733              0.915
OpenFlow (p10)       0.070               42.986              0.067              305.028

Figure 8: Network Delay Depending on the IP Technology.

The most selective test case for measuring packet loss consisted of transmitting
small packets at high frequency, in such a way that the link load oscillated around 100 %. The
issue is the limited frequency at which datagrams can be sent before reaching saturation on
the endpoint.

This is shown in Figure 9 and Figure 10, where the number of generated packets
differs depending on the technology used in the 50B-Medium column. We also need to note
that these are average values from three consecutive measurements, and that the deviation in
subsequent samples for the three remaining test cases was very small, which makes those
tests less significant. However, the imperfections of MGEN5 were verified in terms of a
significant P2P link utilization of 96 %. In the remaining three cases, larger volumes of data
were sent out at lower frequencies, with all the IP technologies reported to be doing well
except for OpenFlow when the datagrams were 100 B and the rate was set to 6000 times per
second. This would be caused by a high rate of Packet-In messages to the controller, as
discussed by Zhihao and Wolter (2016). We can also see that MPLS in Linux reported the
lowest value of 44 % during the high-frequency test case, which identifies it as the slowest
performer, taking into consideration that the first test case is the most significant due to the
high variation between the results.

IP Technology    Packets       50B-Medium  100B-Medium  1000B-High  100B-Low
P2P              received      722.899     370.591      90.091      177.171
                 transmitted   750.000     375.000      90.000      180.000
IP Forwarding    received      359.021     372.638      90.065      180.730
                 transmitted   750.000     375.000      90.000      180.000
MPLS in Linux    received      330.304     368.782      90.093      178.272
                 transmitted   750.000     375.000      90.000      180.000
Cisco MPLS       received      740.579     375.005      90.101      180.730
                 transmitted   750.000     375.000      90.000      180.000
OpenFlow         received      625.459     367.162      87.066      4.675
                 transmitted   750.000     375.000      90.000      180.000

Figure 9: Packets Received in Comparison to Total Packets Transmitted.
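
The delivery ratios plotted in Figure 10 follow directly from the received/transmitted pairs in Figure 9; for the 50B-Medium case, the computation can be sketched as:

```python
# Packets received per technology in the 50B-Medium case (from Figure 9),
# against the common total of 750.000 transmitted.
received = {
    "P2P": 722.899,
    "IP Forwarding": 359.021,
    "MPLS in Linux": 330.304,
    "Cisco MPLS": 740.579,
    "OpenFlow": 625.459,
}
transmitted = 750.000

for tech, rx in received.items():
    ratio = 100.0 * rx / transmitted  # percentage correctly delivered
    print(f"{tech}: {ratio:.0f} % delivered")
```

This reproduces the 44 % figure quoted above for MPLS in Linux and the 96 % P2P link utilization used to check MGEN5.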

Figure 10: Correctly Delivered Data Depending on the IP Technology Used for Individual
Test Cases.
The results obtained during the RTT test cases, by sending packets of 78 B, allowed us to
establish that IP Forwarding and MPLS in Linux are the worst performers, taking into
consideration that during these tests the infrastructure load was quite small. Therefore, we
decided to use packets of 50 KB (51200 B), which allowed us to determine the slowest IP
technology, which appeared to be MPLS in Linux. This was investigated further with a
smaller 1 KB (1024 B) packet, which was the largest load possible in that situation and
introduced a higher delay than IP Forwarding and Cisco MPLS. The smallest delay, however,
was achieved using the OpenFlow protocol for both packet sizes, possibly because the
proactive approach to flow entries discussed in Chapter 1.4.12 was used. This proves that
MPLS in Linux delays are highest for larger packets, while OpenFlow performs nearly as
well as P2P, with an 85 % ratio in comparison to the other IP technologies.

IP Technology 78 B (ms) 50 KB (ms) 1 KB (ms)

P2P 0.548 1.031 0.375

IP Forwarding 1.581 11.787 NA

MPLS in Linux 1.425 NA 1.555

Cisco MPLS 1.155 11.465 NA


OpenFlow 0.566 1.192 NA
Figure 11: RTT Results in Milliseconds.
Figure 12: RTT Comparison in Percentage Values.

3.3. Scalability
The purpose of these scalability scenarios was to build similar networks with a mixture
of IP technologies, to prove that they can be easily expanded by adding extra nodes to the
topologies discussed throughout Chapter 3.1 and Chapter 3.2. It also allowed us to compare
whether the delay would favour the OF protocol over the MPLS solutions.

To build the network environments, the topologies below were used in the described
variants:

Figure 13: Three Cisco MPLS LSR Nodes and Two LER Nodes.

Figure 14: Three Cisco MPLS LSR Nodes and Two MPLS in Linux LER Nodes with use of
FRRouting (FRR).

Figure 15: Mininet OpenFlow Scaled-Up Topology in Mininet.

From the results in Figure 16 and Figure 17, we were able to draw the succinct conclusion
that OpenFlow outperformed all other IP technologies, while LDP implemented together with
OSPF and FRR on Linux provided better results than MPLS on Cisco routers.
This also proved that all the tested technologies can be easily scaled up within the
“test-bed”, no matter what routing method is used. However, in terms of manageability,
it is always easier to manage a dynamic protocol than static routing, as the topology
adapts to changes automatically, independent of the size of the network (Cisco, 2014).

IP Technology 78 B (ms) 50 KB (ms) 1 KB (ms)

IP Forwarding 2.040 12.610 2.452

Cisco MPLS 2.716 16.213 2.922

MPLS in Linux 2.199 14.816 2.697

OpenFlow 0.473 4.103 2.538

Figure 16: RTT Results for Scaled-Up Environment.

Figure 17: RTT Comparison in Percentage Values for Scaled-Up Environment.

3.4. QoS
3.4.1. MPLS
The Cisco topology in Figure 18 consisted of two CE routers, CSR1000V3 and
CSR1000V4; two PE routers, CSR1000V1 and CSR1000V2; and one Provider (P) Cisco
2801 router (Dublin). OSPF was enabled on all ISP devices with network 0.0.0.0
255.255.255.255 area 0, and routing between the PE and CE nodes was achieved with EIGRP.

We used MP-BGP to exchange CE labels between CSR1000V1 and CSR1000V2 with the
VRF “cust”, as well as with a Route Distinguisher (RD) and Route Target (RT) of 100:1.
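
The PE-side VRF and MP-BGP setup follows the usual IOS pattern; a sketch using the RD/RT from above (the BGP AS number and neighbour address are placeholders, not the lab values):

```
ip vrf cust
 rd 100:1
 route-target export 100:1
 route-target import 100:1
!
router bgp 65000
 neighbor 10.255.255.2 remote-as 65000
 address-family vpnv4
  neighbor 10.255.255.2 activate
  neighbor 10.255.255.2 send-community extended
```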

Figure 18: Two Cisco PE MP-BGP Nodes and Two CE EIGRP Nodes.
For our policies, we decided to use the File Transfer Protocol (FTP) and the Hypertext
Transfer Protocol (HTTP), connecting to servers built on VM2 with vsFTPd
3.0.3 on port 21 (Anderson, 2016) and Apache 2.4.18 (Ellingwood, 2017) on port 80.

To provide the QoS, we implemented a maximum FTP data transfer rate of 1024 Kbps (1.024
Mbps), with the same rate of guaranteed transfer for HTTP data.
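
In IOS terms, that pair of policies can be sketched with the Modular QoS CLI (the class-map and policy-map names are illustrative; matching could equally be done with ACLs on ports 20/21 and 80):

```
class-map match-all FTP-DATA
 match protocol ftp-data
class-map match-all HTTP
 match protocol http
!
policy-map CUST-QOS
 class FTP-DATA
  police 1024000 conform-action transmit exceed-action drop
 class HTTP
  bandwidth 1024
```

Here police takes a rate in bps (1024000 bps = 1024 Kbps), while bandwidth guarantees a rate in Kbps.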

In the Linux and FRR topology displayed in Figure 19, CSR1000V3 marked FTP-data
packets as DSCP EF and HTTP packets as DSCP AF41. Next, when packets reached
CSR1000V1 via the VPN, the markings were associated with EXP bits, together with their
policies, before entering the MPLS domain. After that, when the data left the PE and moved
across the VPN to the CE on the other side, it was again associated with the relevant DSCP
mappings and policies before reaching the destination.

Figure 19: MPLS-TE Tunnels with RSVP Topology.

We set up unidirectional tunnels from CSR1000V1 to Dublin and on to CSR1000V2
on a next-hop interface basis, as well as the way back from CSR1000V2 to CSR1000V1 via
Dublin. All routers also ran the OSPF protocol in area 0 to exchange their MPLS labels,
while the FRR devices used implicit-null labels for performance reasons; these were popped
on the CSRs and replaced with explicit-null labels within the MPLS domain or, in this
situation, sent via the TE tunnel with RSVP.
From the tests of the TE tunnel, we proved that the RSVP bandwidth parameter for TE
does not work the same way as a bandwidth limit on the interface. However, bandwidth set
on the tunnel resulted in expected values which were not less than 1024 Kbps. We also
observed that the bottom of the label stack was used for local MPLS domain traffic for the
explicit path to the tunnel endpoint, rather than for transport labels. To summarize, we
can acknowledge that MPLS-TE has no good mechanism to limit bandwidth unless there are
multiple tunnels to a destination combined with QoS policies to divide the packets into
classes. Our examples did not use ToS, as we can see on the captured packets in Figure 20 and
Figure 21, where they are marked as 0x00 or 0x10, which means routine, unclassified
traffic for QoS (Digital Hybrid, 2012).

Figure 20: Wireshark Capture of VPN Label for FTP on NIC1.

Figure 21: Wireshark Capture of VPN Label for HTTP on NIC3.

3.4.2. OpenFlow
In OF we explored REST to configure QoS based on the type of data and on a
bandwidth limit per flow, with the use of DiffServ as well as with the Meter Table and a CPqD
SW switch (CPqD GitHub, 2018).
To perform the experiments, Linux Hierarchical Token Buckets (HTBs), as discussed
by Benita (2005), were used, together with the protocols and ports specified in the figures
below for the test cases. Each QoS table refers to a separate OF topology, and all the tests
were executed with the Ryu SDN controller and OF13.
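
Ryu exposes this configuration through its rest_qos application; a sketch of the JSON payloads involved is shown below (the switch port name, host address and exact field names are assumptions based on Ryu's rest_qos documentation, and queue sizes mirror the TCP rows of Figure 22a):

```python
import json

# Queue setup for the FTP/Web scenario, shaped for Ryu's rest_qos app
# (POSTed to /qos/queue/{switch_id}); port name is a placeholder.
queue_setup = {
    "port_name": "s1-eth1",
    "type": "linux-htb",
    "max_rate": "10000000",
    "queues": [
        {"max_rate": "1000000"},                         # queue 0: FTP cap
        {"min_rate": "5000000", "max_rate": "10000000"}  # queue 1: HTTP guarantee
    ],
}

# Rule steering HTTP traffic into queue 1 (POSTed to /qos/rules/{switch_id})
rule = {
    "match": {"nw_dst": "10.0.0.1", "nw_proto": "TCP", "tp_dst": "80"},
    "actions": {"queue": "1"},
}

print(json.dumps(queue_setup))
print(json.dumps(rule))
```

In the actual experiments, such payloads are sent to the controller's REST endpoint (port 8080 by default) rather than printed.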

Queue Type Port Max Rate (bps) Min Rate (bps)

1 UDP 5000 1000000 500000

0 TCP 21 1000000 N/A

1 TCP 80 10000000 5000000


Figure 22a: Linux HTB Queues for FTP and Web Server Scenario.

Figure 22b: Topo for per Flow with FTP and Web Server.
Queue Type Port DSCP Max Rate (bps) Min Rate (bps)

0 UDP 5000 48 1000000 N/A

1 UDP 21 18 1000000 300000

2 UDP 80 36 1000000 500000

Figure 23a: Linux HTB Queues and DSCP Mapping for Cloud Scenario.

Figure 23b: Topo for per Class with DSCP QoS.

Queue DSCP Max Rate (bps) Min Rate (bps)

0 0 1000000 100000

1 18 1000000 300000

2 36 1000000 600000

Figure 24a: Linux HTB Queues and DSCP Mapping for Unsliced QoS Topo.

Figure 24b: Custom QoS Topo without Separation and Meter Table.

Figure 24c: Custom QoS Topo with Separation and OFSoftSwitch13.

With the experiments on the Meter Table, we showed that it is possible to use the
external controller to remark the traffic while some other app running on the NBI takes care
of forwarding: OF13 is responsible for QoS rule injection via the REST API, and
OFSoftSwitch13 takes over the role of remarking our DSCP classes bound to specific
meters.

The above tests proved that QoS in MPLS and OpenFlow can be achieved with the use
of traffic classification and DSCP markings. MPLS-TE does not have any in-built mechanism
to limit the bandwidth on specific interfaces, as opposed to limiting the overall bandwidth
available to the customer on the whole VPN channel, while OF can use HTBs and port
numbers to place traffic into different queues.

3.5. Traffic Engineering


3.5.1. MPLS
The TE test case topology below, built on a mixture of real HW and virtual routers with
MPLS, proved that it is possible to effectively use Linux and FRR together with Cisco
equipment, using PBR to perform MPLS TE with LDP and RSVP so that traffic is routed via
tunnels depending on the protocol type.

Figure 25a: MPLS TE Topology.

        Tunnel0    Tunnel1
DSCP    EF         CS1
EXP     5          1
ToS     184        32

Figure 25b: MPLS TE Topology DSCP to EXP Mappings.
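
The DSCP-to-ToS and DSCP-to-EXP mappings in Figure 25b follow directly from the bit layout of the IP ToS byte: the ToS value is the 6-bit DSCP shifted left by two (with zeroed ECN bits), and the 3-bit EXP value used here corresponds to the DSCP's top three (class selector) bits. A quick check in Python:

```python
# DSCP code points used in the tunnels: EF = 46, CS1 = 8
def dscp_to_tos(dscp):
    """ToS byte = DSCP in the top six bits, ECN bits zero."""
    return dscp << 2

def dscp_to_exp(dscp):
    """EXP mapping used here: the DSCP's top three (class selector) bits."""
    return dscp >> 3

for name, dscp in (("EF", 46), ("CS1", 8)):
    print(name, "ToS:", dscp_to_tos(dscp), "EXP:", dscp_to_exp(dscp))
# EF   -> ToS 184, EXP 5
# CS1  -> ToS 32,  EXP 1
```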

3.5.2. OpenFlow
In the OF tests we used the non-commercial controller Floodlight and the commercial
HPE controller, with OF13 and a custom topology using STP in the core, to
implement custom flow entries in the flow table as per the scenarios below:

Figure 26: OF Topology with Floodlight with TE.

Figure 27: OF Topology with HPE Aruba VAN with TE.

Method Without TE (ms) With TE (ms)

Mean 0.136 0.184

StDev 0.060 0.112


Figure 28: Comparison of Delay without and with TE.

Method  SDN Controller  Without TE (ms)  With TE (ms)
Mean    Floodlight      0.136            0.184
Mean    HPE             0.126            0.130
StDev   Floodlight      0.060            0.112
StDev   HPE             0.043            0.073

Figure 29: Comparison of Delay without and with TE between Floodlight and HPE
Controllers.

From the results for the Floodlight controller in Figure 28, we can observe that with TE
the delay is higher by 35 %, as expected, due to a route to the destination that is longer by
one hop; the initial route had four hops between the hosts.

By comparing both SDN controllers in Figure 29, we can see that HPE
performs better than Floodlight, as both scenarios, with and without TE, resulted in a lower
mean and StDev for the jitter parameter. The HPE controller appeared to be 7 % faster
without TE and 29 % more efficient with TE in comparison to the Floodlight controller, with
a difference of one hop between the client and server. This could be because HPE is a
commercial controller; in any case, it illustrates how different SDN implementations can
impact network performance on a larger scale.
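
In Floodlight, custom flow entries of the kind used for TE here can be installed through its Static Flow Pusher REST module; a sketch of one such entry follows (the DPID, ports, flow name and exact endpoint path are assumptions based on Floodlight's documentation, not our lab values):

```python
import json

# One static flow entry forcing traffic out of a chosen port, shaped for
# Floodlight's Static Flow Pusher (POSTed to /wm/staticflowpusher/json).
flow_entry = {
    "switch": "00:00:00:00:00:00:00:01",  # placeholder DPID
    "name": "te-path-s1",                 # illustrative flow name
    "priority": "32768",
    "in_port": "1",
    "active": "true",
    "actions": "output=2",                # steer traffic via the TE path
}

print(json.dumps(flow_entry))
```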

3.6. Failover
First, we tested MPLS using the TE topology (Figure 25a), and then OF with the
custom Datacentre Topo, as seen in Figure 30.

Figure 30: HPE Aruba VAN Controller and New Flow Path after vNIC Failure.
ICMP requests were successful, and no major delay was identified during the link
failure while the interface went down, similar to the scenario in which we tested failover
with MPLS on Cisco routers and Linux FRR nodes.

We proved that, in the situations discussed above, the failover mechanism operates
correctly. The reaction time of the OF-based solution depends entirely on the controller's
capability to learn the DPID, or on the number of manually entered flow entries in the flow
tables and their priorities. In the case of Cisco devices, checking the connection status is
entirely the responsibility of the IOS system. It is exceptionally efficient at changing the
packet forwarding route when the main connection is restored to its pre-failure state.
However, we noticed that it takes much longer to diagnose that the transmission channel
is not working properly.

3.7. Summary of Tests


Chapter 3 was the most practical part of this research, as it covered the configurations
and results from a multitude of test cases aimed at investigating the hypothesis
made at the beginning of this work.

The experiments uncovered that Linux nodes implementing MPLS cannot pop an
outgoing label on the LSP when acting as LSRs, that their throughput was the lowest between
LERs, and that packet loss was high for small packets, as was the delay for large packets.
Linux and FRR, in the mixed approach with Cisco nodes, resulted in lower response times in
comparison to pure hardware, and were also fully compatible when creating QoS policies as
LERs and during the TE tests.

The deployed Cisco HW, without the MPLS in Linux nodes, was naturally fully compatible
when exchanging label information; throughput and delay were lower than in OF, and the
number of packets lost was lower, but the response times were still much higher for small
packets than with OF. Interoperability of the protocol, QoS and TE were all achievable, but
only after long and complex configuration of the nodes, which requires wide knowledge
from the network administrator.

OpenFlow, in turn, achieved throughput on a par with even the P2P link, with only slightly
higher delays, and it outperformed all the remaining tested technologies. It did perform worse
with large volumes of data during the packet-loss tests, but it achieved the smallest response
times for both small and large packet sizes. The scalable topology proved that it is possible to
scale up network resources with minimal configuration, while the QoS experiments with the
Ryu controller provided an insight into per-flow policies, the mapping of QoS classes to
DSCP values, and traffic remarking with the Meter Table. TE in OF, tested with the
Floodlight and HPE Aruba VAN controllers on the scaled-up topology, proved that SDN
caters for centralized management to program the flow of data, while it also has a mechanism
for link failover which responds rapidly after detecting that a DPID is no longer available.

4. Conclusions and Future Work


The first important finding arising from this work concerns the compatibility of two
different implementations of the MPLS protocol. We verified that it is possible
to provide internet services over a heterogeneous network using both routers based on MPLS
in Linux and Cisco hardware routers. However, during the tests with a Linux router acting as
LSR, the Kildare node could not pop the outgoing label to forward the packets on the LSP.
This proved that MPLS support in Linux is not fully compatible when a software router acts
as an LSR, whereas no issues were identified when the same node acted as an LER. Cisco
devices had no issues during testing whether acting as LER or LSR, which makes them fully
interoperable with MPLS.

A very interesting observation about OpenFlow was made in one of the
tests described in Chapter 3.2.2.1. The results presented there show that OF flow table
operations are much faster than a lookup in the routing table when deciding to send the packet
to the next node on the route. The total delay using MPLS in Linux and Cisco is, however, only
slightly lower than in the case of IP forwarding, because the time gained during the transition
through the LSR is partly lost at the LER nodes: the operation of adding and removing the
MPLS label takes longer than selecting the route based on the routing table. A test
environment consisting of a larger number of nodes could have shown the MPLS protocol in
a better light; however, due to the limitations of the equipment available in the laboratory, this
was impossible.
OF also appeared far easier to scale than MPLS, as adding additional nodes only
involved altering the script, since the controller takes over the flow processing, while for
MPLS this process requires configuration of each node individually. In terms of compatibility
of the scaled-up infrastructure in Chapter 3.3, LDP on both the Cisco and FRR Linux nodes
functioned correctly, but the Linux implementation resulted in lower delays, while OF was
invariably the fastest.
Moving away from throughput, the work also presented examples of the use of
OpenFlow and MPLS in the fields of QoS and TE. The first issue was the transmission of data
over arbitrarily selected routes: the assumption was that the flow paths leading to one target
point would depend on the source generating the traffic. The experiments described in detail
in Chapter 3.4 and Chapter 3.5 showed that each of the technologies studied can deliver the
expected results. It was possible to use traffic classification and DSCP markings with both
technologies to provide QoS, but only OF has a mechanism which can be used to limit the
bandwidth, using Linux HTBs and port numbers to move packets into different queues. The
major TE benefit identified in OF comes from the centralization of management, which
removes the administrator's burden of setting up tunnel end-points: in OF, flow paths are
simply programmed on the controller with flow entries pointing at specific ports on each
switch, as discussed in Chapter 3.5.2.
The last topic of the paper was the question of securing a computer network against the
effects of a sudden connection failure. Chapter 3.6 shows what possibilities MPLS on Linux
and Cisco, as well as the OF protocol itself, give us in this respect. Both solutions were
effective, but OF seems better than the Cisco or Linux nodes, because failure detection takes
place in the centralized external SDN controller. The administrator is only required to
properly configure the backup flow entries, or simply to use the learning capabilities of the
deployed controller and the apps running on it.

Due to serious compatibility problems with the software traffic generators used, a
hardware traffic generator such as the Open Source Network Tester (OSNT) developed by
Antichi (2017) would be beneficial for future research on the discussed protocols.
Modern services based on data obtained from IoT systems require efficient computer
networks that meet specific QoS requirements, such as very short delays in data transmission.
A new approach to the creation and management of network infrastructure with the use of
SDN and OF can face this challenge. In these terms, we could explore the possibility of
creating ecosystems of SDN network infrastructure and cloud computing whose main
purpose would be to automatically control the transmission of data obtained from IoT
systems to meet the requirements of end-users.

List of References
Abinaiya, N. and Jayageetha, J. (2015) ‘A Survey On Multi Protocol Label Switching’,
International Journal of Technology Enhancements and Emerging Engineering Research,
vol. 3, no. 2, pp. 25–28. Available at: http://www.ijteee.org/final-print/feb2015/A-Survey-On-
Multi-Protocol-Label-Switching.pdf (Accessed: 4 June 2017).
Anderson, M. (2016) ‘How To Set Up vsftpd for a User's Directory on Ubuntu 16.04’,
DigitalOcean Tutorial, September 2016. Available at:
https://www.digitalocean.com/community/tutorials/how-to-set-up-vsftpd-for-a-user-s-
directory-on-ubuntu-16-04 (Accessed: 19 November 2017).
Antichi, G. (2017) Open Source Network Tester. Available at: http://osnt.org (Accessed: 25
February 2018).
Benita, Y. (2005) ‘Kernel Korner - Analysis of the HTB Queuing Discipline’, Linux Journal,
January 2005. Available at: http://www.linuxjournal.com/article/7562 (Accessed: 12 January
2018).
Burgess, J. (2008) ‘ONOS (Open Network Operating System)’, Ingram Micro Advisor Blog,
August 2008. Available at: http://www.ingrammicroadvisor.com/data-center/7-advantages-of-
software-defined-networking (Accessed: 5 June 2017).
BW-Switch (2016) ICOS AND LINUX SHELL MANAGEMENT. Available at: https://bm-
switch.com/index.php/blog/icos-linux-shell/ (Accessed: 21 December 2017).
Chiosi, M., Clarke, D., Willis, P., Reid, A., Feger, J., Bugenhagen, M., Khan, W., Fargano,
M., Dr. Cui, C., Dr. Deng, H., Benitez, J., Michel, U., Damker, H., Ogaki, K., Matsuzaki, T.,
Fukui, M., Shimano, K., Delisle, D., Loudier, Q., Kolias, C., Guardini, I., Demaria, E.,
Minerva, R., Manzalini, A., Lopez, D., Salguero, F., J., R., Ruhl, F., Sen, P. (2012) ‘Network
Functions Virtualisation - An Introduction, Benefits, Enablers, Challenges & Call for Action’,
SDN and OpenFlow World Congres, October 2012. Available at:
https://portal.etsi.org/NFV/NFV_White_Paper.pdf (Accessed: 26 December 2017).
Cisco (2002) Multiprotocol label switching (MPLS) on Cisco Routers. Available at:
http://www.cisco.com/c/en/us/td/docs/ios/12_0s/feature/guide/fs_rtr22.html (Accessed: 12
February 2017).
Cisco (2005) MPLS Label Distribution Protocol (LDP). Available at:
https://www.cisco.com/c/en/us/td/docs/ios/12_4t/12_4t2/ftldp41.pdf (Accessed: 19 August
2017).
Cisco (2007) MPLS Static Labels. Available at:

36
http://www.cisco.com/c/en/us/td/docs/ios/mpls/configuration/guide/15_0s/mp_15_0s_book/m
p_static_labels.pdfz6rwXrSUmV2tLMIZnKHQ&sig2=g0xxUdu4Je2R-4V98V5NbA
(Accessed: 18 February 2017).
Cisco (2014) Cisco Networking Academy's Introduction to Routing Dynamically. Available
at: http://www.ciscopress.com/articles/article.asp?p=2180210&seqNum=5 (Accessed: 12
November 2017).
Cisco (2017) ‘Products & Services / Routers’, Cisco Cloud Services Router 1000V Series,
October 2017. Available at: https://www.cisco.com/c/en/us/products/routers/cloud-services-
router-1000v-series/index.html#~stickynav=1 (Accessed: 6 November 2017).
Cisco (2017) IOS Software-15.1.4M12a. Available at:
https://software.cisco.com/download/release.html?mdfid=279316777&flowid=7672&softwar
eid=280805680&release=15.1.4M12a&relind=AVAILABLE&rellifecycle=MD&reltype=late
st (Accessed: 12 February 2017).
Cisco Support (2017) Release Notes for the Catalyst 4500-X Series Switches, Cisco IOS XE
3.10.0E. Available at:
https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst4500/release/note/ol -310xe-
4500x.html (Accessed: 3 January 2018).
CPqD GitHub (2018) OpenFlow 1.3 switch. Available at:
https://github.com/CPqD/ofsoftswitch13 (Accessed: 12 January 2018).
Digital Hybrid (2012) Quality of Service (QoS) -- DSCP TOS CoS Precedence Conversion
Chart. Available at: https://my.digitalhybrid.com.au/knowledgebase/201204284/Quality-of-
Service-QoS----DSCP-TOS-CoS-Precedence-Conversion-Chart.html (Accessed: 10
December 2017).
Ellingwood, J. (2017) ‘How To Install the Apache Web Server on Ubuntu 16.04’,
DigitalOcean Tutorial, May 2017. Available at:
https://www.digitalocean.com/community/tutorials/how-to-install-the-apache-web-server-on-
ubuntu-16-04 (Accessed: 19 November 2017).
FTDI Chip (2017) D2XX Drivers. Available at: http://www.ftdichip.com/Drivers/D2XX.htm
(Accessed: 2 July 2017).
Goransson, P. and Black, C. (2014) Software Defined Networks: A Comprehensive Approach.
United States: Morgan Kaufmann Publishers Inc.
Gupta, S.N. (2013) ‘Next Generation Networks (NGN)-Future of Telecommunication’,
International Journal of ICT and Management, 1(1), pp. 32–35.
Hogg, S. (2014) ‘SDN Security Attack Vectors and SDN Hardening’, Network World,
October 2014. Available at: http://www.networkworld.com/article/2840273/sdn/sdn-security-
attack-vectors-and-sdn-hardening.html (Accessed: 11 June 2017).
HPE Support (2012) HP Switch Software OpenFlow Support. Available at:
https://support.hpe.com/hpsc/doc/public/display?sp4ts.oid=3437443&docLocale=en_US&docId=emr_na-c03170243
(Accessed: 3 January 2018).
Internet Live Stats (2017) Number of Internet users. Available at:
http://www.internetlivestats.com/internet-users/ (Accessed: 12 February 2017).
iPerf (2017) Change between iPerf 2.0, iPerf 3.0 and iPerf 3.1. Available at:
https://iperf.fr/iperf-doc.php (Accessed: 18 February 2017).
iPerf (2017) What is iPerf / iPerf3. Available at: https://iperf.fr (Accessed: 18 February 2017).
Kernel Newbies (2015) Linux 4.1. Available at: https://kernelnewbies.org/Linux_4.1
(Accessed: 22 July 2017).
Kernel Newbies (2017) Linux 4.12. Available at: https://kernelnewbies.org/Linux_4.12
(Accessed: 22 July 2017).
Lantz, B., Heller, B., McKeown, N. (2010) ‘A network in a laptop: rapid prototyping for
software-defined networks’, Hotnets-IX: Proceedings of the 9th ACM SIGCOMM Workshop
on Hot Topics in Networks, Article No. 19. Available at:
http://dl.acm.org/citation.cfm?id=1868466 (Accessed: 2 July 2017).
Leu, J.R. (2013) MPLS for Linux. Available at: https://sourceforge.net/projects/mpls-linux/
(Accessed: 12 February 2017).
Lewis, C. and Pickavance, S. (2006) ‘Implementing Quality of Service Over Cisco MPLS
VPNs’, Cisco Press Article, May 2006. Available at:
http://www.ciscopress.com/articles/article.asp?p=471096&seqNum=6 (Accessed: 29 October
2017).
McNickle, M. (2014) ‘Five SDN protocols other than OpenFlow’, TechTarget, August 2014.
Available at: http://searchsdn.techtarget.com/news/2240227714/Five-SDN-protocols-other-
than-OpenFlow (Accessed: 5 June 2017).
Millman, R. (2015) ‘How to secure the SDN infrastructure’, Computer Weekly, March 2015.
Available at: http://www.computerweekly.com/feature/How-to-secure-the-SDN-infrastructure
(Accessed: 11 June 2017).
Mishra, A.K. and Sahoo, A. (2007) ‘S-OSPF: A Traffic Engineering Solution for OSPF Based
Best Effort Networks’, Piscataway, NJ: IEEE, pp. 1845–1849.
Open Networking Foundation (2013) OpenFlow Switch Specification Version 1.3.2.
Available at: https://3vf60mmveq1g8vzn48q2o71a-wpengine.netdna-ssl.com/wp-
content/uploads/2014/10/openflow-spec-v1.3.2.pdf (Accessed: 12 February 2017).
Open Networking Foundation (2015) OpenFlow Switch Specification Version 1.5.1.
Available at: https://www.opennetworking.org/wp-content/uploads/2014/10/openflow-switch-
v1.5.1.pdf (Accessed: 12 February 2017).
OpenCORD (2017) Specs. Available at: https://opencord.org/specs (Accessed: 18 December
2017).
OpenFlow (2011) Create OpenFlow network with multiple PCs/NetFPGAs. Available at:
http://archive.openflow.org/wp/deploy-labsetup/ (Accessed: 23 July 2017).
OpenFlow (2011) View source for Ubuntu Install. Available at:
http://archive.openflow.org/wk/index.php?title=Ubuntu_Install&action=edit (Accessed: 18
February 2017).
O'Reilly, J. (2014) ‘SDN Limitations’, Network Computing, October 2014. Available at:
https://www.networkcomputing.com/networking/sdn-limitations/241820465 (Accessed: 5
June 2017).
Partsenidis, C. (2011) ‘MPLS VPN tutorial’, TechTarget, June 2011. Available at:
http://searchenterprisewan.techtarget.com/tutorial/MPLS-VPN-tutorial (Accessed: 4 June
2017).
Pica8 (2017) PicOS. Available at: http://www.pica8.com/products/picos (Accessed: 26
December 2017).
Pickett, G. (2015) ‘Abusing Software Defined Networks’, DefCon 22 Hacking Conference,
August 2015, Rio Hotel & Casino in Las Vegas. Available at:
https://www.defcon.org/html/links/dc-archives/dc-22-archive.html (Accessed: 10 January
2018).
Quagga (2017) Quagga Routing Suite. Available at: http://www.nongnu.org/quagga/
(Accessed: 21 December 2017).
Rao, S. (2016) ‘How to install & use iperf & jperf tool’, Linux Thrill Tech Blog, April 2016.
Available at: http://linuxthrill.blogspot.ie/2016/04/how-to-install-use-iperf-jperf-tool.html
(Accessed: 18 February 2017).
Reber, A. (2015) ‘On the Scalability of the Controller in Software-Defined Networking’, MSc
in Computer Science, University of Liege, Belgium. Available at:
http://www.student.montefiore.ulg.ac.be/~agirmanr/src/tfe-sdn.pdf (Accessed: 5 June 2017).
Rosen, E., Viswanathan, A., Callon, R. (2001) ‘Multiprotocol Label Switching Architecture’,
Internet Engineering Task Force, January 2001. Available at:
https://tools.ietf.org/html/rfc3031.html (Accessed: 12 February 2017).
Salisbury, B. (2013) ‘OpenFlow: SDN Hybrid Deployment Strategies’, Brent Salisbury's
Blog, January 2013. Available at: http://networkstatic.net/openflow-sdn-hybrid-deployment-
strategies/ (Accessed: 5 June 2017).
Simpkins, A. (2015) ‘Facebook Open Switching System ("FBOSS") and Wedge in the open’,
Facebook Article, March 2015. Available at:
https://code.facebook.com/posts/843620439027582/facebook-open-switching-system-fboss-
and-wedge-in-the-open/ (Accessed: 21 December 2017).
Smith, B.R. and Garcia-Luna-Aceves, J.J. (2008) ‘Best Effort Quality-of-Service’, St. Thomas,
U.S. Virgin Islands: IEEE.
SnapRoute (2017) Welcome to FlexSwitch from SnapRoute. Available at:
http://docs.snaproute.com/index.html (Accessed: 21 December 2017).
U.S. Naval Research Laboratory (2017) Multi-Generator (MGEN). Available at:
https://www.nrl.navy.mil/itd/ncs/products/mgen (Accessed: 18 February 2017).
Ubuntu (2017) Package: mininet (2.1.0-0ubuntu1) [universe]. Available at:
https://packages.ubuntu.com/trusty/net/mininet (Accessed: 2 July 2017).
Ubuntu (2017) ReleaseNotes. Available at:
https://wiki.ubuntu.com/XenialXerus/ReleaseNotes#Official_flavour_release_notes
(Accessed: 18 February 2017).
Vissicchio, S., Vanbever, L., Bonaventure, O. (2014) ‘Opportunities and Research Challenges
of Hybrid Software Defined Networks’, ACM SIGCOMM Computer Communication Review,
vol. 44, no. 2, pp. 70-75. Available at: http://dl.acm.org/citation.cfm?id=2602216 (Accessed:
5 June 2017).
Wang, M., Chen, L., Chi, P., Lei, C. (2017) ‘SDUDP: A Reliable UDP-based Transmission
Protocol over SDN’, IEEE Access, vol. PP, no. 99, pp. 1-13. Available at:
http://ieeexplore.ieee.org/document/7898398/ (Accessed: 5 June 2017).
Wireshark (2017) What is a network protocol analyzer? Available at: https://wireshark.com
(Accessed: 23 July 2017).
Zhihao, S. and Wolter, K. (2016) ‘Delay Evaluation of OpenFlow Network Based on
Queueing Model’, ResearchGate Publication, August 2016. Available at:
https://www.researchgate.net/publication/306397961_Delay_Evaluation_of_OpenFlow_Network_Based_on_Queueing_Model
(Accessed: 19 March 2018).