Research Paper
X00088029
1. Introduction
Nowadays it is hard to imagine a computer without a network connection. The most
common network, and one that is growing at a truly breathtaking speed, is the Internet which, according
to Internet Live Stats (2017), has grown from 1 billion users in 2005 to more than 3 billion. We
must highlight that users currently access the Internet not only from their Personal Computers
(PCs) but also from their mobile phones, tablets, cameras and even household appliances and
cars.
This exponentially growing number of connections means that computer networks, within
the field of Information Technology (IT), have become integral to research and experimentation
in order to keep pace with, and allow for, the Internet's rapid growth.
This thesis follows the trend of these studies: we conducted a series of
experiments and tests designed to examine the operational efficiency and
manageability of network traffic with MPLS and OpenFlow. These technologies are expected
to help solve problems such as the transmission of multimedia in real time, the provision of services that
meet Quality of Service (QoS) criteria, the construction of scalable virtual private
networks, and efficient and effective Traffic Engineering (TE).
Subsequent chapters describe in detail the scope of the work performed and the lessons learned
from practical scenarios, and they also point to the potential use of these results. The next
sections contain a description of network protocols such as OpenFlow and MPLS, as well
as SDN and its operation within OpenFlow boundaries. They also include an explanation of why
we should use OpenFlow as an extension of IP technology, what the security risks are, and how
to deploy such a protocol. This is followed by chapter two, which includes a concise description of
the environment used to perform the study of protocol effectiveness. This section also presents
a detailed walkthrough of the environment set-up based on available Cisco Internetwork
Operating System (IOS) routers, Ubuntu with a software router implementation and Hyper-V
virtualization technology, and how it can be used to test the performance and capabilities
of the MPLS and OpenFlow protocols, such as scalability, QoS, TE and link failover. It also
includes an explanation of various OpenFlow topologies and SDN controllers to highlight their
operational differences. The third chapter is an accurate description of the experiments with the
obtained results and a brief commentary on them. The last, fourth chapter is dedicated
to presenting the findings concluded from the work carried out, as well as the future work
proposed for further research. Due to the complexity of the research, it also includes a list of
appendices with the scripts and commands used to configure the devices.
1.1. Limitations of IP
While using IP technology, it is not possible to provide good performance for data
services with guaranteed quality (Smith and Accedes, 2008). This is a serious problem if we
want to deliver high-quality multimedia through a network while satisfying real-time
conditions. To solve this problem, many began to work on an approach in which data, voice, and video are
broadcast via telecommunications networks in a unified way, in the form of packets (Gupta,
2012). This, however, requires a modification of the network architecture, which is generally
referred to as a Next Generation Network (NGN).
Another issue which we cannot solve with the use of IP technology is the creation of
effective mechanisms to control and manage the movement of packets across the network,
so-called Traffic Engineering (TE). According to Mishra and Sahoo (2007), this is due to the
restrictions of dynamic routing protocols, e.g. Open Shortest Path First (OSPF), which
do not allow arbitrary data flow paths to be defined.
These problems, however, can be solved using MPLS or OpenFlow.
1.2. MPLS
In principle, MPLS is not supposed to substitute any communication protocol already in use,
including the most common, IP; rather, it should extend them (Rosen et al., 2001). MPLS
can work with network technologies such as TCP/IP, ATM, Frame Relay as well as
Synchronous Optical Networking (SONET).
MPLS is called a layer 2.5 protocol in the ISO Open Systems Interconnection (OSI)
model because it operates between the data link and network layers. According to the Requests for
Comments (RFCs), it combines the advantages of the data link layer, such as performance and
speed, with those of the network layer, such as scalability.
With MPLS we get a richer set of tools for network management and TE, which allows
the transmission of packets through arbitrarily specified routes that cannot be defined
with the use of classical routing protocols.
According to Abinaiya and Jayageetha (2015), an MPLS label is attached by the Label Edge Router (LER)
at the time the packet enters a network using this protocol. A 32-bit label is added
between the second layer (Ethernet header) and the third layer (IP header).
MPLS technology allows us to build a stack of labels, where path-switching operations
are always performed on the top-level VPN labels for defined tunnels. It is possible to use TE
with MPLS because the list of hops through which the packet is routed is determined by the
LER at the time the packet enters the MPLS domain. This allows traffic to be routed differently than it
would be by classical routing protocols, as we can select a path with reserved resources
that meet QoS requirements.
According to Partsenidis (2011), with MPLS we can also easily set up tunnels between
selected network nodes that can be used to create VPNs and to guarantee logical separation
between different VPNs using one common network infrastructure.
1.3. OpenFlow
1.3.1. OF Protocol
OpenFlow (2011) is an open-source project which was developed in the first decade of
the twenty-first century at Stanford University and the University of California, Berkeley. Its first official
specification was announced in late 2009 and is designated as version 1.0.0. Currently,
further work on this protocol is carried out by the Open Networking Foundation, and the latest
version, announced in March 2015, is 1.5.1 (Open Networking Foundation, 2015).
The use of OpenFlow provides benefits similar to those offered by MPLS. We receive a rich set
of tools that let us engineer traffic to optimize transmission, ensuring adequate
throughput and limiting delays or the number of connections through which the packets are routed.
This protocol introduces the concept of traffic flows, which caters for both network
virtualization and separation of traffic.
OpenFlow is a protocol operating in the second layer of the ISO OSI model, which
distinguishes it from the MPLS protocol, which works in both the data link and the network
layers.
According to Goransson and Black (2014), three components are needed to create a
network based on OpenFlow technology: a switch that supports this protocol, a controller and
a communication channel which is used by OpenFlow protocol through which the controller
and switch can communicate.
Flow tables consist of three major elements: header fields, which are created from the
packet header; counters, which hold statistical information such as the number of packets and
bytes sent and the time since the last packet matched the rule; and action fields, which specify the
way the packet is processed. Entries are added to the table via the controller. They specify how the
switch should behave after receiving a packet that meets the matching condition. The switch
can send the data to an output port, reject it or send it to the controller.
The switch can be a Mininet virtual switch, Open vSwitch (OVS) or physical hardware which uses the OF protocol to
communicate with the external controller via TCP or TLS to perform packet lookups for
forwarding decisions.
The controller is decoupled from the switch in the control plane, usually running on a Linux box,
and it manages the switch via OF to add, update and delete flow entries in a reactive or proactive
manner.
Each switch can have up to 254 flow tables, and matching of packets starts at Flow Table 0,
which was the only table originally supported in OF version 1.0, while later versions support multiple tables
in the pipeline by using goto-table instructions. Flow entries are matched in order of priority,
from higher to lower, and their instructions are executed; if no entries are matched, then a
table-miss entry with a priority of 0 is used.
During these test cases we have seen that with OF 1.0 we were only able to use
one table in the pipeline, while OF 1.3 and higher can support a larger number of flow tables
starting from Flow Table 0.
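The priority-ordered matching and the table-miss fallback described above can be sketched in plain Python; this is an illustrative model, not the thesis's controller code, and the MAC addresses, ports and priorities are invented values:

```python
# Illustrative sketch of flow-table lookup: entries are checked in descending
# priority order, and the priority-0 table-miss entry (empty match, so it
# matches everything) decides what happens to unmatched packets.

def lookup(flow_table, packet):
    """Return the action of the highest-priority matching entry."""
    for entry in sorted(flow_table, key=lambda e: e["priority"], reverse=True):
        # An entry matches if every field it names equals the packet's value.
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["action"]
    return "DROP"  # unreachable once a table-miss entry is installed

flow_table = [
    {"priority": 100, "match": {"in_port": 1, "eth_dst": "00:00:00:00:00:02"},
     "action": "OUTPUT:2"},
    {"priority": 0, "match": {},  # table-miss: matches every packet
     "action": "CONTROLLER"},
]

print(lookup(flow_table, {"in_port": 1, "eth_dst": "00:00:00:00:00:02"}))  # OUTPUT:2
print(lookup(flow_table, {"in_port": 3, "eth_dst": "ff:ff:ff:ff:ff:ff"}))  # CONTROLLER
```

A real switch performs this lookup in hardware or in the datapath, but the ordering semantics are the same.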
OF switches also have reserved ports which are defined by the specification; these
represent forwarding actions such as sending traffic to the controller, flooding packets out, or
forwarding with normal switch processing.
A logical port can be used as an ingress or egress port depending on the OF version, while the normal
port represents traditional routing.
In fail secure mode, during the first connection attempt or in the event of the disconnection of
a switch from the controller, packets destined for the controller are dropped, and flow entries
will be deleted or will expire as per their timeout settings. Hybrid switches, however,
can operate in fail standalone mode during a failure, so packets can be delivered with the use of
a traditional forwarding method via the normal port.
To investigate the concepts of fail secure and fail standalone, we conducted test
cases with the use of the HPE Aruba VAN SDN and OpenDaylight (ODL) controllers. This allowed
us to observe the behaviour of the flow entries as well as the packets traversing between ports.
Symmetric communication can be initiated by either side; the messages sent in this way
are: Hello messages between switch and controller; Echo, which verifies the link and can be used to
measure latency; and a vendor-specific message reserved for future use
(Experimenter).
To investigate the message types, we used ODL and Wireshark to inspect the
packets traversing between the switches and the centralized controller.
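All of these messages share the common 8-byte OpenFlow header defined in the specification (version, type, length, transaction id, in network byte order). A minimal sketch of building and parsing an Echo Request, using only the standard library, is shown below; the transaction id is an arbitrary example value:

```python
import struct

# Every OpenFlow message starts with the same 8-byte header, per the
# OpenFlow specification: version (1 B), type (1 B), length (2 B),
# transaction id (4 B), all in network byte order.
OFP_HEADER = "!BBHI"
OFP_VERSION_13 = 0x04          # OpenFlow 1.3
OFPT_HELLO = 0                 # symmetric: sent by either side on connect
OFPT_ECHO_REQUEST = 2          # symmetric: link verification / latency probe
OFPT_ECHO_REPLY = 3

def make_echo_request(xid):
    """Build a minimal Echo Request (header only, no payload)."""
    return struct.pack(OFP_HEADER, OFP_VERSION_13, OFPT_ECHO_REQUEST, 8, xid)

msg = make_echo_request(xid=42)
version, msg_type, length, xid = struct.unpack(OFP_HEADER, msg)
print(version, msg_type, length, xid)  # 4 2 8 42
```

Timestamping an Echo Request before sending it and its Echo Reply on arrival is what allows the link latency measurement mentioned above.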
1.4. SDN
1.4.1. SDN Considerations
The ONF's definition of SDN states that it is a separation of the network control plane from
the forwarding plane, where devices can be controlled via the control plane (ONF, 2017).
The architecture consists of application, control and infrastructure layers: the switches reside in the
infrastructure layer; controllers such as OpenDaylight (ODL), Ryu, ONOS, Floodlight, POX or HPE's
Aruba Virtual Application Networks (VAN) reside in the control layer; and the applications talk to the
controller via the Northbound Interface (NBI).
OpenFlow itself is not SDN; it is a protocol used in the Southbound Interface (SBI)
between the controller and the switches.
The SBI is implemented, for example, through OpenFlow. Its main function is to
support communication between the SDN controller and the network nodes, both physical and
virtual. It is also responsible for the integration of a distributed network environment. With
this interface, devices can discover the network Topology (Topo), define network flows and
implement an API to forward requests from the Northbound Interface.
1.4.3. NFV
According to Chiosi et al. (2012), Network Functions Virtualization (NFV) refers to
functions which are typically available on hardware (HW) but are deployed as Software (SW) running in a
virtual environment.
1.4.4. CORD
Central Office Re-architected as a Datacentre (CORD), so rather than the use of
traditional office approach where we use NFV and SDN to deploy VMs and VAs in the cloud
with an agile approach to provide higher efficiency (OpenCORD, 2017).
and apps on the top such as: Quagga (2017), SnapRoute (2017), FBOSS (Simpkins, 2015),
ICOS (BW-Switch, 2016) or we can get Linux OS from Cumulus Networks, Pica8 (2017) or
Big Switch Networks.
reconfiguration. The results are then returned to the switches and subsequent updates are made
to them.
Another indirect solution is to use more than one SDN controller, depending on network
size. In this way, the controllers can be placed closer to the devices they manage. This leads
to shorter delays and allows more efficient control of the switches while transferring
requests to the central control plane.
McNickle (2014) states that SDN can also be deployed using common protocols and
interfaces such as Border Gateway Protocol (BGP), Network Configuration Protocol
(NETCONF), Extensible Messaging and Presence Protocol (XMPP), Open Virtual Switch
Database Management Protocol (OVSDB) and Multiprotocol Label Switching Transport
Profile (MPLS-TP) as well as with Command Line Interface (CLI) or SNMP.
SDN deployments often use User Datagram Protocol (UDP) tunnels which are very similar to Generic
Routing Encapsulation (GRE) tunnels, except that they can be dynamically switched on and
off. According to Wang et al. (2017), the effect of using tunnelling is a lack of transparency
of network traffic, which entails significant consequences such as serious difficulties in
troubleshooting network problems.
communications.
Hogg (2014) stated that attacks on SDN-specific protocols are another vector of attack,
as APIs based on Python, Java, C, REST, XML and JSON can potentially be exploited by hackers
through their vulnerabilities in order to take control of the SDN via the controller. If the
controller does not implement any security measures against attacks on its APIs, then
an attacker can create their own SDN rules and thus take control of the SDN environment.
To test the security vulnerabilities, we used the SDN Toolkit discussed by Picket
(2015) with the Floodlight and ODL controllers. However, we were not able to retrieve flows
from the controllers. We also investigated the HPE Aruba VAN controller and its self-signed
certificate for TLS, which is used as an authentication token.
In OpenFlow, when a packet arrives on a port, the switch sends it to the
controller as per its Flow Table; the controller, in turn, runs an app which learns all of the MAC
addresses and where to forward each specific MAC address. This way, all learning happens on
the controller, but as soon as it knows where the devices are, it updates the Flow Tables on
the switches so that they forward packets and learn independently, while only unknown
traffic is sent to the controller.
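The reactive learning logic described above can be sketched as plain Python; this is an illustrative model of what a learning-switch app on the controller does, not real controller code, and the MAC addresses and port numbers are invented:

```python
# Illustrative sketch of reactive MAC learning on the controller: it keeps a
# MAC-to-port map per switch (keyed by datapath id) and floods any frame
# whose destination it has not yet learned.

mac_to_port = {}  # learned on the controller: {dpid: {mac: port}}

def packet_in(dpid, src_mac, dst_mac, in_port):
    """Handle a packet that a switch sent up to the controller."""
    table = mac_to_port.setdefault(dpid, {})
    table[src_mac] = in_port          # learn where the sender lives
    if dst_mac in table:
        # Destination known: at this point a flow entry would be pushed to
        # the switch so future packets bypass the controller entirely.
        return ("OUTPUT", table[dst_mac])
    return ("FLOOD", None)            # unknown destination: flood

print(packet_in(1, "aa:aa", "bb:bb", in_port=1))  # ('FLOOD', None)
print(packet_in(1, "bb:bb", "aa:aa", in_port=2))  # ('OUTPUT', 1)
```

The second call succeeds because the first one already taught the controller where "aa:aa" lives.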
Proactive flow entries are pre-programmed rules which exist before any request is
sent, as the controller tells the switch what to do in advance; such flow entries contain matches on
devices, their actions, and instructions.
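One way to pre-programme such an entry is through a controller's REST interface, for example Ryu's ofctl_rest application, which accepts flow entries as JSON posted to /stats/flowentry/add. The sketch below only builds the payload; the dpid, MAC address, port numbers and controller URL are illustrative values, not taken from the thesis:

```python
import json

# Hypothetical proactive flow entry in the JSON shape accepted by Ryu's
# ofctl_rest REST API. All concrete values here are illustrative.
flow = {
    "dpid": 1,
    "priority": 100,
    "match": {"in_port": 1, "eth_dst": "00:00:00:00:00:02"},
    "actions": [{"type": "OUTPUT", "port": 2}],
}
payload = json.dumps(flow)
print(payload)

# The entry could then be pushed before any traffic arrives, e.g.:
#   curl -X POST -d @flow.json http://<controller>:8080/stats/flowentry/add
```

Because the entry exists before the first packet, matching traffic is forwarded immediately without a Packet-In round trip to the controller.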
2. Test Environment
All testing of the performance and compatibility of communication using the selected
network protocols was performed using physical equipment, namely three Cisco 2801 routers,
Hyper-V Virtual Machines (VMs) as guests with Ubuntu OS, as well as a
Mininet environment on a Windows Server 2016 host.
2.1. Cisco 2801
The Cisco 2801 routers had at least three Local Area Network (LAN) ports required to
perform the experiments.
2.2.1. MGEN
This tool can generate TCP and UDP traffic and allows the relevant
statistics to be saved (U.S. Naval Research Laboratory, 2017). Multi-Generator (MGEN) provides a very
wide range of possibilities when it comes to the type of traffic generated, with which we were
able to plan test scenarios that conform to actual conditions in a working environment.
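As an illustration, an MGEN script for such a scenario might look as follows; the addresses, port, rate and message size are illustrative, not the thesis's actual test values:

```
# Flow 1: send 1024-byte UDP datagrams 100 times per second to port 5001,
# then stop the flow after 30 seconds.
0.0 ON 1 UDP DST 192.168.1.2/5001 PERIODIC [100 1024]
30.0 OFF 1
```

Varying the PERIODIC rate and message size is what allows the packet-loss scenarios described later (small packets at high frequency versus large packets at low frequency).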
2.2.2. iPerf
The program is written in C++ and works in a client-server architecture. For our
performance studies, we generated streams of TCP and UDP traffic to observe
network throughput, jitter and the number of lost packets (iPerf, 2017).
We used iPerf3 for the TCP-related tests and iPerf2, with default buffer sizes, for most of the UDP
scenarios; the reason is that iPerf2 supports multiple client connections (iPerf,
2017) and works in conjunction with jPerf2.
Besides the iPerf tools, we also installed the Graphical User Interface (GUI) for iPerf2,
called jPerf2, as per the instructions by Rao (2016).
2.3. Hyper-V
We did not use a bare-metal hypervisor installation; instead, the Hyper-V role was
enabled within Windows Server, where the Virtual Machine Monitor (VMM) component is used to run the VMs. The Hyper-V
software was used to host the test machines in a virtual environment. It allowed us to
run different operating systems at the same time on one physical server without interfering
with the existing OS or needing to create independent partitions on a physical disk.
2.4. Mininet
Mininet is a system for virtualizing computer networks on a PC (Lantz et al., 2010).
Mininet is an emulator that is great for exploring and testing the capabilities of the SDN
architecture. It helped us to create virtual networks via sudo mn and to test OpenFlow and
SDN.
For this purpose, a NATSwitch vNIC was created in Hyper-V to act as a gateway for the
Mininet bridge, and the tests were performed with the HPE Aruba VAN SDN controller.
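As an illustration, a Mininet network of this kind can be launched from the command line as follows; the controller address and the choice of a three-switch linear topology are illustrative:

```
# Launch a three-switch linear topology of OVS switches speaking
# OpenFlow 1.3 to an external (remote) SDN controller:
sudo mn --topo=linear,3 --switch=ovsk,protocols=OpenFlow13 \
        --controller=remote,ip=192.168.0.10,port=6633
```

With the NATSwitch vNIC acting as the gateway, the remote controller running outside the Mininet VM can reach the emulated switches over this bridge.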
2.5. Wireshark
Wireshark is a graphical network traffic analyser, also known as a "network sniffer",
which allowed us to capture packets transmitted over specific network interfaces within ISO OSI
layers 2-7, together with their data frame protocols (Wireshark, 2017).
2017) on Ubuntu version 16.04.2 (Ubuntu, 2017), which supports MPLS since Linux version
4.1, released in June 2015.
Since we were limited in terms of available HW resources, we also used VMs
with Cisco CSR 1000V Series IOS version 3.12.2S (csr1000v-universalk9.03.12.00.S.154-2.S-
std) installed with the vendor's minimal requirements (Cisco, 2017) to test the scalability and
interoperability of the protocols in a network, which would not be possible with the use of
only three physical routers.
1. Checking for MPLS interoperability between MPLS implementation for Linux and
Cisco IOS Multiprotocol Label Switching.
6. Explaining what the possibilities are in response to the failure of certain parts of the network
while using both protocols.
It also proved that MPLS support in Linux is not fully compatible, as Linux software
nodes acting as LSRs cannot strip out the label before forwarding the packet to the next hop
within the MPLS network.
Therefore, we decided that in the remaining tests we would only investigate
the scenario where the Dublin router acts as the LSR and the remaining Linux MPLS-enabled
routers are configured as LERs.
3.2. IP Performance
To test the performance of MPLS and OpenFlow, we compared them against each
other, as well as against a P2P connection and IP Forwarding with static routing, as per the
topologies below:
Figure 3: IP Forwarding with Three Cisco 2801 Routers and Static Routing.
Figure 4: MPLS with Kildare and Laois as Linux LERs and Dublin as Cisco 2801 LSR.
Figure 5: Cisco MPLS with Three Cisco 2801 Routers with Dublin as LSR
Figure 6: OpenFlow Performance Topology with S1, S2 and S3 in Mininet.
The results obtained in the throughput experiments are presented in Figure 7, which allows
us to state that the use of OpenFlow provides slightly higher throughput than the P2P link
between two VMs. This is possibly because the controller makes the forwarding decision based
on the network port number, while both VMs would use their routing tables, which involves
additional processing and results in additional delay. We have also shown that MPLS support in Linux
kernel 4.12 with IPRoute2 is not effective in terms of speed between LERs, as it produced the lowest
throughput and the highest StDev. We can see from the arithmetic mean, based
on the four measurements made for each method, that apart from this fact most of the results
are very close to each other. We can also assume that the results obtained with iPerf3 are
reliable from a sample of four measurements, since the Standard Deviation (StDev) of the
random variable is small for the remaining test cases. It also shows that this IP traffic control
technology operates somewhat unpredictably if we consider the throughput of OpenFlow.
Figure 7: Throughput and StDev (each in Mbps and Kbps) per IP Technology.
The results in Figure 8, obtained during the delay test cases, present a clear advantage for
OpenFlow over the tested technologies, except for P2P connections established between two
endpoints. Since all of our tests with MPLS in Linux did not provide a summary of sent
datagrams, we need to invalidate the 0 ms results within our sample. We can tell that they all
work quite similarly while one connection is established, because the jitter values are not high
and the StDev is below 1 ms. An interesting observation is that increasing the number of parallel
transmissions causes a significant increase in the jitter value. For IP Forwarding and Cisco
MPLS, jitter grows roughly tenfold with the number of senders. However, this is probably
because the server accepts datagrams as a group sent to a given port, which in turn results in
an irregularity in how packets reach the destination. In terms of performance, we can say that
the optimal results were given by OpenFlow, while the iPerf server's response was ten times
higher than for Cisco MPLS. As a benchmark for the results obtained, we can assume that the
acceptable jitter for video and voice transmissions over an IP network must be below 30 ms
(Lewis and Pickavance, 2006). The results presented in Figure 8 are at least seven times
below that threshold value, except for the iPerf server's response to multiple requests,
considering the small size of the network on which the tests were conducted.
The most selective test case for measuring packet loss consisted of the transmission of
small packets at high frequency, in such a way that the link load oscillated around 100 %. The
issue is the limited frequency at which datagrams can be sent before saturation is reached at
the endpoint.
This is shown in Figure 9 and Figure 10, where the number of generated packets
differs depending on the technology used in the 50B-Medium column. We also need to note
that these are average values from three consecutive measurements; the deviation received
in subsequent samples in the three remaining test cases was very small, which means that the
remaining tests were less significant. However, imperfections of MGEN 5 have been
verified in terms of a significant P2P link utilization of 96 %. In the remaining three cases, large
volumes of data were sent out at lower frequencies, with all the IP technologies reported to be
doing well except for OpenFlow when the datagrams were 100 B and the rate was set to 6000
times per second. This would be caused by the high rate of Packet-In messages to the controller,
as discussed by Zhihao and Wolter (2016). We can also see that MPLS in Linux reported the
lowest value of 44 % during the high-frequency test case, which means that it has been
identified as the slowest performer, taking into consideration that the first test case is the most
significant due to the high variation between the results.
Figure 10: Correctly Delivered Data Depending on the IP Technology Used for Individual
Test Cases (series: P2P, IP Forwarding, MPLS in Linux, Cisco MPLS, OpenFlow; test cases:
50B-Medium, 100B-Medium, 1000B-High, 100B-Low).
The results obtained during the RTT test cases, by sending packets of 78 B, allowed us to
identify IP Forwarding and MPLS in Linux as the worst performers, taking into
consideration that during these tests the infrastructure load was quite small. Therefore, we
decided to use packets of 51200 B, which allowed us to determine the slowest IP
technology while sending packets of 50 KB; this appeared to be MPLS in Linux. This was
investigated further with a smaller 1 KB (1024 B) packet, which was the largest possible load
in this situation and introduced a higher delay than IP Forwarding and Cisco MPLS. The
smallest delay, however, was achieved using the OpenFlow protocol for both packet sizes,
possibly because the proactive approach to flow entries discussed in Chapter 1.4.12 was used.
This proves that MPLS in Linux delays are highest for larger packets, while OpenFlow
performs nearly as well as P2P, with an 85 % ratio in comparison to the other IP technologies.
3.3. Scalability
The purpose of these scalability scenarios was to build similar networks with a mixture
of IP technologies to prove that they can be easily expanded by adding extra nodes to the
topologies previously discussed throughout Chapter 3.1 and Chapter 3.2. It also allowed us to
compare whether the delay would favour the OF protocol over the MPLS solutions.
To build the network environments, the topologies below were used within the described
variants:
Figure 13: Three Cisco MPLS LSR Nodes and Two LER Nodes.
Figure 14: Three Cisco MPLS LSR Nodes and Two MPLS in Linux LER Nodes with use of
FRRouting (FRR).
From the results in Figure 16 and Figure 17, we were able to make a succinct conclusion
that OpenFlow outperformed all the other IP technologies, while LDP implemented together with
OSPF and FRR on Linux provided better results than MPLS on Cisco routers.
This also proved that all the tested technologies can be easily scaled up within the
"test-bed", no matter what routing method is used. However, in terms of manageability,
it is always easier to manage a dynamic protocol than static routing, as the topology
adapts to changes automatically, independent of the size of the network (Cisco, 2014).
Figure: RTT per IP Technology for 78 B (ms), 50 KB (ms) and 1 KB (ms) packets (series: IP
Forwarding, Cisco MPLS, MPLS in Linux, OpenFlow; percentage scale 0-440 %).
3.4. QoS
3.4.1. MPLS
The Cisco topology in Figure 18 consisted of two CE routers, CSR1000V3 and
CSR1000V4; two PE routers, CSR1000V1 and CSR1000V2; as well as one Provider (P) Cisco
2801 router (Dublin). OSPF was enabled on all ISP devices with network 0.0.0.0
255.255.255.255 area 0, and routing between the PE and CE nodes was achieved with EIGRP.
We used MP-BGP to exchange CE labels between CSR1000V1 and CSR1000V2 with the
VRF "cust", as well as with a Route Distinguisher (RD) and Route Target (RT) of 100:1.
25
Figure 18: Two Cisco PE MP-BGP Nodes and Two CE EIGRP Nodes.
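A sketch of the kind of PE-side IOS configuration this scenario implies is shown below; the BGP AS number, neighbour loopback address and EIGRP AS are illustrative values, not taken from the thesis, while the VRF name, RD and RT follow the text above:

```
! VRF for the customer, with the RD and RT of 100:1 described above
ip vrf cust
 rd 100:1
 route-target export 100:1
 route-target import 100:1
!
! MP-BGP session to the other PE to exchange VPNv4 routes and labels
router bgp 65000
 neighbor 10.0.0.2 remote-as 65000
 neighbor 10.0.0.2 update-source Loopback0
 address-family vpnv4
  neighbor 10.0.0.2 activate
  neighbor 10.0.0.2 send-community extended
 address-family ipv4 vrf cust
  redistribute eigrp 100
```

The CE-facing EIGRP routes are redistributed into the VRF address family so that MP-BGP can carry them, with their labels, to the opposite PE.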
For our policies, we decided to use the File Transfer Protocol (FTP) and the Hypertext
Transfer Protocol (HTTP), which were used to connect to servers built on VM2 with vsFTPd
3.0.3 on port 21 (Anderson, 2016) and Apache 2.4.18 (Ellingwood, 2017) on port 80.
To provide QoS, we implemented a maximum FTP data transfer rate of 1024 Kbps (1.024
Mbps), with the same rate of guaranteed transfer for HTTP data.
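In IOS terms, such a policy can be sketched with the Modular QoS CLI; the class and ACL names below are illustrative, while the 1024 Kbps cap and guarantee follow the text above:

```
! Classify FTP data and HTTP traffic (ACLs matching the ports assumed)
class-map match-all FTP-DATA
 match access-group name FTP-ACL       ! TCP ports 20/21
class-map match-all HTTP-DATA
 match access-group name HTTP-ACL      ! TCP port 80
!
policy-map CUST-QOS
 class FTP-DATA
  police 1024000                       ! cap FTP data at 1024 Kbps
 class HTTP-DATA
  bandwidth 1024                       ! guarantee 1024 Kbps for HTTP
```

Note the asymmetry: police enforces a hard ceiling on FTP, while bandwidth reserves a minimum for HTTP during congestion.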
With the Linux and FRR topology displayed in Figure 19, FTP-data packets from CSR1000V3
were marked as DSCP EF and HTTP packets were marked as DSCP AF41. Next, when the packets
reached CSR1000V1 via the VPN, the markings were associated with EXP bits, together with
their policies, before entering the MPLS domain. After that, when the data left the PE and
moved across the VPN to the CE on the other side, it was again associated with the DSCP
mappings for the relevant policies before reaching the destination.
We set up unidirectional tunnels from CSR1000V1 to Dublin and to CSR1000V2
on a next-hop interface basis, as well as the way back from CSR1000V2 to CSR1000V1 via
Dublin. All routers also operated the OSPF protocol in area 0 to exchange their MPLS labels,
while the FRR devices used implicit-null labels for performance reasons, which were popped
on the CSRs and replaced with explicit-null labels within the MPLS domain or, in this situation,
sent via a TE tunnel with RSVP.
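One such unidirectional TE tunnel can be sketched in IOS as follows; the destination loopback, next-hop address and explicit-path name are illustrative values, while the 1024 Kbps RSVP bandwidth follows the scenario above:

```
! Unidirectional MPLS-TE tunnel signalled with RSVP
interface Tunnel0
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng
 tunnel destination 2.2.2.2
 tunnel mpls traffic-eng bandwidth 1024
 tunnel mpls traffic-eng path-option 1 explicit name VIA-DUBLIN
!
ip explicit-path name VIA-DUBLIN enable
 next-address 10.1.1.2
```

Because TE tunnels are unidirectional, the return path needs its own tunnel configured on the opposite head-end, as described above.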
From the tests of the TE tunnel, we showed that the RSVP bandwidth parameter for TE
does not work the same way as a bandwidth limit on an interface. However, the bandwidth set
on the tunnel resulted in the expected values, which were not less than 1024 Kbps. We also
observed that the bottom of the label stack was used for local MPLS domain traffic for the
explicit path to the tunnel endpoint, rather than for transport labels. To summarize, we
can acknowledge that MPLS-TE has no good mechanism to limit bandwidth unless there are
multiple tunnels to a destination in conjunction with QoS policies to divide the packets into
classes. Our examples did not use ToS, as we can see from the captured packets in Figure 20 and
Figure 21, where they are marked as 0x00 or 0x10, which means that it is routine, not classified
traffic for QoS (Digital Hybrid, 2012).
Figure 21: Wireshark Capture of VPN Label for HTTP on NIC3.
3.4.2. OpenFlow
In OF we explored REST to configure QoS based on the type of data and a
bandwidth limit per flow, with the use of DiffServ as well as a Meter Table and the CPqD
SW switch (CPqD GitHub, 2018).
To perform the experiments, Linux Hierarchical Token Buckets (HTBs), as discussed
by Benita (2005), were used, together with the protocols and ports specified in the figures below for
the test cases. Each QoS table refers to a separate OF topology, and all the tests were executed
with the use of the Ryu SDN controller and OF13.
Figure 22b: Topo for per Flow with FTP and Web Server.
Figure 23a: Linux HTB Queues and DSCP Mapping for Cloud Scenario.

Queue | DSCP | Max Rate (bps) | Min Rate (bps)
0 | 0 | 1000000 | 100000
1 | 18 | 1000000 | 300000
2 | 36 | 1000000 | 600000

Figure 24a: Linux HTB Queues and DSCP Mapping for Unsliced QoS Topo.
Figure 24b: Custom QoS Topo without Separation and Meter Table.
With the experiments on the Meter Table, we showed that it is possible to use the
external controller to remark the traffic until some other app running on the NBI takes care
of forwarding, while OF13 is responsible for QoS rule injection via the REST API and
OFSoftSwitch13 takes over the role of remarking our DSCP classes bound to specific
meters.
The above tests allowed us to prove that QoS in MPLS and OpenFlow can be achieved
with the use of traffic classification and DSCP markings. MPLS-TE does not have any inbuilt
mechanism to limit the bandwidth on specific interfaces, other than limiting the overall
bandwidth available to the customer on the whole VPN channel, while OF can use HTBs and
port numbers to place traffic into different queues.
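When OVS is the switch, HTB queues of the kind tabulated above can be attached to a port with ovs-vsctl; the port name and rates below are illustrative values in bits per second, not the thesis's exact configuration:

```
# Attach a linux-htb QoS with two queues to a switch port; flow entries
# can then direct traffic into queue 0 or queue 1 by number.
ovs-vsctl set port s1-eth1 qos=@newqos -- \
  --id=@newqos create qos type=linux-htb \
      other-config:max-rate=1000000 queues=0=@q0,1=@q1 -- \
  --id=@q0 create queue other-config:min-rate=100000 -- \
  --id=@q1 create queue other-config:min-rate=300000
```

An OpenFlow set-queue (or enqueue) action then selects the queue per flow, which is how port numbers map traffic into the different HTB classes.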
Tunnel0: EXP 5; Tunnel1: EXP 1.
3.5.2. OpenFlow
In the OF tests, we used the non-commercial controller Floodlight and the commercial
HPE controller with OF13 and a custom topology which used STP in the core, in order to
implement custom flow entries in the flow table as per the scenarios below:
From the results of the Floodlight controller in Figure 28, we can observe that the delay is
higher by 35 %, as expected, due to a route to the destination longer by one hop, as the
initial route had four hops between the hosts.
By comparing both of the SDN controllers in Figure 29, we can see that HPE
performs better than Floodlight, as both scenarios, with and without TE, resulted in a lower
mean and StDev for the jitter parameter. The HPE controller appeared to be 7 % faster without TE
and 29 % more efficient with TE in comparison to the Floodlight controller, with a difference of
one hop between the client and server. This could be because HPE is a commercial
controller; however, it shows how different SDN implementations can impact
network performance on a larger scale.
3.6. Failover
First, we tested MPLS with the use of the TE topology (Figure 25a) and then OF with the
custom Datacentre Topo, as seen in Figure 30.
Figure 30: HPE Aruba VAN Controller and New Flow Path after vNIC Failure.
ICMP requests were successful and no major delay was identified during the failure of
the link when the interface went down, similar to the scenario when we tested failover with
MPLS on Cisco routers and Linux FRR nodes.
We have shown that the failover mechanism operates correctly in the situations
discussed above. In the OF-based solution, the reaction time depends entirely on the
controller's ability to learn the DPID, or on the number of manually entered flow entries in the
flow tables and their priorities. On Cisco devices, checking the connection status is entirely
the responsibility of the IOS system. It is exceptionally efficient at switching the packet
forwarding route back once the main connection is restored to its pre-failure state; however,
we noticed that it takes much longer to diagnose that the transmission channel is not working
properly.
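The role of flow-entry priorities in this failover behaviour can be sketched as follows: the highest-priority matching entry always wins, so when the primary entry disappears (for example, after a port goes down), the lower-priority backup entry takes over. The table layout and field names are purely illustrative, not any controller's actual API.

```python
def matches(match, packet):
    """True if every field in the match dict equals the packet's field."""
    return all(packet.get(k) == v for k, v in match.items())

def select_entry(flow_table, packet):
    """Return the highest-priority entry matching the packet, or None."""
    hits = [e for e in flow_table if matches(e["match"], packet)]
    return max(hits, key=lambda e: e["priority"]) if hits else None

flow_table = [
    {"priority": 100, "match": {"in_port": 1}, "out_port": 2},  # primary path
    {"priority": 50,  "match": {"in_port": 1}, "out_port": 3},  # backup path
]

pkt = {"in_port": 1}
print(select_entry(flow_table, pkt)["out_port"])   # primary wins: 2

# Simulate the primary path failing: the controller removes its entry.
flow_table = [e for e in flow_table if e["priority"] != 100]
print(select_entry(flow_table, pkt)["out_port"])   # backup takes over: 3
```

This is why the reaction time depends on how the backup entries and their priorities were pre-installed or learned.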
The experiments revealed that Linux nodes running MPLS as LSRs cannot pop an
outgoing label on the LSP; their throughput was the lowest between the LERs, packet loss was
high for small files, and delay was high for large packets. The mixed approach of Linux FRR
nodes combined with Cisco nodes resulted in lower response times than pure hardware, and it
was fully compatible when creating QoS policies as LERs and during the TE tests.
The deployed Cisco hardware, without MPLS on Linux nodes, was naturally fully
compatible when exchanging label information; throughput and delay were lower than with
OF, and the number of lost packets was also lower, but response times for small packets were
still much higher than with OF. Protocol interoperability, QoS and TE were achievable, but
only after long and complex configuration of the nodes, which demands broad knowledge
from the network administrator.
OpenFlow, on the other hand, produced throughputs lower even than a P2P link, with
slightly higher delays, yet it outperformed all the remaining tested technologies. It performed
worse with large volumes of data under packet loss, but it achieved the smallest response
times for both small and large packet sizes. The scalable topology proved that network
resources can be scaled up with minimal configuration, while the QoS experiments with the
Ryu controller provided insight into per-flow policies, the mapping of QoS classes to DSCP
values and traffic remarking with the Meter Table. TE in OF, tested with the Floodlight and
HPE Aruba VAN controllers on the scaled-up topology, proved that SDN caters for
centralised management to program the flow of data, while also providing a link-failover
mechanism that responds rapidly after detecting that a DPID is no longer available.
A very interesting observation about OpenFlow emerged from one of the tests
described in Chapter 3.2.2.1. The results presented there show that flow table operations in
OF are much faster than a routing table lookup when deciding where to send the packet next
on the route. The total delay using MPLS on Linux and Cisco is, however, not lower than in
the case of IP forwarding, because the time gained during transit through the LSR is lost at the
LER nodes: adding and removing the MPLS label takes longer than selecting the route from
the routing table. A test environment with a larger number of nodes could have highlighted
the benefits of MPLS better, but this was impossible due to the limitations of the equipment
available in the laboratory.
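The contrast between the two lookup operations can be sketched in miniature: an OF flow table can be consulted with a single exact-match hash lookup, while IP forwarding requires a longest-prefix match across the routing table. The tables below are toy examples for illustration, not data from the testbed.

```python
import ipaddress

# OF-style exact match: one dictionary lookup per packet.
flow_table = {("10.0.0.1", "10.0.0.2"): 2}   # (src, dst) -> output port

def flow_lookup(src, dst):
    return flow_table.get((src, dst))

# IP-style longest-prefix match: scan all prefixes, keep the longest hit.
routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"), 1),
    (ipaddress.ip_network("10.0.0.0/24"), 2),
]

def lpm_lookup(dst):
    addr = ipaddress.ip_address(dst)
    hits = [(net, port) for net, port in routing_table if addr in net]
    return max(hits, key=lambda h: h[0].prefixlen)[1] if hits else None

print(flow_lookup("10.0.0.1", "10.0.0.2"))  # 2
print(lpm_lookup("10.0.0.2"))               # 2 (the /24 wins over the /8)
```

Real routers use optimised trie structures for LPM, but the per-packet work is still heavier than a flat exact-match lookup, which is consistent with the delays observed above.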
OF also proved far easier to scale than MPLS: adding more nodes only involved
altering the script, since the controller takes over the flow processing, whereas MPLS requires
full configuration on each node individually. In terms of compatibility of the scaled-up
infrastructure in Chapter 3.3, LDP functioned correctly on both the Cisco and FRR Linux
nodes, with the Linux implementation producing lower delays, while OF remained the fastest
regardless.
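The "altering the script" point can be illustrated with a minimal topology generator in the style of a Mininet custom topology; the names and linear layout are illustrative, not the actual test script used in the experiments.

```python
def linear_topology(n):
    """Generate a linear topology: n switches in a chain, one host each."""
    switches = [f"s{i}" for i in range(1, n + 1)]
    hosts = [f"h{i}" for i in range(1, n + 1)]
    links = list(zip(hosts, switches))                          # host uplinks
    links += [(switches[i], switches[i + 1]) for i in range(n - 1)]
    return switches, hosts, links

# Scaling up is a one-argument change rather than per-node configuration;
# the SDN controller then programs flows for every new switch.
switches, hosts, links = linear_topology(4)
print(len(switches), len(hosts), len(links))  # 4 4 7
```

In MPLS, by contrast, each added node would need its own LDP, IGP and interface configuration before labels could be exchanged.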
Moving beyond the throughput of OpenFlow and MPLS, the work presented
examples of using these protocols for QoS and TE. The first issue was the transmission of
data over arbitrarily selected routes, on the assumption that the flow paths leading to a single
target point would depend on the source generating the traffic. The experiments described in
detail in Chapter 3.4 and Chapter 3.5 showed that the expected results can be obtained with
each of the technologies studied. Traffic classification and DSCP markings could be used
with both technologies to provide QoS, but only OF offers a mechanism to limit bandwidth by
using Linux HTBs and port numbers to move packets into different queues. The major TE
benefit identified in OF comes from the centralisation of management, which removes the
administrator's burden of setting up tunnel end-points: flow paths are simply programmed on
the controller with flow entries to specific ports on each switch, as discussed in Chapter 3.5.2.
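The class-to-DSCP mapping mentioned above can be sketched as follows. The EF/AF/best-effort code points are standard DiffServ values, while the port-based classification is purely an illustrative assumption, not the policy used in the experiments.

```python
DSCP = {
    "EF":   46,  # expedited forwarding, e.g. real-time voice
    "AF41": 34,  # assured forwarding, e.g. interactive video
    "AF21": 18,  # assured forwarding, e.g. transactional data
    "BE":    0,  # best effort
}

def tos_byte(dscp):
    """DSCP occupies the top 6 bits of the IP ToS/DS byte."""
    return dscp << 2

def classify(dst_port):
    """Toy classifier: map a destination port to a QoS class."""
    if dst_port == 5060:          # SIP signalling
        return "EF"
    if dst_port in (80, 443):     # web traffic
        return "AF21"
    return "BE"

cls = classify(5060)
print(cls, DSCP[cls], tos_byte(DSCP[cls]))  # EF 46 184
```

Remarking traffic then amounts to rewriting this DS byte on matched flows, which OF can combine with Meter Table actions and HTB queues to enforce bandwidth limits.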
The last topic of the paper was securing a computer network against the effects of a
sudden connection failure. Chapter 3.6 shows what MPLS on Linux and Cisco, as well as the
OF protocol itself, offer in this respect. Both solutions were effective, but OF appears
preferable to the Cisco or Linux nodes because failure detection takes place in the centralised
external SDN controller; the administrator only needs to configure the backup flow entries
properly, or simply rely on the learning capabilities of the deployed controller and the
applications running on it.
Due to serious compatibility problems with the software traffic generators used, a
hardware traffic generator such as the Open Source Network Tester (OSNT) developed by
Antichi (2017) would benefit future research into the discussed protocols.
Modern services based on data obtained from IoT systems require efficient computer
networks that meet specific QoS requirements, such as very short transmission delays. A new
approach to creating and managing network infrastructure with SDN and OF can meet this
challenge. In this context, we could explore the possibility of creating ecosystems combining
SDN network infrastructure with cloud computing, whose main purpose would be to
automatically control the transmission of data obtained from IoT systems to meet the
requirements of end-users.
List of References
Abinaiya, N. and Jayageetha, J. (2015) ‘A Survey On Multi Protocol Label Switching’,
International Journal of Technology Enhancements and Emerging Engineering Research,
vol. 3, no. 2, pp. 25–28. Available at: http://www.ijteee.org/final-print/feb2015/A-Survey-On-
Multi-Protocol-Label-Switching.pdf (Accessed: 4 June 2017).
Anderson, M. (2016) ‘How To Set Up vsftpd for a User's Directory on Ubuntu 16.04’,
DigitalOcean Tutorial, September 2016. Available at:
https://www.digitalocean.com/community/tutorials/how-to-set-up-vsftpd-for-a-user-s-
directory-on-ubuntu-16-04 (Accessed: 19 November 2017).
Antichi, G. (2017) Open Source Network Tester. Available at: http://osnt.org (Accessed: 25
February 2018).
Benita, Y. (2005) ‘Kernel Korner - Analysis of the HTB Queuing Discipline’, Linux Journal,
January 2005. Available at: http://www.linuxjournal.com/article/7562 (Accessed: 12 January
2018).
Burgess, J. (2008) ‘ONOS (Open Network Operating System)’, Ingram Micro Advisor Blog,
August 2008. Available at: http://www.ingrammicroadvisor.com/data-center/7-advantages-of-
software-defined-networking (Accessed: 5 June 2017).
BW-Switch (2016) ICOS AND LINUX SHELL MANAGEMENT. Available at: https://bm-
switch.com/index.php/blog/icos-linux-shell/ (Accessed: 21 December 2017).
Chiosi, M., Clarke, D., Willis, P., Reid, A., Feger, J., Bugenhagen, M., Khan, W., Fargano,
M., Dr. Cui, C., Dr. Deng, H., Benitez, J., Michel, U., Damker, H., Ogaki, K., Matsuzaki, T.,
Fukui, M., Shimano, K., Delisle, D., Loudier, Q., Kolias, C., Guardini, I., Demaria, E.,
Minerva, R., Manzalini, A., Lopez, D., Salguero, F.J.R., Ruhl, F., Sen, P. (2012) ‘Network
Functions Virtualisation - An Introduction, Benefits, Enablers, Challenges & Call for Action’,
SDN and OpenFlow World Congress, October 2012. Available at:
https://portal.etsi.org/NFV/NFV_White_Paper.pdf (Accessed: 26 December 2017).
Cisco (2002) Multiprotocol label switching (MPLS) on Cisco Routers. Available at:
http://www.cisco.com/c/en/us/td/docs/ios/12_0s/feature/guide/fs_rtr22.html (Accessed: 12
February 2017).
Cisco (2005) MPLS Label Distribution Protocol (LDP). Available at:
https://www.cisco.com/c/en/us/td/docs/ios/12_4t/12_4t2/ftldp41.pdf (Accessed: 19 August
2017).
Cisco (2007) MPLS Static Labels. Available at:
http://www.cisco.com/c/en/us/td/docs/ios/mpls/configuration/guide/15_0s/mp_15_0s_book/m
p_static_labels.pdf (Accessed: 18 February 2017).
Cisco (2014) Cisco Networking Academy's Introduction to Routing Dynamically. Available
at: http://www.ciscopress.com/articles/article.asp?p=2180210&seqNum=5 (Accessed: 12
November 2017).
Cisco (2017) ‘Products & Services / Routers’, Cisco Cloud Services Router 1000V Series,
October 2017. Available at: https://www.cisco.com/c/en/us/products/routers/cloud-services-
router-1000v-series/index.html#~stickynav=1 (Accessed: 6 November 2017).
Cisco (2017) IOS Software-15.1.4M12a. Available at:
https://software.cisco.com/download/release.html?mdfid=279316777&flowid=7672&softwar
eid=280805680&release=15.1.4M12a&relind=AVAILABLE&rellifecycle=MD&reltype=late
st (Accessed: 12 February 2017).
Cisco Support (2017) Release Notes for the Catalyst 4500-X Series Switches, Cisco IOS XE
3.10.0E. Available at:
https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst4500/release/note/ol-310xe-
4500x.html (Accessed: 3 January 2018).
CPqD GitHub (2018) OpenFlow 1.3 switch. Available at:
https://github.com/CPqD/ofsoftswitch13 (Accessed: 12 January 2018).
Digital Hybrid (2012) Quality of Service (QoS) -- DSCP TOS CoS Precedence Conversion
Chart. Available at: https://my.digitalhybrid.com.au/knowledgebase/201204284/Quality-of-
Service-QoS----DSCP-TOS-CoS-Precedence-Conversion-Chart.html (Accessed: 10
December 2017).
Ellingwood, J. (2017) ‘How To Install the Apache Web Server on Ubuntu 16.04’,
DigitalOcean Tutorial, May 2017. Available at:
https://www.digitalocean.com/community/tutorials/how-to-install-the-apache-web-server-on-
ubuntu-16-04 (Accessed: 19 November 2017).
FTDI Chip (2017) D2XX Drivers. Available at: http://www.ftdichip.com/Drivers/D2XX.htm
(Accessed: 2 July 2017).
Goransson, P. and Black, C. (2014) Software defined networks: A comprehensive approach.
United States: Morgan Kaufmann Publishers In.
Gupta, S.N. (2013) ‘Next Generation Networks (NGN)-Future of Telecommunication’,
International Journal of ICT and Management, 1(1), pp. 32–35.
Hogg, S. (2014) ‘SDN Security Attack Vectors and SDN Hardening’, Network World,
October 2014. Available at: http://www.networkworld.com/article/2840273/sdn/sdn-security-
attack-vectors-and-sdn-hardening.html (Accessed: 11 June 2017).
HPE Support (2012) HP Switch Software OpenFlow Support. Available at:
https://support.hpe.com/hpsc/doc/public/display?sp4ts.oid=3437443&docLocale=en_US&doc
Id=emr_na-c03170243 (Accessed: 3 January 2018)
Internet Live Stats (2017) Number of Internet users. Available at:
http://www.internetlivestats.com/internet-users/ (Accessed: 12 February 2017).
iPerf (2017) Change between iPerf 2.0, iPerf 3.0 and iPerf 3.1. Available at:
https://iperf.fr/iperf-doc.php (Accessed: 18 February 2017).
iPerf (2017) What is iPerf / iPerf3. Available at: https://iperf.fr (Accessed: 18 February 2017).
Kernel Newbies (2015) Linux 4.1. Available at: https://kernelnewbies.org/Linux_4.1
(Accessed: 22 July 2017).
Kernel Newbies (2017) Linux 4.12. Available at: https://kernelnewbies.org/Linux_4.12
(Accessed: 22 July 2017).
Lantz, B., Heller, B., McKeown, N. (2010) ‘A network in a laptop: rapid prototyping for
software-defined networks’, Hotnets-IX Proceedings of the 9th ACM SIGCOMM Workshop
on Hot Topics in Networks, Article No. 19, pp. 19. Available at:
http://dl.acm.org/citation.cfm?id=1868466 (Accessed: 2 July 2017).
Leu, J.R. (2013) MPLS for Linux. Available at: https://sourceforge.net/projects/mpls-linux/
(Accessed: 12 February 2017).
Lewis, C. and Pickavance, S. (2006) ‘Implementing Quality of Service Over Cisco MPLS
VPNs’, Cisco Press Article, May 2006. Available at:
http://www.ciscopress.com/articles/article.asp?p=471096&seqNum=6 (Accessed: 29 October
2017).
McNickle, M. (2014) ‘Five SDN protocols other than OpenFlow’, TechTarget, August 2014.
Available at: http://searchsdn.techtarget.com/news/2240227714/Five-SDN-protocols-other-
than-OpenFlow (Accessed: 5 June 2017).
Millman, R. (2015) ‘How to secure the SDN infrastructure’, Computer Weekly, March 2015.
Available at: http://www.computerweekly.com/feature/How-to-secure-the-SDN-infrastructure
(Accessed: 11 June 2017).
Mishra, A.K. and Sahoo, A. (2007) ‘S-OSPF: A Traffic Engineering Solution for OSPF Based
Best Effort Networks’, Piscataway, NJ: IEEE, pp. 1845–1849.
Open Networking Foundation (2013) OpenFlow Switch Specification Version 1.3.2.
Available at: https://3vf60mmveq1g8vzn48q2o71a-wpengine.netdna-ssl.com/wp-
content/uploads/2014/10/openflow-spec-v1.3.2.pdf (Accessed: 12 February 2017).
Open Networking Foundation (2015) OpenFlow Switch Specification Version 1.5.1.
Available at: https://www.opennetworking.org/wp-content/uploads/2014/10/openflow-switch-
v1.5.1.pdf (Accessed: 12 February 2017).
OpenCORD (2017) Specs. Available at: https://opencord.org/specs (Accessed: 18 December
2017).
OpenFlow (2011) Create OpenFlow network with multiple PCs/NetFPGAs. Available at:
http://archive.openflow.org/wp/deploy-labsetup/ (Accessed: 23 July 2017).
OpenFlow (2011) View source for Ubuntu Install. Available at:
http://archive.openflow.org/wk/index.php?title=Ubuntu_Install&action=edit (Accessed: 18
February 2017).
O'Reilly, J. (2014) ‘SDN Limitations’, Network Computing, October 2014. Available at:
https://www.networkcomputing.com/networking/sdn-limitations/241820465 (Accessed: 5
June 2017).
Partsenidis, C. (2011) ‘MPLS VPN tutorial’, TechTarget, June 2011. Available at:
http://searchenterprisewan.techtarget.com/tutorial/MPLS-VPN-tutorial (Accessed: 4 June
2017).
Pica8 (2017) PicOS. Available at: http://www.pica8.com/products/picos (Accessed: 26
December 2017).
Picket, G. (2015) ‘Abusing Software Defined Networks’, DefCon 22 Hacking Conference,
August 2015, Rio Hotel & Casino in Las Vegas. Available at:
https://www.defcon.org/html/links/dc-archives/dc-22-archive.html (Accessed: 10 January
2018).
Quagga (2017) Quagga Routing Suite. Available at: http://www.nongnu.org/quagga/
(Accessed: 21 December 2017).
Rao, S. (2016) ‘How to install & use iperf & jperf tool’, Linux Thrill Tech Blog, April 2016.
Available at: http://linuxthrill.blogspot.ie/2016/04/how-to-install-use-iperf-jperf-tool.html
(Accessed: 18 February 2017).
Reber, A. (2015) ‘On the Scalability of the Controller in Software-Defined Networking’, MSc
in Computer Science, University of Liege, Belgium. Available at:
http://www.student.montefiore.ulg.ac.be/~agirmanr/src/tfe-sdn.pdf (Accessed: 5 June 2017).
Rosen, E., Viswanathan, A., Callon, R. (2001) ‘Multiprotocol Label Switching Architecture’,
Internet Engineering Task Force, January 2001. Available at:
https://tools.ietf.org/html/rfc3031.html (Accessed: 12 February 2017).
Salisbury, B. (2013) ‘OpenFlow: SDN Hybrid Deployment Strategies’, Brent Salisbury's
Blog, January 2013. Available at: http://networkstatic.net/openflow-sdn-hybrid-deployment-
strategies/ (Accessed: 5 June 2017).
Simpkins, A. (2015) ‘Facebook Open Switching System ("FBOSS") and Wedge in the open’,
Facebook Article, March 2015. Available at:
https://code.facebook.com/posts/843620439027582/facebook-open-switching-system-fboss-
and-wedge-in-the-open/ (Accessed: 21 December 2017).
Smith, B.R. and Aceves, C.L. (2008) Best Effort Quality-of-Service, St. Thomas, U.S. Virgin
Islands: IEEE.
SnapRoute (2017) Welcome to FlexSwitch from SnapRoute. Available at:
http://docs.snaproute.com/index.html (Accessed: 21 December 2017).
U.S. Naval Research Laboratory (2017) Multi-Generator (MGEN). Available at:
https://www.nrl.navy.mil/itd/ncs/products/mgen (Accessed: 18 February 2017).
Ubuntu (2017) Package: mininet (2.1.0-0ubuntu1) [universe]. Available at:
https://packages.ubuntu.com/trusty/net/mininet (Accessed: 2 July 2017).
Ubuntu (2017) ReleaseNotes. Available at:
https://wiki.ubuntu.com/XenialXerus/ReleaseNotes#Official_flavour_release_notes
(Accessed: 18 February 2017).
Vissicchio, S., Vanbever, L., Bonaventure, O. (2014) ‘Opportunities and Research Challenges
of Hybrid Software Defined Networks’, ACM SIGCOMM Computer Communication Review,
vol. 40, no. 2, pp. 70-75. Available at: http://dl.acm.org/citation.cfm?id=2602216 (Accessed:
5 June 2017).
Wang, M., Chen, L., Chi, P., Lei, C. (2017) ‘SDUDP: A Reliable UDP-based Transmission
Protocol over SDN’, IEEE Access, vol. PP, no. 99, pp. 1-13. Available at:
http://ieeexplore.ieee.org/document/7898398/ (Accessed: 5 June 2017).
Wireshark (2017) What is a network protocol analyzer?. Available at: https://wireshark.com
(Accessed: 23 July 2017).
Zhihao, S. and Wolter, K. (2016) ‘Delay Evaluation of OpenFlow Network Based on
Queueing Model’, Research Gate Publication, August 2016. Available at:
https://www.researchgate.net/publication/306397961_Delay_Evaluation_of_OpenFlow_Netw
ork_Based_on_Queueing_Model (Accessed: 19 March 2018).