Red Hat OpenStack Platform 11: Network Functions Virtualization Planning and Prerequisites Guide
OpenStack Team
[email protected]
Legal Notice
Copyright © 2018 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity
logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other
countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat Software Collections is not formally related to
or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other countries
and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or
sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
This guide contains important planning information that should be considered prior to setting up and
installing an NFV-enabled Red Hat OpenStack Platform environment.
Table of Contents
CHAPTER 1. INTRODUCTION
CHAPTER 2. SOFTWARE REQUIREMENTS
  2.1. SUPPORTED CONFIGURATIONS FOR NFV DEPLOYMENTS
  2.2. SUPPORTED DRIVERS
  2.3. COMPATIBILITY WITH THIRD PARTY SOFTWARE
CHAPTER 3. HARDWARE REQUIREMENTS
  3.1. TESTED NICS
  3.2. DISCOVERING YOUR NUMA NODE TOPOLOGY WITH HARDWARE INTROSPECTION
  3.3. REVIEW BIOS SETTINGS
CHAPTER 4. NETWORK CONSIDERATIONS
CHAPTER 5. PLANNING YOUR SR-IOV DEPLOYMENT
  5.1. HARDWARE PARTITIONING FOR AN NFV SR-IOV DEPLOYMENT
  5.2. TOPOLOGY OF AN NFV SR-IOV DEPLOYMENT
    5.2.1. NFV SR-IOV without HCI
    5.2.2. NFV SR-IOV with HCI
CHAPTER 6. PLANNING YOUR OVS-DPDK DEPLOYMENT
  6.1. HOW OVS-DPDK USES CPU PARTITIONING AND NUMA TOPOLOGY
  6.2. UNDERSTANDING OVS-DPDK PARAMETERS
    6.2.1. CPU Parameters
    6.2.2. Memory Parameters
    6.2.3. Networking Parameters
    6.2.4. Other Parameters
  6.3. TWO NUMA NODE EXAMPLE OVS-DPDK DEPLOYMENT
  6.4. TOPOLOGY OF AN NFV OVS-DPDK DEPLOYMENT
CHAPTER 7. PERFORMANCE
CHAPTER 8. FINDING MORE INFORMATION
CHAPTER 1. INTRODUCTION
Network Functions Virtualization (NFV) is a software-based solution that helps Communication Service
Providers (CSPs) move beyond traditional, proprietary hardware to achieve greater efficiency and agility.
For a high-level overview of NFV concepts, see the Network Functions Virtualization Product Guide.
For information on configuring SR-IOV and OVS-DPDK with Red Hat OpenStack Platform 11 director,
see the Network Functions Virtualization Configuration Guide.
CHAPTER 2. SOFTWARE REQUIREMENTS
To install Red Hat OpenStack Platform 11, you must register all systems in the OpenStack environment
using the Red Hat Subscription Manager and subscribe to the required channels. See Registering your
System for details.
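The registration steps are described in that guide. As a rough sketch, registering a system and enabling the core repositories looks similar to the following; the repository names shown are examples for Red Hat OpenStack Platform 11 on Red Hat Enterprise Linux 7 and should be verified against Registering your System:

$ sudo subscription-manager register
$ sudo subscription-manager attach --pool=<pool_id>
$ sudo subscription-manager repos --disable=* \
    --enable=rhel-7-server-rpms \
    --enable=rhel-7-server-extras-rpms \
    --enable=rhel-7-server-rh-common-rpms \
    --enable=rhel-7-server-openstack-11-rpms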
For a list of NICs tested for NFV deployments with Red Hat OpenStack, see Tested NICs.
For a complete list of products and services tested, supported, and certified to perform with Red Hat
technologies (Red Hat Enterprise Linux), see Third Party Software compatible with Red Hat Enterprise
Linux. You can filter the list by product version and software category.
CHAPTER 3. HARDWARE REQUIREMENTS
You can use the Red Hat Technologies Ecosystem to search for certified hardware, software, cloud providers, and components by choosing the category and then selecting the product version.
For a complete list of the certified hardware for Red Hat OpenStack Platform, see Red Hat OpenStack
Platform certified hardware.
3.1. TESTED NICS
SR-IOV
Red Hat tested 10G SR-IOV NICs from Mellanox and QLogic. Red Hat also tested Intel cards for SR-IOV.
NOTE
Red Hat has verified original Intel NICs only and not any other NICs that use the same
drivers.
OVS-DPDK
Red Hat tested the following NICs for OVS-DPDK:
Intel
82598, 82599, X520, X540, X550, X710, XL710, X722.
NOTE
Red Hat has verified original Intel NICs only and not any other NICs that use the same
drivers.
3.2. DISCOVERING YOUR NUMA NODE TOPOLOGY WITH HARDWARE INTROSPECTION
NOTE
You must install and configure the undercloud before you can retrieve NUMA information through hardware introspection. See the Director Installation and Usage Guide for details.
The Bare Metal service hardware inspection extras (inspection_extras) are enabled by default to retrieve hardware details. You can use these hardware details to configure your overcloud. See Configuring the Director for details on the inspection_extras parameter in the undercloud.conf file.
For example, the numa_topology collector is part of these hardware inspection extras and reports the RAM (in kilobytes), the physical network devices (NICs), and the CPUs with their sibling threads for each NUMA node.
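After introspection completes, one way to retrieve the stored NUMA data from the undercloud is with the Bare Metal introspection client; piping the output through jq to filter it is optional:

$ source ~/stackrc
$ openstack baremetal introspection data save <node-UUID> | jq .numa_topology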
The following example shows the retrieved NUMA information for a bare-metal node:
{
"cpus": [
{
"cpu": 1,
"thread_siblings": [
1,
17
],
"numa_node": 0
},
{
"cpu": 2,
"thread_siblings": [
10,
26
],
"numa_node": 1
},
{
"cpu": 0,
"thread_siblings": [
0,
16
],
"numa_node": 0
},
{
"cpu": 5,
"thread_siblings": [
13,
29
],
"numa_node": 1
},
{
"cpu": 7,
"thread_siblings": [
15,
31
],
"numa_node": 1
},
{
"cpu": 7,
"thread_siblings": [
7,
23
],
"numa_node": 0
},
{
"cpu": 1,
"thread_siblings": [
9,
25
],
"numa_node": 1
},
{
"cpu": 6,
"thread_siblings": [
6,
22
],
"numa_node": 0
},
{
"cpu": 3,
"thread_siblings": [
11,
27
],
"numa_node": 1
},
{
"cpu": 5,
"thread_siblings": [
5,
21
],
"numa_node": 0
},
{
"cpu": 4,
"thread_siblings": [
12,
28
],
"numa_node": 1
},
{
"cpu": 4,
"thread_siblings": [
4,
20
],
"numa_node": 0
},
{
"cpu": 0,
"thread_siblings": [
8,
24
],
"numa_node": 1
},
{
"cpu": 6,
"thread_siblings": [
14,
30
],
"numa_node": 1
},
{
"cpu": 3,
"thread_siblings": [
3,
19
],
"numa_node": 0
},
{
"cpu": 2,
"thread_siblings": [
2,
18
],
"numa_node": 0
}
],
"ram": [
{
"size_kb": 66980172,
"numa_node": 0
},
{
"size_kb": 67108864,
"numa_node": 1
}
],
"nics": [
{
"name": "ens3f1",
"numa_node": 1
},
{
"name": "ens3f0",
"numa_node": 1
},
{
"name": "ens2f0",
"numa_node": 0
},
{
"name": "ens2f1",
"numa_node": 0
},
{
"name": "ens1f1",
"numa_node": 0
},
{
"name": "ens1f0",
"numa_node": 0
},
{
"name": "eno4",
"numa_node": 0
},
{
"name": "eno1",
"numa_node": 0
},
{
"name": "eno3",
"numa_node": 0
},
{
"name": "eno2",
"numa_node": 0
}
]
}
3.3. REVIEW BIOS SETTINGS
The recommended BIOS settings for NFV deployments include:
DCA (Direct Cache Access) - Enabled
CHAPTER 4. NETWORK CONSIDERATIONS
A director-based deployment requires the following networks:
Provisioning network - Provides DHCP and PXE boot functions to help discover bare-metal systems for use in the overcloud.
External network - A separate network for remote connectivity to all nodes. The interface
connecting to this network requires a routable IP address, either defined statically, or
dynamically through an external DHCP service.
Choose one of the following NIC configurations for the overcloud nodes:
Single NIC configuration - One NIC for the Provisioning network on the native VLAN and tagged VLANs that use subnets for the different overcloud network types.
Dual NIC configuration - One NIC for the Provisioning network and the other NIC for the External
network.
Dual NIC configuration - One NIC for the Provisioning network on the native VLAN and the other
NIC for tagged VLANs that use subnets for the different overcloud network types.
Multiple NIC configuration - Each NIC uses a subnet for a different overcloud network type.
CHAPTER 5. PLANNING YOUR SR-IOV DEPLOYMENT
5.1. HARDWARE PARTITIONING FOR AN NFV SR-IOV DEPLOYMENT
See Discovering Your NUMA Node Topology to evaluate how your hardware affects the SR-IOV parameters.
A typical topology includes 14 cores per NUMA node on dual-socket Compute nodes. Both hyper-threading (HT) and non-HT cores are supported. Each core has two sibling threads. One core on each NUMA node is dedicated to the host. The VNF handles the SR-IOV interface bonding. All the interrupt requests (IRQs) are routed to the host cores. The VNF cores are dedicated to the VNFs; they provide isolation from other VNFs and from the host. Each VNF must use resources on a single NUMA node, and the SR-IOV NICs used by the VNF must be associated with that same NUMA node. This topology has no virtualization overhead. The host, OpenStack Networking (neutron), and Compute (nova) configuration parameters are exposed in a single file for ease and consistency, and to avoid incoherent configurations that break isolation and cause preemption and packet loss. Host and virtual machine isolation depend on a tuned profile, which takes care of the boot parameters and any OpenStack modifications, based on the list of CPUs to isolate.
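The following sketch shows the kind of settings such a single environment file might collect. The values and interface names are hypothetical; see the Network Functions Virtualization Configuration Guide for the authoritative parameter names and format for your release:

parameter_defaults:
  # Cores dedicated to guest instances (VNF cores); the host core on each NUMA node is excluded
  NovaVcpuPinSet: "2-13,16-27"
  # Memory (MB) reserved for host processes on the Compute node
  NovaReservedHostMemory: 4096
  # Number of virtual functions to create on each SR-IOV interface (hypothetical interface names)
  NeutronSriovNumVFs: "ens2f0:5,ens2f1:5"
  # Mapping of physical networks to the SR-IOV interfaces
  NeutronPhysicalDevMappings: "tenant0:ens2f0,tenant1:ens2f1"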
5.2. TOPOLOGY OF AN NFV SR-IOV DEPLOYMENT
The image shows a VNF that leverages DPDK at an application level and has access to SR-IOV VFs/PFs, combined for better availability or performance, depending on the fabric configuration. DPDK improves performance, while the VF/PF DPDK bonds provide support for failover (availability). The VNF vendor must ensure that the DPDK PMD driver supports the SR-IOV card that is exposed as a VF/PF. The management network uses OVS, so the VNF sees a "mgmt" network device that uses the standard virtIO drivers. Operators can use that device to initially connect to the VNF and ensure that the DPDK application bonds the two VF/PFs properly.
CHAPTER 6. PLANNING YOUR OVS-DPDK DEPLOYMENT
See NFV Performance Considerations for a high-level introduction to CPUs and NUMA topology.
6.1. HOW OVS-DPDK USES CPU PARTITIONING AND NUMA TOPOLOGY
A sample partitioning includes 16 cores per NUMA node on dual-socket Compute nodes. The traffic requires additional NICs because NICs cannot be shared between the host and OVS-DPDK.
NOTE
DPDK PMD threads must be reserved on both NUMA nodes even if a NUMA node does
not have an associated DPDK NIC.
OVS-DPDK performance also depends on reserving a block of memory local to the NUMA node. Use
NICs associated with the same NUMA node that you use for memory and CPU pinning. Also ensure both
interfaces in a bond are from NICs on the same NUMA node.
6.2. UNDERSTANDING OVS-DPDK PARAMETERS
NOTE
When allocating CPU cores, always allocate both sibling threads (logical CPUs) of a physical core together.
See Discovering Your NUMA Node Topology to determine the CPU and NUMA nodes on your Compute
nodes. You use this information to map CPU and other parameters to support the host, guest instance,
and OVS-DPDK process needs.
6.2.1. CPU Parameters
NeutronDpdkCoreList
Provides the CPU cores that are used for the DPDK poll mode drivers (PMD). Choose CPU cores that are associated with the local NUMA nodes of the DPDK interfaces. NeutronDpdkCoreList is used for the pmd-cpu-mask value in Open vSwitch.
Avoid allocating the logical CPUs (both sibling threads) of the first physical core on both
NUMA nodes as these should be used for the HostCpusList parameter.
Performance depends on the number of physical cores allocated for this PMD core list. On the NUMA node that is associated with the DPDK NIC, allocate the required number of cores. Determine the number of physical cores required based on your performance requirements, and include all the sibling threads (logical CPUs) for each physical core.
On the NUMA node that does not have a DPDK NIC, allocate the sibling threads (logical CPUs) of one physical core, excluding the first physical core of the NUMA node. A minimal DPDK poll mode driver allocation is required on that NUMA node, even without DPDK NICs present, to avoid failures when creating guest instances.
NOTE
DPDK PMD threads must be reserved on both NUMA nodes even if a NUMA node does
not have an associated DPDK NIC.
NovaVcpuPinSet
Sets cores for CPU pinning. The Compute node uses these cores for guest instances.
NovaVcpuPinSet is used as the vcpu_pin_set value in the nova.conf file.
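For example, with hypothetical core numbers, setting NovaVcpuPinSet: "4,5,6,7,12,13,14,15" results in the following entry in /etc/nova/nova.conf on the Compute node:

vcpu_pin_set=4,5,6,7,12,13,14,15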
HostIsolatedCoreList
A set of CPU cores isolated from the host processes. This parameter is used as the isolated_cores value in the cpu-partitioning-variables.conf file for the tuned-profiles-cpu-partitioning component.
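As an illustration with hypothetical core numbers, the resulting tuned configuration on the Compute node looks similar to the following, and the profile is then applied with tuned-adm:

# /etc/tuned/cpu-partitioning-variables.conf
isolated_cores=2-7,10-15

$ sudo tuned-adm profile cpu-partitioning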
HostCpusList
Provides CPU cores for non-datapath OVS-DPDK processes, such as handler and revalidator threads. This parameter has no impact on overall data path performance on multi-NUMA-node hardware. This parameter is used for the dpdk-lcore-mask value in Open vSwitch, and these cores are shared with the host.
Allocate the first physical core (and sibling thread) from each NUMA node (even if the NUMA
node has no associated DPDK NIC).
These cores must be mutually exclusive from the list of cores in NeutronDpdkCoreList
and NovaVcpuPinSet.
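To illustrate how these lists map to the Open vSwitch configuration, hypothetical values of HostCpusList "0,1,8,9" and NeutronDpdkCoreList "2,3,10,11" correspond to the bit masks 0x303 (dpdk-lcore-mask) and 0xC0C (pmd-cpu-mask). You can verify the applied values on a deployed Compute node; the output shown here is abbreviated:

$ sudo ovs-vsctl get Open_vSwitch . other_config
{dpdk-init="true", dpdk-lcore-mask="0x303", pmd-cpu-mask="0xC0C"}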
6.2.2. Memory Parameters
NeutronDpdkMemoryChannels
Maps memory channels in the CPU per NUMA node. The NeutronDpdkMemoryChannels parameter is used by Open vSwitch as the other_config:dpdk-extra="-n <value>" value.
Divide the number of memory channels available by the number of NUMA nodes.
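For example, on a dual-socket Intel Xeon Compute node with four memory channels per socket, a value of 4 is commonly used; verify the channel count against your hardware documentation:

NeutronDpdkMemoryChannels: "4"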
NovaReservedHostMemory
Reserves memory in MB for tasks on the host. This value is used by the Compute node as the
reserved_host_memory_mb value in nova.conf.
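For example, a commonly used starting point is to reserve 4096 MB; adjust this value to the actual requirements of your host:

NovaReservedHostMemory: 4096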
NeutronDpdkSocketMemory
Specifies the amount of memory in MB to pre-allocate from the hugepage pool, per NUMA node, for DPDK NICs. This value is used by Open vSwitch as the other_config:dpdk-socket-mem value.
For a NUMA node without a DPDK NIC, use the static recommendation of 1024 MB (1 GB).
To calculate the required socket memory for a NUMA node, add the memory required (MEMORY_REQD_PER_MTU) for each of the MTU values set on the NUMA node, add another 512 MB as buffer, and round the value up to a multiple of 1024.

Sample calculation - MTU 9000 and MTU 2000:
Calculate the required memory for each MTU value based on the MTU rounded up to a multiple of 1024 bytes. The result represents (Memory required for MTU of 9000) + (Memory required for MTU of 2000) + (512 MB buffer), rounded up to a multiple of 1024:
NeutronDpdkSocketMemory: "4096,1024"
Sample calculation - MTU 2000:
Calculate the required memory for the MTU value based on the MTU rounded up to a multiple of 1024 bytes. The result represents (Memory required for MTU of 2000) + (512 MB buffer), rounded up to a multiple of 1024:
NeutronDpdkSocketMemory: "2048,1024"
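As a sketch of how such figures can be derived, assume a per-port mempool of 4096 x 64 packets and roughly 800 bytes of per-packet overhead; these are assumptions for illustration, not values stated in this guide:

MTU 9000, rounded up to 9216: (9216 + 800) x 4096 x 64 bytes, approximately 2.45 GB
MTU 2000, rounded up to 2048: (2048 + 800) x 4096 x 64 bytes, approximately 0.70 GB
NUMA node with both MTUs: 2.45 GB + 0.70 GB + 0.5 GB buffer, approximately 3.65 GB, rounded up to 4096 MB
NUMA node with MTU 2000 only: 0.70 GB + 0.5 GB buffer, approximately 1.20 GB, rounded up to 2048 MB

In both cases, the second value in NeutronDpdkSocketMemory remains the static 1024 MB recommendation for the NUMA node without a DPDK NIC.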
6.2.4. Other Parameters
hugepagesz: Sets the size of the huge pages on a CPU. This value can vary depending on the CPU hardware. Set to 1G for OVS-DPDK deployments (default_hugepagesz=1GB hugepagesz=1G). Check for the pdpe1gb CPU flag to ensure that your CPU supports 1G huge pages.
hugepages count: Sets the number of huge pages available. This value depends on the amount of host memory available. Use most of your available memory, excluding NovaReservedHostMemory. You must also configure the huge pages count value within the flavor of your instances.
isolcpus: Sets the CPU cores to be tuned. This value matches HostIsolatedCoreList.
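Taken together, these settings typically appear on the kernel command line of the Compute node. The following is a sketch with hypothetical values; the mechanism for applying kernel arguments during deployment is described in the Network Functions Virtualization Configuration Guide:

default_hugepagesz=1GB hugepagesz=1G hugepages=64 iommu=pt intel_iommu=on isolcpus=2-7,10-15

The iommu=pt and intel_iommu=on arguments are commonly required when the DPDK NICs are bound to a user space driver such as vfio-pci.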
6.3. TWO NUMA NODE EXAMPLE OVS-DPDK DEPLOYMENT
This example uses a Compute node with the following topology:
NUMA 0 has cores 0-7. The sibling thread pairs are (0,1), (2,3), (4,5), and (6,7).
NUMA 1 has cores 8-15. The sibling thread pairs are (8,9), (10,11), (12,13), and (14,15).
Each NUMA node connects to a physical NIC (NIC1 on NUMA 0 and NIC2 on NUMA 1).
NOTE
Reserve the first physical cores (both thread pairs) on each NUMA node (0,1 and 8,9) for
non data path DPDK processes (HostCpusList).
This example also assumes a 1500 MTU configuration, so the NeutronDpdkSocketMemory value is the same for all use cases:
NeutronDpdkSocketMemory: "1024,1024"
NIC 1 for DPDK, with one physical core for PMD
In this use case, we allocate one physical core on each NUMA node for PMD. The remaining cores (not reserved for HostCpusList) are allocated for guest instances. The resulting parameter settings are:
NeutronDpdkCoreList: "2,3,10,11"
NovaVcpuPinSet: "4,5,6,7,12,13,14,15"
NIC 1 for DPDK, with two physical cores for PMD
In this use case, we allocate two physical cores on NUMA 0 and one physical core on NUMA 1 for PMD. The remaining cores (not reserved for HostCpusList) are allocated for guest instances. The resulting parameter settings are:
NeutronDpdkCoreList: "2,3,4,5,10,11"
NovaVcpuPinSet: "6,7,12,13,14,15"
NIC 2 for DPDK, with one physical core for PMD
In this use case, we allocate one physical core on each NUMA node for PMD. The remaining cores (not reserved for HostCpusList) are allocated for guest instances. The resulting parameter settings are:
NeutronDpdkCoreList: "2,3,10,11"
NovaVcpuPinSet: "4,5,6,7,12,13,14,15"
NIC 2 for DPDK, with two physical cores for PMD
In this use case, we allocate one physical core on NUMA 0 and two physical cores on NUMA 1 for PMD. The remaining cores (not reserved for HostCpusList) are allocated for guest instances. The resulting parameter settings are:
NeutronDpdkCoreList: "2,3,10,11,12,13"
NovaVcpuPinSet: "4,5,6,7,14,15"
NIC 1 and NIC 2 for DPDK, with two physical cores for PMD
In this use case, we allocate two physical cores on each NUMA node for PMD. The remaining cores (not reserved for HostCpusList) are allocated for guest instances. The resulting parameter settings are:
NeutronDpdkCoreList: "2,3,4,5,10,11,12,13"
NovaVcpuPinSet: "6,7,14,15"
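Bringing the example together, a custom environment file for the first use case might collect the values as follows. This is a sketch only; the authoritative file layout and parameter names for your release are in the Network Functions Virtualization Configuration Guide:

parameter_defaults:
  HostCpusList: "0,1,8,9"
  NeutronDpdkCoreList: "2,3,10,11"
  NovaVcpuPinSet: "4,5,6,7,12,13,14,15"
  HostIsolatedCoreList: "2-7,10-15"
  NeutronDpdkMemoryChannels: "4"
  NeutronDpdkSocketMemory: "1024,1024"
  NovaReservedHostMemory: 4096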
CHAPTER 7. PERFORMANCE
Red Hat OpenStack Platform 11 director configures the Compute nodes to enforce resource partitioning
and fine tuning to achieve line rate performance for the guest VNFs. The key performance factors in the
NFV use case are throughput, latency and jitter.
DPDK-accelerated OVS enables high performance packet switching between physical NICs and virtual
machines. OVS 2.6 embeds support for DPDK 16.04 and includes support for vhost-user multiqueue
allowing scalable performance. OVS-DPDK provides line rate performance for guest VNFs.
SR-IOV networking provides enhanced performance characteristics, including improved throughput for
specific networks and virtual machines.
Other important features for performance tuning include huge pages, NUMA alignment, host isolation
and CPU pinning. VNF flavors require huge pages for better performance. Host isolation and CPU
pinning improve NFV performance and prevent spurious packet loss.
See NFV Performance Considerations for a high-level introduction to CPUs and NUMA topology.
CHAPTER 8. FINDING MORE INFORMATION
The Red Hat OpenStack Platform documentation suite can be found here: Red Hat OpenStack Platform
11 Documentation Suite
Component: Red Hat Enterprise Linux
Reference: Red Hat OpenStack Platform is supported on Red Hat Enterprise Linux 7.3. For information on installing Red Hat Enterprise Linux, see the corresponding installation guide at: Red Hat Enterprise Linux.

Component: Red Hat OpenStack Platform
Reference: To install OpenStack components and their dependencies, use the Red Hat OpenStack Platform director. The director uses a basic OpenStack installation as the undercloud to install, configure, and manage the OpenStack nodes in the final overcloud. Be aware that you need one extra host machine for the installation of the undercloud, in addition to the environment necessary for the deployed overcloud. For detailed instructions, see Red Hat OpenStack Platform director Installation and Usage. You can also install the Red Hat OpenStack Platform components manually; see Manual Installation Procedures.

Component: NFV Documentation
Reference: For a high-level overview of NFV concepts, see the Network Functions Virtualization Product Guide.