Red Hat OpenStack Platform 13: Networking with Open Virtual Network
OpenStack Team
[email protected]
Legal Notice
Copyright © 2019 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity
logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other
countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat Software Collections is not formally related to
or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other countries
and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or
sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
A Cookbook for using OVN for OpenStack Networking Tasks.
Table of Contents
CHAPTER 1. OPEN VIRTUAL NETWORK (OVN)
1.1. QUICK STEPS: DEPLOYING CONTAINERIZED OVN ON THE OVERCLOUD
1.2. OVN ARCHITECTURE
CHAPTER 2. PLANNING YOUR OVN DEPLOYMENT
2.1. THE OVN-CONTROLLER ON COMPUTE NODES
2.2. THE OVN COMPOSABLE SERVICE
2.3. HIGH AVAILABILITY WITH PACEMAKER AND DVR
2.4. LAYER 3 HIGH AVAILABILITY WITH OVN
CHAPTER 3. DEPLOYING OVN WITH DIRECTOR
3.1. DEPLOYING OVN WITH DVR
3.2. DEPLOYING THE OVN METADATA AGENT ON COMPUTE NODES
3.2.1. Troubleshooting Metadata issues
3.3. DEPLOYING INTERNAL DNS WITH OVN
CHAPTER 4. MONITORING OVN
4.1. MONITORING OVN LOGICAL FLOWS
4.2. MONITORING OPENFLOWS
CHAPTER 1. OPEN VIRTUAL NETWORK (OVN)
NOTE
This section describes the steps required to deploy OVN using director.
NOTE
OVN is supported only in an HA environment. We recommend that you deploy OVN with
distributed virtual routing (DVR).
CHAPTER 2. PLANNING YOUR OVN DEPLOYMENT
NOTE
To use OVN, your director deployment must use Generic Network Virtualization
Encapsulation (Geneve), and not VXLAN. Geneve allows OVN to identify the network
using the 24-bit Virtual Network Identifier (VNI) field and an additional 32-bit Type Length
Value (TLV) to specify both the source and destination logical ports. You should account
for this larger protocol header when you determine your MTU setting.
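For example, a deployment whose physical network uses the standard 1500-byte MTU might declare that value so that the Networking service can subtract the Geneve encapsulation overhead when it calculates tenant network MTUs. The following is an illustrative sketch only; NeutronGlobalPhysnetMtu, NeutronNetworkType, and NeutronTunnelTypes are tripleo-heat-templates parameters, and the values shown are assumptions for this example:
parameter_defaults:
  NeutronGlobalPhysnetMtu: 1500
  NeutronNetworkType: geneve
  NeutronTunnelTypes: geneve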
The ovn-controller service expects certain key-value pairs in the external_ids column of the
Open_vSwitch table; puppet-ovn uses puppet-vswitch to populate these fields. Below are the
key-value pairs that puppet-vswitch configures in the external_ids column:
hostname=<HOST NAME>
ovn-encap-ip=<IP OF THE NODE>
ovn-encap-type=geneve
ovn-remote=tcp:OVN_DBS_VIP:6642
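These values are populated by puppet-vswitch during deployment, but you can inspect or correct them on a node with ovs-vsctl. A minimal sketch, using the same placeholders as above:
ovs-vsctl get Open_vSwitch . external_ids
ovs-vsctl set Open_vSwitch . \
    external_ids:ovn-remote="tcp:OVN_DBS_VIP:6642" \
    external_ids:ovn-encap-type=geneve \
    external_ids:ovn-encap-ip=<IP OF THE NODE>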
In addition to the required HA profile, Red Hat recommends that you deploy OVN with DVR to ensure the availability of networking services. With the HA profile enabled, the OVN database servers start on all the Controllers, and Pacemaker then selects one Controller to serve in the master role.
The ovsdb-server service does not currently support active-active mode. It does support HA in master-slave mode, which Pacemaker manages using the Open Cluster Framework (OCF) resource agent script. Running ovsdb-server in master mode allows write access to the database, while the other slave ovsdb-server services replicate the database locally from the master and do not allow write access.
The OVN database servers are started on each Controller node, and the controller owning the virtual IP
address (OVN_DBS_VIP) runs the OVN DB servers in master mode. The OVN ML2 mechanism driver
and ovn-controller then connect to the database servers using the OVN_DBS_VIP value. In the
event of a failover, Pacemaker moves the virtual IP address (OVN_DBS_VIP) to another controller, and
also promotes the OVN database server running on that node to master.
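To confirm which Controller currently holds the master role, you can query Pacemaker and the database server itself. This is a sketch only; the name of the ovn-dbs resource and the control socket path depend on how your deployment packages and containerizes OVN:
pcs status | grep -i -A 3 ovn
ovs-appctl -t /var/run/openvswitch/ovnsb_db.ctl ovsdb-server/sync-status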
NOTE
L3HA uses OVN to balance the routers back to the original gateway nodes to avoid any
nodes becoming a bottleneck.
BFD monitoring
OVN uses the Bidirectional Forwarding Detection (BFD) protocol to monitor the availability of the
gateway nodes. This protocol is encapsulated on top of the Geneve tunnels established from node to
node.
Each gateway node monitors all the other gateway nodes in the deployment in a star topology. Gateway nodes also monitor the Compute nodes so that the gateways can enable and disable routing of packets and ARP responses and announcements.
Each compute node uses BFD to monitor each gateway node and automatically steers external traffic,
such as source and destination Network Address Translation (SNAT and DNAT), through the active
gateway node for a given router. Compute nodes do not need to monitor other compute nodes.
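You can view the state of these BFD sessions directly on a node, because they run over the tunnel interfaces that ovn-controller creates. A sketch; the tunnel interface names vary by deployment:
ovs-appctl bfd/show
ovs-vsctl --columns=name,bfd_status list Interface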
NOTE
External network failures are not detected, as they would be with an ML2-OVS
configuration.
Failover is triggered when, for example, the gateway node becomes disconnected from the network (tunneling interface).
NOTE
This BFD monitoring mechanism only works for link failures, not for routing failures.
CHAPTER 3. DEPLOYING OVN WITH DIRECTOR
Deploying OVN with director triggers the following actions:
1. Enables the OVN ML2 plugin and generates the necessary configuration options.
2. Deploys the OVN databases and the ovn-northd service on the controller node(s).
2. Configure a Networking port for the Compute node on the external network by setting OS::TripleO::Compute::Ports::ExternalPort to an appropriate value, for example:
OS::TripleO::Compute::Ports::ExternalPort: ../network/ports/external.yaml
For production environments (or test environments that require special customization, such as network isolation or dedicated NICs), you can use the example environments as a guide. Pay special attention to the bridge mapping type parameters used, for example, by OVS, and to any references to external-facing bridges.
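A deployment command for an OVN DVR environment might then look like the following. This is a sketch; the path to the neutron-ovn-dvr-ha.yaml environment file depends on your tripleo-heat-templates installation, and ovn-extras.yaml is a name chosen here for your custom environment file:
openstack overcloud deploy --templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-ovn-dvr-ha.yaml \
    -e ovn-extras.yaml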
OpenStack guest instances access the Networking metadata service available at the link-local IP address 169.254.169.254. The neutron-ovn-metadata-agent has access to the host networks where the Compute metadata API exists. Each HAProxy is in a network namespace that is not able to reach the appropriate host network. HAProxy adds the necessary headers to the metadata API request and then forwards the request to the neutron-ovn-metadata-agent over a UNIX domain socket.
The OVN Networking service creates a unique network namespace for each virtual network that enables
the metadata service. Each network accessed by the instances on the Compute node has a
corresponding metadata namespace (ovnmeta-<net_uuid>).
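On a Compute node you can list these namespaces and confirm that the metadata proxy is listening inside one of them. A sketch; substitute a real network UUID for <net_uuid>:
ip netns | grep ovnmeta
ip netns exec ovnmeta-<net_uuid> ip addr show
ip netns exec ovnmeta-<net_uuid> ss -tlnp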
USER@INSTANCE_IP_ADDRESS is the user name and IP address for the local instance you want to
troubleshoot.
parameter_defaults:
  NeutronPluginExtensions: "dns"
  NeutronDnsDomain: "mydns-example.org"
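Save these settings in an environment file and include it when you deploy or update the overcloud. A sketch; neutron-internal-dns.yaml is a file name chosen for this example:
openstack overcloud deploy --templates \
    <existing environment files and options> \
    -e neutron-internal-dns.yaml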
CHAPTER 4. MONITORING OVN
OVN ports are logical entities that reside somewhere on a network, not physical ports on a single
switch.
OVN gives each table in the pipeline a name in addition to its number. The name describes the
purpose of that stage in the pipeline.
The actions supported in OVN logical flows extend beyond those of OpenFlow. You can
implement higher level features, such as DHCP, in the OVN logical flow syntax.
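The logical flow listing that these observations describe comes from the OVN southbound database. You can reproduce it with ovn-sbctl on the node that hosts that database; in a containerized deployment you may need to run the command inside the OVN database container:
ovn-sbctl lflow-list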
See the OpenStack and OVN Tutorial for a complete walkthrough of OVN monitoring options with this command.
ovn-trace
The ovn-trace command can simulate how a packet travels through the OVN logical flows, or help you
determine why a packet is dropped. Provide the ovn-trace command with the following parameters:
DATAPATH
The logical switch or logical router where the simulated packet starts.
MICROFLOW
The simulated packet, in the syntax used by the ovn-sb database.
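An invocation for the scenario discussed below might look like the following. This is an illustrative sketch; the logical switch sw0 and the ports sw0-port1 and sw0-port2 come from the walk-through that follows, while the MAC addresses are placeholder values:
ovn-trace --minimal sw0 'inport == "sw0-port1" && eth.src == 00:00:00:00:00:01 && eth.dst == 00:00:00:00:00:02'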
This example uses the --minimal output option for a simulated packet and shows that the packet reaches its destination:
In more detail, the --summary output for this same simulated packet shows the full execution pipeline:
The packet enters the sw0 network from the sw0-port1 port and runs the ingress pipeline.
The outport variable is set to sw0-port2, indicating that the intended destination for this packet is sw0-port2.
The packet is output from the ingress pipeline, which brings it to the egress pipeline for sw0 with
the outport variable set to sw0-port2.
The output action is executed in the egress pipeline, which outputs the packet to the current
value of the outport variable, which is sw0-port2.