
Can OpenStack Run Over a VXLAN Fabric Without an Overlay Controller?
eos.arista.com/can-openstack-run-over-a-vxlan-fabric-without-an-overlay-controller

apech

At the OpenStack Summit in Hong Kong at the end of 2013, I gave a talk (video, slides) on
the requirements, tradeoffs, and potential designs for deploying OpenStack over a VXLAN
fabric. Enough time has passed that it feels like time to revisit the topic. More specifically, I
want to focus on whether you can now build such a fabric with a mix of both hardware and
software networking elements while running only standalone Neutron, which wasn't really
possible back when I originally gave the talk. At the time, using an external overlay
controller was considered the only way to make it work, but a lot has changed in two years.

The need for cooperation between physical and virtual networking elements is even more
important now that Ironic and Neutron work together to provide automated orchestration of
tenant networks for both virtual machines and bare metal servers. But I won’t rehash the
reasons for using VXLAN, the requirements for production deployments, and the key
design decisions and associated tradeoffs here, as they largely remain the same – plus you
can just watch my original talk :)

One of the proposed designs that meets all the requirements is to run Neutron
without an external overlay controller (named "Standalone Neutron") managing a
mix of software and hardware VTEPs (VXLAN Tunnel End Points). The vswitches on
each compute node are the software VTEPs and front the virtual machines. The
top-of-rack leaf switches can be hardware VTEPs for any physical infrastructure that
needs to be brought into the tenant networks (i.e., physical servers, existing
infrastructure not managed by OpenStack, non-virtualized firewalls and load
balancers, etc.). Here's the basic picture from my original presentation:

[Figure from the original presentation: standalone Neutron managing a mix of software and hardware VTEPs]
Back in 2013, this sort of design wasn't really possible in practice, as Neutron lacked both:

a way to exchange VXLAN reachability information between physical and virtual
VTEPs, and
a general model and API for VXLAN gateway nodes, to enable mapping physical
infrastructure into tenant networks

Let’s first recap why both of these are necessary.

When building a VXLAN fabric, all of the different VTEPs must learn about every other
VTEP in the network and which VXLAN networks they’re participating in. This is required so
that each VTEP knows where to flood all BUM (Broadcast, Unknown unicast, and
Multicast) traffic. In a standalone Neutron world, Neutron itself is responsible for configuring
each software VTEP (i.e., each vswitch) with the location of every other software VTEP. To
bring other VTEPs into the picture, there needs to be a way to exchange this same
information with those other entities.
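
To make that concrete, here's a minimal sketch of the kind of flood-list programming Neutron's OVS agent performs on each compute node: one VXLAN tunnel port per peer VTEP, so the vswitch knows where to replicate BUM traffic. It assumes Open vSwitch is installed and uses a Neutron-style "br-tun" tunnel bridge; the peer addresses are illustrative.

```python
# Minimal sketch: programming a software VTEP's flood list the way
# Neutron's OVS agent does -- one VXLAN tunnel port per peer VTEP.
# Assumes Open vSwitch is installed; bridge name and peers are illustrative.
import subprocess

BRIDGE = "br-tun"                        # Neutron's tunnel bridge
PEER_VTEPS = ["10.0.0.11", "10.0.0.12"]  # every other VTEP in the fabric

def add_vxlan_port(bridge: str, remote_ip: str) -> None:
    port = "vxlan-" + remote_ip.replace(".", "-")
    subprocess.check_call([
        "ovs-vsctl", "--may-exist", "add-port", bridge, port,
        "--", "set", "interface", port, "type=vxlan",
        "options:remote_ip=" + remote_ip,
        "options:key=flow",  # VNI chosen per-flow by the agent's OpenFlow rules
    ])

for ip in PEER_VTEPS:
    add_vxlan_port(BRIDGE, ip)
```

When a new compute node (or hardware VTEP) joins, every existing VTEP needs the equivalent of one more of these tunnel ports, which is exactly the information-exchange problem described above.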

A related issue is the need to model VXLAN gateway nodes in OpenStack. Ultimately, to
bridge physical infrastructure into tenant networks, Neutron needs to provide an API that
lets the end user specify which parts of the physical network map into which tenant
networks. Without such an API, the only alternative is to do this manually, which isn't really
practical.

So where do things stand now, two years later? Is this deployment now possible?

The short answer is – yes! As of the Kilo release, Neutron has a model for managing VXLAN
gateway nodes via the L2 Gateway project, an effort chaired by Arista Distinguished
Engineer Sukhdev Kapur. There's also an L2 Gateway plugin, written by our friends at HP,
that implements the standard OVSDB VTEP schema to manage hardware VXLAN gateways.
Together, this means a standalone Neutron can now in fact manage a mix of hardware
and software VTEPs without a separate overlay controller. In Arista's case, the
L2 Gateway plugin integrates with our CloudVision eXchange via OVSDB, thus bringing the
physical infrastructure under Neutron's control.
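
As a rough illustration of what the L2 Gateway API exposes, here's a hedged sketch of its two REST resources from the networking-l2gw project: register a top-of-rack switch as an L2 gateway, then bind one of its ports into a tenant network. The endpoint, token, device and port names, and UUIDs are all placeholders.

```python
# Hedged sketch of the networking-l2gw REST API. All credentials,
# names, and UUIDs below are illustrative placeholders.
import requests

NEUTRON = "http://controller:9696/v2.0"
HEADERS = {"X-Auth-Token": "ADMIN_TOKEN", "Content-Type": "application/json"}

# 1. Model the hardware VTEP (a top-of-rack switch) as an L2 gateway.
gw = requests.post(NEUTRON + "/l2-gateways", headers=HEADERS, json={
    "l2_gateway": {
        "name": "rack1-tor",
        "devices": [{
            "device_name": "tor-switch-1",
            "interfaces": [{"name": "Ethernet10"}],
        }],
    },
}).json()["l2_gateway"]

# 2. Connect the gateway to a tenant network: traffic arriving on the
#    physical port (VLAN 100 on the wire) joins that network's VXLAN segment.
requests.post(NEUTRON + "/l2-gateway-connections", headers=HEADERS, json={
    "l2_gateway_connection": {
        "l2_gateway_id": gw["id"],
        "network_id": "TENANT_NET_UUID",
        "segmentation_id": 100,
    },
})
```

Behind this API, the plugin pushes the binding down to the switch over OVSDB, so the hardware VTEP learns the same reachability information Neutron already distributes to its software VTEPs.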

The introduction of OVN, an open-source overlay controller for Open vSwitch, provides
another interesting alternative to deploying standalone Neutron. Back in 2013, the only
options ready for production deployments were commercial overlay controllers. This
created the need for an open-source alternative, where having Neutron itself play the role
of the overlay controller managing a mix of software and hardware VTEPs was one
solution. But OVN also supports the standard OVSDB VTEP schema for managing hardware
VXLAN gateways, thus enabling a VXLAN fabric with a mix of hardware and software VTEPs.
All that's missing is for it to expose the L2 Gateway API by implementing an L2 Gateway
plugin for OVN. So while relatively new, OVN is a compelling option when looking for an
open-source alternative to commercial overlay controllers.

Another new development in this context is the emergence of EVPN as an open control
plane for exchanging VXLAN reachability information. EVPN was in use by a few vendors
back in 2013, but largely as a mechanism for coordinating between their own controllers, or
between their controllers and their hardware devices. The big change over the past two
years is that there’s now broader acceptance across all the major vendors on this direction.
While this doesn’t solve the question of how Neutron can configure physical infrastructure
to map into tenant networks, it does make the problem of how to share reachability
information between VTEPs from multiple vendors more achievable. In the context of
OpenStack and virtual machines, running EVPN to each compute node doesn’t seem that
tractable, so some intermediary will continue to be necessary. There are various proposals
to bring EVPN in open-source form to Neutron, including this one, but it still remains to be
seen how and when this will happen.
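
For a sense of what EVPN actually carries, here's a small illustrative sketch (not tied to any particular BGP implementation; field names are descriptive only) of the information in an EVPN type-3 (Inclusive Multicast Ethernet Tag) route, the route type a VTEP advertises so its peers know where to send BUM traffic for a given VNI.

```python
# Illustrative sketch of the reachability information an EVPN type-3
# (Inclusive Multicast Ethernet Tag) route conveys for a VXLAN fabric.
# Field names are descriptive, not tied to any BGP library's API.
from dataclasses import dataclass

@dataclass
class EvpnType3Route:
    route_distinguisher: str  # scopes the route, e.g. "10.0.0.11:100"
    ethernet_tag: int         # 0 for VLAN-based service
    originating_vtep: str     # the advertising VTEP's IP address
    vni: int                  # VXLAN Network Identifier (carried in the
                              # PMSI tunnel attribute)

# A VTEP joining VNI 10100 would advertise something like:
route = EvpnType3Route(
    route_distinguisher="10.0.0.11:100",
    ethernet_tag=0,
    originating_vtep="10.0.0.11",
    vni=10100,
)
# Every peer that imports this route adds 10.0.0.11 to its BUM flood
# list for VNI 10100 -- the same job standalone Neutron does for its
# software VTEPs, but done in-band via BGP.
```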

So while there is always plenty more work to do, it’s gratifying to look back and see the
progress that’s been made in pushing broader deployment options for running OpenStack
on top of a VXLAN network fabric!
