
AHV Networking

Nutanix Best Practices

Version 2.1 • June 2020 • BP-2071


AHV Networking

Copyright
Copyright 2020 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual
property laws.
Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other
marks and names mentioned herein may be trademarks of their respective companies.


Contents

1. Executive Summary.................................................................................5

2. Introduction.............................................................................................. 6
2.1. Audience.........................................................................................................................6
2.2. Purpose.......................................................................................................................... 6

3. Nutanix Enterprise Cloud Overview...................................................... 8


3.1. Nutanix HCI Architecture............................................................................................... 9

4. AHV Networking Overview................................................................... 10


4.1. Open vSwitch............................................................................................................... 10
4.2. Bridges......................................................................................................................... 10
4.3. Ports............................................................................................................................. 10
4.4. Bonds........................................................................................................................... 11
4.5. Bridge Chaining............................................................................................................12
4.6. Virtual Local Area Networks (VLANs)..........................................................................14
4.7. IP Address Management (IPAM)................................................................................. 15

5. AHV Network Management...................................................................17


5.1. View Network Status....................................................................................................17
5.2. Prism Uplink Configuration.......................................................................................... 21
5.3. Production Network Changes...................................................................................... 21
5.4. OVS Command Line Configuration............................................................................. 23

6. AHV Networking Best Practices.......................................................... 25


6.1. Open vSwitch Bridge and Bond Recommendations....................................................25
6.2. Load Balancing within Bond Interfaces....................................................................... 30
6.3. VLANs for AHV Hosts and CVMs............................................................................... 38
6.4. VLAN for User VMs..................................................................................................... 41
6.5. CVM Network Segmentation........................................................................................44
6.6. Jumbo Frames............................................................................................................. 44


7. Conclusion..............................................................................................46

Appendix..........................................................................................................................47
AHV Networking Terminology............................................................................................. 47
AHV Networking Best Practices Checklist..........................................................................47
AHV Command Line Tutorial.............................................................................................. 51
AHV Networking Command Examples............................................................................... 54
References...........................................................................................................................56
About the Authors............................................................................................................... 56
About Nutanix...................................................................................................................... 56

List of Figures................................................................................................................ 57

List of Tables.................................................................................................................. 58


1. Executive Summary
The default networking that we describe in the AHV Best Practices Guide covers a wide range of
scenarios that Nutanix administrators encounter. However, for those situations with unique VM
and host networking requirements that are not covered elsewhere, use this advanced networking
guide.
The default AHV networking configuration provides a highly available network for user VMs
and the Nutanix Controller VM (CVM). This default configuration includes simple control and
segmentation of user VM traffic using VLANs, as well as IP address management. Network
visualization for AHV available in Prism also provides a view of the guest and host network
configuration for troubleshooting and verification.
This advanced guide is useful when the defaults don't match customer requirements.
Configuration options include host networking high availability and load balancing mechanisms
beyond the default active-backup, tagged VLAN segmentation for host and CVM traffic, and
detailed command line configuration techniques for situations where a GUI may not be sufficient.
The tools we present here enable you to configure AHV to meet the most demanding network
requirements.


2. Introduction

2.1. Audience
This best practices guide is part of the Nutanix Solutions Library. We wrote it for AHV
administrators configuring advanced host and VM networking. Readers of this document should
already be familiar with the AHV Best Practices Guide, which covers basic networking.

2.2. Purpose
In this document, we cover the following topics:
• Open vSwitch in AHV.
• VLANs for hosts, CVMs, and user VMs.
• IP address management (IPAM).
• Network adapter teaming within bonds.
• Network adapter load balancing.
• Command line overview and tips.

Table 1: Document Version History

Version Number | Published     | Notes
1.0            | February 2017 | Original publication.
1.1            | February 2018 | Added jumbo frame configuration, bond name recommendation, and considerations for staging installation with flat switch.
1.2            | October 2018  | Updated product naming and recommendations regarding the balance-slb bond mode.
1.3            | December 2018 | Updated the Open vSwitch Bridge and Bond Recommendations section.
2.0            | March 2020    | Added production workflow instructions, bridge chaining, and VM networking enhancements.
2.1            | June 2020     | Updated the Nutanix overview, jumbo frame recommendations, and terminology.


3. Nutanix Enterprise Cloud Overview


Nutanix delivers a web-scale, hyperconverged infrastructure solution purpose-built for
virtualization and both containerized and private cloud environments. This solution brings the
scale, resilience, and economic benefits of web-scale architecture to the enterprise through the
Nutanix enterprise cloud platform, which combines the core HCI product families—Nutanix AOS
and Nutanix Prism management—along with other software products that automate, secure, and
back up cost-optimized infrastructure.
Available attributes of the Nutanix enterprise cloud OS stack include:
• Optimized for storage and compute resources.
• Machine learning to plan for and adapt to changing conditions automatically.
• Intrinsic security features and functions for data protection and cyberthreat defense.
• Self-healing to tolerate and adjust to component failures.
• API-based automation and rich analytics.
• Simplified one-click upgrades and software life cycle management.
• Native file services for user and application data.
• Native backup and disaster recovery solutions.
• Powerful and feature-rich virtualization.
• Flexible virtual networking for visualization, automation, and security.
• Cloud automation and life cycle management.
The Nutanix software stack can be broken down into three main components: an HCI-
based distributed storage fabric, management and operational intelligence from Prism,
and AHV virtualization. Nutanix Prism furnishes one-click infrastructure management for
virtual environments running on AOS. AOS is hypervisor agnostic, supporting two third-party
hypervisors—VMware ESXi and Microsoft Hyper-V—in addition to the native Nutanix hypervisor,
AHV.


Figure 1: Nutanix Enterprise Cloud OS Stack

3.1. Nutanix HCI Architecture


Nutanix does not rely on traditional SAN or network-attached storage (NAS) or expensive storage
network interconnects. It combines highly dense storage and server compute (CPU and RAM)
into a single platform building block. Each building block delivers a unified, scale-out, shared-
nothing architecture with no single points of failure.
The Nutanix solution requires no SAN constructs, such as LUNs, RAID groups, or expensive
storage switches. All storage management is VM-centric, and I/O is optimized at the VM virtual
disk level. The software solution runs on nodes from a variety of manufacturers that are either
entirely solid-state storage with NVMe for optimal performance or a hybrid combination of SSD
and HDD storage that provides a combination of performance and additional capacity. The
storage fabric automatically tiers data across the cluster to different classes of storage devices
using intelligent data placement algorithms. For best performance, algorithms make sure the
most frequently used data is available in memory or in flash on the node local to the VM.
To learn more about Nutanix enterprise cloud software, visit the Nutanix Bible and Nutanix.com.


4. AHV Networking Overview


AHV uses Open vSwitch (OVS) to connect the CVM, the hypervisor, and user VMs to each other
and to the physical network on each node. The CVM manages the OVS inside the AHV host. Do
not allow any external tool to modify the OVS.

4.1. Open vSwitch


Open vSwitch (OVS) is an open source software switch designed to work in a multiserver
virtualization environment. In AHV, the OVS behaves like a layer-2 learning switch that maintains
a MAC address table. The hypervisor host and VMs connect to virtual ports on the switch.
AHV exposes many popular OVS features, such as VLAN tagging, load balancing, and link
aggregation control protocol (LACP). Each AHV server maintains an OVS instance, managed as
a single logical switch through Prism.

4.2. Bridges
A bridge acts as a virtual switch to manage traffic between physical and virtual network
interfaces. The default AHV configuration includes an OVS bridge called br0 and a native Linux
bridge called virbr0. The virbr0 Linux bridge carries management and storage communication
between the CVM and AHV host. All other storage, host, and VM network traffic flows through
the br0 OVS bridge by default. The AHV host, VMs, and physical interfaces use "ports" for
connectivity to the bridge.
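
As a quick, read-only check, you can list the bridges present on a host from the CVM using
standard OVS tooling. This is a sketch only; the exact set of bridges returned depends on your
AOS version and the bridge chain described later in this document.
nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl list-br"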

4.3. Ports
Ports are logical constructs created in a bridge that represent connectivity to the virtual switch.
Nutanix uses several port types, including internal, tap, VXLAN, and bond.
• An internal port—with the same name as the default bridge (br0)—acts as the AHV host
management interface.
• Tap ports connect VM virtual NICs to the bridge.
• VXLAN ports are only used for the IP address management (IPAM) functionality provided by
AHV.
• Bonded ports provide NIC teaming for the physical interfaces of the AHV host.
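
To see these port types on a running host, one read-only option is to list the ports attached to
the default bridge (a sketch; the tap and bond port names vary with your VM and uplink
configuration):
nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl list-ports br0"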


4.4. Bonds
Bonded ports aggregate the physical interfaces on the AHV host for fault tolerance and load
balancing. By default, the system creates a bond named br0-up in bridge br0 containing all
physical interfaces. If you follow older examples, changes made to the default bond (br0-up) with
manage_ovs commands may rename it to bond0, so keep in mind that the bond on your system may be
named differently than in the diagram below. Nutanix recommends using the name br0-up to quickly
identify this interface as the bridge br0 uplink. Using this naming scheme, you can also easily
distinguish uplinks for additional bridges from each other.
OVS bonds allow for several load-balancing modes to distribute traffic, including active-backup,
balance-slb, and balance-tcp. Administrators can also activate LACP for a bond to negotiate link
aggregation with a physical switch. During installation, the bond_mode defaults to active-backup,
which is the configuration we recommend for ease of use.
The following diagram illustrates the networking configuration of a single host immediately after
imaging. The best practice is to use only the 10 Gb or faster NICs and to disconnect the 1 Gb
NICs if you do not need them. For additional information on bonds, please refer to the Best
Practices section below.

Note: Only utilize NICs of the same speed within the same bond.


Figure 2: Post-Imaging Network State

Connections from the server to the physical switch use 10 GbE or faster networking. You can
establish connections between the switches with 40 GbE or faster direct links, or through a leaf-
spine network topology (not shown). The IPMI management interface of the Nutanix node also
connects to the out-of-band management network, which may connect to the production network.
Each node always has a single connection to the management network, but we have omitted this
element from further images in this document for clarity and simplicity.
For more information on the physical network recommendations for a Nutanix cluster, refer to the
Physical Networking Best Practices Guide.

4.5. Bridge Chaining


From AOS 5.5 onward, all AHV hosts use a bridge chain (multiple OVS bridges connected in a
line) as the backend for features like microsegmentation. Each bridge in the chain performs a
specific set of functions. Physical interfaces connect to bridge brX, and VMs connect to bridge
brX.local. Between these two bridges are br.microseg for microsegmentation and br.nf for
directing traffic to network function VMs. The br.mx and br.dmx bridges allow multiple uplink
bonds in a single AHV host (such as br0-up and br1-up) to use these advanced networking
features.

Figure 3: AHV Bridge Chain

Traffic from VMs enters the bridge chain at brX.local and flows through the chain to brX, which
makes local switching decisions. The brX bridge either forwards the traffic to the physical network
or sends it back through the chain to reach another VM. Traffic from the physical network takes
the opposite path, from brX all the way to brX.local. All VM traffic must flow through the bridge
chain, which applies microsegmentation and network functions.


The management of the bridge chain is automated, and no user configuration of the chain is
required or supported. Because there are no configurable components, we don’t include this
bridge chain, which exists between the physical interfaces and the user VMs, in other diagrams.

4.6. Virtual Local Area Networks (VLANs)


AHV supports the use of VLANs for the CVM, AHV host, and user VMs. We discuss the steps for
assigning VLANs to the AHV host and CVM in the Best Practices section below. You can easily
create and manage a virtual NIC’s networks for user VMs in the Prism GUI, the Acropolis CLI
(aCLI), or using REST without any additional AHV host configuration.
Each virtual network in AHV maps to a single VLAN and bridge. You must also create each VLAN
used by an AHV virtual network on the physical top-of-rack switches, but integration between
AHV and the physical switch can automate this provisioning. In the following figure,
we’re using Prism to assign a network name of Production and the VLAN ID 27 for a network
on the default bridge, br0. This process adds the VLAN tag 27 to all AHV hosts in the cluster on
bridge br0.
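
You can create the equivalent network from the aCLI instead of Prism. The following sketch
reuses the Production name and VLAN ID 27 from this example and relies on the default bridge br0:
nutanix@CVM$ acli net.create Production vlan=27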

Figure 4: Prism UI Network Creation

By default, all VM virtual NICs are created in “access” mode on br0, which permits only one
VLAN per virtual network. However, you can choose to configure a virtual NIC in “trunked” mode
using the aCLI instead, allowing multiple VLANs on a single VM NIC for network-aware user
VMs. For more information on virtual NIC modes or multiple bridges, refer to the Best Practices
section below.


4.7. IP Address Management (IPAM)


IPAM allows AHV to assign IP addresses automatically to VMs using DHCP. Administrators can
configure each virtual network with a specific IP subnet, associated domain settings, and IP
address pools available for assignment to VMs.

Figure 5: IPAM

Administrators can use AHV with IPAM to deliver a complete virtualization deployment, including
network address management, from the Prism interface. To avoid address overlap, be sure to
work with your network team to reserve a range of addresses for VMs before enabling the IPAM
feature.

Note: When using multiple bridges, only a single bridge and VLAN combination can
be a managed network for each VLAN. For example, if br0 vlan100 is a managed
network, then br1 vlan100 cannot be a managed network.


AHV assigns an IP address from the address pool when creating a managed VM NIC; the
address releases back to the pool when the VM NIC or VM is deleted. With a managed network,
AHV intercepts DHCP requests from user VMs and bypasses traditional network-based DHCP
servers. AHV uses the last usable IP address in the assigned subnet for the managed network
DHCP server unless you select Override DHCP server when creating the network.
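
As an illustration, the following aCLI sketch creates a managed network and adds an address
pool. The subnet, gateway, and pool range shown here are hypothetical placeholders; substitute
the range your network team has reserved:
nutanix@CVM$ acli net.create Production vlan=27 ip_config=10.10.27.1/24
nutanix@CVM$ acli net.add_dhcp_pool Production start=10.10.27.100 end=10.10.27.200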


5. AHV Network Management


The following sections illustrate the most common methods used to manage network
configuration for VMs and AHV hosts. Some information is visible in both Prism and the CLI, and
we show both outputs when available. Refer to the AHV Command Line section in the appendix
for more information on CLI usage.

5.1. View Network Status


Viewing Network Configuration for VMs in Prism
Select Network Configuration, then Virtual Networks to view VM virtual networks from the VM
page, as shown in the figure below.

Figure 6: Prism UI Network List

You can see individual VM network details under the Table view on the VM page by selecting the
desired VM and choosing Update, as shown in the figure below.


Figure 7: Prism UI VM Network Details

Viewing AHV Host Network Configuration in Prism


Select the Network page to view VM- and host-specific networking details. When you select a
specific AHV host, Prism displays the network configuration, as shown in the figure below. For
more information on the new Network Visualization feature, refer to the Prism Web Console
Guide.


Figure 8: AHV Host Network Visualization

View AHV Host Network Configuration in the CLI


You can view Nutanix AHV network configuration in detail using the aCLI, AHV bash, and OVS
commands as shown in the appendix. The following sections outline basic administration tasks
and the commands needed to review and validate a configuration.
Administrators can perform all management operations through the Prism web interface and APIs
or through SSH access to the Controller VM.

Tip: For better security and a single point of management, avoid connecting directly
to the AHV hosts. All AHV host operations can be performed from the CVM by
connecting to 192.168.5.1, the internal management address of the AHV host.


View Physical NIC Status from the CVM


To verify the names, speed, and connectivity status of all AHV host interfaces, use the
manage_ovs show_uplinks and manage_ovs show_interfaces commands.
nutanix@CVM$ manage_ovs --bridge_name br0 show_uplinks
Uplink ports: br0-up
Uplink ifaces: eth3 eth2
nutanix@CVM$ manage_ovs show_interfaces
name mode link speed
eth0 1000 True 1000
eth1 1000 True 1000
eth2 10000 True 10000
eth3 10000 True 10000

View OVS Bridge and Bond Status from the CVM


Verify bridge and bond details in the AHV host using the show_uplinks command.
nutanix@CVM$ manage_ovs show_uplinks
Bridge: br0
Bond: br0-up
bond_mode: active-backup
interfaces: eth3 eth2
lacp: off
lacp-fallback: true
lacp_speed: off
Bridge: br1
Bond: br1-up
bond_mode: active-backup
interfaces: eth1 eth0
lacp: off
lacp-fallback: true
lacp_speed: off


View VM Network Configuration from a CVM Using the aCLI


Connect to any CVM in the Nutanix cluster to launch the aCLI and view cluster-wide VM network
details.
nutanix@CVM$ acli
<acropolis> net.list
Network name Network UUID Type Identifier
Production ea8468ec-c1ca-4220-bc51-714483c6a266 VLAN 27
vlan.0 a1850d8a-a4e0-4dc9-b247-1849ec97b1ba VLAN 0
<acropolis> net.list_vms vlan.0
VM UUID VM name MAC address
7956152a-ce08-468f-89a7-e377040d5310 VM1 52:54:00:db:2d:11
47c3a7a2-a7be-43e4-8ebf-c52c3b26c738 VM2 52:54:00:be:ad:bc
501188a6-faa7-4be0-9735-0e38a419a115 VM3 52:54:00:0c:15:35

5.2. Prism Uplink Configuration


From AOS 5.11 onward, you can manage interfaces and load balancing for bridge br0 and bond
br0-up from the Prism web interface. This method automatically changes all hosts in the cluster
and performs the appropriate maintenance mode and VM migration. Refer to the Prism Uplink
Configuration documentation for complete instructions.
Nutanix recommends using the Prism web interface exclusively for configurations that require
only a single bridge and bond. If you need multiple bridges and bonds, use the CLI to modify the
network configuration instead.
Do not use any manage_ovs commands once you've used the Prism uplink configuration,
because this configuration automatically reverts changes made to br0 and br0-up using
manage_ovs.

5.3. Production Network Changes


Note: Exercise caution when making changes that impact the network connectivity
of Nutanix nodes.

We strongly recommend performing changes on one node (AHV host and CVM) at a time, after
making sure that the cluster can tolerate a single node outage. To prevent network and storage
disruption, place the AHV host and CVM of each node in maintenance mode before making
network changes. While in maintenance mode, the system migrates VMs off the AHV host and
directs storage services to another CVM.


Follow the steps in this section on one node at a time to make network changes to a Nutanix
cluster that is connected to a production network.
• Use SSH to connect to the first CVM you want to update. Check its name and IP to make sure
you are connected to the correct CVM. Verify failure tolerance (see the example at the end of
this section), and do not proceed if the cluster cannot tolerate at least one node failure.
• Verify that the target AHV host can enter maintenance mode.
nutanix@CVM$ acli host.enter_maintenance_mode_check <host ip>

• Put the AHV host in maintenance mode.


nutanix@CVM$ acli host.enter_maintenance_mode <host ip>

• Find the <host ID> in the output of the command ncli host list.
nutanix@CVM$ ncli host list
Id : 00058977-c18c-af17-0000-000000006f89::2872
<--- "2872" is the host ID
Uuid : ddc9d93b-68e0-4220-85f9-63b73d08f0ff
...

• Enable maintenance mode for the CVM on the target AHV host. You may skip this step if the
CVM services are not running, or if the cluster state is stopped.
nutanix@CVM$ ncli host edit id=<host ID> enable-maintenance-mode=true

• Because network changes can disrupt host connectivity, use IPMI to connect to the host
console and perform the desired network configuration changes. Once you have changed the
configuration, ping the default gateway and another Nutanix node to verify connectivity.
• After all tests have completed successfully, remove the CVM and AHV host from maintenance
mode.
• From a different CVM, run the following command to take the affected CVM out of
maintenance mode.
nutanix@cvm$ ncli host edit id=<host ID> enable-maintenance-mode=false

• Exit host maintenance mode to restore VM locality, migrating VMs back to their original AHV
host.
nutanix@cvm$ acli host.exit_maintenance_mode <host ip>

Move to the next node in the Nutanix cluster and repeat the previous steps to enter maintenance
mode, make the desired changes, and exit maintenance mode. Repeat this process until you
have made the changes on all hosts in the cluster.
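
One way to verify failure tolerance before starting, as referenced in the first step above, is
the following command (a sketch; confirm that the reported fault tolerance is at least 1 before
proceeding):
nutanix@CVM$ ncli cluster get-domain-fault-tolerance-status type=node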


5.4. OVS Command Line Configuration


To view the OVS configuration from the CVM command line, use the AHV-specific manage_ovs
command. To run a single view command on every Nutanix CVM in a cluster, use the allssh
shortcut described in the AHV Command Line appendix.

Note: In a production environment, we only recommend using the allssh shortcut


to view information. Do not use the allssh shortcut to make changes in a production
environment. When making network changes, only use the allssh shortcut in a
nonproduction or staging environment.

Note: The order in which flags and actions are passed to manage_ovs is critical. Flags
must come before the action. Any flag passed after an action is not parsed.
nutanix@CVM$ manage_ovs --helpshort
USAGE: manage_ovs [flags] <action>

To list all physical interfaces on all nodes, use the show_interfaces command. The
show_uplinks command returns the details of a bonded adapter for a single bridge.
nutanix@CVM$ allssh "manage_ovs show_interfaces"
nutanix@CVM$ allssh "manage_ovs --bridge_name <bridge> show_uplinks"

The update_uplinks command configures a comma-separated list of interfaces into a single
uplink bond in the specified bridge. If the bond does not have at least one interface with a
physical connection, the manage_ovs command issues a warning and exits without configuring
the bond. To avoid this error and provision members of the bond even if they are not connected,
use the --require_link=false flag.

Note: If you do not enter a bridge_name, the command runs on the default bridge,
br0.
nutanix@CVM$ manage_ovs --bridge_name <bridge> --interfaces <interfaces> update_uplinks
nutanix@CVM$ manage_ovs --bridge_name <bridge> --interfaces <interfaces> --require_link=false update_uplinks

The manage_ovs update_uplinks command deletes an existing bond and creates it with the new
parameters when a change to bond members or load balancing algorithm is required. Unless you
specify the correct bond mode parameter, using manage_ovs to update uplinks deletes the bond,
then recreates it with the default load balancing configuration. If you are using active-backup
load balancing, update_uplinks can cause a short network interruption. If you are using balance-
slb or balance-tcp (LACP) load balancing and do not specify the correct bond mode parameter,
update_uplinks resets the configuration to active-backup. At this point, the host stops responding
to keepalives and network links relying on LACP go down.
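
For example, the following sketch keeps an existing balance-slb configuration intact while
updating the bond members. It assumes that the manage_ovs build in your AOS release supports
the --bond_mode flag; check manage_ovs --helpshort if you are unsure:
nutanix@CVM$ manage_ovs --bridge_name br0 --bond_name br0-up --bond_mode balance-slb --interfaces 10g update_uplinks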


Note: Do not use the command allssh manage_ovs update_uplinks in a production


environment. If you have not set the correct flags and parameters, this command can
cause a cluster outage.

You can use the manage_ovs command for host uplink configuration in various common
scenarios, including initial cluster deployment, cluster expansion, reimaging a host during boot
disk replacement, or general host network troubleshooting.

Note: With AHV clusters running versions 5.10.x prior to 5.10.4, using manage_ovs
to make configuration changes on an OVS bridge configured with a single uplink may
result in a network loop. If the AHV node is configured with a single interface in the
bridge, upgrade AOS to 5.10.4 or later before making any changes. If you have a
single interface in the bridge, engage Nutanix Support if you cannot upgrade AOS
and must change the bond configuration.

Compute Only Node Network Configuration


The manage_ovs command runs from a CVM and makes network changes to the local AHV host
where that CVM is running. Because Nutanix AHV Compute Only nodes don’t run a CVM, setting
the network configuration of Compute Only nodes requires a slightly different process.
• Follow the steps in the Production Network Changes section to put the target AHV host in
maintenance mode.
• Use the manage_ovs command from any other CVM in the cluster with the --host flag.
nutanix@CVM$ manage_ovs --host <compute-only-node-IP> --bridge_name <br-name> --bond_name <bond-name> --interfaces 10g update_uplinks

• Exit maintenance mode after completing the network changes, then repeat these steps on the
next Compute Only node.


6. AHV Networking Best Practices


The main best practice for AHV networking is to keep things simple. The recommended
networking configuration, with two 10 Gb or faster adapters using active-backup, provides a
highly available environment that performs well with minimal configuration. Nutanix CVMs and
AHV hosts communicate in the untagged VLAN, and tagged VLANs serve user VM traffic.
Use this basic configuration unless there is a compelling business requirement for advanced
configuration.

6.1. Open vSwitch Bridge and Bond Recommendations


This section addresses advanced bridge and bond configuration scenarios where you would
use CLI commands such as manage_ovs. For a quick CLI tutorial, see the AHV Command Line
section in the appendix.
Identify the scenario that best matches your desired use case and follow the instructions for that
scenario in its corresponding subsection. With slight modifications, you can use the commands
from the second scenario to create multiple bridges and bonds. The final two scenarios are
simply variations of the second scenario. In the following sections, we use 10 Gb to refer to the
fastest interfaces on the system. If your system uses 25, 40, or 100 Gb interfaces, use the actual
speed of the interfaces instead of 10 Gb.

Tip: Nutanix recommends that each bond always have at least two interfaces for
high availability.

Table 2: Bridge and Bond Use Cases

Bridge and Bond Scenario                | Use Case
2x 10 Gb (no 1 Gb)                      | Recommended configuration for easy setup. Use when you can send all CVM and user VM traffic over the same pair of 10 Gb adapters. Compatible with any load balancing algorithm.
2x 10 Gb and 2x 1 Gb separated          | Use when you need an additional, separate pair of physical 1 Gb adapters for VM traffic that must be isolated to another adapter or physical switch. Keep the CVM traffic on the 10 Gb network. You can place user VM traffic on either the 10 Gb network or the 1 Gb network. Compatible with any load balancing algorithm.
4x 10 Gb (2 + 2) and 2x 1 Gb separated  | Use to physically separate CVM traffic such as storage and Nutanix Volumes from user VM traffic while still providing 10 Gb connectivity for both traffic types. The four 10 Gb adapters are divided into two separate pairs. Compatible with any load balancing algorithm. This case is not illustrated in the diagrams below.
4x 10 Gb combined and 2x 1 Gb separated | Use to provide additional bandwidth and failover capacity to the CVM and user VMs sharing four 10 Gb adapters in the same bond. We recommend using LACP with balance-tcp to take advantage of all adapters. This case is not illustrated in the diagrams below.

Keep the following recommendations in mind for all bond scenarios to prevent undesired
behavior and maintain NIC compatibility.
• Do not mix NIC models from different vendors in the same bond.
• Do not mix NICs of different speeds in the same bond.


Scenario 1: 2x 10 Gb

Figure 9: Network Connections for 2x 10 Gb NICs

The most common network configuration is to utilize the 10 Gb or faster interfaces within the
default bond for all networking traffic. The CVM and all user VMs use the 10 Gb interfaces. In
this configuration, we don’t use the 1 Gb interfaces. Note that this is different from the factory
configuration, because we have removed the 1 Gb interfaces from the OVS bond. For simplicity,
we have not included the IPMI connection in these diagrams.
This scenario uses two physical upstream switches, and each 10 Gb interface within the
bond plugs into a separate physical switch for high availability. Within the bond, only one
physical interface is active when using the default active-backup load balancing mode. Nutanix
recommends using active-backup because it is easy to configure, works immediately after install,
and requires no upstream switch configuration. See the Load Balancing within Bond Interfaces
section below for more information and alternate configurations.
Remove NICs that are not in use from the default bond, especially when they are of different
speeds. To do so, perform the following manage_ovs action for each Nutanix node in the cluster:
• From the CVM, remove eth0 and eth1 from the default bridge br0 on all CVMs by specifying
that only eth2 and eth3 remain in the bridge. The 10g shortcut lets you include all 10 Gb
interfaces without having to specify the interfaces explicitly by name. Shortcuts also exist for

25 Gb (25g) and 40 Gb (40g) interfaces. Some Nutanix models have different ethX names for
1 Gb and 10 Gb links, so these shortcuts are helpful in multiple ways.

Note: Previous versions of this guide used the bond name bond0 instead of br0-
up. We recommend using br0-up because it identifies the associated bridge and the
uplink function of this bond.

Note: In a production environment, Nutanix strongly recommends making changes


on one node at a time, after verifying that the cluster can tolerate a node failure.
Follow the steps in the Production Network Changes section when making changes.

Run the following command on each Nutanix node in the cluster to achieve the configuration
shown in the previous figure:
nutanix@CVM$ manage_ovs --bridge_name br0 --bond_name br0-up --interfaces 10g update_uplinks

Scenario 2: 2x 10 Gb and 2x 1 Gb Separated

Figure 10: Network Connections for 2x 10 Gb and 2x 1 Gb NICs

If you want to use the 1 Gb physical interfaces, separate the 10 Gb and 1 Gb interfaces into
different bridges and bonds to ensure that CVM traffic always traverses the fastest possible link.


Here, we’ve grouped the 10 Gb interfaces (eth2 and eth3) into br0-up and dedicated them to the
CVM and User VM1. We’ve grouped the 1 Gb interfaces into br1-up; only a second link on User
VM2 uses br1. Bonds br0-up and br1-up are added into br0 and br1, respectively.
In this configuration, the CVM and user VMs use the 10 Gb interfaces on bridge br0. Bridge
br1 is available for VMs that require physical network separation from the CVM and VMs on
br0. Devices eth0 and eth1 could alternatively plug into a different pair of upstream switches for
further physical traffic separation as shown.

Note: In a production environment, Nutanix strongly recommends making changes


on one node at a time, after verifying that the cluster can tolerate a node failure.
Follow the steps in the Production Network Changes section when making changes.

Run the following commands on each Nutanix node in the cluster to achieve the configuration
shown in the previous figure:
• In AOS 5.5 or later, manage_ovs handles bridge creation. On each CVM, add bridge br1.
Bridge names must not exceed six characters. We suggest using the name br1.

Note: When adding a bridge, ensure that the bridge is created on every host in the
cluster. Failure to add bridges to all hosts can lead to VM migration errors.
nutanix@CVM$ manage_ovs --bridge_name br1 create_single_bridge

• From the CVM, remove eth0 and eth1 from the default bridge br0 on all CVMs. Run the
following show commands to make sure that all interfaces are in a good state before
performing the update.
nutanix@CVM$ allssh "manage_ovs show_interfaces"
nutanix@CVM$ allssh "manage_ovs --bridge_name br0 show_uplinks"

The output from these show commands should verify that the 10 Gb and 1 Gb interfaces have
connectivity to the upstream switches—just look for the columns labeled link and speed.
The following sample output of the manage_ovs show_interfaces command verifies connectivity:
nutanix@CVM$ manage_ovs show_interfaces
name mode link speed
eth0 1000 True 1000
eth1 1000 True 1000
eth2 10000 True 10000
eth3 10000 True 10000


The following sample output of the manage_ovs show_uplinks command verifies bond
configuration before making the change.
nutanix@CVM$ manage_ovs show_uplinks
Bridge: br0
Bond: br0-up
bond_mode: active-backup
interfaces: eth3 eth2 eth1 eth0
lacp: off
lacp-fallback: true
lacp_speed: off

Once you’ve entered maintenance mode and confirmed connectivity, update the bond to include
only 10 Gb interfaces. This command removes all other interfaces from the bond.
nutanix@CVM$ manage_ovs --bridge_name br0 --bond_name br0-up --interfaces 10g update_uplinks

Add the eth0 and eth1 uplinks to br1 in the CVM using the 1g interface shortcut.

Note: You can use the --require_link=false flag to create the bond even if not all of the
1 Gb adapters are connected.
nutanix@CVM$ manage_ovs --bridge_name br1 --bond_name br1-up --interfaces 1g --require_link=false update_uplinks

Exit maintenance mode and repeat the previous update_uplinks steps for every host in the
cluster.

Creating Networks


You must use the aCLI to create networks on bridges other than br0. Once you’ve created the
networks in the aCLI, you can view them by network name in the Prism GUI, so it’s helpful to
include the bridge name in the network name.
nutanix@CVM$ acli net.create <net_name> vswitch_name=<br_name> vlan=<vlan_num>

For example:
nutanix@CVM$ acli net.create br1_production vswitch_name=br1 vlan=1001
nutanix@CVM$ acli net.create br2_production vswitch_name=br2 vlan=2001

6.2. Load Balancing within Bond Interfaces


AHV hosts use a bond containing multiple physical interfaces that each connect to a physical
switch. To build a fault-tolerant network connection between the AHV host and the rest of the
network, connect each physical interface in a bond to a separate physical switch.


A bond distributes traffic between multiple physical interfaces according to the bond mode.

Table 3: Load Balancing Use Cases

Bond Mode            | Use Case                                                                                        | Maximum VM NIC Throughput* | Maximum Host Throughput*
active-backup        | Recommended. Default configuration, which transmits all traffic over a single active adapter.  | 10 Gb                      | 10 Gb
balance-slb          | Has caveats for multicast traffic. Increases host bandwidth utilization beyond a single 10 Gb adapter. Places each VM NIC on a single adapter at a time. Do not use with link aggregation such as LACP. | 10 Gb | 20 Gb
LACP and balance-tcp | LACP and link aggregation required. Increases host and VM bandwidth utilization beyond a single 10 Gb adapter by balancing VM NIC TCP and UDP sessions among adapters. Also used when network switches require LACP negotiation. | 20 Gb | 20 Gb

* Assuming 2x 10 Gb adapters. Simplex speed.

Active-Backup
The recommended and default bond mode is active-backup, where one interface in the bond is
randomly selected at boot to carry traffic and other interfaces in the bond are used only when the
active link fails. Active-backup is the simplest bond mode, easily allowing connections to multiple
upstream switches without additional switch configuration. The limitation is that traffic from all
VMs uses only the single active link within the bond at one time. All backup links remain unused
until the active link fails. In a system with dual 10 Gb adapters, the maximum throughput of all
VMs running on a Nutanix node is limited to 10 Gbps, or the speed of a single link.


Figure 11: Active-Backup Fault Tolerance

Active-backup mode is enabled by default, but you can also configure it with the following ovs-
vsctl command on the CVM:
nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0-up bond_mode=active-backup"

View the bond mode with the following CVM command:


nutanix@CVM$ manage_ovs show_uplinks

In the active-backup configuration, this command returns a variation of the following output,
where eth2 and eth3 are marked as interfaces used in the bond br0-up.
Bridge: br0
Bond: br0-up
bond_mode: active-backup
interfaces: eth3 eth2
lacp: off
lacp-fallback: false
lacp_speed: slow


For more detailed bond information such as the currently active adapter, use the following ovs-
appctl command on the CVM:
nutanix@CVM$ ssh root@192.168.5.1 "ovs-appctl bond/show"

Balance-SLB
Nutanix does not recommend balance-slb due to the multicast traffic caveats noted later in this
section. To combine the bandwidth of multiple links, consider using link aggregation with LACP
and balance-tcp instead of balance-slb. Do not use balance-slb unless you verify the multicast
limitations described here are not present in your network.

Note: Do not use IGMP snooping on physical switches connected to Nutanix servers
using balance-slb. Balance-slb forwards inbound multicast traffic on only a single
active adapter and discards multicast traffic from other adapters. Switches with IGMP
snooping may discard traffic to the active adapter and only send it to the backup
adapters. This mismatch leads to unpredictable multicast traffic behavior. Disable
IGMP snooping or configure static IGMP groups for all switch ports connected to
Nutanix servers using balance-slb. IGMP snooping is often enabled by default on
physical switches.

The balance-slb bond mode in OVS takes advantage of all links in a bond and uses measured
traffic load to rebalance VM traffic from highly used to less used interfaces. When the
configurable bond-rebalance interval expires, OVS uses the measured load for each interface
and the load for each source MAC hash to spread traffic evenly among links in the bond. Traffic
from some source MAC hashes may move to a less active link to more evenly balance bond
member utilization. Perfectly even balancing may not always be possible, depending on the
number of source MAC hashes and their stream sizes.
Each individual VM NIC uses only a single bond member interface at a time, but a hashing
algorithm distributes multiple VM NICs (multiple source MAC addresses) across bond member
interfaces. As a result, it is possible for a Nutanix AHV node with two 10 Gb interfaces to use up
to 20 Gbps of network throughput, while individual VMs have a maximum throughput of 10 Gbps,
the speed of a single physical interface.


Figure 12: Balance-SLB Load Balancing

The default rebalance interval is 10 seconds, but Nutanix recommends setting this interval to
30 seconds to avoid excessive movement of source MAC address hashes between upstream
switches. Nutanix has tested this configuration using two separate upstream switches with AHV.
No additional configuration (such as link aggregation) is required on the switch side, as long as
the upstream switches are interconnected physically or virtually and both uplinks allow the same
VLANs.

Note: Do not use link aggregation technologies such as LACP with balance-slb. The
balance-slb algorithm assumes that upstream switch links are independent layer-2
interfaces and handles broadcast, unknown, and multicast (BUM) traffic accordingly,
selectively listening for this traffic on only a single active adapter in the bond.

Note: In a production environment, Nutanix strongly recommends making changes


on one node at a time, after verifying that the cluster can tolerate a node failure.
Follow the steps in the Production Network Changes section when making changes.

After entering maintenance mode for the desired host, configure the balance-slb algorithm for the
bond with the following commands:
nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0-up bond_mode=balance-slb"
nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0-up other_config:bond-rebalance-interval=30000"

Exit maintenance mode for this node and repeat this configuration on all nodes in the cluster.


Verify the proper bond mode on each CVM with the following command:
nutanix@CVM$ manage_ovs show_uplinks
Bridge: br0
Bond: br0-up
bond_mode: balance-slb
interfaces: eth3 eth2
lacp: off
lacp-fallback: false
lacp_speed: slow

The output shows that the bond_mode selected is balance-slb.


For more detailed information such as the source MAC hash distribution, use ovs-appctl.
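For example:
nutanix@CVM$ ssh root@192.168.5.1 "ovs-appctl bond/show br0-up"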

LACP and Link Aggregation


Link aggregation is required to take full advantage of the bandwidth provided by multiple links.
In OVS it is accomplished through dynamic link aggregation with LACP and load balancing using
balance-tcp.
Nutanix and OVS require dynamic link aggregation with LACP instead of static link aggregation
on the physical switch. Do not use static link aggregation such as etherchannel with AHV.

Note: Nutanix recommends enabling LACP on the AHV host with fallback to active-
backup. Then configure the connected upstream switches. Different switch vendors
may refer to link aggregation as port channel or LAG. Using multiple upstream
switches may require additional configuration such as a multichassis link aggregation
group (MLAG) or virtual PortChannel (vPC). Configure switches to fall back to
active-backup mode in case LACP negotiation fails (sometimes called fallback or
no suspend-individual). This switch setting assists with node imaging and initial
configuration where LACP may not yet be available on the host.

With link aggregation negotiated by LACP, multiple links to separate physical switches appear
as a single layer-2 link. A traffic-hashing algorithm such as balance-tcp can split traffic between
multiple links in an active-active fashion. Because the uplinks appear as a single L2 link, the
algorithm can balance traffic among bond members without any regard for switch MAC address
tables. Nutanix recommends using balance-tcp when LACP and link aggregation are configured,
because each TCP or UDP stream from a single VM can potentially use a different uplink in
this configuration. The balance-tcp algorithm hashes traffic streams by source IP, destination IP,
source port, and destination port. With link aggregation, LACP, and balance-tcp, a single user VM
with multiple TCP or UDP streams could use up to 20 Gbps of bandwidth in an AHV node with
two 10 Gb adapters.


Figure 13: LACP and Balance-TCP Load Balancing

Note: In a production environment, Nutanix strongly recommends making changes


on one node at a time, after verifying that the cluster can tolerate a node failure.
Follow the steps in the Production Network Changes section when making changes.

After entering maintenance mode for the desired host, configure link aggregation with LACP and
balance-tcp using the commands below.

Note: Upstream physical switch LACP settings such as timers should match the
AHV host settings for configuration consistency.

If upstream LACP negotiation fails, the default AHV host configuration disables the bond, thus
blocking all traffic. The following command allows fallback to active-backup bond mode in the
AHV host in the event of LACP negotiation failure:
nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0-up other_config:lacp-fallback-ab=true"

In the AHV host and on most switches, the default OVS LACP timer configuration is slow, or
30 seconds. This value—which is independent of the switch timer setting—determines how
frequently the AHV host requests LACPDUs from the connected physical switch. The fast setting
(1 second) requests LACPDUs from the connected physical switch every second, thereby helping
to detect interface failures more quickly. Failure to receive three LACPDUs—in other words,
after 3 seconds with the fast setting—shuts down the link within the bond. Nutanix recommends
setting lacp-time to fast on the AHV host and physical switch to decrease link failure detection
time from 90 seconds to 3 seconds.
nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0-up other_config:lacp-time=fast"

Next, enable LACP negotiation and set the hash algorithm to balance-tcp.
nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0-up lacp=active"
nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0-up bond_mode=balance-tcp"

Enable LACP on the upstream physical switches for this AHV host with matching timer and load
balancing settings. Confirm LACP negotiation using ovs-appctl commands, looking for the word
"negotiated" in the status lines.
nutanix@CVM$ ssh root@192.168.5.1 "ovs-appctl bond/show br0-up"
nutanix@CVM$ ssh root@192.168.5.1 "ovs-appctl lacp/show br0-up"

Exit maintenance mode and repeat the preceding steps for each node and every connected
switch port one node at a time, until you have configured the entire cluster and all connected
switch ports.

Disable LACP on the host


To safely disable the LACP configuration so you can use another load balancing algorithm,
perform the following steps.

Note: In a production environment, Nutanix strongly recommends making changes


on one node at a time, after verifying that the cluster can tolerate a node failure.
Follow the steps in the Production Network Changes section when making changes.

After entering maintenance mode on the desired host, configure a bonding mode that does not
require LACP (such as active-backup).
nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0-up bond_mode=active-backup"

Turn off LACP on the connected physical switch ports.


Turn off LACP on the hosts.
nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0-up lacp=off"

Disable LACP fallback on the host.


nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0-up other_config:lacp-fallback-ab=false"

Exit maintenance mode and repeat the preceding steps for each node and connected switch port
one node at a time, until you have configured the entire cluster and all connected switch ports.


Storage Traffic between CVMs


Using active-backup or any other OVS load balancing method, it is not possible to select the
active adapter for the CVM in a way that is persistent between host reboots. When multiple
uplinks from the AHV host connect to multiple switches, ensure that adequate bandwidth
exists between these switches to support Nutanix CVM replication traffic between nodes.
Nutanix recommends redundant 40 Gbps or faster connections between switches. A leaf-spine
configuration or direct inter-switch link can satisfy this recommendation. Review the Physical
Networking Best Practices Guide for more information.

6.3. VLANs for AHV Hosts and CVMs


The recommended VLAN configuration is to place the CVM and AHV host in the untagged VLAN
(sometimes called the native VLAN) as shown in the figure below. Neither the CVM nor the AHV
host requires special configuration with this option. Configure the switch to allow tagged VLANs
for user VM networks to the AHV host using standard 802.1Q VLAN tags. Also, configure the
switch to send and receive traffic for the CVM and AHV host’s VLAN as untagged. Choose any
VLAN on the switch other than 1 as the native untagged VLAN on ports facing AHV hosts.

Note: All Controller VMs and hypervisor hosts must be on the same subnet and
broadcast domain. No systems other than the CVMs and hypervisor hosts should be
on this network, which should be isolated and protected.


Figure 14: Default Untagged VLAN for CVM and AHV Host

The setup depicted in the previous figure works well for situations where the switch administrator
can set the CVM and AHV VLAN to untagged. However, if you do not want to send untagged
traffic to the AHV host and CVM, or if security policy doesn’t allow this configuration, you can add
a VLAN tag to the host and the CVM with the procedure that follows the next image.


Figure 15: Tagged VLAN for CVM and AHV Host

Note: Ensure that IPMI console access is available for recovery before starting this
configuration.

Note: In a production environment, Nutanix strongly recommends making changes


on one node at a time, after verifying that the cluster can tolerate a node failure.
Follow the steps in the Production Network Changes section when making changes.

• After entering maintenance mode on the target host, configure VLAN tags on the AHV host.
nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0 tag=10"
nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl list port br0"

• Configure VLAN tags for the CVM.


nutanix@CVM$ change_cvm_vlan 10


Exit maintenance mode and repeat the preceding steps on every node to configure the entire
cluster.

Removing VLAN Configuration


To remove VLAN tags and revert to the default untagged VLAN configuration, use the following
steps.
• After entering maintenance mode on the target host, run the following command for the CVM.
nutanix@CVM$ change_cvm_vlan 0

• Run the following command for the AHV host.


nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0 tag=0"

Exit maintenance mode and repeat the previous steps on every node to configure the entire
cluster.

6.4. VLAN for User VMs


The VLAN for a VM is assigned using the VM network. Using Prism, br0 is the default bridge for
new networks. Create networks for user VMs in bridge br0 in either the Prism UI or the aCLI.
Networks for bridges other than br0, such as br1, must be added using the aCLI.
• When you have multiple bridges, putting the bridge name in the network name is helpful
when you view the network in the Prism UI. In this example, Prism shows a network named
"br1_vlan99" to indicate that this network sends VM traffic over VLAN 99 on bridge br1.
nutanix@CVM$ acli net.create br1_vlan99 vswitch_name=br1 vlan=99

Tip: From AOS 5.10 onward, it is no longer necessary or recommended to turn off
the VM to change the network.

Perform the following steps to change the VLAN tag of a VM NIC without deleting and recreating
the vNIC.
• If you are running a version of AOS prior to 5.10, turn off the VM. Otherwise, leave the VM on.
• Create a new network with a different VLAN ID that you want to assign to the VM NIC.
• Run the following command on any CVM.
nutanix@cvm$ acli vm.nic_update <VM_name> <NIC MAC address> network=<new network>

VM NIC VLAN Modes


VM NICs on AHV can operate in three modes:
• Access
• Trunked
• Direct
Access mode is the default for VM NICs; a single VLAN travels to and from the VM as untagged
but is encapsulated with the appropriate VLAN tag on the physical NIC. VM NICs in trunked
mode allow multiple tagged VLANs and a single untagged VLAN on a single NIC for VMs that are
VLAN aware. A NIC in trunked mode can only be added via the aCLI, and it is not possible to
distinguish between access and trunked NIC modes in the Prism UI. Direct mode NICs connect to
brX and bypass the bridge chain; do not use them unless advised by Nutanix Support to do so.
Run the following command on any CVM in the cluster to add a new trunked NIC:
nutanix@CVM~$ acli vm.nic_create <vm name> network=<network name> trunked_networks=<comma
separated list of allowed VLAN IDs> vlan_mode=kTrunked
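
For example, assuming a hypothetical VM named testvm and an existing network named vlan10
that supplies the untagged (native) VLAN, the following command adds a trunked NIC that also
allows tagged VLANs 3, 4, and 5:
nutanix@CVM~$ acli vm.nic_create testvm network=vlan10 trunked_networks=3,4,5 vlan_mode=kTrunked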

The native VLAN for the trunked NIC is the VLAN assigned to the network specified in the
network parameter. Additional tagged VLANs are designated by the trunked_networks parameter.
Run the following command on any CVM in the cluster to verify the VM NIC mode:
nutanix@CVM~$ acli vm.get <vm name>


Sample output:
nutanix@CVM~$ acli vm.get testvm
testvm {
config {
...
nic_list {
ip_address: "X.X.X.X"
mac_addr: "50:6b:8d:8a:46:f7"
network_name: "network"
network_type: "kNativeNetwork"
network_uuid: "6d8f54bb-4b96-4f3c-a844-63ea477c27e1"
trunked_networks: 3 <--- list of allowed VLANs
trunked_networks: 4
trunked_networks: 5
type: "kNormalNic"
uuid: "9158d7da-8a8a-44c8-a23a-fe88aa5f33b0"
vlan_mode: "kTrunked" <--- mode
}
...
}
...
}

To change a VM NIC's mode from access to trunked, use the command acli vm.get <vm
name> to find its MAC address. Using this MAC address, run the following command on any
CVM in the cluster:
nutanix@CVM~$ acli vm.nic_update <vm name> <vm nic mac address> trunked_networks=<comma
separated list of allowed VLAN IDs> update_vlan_trunk_info=true
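
For example, using the hypothetical testvm NIC from the sample output above (MAC address
50:6b:8d:8a:46:f7), the following command converts the NIC to trunked mode and allows tagged
VLANs 3, 4, and 5:
nutanix@CVM~$ acli vm.nic_update testvm 50:6b:8d:8a:46:f7 trunked_networks=3,4,5 update_vlan_trunk_info=true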

Note: The update_vlan_trunk_info=true parameter is mandatory. If you do not specify
this parameter, the command appears to run successfully, but the trunked_networks
setting does not change.

To change the mode of a VM NIC from trunked to access, find its MAC address in the output from
the acli vm.get <vm name> command and run the following command on any CVM in the cluster:
nutanix@CVM~$ acli vm.nic_update <vm name> <vm nic mac address> vlan_mode=kAccess
update_vlan_trunk_info=true


6.5. CVM Network Segmentation


The optional backplane LAN creates a dedicated interface in a separate VLAN on all CVMs and
AHV hosts in the cluster for exchanging storage replication traffic. The backplane network shares
the same physical adapters on bridge br0 by default but uses a different nonroutable VLAN. From
AOS 5.11.1 onward, you can create the backplane network in a new bridge (such as br1). If you
place the backplane network on a new bridge, ensure that this bridge has redundant network
adapters that are at least 10 Gbps and use a fault tolerant load balancing algorithm.
Use the backplane network only if you need to separate CVM management traffic (such as
Prism) from storage replication traffic. The official Network Segmentation documentation includes
diagrams and configuration instructions.

Figure 16: Prism UI CVM Network Interfaces

From AOS 5.11 onward, you can also separate iSCSI traffic for Nutanix Volumes onto a
dedicated virtual network interface on the CVMs using the Create New Interface dialog. The new
iSCSI virtual network interface can use a shared or dedicated bridge. Ensure that the selected
bridge uses multiple redundant uplinks.

6.6. Jumbo Frames


The Nutanix CVM uses the standard Ethernet MTU (maximum transmission unit) of 1,500 bytes
for all network interfaces by default. The standard 1,500-byte MTU delivers excellent
performance and stability. Nutanix does not support configuring the MTU on a CVM's network
interfaces to higher values.
You can enable jumbo frames (MTU of 9,000 bytes) on the physical network interfaces of AHV,
ESXi, or Hyper-V hosts and user VMs if the applications on your user VMs require them. If you
choose to use jumbo frames on hypervisor hosts, be sure to enable them end to end in the
desired network and consider both the physical and virtual network infrastructure impacted by the
change.
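
For example, one simple way to confirm that jumbo frames pass end to end is a do-not-fragment
ping between two endpoints on the jumbo-enabled network. The sketch below assumes a Linux
user VM whose NIC is already set to a 9,000-byte MTU and a hypothetical destination address;
the 8,972-byte payload leaves room for the 20-byte IP and 8-byte ICMP headers (on Windows,
ping -f -l 8972 is the equivalent):
user@uservm$ ping -M do -s 8972 192.168.10.20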


7. Conclusion
Nutanix recommends using the default AHV networking settings, configured through the Prism
GUI, for most Nutanix deployments. However, when your requirements demand specific
configuration beyond the defaults, this advanced networking guide provides detailed CLI
configuration examples that can help.
Administrators can use the Nutanix CLI to configure advanced networking features on all
hosts. VLAN trunking for user VMs allows a single VM NIC to pass traffic on multiple VLANs for
network-intensive applications. You can apply VLAN tags to the AHV host and CVM in situations
that require all traffic to be tagged. Grouping host adapters in different ways can provide physical
traffic isolation or allow advanced load balancing and link aggregation to provide maximum
throughput and redundancy for VMs and hosts.
With these advanced networking techniques, administrators can configure a Nutanix system with
AHV to meet the demanding requirements of any VM or application.
For feedback or questions, contact us using the Nutanix NEXT Community forums.


Appendix

AHV Networking Terminology

Table 4: Networking Terminology Matrix

AHV Term                VMware Term                             Microsoft Hyper-V or SCVMM Term
Bridge                  vSwitch, Distributed Virtual Switch     Virtual switch, logical switch
Bond                    NIC team                                Team or uplink port profile
Port or tap             Port                                    N/A
Network                 Port group                              VLAN tag or logical network
Uplink                  pNIC or vmnic                           Physical NIC or pNIC
VM NIC                  vNIC                                    VM NIC
Internal port           VMkernel port                           Virtual NIC
Active-backup           Active-standby                          Active-standby
Balance-slb             Route based on source MAC hash          Switch independent / dynamic
                        combined with route based on
                        physical NIC load
LACP with balance-tcp   LACP and route based on IP hash         Switch dependent (LACP) / address hash

AHV Networking Best Practices Checklist


• Command line
⁃ Use the Prism network visualization feature before using the command line to view the
network.
⁃ Use Prism uplink configuration for clusters with a single bridge and bond.
⁃ Use the CLI configuration for clusters with multiple bridges and bonds.


⁃ Do not use manage_ovs to make network changes once you have used Prism uplink
configuration.
⁃ Use the allssh and hostssh shortcuts only with view and show commands. Use extreme
caution with commands that make configuration changes, as these shortcuts execute them
on every CVM or AHV host. Running a disruptive command on all hosts risks disconnecting
all hosts. When making network changes, only use the allssh or hostssh shortcuts in a
staging environment.
⁃ Ensure that IPMI console connectivity is available and place the host and CVM in
maintenance mode before making any host networking changes.
⁃ Connect to a CVM instead of to the AHV hosts when using SSH. Use the hostssh or
192.168.5.1 shortcut for any AHV host operation.
⁃ For high availability, connect to the cluster Virtual IP (VIP) for cluster-wide commands
entered in the aCLI rather than to a single CVM.
⁃ Use the --host shortcut to configure networking for Compute Only nodes.
• Open vSwitch
⁃ Do not modify the OpenFlow tables associated with any OVS bridge.
⁃ Although it is possible to set QoS policies and other network configuration on the VM tap
interfaces manually (using the ovs-vsctl command), we do not recommend or support it.
Policies do not persist across VM power cycles or migrations between hosts.
⁃ Do not delete, rename, or modify the OVS bridge br0 or the bridge chain.
⁃ Do not modify the native Linux bridge virbr0.
• OVS bonds
⁃ Include at least two physical interfaces in every bond.
⁃ Aggregate the 10 Gb or faster interfaces on the physical host to an OVS bond named
br0-up on the default OVS bridge br0 and trunk VLANs to these interfaces on the physical
switch.
⁃ Use active-backup load balancing unless you have a specific need for LACP with
balance-tcp.
⁃ Create a separate bond and bridge for the connected 1 Gb interfaces, or remove them from
the primary bond br0-up.
⁃ Do not mix NIC models from different vendors in the same bond.
⁃ Do not mix NICs of different speeds in the same bond.
⁃ If required, connect the 1 Gb interfaces to different physical switches than the 10 Gb or
faster interfaces to provide physical network separation for user VMs.


⁃ Use LACP with balance-tcp only if user VMs require link aggregation for higher speed or
better fault tolerance. Ensure that you have completed LACP configuration on the physical
switches after enabling LACP on AHV.
⁃ Do not use the balance-tcp algorithm without upstream switch link aggregation such as
LACP.
⁃ Do not use the balance-slb algorithm if the physical switches use IGMP snooping and
pruning.
⁃ Do not use the balance-slb algorithm with link aggregation such as LACP.
⁃ Do not use static link aggregation such as etherchannel with AHV.
• Physical network layout
⁃ Use redundant top-of-rack switches in a leaf-spine architecture. This simple, flat network
design is well suited for a highly distributed, shared-nothing compute and storage
architecture.
⁃ Connect all the nodes that belong to a given cluster to the same layer-2 network segment.
⁃ If you need more east-west traffic capacity, add spine switches or uplinks between the leaf
and spine.
⁃ Use redundant 40 Gbps (or faster) connections to the spine to ensure adequate bandwidth
between upstream switches.
• Upstream physical switch specifications
⁃ Connect the 10 Gb or faster uplink ports on the AHV node to nonblocking ports on
datacenter-class switches that provide line-rate traffic throughput.
⁃ Use an Ethernet switch that has a low-latency, cut-through design, and that provides
predictable, consistent traffic latency regardless of packet size, traffic pattern, or the
features enabled on the 10 Gb interfaces. Port-to-port latency should be no higher than two
microseconds.
⁃ Use fast-convergence technologies (such as Cisco PortFast) on switch ports that are
connected to the AHV host.
⁃ To prevent packet loss from oversubscription, avoid switches that use a shared port-buffer
architecture.
• Switch and host VLANs
⁃ Keep the CVM and AHV host in the same VLAN. By default, the CVM and the hypervisor
are placed on the native untagged VLAN configured on the upstream physical switch.
⁃ Configure switch ports connected to AHV as VLAN trunk ports.


⁃ Configure a dedicated native untagged VLAN other than 1 on switch ports facing AHV
hosts to carry CVM and AHV host traffic.
• User VM VLANs
⁃ Configure user VM network VLANs on br0 using the Prism GUI.
⁃ Use VLANs other than the dedicated CVM and AHV VLAN.
⁃ Use the aCLI to add user VM network VLANs for additional bridges. Include the bridge
name in the network name for easy bridge identification.
⁃ Use VM NIC VLAN trunking only in cases where user VMs require multiple VLANs on the
same NIC. In all other cases, add a new VM NIC with a single VLAN in access mode to
bring new VLANs to user VMs.
⁃ Do not use direct mode NICs unless Nutanix Support directs you to do so.
• CVM network configuration
⁃ Do not remove the CVM from either the OVS bridge br0 or the native Linux bridge virbr0.
⁃ If required for security, add a dedicated CVM backplane VLAN with a nonroutable subnet to
separate CVM storage backplane traffic from CVM management traffic.
⁃ Do not use backplane segmentation or additional service segmentation unless separation
of backplane or storage traffic is a mandatory security requirement.
⁃ If the network for the backplane or additional services is connected to a bridge other than
br0, ensure that this bridge has redundant uplinks with fault tolerant load balancing.
• Jumbo frames
⁃ Nutanix does not support configuring the MTU on a CVM's network interfaces to higher
values.
⁃ If you choose to use jumbo frames on hypervisor hosts, be sure to enable them end to end
in the desired network and consider both the physical and virtual network infrastructure
impacted by the change.
• IP address management
⁃ Coordinate the configuration of IP address pools to avoid address overlap with existing
network DHCP pools.
⁃ Confirm IP address availability with the network administrator before configuring an IPAM
address pool in AHV.
• IPMI ports


⁃ Do not allow multiple VLANs on switch ports that connect to the IPMI interface. For
management simplicity, only configure the IPMI switch ports as access ports in a single
VLAN.

AHV Command Line Tutorial


Nutanix systems have a number of command line utilities that make it easy to inspect the status
of network parameters and adjust advanced attributes that may not be available in the Prism
GUI. In this section, we address the three primary locations where you can enter CLI commands.
The first such location is in the CVM BASH shell. A command entered here takes effect locally on
a single CVM. Administrators can also enter CLI commands in the CVM aCLI shell. Commands
entered in aCLI operate on the level of an entire Nutanix cluster, even though you’re accessing
the CLI from one CVM. Finally, administrators can enter CLI commands in an AHV host’s BASH
shell. Commands entered here take effect only on that AHV host. The diagram below illustrates
the basic CLI locations.

Figure 17: Command Line Operation Overview

CLI shortcuts exist to make cluster management a bit easier. Often, you need to execute a
command on all CVMs, or on all AHV hosts, rather than on just a single host. It would be tedious
to log on to every system and enter the same command on each of them, especially in a large
cluster. That's where the allssh and hostssh shortcuts come in. allssh takes a given command
entered on the CVM BASH CLI and executes that command on every CVM in the cluster.


hostssh works similarly, taking a command entered on the CVM BASH CLI and executing that
command on every AHV host in the cluster, as shown in the previous figure.
To streamline the management of CVMs and AHV hosts, the SSH shortcut connects a single
CVM directly to the local AHV host. From any single CVM, you can use SSH to connect to the
AHV host’s local address at IP address 192.168.5.1. Similarly, any AHV host can SSH to the
local CVM using the static IP address 192.168.5.254. Because the address 192.168.5.2 on a
CVM is used for dynamic high availability purposes in the AHV host, it may not always direct to
the local CVM. This SSH connection uses the internal Linux bridge virbr0.
Let's take a look at a few examples to demonstrate the usefulness of these commands.

Example 1: allssh
Imagine that we need to determine which network interfaces are plugged in on all nodes in the
cluster, and the link speed of each interface. We could use manage_ovs show_interfaces at
each CVM, but instead let's use the allssh shortcut. First, SSH into any CVM in the cluster as
the nutanix user, then execute the command allssh "manage_ovs show_interfaces" at the CVM
BASH shell:
nutanix@NTNX-A-CVM:~$ allssh "manage_ovs show_interfaces"

In the sample output below, we've truncated the results after the second node to save space.
Executing manage_ovs show_interfaces on the cluster
================== a.b.c.d =================
name mode link speed
eth0 1000 True 1000
eth1 1000 True 1000
eth2 10000 True 10000
eth3 10000 True 10000
Connection to a.b.c.d closed.
================== e.f.g.h =================
name mode link speed
eth0 1000 True 1000
eth1 1000 True 1000
eth2 10000 True 10000
eth3 10000 True 10000
Connection to e.f.g.h closed.


Example 2: hostssh
If we wanted to view the MAC address of the eth0 interface on every AHV host, we could connect
to each AHV host individually and use ifconfig eth0. To make things faster, let's use the hostssh
shortcut instead. In this example, we still use SSH to connect to the CVM BASH shell, then prefix
our desired command with hostssh.
nutanix@NTNX-A-CVM~$ hostssh "ifconfig eth0 | grep HWaddr"
============= a.b.c.d ============
eth0 Link encap:Ethernet HWaddr 0C:C4:7A:46:B1:FE
============= e.f.g.h ============
eth0 Link encap:Ethernet HWaddr 0C:C4:7A:46:B2:4E

Example 3: aCLI
Administrators can use the aCLI shell to view Nutanix cluster information that might not be easily
available in the Prism GUI. For example, let's list all of the VMs in a given network. First, connect
to any CVM using SSH, then enter the aCLI.
nutanix@NTNX-A-CVM~$ acli
<acropolis> net.list_vms 1GBNet
VM UUID VM name MAC address
0d6afd4a-954d-4fe9-a184-4a9a51c9e2c1 VM2 50:6b:8d:cb:1b:f9

Example 4: ssh root@192.168.5.1


The shortcut between the CVM and AHV host can be helpful when we're connected directly to
a CVM but need to view some information or execute a command against the local AHV host
instead. In this example, we’re verifying the localhost line of the /etc/hosts file on the AHV host
while we're already connected to the CVM.
nutanix@NTNX-14SM36510031-A-CVM~$ ssh root@192.168.5.1 "cat /etc/hosts | grep 127"
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

With these command line utilities, we can manage a large number of Nutanix nodes at once.
Centralized management helps administrators apply configuration consistently and verify
configuration across a number of servers.


AHV Networking Command Examples


• Network view commands
nutanix@CVM$ manage_ovs --bridge_name br0 show_uplinks
nutanix@CVM$ ssh root@192.168.5.1 "ovs-appctl bond/show br0-up"
nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl show"
nutanix@CVM$ acli
<acropolis> net.list
<acropolis> net.list_vms vlan.0
nutanix@CVM$ manage_ovs --help
nutanix@CVM$ manage_ovs show_interfaces
nutanix@CVM$ allssh "manage_ovs --bridge_name <bridge> show_uplinks"
nutanix@CVM$ manage_ovs --bridge_name <bridge> --interfaces <interfaces> update_uplinks
nutanix@CVM$ manage_ovs --bridge_name <bridge> --interfaces <interfaces> --require_link=false
update_uplinks

• Bond configuration for 2x 10 Gb


nutanix@CVM$ manage_ovs --bridge_name br1 create_single_bridge
nutanix@CVM$ manage_ovs --bridge_name br0 --bond_name br0-up --interfaces 10g update_uplinks
nutanix@CVM$ manage_ovs --bridge_name br1 --bond_name br1-up --interfaces 1g update_uplinks
nutanix@cvm$ acli net.create br1_vlan99 vswitch_name=br1 vlan=99

• Bond configuration for 4x 10 Gb


nutanix@CVM$ manage_ovs --bridge_name br1 create_single_bridge
nutanix@CVM$ manage_ovs --bridge_name br2 create_single_bridge
nutanix@CVM$ manage_ovs --bridge_name br0 --bond_name br0-up --interfaces eth4,eth5
update_uplinks
nutanix@CVM$ manage_ovs --bridge_name br1 --bond_name br1-up --interfaces eth2,eth3
update_uplinks
nutanix@CVM$ manage_ovs --bridge_name br2 --bond_name br2-up --interfaces eth0,eth1
update_uplinks
nutanix@cvm$ acli net.create br1_vlan99 vswitch_name=br1 vlan=99
nutanix@cvm$ acli net.create br2_vlan100 vswitch_name=br2 vlan=100

• Load balance view command


nutanix@CVM$ ssh root@192.168.5.1 "ovs-appctl bond/show"


• Load balance active-backup configuration


nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0-up bond_mode=active-backup"

• Load balance balance-slb configuration


nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0-up bond_mode=balance-slb"
nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0-up other_config:bond-rebalance-
interval=30000"
nutanix@CVM$ ssh root@192.168.5.1 "ovs-appctl bond/show br0-up"

• Load balance balance-tcp and LACP configuration


nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0-up other_config:lacp-fallback-
ab=true"
nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0-up other_config:lacp-time=fast"
nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0-up lacp=active"
nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0-up bond_mode=balance-tcp"
nutanix@CVM$ ssh root@192.168.5.1 "ovs-appctl bond/show br0-up"
nutanix@CVM$ ssh root@192.168.5.1 "ovs-appctl lacp/show br0-up"

• Disable LACP
nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0-up bond_mode=active-backup"
nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0-up lacp=off"
nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0-up other_config:lacp-fallback-
ab=true"

• CVM and AHV host tagged VLAN configuration


nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0 tag=10"
nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl list port br0"
nutanix@CVM$ change_cvm_vlan 10
nutanix@CVM$ change_cvm_vlan 0
nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0 tag=0"

• VM VLAN configuration
nutanix@cvm$ acli vm.nic_update <vm_name> <nic mac address> network=<network name>
nutanix@CVM~$ acli vm.nic_update <vm name> <vm nic mac address> trunked_networks=<comma
separated list of allowed VLAN IDs> update_vlan_trunk_info=true
nutanix@CVM~$ acli vm.nic_update <vm name> <vm nic mac address> vlan_mode=kAccess
update_vlan_trunk_info=true


References
1. AHV Best Practices Guide
2. AHV Administration Guide: Host Network Management
3. AHV Administration Guide: VM Network Management
4. Nutanix Security Guide: Securing Traffic Through Network Segmentation
5. Open vSwitch Documentation
6. Physical Networking Best Practices Guide
7. Prism Web Console Guide: Network Visualization

About the Authors


Jason Burns is a Technical Marketing Engineer at Nutanix, Inc.
Lakshana Rajendran is a Technical Marketing Engineer at Nutanix, Inc.

About Nutanix
Nutanix makes infrastructure invisible, elevating IT to focus on the applications and services that
power their business. The Nutanix enterprise cloud software leverages web-scale engineering
and consumer-grade design to natively converge compute, virtualization, and storage into
a resilient, software-defined solution with rich machine intelligence. The result is predictable
performance, cloud-like infrastructure consumption, robust security, and seamless application
mobility for a broad range of enterprise applications. Learn more at www.nutanix.com or follow us
on Twitter @nutanix.


List of Figures
Figure 1: Nutanix Enterprise Cloud OS Stack................................................................... 9

Figure 2: Post-Imaging Network State.............................................................................12

Figure 3: AHV Bridge Chain............................................................................................ 13

Figure 4: Prism UI Network Creation...............................................................................14

Figure 5: IPAM................................................................................................................. 15

Figure 6: Prism UI Network List...................................................................................... 17

Figure 7: Prism UI VM Network Details...........................................................................18

Figure 8: AHV Host Network Visualization...................................................................... 19

Figure 9: Network Connections for 2x 10 Gb NICs......................................................... 27

Figure 10: Network Connections for 2x 10 Gb and 2x 1 Gb NICs.................................. 28

Figure 11: Active-Backup Fault Tolerance....................................................................... 32

Figure 12: Balance-SLB Load Balancing.........................................................................34

Figure 13: LACP and Balance-TCP Load Balancing.......................................................36

Figure 14: Default Untagged VLAN for CVM and AHV Host...........................................39

Figure 15: Tagged VLAN for CVM and AHV Host...........................................................40

Figure 16: Prism UI CVM Network Interfaces................................................................. 44

Figure 17: Command Line Operation Overview.............................................................. 51


List of Tables
Table 1: Document Version History................................................................................... 6

Table 2: Bridge and Bond Use Cases............................................................................. 25

Table 3: Load Balancing Use Cases............................................................................... 31

Table 4: Networking Terminology Matrix.......................................................................... 47
