bp-linux-nutanix-ahv
BEST PRACTICES
Contents
1. Executive Summary
2. Introduction
  Audience
  Purpose
6. Linux Networking
7. Nutanix Volumes
  Jumbo Frames
  iSCSI Settings
Appendix
  References
  About Nutanix
List of Figures
List of Tables
Linux on Nutanix AHV
1. Executive Summary
The Nutanix enterprise cloud OS provides a complete software-defined
datacenter infrastructure solution for applications running on Linux, eliminating
the complexities and inefficiencies of traditional multitier datacenter
environments. Whether you are virtualizing critical tier-1 applications or running
them on bare metal, Nutanix solutions bring the predictable performance,
availability, scalability, and cost benefits of web-scale architecture to your Linux
applications.
Nutanix provides the freedom to choose the hypervisor you want for your
applications, including Nutanix AHV, VMware vSphere, and Microsoft Hyper-
V. Solutions built on the Nutanix enterprise cloud deliver the performance
required for business-critical applications, while powerful self-healing, data
protection, and disaster recovery capabilities keep your applications running
and your vital data well protected. Nutanix provides near-instantaneous local
and remote data protection using snapshots. You can use these snapshots
for offloading database backups to tape and disk or WORM (write once,
read many) for offsite retention. Nutanix also enables one-click cloning, so
administrators can easily and quickly apply these snapshots to refresh a test or
development database from production.
This best practices guide collects recommended Nutanix AHV cluster settings
and Linux OS settings, which include those for Oracle Linux, Red Hat Enterprise
Linux, and CentOS. It covers vDisk (LUN) configuration and settings, Logical
Volume Manager (LVM) configuration, file system settings for ext4 and XFS,
and kernel parameters. Most of the recommendations in this guide also apply
generally to other hypervisor environments on Nutanix.
2. Introduction
Audience
This best practices guide is part of the Nutanix Solutions Library. We wrote
it for architects and system administrators responsible for designing and
maintaining a robust Linux environment and its infrastructure. Readers should
already be familiar with AHV-based Nutanix infrastructure and the Linux OS.
Purpose
In this document, we cover the following topics:
• An overview of Nutanix.
• Nutanix AHV best practices.
• Linux OS kernel settings.
• Linux networking.
• Nutanix Volumes.
• Configuration for volumes (vDisks).
• LVM configuration.
• File system mount options for ext4 and XFS.
• Linux disk device settings.
Version Number | Published     | Notes
1.0            | June 2018     | Original publication.
1.1            | July 2018     | Updated the Nutanix AHV Cluster Networking and Nutanix Volumes sections.
1.2            | January 2019  | Updated product information and the Linux Disk Device Settings section.
1.3            | December 2019 | Updated LVM creation instructions.
1.4            | June 2020     | Updated product information and jumbo frames guidance.
1.5            | March 2021    | Updated load balancing recommendations.
performance, algorithms make sure the most frequently used data is available in
memory or in flash on the node local to the VM.
To learn more about the Nutanix enterprise cloud, visit the Nutanix Bible and
Nutanix.com.
6. Linux Networking
When you create VMs for applications or databases, Nutanix strongly
recommends using at least 10 GbE networking on the Nutanix cluster. If you
require high bandwidth and low latency, Nutanix recommends bonding multiple
interfaces (10 GbE, 25 GbE, or 40 GbE), provided your network switches
support it.
7. Nutanix Volumes
Nutanix Volumes allows bare-metal servers or VMs to access vDisks in a
Nutanix volume group (VG) natively on AHV or through iSCSI. For client iSCSI
access, the Nutanix cluster provides a single data services IP address. If you
require high bandwidth and low latency, you can use volume groups with load
balancing (VGLB) on AHV or Nutanix Volumes iSCSI access to achieve the
performance you need. For more information about Volumes, see the Nutanix
Volumes section of the Prism Web Console Guide.
Jumbo Frames
The Nutanix CVM uses the standard Ethernet MTU (maximum transmission
unit) of 1,500 bytes for all network interfaces by default. The standard
1,500-byte MTU delivers excellent performance and stability. Nutanix does not
support configuring a CVM's network interfaces with a higher MTU.
You can enable jumbo frames (MTU of 9,000 bytes) on the physical network
interfaces of AHV, ESXi, or Hyper-V hosts and user VMs if the applications on
your user VMs require them. If you choose to use jumbo frames on hypervisor
hosts, be sure to enable them end to end in the desired network and consider
both the physical and virtual network infrastructure impacted by the change.
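If you do enable jumbo frames, you can raise and verify the MTU from inside the Linux guest itself. The following is a sketch only; the interface name eth0 and the remote host are assumptions for your environment:

```shell
# Set a 9,000-byte MTU on the guest interface (assumes the NIC is eth0)
ip link set dev eth0 mtu 9000

# Verify the new MTU
ip link show dev eth0

# Confirm end-to-end jumbo support by sending a 9,000-byte frame without
# fragmentation (8972 = 9000 - 20 bytes IP header - 8 bytes ICMP header)
ping -M do -s 8972 <remote-host>
```

If the ping fails with a "message too long" error, some device in the path is not configured for jumbo frames end to end.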
iSCSI Settings
If you use Nutanix Volumes, configure the following iSCSI settings on the guest
OS in the /etc/iscsi/iscsid.conf file and restart the iscsid process.
node.session.timeo.replacement_timeout = 120
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 10
node.session.cmds_max = 2048
node.session.queue_depth = 1024
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 1048576
node.session.iscsi.MaxBurstLength = 16776192
node.conn[0].iscsi.MaxRecvDataSegmentLength = 1048576
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 1048576
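After editing /etc/iscsi/iscsid.conf, restart the daemon so the settings take effect. The following is a sketch using systemd; service names can vary slightly by distribution, and the settings apply only to sessions established after the restart:

```shell
# Restart the iSCSI daemon to pick up the iscsid.conf changes
systemctl restart iscsid.service

# Existing sessions keep their old settings; log out and back in
# to reestablish them with the new values
iscsiadm -m node --logoutall=all
iscsiadm -m node --loginall=all
```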
When you use Nutanix Volumes, you can configure a single path (one
iSCSI NIC) or multipath (two iSCSI NICs). For bare-metal configurations, we
recommend multipath to avoid a single point of failure at the client NIC. If
you use Volumes with a VM, you can configure only a single NIC.
What follows is an example of how to set up a single NIC for iSCSI on a VM.
After you create the VM with a NIC dedicated to iSCSI traffic and assign it an
IP address, follow this procedure. Assume the iSCSI NIC is eth1.
• Create an iface:
[root@localhost ~]# iscsiadm -m iface -I iface1 --op=new
[root@localhost ~]# iscsiadm -m iface -I iface1 --op=update -n iface.net_ifacename -v eth1
• Create a VG.
Figure 2: Create a VG
• Update the VG. This step allows you to specify the initiator IQN. Click Update.
• Click Save.
• Run the iSCSI discover and login commands using the Nutanix external data
service IP address:
[root@localhost ~]# iscsiadm -m discovery -t st -p <DSIP>
[root@localhost ~]# iscsiadm -m node --login
• Configure multipath settings in the /etc/multipath.conf file:
devices {
 device {
 vendor "NUTANIX"
 product "Server"
 path_grouping_policy multibus
 path_selector "round-robin 0"
 features "1 queue_if_no_path"
 path_checker tur
 rr_min_io_rq 20
 rr_weight priorities
 failback immediate
 }
}
multipaths {
 multipath {
 wwid 1NUTANIX_NFS_1_0_4424_ed6ae886_4ecc_473d_aadc_765f706bca13
 alias asm1
 }
}
• Enter the vDisk Universally Unique Identifier (UUID) and an alias name for it.
Add all vDisks to this file; the example above shows only a single vDisk.
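One way to obtain a vDisk's WWID for the multipaths section is the scsi_id utility. This is a sketch; the binary path shown is typical for RHEL-family distributions, and /dev/sdb is an assumed device name:

```shell
# Print the WWID of a vDisk device for use in /etc/multipath.conf
/usr/lib/udev/scsi_id --whitelisted --device=/dev/sdb

# multipath -ll also lists the WWID of each discovered multipath device
multipath -ll
```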
• Restart your multipathd service:
[root@localhost ~]# systemctl stop multipathd.service
[root@localhost ~]# systemctl start multipathd.service
[root@localhost ~]# multipath -r -p multibus
[root@localhost ~]# for a in $(multipath -ll | grep "NUTANIX" | awk '{print $1}') ; do dmsetup message $a 1 "queue_if_no_path" ; done
Note: The for loop ensures that each vDisk has the 1 queue_if_no_path feature. Verify that each
vDisk has this feature by running the multipath -ll command.
The for loop and the multipath -r -p multibus command don't survive a server
reboot. To make sure they're reapplied on reboot, add the commands to a shell
script that runs automatically, such as the /etc/rc.local file. Newer Linux
distributions may not run /etc/rc.local by default, so you may have to enable it
manually.
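A minimal sketch of an /etc/rc.local that reapplies the multipath settings at boot, assuming the commands from the previous steps:

```shell
#!/bin/bash
# /etc/rc.local - reapply multipath settings after each boot

# Reload multipath maps with the multibus path grouping policy
multipath -r -p multibus

# Re-enable queue_if_no_path on every Nutanix vDisk
for a in $(multipath -ll | grep "NUTANIX" | awk '{print $1}') ; do
    dmsetup message $a 1 "queue_if_no_path"
done
```

On systemd-based distributions, make the file executable (chmod +x /etc/rc.local) so the rc-local service runs it at boot.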
Number of vDisks | Purpose                                             | Comment
1                | Boot disk                                           | Can be used with LVM or standard partition
8                | Database datafiles / control files / redo log files | Can be used with Oracle ASM or filesystem with LVM
4                | Database archive log files                          | Can be used with Oracle ASM or filesystem with LVM
4                | Database RMAN backup files                          | Can be used with Oracle ASM or filesystem with LVM
Tip: Nutanix recommends that you use multiple vDisks with OS-level striping for any applications
that require high performance I/O.
There are two ways to create vDisks on Nutanix for your VMs:
1. Nutanix native vDisks.
VGLBs distribute ownership of the vDisks across all the CVMs in the cluster,
which can provide better performance. Use the VGLB feature if your
• Create a VG.
[root@localhost ~]# vgcreate vgdata /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj
Volume group "vgdata" successfully created
• Create a logical volume. Make sure to use the -i (lowercase i) option, which
specifies the number of vDisks to stripe across, and the -I (uppercase i)
option, which specifies the stripe size. The recommended stripe size is 512
KB.
[root@localhost ~]# lvcreate -l 383994 -i 8 -I 512 -n vol1 vgdata
Logical volume "vol1" created.
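You can confirm the resulting stripe layout with the lvs segment report, which shows the stripe count and stripe size per segment:

```shell
# Show the segment layout of the volume group, including the
# stripe count (#Str) and stripe size columns
lvs --segments vgdata
```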
• Create the file system. In this example, we are creating an ext4 file system.
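As a sketch of this step, the file system can be created on the striped logical volume from the previous step (mkfs.xfs would be the XFS equivalent):

```shell
# Create an ext4 file system on the striped logical volume
mkfs.ext4 /dev/vgdata/vol1
```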
• Mount the ext4 file system. If you are creating an XFS file system, please
refer to the Mount Options table.
[root@localhost ~]# mount /dev/vgdata/vol1 /u01/oradata -o noatime,nodiratime,discard,barrier=0
Note: To give your file systems a friendly name, use xfs_admin for XFS or e2label for ext4. Use the
LABEL= option in the /etc/fstab file for easy management.
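As a sketch of the labeling approach, assuming the ext4 volume from the previous steps and a hypothetical label name:

```shell
# Assign a label to the ext4 file system (oradata is a hypothetical label)
e2label /dev/vgdata/vol1 oradata

# Corresponding /etc/fstab entry using the label instead of the device path:
# LABEL=oradata  /u01/oradata  ext4  noatime,nodiratime,discard,barrier=0  0 0
```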
• Either put this command in the /etc/rc.local file, so that it runs the next time
the server reboots, or use UDEV rules. For UDEV rules, you can create a file
with the following content under the /etc/udev/rules.d directory:
ACTION=="add", SUBSYSTEMS=="scsi", ATTRS{vendor}=="NUTANIX ", ATTRS{model}=="VDISK", RUN+="/bin/sh -c 'echo 1024 >/sys$DEVPATH/queue/max_sectors_kb'"
ACTION=="add", SUBSYSTEMS=="scsi", ATTRS{vendor}=="NUTANIX ", ATTRS{model}=="VDISK", RUN+="/bin/sh -c 'echo 60 >/sys$DEVPATH/device/timeout'"
If you put this command in the /etc/rc.local file, it runs the next time the server
reboots. Alternatively, you can put it in the GRUB configuration file. Follow these
steps for GRUB2 configuration:
• Edit the /etc/default/grub file, add “elevator=noop” to the
GRUB_CMDLINE_LINUX line, and save the file.
• Disable transparent_hugepage.
• In the same edit window, add “transparent_hugepage=never” to the end of
the elevator=noop line and save the file.
• If the Linux kernel supports the blk_mq (block multiqueue) option, add the
parameter “scsi_mod.use_blk_mq=1” to enable blk_mq and remove the
elevator=noop option.
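Putting the preceding bullets together, the edited line in /etc/default/grub might look like the following sketch; keep any existing parameters on the line, and regenerate the GRUB configuration afterward (the grub.cfg path differs between BIOS and UEFI systems):

```shell
# /etc/default/grub - kernel parameters from the steps above
# (append to any parameters already present on this line)
GRUB_CMDLINE_LINUX="elevator=noop transparent_hugepage=never"

# Regenerate the GRUB2 configuration (BIOS path shown)
grub2-mkconfig -o /boot/grub2/grub.cfg
```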
11. Conclusion
This best practices guide outlines our recommended settings for a Linux VM
running on Nutanix with AHV. With the proper settings for Nutanix cluster
networking, volumes, LVM, and kernel configuration, along with the proper
file system mount options, you can maximize Linux performance for any
application. The Nutanix enterprise cloud OS removes the complexity of
constantly managing and optimizing the underlying compute, network, and
storage architecture, so you can focus on higher-value tasks for your business.
For feedback or questions, please contact us using the Nutanix NEXT
Community forums.
Appendix
References
1. Nutanix Volumes best practices guide
2. AHV best practices guide
3. Documentation for /proc/sys/vm/*
4. Prism Web Console Guide, Nutanix Volumes
5. Nutanix Volumes Guide
About Nutanix
Nutanix makes infrastructure invisible, elevating IT to focus on the applications
and services that power their business. The Nutanix enterprise cloud software
leverages web-scale engineering and consumer-grade design to natively
converge compute, virtualization, and storage into a resilient, software-
defined solution with rich machine intelligence. The result is predictable
performance, cloud-like infrastructure consumption, robust security, and
seamless application mobility for a broad range of enterprise applications.
Learn more at www.nutanix.com or follow us on Twitter @nutanix.
List of Figures
Figure 1: Nutanix Enterprise Cloud OS Stack