OpenHPC (v1.3.5)
Cluster Building Recipes
CentOS7.5 Base OS
xCAT/SLURM Edition for Linux* (x86_64)
Legal Notice
Copyright © 2016-2018, OpenHPC, a Linux Foundation Collaborative Project. All rights reserved.
Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others.
Contents
1 Introduction
1.1 Target Audience
1.2 Requirements/Assumptions
1.3 Inputs
Appendices
A Installation Template
B Upgrading OpenHPC Packages
B.1 New component variants
C Integration Test Suite
D Customization
D.1 Adding local Lmod modules to OpenHPC hierarchy
D.2 Rebuilding Packages from Source
E Package Manifest
F Package Signatures
1 Introduction
This guide presents a simple cluster installation procedure using components from the OpenHPC software
stack. OpenHPC represents an aggregation of a number of common ingredients required to deploy and
manage an HPC Linux* cluster including provisioning tools, resource management, I/O clients, develop-
ment tools, and a variety of scientific libraries. These packages have been pre-built with HPC integration
in mind while conforming to common Linux distribution standards. The documentation herein is intended
to be reasonably generic, but uses the underlying motivation of a small, 4-node stateless cluster installation
to define a step-by-step process. Several optional customizations are included and the intent is that these
collective instructions can be modified as needed for local site customizations.
Base Linux Edition: this edition of the guide highlights installation without the use of a companion con-
figuration management system and directly uses distro-provided package management tools for component
selection. The steps that follow also highlight specific changes to system configuration files that are required
as part of the cluster install process.
Unless specified otherwise, the examples presented are executed with elevated (root) privileges. The
examples also presume use of the BASH login shell, though the equivalent commands in other shells can
be substituted. In addition to specific command-line instructions called out in this guide, an alternate
convention is used to highlight potentially useful tips or optional configuration options. These tips are
highlighted via the following format:
Tip
How much sugar is in a cup? It depends on how big the cup is! –D. Brayford
1.2 Requirements/Assumptions
This installation recipe assumes the availability of a single head node (the master) and four compute nodes. The
master node serves as the overall system management server (SMS) and is provisioned with CentOS7.5 and
is subsequently configured to provision the remaining compute nodes with xCAT in a stateless configuration.
The terms master and SMS are used interchangeably in this guide. For power management, we assume that
the compute node baseboard management controllers (BMCs) are available via IPMI from the chosen master
host. For file systems, we assume that the chosen master server will host an NFS file system that is made
available to the compute nodes. Installation information is also provided for optionally mounting a parallel
file system; in that case, the parallel file system is assumed to already exist.
An outline of the physical architecture discussed is shown in Figure 1 and highlights the high-level
networking configuration.
[Figure 1: Overview of physical cluster architecture. The master (SMS) host connects to the local data
center network via eth0, while eth1 serves the compute node Ethernet and BMC management interfaces on
the cluster-internal network.]
The master host requires at least two Ethernet interfaces, with eth0 connected to
the local data center network and eth1 used to provision and manage the cluster backend (note that these
interface names are examples and may be different depending on local settings and OS conventions). Two
logical IP interfaces are expected for each compute node: the first is the standard Ethernet interface that
will be used for provisioning and resource management. The second is used to connect to each host’s BMC
and is used for power management and remote console access. Physical connectivity for these two logical
IP networks is often accommodated via separate cabling and switching infrastructure; however, an alternate
configuration can also be accommodated via the use of a shared NIC, which runs a packet filter to divert
management packets between the host and BMC.
In addition to the IP networking, there is an optional high-speed network (InfiniBand or Omni-Path
in this recipe) that is also connected to each of the hosts. This high speed network is used for application
message passing and optionally for parallel file system connectivity as well (e.g. to existing Lustre or BeeGFS
storage targets).
1.3 Inputs
As this recipe details installing a cluster starting from bare-metal, there is a requirement to define IP ad-
dresses and gather hardware MAC addresses in order to support a controlled provisioning process. These
values are necessarily unique to the hardware being used, and this document uses variable substitution
(${variable}) in the command-line examples that follow to highlight where local site inputs are required.
A summary of the required and optional variables used throughout this recipe is presented below. Note
that while the example definitions above correspond to a small 4-node compute subsystem, the compute
parameters are defined in array format to accommodate logical extension to larger node counts.
Prior to beginning the installation process of OpenHPC components, several additional considerations
are noted here for the SMS host configuration. First, the installation recipe herein assumes that the SMS
host name is resolvable locally. Depending on the manner in which you installed the base OS (BOS), there may be an
adequate entry already defined in /etc/hosts. If not, the following addition can be used to identify your
SMS host.
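A minimal sketch of such an addition, assuming the ${sms_ip} and ${sms_name} variables from §1.3 are defined in the current shell:
# Add local hostname resolution for the SMS host
[sms]# echo ${sms_ip} ${sms_name} >> /etc/hosts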
While it is theoretically possible to enable SELinux on a cluster provisioned with xCAT, doing so is
beyond the scope of this document. Even the use of permissive mode can be problematic and we therefore
recommend disabling SELinux on the master SMS host. If SELinux components are installed locally, the
selinuxenabled command can be used to determine if SELinux is currently enabled. If enabled, consult
the distro documentation for information on how to disable it.
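As a sketch, the current state can be checked and, if needed, SELinux can be disabled persistently by editing /etc/selinux/config and rebooting (the exact procedure may vary by site):
# Check whether SELinux is currently enabled
[sms]# selinuxenabled && echo "SELinux is enabled" || echo "SELinux is disabled"
# One common approach: set SELINUX=disabled in /etc/selinux/config, then reboot
[sms]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config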
Finally, provisioning services rely on DHCP, TFTP, and HTTP network protocols. Depending on the
local BOS configuration on the SMS host, default firewall rules may prohibit these services. Consequently,
this recipe assumes that the local firewall running on the SMS host is disabled. If installed, the default
firewall service can be disabled as follows:
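A sketch of disabling the stock firewalld service on the SMS host:
# Disable and stop the default firewall service
[sms]# systemctl disable firewalld
[sms]# systemctl stop firewalld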
Tip
Many sites may find it useful or necessary to maintain a local copy of the OpenHPC repositories. To facilitate
this need, standalone tar archives are provided: one containing a repository of binary packages as well as any
available updates, and one containing a repository of source RPMs. The tar files also contain a simple bash
script to configure the package manager to use the local repository after download. To use, simply unpack
the tarball where you would like to host the local repository and execute the make_repo.sh script. Tar files
for this release can be found at http://build.openhpc.community/dist/1.3.5
xCAT has a number of installation dependencies that are housed in separate public repositories for various
distributions. To enable these for local use, issue the following:
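A hedged sketch of enabling an xCAT dependency repository; the actual repository file URL (shown here as the hypothetical ${xcat_dep_repo_url}) depends on the xCAT release and distro in use:
# Download the xCAT dependency repo definition into yum's configuration (URL is site/release specific)
[sms]# wget -P /etc/yum.repos.d ${xcat_dep_repo_url}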
In addition to the OpenHPC and xCAT package repositories, the master host also requires access to
the standard base OS distro repositories in order to resolve necessary dependencies. For CentOS7.5, the
requirements are to have access to both the base OS and EPEL repositories for which mirrors are freely
available online:
The public EPEL repository will be enabled automatically upon installation of the ohpc-release package.
Note that this requires the CentOS Extras repository, which is shipped with CentOS and is enabled by
default.
Tip
Many server BIOS configurations have PXE network booting configured as the primary option in the boot
order by default. If your compute nodes have a different device as the first in the sequence, the ipmitool
utility can be used to enable PXE.
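As a sketch, assuming a compute BMC at ${bmc_ipaddr} with administrative credentials ${bmc_username} (and the password supplied via the IPMI_PASSWORD environment variable), PXE can be set as the persistent boot device:
# Set PXE as the persistent first boot device via IPMI
[sms]# ipmitool -E -I lanplus -H ${bmc_ipaddr} -U ${bmc_username} chassis bootdev pxe options=persistent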
HPC systems rely on synchronized clocks throughout the system and the NTP protocol can be used to
facilitate this synchronization. To enable NTP services on the SMS host with a specific server ${ntp_server},
issue the following:
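A minimal sketch, assuming the ntp package provides the ntpd service:
# Enable NTP services on the SMS host and point it at the chosen time server
[sms]# systemctl enable ntpd.service
[sms]# echo "server ${ntp_server}" >> /etc/ntp.conf
[sms]# systemctl restart ntpd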
Tip
SLURM requires enumeration of the physical hardware characteristics for compute nodes under its control.
In particular, three configuration parameters combine to define consumable compute resources: Sockets,
CoresPerSocket, and ThreadsPerCore. The default configuration file provided via OpenHPC assumes dual-
socket, 8 cores per socket, and two threads per core for this 4-node example. If this does not reflect your
local hardware, please update the configuration file at /etc/slurm/slurm.conf accordingly to match your
particular hardware. Note that the SLURM project provides an easy-to-use online configuration tool that
can be accessed here.
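As an illustration only, a matching node definition in slurm.conf might look like the following (the c[1-4] node names are an assumption for this example):
# Example compute node definition in /etc/slurm/slurm.conf
NodeName=c[1-4] Sockets=2 CoresPerSocket=8 ThreadsPerCore=2 State=UNKNOWN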
Other versions of this guide are available that describe installation of alternate resource management
systems, and they can be found in the docs-ohpc package.
# Load IB drivers
[sms]# systemctl start rdma
Tip
InfiniBand networks require a subnet management service that can typically be run on either an
administrative node, or on the switch itself. The optimal placement and configuration of the subnet
manager is beyond the scope of this document, but CentOS7.5 provides the opensm package should
you choose to run it on the master node.
With the InfiniBand drivers included, you can also enable (optional) IPoIB functionality which provides
a mechanism to send IP packets over the IB network. If you plan to mount a Lustre file system over
InfiniBand (see §3.9.4.3 for additional details), then having IPoIB enabled is a requirement for the Lustre
client. OpenHPC provides a template configuration file to aid in setting up an ib0 interface on the master
host. To use, copy the template provided and update the ${sms_ipoib} and ${ipoib_netmask} entries to
match the desired local settings (alter the ib0 naming as appropriate if the system contains dual-ported or
multiple HCAs).
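A sketch of this procedure, assuming the template ships under /opt/ohpc/pub/examples and uses master_ipoib/ipoib_netmask placeholders:
# Copy the provided IPoIB template and define the local address and netmask
[sms]# cp /opt/ohpc/pub/examples/network/centos/ifcfg-ib0 /etc/sysconfig/network-scripts
[sms]# perl -pi -e "s/master_ipoib/${sms_ipoib}/" /etc/sysconfig/network-scripts/ifcfg-ib0
[sms]# perl -pi -e "s/ipoib_netmask/${ipoib_netmask}/" /etc/sysconfig/network-scripts/ifcfg-ib0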
# Initiate ib0
[sms]# ifup ib0
Tip
Omni-Path networks require a subnet management service that can typically be run on either an
administrative node, or on the switch itself. The optimal placement and configuration of the subnet
manager is beyond the scope of this document, but CentOS7.5 provides the opa-fm package should
you choose to run it on the master node.
Once completed, several OS images should be available for use within xCAT. These can be queried via:
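A sketch of the query using xCAT's lsdef utility:
# List OS images currently defined within xCAT
[sms]# lsdef -t osimage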
In this example, we leverage the stateless (netboot) image for compute nodes and proceed by using
genimage to initialize a chroot-based install. Note that the previous query highlights the existence of other
provisioning images as well. OpenHPC also provides a stateful xCAT recipe; follow that guide if you are
interested in a stateful install. Statelite installation is an intermediate type of install, in which a limited
number of files and directories persist across reboots. Please consult the available xCAT documentation if
interested in this type of install.
Now, we can add additional components to the compute instance using yum to install into the chroot
location defined previously:
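A sketch of this step; the meta-package names below follow the ohpc naming convention but the exact selection is illustrative:
# Install compute node base packages and Slurm client support into the image
[sms]# yum -y --installroot=$CHROOT install ohpc-base-compute
[sms]# yum -y --installroot=$CHROOT install ohpc-slurm-client ntp kernel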
# Enable NTP time service on computes and identify master host as local NTP server
[sms]# chroot $CHROOT systemctl enable ntpd
[sms]# echo "server ${sms_ip}" >> $CHROOT/etc/ntp.conf
Details on the steps required for each of these customizations are discussed further in the following sections.
3.9.4.1 Increase locked memory limits In order to utilize InfiniBand or Omni-Path as the underlying
high speed interconnect, it is generally necessary to increase the locked memory settings for system users.
This can be accomplished by updating the /etc/security/limits.conf file and this should be performed
within the compute image and on all job submission hosts. In this recipe, jobs are submitted from the master
host, and the following commands can be used to update the maximum locked memory settings on both the
master host and the compute image:
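A minimal sketch of the memlock updates, appended both on the master host and within $CHROOT for the compute image:
# Update memlock settings on the master host and within the compute image
[sms]# echo "* soft memlock unlimited" >> /etc/security/limits.conf
[sms]# echo "* hard memlock unlimited" >> /etc/security/limits.conf
[sms]# echo "* soft memlock unlimited" >> $CHROOT/etc/security/limits.conf
[sms]# echo "* hard memlock unlimited" >> $CHROOT/etc/security/limits.conf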
3.9.4.2 Enable ssh control via resource manager An additional optional customization that is
recommended is to restrict ssh access on compute nodes to only allow access by users who have an active
job associated with the node. This can be enabled via the use of a pluggable authentication module (PAM)
provided as part of the Slurm package installs. To enable this feature within the compute image, issue the
following:
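A sketch of enabling the Slurm PAM module within the compute image (pam_slurm is provided as part of the Slurm package installs):
# Restrict ssh on computes to users with an active job
[sms]# echo "account    required     pam_slurm.so" >> $CHROOT/etc/pam.d/sshd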
3.9.4.3 Add Lustre client To add Lustre client support on the cluster, it is necessary to install the client
and associated modules on each host needing to access a Lustre file system. In this recipe, it is assumed
that the Lustre file system is hosted by servers that are pre-existing and are not part of the install process.
Outlining the variety of Lustre client mounting options is beyond the scope of this document, but the general
requirement is to add a mount entry for the desired file system that defines the management server (MGS)
and underlying network transport protocol. To add client mounts on both the master server and compute
image, the following commands can be used. Note that the Lustre file system to be mounted is identified
by the ${mgs_fs_name} variable. In this example, the file system is configured to be mounted locally as
/mnt/lustre.
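A sketch of adding the client software and mount entries; the lustre-client-ohpc package name and the mount options shown are assumptions to adapt locally:
# Add Lustre client software on the master host and within the compute image
[sms]# yum -y install lustre-client-ohpc
[sms]# yum -y --installroot=$CHROOT install lustre-client-ohpc
# Create mount point and add fstab entries for the desired file system
[sms]# mkdir -p /mnt/lustre $CHROOT/mnt/lustre
[sms]# echo "${mgs_fs_name} /mnt/lustre lustre defaults,localflock,noauto,_netdev 0 0" >> /etc/fstab
[sms]# echo "${mgs_fs_name} /mnt/lustre lustre defaults,localflock,noauto,_netdev 0 0" >> $CHROOT/etc/fstab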
Tip
The suggested mount options shown for Lustre leverage the localflock option. This is a Lustre-specific
setting that enables client-local flock support. It is much faster than cluster-wide flock, but if you have an
application requiring cluster-wide, coherent file locks, use the standard flock attribute instead.
The default underlying network type used by Lustre is tcp. If your external Lustre file system is to be
mounted using a network type other than tcp, additional configuration files are necessary to identify the de-
sired network type. The example below illustrates creation of modprobe configuration files instructing Lustre
to use an InfiniBand network with the o2ib LNET driver attached to ib0. Note that these modifications
are made to both the master host and compute image.
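A sketch of the o2ib LNET configuration:
# Use o2ib over ib0 for LNET on the master host and within the compute image
[sms]# echo "options lnet networks=o2ib(ib0)" >> /etc/modprobe.d/lustre.conf
[sms]# echo "options lnet networks=o2ib(ib0)" >> $CHROOT/etc/modprobe.d/lustre.conf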
With the Lustre configuration complete, the client can be mounted on the master host as follows:
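A minimal sketch, relying on the fstab entry defined above:
# Mount the Lustre file system on the master host
[sms]# mount /mnt/lustre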
3.9.4.4 Add Nagios monitoring Nagios is an open source infrastructure monitoring package that
monitors servers, switches, applications, and services and offers user-defined alerting facilities. As provided
by OpenHPC, it consists of a base monitoring daemon and a set of plug-ins for monitoring various aspects
of an HPC cluster. The following commands can be used to install and configure a Nagios server on the
master node, and add the facility to run tests and gather metrics from provisioned compute nodes.
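A sketch of the server-side installation using the ohpc-nagios meta-package; compute-side plug-in selection (e.g. NRPE) is left to local preference:
# Install Nagios server components and plug-ins on the master host
[sms]# yum -y install ohpc-nagios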
3.9.4.5 Add Ganglia monitoring Ganglia is a scalable distributed system monitoring tool for high-
performance computing systems such as clusters and grids. It allows the user to remotely view live or
historical statistics (such as CPU load averages or network utilization) for all machines running the gmond
daemon. The following commands can be used to enable Ganglia to monitor both the master and compute
hosts.
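A sketch using the OpenHPC meta-package on the master and the gmond daemon within the compute image (package names assumed from the ohpc convention):
# Install Ganglia server components on master and the monitoring daemon in the compute image
[sms]# yum -y install ohpc-ganglia
[sms]# yum -y --installroot=$CHROOT install ganglia-gmond-ohpc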
Once enabled and running, Ganglia should provide access to a web-based monitoring console on the master
host. Read access to monitoring metrics will be enabled by default and can be accessed via a web browser.
When running a web browser directly on the master host, the Ganglia top-level overview is available at
http://localhost/ganglia. When accessing remotely, replace localhost with the chosen name of your master
host (${sms_name}).
3.9.4.6 Add ClusterShell ClusterShell is an event-based Python library to execute commands in par-
allel across cluster nodes. Installation and basic configuration defining three node groups (adm, compute,
and all) is as follows:
# Install ClusterShell
[sms]# yum -y install clustershell-ohpc
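A sketch of the group definitions, assuming the default local.cfg group source and c[1-4] compute host names:
# Define adm, compute, and all node groups for ClusterShell
[sms]# mv /etc/clustershell/groups.d/local.cfg /etc/clustershell/groups.d/local.cfg.orig
[sms]# echo "adm: ${sms_name}" > /etc/clustershell/groups.d/local.cfg
[sms]# echo "compute: c[1-4]" >> /etc/clustershell/groups.d/local.cfg
[sms]# echo "all: @adm,@compute" >> /etc/clustershell/groups.d/local.cfg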
3.9.4.7 Add mrsh mrsh is a secure remote shell utility, like ssh, which uses munge for authentication
and encryption. By leveraging the munge installation used by Slurm, mrsh provides shell access to systems using
the same munge key without having to track ssh keys. Like ssh, mrsh provides a remote copy command,
mrcp, and can be used as a rcmd by pdsh. Example installation and configuration is as follows:
# Install mrsh
[sms]# yum -y install mrsh-ohpc mrsh-rsh-compat-ohpc
[sms]# yum -y --installroot=$CHROOT install mrsh-ohpc mrsh-rsh-compat-ohpc mrsh-server-ohpc
3.9.4.8 Add genders genders is a static cluster configuration database or node typing database used
for cluster configuration management. Other tools and users can access the genders database in order to
make decisions about where an action, or even what action, is appropriate based on associated types or
“genders”. Values may also be assigned to and retrieved from a gender to provide further granularity. The
following example highlights installation and configuration of two genders: compute and bmc.
# Install genders
[sms]# yum -y install genders-ohpc
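A sketch of populating /etc/genders; the node names and attribute values below are assumptions to adapt to the local naming scheme:
# Define genders for the compute hosts (repeat or extend for each node as appropriate)
[sms]# echo -e "c[1-4]\tcompute" >> /etc/genders
[sms]# echo -e "c1\tbmc=c1-bmc" >> /etc/genders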
3.9.4.9 Add ConMan ConMan is a serial console management program designed to support a large
number of console devices and simultaneous users. It supports logging console device output and connecting
to compute node consoles via IPMI serial-over-lan. Installation and example configuration is outlined below.
# Configure conman for computes (note your IPMI password is required for console access)
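# (sketch: install ConMan and enable its service; console entries in /etc/conman.conf typically use
#  dev="ipmi:<bmc-address>" together with IPMI credentials, adjusted to the local BMC addressing)
[sms]# yum -y install conman-ohpc
[sms]# systemctl enable conman.service
[sms]# systemctl start conman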
Note that additional options are typically necessary to enable serial console output. These are set up during
the node registration process in §3.11.
3.9.4.10 Add NHC Resource managers provide for a periodic “node health check” to be performed
on each compute node to verify that the node is working properly. Nodes which are determined to be
“unhealthy” can be marked as down or offline so as to prevent jobs from being scheduled or run on them.
This helps increase the reliability and throughput of a cluster by reducing preventable job failures due to
misconfiguration, hardware failure, etc. OpenHPC distributes NHC to fulfill this requirement.
In a typical scenario, the NHC driver script is run periodically on each compute node by the resource
manager client daemon. It loads its configuration file to determine which checks are to be run on the current
node (based on its hostname). Each matching check is run, and if a failure is encountered, NHC will exit
with an error message describing the problem. It can also be configured to mark nodes offline so that the
scheduler will not assign jobs to bad nodes, reducing the risk of system-induced job failures.
Similarly, the global Slurm configuration file and the cryptographic key required by the munge authentication
library must be available on every host in the resource management pool. To import these files, issue the
following:
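A sketch using xCAT's file synchronization support; the synclist path and osimage name below are assumptions:
# Add Slurm configuration and munge key to a synclist associated with the compute image
[sms]# echo "/etc/slurm/slurm.conf -> /etc/slurm/slurm.conf" > /install/custom/netboot/compute.synclist
[sms]# echo "/etc/munge/munge.key -> /etc/munge/munge.key" >> /install/custom/netboot/compute.synclist
[sms]# chdef -t osimage -o centos7.5-x86_64-netboot-compute synclists="/install/custom/netboot/compute.synclist"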
Tip
The “updatenode compute -F” command can be used to distribute changes made to any defined synchro-
nization files on the SMS host. Users wishing to automate this process may want to consider adding a crontab
entry to perform this action at defined intervals.
Tip
Defining nodes one-by-one, as done above, is only efficient for a small number of nodes. For larger node
counts, xCAT provides capabilities for automated detection and configuration. Consult the xCAT Hardware
Discovery & Define Node Guide. Alternatively, confluent, a tool related to xCAT, also has robust discovery
capabilities and can be used to detect and auto-configure compute hosts.
xCAT requires a network domain name specification for system-wide name resolution. This value can be set
to match your local DNS schema or given a unique identifier such as “local”. In this recipe, we leverage the
${domain_name} variable and define it as follows:
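A minimal sketch of the xCAT site table update:
# Define a system-wide domain name within xCAT
[sms]# chdef -t site domain=${domain_name}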
If enabling optional IPoIB functionality (e.g. to support Lustre over InfiniBand), additional settings are
required to define the IPoIB network with xCAT and specify desired IP settings for each compute. This can
be accomplished as follows for the ib0 interface:
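A hedged sketch of the network definition; the subnet value (${ipoib_subnet}) and per-node IP assignments are assumptions that must match local settings:
# Register the ib0 IPoIB network with xCAT
[sms]# chdef -t network -o ib0 mask=${ipoib_netmask} net=${ipoib_subnet}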
With the desired compute nodes and domain identified, the remaining steps in the provisioning configura-
tion process are to define the provisioning mode and image for the compute group and use xCAT commands
to complete configuration for network services like DNS and DHCP. These tasks are accomplished as follows:
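A sketch of these steps; the netboot osimage name is an assumption based on the image generated earlier:
# Update hosts, network, DHCP, and DNS configuration, then associate the netboot image with the compute group
[sms]# makehosts
[sms]# makenetworks
[sms]# makedhcp -n
[sms]# makedns -n
[sms]# nodeset compute osimage=centos7.5-x86_64-netboot-compute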
At this point, the master server should be able to boot the newly defined compute nodes. This is done by
using the rpower xCAT command, leveraging the IPMI protocol set up during the compute node definition
in §3.11. The following command power cycles each of the desired hosts.
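A minimal sketch of the power cycle:
# (Re)boot the compute hosts via their BMCs
[sms]# rpower compute reset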
Once kicked off, the boot process should take about 5 minutes (depending on BIOS post times). You
can monitor the provisioning by using the rcons command, which displays the serial console for a selected node.
Note that the escape sequence is CTRL-e c . typed sequentially.
Successful provisioning can be verified by a parallel command on the compute nodes. The default install
provides two such tools: the xCAT-provided psh command, which uses xCAT node names and groups, and pdsh,
which works independently. For example, to run a command on the newly imaged compute hosts using psh,
execute the following:
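A sketch of a simple post-provisioning check:
# Run a command across the compute group with psh
[sms]# psh compute uptime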
4.2 Compilers
OpenHPC presently packages the GNU compiler toolchain integrated with the underlying modules-environment
system in a hierarchical fashion. The modules system will conditionally present compiler-dependent software
based on the toolchain currently loaded.
The llvm compiler toolchains are also provided as a standalone additional compiler family (i.e., users can
easily switch between gcc/clang environments), but we do not provide the full complement of downstream
library builds.
If your system includes InfiniBand and you enabled underlying support in §3.6 and §3.9.4, an additional
MVAPICH2 family is available for use:
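A sketch of the installation; the package name below follows the OpenHPC convention for the gnu7 toolchain:
# Add MVAPICH2 for InfiniBand
[sms]# yum -y install mvapich2-gnu7-ohpc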
Alternatively, if your system includes Intel® Omni-Path, use the (psm2) variant of MVAPICH2 instead:
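A corresponding sketch for Omni-Path systems:
# Add the Omni-Path (psm2) variant of MVAPICH2
[sms]# yum -y install mvapich2-psm2-gnu7-ohpc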
An additional OpenMPI build variant is listed in Table 1 which enables PMIx job launch support for use
with Slurm. This optional variant is available as openmpi3-pmix-slurm-gnu7-ohpc.
Tip
If you want to change the default environment from the suggestion above, OpenHPC also provides the GNU
compiler toolchain with the MPICH and MVAPICH2 stacks:
• lmod-defaults-gnu7-mpich-ohpc
• lmod-defaults-gnu7-mvapich2-ohpc
name. For example, libraries that do not require MPI as part of the build process adopt the following RPM
name:
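A sketch of the pattern, inferred from package names used elsewhere in this guide:
package-<compiler_family>-ohpc-<package_version>-<release>.rpm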
Packages that do require MPI as part of the build expand upon this convention to additionally include the
MPI family name as follows:
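Again as a sketch of the convention:
package-<compiler_family>-<mpi_family>-ohpc-<package_version>-<release>.rpm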
To illustrate this further, the command below queries the locally configured repositories to identify all of
the available PETSc packages that were built with the GNU toolchain. The resulting output that is included
shows that pre-built versions are available for each of the supported MPI families presented in §4.3.
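A sketch of such a query:
# Query locally available PETSc builds for the GNU7 toolchain
[sms]# yum search petsc-gnu7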
Tip
OpenHPC-provided 3rd party builds are configured to be installed into a common top-level repository so that
they can be easily exported to desired hosts within the cluster. This common top-level path (/opt/ohpc/pub)
was previously configured to be mounted on compute nodes in §3.9.3, so the packages will be immediately
available for use on the cluster after installation on the master host.
For convenience, OpenHPC provides package aliases for these 3rd party libraries and utilities that can
be used to install available libraries for use with the GNU compiler family toolchain. For parallel libraries,
aliases are grouped by MPI family toolchain so that administrators can choose a subset should they favor a
particular MPI stack. Please refer to Appendix E for a more detailed listing of all available packages in each
of these functional areas. To install all available package offerings within OpenHPC, issue the following:
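A sketch using the convenience meta-packages; the names below follow the ohpc-gnu7 pattern but the exact list should be checked against Appendix E:
# Install default development tools and libraries for the GNU7 toolchain
[sms]# yum -y install ohpc-gnu7-serial-libs ohpc-gnu7-io-libs ohpc-gnu7-perf-tools ohpc-gnu7-python-libs ohpc-gnu7-runtimes
# Install parallel library collections for each supported MPI family
[sms]# yum -y install ohpc-gnu7-mpich-parallel-libs ohpc-gnu7-openmpi3-parallel-libs ohpc-gnu7-mvapich2-parallel-libs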
Tip
As noted in §3.9.3, the default installation path for OpenHPC (/opt/ohpc/pub) is exported over
NFS from the master to the compute nodes, but the Parallel Studio installer defaults to a path of
/opt/intel. To make the Intel® compilers available to the compute nodes one must either customize
the Parallel Studio installation path to be within /opt/ohpc/pub, or alternatively, add an additional
NFS export for /opt/intel that is mounted on desired compute nodes.
To enable all 3rd party builds available in OpenHPC that are compatible with Intel® Parallel Studio, issue
the following:
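As a hedged sketch, the Intel-oriented meta-packages are assumed to mirror the gnu7 naming above; consult Appendix E for the authoritative list:
# Install Intel-compatible development libraries (names patterned on the gnu7 equivalents)
[sms]# yum -y install ohpc-intel-serial-libs ohpc-intel-io-libs ohpc-intel-perf-tools ohpc-intel-runtimes
[sms]# yum -y install ohpc-intel-mpich-parallel-libs ohpc-intel-openmpi3-parallel-libs ohpc-intel-mvapich2-parallel-libs ohpc-intel-impi-parallel-libs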
# Optionally, choose the Omni-Path enabled build for MVAPICH2. Otherwise, skip to retain IB variant
[sms]# yum -y install mvapich2-psm2-intel-ohpc
Next, the user’s credentials need to be distributed across the cluster. xCAT’s xdcp has a merge function-
ality that adds new entries into credential files on compute nodes:
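A sketch of the credential distribution, assuming a synclist using xCAT's MERGE support:
# Create a synclist that merges credential files and push it to the computes
[sms]# echo "MERGE:" > syncusers
[sms]# echo "/etc/passwd -> /etc/passwd" >> syncusers
[sms]# echo "/etc/group -> /etc/group" >> syncusers
[sms]# echo "/etc/shadow -> /etc/shadow" >> syncusers
[sms]# xdcp compute -F syncusers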
Alternatively, the updatenode compute -F command can be used. This re-synchronizes (i.e. copies) all
the files defined in the syncfile set up in §3.9.5.
OpenHPC includes a simple “hello-world” MPI application in the /opt/ohpc/pub/examples directory that
can be used for this quick compilation and execution. OpenHPC also provides a companion job-launch
utility named prun that is installed in concert with the pre-packaged MPI toolchains. This convenience
script provides a mechanism to abstract job launch across different resource managers and MPI stacks such
that a single launch command can be used for parallel job launch in a variety of OpenHPC environments.
It also provides a centralizing mechanism for administrators to customize desired environment settings for
their users.
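A sketch of compiling and launching the example interactively; the hello.c path and resource counts are assumptions:
# Compile the example (assumes a GNU + MPI toolchain is loaded via modules)
[test@sms ~]$ mpicc -O3 /opt/ohpc/pub/examples/mpi/hello.c
# Request an interactive allocation and launch with prun
[test@sms ~]$ srun -n 8 -N 2 --pty /bin/bash
[test@c1 ~]$ prun ./a.out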
Tip
The following table provides approximate command equivalences between SLURM and PBS Pro:
prun ./a.out
Tip
The use of the %j option in the example batch job script shown is a convenient way to track application output
on an individual job basis. The %j token is replaced with the Slurm job allocation number once assigned
(job #339 in this example).
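For illustration, a minimal Slurm batch script using %j might look like the following (the job name, sizes, and time limit are placeholders):
#!/bin/bash
#SBATCH -J test              # Job name
#SBATCH -o job.%j.out        # Name of stdout output file (%j expands to the job ID)
#SBATCH -N 2                 # Total number of nodes requested
#SBATCH -n 16                # Total number of MPI tasks requested
#SBATCH -t 01:30:00          # Run time (hh:mm:ss)
prun ./a.out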
Appendices
A Installation Template
This appendix highlights the availability of a companion installation script that is included with OpenHPC
documentation. This script, when combined with local site inputs, can be used to implement a starting
recipe for bare-metal system installation and configuration. This template script is used during validation
efforts to test cluster installations and is provided as a convenience for administrators as a starting point for
potential site customization.
Tip
Note that the template script provided is intended for use during initial installation and is not designed for
repeated execution. If modifications are required after using the script initially, we recommend running the
relevant subset of commands interactively.
The template script relies on the use of a simple text file to define local site variables that were outlined
in §1.3. By default, the template installation script attempts to use local variable settings sourced from
the /opt/ohpc/pub/doc/recipes/vanilla/input.local file; however, this choice can be overridden by
the use of the ${OHPC_INPUT_LOCAL} environment variable. The template install script is intended for
execution on the SMS master host and is installed as part of the docs-ohpc package into /opt/ohpc/pub/
doc/recipes/vanilla/recipe.sh. After enabling the OpenHPC repository and reviewing the guide for
additional information on the intent of the commands, the general starting approach for using this template
is as follows:
2. Copy the provided template input file to use as a starting point to define local site settings:
[sms]# cp -p /opt/ohpc/pub/doc/recipes/centos7/x86_64/xcat/slurm/recipe.sh .
4. Rebuild image(s)
In the case where packages were upgraded within the chroot compute image, you will need to reboot the
compute nodes when convenient to enable the changes.
The RPM installation creates a user named ohpc-test to house the test suite and provide an isolated
environment for execution. Configuration of the test suite is done using standard GNU autotools semantics
and the BATS shell-testing framework is used to execute and log a number of individual unit tests. Some
tests require privileged execution, so a different combination of tests will be enabled depending on which user
executes the top-level configure script. Non-privileged tests requiring execution on one or more compute
nodes are submitted as jobs through the SLURM resource manager. The tests are further divided into
“short” and “long” run categories. The short run configuration is a subset of approximately 180 tests to
demonstrate basic functionality of key components (e.g. MPI stacks) and should complete in 10-20 minutes.
The long run (around 1000 tests) is comprehensive and can take an hour or more to complete.
Most components can be tested individually, but a default configuration is set up to enable collective
testing. To test an isolated component, use the configure option to disable all tests, then re-enable the
desired test to run. The --help option to configure will display all possible tests. Example output is
shown below (some output is omitted for the sake of brevity).
[sms]# su - ohpc-test
[test@sms ~]$ cd tests
[test@sms ~]$ ./configure --disable-all --enable-fftw
checking for a BSD-compatible install... /bin/install -c
checking whether build environment is sane... yes
...
---------------------------------------------- SUMMARY ---------------------------------------------
Many OpenHPC components exist in multiple flavors to support multiple compiler and MPI runtime
permutations, and the test suite takes this into account by iterating through these combinations by default.
If make check is executed from the top-level test directory, all configured compiler and MPI permutations
of a library will be exercised. The following highlights the execution of the FFTW related tests that were
enabled in the previous step.
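A sketch of the invocation:
# Exercise all configured permutations of the enabled tests
[test@sms tests]$ make check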
D Customization
D.1 Adding local Lmod modules to OpenHPC hierarchy
Locally installed applications can easily be integrated into OpenHPC systems by following the Lmod con-
vention laid out by the provided packages. Two sample module files are included in the examples-ohpc
package—one representing an application with no compiler or MPI runtime dependencies, and one depen-
dent on OpenMPI and the GNU toolchain. Simply copy these files to the prescribed locations, and the Lmod
application should pick them up automatically.
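A sketch of the copy step; the example file and destination paths below are assumptions patterned on the OpenHPC public module hierarchy:
# Install a local (compiler/MPI independent) modulefile into the OpenHPC hierarchy
[sms]# mkdir -p /opt/ohpc/pub/modulefiles/example1
[sms]# cp /opt/ohpc/pub/examples/example.modulefile /opt/ohpc/pub/modulefiles/example1/1.0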
# Rebuild binary RPM. Note that additional directives can be specified to modify build
[test@sms ~/rpmbuild/SPECS]$ rpmbuild -bb --define "mpi_family mpich" fftw.spec
E Package Manifest
This appendix provides a summary of available meta-package groupings and all of the individual RPM
packages that are available as part of this OpenHPC release. The meta-packages group related collections of
RPMs by functionality and provide a convenient mechanism for installation. A
list of the available meta-packages and a brief description is presented in Table 2.
What follows next in this Appendix is a series of tables that summarize the underlying RPM packages
available in this OpenHPC release. These packages are organized by groupings based on their general
functionality and each table provides information for the specific RPM name, version, brief summary, and
the web URL where additional information can be obtained for the component. Note that many of the 3rd
party community libraries that are pre-packaged with OpenHPC are built using multiple compiler and MPI
families. In these cases, the RPM package name includes delimiters identifying the development environment
for which each package build is targeted. Additional information on the OpenHPC package naming scheme
is presented in §4.6. The relevant package groupings and associated Table references are as follows:
F Package Signatures
All of the RPMs provided via the OpenHPC repository are signed with a GPG signature. By default, the
underlying package managers will verify these signatures during installation to ensure that packages have
not been altered. The RPMs can also be manually verified and the public signing key fingerprint for the
latest repository is shown below:
Fingerprint: DD5D 8CAA CB57 364F FCC2 D3AE C468 07FF 26CE 6884
The following command can be used to verify an RPM once it has been downloaded locally by confirming
if the package is signed, and if so, indicating which key was used to sign it. The example below highlights
usage for a local copy of the docs-ohpc package and illustrates how the key ID matches the fingerprint
shown above.
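A sketch of the verification:
# Check the signature of a locally downloaded copy of the docs-ohpc package
[sms]# rpm --checksig -v docs-ohpc-*.rpm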