Canonical Charmed OpenStack on Dell EMC Hardware
This guide discusses the Dell EMC hardware specifications and the tools and
services to set up both the hardware and software, including the foundation
cluster and the OpenStack cluster. It also covers other tools used for the
monitoring and management of the cluster in detail and how all these
components work together in the system. The guide also provides the
deployment steps and references to configuration developed by Dell EMC and
Canonical for the deployment process.
October 2019
Revisions
Date Description
October 2019 Initial release
Acknowledgements
This paper was produced by the following:
The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of any kind with respect to the information in this
publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.
Use, copying, and distribution of any software described in this publication requires an applicable software license.
Copyright © Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Other trademarks may be trademarks of their respective owners.
Table of contents
Revisions
Acknowledgements
Table of contents
Executive summary
1 Core Components
1.1 Core components
1.2 Dell EMC PowerEdge R740 overview
1.3 OpenStack Rocky
1.4 OpenStack and Canonical
1.5 MAAS (Metal as a Service) physical cloud
1.6 Juju modeling tool
1.7 Landscape Systems Management Tool
1.8 Software versions
2 Hardware specifications
2.1 Dell EMC PowerEdge R740 rack specifications
2.2 Server components firmware versions
2.3 Dell EMC PowerEdge R740 server specifications
2.4 Rack layout
2.5 Hardware Configuration Notes
3 Network architecture
3.1 S4148-ON 10 GbE Switch
3.2 S3048-ON 1 GbE Switch
3.3 Infrastructure layout
3.4 Network components
3.5 Server nodes
3.6 Leaf switches
3.7 VLANs
3.8 Out-of-Band management network
4 Cluster Infrastructure components
4.1 How MAAS works
4.2 High availability in MAAS
4.3 The node lifecycle
4.3.1 New
4.3.2 Commissioning
Executive summary
An OpenStack cluster is now a common need for many organizations. Dell EMC and Canonical have worked
together to build a jointly engineered and validated architecture that details software, hardware, and
integration points of all solution components. The architecture provides prescriptive guidance and
recommendations for:
• Hardware design
• Infrastructure nodes
• Cloud nodes
• Network hardware and design
• Software layout
• System configurations
1 Core Components
Dell EMC and Canonical designed this architecture guide to make it easy for Dell EMC and Canonical
customers to build their own operational readiness cluster and design their initial offerings. Dell EMC and
Canonical provide the support and services that the customers need to stand up production-ready OpenStack
clusters.
With the current release of the Ubuntu OS, multiple releases of OpenStack are available for setup.
The current reference architecture is based on OpenStack Rocky; however, it is possible and easy to upgrade to
the following supported releases, as well as to deploy an up-to-date release of Charmed OpenStack from scratch.
The code base for Charmed OpenStack Platform is evolving at a very rapid pace. Please see
https://www.ubuntu.com/info/release-end-of-life for more information.
Identity Keystone
Telemetry Ceilometer/AODH/Gnocchi
Orchestration Heat
Dashboard Horizon
Logging Graylog
The standards-based APIs are the same between all OpenStack deployments, and they enable customer and
vendor ecosystems to operate across multiple clouds. The site-specific infrastructure combines open and
proprietary software, Dell EMC hardware, and operational processes to deliver cloud resources as a service.
The implementation choices for each cloud infrastructure are highly specific to the requirements of each site.
Many of these choices can be standardized and automated using the tools in this reference architecture.
Conforming to best practices helps reduce operational risk by leveraging the accumulated experience of Dell
EMC and Canonical.
Canonical’s Metal as a Service (MAAS) is used as a bare metal and VM provisioning tool. The foundation
cluster is composed of MAAS and other services (running in highly available (HA) mode) that are used to
deploy, manage and update the OpenStack cluster nodes.
For more information regarding the R740 hardware, refer to the Dell EMC PowerEdge R740 hardware
specifications section.
Canonical can also deploy OpenStack in a more traditional manner, grouping servers per role:
• Controllers
• Computes
• Storage
MAAS treats physical servers like virtual machines, or instances in the cloud. Rather than having to manage
each server individually, MAAS turns bare metal into an elastic cloud-like resource.
MAAS provides management of a large number of physical machines by creating a single resource pool out
of them. Participating machines can then be provisioned automatically and used as normal. When those
machines are no longer required, they are "released" back into the pool. MAAS integrates all the tools
required in one smooth experience. It includes:
• Web UI
• Ubuntu, CentOS, Windows, RHEL, SUSE and VMware ESXi installation support
• Open source IP Address Management (IPAM)
• Full API/CLI support
• High availability
• Role-based Access Control (RBAC)
• IPv6 support
• Inventory of components
• DHCP and DNS for other devices on the network
• DHCP relay integration
• VLAN and fabric support
• NTP for the entire infrastructure
• Hardware testing
• Composable hardware support
MAAS works with any system configuration, and is recommended by the Juju team as a physical provisioning
system.
Automation              Automatic discovery and registration of every device on the network. BMC (IPMI, Redfish and more) and PXE (IPv4 and IPv6) automation.
Machine configuration   Configures the machine’s network interfaces with bridges, VLANs, bonds and more. Creates advanced file system layouts with RAID, bcache, LVM and more.
Pod management          Turns bare-metal servers into hypervisors, allowing automated creation of virtual machines, and presents them as new servers available for deployment.
Network management      Observes and catalogs every IP address on the network (IPAM). Built-in highly available DHCP (active-passive) and DNS (active-active).
The Juju store allows access to a wide range of best practice solutions which you can deploy with a single
command. You can use Juju from the command line or through its powerful graphical representation of the
model in the GUI.
Whether it involves deep learning, container orchestration, real-time big data or stream processing, big
software needs operations to be open source and automated.
Juju is the best way to encapsulate all the ops knowledge required to automate the behavior of your
application.
Landscape is the most cost-effective way to support and monitor large and growing networks of desktops,
servers and clouds, to reduce the IT team's effort on day-to-day management, and to take control of the
infrastructure.
The Landscape Juju charm deploys Landscape Dedicated Server (LDS) and must be connected to other
charms to be fully functional. It has a client/server model in which Landscape agents are deployed on the
hosts to be managed and monitored.
As part of Canonical’s Reference Architecture, this service is deployed by default, and the whole
infrastructure is managed and monitored through Landscape. The table below lists the features that make
Landscape part of the Charmed OpenStack infrastructure.
Landscape features
Feature                            Description
Monitor Your Machines at Scale     Set alerts for updates on specific machines.
                                   Graph trends of temperature, disk, memory usage and system load.
                                   List all processes running on a system and remotely kill rogue processes.
                                   Build graphs with custom metrics.
Maintain Security and Compliance   Patch compliance - keep systems secure and up to date.
                                   Role Based Access Control (RBAC).
                                   Automated audit logging and compliance reporting.
                                   Regulatory compliance is significantly simplified with custom reporting.
Control Inventory                  Quickly track full software package information for all registered machines.
                                   Gather asset information in real time.
                                   Create dynamic search groups to perform operations on categories of machines.
                                   Easily access any machine property.
Package Repository Management      Mirror and stage internal or external APT repositories.
                                   Upload and manage custom packages.
Software versions
Component Version
MAAS 2.5
Juju 2.6.2
2 Hardware specifications
The base validated reference architecture solution is based on the Dell EMC PowerEdge R740. The
reference architecture uses the following rack and server specifications.
Component   Description                                                                                   Quantity
Rack        Standard data center rack with enough capacity to hold 12 x 2RU nodes and 3 x 1RU switches   1
Firmware versions
Component Version
iDRAC 3.21.21.21
BIOS 1.6.11
Component                        Description                                                                          Quantity
Network Daughter Card            Intel X710 Quad Port 10Gb DA/SFP+ Ethernet Network Daughter Card, with SR Optics    1
Additional Network card          Intel X710 Quad Port 10Gb, SFP+, Converged Network Adapter, with SR Optics          1
Boot system                      BOSS controller card with 2 x M.2 480GB sticks, FH (configured as RAID 1)           1
Data drives                      4TB 7.2K RPM NLSAS 12Gbps 512n 3.5in Hot-plug Hard Drive                            6
NVMe and PCIe Storage Adapters   Dell 1.6TB, NVMe, Mixed Use Express Flash, HHHL AIC, PM1725a, DIB                   1
Infrastructure nodes:
Node Purpose
Cloud nodes:
Node Purpose
The R740 servers need to be configured for the Dell EMC Charmed OpenStack solution. The following
configurations need to be applied:
• BIOS
• iDRAC
• RAID
• Network
Verify that the physical and virtual disks are in a ready state and that the virtual disks are auto-configured to
RAID-0. The IPMI over LAN option must be enabled in each R740 server through the BIOS.
For detailed hardware configurations of the Dell EMC R740 solution for the Charmed OpenStack platform,
consult a Dell EMC sales and services representative.
Caution: Ensure that the firmware on the hardware is up to date or matches the versions in the table above.
3 Network architecture
A Dell EMC PowerEdge R740 rack solution is agnostic to the top-of-rack (ToR) switch a customer may
choose. For the management network role, the reference implementation in this document uses the Dell EMC
S3048-ON switch. Two Dell EMC Networking S4148-ON switches are used at the leaf layer of the standard
leaf-spine topology; they provide high availability on the data network. A pair of switches of similar or better
capacity may be added at the spine layer of the topology, if desired.
• Data network infrastructure: The server NICs and the leaf switch pair. The leaf switches are
connected to the data center user networks and carry the main service traffic in / out of the reference
architecture.
• Management network infrastructure: The BMC management network, which consists of iDRAC ports
and the OOB management ports of the switches, are aggregated into a 1-rack unit (RU) Dell EMC
PowerConnect S3048 switch. This 1-RU switch in turn can connect to the data center management
network.
• MAAS services: The MAAS rack controllers (see below) provide DHCP, IPMI, PXE, TFTP and other
local services on the provisioning and iDRAC network. Ensure that the MAAS DHCP server is
isolated from the data center DHCP server.
• Server nodes
• Leaf switches and networks
• VLANs
• Out-of-Band Management switch and network
There are several types (or modes) of channel bonding; however, only one is recommended and supported for
this solution: LACP (802.3ad).
The endpoints for all nodes are terminated to switch ports that have been configured for LACP bonding mode,
across two Dell EMC S4148-ON switches configured with VLT between them. For details regarding network
configuration on the servers, please contact your Dell EMC services and sales representative.
Multiple bonds may be created on the servers to separate critical types of traffic from each other and to
allocate them to different physical interfaces. The actual layout depends on the particular cluster configuration
and is out of scope for this Reference Architecture.
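As an illustration only, the sketch below shows what such an LACP (802.3ad) bond might look like in netplan syntax on an Ubuntu server; in this architecture MAAS renders the equivalent configuration automatically, and the interface names and VLAN ID are placeholders rather than values from this guide.
network:
  version: 2
  bonds:
    bond0:
      interfaces: [ens2f0, ens2f1]      # placeholder NIC names
      parameters:
        mode: 802.3ad                   # LACP, matching the switch-side LAG/VLT configuration
        lacp-rate: fast
        transmit-hash-policy: layer3+4
  vlans:
    bond0.100:                          # example VLAN carried over the bond
      id: 100
      link: bond0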
The sample physical connection diagram below represents the bonding setup of the server interfaces and the
LAG setup of the switches:
3.7 VLANs
This reference architecture implements nine separate networks through Layer 2 VLANs. Some of the networks
below can be combined into a single subnet based on end-user requirements.
Network                                       Description                                                                                               Type
OAM (operation, administration, management)   Used for cluster access, provisioning, monitoring and OOB management.
External                                      Used for providing outbound access for tenant networks.                                                  Data
Public                                        Used for public service endpoints, e.g., using the OpenStack CLI and OpenStack Dashboard (Horizon UI).   Data
Overlay                                       Used mostly for guest compute traffic between tenants, and between tenants and OpenStack services.       Data
Storage (replication)                         Used for replicating persistent storage data between units of Ceph.                                      Data
The picture below displays the network diagram, showing how the server nodes are connected using VLANs
over bonds.
4 Cluster Infrastructure components
The cluster infrastructure is built from the following components:
• MAAS
• Juju
• Monitoring
• Log aggregation
• Alerting
This section provides details about how each of these components works.
The rack controllers (rackd) provide DHCP, PXE, TFTP and other local services. They cache large items such
as operating system install images at the rack level for performance, but maintain no exclusive state other
than the credentials to talk to the region controller.
MAAS can scale from a small set of servers to many racks of hardware in a datacenter. High-bandwidth
activities (such as the initial operating system installation) are handled by the distributed gateways enabling
massively parallel deployments.
The picture below represents the logical design of MAAS and the high availability of its components.
MAAS diagram
4.3.1 New
New machines that PXE-boot on a MAAS network will be enlisted automatically if MAAS can detect their BMC
parameters. During the Enlistment phase MAAS will ensure that it can control the power status of the
machine through its BMC. Another option is to add machines through the API or UI by supplying BMC
credentials.
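As a sketch of the latter option, a machine can be added through the MAAS CLI by supplying its BMC credentials; the profile name ("admin"), hostname, MAC address and IPMI details below are illustrative placeholders.
$ maas admin machines create \
    hostname=node01 architecture=amd64/generic \
    mac_addresses=aa:bb:cc:dd:ee:ff \
    power_type=ipmi \
    power_parameters_power_address=10.10.10.21 \
    power_parameters_power_user=root \
    power_parameters_power_pass=calvin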
4.3.2 Commissioning
In the Commissioning phase, MAAS collects all data about the machine, which includes detailed hardware
inventory like CPU model, memory setup, disks, and chipsets. It also collects information about network
connectivity. This information can later be used in deployments. In this phase, you can apply custom
commissioning scripts that can update firmware, configure hardware RAID, etc.
4.3.3 Ready
A machine that is successfully commissioned is considered “Ready”. A “Ready” machine has BMC
credentials configured (on IPMI-based BMCs) for ongoing power control, ensuring that MAAS can start or stop
the machine and allocate or redeploy it with a fresh operating system.
4.3.4 Allocated
“Ready” machines can be allocated to users, who can configure network interfaces, bonding and addressing,
as well as disks, including LVM, RAID, bcache and partitioning.
4.3.5 Deploying
Users can request that MAAS turn the machine on and install a complete operating system from scratch
without any manual intervention, configuring network interfaces, disk partitions, and more.
4.3.6 Releasing
When users have finished with a machine, they can release it back to the shared pool of capacity. MAAS can
be asked to verify that a full disk wipe of the machine is performed when that happens.
• Prerequisites
• Infrastructure nodes requirements
• Prerequisites
Three infrastructure nodes, pre-installed with the latest Ubuntu 18.04 LTS, are required for full HA and must be
available to host MAAS, the Juju controllers and the other runtime and monitoring tools. The nodes must have
SSH access to the root user enabled through authorized_keys.
• MAAS and its dependencies, including PostgreSQL. Each infrastructure node has to be turned into a
KVM host managed by MAAS, and the necessary set of KVM-based virtual machines should be created
on top of them for further deployment of the supporting services:
• Juju controllers
• Monitoring and alerting systems
• Log aggregation and analysis systems
• Landscape nodes management
The infrastructure nodes must have access to:
• The PXE and BMC networks, in order to commission and provision machines.
• The various APIs which must be monitored. In order to monitor the OpenStack cluster, the nodes must
have access to the OpenStack internal network (mentioned above).
• Externally, the Ubuntu archives and other online services, in order to obtain images, packages, and
other reference data.
• MAAS credentials
• Enlist and commission servers
All region controllers should point to the virtual IP of the PostgreSQL database. More information on MAAS HA
configuration can be found in the MAAS documentation.
After the packages are installed, a set of credentials for the admin user must be created with the “maas init”
command.
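For example (a minimal sketch assuming a snap-based MAAS installation and an administrator profile named "admin"; the URL and key are placeholders):
$ sudo maas init                                            # prompts for the admin username, password and e-mail
$ sudo maas apikey --username admin                         # retrieve the admin API key
$ maas login admin http://<maas-ip>:5240/MAAS/ <api-key>    # log the CLI in with that key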
1. Set all the servers to PXE boot from the first 10GbE network interface.
2. Boot each machine once. You should see these machines appear in MAAS.
3. Select all of the machines and commission them by clicking on the Take action button.
When the machines have a Ready status, you can deploy the services.
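The enlistment and commissioning steps can also be scripted against the MAAS CLI; a minimal sketch, assuming the "admin" profile created earlier:
$ maas admin machines read | grep -E '"hostname"|"status_name"'   # list enlisted machines and their states
$ maas admin machine commission <system-id>                       # commission a single machine by its system ID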
The Juju controller has to be located in the same physical network segment as the OpenStack cluster, and it
must be able to execute calls to the MAAS API and connect to the OpenStack cluster nodes.
The Juju controller is created using the set of KVM virtual machines mentioned in the previous steps.
4.7.2 Charms
Charms are collections of scripts that contain all of the operations necessary to deploy, configure, scale, and
maintain cloud applications with Juju. Charms encapsulate a single application and all the code and know-how
it takes to operate it, such as how to combine and work with other related applications, or how to upgrade it.
Charms also allow a hierarchy, with subordinate charms complementing a main service.
Charm source code and installables are stored in Canonical’s Charm Store.
4.7.3 Bundles
Bundles are ready-to-run collections of applications that are modelled to work together and can include
particular configurations and relations between the software to be deployed.
Bundles may also be optimized for different deployment scenarios of the same software. For example, a
scale-out, production-ready version like the OpenStack Base or an extended version of OpenStack
Telemetry.
Charms and bundles enable Juju to:
• Install
• Configure
• Connect
• Upgrade and update
• Scale-out and scale-back
• Perform health checks
• Undertake operational actions
• Benchmark
Juju provides a UI representation of the deployed bundle and allows you to dynamically manipulate the cluster’s
configuration options and layout, both prior to the bundle deployment and during its lifetime.
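As an illustrative sketch, a reference bundle such as openstack-base can be deployed from the Charm Store with a single command, or a local copy of the bundle can be combined with a site-specific overlay (the file names here are placeholders):
$ juju deploy cs:openstack-base
$ juju deploy ./openstack-base.yaml --overlay ./site-overlay.yaml   # customized local bundle plus overlay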
4.7.4 Provision
Specify the number of machines you want and how you want them to be deployed, or let Juju do it
automatically.
4.7.5 Deploy
Deploy your services, or (re)deploy your entire application infrastructure to another cloud, with a few clicks of
your mouse.
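A minimal sketch of manual provisioning and placement, assuming a MAAS-backed model (the application and machine numbers are illustrative):
$ juju add-machine -n 3                               # allocate three machines from MAAS into the model
$ juju deploy keystone -n 3 --to lxd:0,lxd:1,lxd:2    # place one unit in an LXD container on each machine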
• Multiple models
• All VMs in all your running models
• Scale out, configure and placement
• User accounts and identification
• Sharing and access
One of the main advantages of Juju is its dynamic configuration ability.
The portal aggregates the relevant information from an operational perspective, and differentiates various
components, such as compute, network, and storage.
The Canonical observability tool allows operators to zoom in to the details of any of the higher-level graphs to
obtain further information. The portal also includes an efficient time series database that allows tracking of the
evolution of the cloud metrics and health status over time.
These services are integrated with the Charmed OpenStack solution as part of the charms, fulfilling the same
requirements around upgradeability and operation.
Graylog stores processed logs in a highly available Elasticsearch cluster, while the cloud hosts run local
instances of Filebeat for log collection.
This charm deploys Landscape Dedicated Server (LDS), and must be connected to other charms to be fully
functional.
Below is a high-level representation of the logical components in the OpenStack cluster delivered by Canonical.
5.1.1 ceph-monitor
Ceph monitors are the endpoints of the storage cluster and store the map of the data placement across Ceph
OSDs.
The Ceph Monitor charm has two pieces that cannot be changed post bootstrap:
• fsid
• monitor-secret
Caution: Attempting to change these will cause a reconfiguration error, and new service units will not join the
existing Ceph cluster.
By default, the Ceph cluster does not bootstrap until three service units have been deployed and started. This
ensures that a quorum is achieved prior to adding storage devices.
After the initialization of the monitor cluster, a quorum forms quickly and OSD bring-up proceeds.
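A hypothetical bundle fragment for the monitor cluster is sketched below; the option names exist in the ceph-mon charm, while the values are placeholders (fsid and monitor-secret are generated automatically when not supplied):
ceph-mon:
  charm: cs:ceph-mon
  num_units: 3                  # three units so that a quorum can form before OSDs are added
  options:
    monitor-count: 3            # wait for this many monitor units before bootstrapping
    expected-osd-count: 3       # used to calculate sensible placement-group defaults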
5.1.2 ceph-osd
Ceph OSDs manage the underlying storage devices that contain user data and represent the capacity of the
cluster.
This charm provides the Ceph OSD personality for expanding storage capacity within a Ceph deployment.
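A matching ceph-osd fragment might look as follows; the device paths are placeholders for the data drives listed in the hardware tables:
ceph-osd:
  charm: cs:ceph-osd
  num_units: 3
  options:
    osd-devices: /dev/sdb /dev/sdc    # space-separated list of block devices to initialize as OSDs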
5.1.3 ceph-radosgateway
This charm provides an API endpoint for Swift or S3 clients, supporting Keystone-based RBAC and storing
objects in the Ceph cluster underneath.
5.2.1 cinder
This charm provides the Cinder volume service for OpenStack. It is intended to be used alongside the other
OpenStack components, starting with the Folsom release. Cinder is made up of four separate services (a
relation sketch follows the list below):
• An API service
• A scheduler
• A volume service
• A backup service
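A hedged example of wiring Cinder to the Ceph cluster through the cinder-ceph subordinate charm; the relations shown follow the standard charm interfaces, and the exact set depends on the bundle in use:
$ juju deploy cinder
$ juju deploy cinder-ceph                    # subordinate providing the Ceph RBD volume backend
$ juju add-relation cinder-ceph cinder
$ juju add-relation cinder-ceph ceph-mon
$ juju add-relation cinder keystone
$ juju add-relation cinder rabbitmq-server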
5.2.2 glance
This charm provides the Glance image service for OpenStack. It is intended to be used alongside the other
OpenStack components, starting with the Essex release in Ubuntu 12.04.
Glance may be deployed in a number of ways. This charm focuses on three (3) main configurations. All
require the existence of the other core OpenStack services deployed via Juju charms, specifically:
• mysql
• keystone
• nova-cloud-controller
5.2.3 nova-cloud-controller
Nova Cloud Controller is the controller node for OpenStack Nova. It contains:
• nova-scheduler
• nova-api
• nova-consoleauth
• nova-conductor
The console access service (if required) depends on the preferred choice of virtual machine console type; at
the moment, three different console types can be configured via the charm:
• spice
• xvpvnc
• novnc
Note: The console access protocol is configured into a guest when it is created; if it is changed, console access
for existing guests will stop working.
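For example, the console type is selected through the charm's console-access-protocol option; as noted above, change it only before guests are created:
$ juju config nova-cloud-controller console-access-protocol=novnc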
5.2.4 nova-compute-kvm
This charm provides Nova Compute, which is responsible for configuring the backend hypervisor and for
running and governing virtual machines. The target platform is Ubuntu (preferably LTS) + OpenStack. The
charm is typically related to the following charms:
• nova-cloud-controller
• glance
• ceph
• cinder
• mysql
• ceilometer-agent
• rabbitmq-server
• neutron
5.2.5 heat
Heat is the main project in the OpenStack Orchestration program. It implements an orchestration engine to
launch multiple composite cloud applications based on templates, in the form of YAML files that can be treated
like code. Heat requires the existence of the other core OpenStack services deployed via Juju charms,
specifically:
• mysql
• rabbitmq-server
• keystone
• nova-cloud-controller
5.2.6 openstack-dashboard
The OpenStack Dashboard provides a Django-based web interface for use by both administrators and users
of an OpenStack Cloud. It allows you to manage Nova, Glance, Cinder, Neutron, Heat and Designate
resources within the cloud.
5.2.7 keystone
Keystone is an OpenStack project that provides Identity, Token, Catalog and Policy services for use
specifically by projects in the OpenStack family. It implements OpenStack's Identity API.
By default, Keystone uses an in-cloud database for storing users, roles and project assignments, but LDAP-based
authentication as well as SSO-based authentication can be enabled with the help of subordinate charms.
5.2.8 ceilometer
Ceilometer is made up of two services:
• An agent service
• A collector service
Ceilometer’s responsibility is to collect metrics from the virtual machines, as well as events, and to store them
in the backend database.
Gnocchi is an open-source, multi-tenant time-series, metrics and resources database that is used as the
backend for Ceilometer, as well as the target for API queries, allowing clients to request processed metrics of
the virtual machines.
5.2.9 aodh
Aodh provides the Alarming service as part of OpenStack telemetry. It allows operators to configure and store
alarm definitions based on the metrics collected by Ceilometer. Once an alarm is created, the service makes
repeated calls to the Gnocchi service, analysing current or cumulative metrics. When the alarm is triggered, it
executes the pre-configured action.
5.2.10 designate
Designate provides DNSaaS services for OpenStack.
5.2.11 neutron-api
This principal charm provides the OpenStack Neutron API service.
Just like OpenStack Nova provides an API to dynamically request and configure virtual servers, Neutron
provides an API to dynamically request and configure virtual networks. These networks connect "interfaces"
from other OpenStack services (e.g., virtual NICs from Nova VMs).
5.2.12 neutron-gateway
Neutron provides flexible software-defined networking (SDN) for OpenStack. The neutron-gateway charm
provides central networking services, including:
• L3 network routing
• DHCP services
The charm is intended to be deployed on a separate node. It provides “virtual router” functionality, allowing
the virtual machines to reach external resources via SNAT, and making them reachable from external
networks via floating IPs (DNAT).
In addition, DHCP agents are responsible for delivering the network configuration to the virtual machines via
the BOOTP protocol.
5.2.13 neutron-openvswitch
This subordinate charm provides the Neutron Open vSwitch configuration for a compute node. Once
deployed it takes over the management of the Neutron base and plugin configuration on the compute node.
This charm supports DPDK fast packet processing as well.
5.3.1 percona-cluster
Percona XtraDB Cluster is a high availability and high scalability solution for MySQL clustering. Percona
XtraDB Cluster integrates Percona Server with the Galera library of MySQL high availability solutions in a
single product package, which enables you to create a cost-effective MySQL cluster. This charm deploys
Percona XtraDB Cluster onto Ubuntu.
Note: Percona XtraDB Cluster is not a 'scale-out' MySQL solution. Reads and writes are channeled through a
single service unit and synchronously replicated to other nodes in the cluster. Reads/writes are as slow as the
slowest node you have in your deployment.
5.3.2 rabbitmq-server
RabbitMQ is an implementation of AMQP, the emerging standard for high performance enterprise messaging.
The RabbitMQ server is a robust and scalable implementation of an AMQP broker. This charm deploys
RabbitMQ server and provides AMQP connectivity to clients.
When more than one unit of the charm is deployed the charm will bring up a native RabbitMQ cluster. The
process of clustering the units together takes some time.
Note: Due to the nature of asynchronous hook execution, it is possible that client relationship hooks may be
executed before the cluster is complete. In some cases, this can lead to client charm errors. Single unit
deployments behave as expected.
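A minimal sketch of growing a single unit into a native three-node cluster and letting it settle before adding client relations:
$ juju add-unit rabbitmq-server -n 2        # add two more units to form a three-node cluster
$ juju status rabbitmq-server               # wait until all units report active/idle before relating clients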
5.3.3 hacluster
This subordinate charm provides Corosync and Pacemaker cluster configuration for principal charms which
support the hacluster container-scoped relation.
The charm supports two HA modes:
• Virtual IP address(es)
• DNS
In this reference architecture, the HA option used for deployment and testing is the VIP. In both cases, a
relationship to hacluster is required; it provides the Corosync back-end HA functionality. To use virtual IP
address(es), the clustered nodes must be on the same subnet, such that:
• The VIP is a valid IP address on the subnet for one of the node's interfaces
• Each node has an interface in said subnet
At a minimum, the configuration option vip must be set in order to use virtual IP HA. If multiple networks are
being used, a VIP should be provided for each network, separated by spaces (see the sketch below).
Optionally, vip_iface or vip_cidr may be specified.
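A hedged example for Keystone, where the VIP addresses and the hacluster application name are placeholders:
$ juju config keystone vip="10.20.10.11 10.30.10.11"    # one VIP per network, space separated
$ juju deploy hacluster keystone-hacluster
$ juju add-relation keystone keystone-hacluster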
To use DNS high availability there are several prerequisites. However, DNS HA does not require the
clustered nodes to be on the same subnet.
• The clustered nodes must have static or "reserved" IP addresses registered in MAAS.
• The DNS hostname(s) must be pre-registered in MAAS before use with DNS HA.
At a minimum, the configuration option dns-ha must be set to true, and at least one of the following hostnames
must be set, in order to use DNS HA (see the sketch after the list):
• os-public-hostname
• os-internal-hostname
• os-admin-hostname
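A hedged example of the DNS HA variant for Keystone, with placeholder hostnames that must already be registered in MAAS:
$ juju config keystone dns-ha=true \
    os-public-hostname=keystone.example.com \
    os-internal-hostname=keystone.internal.example.com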
The charm will throw an exception in the following cases:
• If neither vip nor dns-ha is set and the charm is related to hacluster
• If both vip and dns-ha are set, as they are mutually exclusive
• If dns-ha is set and none of the os-{admin,internal,public}-hostname(s) are set
All the OpenStack services will be deployed in HA, and each service will have three units, each one running
in an LXC container on a separate hypervisor. Charmed OpenStack provides high availability for all
OpenStack services, as well as an HA Juju. The following diagram explains the different types of HA used in
Charmed OpenStack:
5.3.4 ntp
The ntp charm is a subordinate charm which is designed for use with other principal charms. In its basic
mode, the ntp charm is used to configure NTP in service units to talk directly to a set of NTP time sources.
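A minimal sketch, assuming site NTP servers (the hostnames are placeholders):
$ juju deploy ntp
$ juju config ntp source="ntp1.example.com ntp2.example.com"   # space-separated upstream time sources
$ juju add-relation ntp nova-compute                           # attach the subordinate to a principal charm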
Network spaces are accordingly mapped to different VLANs managed by MAAS, making networking
management transparent and flexible.
To use this feature, use the --bind option when deploying the charm:
$ juju deploy neutron-api --bind "public=public-space internal=internal-space admin=admin-space shared-db=internal-space"
Alternatively, these can also be provided as part of a Juju native bundle configuration:
neutron-api:
  charm: cs:neutron-api
  num_units: 1
  bindings:
    public: public-space
    admin: admin-space
    internal: internal-space
    shared-db: internal-space
Note: Spaces must be configured in the underlying provider prior to attempting to use them.
Note: Existing deployments using os-*-network configuration options will continue to function; these options
are preferred over any network space binding provided if set.
Canonical recommends running a set of automated tests that leverage existing open-source components to
provide the best user experience:
• OpenStack Tempest
• OpenStack Rally
• Rados bench
• FIO
Canonical’s recommendation is to run the test suite based on the RefStack guideline that covers the
functionality of Charmed OpenStack.
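A sketch of such a validation run with the upstream Tempest tooling; the workspace name and test-list file are placeholders, and tempest.conf must first be populated for the target cloud:
$ tempest init cloud-validation && cd cloud-validation       # create a Tempest workspace
$ tempest run --smoke                                        # quick API smoke tests against the cloud
$ tempest run --load-list ./refstack-guideline-tests.txt     # run the RefStack guideline test list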
In addition, it is valuable to separately test the block level of the Ceph cluster with the open-source tool FIO
(flexible I/O tester). Leveraging the RBD (RADOS Block Device) driver, FIO tests the input/output performance
of an image within Ceph in the same fashion as the performance of a generic block device.
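A sketch of a block-level benchmark through FIO's rbd engine (requires an fio build with RBD support); the pool, image and client names are placeholders:
$ rbd create fio-test --size 10240 --pool rbd                # 10 GiB scratch image for the benchmark
$ fio --name=rbd-randwrite --ioengine=rbd --clientname=admin \
      --pool=rbd --rbdname=fio-test --rw=randwrite --bs=4k \
      --iodepth=32 --numjobs=1 --direct=1 --runtime=60 --time_based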
6.1.1 Graylog
Graylog is the solution for aggregating and managing the logs from various components of the cluster, as well
as for providing visualization of the logs.
6.1.2 Elasticsearch
Elasticsearch is a distributed database used for storing indexed logs; it acts as the backend for Graylog.
6.1.3 Filebeat
As a log forwarder, Filebeat tails various log files on the client side and quickly sends this information to
Graylog for further parsing and enrichment, or to Elasticsearch for centralized storage and analysis.
6.2.1 Prometheus
Prometheus is a systems and services monitoring system. It collects metrics from configured targets at given
intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed
to be true.
6.2.2 Grafana
Grafana is the leading graph and dashboard builder for visualizing time series metrics.
6.2.3 Telegraf
Telegraf is a client-side agent that collects information about the status of the host services and makes it
available for collection by monitoring solutions (in this architecture, Prometheus).
6.2.4 Alarming
The Charmed OpenStack telemetry suite also includes a Nagios cluster that is responsible for sending alarms
in case of cloud host or individual service failures.
Each host and container of the OpenStack cluster runs NRPE (Nagios Remote Plugin Executor), collecting
the status of running services and syncing it back to the Nagios server.
Below is an example of the alerting dashboard, showing the current status of the running cluster.
It is also possible, using additional Juju charms, to configure integration of the logging and alerting components
with existing corporate tools, such as Splunk, remote syslog servers, etc.
7 Appendix A References
Please see the following resources for more information.
http://www.dell.com/en-us/work/learn/rack-scale-infrastructure
https://maas.io/
https://wiki.ubuntu.com/
https://www.ubuntu.com/
http://www.ubuntu.com/download/server
OpenStack UA Support
https://docs.openstack.org/charm-guide/latest/
Storage technical documents and videos provide expertise that helps to ensure customer success on Dell
EMC storage platforms.