
Reference Architecture

Canonical Charmed OpenStack on Dell EMC Hardware
Abstract
This document provides a complete reference architecture guide for the Charmed
OpenStack (Rocky) solution on Dell EMC hardware delivered by Canonical,
including Dell EMC PowerEdge servers for workloads and storage, and Dell EMC
Networking switches.

This guide discusses the Dell EMC hardware specifications and the tools and
services used to set up both the hardware and software, including the foundation
cluster and the OpenStack cluster. It also covers, in detail, the tools used for
monitoring and management of the cluster, and how all of these components work
together in the system. The guide also provides the deployment steps, with
references to configuration developed by Dell EMC and Canonical for the
deployment process.

October 2019

Revisions

Date            Description
October 2019    Initial release

Acknowledgements
This paper was produced by the following:

Authors: Arkady Kanevsky and Andrey Grebennikov

The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of any kind with respect to the information in this
publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any software described in this publication requires an applicable software license.

Copyright © Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Other trademarks may be trademarks of their respective owners.

2 Canonical Charmed OpenStack on Dell EMC Hardware.


Table of contents

Revisions
Acknowledgements
Table of contents
Executive summary
1 Core Components
1.1 Core components
1.2 Dell EMC PowerEdge R740 overview
1.3 OpenStack Rocky
1.4 OpenStack and Canonical
1.5 MAAS (Metal as a Service) physical cloud
1.6 Juju modeling tool
1.7 Landscape Systems Management Tool
1.8 Software versions
2 Hardware specifications
2.1 Dell EMC PowerEdge R740 rack specifications
2.2 Server components firmware versions
2.3 Dell EMC PowerEdge R740 server specifications
2.4 Rack layout
2.5 Hardware Configuration Notes
3 Network architecture
3.1 S4148-ON 10 GbE Switch
3.2 S3048-ON 1 GbE Switch
3.3 Infrastructure layout
3.4 Network components
3.5 Server nodes
3.6 Leaf switches
3.7 VLANs
3.8 Out-of-Band management network
4 Cluster Infrastructure components
4.1 How MAAS works
4.2 High availability in MAAS
4.3 The node lifecycle
4.3.1 New
4.3.2 Commissioning
4.3.3 Ready
4.3.4 Allocated
4.3.5 Deploying
4.3.6 Releasing
4.4 Install MAAS
4.4.1 Configure Your Hardware
4.4.2 Install Ubuntu Server
4.4.3 MAAS Installation
4.5 Infrastructure nodes requirements
4.6 MAAS initial configurations
4.6.1 MAAS Credentials
4.6.2 Enlist and commission servers
4.6.3 Set up MAAS KVM pods
4.7 Juju components
4.7.1 Juju controller - the heart of Juju
4.7.2 Charms
4.7.3 Bundles
4.7.4 Provision
4.7.5 Deploy
4.7.6 Monitor and manage
4.7.7 Comparing Juju to any configuration management tool
4.8 Telemetry components
4.8.1 Monitoring Tools
4.8.2 Log Aggregation
4.8.3 Landscape management
5 Charmed OpenStack components
5.1 Storage charms
5.1.1 ceph-monitor
5.1.2 ceph-osd
5.1.3 ceph-radosgateway
5.2 OpenStack charms
5.2.1 cinder
5.2.2 glance
5.2.3 nova-cloud-controller
5.2.4 nova-compute-kvm
5.2.5 heat
5.2.6 openstack-dashboard
5.2.7 keystone
5.2.8 gnocchi and ceilometer
5.2.9 aodh
5.2.10 designate
5.2.11 neutron-api
5.2.12 neutron-gateway
5.2.13 neutron-openvswitch
5.3 Resource charms
5.3.1 percona-cluster
5.3.2 rabbitmq-server
5.3.3 hacluster
5.3.4 ntp
5.4 Network space support
5.5 OpenStack validation
5.5.1 OpenStack Tempest
5.5.2 OpenStack Rally
5.5.3 Rados Bench and FIO
6 Monitoring and logging tools
6.1 Logging the cluster
6.1.1 Graylog
6.1.2 Elasticsearch
6.1.3 Filebeat
6.2 Monitoring the cluster
6.2.1 Prometheus
6.2.2 Grafana
6.2.3 Telegraf
6.2.4 Alarming
6.2.5 External integration
7 Appendix A References
7.1 Dell EMC documentation
7.2 Canonical documentation
7.3 OpenStack Documentation
7.4 To Learn More
A Technical support and resources
A.1 Related resources




Executive summary
An OpenStack cluster is now a common need for many organizations. Dell EMC and Canonical have worked
together to build a jointly engineered and validated architecture that details the software, hardware, and
integration points of all solution components. The architecture provides prescriptive guidance and
recommendations for:

• Hardware design
• Infrastructure nodes
• Cloud nodes
• Network hardware and design
• Software layout
• System configurations




1 Core Components
Dell EMC and Canonical designed this architecture guide to make it easy for Dell EMC and Canonical
customers to build their own operational readiness cluster and design their initial offerings. Dell EMC and
Canonical provide the support and services that the customers need to stand up production-ready OpenStack
clusters.

With the current release of Ubuntu OS, multiple releases of OpenStack are available for setup:

• OpenStack Queens (Long Term Support)
• OpenStack Rocky (18 months support)
• OpenStack Stein (Standard plus Extended support)
• Upcoming releases of OpenStack (T and U releases)

The current reference architecture is based on OpenStack Rocky; however, it is possible and easy to upgrade to
the following supported releases, as well as to deploy an up-to-date release of Charmed OpenStack from scratch.

The code base for Charmed OpenStack Platform is evolving at a very rapid pace. Please see
https://www.ubuntu.com/info/release-end-of-life for more information.

1.1 Core components


Component                      Codename

Block Storage                  Cinder with Ceph
Image Service                  Glance with Ceph
Compute                        Nova with KVM
Identity                       Keystone
Networking                     Neutron with OpenVSwitch
Telemetry                      Ceilometer/AODH/Gnocchi
Orchestration                  Heat
DNS as a Service               Designate
Load Balancing as a Service    Octavia with Barbican
Dashboard                      Horizon
Logging                        Graylog
Monitoring                     Prometheus with Telegraf
Alerting                       Nagios with NRPE
Package Management             Canonical Landscape


The standards-based APIs are the same across all OpenStack deployments, and they enable customer and
vendor ecosystems to operate across multiple clouds. The site-specific infrastructure combines open and
proprietary software, Dell EMC hardware, and operational processes to deliver cloud resources as a service.

The implementation choices for each cloud infrastructure are highly specific to the requirements of each site.
Many of these choices can be standardized and automated using the tools in this reference architecture.
Conforming to best practices helps reduce operational risk by leveraging the accumulated experience of Dell
EMC and Canonical.

Canonical’s Metal as a Service (MAAS) is used as a bare metal and VM provisioning tool. The foundation
cluster is composed of MAAS and other services (running in highly available (HA) mode) that are used to
deploy, manage and update the OpenStack cluster nodes.

1.2 Dell EMC PowerEdge R740 overview


The Dell EMC PowerEdge R740-based solution is composed of pools of compute, storage, and networking
resources that are managed through a single point of rack management. All nodes in the rack are R740 2U
servers handling compute, control, and storage functions, as assigned by the Metal as a Service (MAAS)
management nodes.

For more information regarding the R740 hardware, refer to the Dell EMC PowerEdge R740 server
specifications section.

1.3 OpenStack Rocky


This architecture guide is based on OpenStack Rocky, the 18th release of the most widely deployed open
source software for building clouds. The Charmed OpenStack solution is always released within a short time
frame of the upstream OpenStack release, making sure that the latest stable code is well tested, packaged,
and available for setup.

1.4 OpenStack and Canonical


This reference architecture is based on the Canonical distribution of Ubuntu OpenStack. Canonical
commercially distributes and supports OpenStack, with Ubuntu as the reference operating system for
OpenStack deployments. Since 2011, OpenStack packages have been included in every Ubuntu release. The
release schedules of the two projects are synchronized, ensuring that OpenStack updates and releases are
immediately available on widely deployed releases of Ubuntu.

The current mapping of supported Ubuntu releases to OpenStack releases is published in the Ubuntu release
cycle documentation (https://www.ubuntu.com/about/release-cycle).


Canonical’s Reference Architecture is delivered on a hyper-converged infrastructure approach, where any of
the servers can accommodate more than one specific OpenStack role or service simultaneously. This
hyper-converged approach has many benefits, including simplicity of operation and reduced management
overhead.

Canonical can also deploy OpenStack in a more traditional manner, grouping servers per role:

• Controllers
• Computes
• Storage

1.5 MAAS (Metal as a Service) physical cloud


Metal as a Service (MAAS) provides complete automation of physical servers for data center operational
efficiency on premises. It is open source and supported by Canonical.

MAAS treats physical servers like virtual machines, or instances in the cloud. Rather than having to manage
each server individually, MAAS turns bare metal into an elastic, cloud-like resource.


MAAS provides management of a large number of physical machines by creating a single resource pool out
of them. Participating machines can then be provisioned automatically and used as normal. When those
machines are no longer required, they are "released" back into the pool. MAAS integrates all the tools
required in one smooth experience. It includes:

• Web UI
• Ubuntu, CentOS, Windows, RHEL, SUSE and VMware ESXi installation support
• Open source IP Address Management (IPAM)
• Full API/CLI support
• High availability
• Role-based Access Control (RBAC)
• IPv6 support
• Inventory of components
• DHCP and DNS for other devices on the network
• DHCP relay integration
• VLAN and fabric support
• NTP for the entire infrastructure
• Hardware testing
• Composable hardware support

MAAS works with any system configuration, and is recommended by the Juju team as a physical provisioning
system.

Key MAAS features

Feature                  Description

Automation               Automatic discovery and registration of every device on the network. BMC (IPMI,
                         Redfish and more) and PXE (IPv4 and IPv6) automation.

Fast deployment          Zero-touch deployment of Ubuntu, CentOS, Windows, RHEL, SUSE and ESXi. Deploys
                         Linux distributions in less than 5 minutes.

Machine configuration    Configures the machine’s network interfaces with bridges, VLANs, bonds and more.
                         Creates advanced file system layouts with RAID, bcache, LVM and more.

DevOps integration       Integration with DevOps automation tools like conjure-up, Juju, Chef, Puppet, Salt,
                         Ansible and more.

Pod management           Turns bare-metal servers into hypervisors, allowing automated creation of virtual
                         machines, and presents them as new servers available for deployment.

Network management       Observes and catalogs every IP address on the network (IPAM). Built-in highly
                         available DHCP (active-passive) and DNS (active-active).

Manage                   Comes with a REST API, web UI and CLI.


1.6 Juju modeling tool


Juju is an open source application modeling tool that allows you to deploy, configure, scale, and operate
cloud infrastructures quickly and efficiently on public clouds such as AWS, GCE, and Azure, as well as on
private clouds such as Metal as a Service (MAAS), OpenStack, and VMware vSphere.

The Juju store allows access to a wide range of best practice solutions which you can deploy with a single
command. You can use Juju from the command line or through its powerful graphical representation of the
model in the GUI.
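Solutions in the Juju store are described as bundles: declarative YAML models naming the charms, unit counts, and relations to deploy. A minimal illustrative fragment is sketched below; the charm names and the `shared-db` relation follow the upstream OpenStack charms, while the series and unit counts here are placeholder assumptions, not values from this reference architecture:

```yaml
# Minimal illustrative Juju bundle (not the full reference bundle)
series: bionic
applications:
  keystone:
    charm: cs:keystone
    num_units: 1
  percona-cluster:
    charm: cs:percona-cluster
    num_units: 1
relations:
  # Keystone stores its service catalog in the Percona database
  - ["keystone:shared-db", "percona-cluster:shared-db"]
```

A bundle like this is deployed with a single command, for example `juju deploy ./bundle.yaml`.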

Why use Juju?

Whether it involves deep learning, container orchestration, real-time big data or stream processing, big
software needs operations to be open source and automated.

Juju is the best way to encapsulate all the ops knowledge required to automate the behavior of your
application.

1.7 Landscape Systems Management Tool


The Landscape systems management tool helps you monitor, manage and update your entire Ubuntu
infrastructure from a single interface. Part of Canonical's Ubuntu Advantage support service, Landscape
brings you intuitive systems management tools combined with world-class support.

Landscape is the most cost-effective way to support and monitor large and growing networks of desktops,
servers, and clouds, reducing the IT team’s day-to-day management effort and helping to take control of
the infrastructure.

The Landscape Juju charm deploys Landscape Dedicated Server (LDS) and must be connected to other
charms to be fully functional. It follows a client/server model in which Landscape agents are deployed on
the hosts to be managed and monitored.


As part of Canonical’s Reference Architecture, this service is deployed by default, and the whole
infrastructure is managed and monitored through Landscape. The table below lists the features that make
Landscape part of the Charmed OpenStack infrastructure.

Landscape features

Feature                            Description

System Management                  Manage desktop, server and cloud deployments
                                   Up to 40,000 machines with a single instance
                                   Create custom profiles for managing different machine classes
                                   Easily install, update, rollback and remove software
                                   Define policies for automated updates and security patches
                                   Configure users and groups

Monitor Your Machines at Scale     Set alerts for updates on specific machines
                                   Graph trends of temperature, disk, memory usage and system load
                                   List all processes running on a system and remotely kill rogue processes
                                   Build graphs with custom metrics

Maintain Security and Compliance   Patch compliance - keep systems secure and up to date
                                   Role Based Access Control (RBAC)
                                   Automated audit logging and compliance reporting
                                   Regulatory compliance is significantly simplified with custom reporting

Control Inventory                  Quickly track full software package information for all registered machines
                                   Gather asset information in real time
                                   Create dynamic search groups to perform operations on categories of machines
                                   Easily access any machine property

Package Repository Management      Mirror and stage internal or external APT repositories
                                   Upload and manage custom packages


1.8 Software versions


The following versions of software are part of this reference architecture:

Software versions

Component    Version

Ubuntu       18.04.2 LTS (kernel 4.15)
OpenStack    Rocky (19.04 charms)
MAAS         2.5
Juju         2.6.2


2 Hardware specifications
The base validated reference architecture solution is built on the Dell EMC PowerEdge R740 and uses the
following rack and server specifications.

2.1 Dell EMC PowerEdge R740 rack specifications


Dell EMC PowerEdge R740 rack specifications

Component type              Component description                                          Quantity

Rack                        Standard data center rack with enough capacity to hold         1
                            12 x 2RU nodes and 3 x 1RU switches
Chassis                     Dell PowerEdge R740 (3 infrastructure nodes, 9 cloud nodes)    12
Data switches               Dell EMC Networking S4148-ON (10Gbps ToR, 48 ports)            2
iDRAC/Provisioning switch   Dell Networking S3048-ON                                       1

2.2 Server components firmware versions


NOTE: The versions listed below are the versions that were available at the time this Reference Architecture
was developed. Ensure that the firmware on all servers, storage devices, and switches is up to date.

Firmware versions

Component                Version

iDRAC                    3.21.21.21
External NIC firmware    18.8.9
BIOS                     1.6.11
PERC RAID controller     25.5.5.0005

2.3 Dell EMC PowerEdge R740 server specifications


Dell EMC PowerEdge R740 server specifications
Component type Component Description Quantity

Processor  Intel® Xeon® Gold 6154 3.0G, 18C/36T, 10.4GT/s, 25M Cache, Turbo, HT (200W), DDR4-2666  2

Memory  32GB RDIMM, 2666MT/s, Dual Rank  24

Drive controller  PERC H330+ RAID Controller, Adapter, Full Height  1

Network daughter card  Intel X710 Quad Port 10Gb DA/SFP+ Ethernet, Network Daughter Card, with SR Optics  1

Additional network card  Intel X710 Quad Port 10Gb, SFP+, Converged Network Adapter, with SR Optics  1

Boot system  BOSS controller card with 2 x 480GB M.2 sticks (configured as RAID 1), Full Height  1

Data drives  4TB 7.2K RPM NLSAS 12Gbps 512n 3.5in Hot-plug Hard Drive  6

NVMe and PCIe storage adapters  Dell 1.6TB, NVMe, Mixed Use Express Flash, HHHL AIC, PM1725a, DIB  1

OOB license  iDRAC9, Enterprise  1

2.4 Rack layout


The reference deployment of Charmed OpenStack on the Dell EMC PowerEdge R740 server uses three nodes
as infrastructure nodes and nine as cloud nodes, assigned the following purposes:

Infrastructure nodes:
Node Purpose

Rack1-MAAS1  Infra #1 (MAAS, Juju, LMA)

Rack1-MAAS2  Infra #2 (MAAS, Juju, LMA)

Rack1-MAAS3  Infra #3 (MAAS, Juju, LMA)

Cloud nodes:
Node Purpose

Rack1-cloud1 Converged node handling OpenStack components + Storage functions

Rack1-cloud2 Converged node handling OpenStack components + Storage functions

Rack1-cloud3 Converged node handling OpenStack components + Storage functions

Rack1-cloud4 Converged node handling OpenStack components + Storage functions

Rack1-cloud5 Converged node handling OpenStack components + Storage functions

Rack1-cloud6 Converged node handling OpenStack components + Storage functions

Rack1-cloud7 Converged node handling OpenStack components + Storage functions

Rack1-cloud8 Converged node handling OpenStack components + Storage functions

Rack1-cloud9 Converged node handling OpenStack components + Storage functions
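The storage capacity implied by this layout can be estimated with simple arithmetic. The sketch below assumes all six 4TB data drives per cloud node back Ceph OSDs with 3-way replication; actual pool settings, metadata overhead and headroom vary per deployment.

```shell
# Rough usable-capacity estimate for the nine converged nodes,
# assuming six 4TB data drives per node and (assumed) 3-way replication.
nodes=9
drives_per_node=6
drive_tb=4
replicas=3
raw_tb=$((nodes * drives_per_node * drive_tb))
usable_tb=$((raw_tb / replicas))
echo "Raw capacity: ${raw_tb} TB"
echo "Approximate usable capacity: ${usable_tb} TB"
```

This yields 216 TB raw and roughly 72 TB usable before Ceph overhead is taken into account.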


2.5 Hardware Configuration Notes


The Dell EMC PowerEdge R740 configurations are used with 10GbE networking. To ensure that the network
is HA ready, two network cards, each offering 4 x 10GbE ports, are required for each node.

The R740 servers need to be configured for the Dell EMC Charmed OpenStack solution. The following
subsystems must be configured:

• BIOS
• iDRAC
• RAID
• Network

Verify that the physical and virtual disks are in a ready state, and that the virtual disks are auto-configured to
RAID-0. The IPMI over LAN option must be enabled on each R740 server through the BIOS.

For detailed hardware configurations of the Dell EMC R740 solution for the Charmed OpenStack platform,
consult a Dell EMC sales and services representative.


Caution: Ensure that the firmware on all hardware is up to date or matches the versions listed in the table
above.


3 Network architecture
A Dell EMC PowerEdge R740 rack solution is agnostic to the top-of-rack (ToR) switch a customer may
choose. For the management network role, the reference implementation in this document uses the Dell EMC
S3048-ON switch. Two Dell EMC Networking S4148-ON switches are used at the leaf layer of a standard
leaf-spine topology; the two switches provide high availability on the data network. A pair of switches of
similar or better capacity may be added at the spine layer of the topology, if desired.

3.1 S4148-ON 10 GbE Switch


The Dell EMC Networking S-Series S4148-ON is an ultra-low-latency 10/40GbE top-of-rack (ToR) switch
built for applications in high-performance data center and computing environments. Leveraging a
non-blocking switching architecture, the S4148-ON delivers line-rate L2 and L3 forwarding capacity with
ultra-low latency to maximize network performance.

10GbE Switch Specification


Variable Description

SFP+ ports 48 x 10GbE SFP+ ports

QSFP+ ports 6 x 40GbE QSFP+ ports

RJ45 ports 1 x Console/Management Port

Operating System OS10 Enterprise

Refer to the S4148-ON switch specification sheet for more information.

3.2 S3048-ON 1 GbE Switch


The Dell EMC Networking S-Series S3048-ON delivers 48 ports of wire-speed Gigabit Ethernet, with
260Gbps of switching capacity and 131Mpps of forwarding capacity. With 48 built-in copper Gigabit Ethernet
ports in a 1U form factor, the switch offers flexibility through its four SFP+ transceiver slots, which can be
used in lieu of up to four copper ports to support fiber media.

1GbE Switch Specification


Variable Description

SFP+ ports 4 x 10GbE SFP+ ports

QSFP+ ports None

RJ45 ports 48 x 1 Gigabit Ethernet Ports

Operating System OS10 Enterprise

Refer to the S3048-ON switch specification sheet for more information.


3.3 Infrastructure layout


The network consists of the following major network infrastructure layouts:

• Data network infrastructure: The server NICs and the leaf switch pair. The leaf switches are
connected to the data center user networks and carry the main service traffic in / out of the reference
architecture.
• Management network infrastructure: The BMC management network, which consists of iDRAC ports
and the OOB management ports of the switches, are aggregated into a 1-rack unit (RU) Dell EMC
PowerConnect S3048 switch. This 1-RU switch in turn can connect to the data center management
network.
• MAAS services: The MAAS rack controllers (see below) provide DHCP, IPMI, PXE, TFTP and other
local services on the provisioning and iDRAC network. Ensure that the MAAS DHCP server is
isolated from the data center DHCP server.

3.4 Network components


The following component blocks make up this network:

• Server nodes
• Leaf switches and networks
• VLANs
• Out-of-Band Management switch and network

3.5 Server nodes


To create a highly available solution, the network must be resilient to the loss of a single network switch,
network interface card (NIC) or bad cable. To achieve this, the network configuration uses channel bonding
across the servers and switches.

There are several types (or modes) of channel bonding, however only one is recommended and supported for
this solution:

• 802.3ad or LACP (mode=4)

The endpoints for all nodes are terminated on switch ports that have been configured for LACP bonding
mode, across two Dell EMC S4148-ON switches configured with VLT between them. For details regarding
network configuration on the servers, please contact your Dell EMC sales and services representative.

Supported channel bonding modes


Node type Channel Bonding type

Infrastructure nodes 802.3ad (LACP mode 4, channel fast)

Cloud nodes 802.3ad (LACP mode 4, channel fast)


Multiple bonds may be created on the servers to separate critical types of traffic from each other and
allocate them to different physical interfaces.

The actual layout depends on the particular cluster configuration and is out of scope of this Reference
Architecture.
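As an illustration of the bonding described above, the rendered host configuration might resemble the following netplan sketch. The interface names are placeholders, and the use of netplan here is an assumption for illustration: in this architecture the actual network configuration is generated by MAAS from the interface model.

```yaml
# Hypothetical netplan sketch of one LACP bond (802.3ad, fast rate,
# jumbo frames); interface names are placeholders.
network:
  version: 2
  ethernets:
    ens2f0: {}
    ens2f1: {}
  bonds:
    bond0:
      interfaces: [ens2f0, ens2f1]
      mtu: 9000
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100
```

The matching switch ports must be members of an LACP port channel spanning the VLT pair.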

3.6 Leaf switches


This reference implementation uses two Dell EMC Networking S4148-ON switches. There is a redundant
physical 2x 40GbE connection between the two switches. The recommended architecture uses Dell EMC VLT
between the switches in the leaf pair.

A sample physical connections diagram represents the bonding setup of the server interfaces and the
switch LAG setup.


3.7 VLANs
This reference architecture implements nine separate networks through Layer 2 VLANs. Some of the
networks below can be combined into a single subnet, based on end-user requirements.

VLAN  Description  Switch allocation

OOB Management  Used for the iDRAC network.  OOB

OAM (operation, administration, management)  Used for cluster access, provisioning, monitoring and management.  OOB

Internal  Used for internal endpoints and communications between most of the services.  Data

External  Used for providing outbound access for tenant networks.  Data

Public  Used for public service endpoints, e.g., using the OpenStack CLI and OpenStack Dashboard (Horizon UI).  Data

Overlay  Used mostly for guest compute traffic between tenants and between tenants and OpenStack services.  Data

Storage (access)  Used by clients of the Ceph/Swift storage backend to consume block and object storage contents.  Data

Storage (replication)  Used for replicating persistent storage data between units of Ceph.  Data

The picture below displays the network diagram, showing how the server nodes are connected using
VLANs over bonds.


3.8 Out-of-Band management network


The Management network of all the servers is aggregated into the Dell Networking S3048-ON switch in the
reference architecture. One interface on the Out-of-Band (OOB) switch provides an uplink to a
router/jumphost.

The OOB management network is used for several functions:

• The highly available software uses it to reboot and partition servers.
• When an uplink to a router is added and the iDRACs are configured to use it as a gateway, tools can
monitor the servers and gather metrics on them. Discussion of this topic is beyond the scope of this
document. Contact your Dell EMC sales or services representative for additional information.


4 Cluster Infrastructure components


The infrastructure nodes are composed of the following services and tools that are configured for high
availability:

• MAAS
• Juju
• Monitoring
• Log aggregation
• Alerting

This section provides details about how each of these components work.

4.1 How MAAS works


Metal as a Service (MAAS) has a tiered architecture, with a central PostgreSQL database backing a region
controller (regiond) that deals with operator requests. Distributed rack controllers (rackd) provide
high-bandwidth services to multiple racks. The controller itself is stateless and horizontally scalable, and only
presents a REST API.

Rackd provides DHCP, PXE, TFTP and other local services. Rack controllers cache large items, such as
operating system install images, at the rack level for performance, but maintain no exclusive state other than
the credentials to talk to the region controller.

4.2 High availability in MAAS


MAAS is a mission-critical service that provides the infrastructure coordination upon which cloud
infrastructures depend. High availability in the region controller is achieved at the database level. The region
controller automatically switches gateways to ensure high availability of services to network segments in the
event of a rack failure.

MAAS can scale from a small set of servers to many racks of hardware in a datacenter. High-bandwidth
activities (such as the initial operating system installation) are handled by the distributed gateways enabling
massively parallel deployments.

Picture below represents logical design of MAAS and high availability of its components.


MAAS diagram

4.3 The node lifecycle


Each machine, or “node”, managed by MAAS goes through a lifecycle: from “New” to “Ready”, and finally to
the “Deployed” state. There are also special statuses such as “Broken”, “Commissioning”, “Deploying”,
“Testing” and “Allocated”.

4.3.1 New
New machines that PXE-boot on a MAAS network will be enlisted automatically if MAAS can detect their BMC
parameters. During the Enlistment phase MAAS will ensure that it can control the power status of the
machine through its BMC. Another option is to add machines through the API or UI by supplying BMC
credentials.

4.3.2 Commissioning
In the Commissioning phase, MAAS collects all data about the machine, which includes detailed hardware
inventory like CPU model, memory setup, disks, and chipsets. It also collects information about network
connectivity. This information can later be used in deployments. In this phase, you can apply custom
commissioning scripts that can update firmware, configure hardware RAID, etc.
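A custom commissioning script is an executable that MAAS runs on the machine during this phase. The minimal sketch below only captures a hardware snapshot; the script name and metadata values are illustrative, and a real firmware- or RAID-configuration script would call vendor tooling instead.

```shell
#!/bin/bash
# 00-hardware-snapshot: hypothetical custom commissioning script sketch.
# --- Start MAAS 1.0 script metadata ---
# name: 00-hardware-snapshot
# title: Capture a basic hardware snapshot
# description: Record kernel, CPU and memory details during commissioning.
# script_type: commissioning
# --- End MAAS 1.0 script metadata ---
set -e
echo "Kernel:    $(uname -r)"
echo "CPU cores: $(nproc)"
awk '/MemTotal/ {print "Memory:    " $2 " " $3}' /proc/meminfo
```

Scripts like this are uploaded through the MAAS UI or CLI and selected when commissioning a machine.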


4.3.3 Ready
A machine that is successfully commissioned is considered “Ready”. A “Ready” machine has BMC
credentials configured (on IPMI-based BMCs) for ongoing power control, ensuring that MAAS can start or
stop the machine and allocate or redeploy it with a fresh operating system.

4.3.4 Allocated
“Ready” machines can be allocated to users, who can configure network interfaces, bonding and addressing,
as well as disks, including LVM, RAID, bcache and partitioning.

4.3.5 Deploying
Users can request that MAAS turn the machine on and install a complete operating system from scratch
without any manual intervention, configuring network interfaces, disk partitions, and more.
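The same step can be driven from the MAAS CLI. The sketch below assumes a CLI profile named `admin` and uses a placeholder system ID:

```shell
# Allocate a Ready machine and deploy Ubuntu 18.04 onto it
# (the "admin" profile and "xyzzy0" system ID are placeholders).
maas admin machines allocate system_id=xyzzy0
maas admin machine deploy xyzzy0 distro_series=bionic
```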

4.3.6 Releasing
When a user has finished with a machine, they can release it back to the shared pool of capacity. You can
request that MAAS perform a full disk wipe of the machine when that happens.

4.4 Install MAAS


In this Reference Architecture, MAAS is installed in a highly available fashion using a set of open-source
tools including, but not limited to, MAAS, PostgreSQL, and Corosync/Pacemaker.

For the detailed configuration procedure, please contact a Canonical representative.

4.4.1 Configure Your Hardware


MAAS requires one small server and at least one server that can be managed with a BMC. Dell EMC
recommends that you have the MAAS server provide DHCP and DNS on a network to which the managed
machines are connected.

4.4.2 Install Ubuntu Server


Download Ubuntu Server 18.04 LTS, and follow the step-by-step installation instructions on your MAAS
server.

4.4.3 MAAS Installation


This section describes the following MAAS installation topics:

• Prerequisites
• Infrastructure nodes requirements

Prerequisites

Three infrastructure nodes for full HA, pre-installed with the latest Ubuntu 18.04 LTS, must be available to
host MAAS, the Juju controllers, and the other runtime and monitoring tools. The nodes must have SSH
access to the root user enabled through authorized_keys.


4.5 Infrastructure nodes requirements


Three infrastructure nodes must already be preinstalled. They host multiple services intended to support
building and operating the OpenStack solution, including MAAS and its dependencies, such as PostgreSQL.
Each infrastructure node has to be turned into a KVM host managed by MAAS, and the necessary set of
KVM-based virtual machines should be created on top of them for further deployment of the supporting
services:

• Juju controllers
• Monitoring and alerting systems
• Log aggregation and analysis systems
• Landscape nodes management

Infrastructure nodes must have network access to:

• The PXE and BMC networks in order to commission and provision machines.
• The various APIs which must be monitored. In order to monitor OpenStack cluster, the nodes must
have access to the OpenStack Internal network (mentioned above).
• Externally, to the Ubuntu archives and other online services, in order to obtain images, packages, and
other reference data.

To provide HA, infrastructure nodes must:

• Be placed in separate hardware availability zones. MAAS has a concept of availability zones:
server hardware can be placed into different racks, with each rack in a single zone, or hardware
within the same rack can be divided based on power redundancy or slot placement. It is helpful to
place different services in different hardware zones.
• Have PostgreSQL installed in an HA fashion, with its virtual IP (VIP) configured. More information
can be found in the ClusterLabs manual.
• Have bonded network interfaces in order to provide resiliency against switch or NIC failures.
• Have the MTU on the bonded interfaces set to 9000 bytes (jumbo frames).
• Have a bridge (`broam`) interface active which has the primary bond (typically `bond0`) as its only
member. The bridge inherits the MTU of the underlying device, so there is no need to set its MTU
explicitly.

4.6 MAAS initial configurations


This section describes the following MAAS initial configurations:

• MAAS credentials
• Enlist and commission servers

4.6.1 MAAS Credentials


For the initial installation of MAAS, follow the official procedure.

All region controllers should point to the virtual IP of the PostgreSQL database. More information on MAAS
HA configuration can be found in the MAAS documentation.


After package installation, create a set of credentials for the admin user with the “maas init” command.
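The initial steps can be sketched as follows; the URL and hostname are placeholders, and exact commands may differ slightly between MAAS packaging options:

```shell
# Create the admin user, then log the CLI in against the region API.
sudo maas init                         # prompts for the initial admin account
sudo maas apikey --username=admin > api_key
maas login admin http://172.16.10.10:5240/MAAS/api/2.0/ "$(cat api_key)"
```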

4.6.2 Enlist and commission servers


Now MAAS is ready to enlist and commission machines. To perform that task:

1. Set all the servers to PXE boot from the first 10GbE network interface.
2. Boot each machine once. You should see these machines appear in MAAS.
3. Select all of the machines and commission them by clicking on the Take action button.

When machines have a Ready status you can deploy the services.
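The UI steps above also have CLI equivalents, sketched below with a placeholder profile name and system ID:

```shell
# List enlisted machines, then commission one of them
# (the "admin" profile and "xyzzy0" system ID are placeholders).
maas admin machines read | grep -E '"hostname"|"status_name"'
maas admin machine commission xyzzy0
```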

4.6.3 Set up MAAS KVM pods


Once MAAS is completely set up, all infrastructure nodes should be turned into KVM hosts managed by
MAAS. MAAS is then able to dynamically provision virtual machines on the nodes and present them as
available servers to users. Follow the MAAS guide for turning infrastructure nodes into KVM pods and
creating the set of VMs that host the Juju controllers and LMA components.
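Registering an infrastructure node as a KVM pod can be sketched with the MAAS CLI as follows; addresses and IDs are placeholders:

```shell
# Register an infra node as a virsh-based pod, then compose a VM on it.
maas admin pods create type=virsh \
    power_address=qemu+ssh://ubuntu@172.16.10.11/system
maas admin pod compose 1 cores=4 memory=8192   # pod ID 1, memory in MiB
```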

4.7 Juju components


For an overview of Juju, refer to the Juju modeling tool. This section discusses the working of different
components of Juju.

4.7.1 Juju controller - the heart of Juju


The Juju controller manages all the machines in your running models and responds to the events that are
triggered throughout the system. It also manages scale-out, configuration, and placement of all of your
models and applications.

The Juju controller has to be located in the same physical network segment as the OpenStack cluster, and
must be able to execute calls to the MAAS API and connect to the OpenStack cluster nodes.

The Juju controller is created on the set of KVM virtual machines mentioned in the previous steps.
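Bootstrapping an HA controller onto those VMs can be sketched as follows; the cloud and controller names are placeholders:

```shell
# Register MAAS as a cloud, bootstrap a controller, then make it HA.
juju add-cloud maas-cloud            # interactive; choose type "maas"
juju add-credential maas-cloud       # supply the MAAS API key
juju bootstrap maas-cloud maas-controller
juju enable-ha -n 3                  # replicate across three machines
```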

4.7.2 Charms
Charms are collections of scripts that contain all of the operations necessary to deploy, configure, scale, and
maintain cloud applications with Juju. A charm encapsulates a single application and all the code and
know-how it takes to operate it, such as how to combine and work with other related applications, or how to
upgrade it.

Charms also allow a hierarchy, with subordinate charms to complement a main service.

Charms source code and installables are stored in Canonical’s Charm Store.

4.7.3 Bundles
Bundles are ready-to-run collections of applications that are modelled to work together and can include
particular configurations and relations between the software to be deployed.


Bundles may also be optimized for different deployment scenarios of the same software. For example, a
scale-out, production-ready version like the OpenStack Base or an extended version of OpenStack
Telemetry.

Bundles perform the following functions:

• Install
• Configure
• Connect
• Upgrade and update
• Scale-out and scale-back
• Perform health checks
• Undertake operational actions
• Benchmark

Juju supports a UI representation of the deployed bundle and allows operators to dynamically manipulate the
cluster’s configuration options and layout, both prior to the bundle deployment and during its lifetime.
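A minimal bundle sketch with two applications and one relation might look like the following; charm revisions and most options are omitted, and this is not the full reference bundle:

```yaml
# Illustrative two-application bundle; not the reference deployment.
series: bionic
applications:
  keystone:
    charm: cs:keystone
    num_units: 3
  percona-cluster:
    charm: cs:percona-cluster
    num_units: 3
relations:
  - [ keystone:shared-db, percona-cluster:shared-db ]
```

Such a file is deployed with `juju deploy ./bundle.yaml`.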

4.7.4 Provision
Specify the number of machines you want and how you want them to be deployed, or let Juju do it
automatically.

4.7.5 Deploy
Deploy your services, or (re)deploy your entire application infrastructure to another cloud, with a few clicks of
your mouse.


4.7.6 Monitor and manage


The Juju controller manages:

• Multiple models
• All VMs in all your running models
• Scale out, configure and placement
• User accounts and identification
• Sharing and access

4.7.7 Comparing Juju to any configuration management tool


Juju provides a higher level of abstraction, and supplies the tools needed to manage the full scope of
operations beyond deployment and configuration management, regardless of the machine on which it runs.

One of the main advantages of Juju is the dynamic configuration ability, which enables you to:

• Reconfigure services on the fly.
• Add, remove, or change relationships between services.
• Scale in or out with ease, sharing operational knowledge and making the most of the wider
community.


4.8 Telemetry components


The Charmed OpenStack solution includes a telemetry suite based on the best open-source tools available.
All components are described in detail in the following chapters.

4.8.1 Monitoring Tools


The Canonical monitoring suite retrieves information from the OpenStack components and infrastructure
monitors, and combines it in a configurable portal, giving the customer visibility to all the different metrics.

The portal aggregates the relevant information from an operational perspective, and differentiates various
components, such as compute, network, and storage.

The Canonical observability tool allows operators to zoom in to the details of any of the higher-level graphs to
obtain further information. The portal also includes an efficient time series database that allows tracking of the
evolution of the cloud metrics and health status over time.


4.8.2 Log Aggregation


The solution also implements the Graylog suite for log aggregation, which makes it easy for customers to
have visibility on the different logs from their cloud services without accessing them directly.

These services are integrated with the Charmed OpenStack solution as part of the charms, fulfilling the same
requirements around upgradeability and operation.

Graylog stores processed logs in a highly available Elasticsearch cluster, while the cloud hosts run local
instances of Filebeat for log collection.

4.8.3 Landscape management


The Landscape systems management tool helps you monitor, manage, and update your entire Ubuntu
infrastructure from a single interface. As part of Canonical's Ubuntu Advantage support service, Landscape
includes intuitive system management tools combined with world-class support.

This charm deploys Landscape Dedicated Server (LDS), and must be connected to other charms to be fully
functional.

Visit https://landscape.canonical.com/landscape-features for more Landscape product information.


5 Charmed OpenStack components


This chapter presents detailed information about the OpenStack components included as charms in Charmed
OpenStack.

Below is a high-level representation of the logical components in the OpenStack cluster delivered by Canonical.

5.1 Storage charms


Ceph is a distributed storage and network file system designed to provide excellent performance, reliability,
and scalability. Canonical uses Ceph by default for storage, however this can be replaced by, or
complemented with, another storage solution.

5.1.1 ceph-monitor
Ceph monitors are the endpoints of the storage cluster and store the map of the data placement across Ceph
OSDs.

The Ceph Monitor charm has two pieces that cannot be changed post bootstrap:

• fsid
• monitor-secret

Caution: Attempting to change either of these will cause a reconfiguration error, and new service units will not
join the existing Ceph cluster.

By default the Ceph cluster does not bootstrap until three service units have been deployed and started. This
is to ensure that a quorum is achieved prior to adding storage devices.

After the initialization of the monitor cluster a quorum forms quickly, and OSD bring-up proceeds.
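Deploying the monitor quorum and OSDs can be sketched with charm-store charms as follows; the unit counts match this architecture, while the charm names and device paths are assumptions for illustration:

```shell
# Three monitors for quorum, OSDs on the nine cloud nodes.
juju deploy -n 3 ceph-mon
juju deploy -n 9 ceph-osd --config osd-devices='/dev/sdb /dev/sdc'
juju add-relation ceph-osd ceph-mon
```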


5.1.2 ceph-osd
Ceph OSDs manage the underlying storage devices that contain user data and represent the capacity of the
cluster.

This charm provides the Ceph OSD personality for expanding storage capacity within a Ceph deployment.

5.1.3 ceph-radosgateway
This charm provides an API endpoint for Swift or S3 clients, supporting Keystone-based RBAC and storing
objects in the Ceph cluster underneath.

5.2 OpenStack charms

5.2.1 cinder
This charm provides the Cinder volume service for OpenStack. It is intended to be used alongside the other
OpenStack components, starting with the Folsom release. Cinder is made up of four separate services:

• An API service
• A scheduler
• A volume service
• A backup service

5.2.2 glance
This charm provides the Glance image service for OpenStack. It is intended to be used alongside the other
OpenStack components, starting with the Essex release in Ubuntu 12.04.

Glance may be deployed in a number of ways. This charm focuses on three (3) main configurations. All
require the existence of the other core OpenStack services deployed via Juju charms, specifically:

• mysql
• keystone
• nova-cloud-controller

5.2.3 nova-cloud-controller
Nova Cloud Controller is the controller node for OpenStack Nova. It contains:

• nova-scheduler
• nova-api
• nova-consoleauth
• nova-conductor

The console access service (if required) depends on the preferred choice of virtual machine console type. At
the moment, three different types of console can be configured via the charm:

• spice
• xvpvnc
• novnc


Note: The console access protocol is configured into a guest when it is created; if it is changed, console
access for existing guests will stop working.

5.2.4 nova-compute-kvm
This charm provides Nova Compute, which configures the backend hypervisor and runs and governs
virtual machines. The target platform is Ubuntu (preferably LTS) + OpenStack.

The following interfaces are provided:

cloud-compute - Used to relate with (at least) one or more of:

• nova-cloud-controller
• glance
• ceph
• cinder
• mysql
• ceilometer-agent
• rabbitmq-server
• neutron

In this deployment Canonical uses KVM as a hypervisor.

5.2.5 heat
Heat is the main project in the OpenStack Orchestration program. It implements an orchestration engine to
launch multiple composite cloud applications based on templates, in the form of yaml files that can be treated
like code. Heat requires the existence of the other core OpenStack services deployed via Juju charms;
specifically:

• mysql
• rabbitmq-server
• keystone
• nova-cloud-controller

5.2.6 openstack-dashboard
The OpenStack Dashboard provides a Django-based web interface for use by both administrators and users
of an OpenStack Cloud. It allows you to manage Nova, Glance, Cinder, Neutron, Heat and Designate
resources within the cloud.

5.2.7 keystone
Keystone is an OpenStack project that provides Identity, Token, Catalog and Policy services for use
specifically by projects in the OpenStack family. It implements OpenStack's Identity API.

By default, Keystone uses an in-cloud database for storing users, roles and project assignments, but
LDAP-based authentication as well as SSO-based authentication can be enabled with the help of
subordinate charms.


5.2.8 gnocchi and ceilometer


These charms provide the Telemetry service for OpenStack. They are intended to be used alongside the
other OpenStack components, starting with the Folsom release.

Ceilometer is made up of two (2) separate services:

• Agent service
• A collector service

Ceilometer’s responsibility is to collect metrics of the virtual machines as well as the events and store them in
the backend database.

Gnocchi is an open-source, multi-tenant time-series, metrics and resources database that is used as the
backend for Ceilometer, as well as the target for API queries, allowing users to request processed metrics of
the virtual machines.

5.2.9 aodh
Aodh provides the Alarming service as part of OpenStack telemetry. It allows operators to configure and store
alarm definitions based on the metrics collected by Ceilometer. Once an alarm is created, the service makes
repeated calls to the Gnocchi service, analysing current or cumulative metrics. When the alarm is triggered, it
executes a pre-configured action.

5.2.10 designate
Designate provides DNSaaS services for OpenStack:

• A REST API for domain/record management
• Multi-tenant support
• Integration with Keystone for authentication
• A framework in place to integrate with Nova and Neutron notifications

5.2.11 neutron-api
This principle charm provides the OpenStack Neutron API service.

Just like OpenStack Nova provides an API to dynamically request and configure virtual servers, Neutron
provides an API to dynamically request and configure virtual networks. These networks connect "interfaces"
from other OpenStack services (e.g., virtual NICs from Nova VMs).

5.2.12 neutron-gateway
Neutron provides flexible software defined networking (SDN) for OpenStack.

This charm provides two key services:

• L3 network routing
• DHCP services

The charm is intended to be deployed on a separate node, providing “virtual router” functionality that allows
the virtual machines to reach external resources via SNAT, and makes them reachable from external
networks via floating IPs (DNAT).


In addition, DHCP agents are responsible for delivering network configuration to the virtual machines via the
DHCP/BOOTP protocol.

5.2.13 neutron-openvswitch
This subordinate charm provides the Neutron Open vSwitch configuration for a compute node. Once
deployed it takes over the management of the Neutron base and plugin configuration on the compute node.
This charm supports DPDK fast packet processing as well.

5.3 Resource charms


This topic describes the resource charms used by Charmed OpenStack.

5.3.1 percona-cluster
Percona XtraDB Cluster is a high availability and high scalability solution for MySQL clustering. Percona
XtraDB Cluster integrates Percona Server with the Galera library of MySQL high availability solutions in a
single product package, which enables you to create a cost-effective MySQL cluster. This charm deploys
Percona XtraDB Cluster onto Ubuntu.

Note: Percona XtraDB Cluster is not a 'scale-out' MySQL solution. Reads and writes are channeled through a
single service unit and synchronously replicated to other nodes in the cluster. Reads/writes are as slow as the
slowest node you have in your deployment.

5.3.2 rabbitmq-server
RabbitMQ is an implementation of AMQP, the emerging standard for high performance enterprise messaging.
The RabbitMQ server is a robust and scalable implementation of an AMQP broker. This charm deploys
RabbitMQ server and provides AMQP connectivity to clients.

When more than one unit of the charm is deployed the charm will bring up a native RabbitMQ cluster. The
process of clustering the units together takes some time.

Note: Due to the nature of asynchronous hook execution, it is possible that client relationship hooks may be
executed before the cluster is complete. In some cases, this can lead to client charm errors. Single unit
deployments behave as expected.

5.3.3 hacluster
This subordinate charm provides corosync and pacemaker cluster configuration for principal charms that
support the hacluster container-scoped relation.

There are two mutually exclusive HA options available:

• Virtual IP address(es)
• DNS

In this reference architecture deployment and testing, the HA option used is the VIP. In both cases a
relationship to hacluster is required, which provides the corosync back-end HA functionality. To use virtual IP
address(es), the clustered nodes must be on the same subnet, such that:


• The VIP is a valid IP address on the subnet for one of the node's interfaces
• Each node has an interface in said subnet

The VIP becomes a highly-available API endpoint.

At a minimum, the configuration option vip must be set in order to use virtual IP HA. If multiple networks are
being used, a VIP should be provided for each network, separated by spaces. Optionally, vip_iface or
vip_cidr may be specified.
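As a minimal sketch of the VIP option, the following hypothetical commands deploy a three-unit keystone cluster behind a virtual IP; the application name, unit count, and the address 10.0.10.20 are assumptions for illustration:

```shell
# Deploy three keystone units and set the virtual IP they will share
# (10.0.10.20 is a placeholder address on the nodes' common subnet).
juju deploy -n 3 keystone --config vip="10.0.10.20"

# Deploy a named hacluster subordinate and relate it to keystone;
# hacluster provides the corosync/pacemaker back end that manages the VIP.
juju deploy hacluster keystone-hacluster
juju add-relation keystone keystone-hacluster
```

The VIP then becomes the highly available API endpoint for the service.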

To use DNS high availability there are several prerequisites. However, DNS HA does not require the
clustered nodes to be on the same subnet.

• The clustered nodes must have static or "reserved" IP addresses registered in MAAS.
• The DNS hostname(s) must be pre-registered in MAAS before use with DNS HA.

At a minimum, the configuration option dns-ha must be set to true, and at least one of the following
hostnames must be set, in order to use DNS HA:

• os-public-hostname
• os-admin-hostname
• os-internal-hostname

The charm will throw an exception in the following circumstances:

• If neither vip nor dns-ha is set and the charm is related to hacluster
• If both vip and dns-ha are set, as they are mutually exclusive
• If dns-ha is set and none of the os-{admin,internal,public}-hostname(s) are set
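A corresponding DNS HA sketch follows; the hostname and application name are assumptions, and the hostname must be pre-registered in MAAS:

```shell
# Enable DNS HA instead of a VIP; keystone.example.com is a placeholder
# hostname that must already be registered in MAAS.
juju deploy -n 3 keystone \
    --config dns-ha=true \
    --config os-public-hostname="keystone.example.com"
juju deploy hacluster keystone-hacluster
juju add-relation keystone keystone-hacluster
```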

All OpenStack services are deployed in HA, and each service has three units, each running in an LXC
container on a separate hypervisor. Charmed OpenStack provides high availability for all OpenStack services,
as well as for Juju itself. The following diagram illustrates the different types of HA used in Charmed
OpenStack:


5.3.4 ntp
The ntp charm is a subordinate charm designed for use with other principal charms. In its basic mode, the
ntp charm configures NTP in service units to talk directly to a set of NTP time sources.

5.4 Network space support


OpenStack charms support the use of Juju network spaces, allowing a charm to be bound to network
space configurations managed directly by Juju. API endpoints can be bound to distinct network spaces,
supporting the network separation of all existing endpoints.

Network spaces are accordingly mapped to different VLANs managed by MAAS, making networking
management transparent and flexible.

To use this feature, use the --bind option when deploying the charm:
$ juju deploy neutron-api --bind "public=public-space \
    internal=internal-space admin=admin-space shared-db=internal-space"

Alternatively, these can also be provided as part of a Juju native bundle configuration:
neutron-api:
  charm: cs:neutron-api
  num_units: 1
  bindings:
    public: public-space
    admin: admin-space
    internal: internal-space
    shared-db: internal-space

Note: Spaces must be configured in the underlying provider prior to attempting to use them.

Note: Existing deployments using os-*-network configuration options will continue to function; these options
are preferred over any network space binding provided, if set.
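Before binding, the spaces themselves must be visible to Juju. A hypothetical check and creation is shown below; the space name and CIDR are illustrative, and with MAAS as the provider, spaces are normally defined in MAAS and discovered by Juju automatically:

```shell
# List the network spaces Juju currently knows about.
juju spaces

# Add a space manually (on providers that allow it); with MAAS the space
# is defined in MAAS and picked up automatically. Name and CIDR are
# placeholders for illustration.
juju add-space internal-space 10.20.0.0/24
```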

5.5 OpenStack validation


Once the OpenStack cluster is deployed, it is important to make sure that it is fully operational and that the
expected functionality is available.

Canonical recommends running a set of automated tests that leverage existing open-source components,
providing the best user experience:

• OpenStack Tempest
• OpenStack Rally
• Rados bench
• FIO

5.5.1 OpenStack Tempest


Developed by the OpenStack community, Tempest includes a variety of tests for all features
provided by OpenStack components. Functional, unit, and scenario tests allow comprehensive
analysis of the running cluster.

Canonical recommends running the test suite based on the RefStack guidelines, which cover the
functionality of Charmed OpenStack.
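A hypothetical Tempest invocation against such a guideline-derived test list might look as follows; the workspace name and load-list file are assumptions:

```shell
# Create a Tempest workspace, then run only the tests named in a
# RefStack-derived load list (refstack-test-list.txt is a placeholder).
tempest init cloud-validation
cd cloud-validation
tempest run --load-list ../refstack-test-list.txt
```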

5.5.2 OpenStack Rally


Rally, a community-driven project, is ideal for performance analysis and functional testing of an OpenStack
environment. It acts as a real user, using only the public endpoints of the cluster, and verifies that
cluster users are fully satisfied with the OpenStack functionality.
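A minimal Rally sketch is shown below, assuming the OS_* credential variables are exported and a task file exists; the deployment name and task file are placeholders:

```shell
# Register the cloud from OS_* environment variables, then run a task
# scenario and render an HTML report (file names are illustrative).
rally deployment create --fromenv --name charmed-openstack
rally task start boot-and-delete.yaml
rally task report --out report.html
```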

5.5.3 Rados Bench and FIO


With Ceph as the default storage backend for multiple components of Charmed OpenStack, such as the
Block, Image, Object, and Compute services, it is important to get maximum performance from the Ceph
cluster. The Ceph community provides a native tool, RADOS Bench, for testing Ceph RADOS (the object
layer). Canonical recommends using this tool for baseline testing of cluster performance.
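A baseline RADOS Bench run can be sketched as follows; the pool name, placement-group count, and durations are illustrative:

```shell
# Create a throwaway pool, write 4 MB objects for 60 seconds, then read
# them back sequentially; --no-cleanup keeps the objects for the read test.
ceph osd pool create bench-pool 128
rados bench -p bench-pool 60 write --no-cleanup
rados bench -p bench-pool 60 seq

# Remove the benchmark objects when finished.
rados -p bench-pool cleanup
```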

In addition, it is valuable to separately test the block level of the Ceph cluster with the open-source tool FIO
(Flexible I/O Tester). Using the RBD (RADOS Block Device) driver, FIO tests the input/output performance of
an image within Ceph in the same fashion as the performance of a generic block device.
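A hypothetical FIO run against an RBD image follows; the pool, image, and client names are assumptions, and the image must already exist:

```shell
# Random 4k writes against an existing RBD image via the rbd ioengine.
fio --name=rbd-randwrite --ioengine=rbd --clientname=admin \
    --pool=bench-pool --rbdname=test-image \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
    --direct=1 --runtime=60 --time_based
```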


6 Monitoring and logging tools


This chapter describes the services deployed to manage and monitor the OpenStack cluster. Canonical has
a specially designed architecture for its customers to manage and monitor OpenStack clusters. These
services are an optional part of the Foundation Cloud services. If customers have different requirements,
Canonical designs the architecture and performs the deployment as part of Foundation Cloud.

6.1 Logging the cluster

6.1.1 Graylog
Graylog is the solution for aggregating and managing logs from the various components of the cluster, as well
as for providing visualization of the logs.

Below is a sample log aggregation dashboard.

6.1.2 Elasticsearch
Elasticsearch is a distributed database used for storing indexed logs; it acts as the backend for Graylog.

6.1.3 Filebeat
As a log forwarder, Filebeat tails various log files on the client side and quickly sends this information to
Graylog for further parsing and enrichment, or to Elasticsearch for centralized storage and analysis.
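As a sketch, Filebeat can be attached to a principal application with Juju and pointed at the Graylog Beats input; the application names, address, and port below are assumptions based on the public filebeat charm:

```shell
# Deploy the filebeat subordinate onto every nova-compute unit.
juju deploy filebeat
juju add-relation filebeat nova-compute

# Point Filebeat at the Graylog Beats input (placeholder address/port).
juju config filebeat logstash_hosts="10.0.0.10:5044"
```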


6.2 Monitoring the cluster


From an architectural standpoint, the monitoring suite consists of Telegraf (host metric collection),
Prometheus (monitoring server), and Grafana (monitoring dashboard).

6.2.1 Prometheus
Prometheus is a systems and services monitoring system. It collects metrics from configured targets at given
intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed
to be true.

6.2.2 Grafana
Grafana is the leading graph and dashboard builder for visualizing time series metrics.

The picture below displays the monitoring tool dashboard.

6.2.3 Telegraf
Telegraf is a client-side agent that collects information about the status of the host services and makes it
available for pulling by monitoring solutions (in this architecture, Prometheus).
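A hypothetical wiring of Telegraf into this suite with Juju is sketched below; the application and endpoint names are assumptions based on the public telegraf and prometheus2 charms:

```shell
# Deploy telegraf as a subordinate on each nova-compute unit.
juju deploy telegraf
juju add-relation telegraf nova-compute

# Expose Telegraf's metrics endpoint as a Prometheus scrape target.
juju add-relation telegraf:prometheus-client prometheus2:target
```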

6.2.4 Alarming
The Charmed OpenStack telemetry suite also includes a Nagios cluster that is responsible for sending alarms
in case of the failure of cloud hosts or individual services.

Each host and container of the OpenStack cluster runs NRPE (Nagios Remote Plugin Executor), which
collects the status of running services and reports back to the Nagios server.
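A sketch of the NRPE/Nagios wiring with Juju follows; the application names are assumptions based on the public nagios and nrpe charms:

```shell
# Deploy the Nagios server and the nrpe subordinate.
juju deploy nagios
juju deploy nrpe

# Attach nrpe to a principal application and report its checks to Nagios.
juju add-relation nrpe nova-compute
juju add-relation nrpe:monitors nagios:monitors
```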

Below is an example of the alarm dashboard showing the current status of the running cluster.


6.2.5 External integration


All the telemetry components described above are configured automatically with the help of Juju charms, and
are thus dynamically scalable and configurable.

It is also possible, using additional Juju charms, to integrate the logging and alarming components with
existing corporate tools, such as Splunk, remote syslog servers, and others.

For detailed information, contact a Dell EMC or Canonical representative.


7 Appendix A References
Please see the following resources for more information.

7.1 Dell EMC documentation


http://en.community.dell.com/techcenter/cloud/w/wiki/12401.dell-emc-canonical-openstack-cloud-solutions

http://www.dell.com/en-us/work/learn/rack-scale-infrastructure

7.2 Canonical documentation


https://jaas.ai/store

https://maas.io/

https://wiki.ubuntu.com/

https://www.ubuntu.com/

BootStack: fully managed OpenStack and Kubernetes operations

http://www.ubuntu.com/download/server

Canonical service support

Canonical Foundation Cloud Services

OpenStack UA Support

7.3 OpenStack Documentation


https://wiki.openstack.org/wiki/Main_Page

https://docs.openstack.org/charm-guide/latest/

7.4 To Learn More


If you need additional services or implementation help, please contact your Dell EMC sales representative, or
email [email protected].


A Technical support and resources


Dell.com/support is focused on meeting customer needs with proven services and support.

Storage technical documents and videos provide expertise that helps to ensure customer success on Dell
EMC storage platforms.

A.1 Related resources



