OpenShift Container Platform 3.11
Installing Clusters
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
Install your OpenShift Container Platform 3.11 cluster with this guide
Table of Contents

CHAPTER 1. PLANNING YOUR INSTALLATION
  1.1. INITIAL PLANNING
    1.1.1. Limitations and Considerations for Installations on IBM POWER
  1.2. SIZING CONSIDERATIONS
  1.3. ENVIRONMENT SCENARIOS
    1.3.1. Single master and node on one system
    1.3.2. Single master and multiple nodes
    1.3.3. Multiple masters using native HA
    1.3.4. Multiple Masters Using Native HA with External Clustered etcd
    1.3.5. Stand-alone registry
  1.4. INSTALLATION TYPES FOR SUPPORTED OPERATING SYSTEMS
    1.4.1. Required images for system containers
    1.4.2. systemd service names
    1.4.3. File path locations
    1.4.4. Storage requirements
CHAPTER 2. SYSTEM AND ENVIRONMENT REQUIREMENTS
  2.1. SYSTEM REQUIREMENTS
    2.1.1. Red Hat subscriptions
    2.1.2. Minimum hardware requirements
    2.1.3. Production level hardware requirements
    2.1.4. Storage management
    2.1.5. Red Hat Gluster Storage hardware requirements
    2.1.6. Monitoring hardware requirements
    2.1.7. SELinux requirements
    2.1.8. Optional: Configuring Core Usage
    2.1.9. Optional: Using OverlayFS
    2.1.10. Security Warning
  2.2. ENVIRONMENT REQUIREMENTS
    2.2.1. DNS Requirements
      2.2.1.1. Configuring Hosts to Use DNS
      2.2.1.2. Configuring a DNS Wildcard
      2.2.1.3. Configuring Node Host Names
    2.2.2. Network Access Requirements
      2.2.2.1. NetworkManager
      2.2.2.2. Configuring firewalld as the firewall
      2.2.2.3. Required Ports
    2.2.3. Persistent Storage
    2.2.4. Cloud Provider Considerations
      2.2.4.1. Overriding Detected IP Addresses and Host Names
      2.2.4.2. Post-Installation Configuration for Cloud Providers
CHAPTER 3. PREPARING YOUR HOSTS
  3.1. OPERATING SYSTEM REQUIREMENTS
  3.2. SERVER TYPE REQUIREMENTS
  3.3. SETTING PATH
  3.4. ENSURING HOST ACCESS
  3.5. SETTING PROXY OVERRIDES
  3.6. REGISTERING HOSTS
  3.7. INSTALLING BASE PACKAGES
  3.8. INSTALLING DOCKER
CHAPTER 4. CONFIGURING YOUR INVENTORY FILE
  4.1. CUSTOMIZING INVENTORY FILES FOR YOUR CLUSTER
  4.2. CONFIGURING CLUSTER VARIABLES
  4.3. CONFIGURING DEPLOYMENT TYPE
  4.4. CONFIGURING HOST VARIABLES
  4.5. DEFINING NODE GROUPS AND HOST MAPPINGS
    4.5.1. Node ConfigMaps
    4.5.2. Node Group Definitions
    4.5.3. Mapping Hosts to Node Groups
    4.5.4. Node Host Labels
      4.5.4.1. Pod Schedulability on Masters
      4.5.4.2. Pod Schedulability on Nodes
      4.5.4.3. Configuring Dedicated Infrastructure Nodes
  4.6. CONFIGURING PROJECT PARAMETERS
  4.7. CONFIGURING MASTER API PORT
  4.8. CONFIGURING CLUSTER PRE-INSTALL CHECKS
  4.9. CONFIGURING A REGISTRY LOCATION
  4.10. CONFIGURING A REGISTRY ROUTE
  4.11. CONFIGURING ROUTER SHARDING
  4.12. CONFIGURING RED HAT GLUSTER STORAGE PERSISTENT STORAGE
    4.12.1. Configuring converged mode
    4.12.2. Configuring independent mode
  4.13. CONFIGURING AN OPENSHIFT CONTAINER REGISTRY
    4.13.1. Configuring Registry Storage
      Option A: NFS Host Group
      Option B: External NFS Host
      Upgrading or Installing OpenShift Container Platform with NFS
      Option C: OpenStack Platform
      Option D: AWS or Another S3 Storage Solution
      Option E: converged mode
      Option F: Google Cloud Storage (GCS) bucket on Google Compute Engine (GCE)
      Option G: vSphere Volume with vSphere Cloud Provider (VCP)
  4.14. CONFIGURING GLOBAL PROXY OPTIONS
  4.15. CONFIGURING THE FIREWALL
  4.16. CONFIGURING SESSION OPTIONS
  4.17. CONFIGURING CUSTOM CERTIFICATES
  4.18. CONFIGURING CERTIFICATE VALIDITY
  4.19. CONFIGURING CLUSTER MONITORING
  4.20. CONFIGURING CLUSTER METRICS
    4.20.1. Configuring Metrics Storage
      Option A: Dynamic
      Option B: NFS Host Group
CHAPTER 5. EXAMPLE INVENTORY FILES
  5.1. OVERVIEW
  5.2. SINGLE MASTER EXAMPLES
    5.2.1. Single Master, Single etcd, and Multiple Nodes
    5.2.2. Single Master, Multiple etcd, and Multiple Nodes
  5.3. MULTIPLE MASTERS EXAMPLES
    5.3.1. Multiple Masters Using Native HA with External Clustered etcd
    5.3.2. Multiple Masters Using Native HA with Co-located Clustered etcd
CHAPTER 6. INSTALLING OPENSHIFT CONTAINER PLATFORM
  6.1. PREREQUISITES
    6.1.1. Running the RPM-based installer
    6.1.2. Running the containerized installer
      6.1.2.1. Running the installer as a system container
      6.1.2.2. Running other playbooks
      6.1.2.3. Running the installer as a container
      6.1.2.4. Running the Installation Playbook for OpenStack
    6.1.3. About the installation playbooks
  6.2. RETRYING THE INSTALLATION
  6.3. VERIFYING THE INSTALLATION
    Verifying Multiple etcd Hosts
    Verifying Multiple Masters Using HAProxy
  6.4. OPTIONALLY SECURING BUILDS
  6.5. KNOWN ISSUES
  6.6. WHAT’S NEXT?
CHAPTER 7. DISCONNECTED INSTALLATION
  7.1. PREREQUISITES
  7.2. OBTAINING REQUIRED SOFTWARE PACKAGES AND IMAGES
    7.2.1. Obtaining OpenShift Container Platform packages
    7.2.2. Obtaining images
    7.2.3. Exporting images
  7.3. PREPARE AND POPULATE THE REPOSITORY SERVER
  7.4. POPULATE THE REGISTRY
  7.5. PREPARING CLUSTER HOSTS
  7.6. INSTALLING OPENSHIFT CONTAINER PLATFORM
CHAPTER 8. INSTALLING A STAND-ALONE DEPLOYMENT OF OPENSHIFT CONTAINER IMAGE REGISTRY
  8.1. MINIMUM HARDWARE REQUIREMENTS
  8.2. SUPPORTED SYSTEM TOPOLOGIES
  8.3. INSTALLING THE OPENSHIFT CONTAINER REGISTRY
CHAPTER 9. UNINSTALLING OPENSHIFT CONTAINER PLATFORM
  9.1. UNINSTALLING AN OPENSHIFT CONTAINER PLATFORM CLUSTER
  9.2. UNINSTALLING NODES

CHAPTER 1. PLANNING YOUR INSTALLATION
You can read more about Ansible and its basic usage in the official documentation.
Do your on-premise servers use IBM POWER or x86_64 processors? You can install OpenShift
Container Platform on servers that use either type of processor. If you use POWER servers,
review the Limitations and Considerations for Installations on IBM POWER.
How many pods are required in your cluster? The Sizing Considerations section provides limits
for nodes and pods so you can calculate how large your environment needs to be.
How many hosts do you require in the cluster? The Environment Scenarios section provides
multiple examples of Single Master and Multiple Master configurations.
Do you need a high availability cluster? High availability configurations improve fault tolerance.
In this situation, you might use the Multiple Masters Using Native HA example to set up your
environment.
Is cluster monitoring required? The monitoring stack requires additional system resources. Note
that the monitoring stack is installed by default. See the cluster monitoring documentation for
more information.
Do you want to use Red Hat Enterprise Linux (RHEL) or RHEL Atomic Host as the operating
system for your cluster nodes? If you install OpenShift Container Platform on RHEL, you use an
RPM-based installation. On RHEL Atomic Host, you use a system container. Both installation
types provide a working OpenShift Container Platform environment.
Which identity provider do you use for authentication? If you already use a supported identity
provider, configure OpenShift Container Platform to use that identity provider during
installation.
Your cluster must use only Power nodes and masters. Because of the way that images are
tagged, OpenShift Container Platform cannot differentiate between x86 images and Power
images.
Image streams and templates are not installed by default or updated when you upgrade. You
can manually install and update the image streams.
You can install only on on-premise Power servers. You cannot install OpenShift Container
Platform on nodes in any cloud provider.
Not all storage providers are supported. You can use only the following storage providers:
GlusterFS
NFS
Local storage
NOTE
Moving from a single master cluster to multiple masters after installation is not
supported.
In all environments, if your etcd hosts are co-located with master hosts, etcd runs as a static pod on the
host. If your etcd hosts are not co-located with master hosts, they run etcd as standalone processes.
NOTE
If you use RHEL Atomic Host, you can configure etcd on only master hosts.
[Table: example hosts for this scenario; node1.example.com and node2.example.com serve as nodes.]
IMPORTANT
Routers and master nodes must be load balanced to have a highly available and fault-
tolerant environment. Red Hat recommends the use of an enterprise-grade external load
balancer for production environments. This load balancing applies to the masters and to the
nodes that host the OpenShift Container Platform routers. Transmission Control
Protocol (TCP) layer 4 load balancing, in which the load is spread across IP addresses, is
recommended. See External Load Balancer Integrations with OpenShift Enterprise 3 for
a reference design, which is not recommended for production use cases.
[Table: example hosts for the multiple-master scenarios; master2.example.com and master3.example.com are additional masters, etcd2.example.com and etcd3.example.com are additional clustered etcd hosts, and node1.example.com and node2.example.com serve as nodes.]
An RPM installation installs all services through package management and configures services to run in
the same user space, while a system container installation installs services using system container
images and runs separate services in individual containers.
When using RPMs on RHEL, all services are installed and updated by package management from an
outside source. These packages modify a host’s existing configuration in the same user space. With
system container installations on RHEL Atomic Host, each component of OpenShift Container Platform
is shipped as a container, in a self-contained package, that uses the host’s kernel to run. Updated, newer
containers replace any existing ones on your host.
The following table and sections outline further differences between the installation types:
Delivery Mechanism: RPM-based installations use RPM packages via yum; system container installations use system container images via docker.
The system container installation type makes use of the following images:
openshift3/ose-node
By default, all of the above images are pulled from the Red Hat Registry at registry.redhat.io.
If you need to use a private registry to pull these images during the installation, you can specify the
registry information ahead of time. Set the following Ansible variables in your inventory file, as required:
oreg_url='<registry_hostname>/openshift3/ose-${component}:${version}'
openshift_docker_insecure_registries=<registry_hostname>
openshift_docker_blocked_registries=<registry_hostname>
NOTE
The default component inherits the image prefix and version from the oreg_url value.
The configuration of additional, insecure, and blocked container registries occurs at the beginning of the
installation process to ensure that these settings are applied before attempting to pull any of the
required images.
However, the default image stream and template files are installed at /etc/origin/examples/ for
Atomic Host installations rather than the standard /usr/share/openshift/examples/ because that
directory is read-only on RHEL Atomic Host.
CHAPTER 2. SYSTEM AND ENVIRONMENT REQUIREMENTS
Masters
Physical or virtual system or an instance running on a public or private IaaS.
Base OS: Red Hat Enterprise Linux (RHEL) 7.5 or later with the "Minimal" installation
option and the latest packages from the Extras channel, or RHEL Atomic Host 7.4.5 or
later.
IBM POWER9: RHEL-ALT 7.5 with the "Minimal" installation option and the latest
packages from the Extras channel.
IBM POWER8: RHEL 7.5 with the "Minimal" installation option and the latest
packages from the Extras channel. If you use RHEL, you must use the following
minimal kernel versions:
Minimum 40 GB hard disk space for the file system containing /var/.
Minimum 1 GB hard disk space for the file system containing /usr/local/bin/.
Minimum 1 GB hard disk space for the file system containing the system’s temporary
directory.
Nodes
Physical or virtual system, or an instance running on a public or private IaaS.
Base OS: RHEL 7.5 or later with the "Minimal" installation option, or RHEL Atomic Host
7.4.5 or later.
IBM POWER9: RHEL-ALT 7.5 with the "Minimal" installation option and the latest
packages from the Extras channel.
IBM POWER8: RHEL 7.5 with the "Minimal" installation option and the latest
packages from the Extras channel. If you use RHEL, you must use the following
minimal kernel versions:
1 vCPU.
Minimum 8 GB RAM.
Minimum 15 GB hard disk space for the file system containing /var/.
Minimum 1 GB hard disk space for the file system containing /usr/local/bin/.
Minimum 1 GB hard disk space for the file system containing the system’s temporary
directory.
External etcd nodes
Minimum 20 GB hard disk space for etcd data.
See the Hardware Recommendations section of the CoreOS etcd documentation for
information on how to properly size your etcd nodes.
Ansible controller
The host that you run the Ansible playbook on must have at least 75 MiB of free memory per
host in the inventory.
Meeting the /var/ file system sizing requirements in RHEL Atomic Host requires making changes
to the default configuration. See Managing Storage with Docker-formatted Containers for instructions
on configuring this during or after installation.
The system’s temporary directory is determined according to the rules defined in the tempfile
module in Python’s standard library.
You must configure storage for each system that runs a container daemon. For containerized
installations, you need storage on masters. Also, by default, the web console runs in containers on
masters, and masters need storage to run the web console. Containers run on nodes, so nodes always
require storage. The size of storage depends on workload, the number of containers, the size of the
running containers, and the containers' storage requirements. You must also configure storage to run
containerized etcd.
It is highly recommended that you use etcd with storage that handles serial writes (fsync) quickly, such
as NVMe or SSD. Ceph, NFS, and spinning disks are not recommended.
Master hosts
In a highly available OpenShift Container Platform cluster with external etcd, a master host needs to
meet the minimum requirements and have 1 CPU core and 1.5 GB of memory for each 1000 pods.
Therefore, the recommended size of a master host in an OpenShift Container Platform cluster of
2000 pods is the minimum requirements of 2 CPU cores and 16 GB of RAM, plus 2 CPU cores and 3
GB of RAM, totaling 4 CPU cores and 19 GB of RAM.
See Recommended Practices for OpenShift Container Platform Master Hosts for performance
guidance.
Node hosts
The size of a node host depends on the expected size of its workload. As an OpenShift Container
Platform cluster administrator, you need to calculate the expected workload and add about 10
percent for overhead. For production environments, allocate enough resources so that a node host
failure does not affect your maximum capacity.
IMPORTANT
Table 2.1. The main directories to which OpenShift Container Platform components write data

/var/lib/openshift
  Purpose: Used for etcd storage only when in single master mode and etcd is embedded in the atomic-openshift-master process.
  Sizing: Less than 10 GB.
  Expected growth: Will grow slowly with the environment. Only storing metadata.

/var/lib/etcd
  Purpose: Used for etcd storage when in Multi-Master mode or when etcd is made standalone by an administrator.
  Sizing: Less than 20 GB.
  Expected growth: Will grow slowly with the environment. Only storing metadata.

/var/lib/docker
  Purpose: When the run time is docker, this is the mount point. Storage used for active container runtimes (including pods) and storage of local images (not used for registry storage). Mount point should be managed by docker-storage rather than manually.
  Sizing: 50 GB for a node with 16 GB memory. Additional 20-25 GB for every additional 8 GB of memory.
  Expected growth: Growth is limited by the capacity for running containers.

/var/lib/containers
  Purpose: When the run time is CRI-O, this is the mount point. Storage used for active container runtimes (including pods) and storage of local images (not used for registry storage).
  Sizing: 50 GB for a node with 16 GB memory. Additional 20-25 GB for every additional 8 GB of memory.
  Expected growth: Growth is limited by the capacity for running containers.

/var/log
  Purpose: Log files for all components.
  Sizing: 10 to 30 GB.
  Expected growth: Log files can grow quickly; size can be managed by growing disks or by using log rotation.
Any nodes used in a converged mode or independent mode cluster are considered storage nodes.
Storage nodes can be grouped into distinct cluster groups, though a single node can not be in multiple
groups. For each group of storage nodes:
At least one storage node per group is required; the exact minimum depends on the GlusterFS
volumetype option configured for storage.
Each storage node must have a minimum of 8 GB of RAM. This is to allow running the Red Hat
Gluster Storage pods, as well as other applications and the underlying operating system.
Each GlusterFS volume also consumes memory on every storage node in its storage cluster,
which is about 30 MB. The total amount of RAM should be determined based on how many
concurrent volumes are desired or anticipated.
Each storage node must have at least one raw block device with no present data or metadata.
These block devices will be used in their entirety for GlusterFS storage. Make sure the following
are not present:
IMPORTANT
It is recommended to plan for two clusters: one dedicated to storage for infrastructure
applications (such as an OpenShift Container Registry) and one dedicated to storage for
general applications. This would require a total of six storage nodes. This
recommendation is made to avoid potential impacts on performance in I/O and volume
creation.
For example, run the following before starting the server to make OpenShift Container Platform only
run on one core:
# export GOMAXPROCS=1
As of Red Hat Enterprise Linux 7.4, you have the option to configure your OpenShift Container Platform
environment to use OverlayFS. The overlay2 graph driver is fully supported in addition to the older
overlay driver. However, Red Hat recommends using overlay2 instead of overlay, because of its speed
and simple implementation.
Comparing the Overlay Versus Overlay2 Graph Drivers has more information about the overlay and
overlay2 drivers.
See the Overlay Graph Driver section of the Atomic Host documentation for instructions on how to
enable the overlay2 graph driver for the Docker service.
Exposure to harmful containers can be limited by assigning specific builds to nodes so that any exposure
is limited to those nodes. To do this, see the Assigning Builds to Specific Nodes section of the
Developer Guide. For cluster administrators, see the Configuring Global Build Defaults and Overrides
topic.
You can also use security context constraints to control the actions that a pod can perform and what it
has the ability to access. For instructions on how to enable images to run with USER in the Dockerfile,
see Managing Security Context Constraints (requires a user with cluster-admin privileges).
http://opensource.com/business/14/7/docker-security-selinux
https://docs.docker.com/engine/security/security/
IMPORTANT
Adding entries into the /etc/hosts file on each host is not enough. This file is not copied
into containers running on the platform.
Key components of OpenShift Container Platform run themselves inside of containers and use the
following process for name resolution:
1. By default, containers receive their DNS configuration file (/etc/resolv.conf) from their host.
2. OpenShift Container Platform then sets the pod’s first nameserver to the IP address of the
node.
As of OpenShift Container Platform 3.2, dnsmasq is automatically configured on all masters and nodes.
The pods use the nodes as their DNS, and the nodes forward the requests. By default, dnsmasq is
configured on the nodes to listen on port 53, therefore the nodes cannot run any other type of DNS
application.
NOTE
Similarly, if the PEERDNS parameter is set to no in the network script, for example,
/etc/sysconfig/network-scripts/ifcfg-em1, then the dnsmasq files are not generated,
and the Ansible install will fail. Ensure the PEERDNS setting is set to yes.
master1 A 10.64.33.100
master2 A 10.64.33.103
node1 A 10.64.33.101
node2 A 10.64.33.102
If you do not have a properly functioning DNS environment, you might experience failure with:
Access to the OpenShift Container Platform web console, because it is not accessible via IP
address alone
Make sure each host in your environment is configured to resolve hostnames from your DNS server. The
configuration for hosts' DNS resolution depend on whether DHCP is enabled. If DHCP is:
Disabled, then configure your network interface to be static, and add DNS nameservers to
NetworkManager.
Enabled, then the NetworkManager dispatch script automatically configures DNS based on the
DHCP configuration.
$ cat /etc/resolv.conf
# Generated by NetworkManager
search example.com
nameserver 10.64.33.1
# nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh
2. Test that the DNS servers listed in /etc/resolv.conf are able to resolve host names to the IP
addresses of all masters and nodes in your OpenShift Container Platform environment:
For example:
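For example, you might test resolution with dig against the nameserver listed in /etc/resolv.conf; the host name and addresses below are placeholders drawn from the sample records shown earlier:
$ dig node1.example.com @10.64.33.1 +short
10.64.33.101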
Optionally, configure a wildcard for the router to use, so that you do not need to update your DNS
configuration when new routes are added.
A wildcard for a DNS zone must ultimately resolve to the IP address of the OpenShift Container
Platform router.
For example, create a wildcard DNS entry for cloudapps that has a low time-to-live value (TTL) and
points to the public IP address of the host where the router will be deployed:
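A minimal sketch of such an entry in BIND zone-file syntax, assuming the domain cloudapps.example.com and a router host at 192.168.133.2 (both placeholders):
*.cloudapps.example.com. 300 IN A 192.168.133.2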
WARNING
In your /etc/resolv.conf file on each node host, ensure that the DNS server that
has the wildcard entry is not listed as a nameserver or that the wildcard domain is
not listed in the search list. Otherwise, containers managed by OpenShift Container
Platform might fail to resolve host names properly.
When you set up a cluster that is not integrated with a cloud provider, you must correctly set your nodes'
host names. Each node’s host name must be resolvable, and each node must be able to reach each
other node.
$ hostname
master-1.example.com
2. On that same node, obtain the fully qualified domain name of the host:
$ hostname -f
master-1.example.com
3. From a different node, confirm that you can reach the first node:
$ ping master-1.example.com -c 1
2.2.2.1. NetworkManager
NetworkManager, a program for providing detection and configuration for systems to automatically
connect to the network, is required on the nodes in order to populate dnsmasq with the DNS IP
addresses.
While iptables is the default firewall, firewalld is recommended for new installations. You can enable
firewalld by setting os_firewall_use_firewalld=true in the Ansible inventory file .
[OSEv3:vars]
os_firewall_use_firewalld=True
Setting this variable to true opens the required ports and adds rules to the default zone, which ensure
that firewalld is configured correctly.
NOTE
The default firewalld configuration offers limited configuration options that cannot be
overridden. For example, while you can set up a storage network with interfaces in multiple
zones, the interface that nodes communicate on must be in the default zone.
The OpenShift Container Platform installation automatically creates a set of internal firewall rules on
each host using iptables. However, if your network configuration uses an external firewall, such as a
hardware-based firewall, you must ensure infrastructure components can communicate with each other
through specific ports that act as communication endpoints for certain processes or services.
Ensure the following ports required by OpenShift Container Platform are open on your network and
configured to allow access between hosts. Some ports are optional depending on your configuration and
usage.
4789 UDP Required for SDN communication between pods on separate hosts.
4789 UDP Required for SDN communication between pods on separate hosts.
443 or 8443 TCP Required for node hosts to communicate to the master API, for the node hosts
to post back status, to receive tasks, and so on.
4789 UDP Required for SDN communication between pods on separate hosts.
10250 TCP The master proxies to node hosts via the Kubelet for oc commands. This port
must be allowed from masters and infra nodes to any master and node. For
metrics, the source must be the infra nodes.
10010 TCP If using CRI-O, open this port to allow oc exec and oc rsh operations.
2049 TCP/UDP Required when provisioning an NFS host as part of the installer.
2379 TCP Used for standalone etcd (clustered) to accept changes in state.
2380 TCP etcd requires this port be open between masters for leader election and
peering connections when using standalone etcd (clustered).
4789 UDP Required for SDN communication between pods on separate hosts.
9000 TCP If you choose the native HA method, optional to allow access to the HAProxy
statistics page.
443 or 8443 TCP Required for node hosts to communicate to the master API, for node hosts to
post back status, to receive tasks, and so on.
8444 TCP Port that the controller manager and scheduler services listen on. Required to
be open for the /metrics and /healthz endpoints.
53 or 8053 TCP/UDP Required for DNS resolution of cluster services (SkyDNS). Installations prior to
3.2 or environments upgraded to 3.2 use port 53. New installations will use
8053 by default so that dnsmasq might be configured. Only required to be
internally open on master hosts.
80 or 443 TCP For HTTP/HTTPS use for the router. Required to be externally open on node
hosts, especially on nodes running the router.
1936 TCP (Optional) Required to be open when running the template router to access
statistics. Can be open externally or internally to connections depending on if
you want the statistics to be expressed publicly. Can require extra configuration
to open. See the Notes section below for more information.
2379 and 2380 TCP For standalone etcd use. Only required to be internally open on the master
host. 2379 is for server-client connections. 2380 is for server-server
connections, and is only required if you have clustered etcd.
4789 UDP For VxLAN use (OpenShift SDN). Required only internally on node hosts.
8443 TCP For use by the OpenShift Container Platform web console, shared with the API
server.
10250 TCP For use by the Kubelet. Required to be externally open on nodes.
Notes
In the above examples, port 4789 is used for User Datagram Protocol (UDP).
When deployments are using the SDN, the pod network is accessed via a service proxy, unless it
is accessing the registry from the same node the registry is deployed on.
OpenShift Container Platform internal DNS cannot be received over SDN. For non-cloud
deployments, this will default to the IP address associated with the default route on the master
host. For cloud deployments, it will default to the IP address associated with the first internal
interface as defined by the cloud metadata.
The master host uses port 10250 to reach the nodes and does not go over SDN. It depends on
the target host of the deployment and uses the computed value of
openshift_public_hostname.
Port 1936 can still be inaccessible due to your iptables rules. Use the following to configure
iptables to open port 1936:
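A typical rule resembles the following; the OS_FIREWALL_ALLOW chain name is an assumption based on the chain the installer normally creates, so verify it against your own iptables configuration:
# iptables -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 1936 -j ACCEPT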
9200 TCP For Elasticsearch API use. Required to be internally open on any infrastructure
nodes so Kibana is able to retrieve logs for display. It can be externally open for
direct access to Elasticsearch by means of a route. The route can be created
using oc expose.
9300 TCP For Elasticsearch inter-cluster use. Required to be internally open on any
infrastructure node so the members of the Elasticsearch cluster might
communicate with each other.
9100 TCP For the Prometheus Node-Exporter, which exports hardware and operating
system metrics. Port 9100 needs to be open on each OpenShift Container
Platform host in order for the Prometheus server to scrape the metrics.
8443 TCP For node hosts to communicate to the master API, for the node hosts to post
back status, to receive tasks, and so on. This port needs to be allowed from
masters and infra nodes to any master and node.
10250 TCP For the Kubernetes cAdvisor, a container resource usage and performance
analysis agent. This port must be allowed from masters and infra nodes to
any master and node. For metrics, the source must be the infra nodes.
8444 TCP Port that the controller manager and scheduler services listen on. Port 8444
must be open on each OpenShift Container Platform host.
1936 TCP (Optional) Required to be open when running the template router to access
statistics. This port must be allowed from the infra nodes to any infra nodes
hosting the routers if Prometheus metrics are enabled on routers. Can be open
externally or internally to connections depending on if you want the statistics to
be expressed publicly. Can require extra configuration to open. See the Notes
section above for more information.
Notes
The Configuring Clusters guide provides instructions for cluster administrators on provisioning an
OpenShift Container Platform cluster with persistent storage using NFS, GlusterFS, Ceph RBD,
OpenStack Cinder, AWS Elastic Block Store (EBS) , GCE Persistent Disks , and iSCSI.
For Amazon Web Services, see the Permissions and the Configuring a Security Group sections.
For OpenStack, see the Permissions and the Configuring a Security Group sections.
Some deployments require that the user override the detected host names and IP addresses for the
hosts. To see the default values, change to the playbook directory and run the openshift_facts
playbook:
$ cd /usr/share/ansible/openshift-ansible
$ ansible-playbook [-i /path/to/inventory] \
playbooks/byo/openshift_facts.yml
IMPORTANT
For Amazon Web Services, see the Overriding Detected IP Addresses and Host Names
section.
Now, verify the detected common settings. If they are not what you expect them to be, you can override
them.
The Configuring Your Inventory File topic discusses the available Ansible variables in greater detail.
Variable Usage
hostname: Resolves to the internal IP address from the instances themselves.
ip: The internal IP address of the instance.
public_hostname: Resolves to the external IP from hosts outside of the cloud.
public_ip: The externally accessible IP address associated with the instance. openshift_public_ip overrides this value.
use_openshift_sdn: For all clouds but GCE, set to true. openshift_use_openshift_sdn overrides this value.
Following the installation process, you can configure OpenShift Container Platform for AWS,
OpenStack, or GCE.
CHAPTER 3. PREPARING YOUR HOSTS
For servers that use x86_64 architecture, use a base installation of Red Hat Enterprise Linux
(RHEL) 7.5 or later with the latest packages from the Extras channel or RHEL Atomic Host 7.4.2
or later.
For cloud-based installations, use a base installation of RHEL 7.5 or later with the latest
packages from the Extras channel.
For servers that use IBM POWER8 architecture, use a base installation of RHEL 7.5 or later with
the latest packages from the Extras channel.
For servers that use IBM POWER9 architecture, use a base installation of RHEL-ALT 7.5 or later
with the latest packages from the Extras channel.
See the following documentation for the respective installation instructions, if required:
Red Hat Enterprise Linux Atomic Host 7 Installation and Configuration Guide
The PATH for the root user on each host must contain the following directories:
/bin
/sbin
/usr/bin
/usr/sbin
1. Generate an SSH key on the host you run the installation playbook on:
# ssh-keygen
2. Distribute the key to the other cluster hosts. You can use a bash loop:
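A minimal sketch of such a loop, assuming the example host names used elsewhere in this guide and the default key path:
# for host in master.example.com \
    node1.example.com \
    node2.example.com; \
    do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; \
    done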
3. Confirm that you can access each host that is listed in the loop through SSH.
NOTE
The no_proxy parameter in /etc/environment file is not the same value as the global
proxy values that you set in your inventory file. The global proxy values configure specific
OpenShift Container Platform services with your proxy settings. See Configuring Global
Proxy Options for details.
If the /etc/environment file contains proxy values, define the following values in the no_proxy
parameter of that file on each node:
Etcd IP addresses. You must provide IP addresses and not host names because etcd access is
controlled by IP address.
NOTE
Because no_proxy does not support CIDR, you can use domain suffixes.
If you use either an http_proxy or https_proxy value, your no_proxy parameter value resembles the
following example:
no_proxy=.internal.example.com,10.0.0.1,10.0.0.2,10.0.0.3,.cluster.local,.svc,localhost,127.0.0.1,172.30.
0.1
# subscription-manager refresh
4. In the output for the previous command, find the pool ID for an OpenShift Container Platform
subscription and attach it:
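For example, substituting the pool ID found in the previous output for the placeholder:
# subscription-manager attach --pool=<pool_id>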
b. List the remaining yum repositories and note their names under repo id, if any:
# yum repolist
c. Use yum-config-manager to disable the remaining yum repositories, or disable all repositories at once:
# yum-config-manager --disable \*
Note that this might take a few minutes if you have a large number of available repositories.
For cloud installations and on-premise installations on x86_64 servers, run the following
command:
# subscription-manager repos \
--enable="rhel-7-server-rpms" \
--enable="rhel-7-server-extras-rpms" \
--enable="rhel-7-server-ose-3.11-rpms" \
--enable="rhel-7-server-ansible-2.9-rpms"
For on-premise installations on IBM POWER8 servers, run the following command:
# subscription-manager repos \
--enable="rhel-7-for-power-le-rpms" \
--enable="rhel-7-for-power-le-extras-rpms" \
--enable="rhel-7-for-power-le-optional-rpms" \
--enable="rhel-7-server-ansible-2.9-for-power-le-rpms" \
--enable="rhel-7-server-for-power-le-rhscl-rpms" \
--enable="rhel-7-for-power-le-ose-3.11-rpms"
For on-premise installations on IBM POWER9 servers, run the following command:
# subscription-manager repos \
--enable="rhel-7-for-power-9-rpms" \
--enable="rhel-7-for-power-9-extras-rpms" \
--enable="rhel-7-for-power-9-optional-rpms" \
--enable="rhel-7-server-ansible-2.9-for-power-9-rpms" \
--enable="rhel-7-server-for-power-9-rhscl-rpms" \
--enable="rhel-7-for-power-9-ose-3.11-rpms"
NOTE
Older versions of OpenShift Container Platform 3.11 supported only Ansible 2.6.
The most recent versions of the playbooks now support Ansible 2.9, which is the
preferred version to use.
IMPORTANT
If your hosts use RHEL 7.5 and you want to accept OpenShift Container Platform’s
default docker configuration (using OverlayFS storage and all default logging options),
do not manually install these packages. These packages are installed when you run the
prerequisites.yml playbook during installation.
If your hosts use RHEL 7.4 or if they use RHEL 7.5 and you want to customize the docker
configuration, install these packages.
# yum install wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-
completion kexec-tools sos psacct
# yum update
# reboot
If you plan to use the containerized installer, install the following package:
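For example, the atomic package is the one commonly used for the containerized installer; verify the package name against your enabled repositories:
# yum install atomic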
If you plan to use the RPM-based installer, install the following package:
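For the RPM-based installer, this is the openshift-ansible package referenced later in this guide:
# yum install openshift-ansible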
This package provides installer utilities and pulls in other packages that the cluster
installation process needs, such as Ansible, playbooks, and related configuration files
1. Ensure the host is up to date by upgrading to the latest Atomic tree if one is available:
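For example, on RHEL Atomic Host:
# atomic host upgrade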
2. After the upgrade is completed and prepared for the next boot, reboot the host:
# reboot
NOTE
# rpm -V docker-1.13.1
# docker version
Containers and the images they are created from are stored in Docker’s storage back end. This storage
is ephemeral and separate from any persistent storage allocated to meet the needs of your applications.
With Ephemeral storage, container-saved data is lost when the container is removed. With persistent
storage, container-saved data remains if the container is removed.
You must configure storage for all master and node hosts because by default each system runs a
container daemon. For containerized installations, you need storage on masters. Also, by default, the
web console and etcd, which require storage, run in containers on masters. Containers run on nodes, so
storage is always required on them.
The size of storage depends on workload, number of containers, the size of the containers being run, and
the containers' storage requirements.
IMPORTANT
If your hosts use RHEL 7.5 and you want to accept OpenShift Container Platform’s
default docker configuration (using OverlayFS storage and all default logging options),
do not manually install these packages. These packages are installed when you run the
prerequisites.yml playbook during installation.
If your hosts use RHEL 7.4 or if they use RHEL 7.5 and you want to customize the docker
configuration, install these packages.
Docker stores images and containers in a graph driver, which is a pluggable storage technology, such as
DeviceMapper, OverlayFS, and Btrfs. Each has advantages and disadvantages. For example,
OverlayFS is faster than DeviceMapper at starting and stopping containers but is not Portable
Operating System Interface for Unix (POSIX) compliant because of the architectural limitations of a
union file system. See the Red Hat Enterprise Linux release notes for information on using OverlayFS
with your version of RHEL.
For more information about the benefits and limitations of DeviceMapper and OverlayFS, see Choosing
a Graph Driver.
If you do not have enough space allocated, see Managing Storage with Docker Formatted Containers
for details about using docker-storage-setup and basic instructions on storage management in RHEL
Atomic Host.
Comparing the Overlay Versus Overlay2 Graph Drivers has more information about the overlay and
overlay2 drivers.
For information about enabling the OverlayFS storage driver for the Docker service, see the Red Hat
Enterprise Linux Atomic Host documentation.
You can configure Docker storage with one of three options: use an additional block device, use an
existing, specified volume group, or use the remaining free space from the volume group where your
root file system is located.
Using an additional block device is the most robust option, but it requires adding another block device to
your host before you configure Docker storage. The other options both require leaving free space
available when you provision your host. Using the remaining free space in the root file system volume
group is known to cause issues with some applications, for example Red Hat Mobile Application Platform
(RHMAP).
1. Create the docker-pool volume using one of the following three options:
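a. To use an additional block device, first point docker-storage-setup at the device. A minimal sketch, assuming /dev/vdc as the spare block device and docker-vg as the volume group name (adjust both for your environment):
# cat <<EOF > /etc/sysconfig/docker-storage-setup
DEVS=/dev/vdc
VG=docker-vg
EOF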
b. Run docker-storage-setup and review the output to ensure the docker-pool volume
was created:
# docker-storage-setup
Checking that no-one is using this disk right now ...
OK
Old situation:
sfdisk: No partitions found
New situation:
Units: sectors of 512 bytes, counting from 0
/dev/vdc4 0 - 0 0 Empty
Warning: partition 1 does not start at a cylinder boundary
Warning: partition 1 does not end at a cylinder boundary
Warning: no primary partition is marked bootable (active)
This does not matter for LILO, but the DOS MBR will not boot this disk.
Successfully wrote the new partition table
If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
Physical volume "/dev/vdc1" successfully created
Volume group "docker-vg" successfully created
Rounding up size to full physical extent 16.00 MiB
Logical volume "docker-poolmeta" created.
Logical volume "docker-pool" created.
WARNING: Converting logical volume docker-vg/docker-pool and docker-
vg/docker-poolmeta to pool's data and metadata volumes.
THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Converted docker-vg/docker-pool to thin pool.
Logical volume "docker-pool" changed.
b. Then run docker-storage-setup and review the output to ensure the docker-pool
volume was created:
# docker-storage-setup
Rounding up size to full physical extent 16.00 MiB
Logical volume "docker-poolmeta" created.
Logical volume "docker-pool" created.
WARNING: Converting logical volume docker-vg/docker-pool and docker-
vg/docker-poolmeta to pool's data and metadata volumes.
THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Converted docker-vg/docker-pool to thin pool.
Logical volume "docker-pool" changed.
To use the remaining free space from the volume group where your root file system is
located:
a. Verify that the volume group where your root file system resides has the required free
space, then run docker-storage-setup and review the output to ensure the docker-
pool volume was created:
# docker-storage-setup
Rounding up size to full physical extent 32.00 MiB
Logical volume "docker-poolmeta" created.
Logical volume "docker-pool" created.
# cat /etc/sysconfig/docker-storage
DOCKER_STORAGE_OPTIONS="--storage-driver devicemapper --storage-opt dm.fs=xfs --
storage-opt dm.thinpooldev=/dev/mapper/rhel-docker--pool --storage-opt
dm.use_deferred_removal=true --storage-opt dm.use_deferred_deletion=true "
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
docker-pool rhel twi-a-t--- 9.29g 0.00 0.12
IMPORTANT
Before using Docker or OpenShift Container Platform, verify that the docker-
pool logical volume is large enough to meet your needs. Make the docker-pool
volume 60% of the available volume group; it will grow to fill the volume group
through LVM monitoring.
If Docker has never run on the host, enable and start the service, then verify that it is
running:
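For example, a typical sequence is:
# systemctl enable docker
# systemctl start docker
# systemctl is-active docker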
a. Re-initialize Docker:
WARNING
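A minimal sketch of re-initializing Docker, assuming the default /var/lib/docker location; this is destructive and removes any existing containers and images on the host:
# systemctl stop docker
# rm -rf /var/lib/docker/*
# systemctl restart docker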
2. If you use a dedicated volume group, remove the volume group and any associated physical
volumes.
See Logical Volume Manager Administration for more detailed information about LVM management.
You can configure image signature verification using the atomic command line interface (CLI), version
1.12.5 or greater. The atomic CLI is pre-installed on RHEL Atomic Host systems.
NOTE
For more on the atomic CLI, see the Atomic CLI documentation.
The following files and directories comprise the trust configuration of a host:
/etc/containers/registries.d/*
/etc/containers/policy.json
You can manage trust configuration directly on each node or manage the files on a separate host and
distribute them to the appropriate nodes by using Ansible, for example. See the Container Image Signing
Integration Guide for an example of automating file distribution with Ansible.
The default configuration is to whitelist all registries, which means that no signature verification
is configured.
3. Customize your trust configuration. In the following example, you whitelist one registry or
namespace, blacklist (reject) untrusted registries, and require signature verification on a vendor
registry:
$ atomic trust add --type insecureAcceptAnything 172.30.1.1:5000/production
4. You can further harden nodes by adding a global reject default trust:
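For example, the atomic CLI provides a default policy subcommand for this:
$ atomic trust default reject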
5. Optionally, review the atomic man page man atomic-trust for more configuration options.
Option Purpose
--log-opt max-size Sets the size at which a new log file is created.
--log-opt max-file Sets the maximum number of log files to be kept per
host.
1. To configure the log file, edit the /etc/sysconfig/docker file. For example, to set the maximum
file size to 1 MB and always keep the last three log files, append max-size=1M and max-file=3
to the OPTIONS= line, ensuring that the values maintain the single quotation mark formatting:
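The resulting line resembles the following; any options already present on your host stay in place, so treat this as an illustration rather than a literal value:
OPTIONS='--selinux-enabled --log-opt max-size=1M --log-opt max-file=3'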
See Docker’s documentation for additional information on how to configure logging drivers .
# ls -lh
/var/lib/docker/containers/f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8/
total 2.6M
-rw-r--r--. 1 root root 5.6K Nov 24 00:12 config.json
-rw-r--r--. 1 root root 649K Nov 24 00:15
f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8-json.log
-rw-r--r--. 1 root root 977K Nov 24 00:15
f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8-json.log.1
-rw-r--r--. 1 root root 977K Nov 24 00:15
f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8-json.log.2
-rw-r--r--. 1 root root 1.3K Nov 24 00:12 hostconfig.json
drwx------. 2 root root 6 Nov 24 00:12 secrets
In OpenShift Container Platform, users trying to run their own images risk filling the entire storage space
on a node host. One solution to this issue is to prevent users from running images with volumes. This
way, the only storage a user has access to can be limited, and the cluster administrator can assign
storage quota.
Using docker-novolume-plugin solves this issue by disallowing starting a container with local volumes
defined. In particular, the plug-in blocks docker run commands that contain:
References to existing volumes that were provisioned with the docker volume command
3. Edit the /etc/sysconfig/docker file and append the following to the OPTIONS list:
--authorization-plugin=docker-novolume-plugin
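Then restart the docker service so that the option takes effect:
# systemctl restart docker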
After you enable this plug-in, containers with local volumes defined fail to start and show the following
error message:
This package comes installed on every RHEL system. However, it is recommended to update to the
latest available version from Red Hat Gluster Storage if your servers use x86_64 architecture. To do
this, the following RPM repository must be enabled:
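For servers with a Red Hat Gluster Storage subscription, the client repository is typically enabled as follows; verify the exact repository name against your subscription:
# subscription-manager repos --enable=rh-gluster-3-client-for-rhel-7-server-rpms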
If glusterfs-fuse is already installed on the nodes, ensure that the latest version is installed:
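For example:
# yum update glusterfs-fuse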
CHAPTER 4. CONFIGURING YOUR INVENTORY FILE
NOTE
See Ansible documentation for details about the format of an inventory file, including
basic details about YAML syntax.
When you install the openshift-ansible RPM package as described in Host preparation, Ansible
dependencies create a file at the default location of /etc/ansible/hosts. However, the file is simply the
default Ansible example and has no variables related specifically to OpenShift Container Platform
configuration. To successfully install OpenShift Container Platform, you must replace the default
contents of the file with your own configuration based on your cluster topology and requirements.
The following sections describe commonly-used variables to set in your inventory file during cluster
installation. Many of the Ansible variables described are optional. For development environments, you
can accept the default values for the required parameters, but you must select appropriate values for
them in production environments.
You can review Example Inventory Files for various examples to use as a starting point for your cluster
installation.
NOTE
Images require a version number policy in order to maintain updates. See the Image
Version Tag Policy section in the Architecture Guide for more information.
[OSEv3:vars]
openshift_master_identity_providers=[{'name': 'htpasswd_auth',
'login': 'true', 'challenge': 'true',
'kind': 'HTPasswdPasswordIdentityProvider',}]
openshift_master_default_subdomain=apps.test.example.com
IMPORTANT
If a parameter value in the Ansible inventory file contains special characters, such as #, {
or }, you must double-escape the value (that is enclose the value in both single and
double quotation marks). For example, to use mypasswordwith###hashsigns as a value
for the variable openshift_cloudprovider_openstack_password, declare it as
openshift_cloudprovider_openstack_password='"mypasswordwith###hashsigns"'
in the Ansible host inventory file.
The following tables describe global cluster variables for use with the Ansible installer:
Variable Purpose
ansible_ssh_user This variable sets the SSH user for the installer to use
and defaults to root. This user must allow SSH-
based authentication without requiring a password. If
using SSH key-based authentication, then the key
must be managed by an SSH agent.
openshift_master_admission_plugin_config=
{"ClusterResourceOverride":{"configuration":{"apiVersion":"v1","kind":"ClusterResourceOverrideConfig","memoryRequestToLimitPercent":"25","cpuRequestToLimitPercent":"25","limitCPUToMemoryPercent":"200"}}}

IMPORTANT

In this value,
openshift_master_admission_plugin_config={"openshift.io/ImagePolicy":{"configuration":{"apiVersion":"v1","executionRules":[{"matchImageAnnotations":[{"key":"images.openshift.io/deny-execution","value":"true"}],"name":"execution-denied","onResources":[{"resource":"pods"},{"resource":"builds"}],"reject":true,"skipOnResolutionFailure":true}],"kind":"ImagePolicyConfig"}}}
is the default parameter value.
openshift_master_cluster_hostname This variable overrides the host name for the cluster,
which defaults to the host name of the master.
openshift_master_cluster_public_hostname This variable overrides the public host name for the
cluster, which defaults to the host name of the
master. If you use an external load balancer, specify
the address of the external load balancer.
For example:
openshift_master_cluster_public_hostname=o
penshift-ansible.public.example.com
openshift_master_ca_certificate Provide the single certificate and key that signs the
OpenShift Container Platform certificates. See
Redeploying a New or Custom OpenShift Container
Platform CA.
openshift_master_session_auth_secrets
openshift_master_session_encryption_secrets
For example:
openshift_docker_additional_registries=example.com:443
NOTE
openshift_metrics_hawkular_hostname This variable sets the host name for integration with
the metrics console by overriding
metricsPublicURL in the master configuration for
cluster metrics. If you alter this variable, ensure the
host name is accessible via your router.
WARNING
Variable Purpose
osm_host_subnet_length This variable specifies the size of the per host subnet
allocated for pod IPs by OpenShift Container
Platform SDN. Defaults to 9 which means that a
subnet of size /23 is allocated to each host; for
example, given the default 10.128.0.0/14 cluster
network, this will allocate 10.128.0.0/23, 10.128.2.0/23,
10.128.4.0/23, and so on. This cannot be re-
configured after deployment.
openshift_sdn_vxlan_port This variable sets the vxlan port number for cluster
network. Defaults to 4789. See Changing the
VXLAN PORT for the cluster network for more
information.
Various defaults used throughout the playbooks and roles used by the installer are based on the
deployment type configuration (usually defined in an Ansible inventory file).
Ensure the openshift_deployment_type parameter in your inventory file’s [OSEv3:vars] section is set
to openshift-enterprise to install the OpenShift Container Platform variant:
[OSEv3:vars]
openshift_deployment_type=openshift-enterprise
[masters]
ec2-52-6-179-239.compute-1.amazonaws.com openshift_public_hostname=ose3-master.public.example.com
The following table describes variables for use with the Ansible installer that can be assigned to
individual host entries:
Variable Purpose
This process replaces administrators having to manually maintain the node configuration uniquely on
each node host. Instead, the contents of a node host’s /etc/origin/node/node-config.yaml file are
now provided by ConfigMaps sourced from the master.
By default during a cluster installation, the installer creates the following default ConfigMaps:
node-config-master
node-config-infra
node-config-compute
The following ConfigMaps are also created, which label nodes into multiple roles:
node-config-all-in-one
node-config-master-infra
The following ConfigMaps are CRI-O variants for each of the existing default node groups:
node-config-master-crio
node-config-infra-crio
node-config-compute-crio
node-config-all-in-one-crio
node-config-master-infra-crio
openshift_node_groups:
- name: node-config-master 1
labels:
- 'node-role.kubernetes.io/master=true' 2
edits: [] 3
- name: node-config-infra
labels:
- 'node-role.kubernetes.io/infra=true'
edits: []
- name: node-config-compute
labels:
- 'node-role.kubernetes.io/compute=true'
edits: []
- name: node-config-master-infra
labels:
- 'node-role.kubernetes.io/infra=true,node-role.kubernetes.io/master=true'
edits: []
- name: node-config-all-in-one
labels:
- 'node-role.kubernetes.io/infra=true,node-role.kubernetes.io/master=true,node-role.kubernetes.io/compute=true'
edits: []
1 Name of the node group.
2 List of node labels associated with the node group. See Node Host Labels for details.
3 Any edits to modify node configuration variables, specified as key-value pairs; empty by default.
If you do not set the openshift_node_groups variable in your inventory file’s [OSEv3:vars] group,
these default values are used. However, if you want to set custom node groups, you must define the
entire openshift_node_groups structure, including all planned node groups, in your inventory file.
The openshift_node_groups value is not merged with the default values, and you must translate the
YAML definitions into a Python dictionary. You can then use the edits field to modify any node
configuration variables by specifying key-value pairs.
NOTE
See Master and Node Configuration Files for reference on configurable node variables.
For example, the following entry in an inventory file defines groups named node-config-master, node-
config-infra, and node-config-compute.
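A minimal sketch of what that entry might look like, using the Python-dictionary form described above (the labels shown are the defaults listed earlier):
openshift_node_groups=[{'name': 'node-config-master', 'labels': ['node-role.kubernetes.io/master=true'], 'edits': []}, {'name': 'node-config-infra', 'labels': ['node-role.kubernetes.io/infra=true'], 'edits': []}, {'name': 'node-config-compute', 'labels': ['node-role.kubernetes.io/compute=true'], 'edits': []}]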
You can also define new node group names with other labels. For example, the following entry in an inventory file defines groups named node-config-master, node-config-infra, node-config-compute, and node-config-compute-storage.
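A sketch of such an entry might extend the previous definition with a fourth group; the label chosen for node-config-compute-storage below is illustrative:
openshift_node_groups=[{'name': 'node-config-master', 'labels': ['node-role.kubernetes.io/master=true'], 'edits': []}, {'name': 'node-config-infra', 'labels': ['node-role.kubernetes.io/infra=true'], 'edits': []}, {'name': 'node-config-compute', 'labels': ['node-role.kubernetes.io/compute=true'], 'edits': []}, {'name': 'node-config-compute-storage', 'labels': ['node-role.kubernetes.io/compute-storage=true'], 'edits': []}]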
When you set an entry in the inventory file, you can also edit the ConfigMap for a node group:
You can use a list to modify multiple key value pairs, such as modifying the node-config-
compute group to add two parameters to the kubelet:
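For example, a sketch of an edits list that passes two kubelet arguments; the argument names and values shown are illustrative:
openshift_node_groups=[{'name': 'node-config-compute', 'labels': ['node-role.kubernetes.io/compute=true'], 'edits': [{'key': 'kubeletArguments.pods-per-core', 'value': ['20']}, {'key': 'kubeletArguments.max-pods', 'value': ['250']}]}]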
You can also use a dictionary as a value, such as modifying the node-config-compute group to set perFSGroup to 512Mi:
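A sketch of such an edit, assuming the perFSGroup setting lives under the volumeConfig.localQuota key of the node configuration:
openshift_node_groups=[{'name': 'node-config-compute', 'labels': ['node-role.kubernetes.io/compute=true'], 'edits': [{'key': 'volumeConfig.localQuota', 'value': {'perFSGroup': '512Mi'}}]}]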
Whenever the openshift_node_group.yml playbook is run, the changes defined in the edits field will
update the related ConfigMap (node-config-compute in this example), which will ultimately affect the
node’s configuration file on the host.
IMPORTANT
Setting openshift_node_group_name per host to a node group is required for all cluster
installations whether you use the default node group definitions and ConfigMaps or are
customizing your own.
The value of openshift_node_group_name is used to select the ConfigMap that configures each node.
For example:
[nodes]
master[1:3].example.com openshift_node_group_name='node-config-master'
infra-node1.example.com openshift_node_group_name='node-config-infra'
infra-node2.example.com openshift_node_group_name='node-config-infra'
node1.example.com openshift_node_group_name='node-config-compute'
node2.example.com openshift_node_group_name='node-config-compute'
If other custom ConfigMaps have been defined in openshift_node_groups, they can also be used. For example:
[nodes]
master[1:3].example.com openshift_node_group_name='node-config-master'
infra-node1.example.com openshift_node_group_name='node-config-infra'
infra-node2.example.com openshift_node_group_name='node-config-infra'
node1.example.com openshift_node_group_name='node-config-compute'
node2.example.com openshift_node_group_name='node-config-compute'
gluster[1:6].example.com openshift_node_group_name='node-config-compute-storage'
You must create your own custom node groups if you want to modify the default labels that are
assigned to node hosts. You can no longer set the openshift_node_labels variable to change labels.
See Node Group Definitions to modify the default node groups.
Other than node-role.kubernetes.io/infra=true (hosts using this group are also referred to as
dedicated infrastructure nodes and discussed further in Configuring Dedicated Infrastructure Nodes),
the actual label names and values are arbitrary and can be assigned however you see fit per your
cluster’s requirements.
Configure all hosts that you designate as masters during the installation process as nodes. By doing so,
the masters are configured as part of the OpenShift SDN. You must add entries for the master hosts to
the [nodes] section:
[nodes]
master[1:3].example.com openshift_node_group_name='node-config-master'
If you want to change the schedulability of a host post-installation, see Marking Nodes as Unschedulable
or Schedulable.
Masters are marked as schedulable nodes by default, so the default node selector is set during cluster installations. The default node selector is defined in the master configuration file’s projectConfig.defaultNodeSelector field and determines which nodes projects use by default when placing pods. It is set to node-role.kubernetes.io/compute=true unless overridden using the osm_default_node_selector variable.
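For example, a minimal sketch of overriding the default node selector in the [OSEv3:vars] section (the label shown is illustrative):
[OSEv3:vars]
osm_default_node_selector='node-role.kubernetes.io/compute=true'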
IMPORTANT
See Setting the Cluster-wide Default Node Selector for steps on adjusting this setting post-installation
if needed.
It is recommended for production environments that you maintain dedicated infrastructure nodes where
the registry and router pods can run separately from pods used for user applications.
The registry and router are only able to run on node hosts with the node-role.kubernetes.io/infra=true
label, which are then considered dedicated infrastructure nodes. Ensure that at least one node host in
your OpenShift Container Platform environment has the node-role.kubernetes.io/infra=true label; you
can use the default node-config-infra, which sets this label:
[nodes]
infra-node1.example.com openshift_node_group_name='node-config-infra'
IMPORTANT
If no node in the [nodes] section matches the selector settings, the default router and registry deployments fail and remain in Pending status.
If you do not intend to use OpenShift Container Platform to manage the registry and router, configure
the following Ansible settings:
openshift_hosted_manage_registry=false
openshift_hosted_manage_router=false
If you use an image registry other than the default registry.redhat.io, you must specify the registry in
the /etc/ansible/hosts file.
As described in Configuring Schedulability on Masters, master hosts are marked schedulable by default.
If you label a master host with node-role.kubernetes.io/infra=true and have no other dedicated
infrastructure nodes, the master hosts must also be marked as schedulable. Otherwise, the registry and
router pods cannot be placed anywhere.
You can use the default node-config-master-infra node group to achieve this:
[nodes]
master.example.com openshift_node_group_name='node-config-master-infra'
osm_project_request_template The template to use for creating projects in response to a projectrequest, specified as a string with the format <namespace>/<template>. Defaults to null, in which case the default template is used.
For example, to configure the API server port:
openshift_master_api_port=3443
The web console port setting (openshift_master_console_port) must match the API server port
(openshift_master_api_port).
The following table describes available pre-install checks that will run before every Ansible installation of
OpenShift Container Platform:
disk_availability This check only runs on etcd, master, and node hosts.
It ensures that the mount path for an OpenShift
Container Platform installation has sufficient disk
space remaining. Recommended disk values are
taken from the latest installation documentation. A
user-defined value for minimum disk space requirements can be set with the openshift_check_min_host_disk_gb cluster variable in your inventory file.
To disable specific pre-install checks, include the variable openshift_disable_check with a comma-
delimited list of check names in your inventory file. For example:
openshift_disable_check=memory_availability,disk_availability
NOTE
A similar set of health checks meant to run for diagnostics on existing clusters can be
found in Ansible-based Health Checks . Another set of checks for checking certificate
expiration can be found in Redeploying Certificates.
oreg_url=registry.redhat.io/openshift3/ose-${component}:${version}
oreg_auth_user="<user>"
oreg_auth_password="<password>"
For more information about setting up the registry access token, see Red Hat Container Registry
Authentication.
If you use an image registry other than the default at registry.redhat.io, specify the registry in the
/etc/ansible/hosts file.
oreg_url=example.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true
NOTE
The default registry requires an authentication token. For more information, see
Accessing and Configuring the Red Hat Registry
For example:
oreg_url=example.com/openshift3/ose-${component}:${version}
oreg_auth_user=${user_name}
oreg_auth_password=${password}
openshift_examples_modify_imagestreams=true
The openshift_hosted_registry_routecertificates variable sets the paths of the registry route certificates by using the certfile, keyfile, and cafile keys. For example:
openshift_hosted_registry_routehost=<path>
openshift_hosted_registry_routetermination=reencrypt
openshift_hosted_registry_routecertificates="{'certfile': '<path>/org-cert.pem', 'keyfile': '<path>/org-privkey.pem', 'cafile': '<path>/org-chain.pem'}"
You configure Red Hat Gluster Storage clusters using variables, which interact with the OpenShift
Container Platform clusters. The variables, which you define in the [OSEv3:vars] group, include host
variables, role variables, and image name and version tag variables.
You use the glusterfs_devices host variable to define the list of block devices to manage the Red Hat
Gluster Storage cluster. Each host in your configuration must have at least one glusterfs_devices
variable, and for every configuration, there must be at least one bare device with no partitions or LVM
PVs.
Role variables control the integration of a Red Hat Gluster Storage cluster into a new or existing
OpenShift Container Platform cluster. You can define a number of role variables, each of which also has
a corresponding variable to optionally configure a separate Red Hat Gluster Storage cluster for use as
storage for an integrated Docker registry.
You can define image name and version tag variables to prevent OpenShift Container Platform pods
from upgrading after an outage, which could lead to a cluster with different OpenShift Container
Platform versions. You can also define these variables to specify the image name and version tags for all
containerized components.
Additional information and examples, including the ones below, can be found at Persistent Storage
Using Red Hat Gluster Storage.
IMPORTANT
See converged mode Considerations for specific host preparations and prerequisites.
1. In your inventory file, include the following variables in the [OSEv3:vars] section, and adjust
them as required for your configuration:
[OSEv3:vars]
...
openshift_storage_glusterfs_namespace=app-storage
openshift_storage_glusterfs_storageclass=true
openshift_storage_glusterfs_storageclass_default=false
openshift_storage_glusterfs_block_deploy=true
openshift_storage_glusterfs_block_host_vol_size=100
openshift_storage_glusterfs_block_storageclass=true
openshift_storage_glusterfs_block_storageclass_default=false
2. Add glusterfs in the [OSEv3:children] section to enable the [glusterfs] group:
[OSEv3:children]
masters
nodes
glusterfs
3. Add a [glusterfs] section with entries for each storage node that will host the GlusterFS
storage. For each node, set glusterfs_devices to a list of raw block devices that will be
completely managed as part of a GlusterFS cluster. There must be at least one device listed.
Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:
For example:
[glusterfs]
node11.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node12.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node13.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
[nodes]
...
node11.example.com openshift_node_group_name="node-config-compute"
node12.example.com openshift_node_group_name="node-config-compute"
node13.example.com openshift_node_group_name="node-config-compute"
A valid image tag is required for your deployment to succeed. Replace <tag> with the version of Red Hat
Gluster Storage that is compatible with OpenShift Container Platform 3.11 as described in the
interoperability matrix for the following variables in your inventory file:
openshift_storage_glusterfs_image=registry.redhat.io/rhgs3/rhgs-server-rhel7:<tag>
openshift_storage_glusterfs_block_image=registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7:<tag>
openshift_storage_glusterfs_s3_image=registry.redhat.io/rhgs3/rhgs-s3-server-rhel7:<tag>
openshift_storage_glusterfs_heketi_image=registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:<tag>
openshift_storage_glusterfs_registry_image=registry.redhat.io/rhgs3/rhgs-server-rhel7:<tag>
openshift_storage_glusterfs_block_registry_image=registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7:<tag>
openshift_storage_glusterfs_s3_registry_image=registry.redhat.io/rhgs3/rhgs-s3-server-rhel7:<tag>
openshift_storage_glusterfs_heketi_registry_image=registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:<tag>
[OSEv3:vars]
...
openshift_storage_glusterfs_namespace=app-storage
openshift_storage_glusterfs_storageclass=true
openshift_storage_glusterfs_storageclass_default=false
openshift_storage_glusterfs_block_deploy=true
openshift_storage_glusterfs_block_host_vol_size=100
openshift_storage_glusterfs_block_storageclass=true
openshift_storage_glusterfs_block_storageclass_default=false
openshift_storage_glusterfs_is_native=false
openshift_storage_glusterfs_heketi_is_native=true
openshift_storage_glusterfs_heketi_executor=ssh
openshift_storage_glusterfs_heketi_ssh_port=22
openshift_storage_glusterfs_heketi_ssh_user=root
openshift_storage_glusterfs_heketi_ssh_sudo=false
openshift_storage_glusterfs_heketi_ssh_keyfile="/root/.ssh/id_rsa"
2. Add glusterfs in the [OSEv3:children] section to enable the [glusterfs] group:
[OSEv3:children]
masters
nodes
glusterfs
3. Add a [glusterfs] section with entries for each storage node that will host the GlusterFS
storage. For each node, set glusterfs_devices to a list of raw block devices that will be
completely managed as part of a GlusterFS cluster. There must be at least one device listed.
Each device must be bare, with no partitions or LVM PVs. Also, set glusterfs_ip to the IP
address of the node. Specifying the variable takes the form:
For example:
[glusterfs]
gluster1.example.com glusterfs_ip=192.168.10.11 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
gluster2.example.com glusterfs_ip=192.168.10.12 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
gluster3.example.com glusterfs_ip=192.168.10.13 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
IMPORTANT
Testing shows issues with using the RHEL NFS server as a storage backend for the
container image registry. This includes the OpenShift Container Registry and Quay.
Therefore, using the RHEL NFS server to back PVs used by core services is not
recommended.
Other NFS implementations on the marketplace might not have these issues. Contact
the individual NFS implementation vendor for more information on any testing that was
possibly completed against these OpenShift core components.
There are several options for enabling registry storage when using the advanced installer:
Option A: NFS Host Group
[OSEv3:vars]
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=10Gi
Option B: External NFS Host
[OSEv3:vars]
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_host=nfs.example.com
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=10Gi
Option C: OpenStack Platform
[OSEv3:vars]
openshift_hosted_registry_storage_kind=openstack
openshift_hosted_registry_storage_access_modes=['ReadWriteOnce']
openshift_hosted_registry_storage_openstack_filesystem=ext4
openshift_hosted_registry_storage_openstack_volumeID=3a650b4f-c8c5-4e0a-8ca5-eaee11f16c57
openshift_hosted_registry_storage_volume_size=10Gi
Option D: AWS or another S3 API compatible storage
[OSEv3:vars]
#openshift_hosted_registry_storage_kind=object
#openshift_hosted_registry_storage_provider=s3
#openshift_hosted_registry_storage_s3_accesskey=access_key_id
#openshift_hosted_registry_storage_s3_secretkey=secret_access_key
#openshift_hosted_registry_storage_s3_bucket=bucket_name
#openshift_hosted_registry_storage_s3_region=bucket_region
#openshift_hosted_registry_storage_s3_chunksize=26214400
#openshift_hosted_registry_storage_s3_rootdirectory=/registry
#openshift_hosted_registry_pullthrough=true
#openshift_hosted_registry_acceptschema2=true
#openshift_hosted_registry_enforcequota=true
If you use a different S3 service, such as Minio or ExoScale, also add the region endpoint parameter:
openshift_hosted_registry_storage_s3_regionendpoint=https://myendpoint.example.com/
Option E: converged mode
IMPORTANT
See converged mode Considerations for specific host preparations and prerequisites.
1. In your inventory file, set the following variables in the [OSEv3:vars] section, and adjust them as required for your configuration:
[OSEv3:vars]
...
openshift_hosted_registry_storage_kind=glusterfs 1
openshift_hosted_registry_storage_volume_size=5Gi
openshift_hosted_registry_selector='node-role.kubernetes.io/infra=true'
2. Add glusterfs_registry in the [OSEv3:children] section to enable the [glusterfs_registry] group:
[OSEv3:children]
masters
nodes
glusterfs_registry
3. Add a [glusterfs_registry] section with entries for each storage node that will host the
GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will
be completely managed as part of a GlusterFS cluster. There must be at least one device listed.
Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:
For example:
[glusterfs_registry]
node11.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node12.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node13.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
[nodes]
...
node11.example.com openshift_node_group_name="node-config-infra"
node12.example.com openshift_node_group_name="node-config-infra"
node13.example.com openshift_node_group_name="node-config-infra"
Option F: Google Cloud Storage (GCS) bucket on Google Compute Engine (GCE)
A GCS bucket must already exist.
[OSEv3:vars]
openshift_hosted_registry_storage_provider=gcs
openshift_hosted_registry_storage_gcs_bucket=bucket01
openshift_hosted_registry_storage_gcs_keyfile=test.key
openshift_hosted_registry_storage_gcs_rootdirectory=/registry
Option G: vSphere Volume
When using vSphere volume for the registry, you must set the storage access mode to ReadWriteOnce
and the replica count to 1:
[OSEv3:vars]
openshift_hosted_registry_storage_kind=vsphere
openshift_hosted_registry_storage_access_modes=['ReadWriteOnce']
openshift_hosted_registry_storage_annotations=['volume.beta.kubernetes.io/storage-provisioner:
kubernetes.io/vsphere-volume']
openshift_hosted_registry_replicas=1
In order to simplify this configuration, the following Ansible variables can be specified at a cluster or host
level to apply these settings uniformly across your environment.
NOTE
See Configuring Global Build Defaults and Overrides for more information on how the
proxy environment is defined for builds.
IMPORTANT
If you are changing the default firewall, ensure that each host in your cluster is
using the same firewall type to prevent inconsistencies.
Do not use firewalld with OpenShift Container Platform installed on Atomic Host. firewalld is not supported on Atomic Host.
NOTE
While iptables is the default firewall, firewalld is recommended for new installations.
OpenShift Container Platform uses iptables as the default firewall, but you can configure your cluster to
use firewalld during the install process.
Because iptables is the default firewall, OpenShift Container Platform is designed to have it configured
automatically. However, iptables rules can break OpenShift Container Platform if not configured
correctly. The advantages of firewalld include allowing multiple objects to safely share the firewall rules.
To use firewalld as the firewall for an OpenShift Container Platform installation, add the
os_firewall_use_firewalld variable to the list of configuration variables in the Ansible host file at install:
[OSEv3:vars]
os_firewall_use_firewalld=True 1
1 Setting this variable to true opens the required ports and adds rules to the default zone, ensuring
that firewalld is configured correctly.
NOTE
Using the firewalld default configuration comes with limited configuration options, and
cannot be overridden. For example, while you can set up a storage network with
interfaces in multiple zones, the interface that nodes communicate on must be in the
default zone.
You can set the session name and maximum number of seconds with
openshift_master_session_name and openshift_master_session_max_seconds:
openshift_master_session_name=ssn
openshift_master_session_max_seconds=3600
openshift_master_session_auth_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']
openshift_master_session_encryption_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']
Custom serving certificates for the public host names of the OpenShift Container Platform API and web
console can be deployed during cluster installation and are configurable in the inventory file.
NOTE
Configure custom certificates for the host name associated with the publicMasterURL,
which you set as the openshift_master_cluster_public_hostname parameter value.
Using a custom serving certificate for the host name associated with the masterURL
(openshift_master_cluster_hostname) results in TLS errors because infrastructure
components attempt to contact the master API using the internal masterURL host.
Certificate and key file paths can be configured using the openshift_master_named_certificates
cluster variable:
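A minimal sketch of such an entry (the file paths are placeholders):
openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key", "cafile": "/path/to/custom-ca1.crt"}]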
File paths must be local to the system where Ansible will be run. Certificates are copied to master hosts
and are deployed in the /etc/origin/master/named_certificates/ directory.
Ansible detects a certificate’s Common Name and Subject Alternative Names. Detected names can
be overridden by providing the "names" key when setting openshift_master_named_certificates:
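For example, a sketch that adds an explicit names list (the host name and paths are placeholders):
openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key", "names": ["public-master-host.com"], "cafile": "/path/to/custom-ca1.crt"}]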
If you want to overwrite openshift_master_named_certificates with the provided value (or no value),
specify the openshift_master_overwrite_named_certificates cluster variable:
openshift_master_overwrite_named_certificates=true
For a more complete example, consider the following cluster variables in an inventory file:
openshift_master_cluster_method=native
openshift_master_cluster_hostname=lb-internal.openshift.com
openshift_master_cluster_public_hostname=custom.openshift.com
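The named-certificates entry that accompanies these variables might look like the following sketch, with the names value matching the public host name above (file paths are placeholders):
openshift_master_named_certificates=[{"certfile": "/root/STAR.openshift.com.crt", "keyfile": "/root/STAR.openshift.com.key", "names": ["custom.openshift.com"]}]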
To overwrite the certificates on a subsequent Ansible run, set the following parameter values:
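A sketch of those values, reusing the certificate entry from the example above together with the overwrite flag:
openshift_master_named_certificates=[{"certfile": "/root/STAR.openshift.com.crt", "keyfile": "/root/STAR.openshift.com.key", "names": ["custom.openshift.com"]}]
openshift_master_overwrite_named_certificates=true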
You can also configure how long the cluster’s certificates remain valid, in days, by setting variables like the following:
[OSEv3:vars]
openshift_hosted_registry_cert_expire_days=730
openshift_ca_cert_expire_days=1825
openshift_master_cert_expire_days=730
etcd_ca_default_days=1825
These values are also used when redeploying certificates via Ansible post-installation.
To disable automatic deployment of Prometheus Cluster Monitoring, set the following variable in your inventory file:
[OSEv3:vars]
openshift_cluster_monitoring_operator_install=false
For more information on Prometheus Cluster Monitoring and its configuration, see Prometheus Cluster
Monitoring documentation.
To enable cluster metrics during installation, set the following variable in your inventory file:
[OSEv3:vars]
openshift_metrics_install_metrics=true
The metrics public URL can be set during cluster installation using the
openshift_metrics_hawkular_hostname Ansible variable, which defaults to:
https://hawkular-metrics.{{openshift_master_default_subdomain}}/hawkular/metrics
If you alter this variable, ensure the host name is accessible via your router.
openshift_metrics_hawkular_hostname=hawkular-metrics.{{openshift_master_default_subdomain}}
IMPORTANT
In accordance with upstream Kubernetes rules, metrics can be collected only on the default interface, eth0.
IMPORTANT
Testing shows issues with using the RHEL NFS server as a storage backend for the
container image registry. This includes Cassandra for metrics storage. Therefore, using
the RHEL NFS server to back PVs used by core services is not recommended.
However, NFS/SAN implementations on the marketplace might not have issues backing
or providing storage to this component. Contact the individual NFS/SAN implementation
vendor for more information on any testing that was possibly completed against these
OpenShift core components.
There are three options for enabling cluster metrics storage during cluster installation:
Option A: Dynamic
If your OpenShift Container Platform environment supports dynamic volume provisioning for your cloud
provider, use the following variable:
[OSEv3:vars]
openshift_metrics_cassandra_storage_type=dynamic
If there are multiple default dynamically provisioned volume types, such as gluster-storage and
glusterfs-storage-block, you can specify the provisioned volume type by variable. Use the following
variables:
[OSEv3:vars]
openshift_metrics_cassandra_storage_type=pv
openshift_metrics_cassandra_pvc_storage_class_name=glusterfs-storage-block
Option B: NFS Host Group
[OSEv3:vars]
openshift_metrics_storage_kind=nfs
openshift_metrics_storage_access_modes=['ReadWriteOnce']
openshift_metrics_storage_nfs_directory=/exports
openshift_metrics_storage_nfs_options='*(rw,root_squash)'
openshift_metrics_storage_volume_name=metrics
openshift_metrics_storage_volume_size=10Gi
Option C: External NFS Host
[OSEv3:vars]
openshift_metrics_storage_kind=nfs
openshift_metrics_storage_access_modes=['ReadWriteOnce']
openshift_metrics_storage_host=nfs.example.com
openshift_metrics_storage_nfs_directory=/exports
openshift_metrics_storage_volume_name=metrics
openshift_metrics_storage_volume_size=10Gi
As a result, the installer and update playbooks require an option to enable the use of NFS with core
infrastructure components.
If you see the following messages when upgrading or installing your cluster, then an additional step is
required.
[OSEv3:vars]
openshift_enable_unsupported_configurations=True
[OSEv3:vars]
openshift_logging_install_logging=true
NOTE
When installing cluster logging, you must also specify a node selector, such as
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"} in the
Ansible inventory file.
For more information on the available cluster logging variables, see Specifying Logging Ansible
Variables.
IMPORTANT
Testing shows issues with using the RHEL NFS server as a storage backend for the
container image registry. This includes ElasticSearch for logging storage. Therefore, using
the RHEL NFS server to back PVs used by core services is not recommended.
Because Elasticsearch does not implement a custom deletionPolicy, the use of NFS storage as a volume or a persistent volume is not supported for Elasticsearch storage: Lucene and the default deletionPolicy rely on file system behavior that NFS does not supply. Data corruption and other problems can occur.
NFS implementations on the marketplace might not have these issues. Contact the
individual NFS implementation vendor for more information on any testing they might
have performed against these OpenShift core components.
There are three options for enabling cluster logging storage during cluster installation:
Option A: Dynamic
If your OpenShift Container Platform environment has dynamic volume provisioning, it could be
configured either via the cloud provider or by an independent storage provider. For instance, the cloud
provider could have a StorageClass with provisioner kubernetes.io/gce-pd on GCE, and an independent
storage provider such as GlusterFS could have a StorageClass with provisioner
kubernetes.io/glusterfs. In either case, use the following variable:
[OSEv3:vars]
openshift_logging_es_pvc_dynamic=true
For additional information on dynamic provisioning, see Dynamic provisioning and creating storage
classes.
If there are multiple default dynamically provisioned volume types, such as gluster-storage and
glusterfs-storage-block, you can specify the provisioned volume type by variable. Use the following
variables:
[OSEv3:vars]
openshift_logging_elasticsearch_storage_type=pvc
openshift_logging_es_pvc_storage_class_name=glusterfs-storage-block
Option B: NFS Host Group
[OSEv3:vars]
openshift_logging_storage_kind=nfs
openshift_logging_storage_access_modes=['ReadWriteOnce']
openshift_logging_storage_nfs_directory=/exports 1
openshift_logging_storage_nfs_options='*(rw,root_squash)' 2
openshift_logging_storage_volume_name=logging 3
openshift_logging_storage_volume_size=10Gi
openshift_enable_unsupported_configurations=true
openshift_logging_elasticsearch_storage_type=pvc
openshift_logging_es_pvc_size=10Gi
openshift_logging_es_pvc_storage_class_name=''
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_prefix=logging
Option C: External NFS Host
[OSEv3:vars]
openshift_logging_storage_kind=nfs
openshift_logging_storage_access_modes=['ReadWriteOnce']
openshift_logging_storage_host=nfs.example.com 1
openshift_logging_storage_nfs_directory=/exports 2
openshift_logging_storage_volume_name=logging 3
openshift_logging_storage_volume_size=10Gi
openshift_enable_unsupported_configurations=true
openshift_logging_elasticsearch_storage_type=pvc
openshift_logging_es_pvc_size=10Gi
openshift_logging_es_pvc_storage_class_name=''
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_prefix=logging
As a result, the installer and update playbooks require an option to enable the use of NFS with core
infrastructure components.
If you see the following messages when upgrading or installing your cluster, then an additional step is
required.
[OSEv3:vars]
openshift_enable_unsupported_configurations=True
To disable automatic deployment of the service catalog, set the following cluster variable in your
inventory file:
openshift_enable_service_catalog=false
For example:
openshift_service_catalog_image="docker-registry.default.example.com/openshift/ose-service-
catalog:${version}"
openshift_service_catalog_image_prefix="docker-registry-default.example.com/openshift/ose-"
openshift_service_catalog_image_version="v3.9.30"
If you do not want to install the OAB, set the ansible_service_broker_install parameter value to false
in the inventory file:
ansible_service_broker_install=false
The OAB deploys its own etcd instance separate from the etcd used by the rest of the OpenShift
Container Platform cluster. The OAB’s etcd instance requires separate storage using persistent volumes
(PVs) to function. If no PV is available, etcd will wait until the PV can be satisfied. The OAB application
will enter a CrashLoop state until its etcd instance is available.
Some Ansible playbook bundles (APBs) also require a PV for their own usage in order to deploy. For example, each of the database APBs has two plans: the Development plan uses ephemeral storage and does not require a PV, while the Production plan uses persistent storage and does require a PV.
APB PV Required?
mediawiki-apb Yes
NOTE
The following example shows usage of an NFS host to provide the required PVs, but
other persistent storage providers can be used instead.
1. In your inventory file, add nfs to the [OSEv3:children] section to enable the [nfs] group:
[OSEv3:children]
masters
nodes
nfs
2. Add a [nfs] group section and add the host name for the system that will be the NFS host:
[nfs]
master1.example.com
3. In the [OSEv3:vars] section, add variables similar to the following to provide NFS-backed persistent storage for the OAB’s etcd instance:
openshift_hosted_etcd_storage_kind=nfs
openshift_hosted_etcd_storage_nfs_options="*(rw,root_squash,sync,no_wdelay)"
openshift_hosted_etcd_storage_nfs_directory=/opt/osev3-etcd 1
openshift_hosted_etcd_storage_volume_name=etcd-vol2 2
openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"]
openshift_hosted_etcd_storage_volume_size=1G
openshift_hosted_etcd_storage_labels={'storage': 'etcd'}
These settings create a persistent volume that is attached to the OAB’s etcd instance during
cluster installation.
4.22.1.2. Configuring the OpenShift Ansible Broker for Local APB Development
In order to do APB development with the OpenShift Container Registry in conjunction with the OAB, a
whitelist of images the OAB can access must be defined. If a whitelist is not defined, the broker will
ignore APBs and users will not see any APBs available.
By default, the whitelist is empty so that a user cannot add APB images to the broker without a cluster
administrator configuring the broker. To whitelist all images that end in -apb:
ansible_service_broker_local_registry_whitelist=['.*-apb$']
If you do not want to install the TSB, set the template_service_broker_install parameter value to false:
template_service_broker_install=false
To configure the TSB, one or more projects must be defined as the broker’s source namespace(s) for
loading templates and image streams into the service catalog. Set the source projects by modifying the
following in your inventory file’s [OSEv3:vars] section:
openshift_template_service_broker_namespaces=['openshift','myproject']
IMPORTANT
For more information on Red Hat Technology Preview features support scope, see
https://access.redhat.com/support/offerings/techpreview/.
The Technology Preview Operator Framework includes the Operator Lifecycle Manager (OLM). You
can optionally install the OLM during cluster installation by setting the following variables in your
inventory file:
NOTE
Alternatively, the Technology Preview Operator Framework can be installed after cluster
installation. See Installing Operator Lifecycle Manager using Ansible for separate
instructions.
openshift_enable_olm=true
openshift_additional_registry_credentials=[{'host':'registry.connect.redhat.com','user':'<your_user_name>','password':'<your_password>','test_image':'mongodb/enterprise-operator:0.3.2'}]
Set user and password to the credentials that you use to log in to the Red Hat Customer
Portal at https://access.redhat.com.
The test_image represents an image that will be used to test the credentials you provided.
After your cluster installation has completed successfully, see Launching your first Operator for further steps on using the OLM as a cluster administrator during this Technology Preview phase.
CHAPTER 5. EXAMPLE INVENTORY FILES
5.1. OVERVIEW
After getting to know the basics of configuring your own inventory file , you can review the following
example inventories which describe various environment topographies, including using multiple masters
for high availability. You can choose an example that matches your requirements, modify it to match
your own environment, and use it as your inventory file when running the installation.
IMPORTANT
The following example inventories use the default set of node groups when setting
openshift_node_group_name per host in the [nodes] group. To define and use your
own custom node group definitions, set the openshift_node_groups variable in the
inventory file. See Defining Node Groups and Host Mappings for details.
NOTE
Moving from a single master cluster to multiple masters after installation is not
supported.
node2.example.com
infra-node2.example.com
You can see these example hosts present in the [masters], [etcd], and [nodes] sections of the
following example inventory file:
# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd
openshift_deployment_type=openshift-enterprise
IMPORTANT
See Configuring Node Host Labels to ensure you understand the default node selector
requirements and node label considerations beginning in OpenShift Container Platform
3.9.
To use this example, modify the file to match your environment and specifications, and save it as
/etc/ansible/hosts.
etcd1.example.com etcd
etcd2.example.com
etcd3.example.com
node2.example.com
infra-node2.example.com
You can see these example hosts present in the [masters], [nodes], and [etcd] sections of the
following example inventory file:
# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd
master.example.com openshift_node_group_name='node-config-master'
node1.example.com openshift_node_group_name='node-config-compute'
node2.example.com openshift_node_group_name='node-config-compute'
infra-node1.example.com openshift_node_group_name='node-config-infra'
infra-node2.example.com openshift_node_group_name='node-config-infra'
IMPORTANT
See Configuring Node Host Labels to ensure you understand the default node selector
requirements and node label considerations beginning in OpenShift Container Platform
3.9.
To use this example, modify the file to match your environment and specifications, and save it as
/etc/ansible/hosts.
NOTE
Moving from a single master cluster to multiple masters after installation is not
supported.
When configuring multiple masters, the cluster installation process supports the native high availability
(HA) method. This method leverages the native HA master capabilities built into OpenShift Container
Platform and can be combined with any load balancing solution.
If a host is defined in the [lb] section of the inventory file, Ansible installs and configures HAProxy
automatically as the load balancing solution for the masters. If no host is defined, it is assumed you have
pre-configured an external load balancing solution of your choice to balance the master API (port 8443)
on all master hosts.
NOTE
This HAProxy load balancer is intended to demonstrate the API server’s HA mode and is not recommended for production environments. If you are deploying to a cloud provider, Red Hat recommends deploying a cloud-native TCP-based load balancer or taking other steps to provide a highly available load balancer.
The HAProxy load balancer is only used to load balance traffic to the API server and does
not load balance any user application traffic.
See the External Load Balancer Integrations example in Github for more information. For more on the
high availability master architecture, see Kubernetes Infrastructure.
NOTE
The cluster installation process does not currently support multiple HAProxy load
balancers in an active-passive setup. See the Load Balancer Administration
documentation for post-installation amendments.
master2.example.com
master3.example.com
etcd1.example.com etcd
etcd2.example.com
etcd3.example.com
node2.example.com
infra-node2.example.com
You can see these example hosts present in the [masters], [etcd], [lb], and [nodes] sections of the
following example inventory file:
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb
node2.example.com openshift_node_group_name='node-config-compute'
infra-node1.example.com openshift_node_group_name='node-config-infra'
infra-node2.example.com openshift_node_group_name='node-config-infra'
IMPORTANT
See Configuring Node Host Labels to ensure you understand the default node selector
requirements and node label considerations beginning in OpenShift Container Platform
3.9.
To use this example, modify the file to match your environment and specifications, and save it as
/etc/ansible/hosts.
master3.example.com
node2.example.com
infra-node2.example.com
You can see these example hosts present in the [masters], [etcd], [lb], and [nodes] sections of the
following example inventory file:
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb
IMPORTANT
See Configuring Node Host Labels to ensure you understand the default node selector
requirements and node label considerations beginning in OpenShift Container Platform
3.9.
To use this example, modify the file to match your environment and specifications, and save it as
/etc/ansible/hosts.
CHAPTER 6. INSTALLING OPENSHIFT CONTAINER PLATFORM
IMPORTANT
Running Ansible playbooks with the --tags or --check options is not supported by Red
Hat.
6.1. PREREQUISITES
Before installing OpenShift Container Platform, prepare your cluster hosts:
If you will have a large cluster, review the Scaling and Performance Guide for suggestions for
optimizing installation time.
Prepare your hosts . This process includes verifying system and environment requirements per
component type, installing and configuring the docker service, and installing Ansible version 2.6
or later. You must install Ansible to run the installation playbooks.
Configure your inventory file to define your environment and OpenShift Container Platform
cluster configuration. Both your initial installation and future cluster upgrades are based on this
inventory file.
If you are installing OpenShift Container Platform on Red Hat Enterprise Linux, decide if you
want to use the RPM or system container installation method. The system container method is
required for RHEL Atomic Host systems.
IMPORTANT
Do not run OpenShift Ansible playbooks under nohup. Using nohup with the playbooks
causes file descriptors to be created but not closed. Therefore, the system can run out of
files to open and the playbook fails.
1. Change to the playbook directory and run the prerequisites.yml playbook. This playbook installs
required software packages, if any, and modifies the container runtimes. Unless you need to
configure the container runtimes, run this playbook only once, before you deploy a cluster the
first time:
$ cd /usr/share/ansible/openshift-ansible
$ ansible-playbook [-i /path/to/inventory] \ 1
playbooks/prerequisites.yml
1 If your inventory file is not in the /etc/ansible/hosts directory, specify -i and the path to
the inventory file.
2. Change to the playbook directory and run the deploy_cluster.yml playbook to initiate the
cluster installation:
$ cd /usr/share/ansible/openshift-ansible
$ ansible-playbook [-i /path/to/inventory] \ 1
playbooks/deploy_cluster.yml
1 If your inventory file is not in the /etc/ansible/hosts directory, specify -i and the path to
the inventory file.
3. If your installation succeeded, verify the installation. If your installation failed, retry the installation.
The installer image can be used as a system container. System containers are stored and run outside of
the traditional docker service. This enables running the installer image from one of the target hosts
without concern for the install restarting docker on the host.
To use the Atomic CLI to run the installer as a run-once system container, perform the following steps
as the root user:
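A sketch of the prerequisites invocation, mirroring the deploy_cluster.yml invocation shown below (only the PLAYBOOK_FILE value differs; paths are placeholders):
# atomic install --system \
--storage=ostree \
--set INVENTORY_FILE=/path/to/inventory \ 1
--set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml \
--set OPTS="-v" \
registry.redhat.io/openshift3/ose-ansible:v3.11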
1 Specify the location on the local host for your inventory file.
This command runs a set of prerequisite tasks by using the inventory file specified and the root user’s SSH configuration.
To initiate the cluster installation, run the installer image again, this time with the deploy_cluster.yml playbook:
# atomic install --system \
--storage=ostree \
--set INVENTORY_FILE=/path/to/inventory \ 1
--set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
\
--set OPTS="-v" \
registry.redhat.io/openshift3/ose-ansible:v3.11
1 Specify the location on the local host for your inventory file.
This command initiates the cluster installation by using the inventory file specified and the root
user’s SSH configuration. It logs the output on the terminal and also saves it in the
/var/log/ansible.log file. The first time this command is run, the image is imported into OSTree
storage (system containers use this rather than docker daemon storage). On subsequent runs,
it reuses the stored image.
If for any reason the installation fails, before re-running the installer, see Known Issues to check
for any specific instructions or workarounds.
You can use the PLAYBOOK_FILE environment variable to specify other playbooks you want to run by
using the containerized installer. The default value of the PLAYBOOK_FILE is
/usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml, which is the main cluster
installation playbook, but you can set it to the path of another playbook inside the container.
For example, to run the pre-install checks playbook before installation, use the following command:
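A sketch of such an invocation; the openshift-checks/pre-install.yml path is an assumption based on the pre-install checks described earlier in this guide:
# atomic install --system \
--storage=ostree \
--set INVENTORY_FILE=/path/to/inventory \
--set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/openshift-checks/pre-install.yml \ 1
--set OPTS="-v" \
registry.redhat.io/openshift3/ose-ansible:v3.11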
1 Set PLAYBOOK_FILE to the full path of the playbook starting at the playbooks/ directory.
Playbooks are located in the same locations as with the RPM-based installer.
The installer image can also run as a docker container anywhere that docker can run.
WARNING
This method must not be used to run the installer on one of the hosts being
configured, because the installer might restart docker on the host and disrupt the
installation.
NOTE
Although this method and the system container method above use the same image, they
run with different entry points and contexts, so runtime parameters are not the same.
At a minimum, when running the installer as a docker container you must provide:
Here is an example of how to run an install via docker, which must be run by a non-root user with access to docker:
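A sketch of such an invocation; the mount paths inside the container and the inventory location on the host are assumptions:
$ docker run -t -u `id -u` \ 1
-v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \
-v $HOME/ansible/hosts:/tmp/inventory:Z \
-e INVENTORY_FILE=/tmp/inventory \
-e PLAYBOOK_FILE=playbooks/deploy_cluster.yml \
-e OPTS="-v" \
registry.redhat.io/openshift3/ose-ansible:v3.11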
1 -u `id -u` makes the container run with the same UID as the current user, which allows that
user to use the SSH key inside the container. SSH private keys are expected to be readable
only by their owner.
separate copy of the SSH key or directory so that the original file labels remain untouched.
To install OpenShift Container Platform on an existing OpenStack installation, use the OpenStack
playbook. For more information about the playbook, including detailed prerequisites, see the OpenStack
Provisioning readme file.
IMPORTANT
While RHEL Atomic Host is supported for running OpenShift Container Platform services
as system containers, the installation method uses Ansible, which is not available in RHEL
Atomic Host. The RPM-based installer must therefore be run from a RHEL 7 system. The
host initiating the installation does not need to be intended for inclusion in the OpenShift
Container Platform cluster, but it can be. Alternatively, a containerized version of the
installer is available as a system container, which can be run from a RHEL Atomic Host
system.
1. Review the Known Issues to check for any specific instructions or workarounds.
If you did not modify the SDN configuration or generate new certificates, retry the
installation.
If you modified the SDN configuration, generated new certificates, or the installer fails
again, you must either start over with a clean operating system installation or uninstall and
install again.
If you use virtual machines, start from a new image or uninstall and install again.
The following table lists the playbooks in the order that they must run:
metrics-server /usr/share/ansible/openshift-ansible/playbooks/metrics-server/config.yml
Availability Monitoring Install /usr/share/ansible/openshift-ansible/playbooks/openshift-monitor-availability/config.yml
1. Verify that the master is started and nodes are registered and reporting in Ready status. On the
master host, run the following command as root:
# oc get nodes
NAME STATUS ROLES AGE VERSION
master.example.com Ready master 7h v1.9.1+a0ce1bc657
node1.example.com Ready compute 7h v1.9.1+a0ce1bc657
node2.example.com Ready compute 7h v1.9.1+a0ce1bc657
2. To verify that the web console is installed correctly, use the master host name and the web
console port number to access the web console with a web browser.
For example, for a master host with a host name of master.openshift.com and using the
default port of 8443, the web console URL is https://master.openshift.com:8443/console.
1. First, verify that the etcd package, which provides the etcdctl command, is installed:
2. On a master host, verify the etcd cluster health, substituting for the FQDNs of your etcd hosts
in the following:
# etcdctl -C \
  https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \
  --ca-file=/etc/origin/master/master.etcd-ca.crt \
  --cert-file=/etc/origin/master/master.etcd-client.crt \
  --key-file=/etc/origin/master/master.etcd-client.key cluster-health
# etcdctl -C \
  https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \
  --ca-file=/etc/origin/master/master.etcd-ca.crt \
  --cert-file=/etc/origin/master/master.etcd-client.crt \
  --key-file=/etc/origin/master/master.etcd-client.key member list
If you installed multiple masters using HAProxy as your load balancer, browse to the following URL and check HAProxy's status:
http://<lb_hostname>:9000 1
1 Provide the load balancer host name listed in the [lb] section of your inventory file.
You can verify your installation by consulting the HAProxy Configuration documentation .
Due to a known issue, after running the installation, if NFS volumes are provisioned for any
component, the following directories might be created whether their components are being
deployed to NFS volumes or not:
/exports/logging-es
/exports/logging-es-ops/
/exports/metrics/
/exports/prometheus
/exports/prometheus-alertbuffer/
/exports/prometheus-alertmanager/
Deploy a router.
CHAPTER 7. DISCONNECTED INSTALLATION
After the installation components are available to your node hosts, you install OpenShift Container
Platform by following the standard installation steps.
After you install OpenShift Container Platform, you must make the S2I builder images that you pulled
available to the cluster.
7.1. PREREQUISITES
Review OpenShift Container Platform’s overall architecture and plan your environment
topology.
Obtain a Red Hat Enterprise Linux (RHEL) 7 server that you have root access to with access to
the Internet and at least 110 GB of disk space. You download the required software repositories
and container images to this computer.
Plan to maintain a webserver within your disconnected environment to serve the mirrored
repositories. You copy the repositories from the Internet-connected host to this webserver,
either over the network or by using physical media in disconnected deployments.
Provide a source control repository. After installation, your nodes must access source code in a
source code repository, such as Git.
When building applications in OpenShift Container Platform, your build might contain external
dependencies, such as a Maven Repository or Gem files for Ruby applications.
Using a Red Hat Satellite 6.1 server that acts as a container image registry.
IMPORTANT
You must obtain the required images and software components on a system with the
same architecture as the cluster that is in your disconnected environment.
1. To ensure that the packages are not deleted after you sync the repository, import the GPG key:
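For example, assuming the standard Red Hat release key location on RHEL 7:
$ rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release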
2. Register the server with the Red Hat Customer Portal. You must use the credentials that are
associated with the account that has access to the OpenShift Container Platform subscriptions:
$ subscription-manager register
$ subscription-manager refresh
a. Find an available subscription pool that provides the OpenShift Container Platform
channels:
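For example, a sketch using subscription-manager to list matching pools (the search string is illustrative):
# subscription-manager list --available --matches '*OpenShift*'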
For cloud installations and on-premise installations on x86_64 servers, run the following
command:
# subscription-manager repos \
--enable="rhel-7-server-rpms" \
--enable="rhel-7-server-extras-rpms" \
--enable="rhel-7-server-ose-3.11-rpms" \
--enable="rhel-7-server-ansible-2.9-rpms"
For on-premise installations on IBM POWER8 servers, run the following command:
# subscription-manager repos \
--enable="rhel-7-for-power-le-rpms" \
--enable="rhel-7-for-power-le-extras-rpms" \
--enable="rhel-7-for-power-le-optional-rpms" \
--enable="rhel-7-server-ansible-2.9-for-power-le-rpms" \
--enable="rhel-7-server-for-power-le-rhscl-rpms" \
--enable="rhel-7-for-power-le-ose-3.11-rpms"
For on-premise installations on IBM POWER9 servers, run the following command:
# subscription-manager repos \
--enable="rhel-7-for-power-9-rpms" \
--enable="rhel-7-for-power-9-extras-rpms" \
--enable="rhel-7-for-power-9-optional-rpms" \
--enable="rhel-7-server-ansible-2.9-for-power-9-rpms" \
--enable="rhel-7-server-for-power-9-rhscl-rpms" \
--enable="rhel-7-for-power-9-ose-3.11-rpms"
NOTE
Older versions of OpenShift Container Platform 3.11 supported only Ansible 2.6.
The most recent versions of the playbooks now support Ansible 2.9, which is the
preferred version to use.
The yum-utils package provides the reposync utility, which lets you mirror yum repositories,
and you can use the createrepo package to create a usable yum repository from a directory.
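If these tools are not already installed, a minimal sketch of installing them:
# yum -y install yum-utils createrepo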
7. Make a directory to store the software in the server’s storage or to a USB drive or other external
device:
$ mkdir -p </path/to/repos>
IMPORTANT
If you can re-connect this server to the disconnected LAN and use it as the
repository server, store the files locally. If you cannot, use USB-connected
storage so you can transport the software to a repository server in your
disconnected LAN.
8. Sync the packages and create the repository for each of them.
$ for repo in \
rhel-7-server-rpms \
rhel-7-server-extras-rpms \
rhel-7-server-ansible-2.9-rpms \
rhel-7-server-ose-3.11-rpms
do
reposync --gpgcheck -lm --repoid=${repo} --download_path=</path/to/repos> 1
createrepo -v </path/to/repos/>${repo} -o </path/to/repos/>${repo} 2
done
For on-premise installations on IBM POWER8 servers, run the following command:
$ for repo in \
rhel-7-for-power-le-rpms \
rhel-7-for-power-le-extras-rpms \
rhel-7-for-power-le-optional-rpms \
rhel-7-server-ansible-2.9-for-power-le-rpms \
rhel-7-server-for-power-le-rhscl-rpms \
rhel-7-for-power-le-ose-3.11-rpms
do
reposync --gpgcheck -lm --repoid=${repo} --download_path=</path/to/repos> 1
createrepo -v </path/to/repos/>${repo} -o </path/to/repos/>${repo} 2
done
For on-premise installations on IBM POWER9 servers, run the following command:
$ for repo in \
rhel-7-for-power-9-rpms \
rhel-7-for-power-9-extras-rpms \
rhel-7-for-power-9-optional-rpms \
rhel-7-server-ansible-2.9-for-power-9-rpms \
rhel-7-server-for-power-9-rhscl-rpms \
rhel-7-for-power-9-ose-3.11-rpms
do
reposync --gpgcheck -lm --repoid=${repo} --download_path=</path/to/repos> 1
createrepo -v </path/to/repos/>${repo} -o </path/to/repos/>${repo} 2
done
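Before you pull images from registry.redhat.io, authenticate with your Red Hat credentials; for example:
$ docker login registry.redhat.io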
2. Pull all of the required OpenShift Container Platform infrastructure component images. Replace
<tag> with the version to install. For example, specify v3.11.272 for the latest version. You can
specify a different minor version. If you are using a containerized installer, pull
registry.redhat.io/openshift3/ose-ansible:v3.11 in addition to these required images:
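The images to pull correspond to the docker save list later in this chapter; a few representative pulls, as a sketch:
$ docker pull registry.redhat.io/openshift3/ose-ansible:v3.11
$ docker pull registry.redhat.io/openshift3/ose-node:<tag>
$ docker pull registry.redhat.io/openshift3/ose-control-plane:<tag>
$ docker pull registry.redhat.io/openshift3/ose-pod:<tag>
$ docker pull registry.redhat.io/rhel7/etcd:3.2.28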
3. For on-premise installations on x86_64 servers, pull the following image. Replace <tag> with the
version to install. For example, specify v3.11.272 for the latest version. You can specify a
different minor version.
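The image intended here appears to be ose-efs-provisioner, which is present in the x86_64 export list later in this chapter but not in the IBM POWER list; if so, the pull would be:
$ docker pull registry.redhat.io/openshift3/ose-efs-provisioner:<tag>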
4. Pull all of the required OpenShift Container Platform component images for the optional
components. Replace <tag> with the version to install. For example, specify v3.11.272 for the
latest version. You can specify a different minor version.
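The exact set depends on which optional features you plan to deploy; the image names below are typical metrics and logging examples and are assumptions to verify against the Red Hat Container Catalog:
$ docker pull registry.redhat.io/openshift3/ose-metrics-cassandra:<tag>
$ docker pull registry.redhat.io/openshift3/ose-metrics-heapster:<tag>
$ docker pull registry.redhat.io/openshift3/ose-logging-fluentd:<tag>
$ docker pull registry.redhat.io/openshift3/ose-logging-elasticsearch5:<tag>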
For on-premise installations on IBM POWER8 or IBM POWER9 servers, run the following
commands:
IMPORTANT
For Red Hat support, a converged mode subscription is required for rhgs3/ images.
5. Pull the Red Hat-certified Source-to-Image (S2I) builder images that you intend to use in your
OpenShift Container Platform environment.
Make sure to indicate the correct tag by specifying the version number. See the S2I table in the
OpenShift and Atomic Platform Tested Integrations page for details about image version
compatibility.
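For example, to pull the Jenkins images that appear in the export step later in this chapter, plus the main Jenkins S2I image (verify the exact set you need):
$ docker pull registry.redhat.io/openshift3/jenkins-2-rhel7:<tag>
$ docker pull registry.redhat.io/openshift3/jenkins-slave-base-rhel7:<tag>
$ docker pull registry.redhat.io/openshift3/jenkins-slave-maven-rhel7:<tag>
$ docker pull registry.redhat.io/openshift3/jenkins-slave-nodejs-rhel7:<tag>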
1. Make a directory to store the exported images and change into it:
$ mkdir </path/to/images>
$ cd </path/to/images>
2. Export the OpenShift Container Platform infrastructure component images. If you are using a
containerized installer, export registry.redhat.io/openshift3/ose-ansible:v3.11 in addition to
these required images:
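The image list that follows forms the body of a single docker save command; a minimal sketch of its opening line, assuming an archive name of ose3-images.tar:
$ docker save -o ose3-images.tar \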
registry.redhat.io/openshift3/kuryr-controller \
registry.redhat.io/openshift3/kuryr-cni \
registry.redhat.io/openshift3/local-storage-provisioner \
registry.redhat.io/openshift3/manila-provisioner \
registry.redhat.io/openshift3/mariadb-apb \
registry.redhat.io/openshift3/mediawiki \
registry.redhat.io/openshift3/mediawiki-apb \
registry.redhat.io/openshift3/mysql-apb \
registry.redhat.io/openshift3/ose-ansible-service-broker \
registry.redhat.io/openshift3/ose-cli \
registry.redhat.io/openshift3/ose-cluster-autoscaler \
registry.redhat.io/openshift3/ose-cluster-capacity \
registry.redhat.io/openshift3/ose-cluster-monitoring-operator \
registry.redhat.io/openshift3/ose-console \
registry.redhat.io/openshift3/ose-configmap-reloader \
registry.redhat.io/openshift3/ose-control-plane \
registry.redhat.io/openshift3/ose-deployer \
registry.redhat.io/openshift3/ose-descheduler \
registry.redhat.io/openshift3/ose-docker-builder \
registry.redhat.io/openshift3/ose-docker-registry \
registry.redhat.io/openshift3/ose-efs-provisioner \
registry.redhat.io/openshift3/ose-egress-dns-proxy \
registry.redhat.io/openshift3/ose-egress-http-proxy \
registry.redhat.io/openshift3/ose-egress-router \
registry.redhat.io/openshift3/ose-haproxy-router \
registry.redhat.io/openshift3/ose-hyperkube \
registry.redhat.io/openshift3/ose-hypershift \
registry.redhat.io/openshift3/ose-keepalived-ipfailover \
registry.redhat.io/openshift3/ose-kube-rbac-proxy \
registry.redhat.io/openshift3/ose-kube-state-metrics \
registry.redhat.io/openshift3/ose-metrics-server \
registry.redhat.io/openshift3/ose-node \
registry.redhat.io/openshift3/ose-node-problem-detector \
registry.redhat.io/openshift3/ose-operator-lifecycle-manager \
registry.redhat.io/openshift3/ose-ovn-kubernetes \
registry.redhat.io/openshift3/ose-pod \
registry.redhat.io/openshift3/ose-prometheus-config-reloader \
registry.redhat.io/openshift3/ose-prometheus-operator \
registry.redhat.io/openshift3/ose-recycler \
registry.redhat.io/openshift3/ose-service-catalog \
registry.redhat.io/openshift3/ose-template-service-broker \
registry.redhat.io/openshift3/ose-tests \
registry.redhat.io/openshift3/ose-web-console \
registry.redhat.io/openshift3/postgresql-apb \
registry.redhat.io/openshift3/registry-console \
registry.redhat.io/openshift3/snapshot-controller \
registry.redhat.io/openshift3/snapshot-provisioner \
registry.redhat.io/rhel7/etcd:3.2.28
For on-premise installations on IBM POWER8 or IBM POWER9 servers, run the following
command:
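As above, a sketch of the opening line of this docker save command, with an assumed archive name:
$ docker save -o ose3-images-ppc64le.tar \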
registry.redhat.io/openshift3/csi-attacher \
registry.redhat.io/openshift3/csi-driver-registrar \
registry.redhat.io/openshift3/csi-livenessprobe \
registry.redhat.io/openshift3/csi-provisioner \
registry.redhat.io/openshift3/grafana \
registry.redhat.io/openshift3/kuryr-controller \
registry.redhat.io/openshift3/kuryr-cni \
registry.redhat.io/openshift3/local-storage-provisioner \
registry.redhat.io/openshift3/manila-provisioner \
registry.redhat.io/openshift3/mariadb-apb \
registry.redhat.io/openshift3/mediawiki \
registry.redhat.io/openshift3/mediawiki-apb \
registry.redhat.io/openshift3/mysql-apb \
registry.redhat.io/openshift3/ose-ansible-service-broker \
registry.redhat.io/openshift3/ose-cli \
registry.redhat.io/openshift3/ose-cluster-autoscaler \
registry.redhat.io/openshift3/ose-cluster-capacity \
registry.redhat.io/openshift3/ose-cluster-monitoring-operator \
registry.redhat.io/openshift3/ose-console \
registry.redhat.io/openshift3/ose-configmap-reloader \
registry.redhat.io/openshift3/ose-control-plane \
registry.redhat.io/openshift3/ose-deployer \
registry.redhat.io/openshift3/ose-descheduler \
registry.redhat.io/openshift3/ose-docker-builder \
registry.redhat.io/openshift3/ose-docker-registry \
registry.redhat.io/openshift3/ose-egress-dns-proxy \
registry.redhat.io/openshift3/ose-egress-http-proxy \
registry.redhat.io/openshift3/ose-egress-router \
registry.redhat.io/openshift3/ose-haproxy-router \
registry.redhat.io/openshift3/ose-hyperkube \
registry.redhat.io/openshift3/ose-hypershift \
registry.redhat.io/openshift3/ose-keepalived-ipfailover \
registry.redhat.io/openshift3/ose-kube-rbac-proxy \
registry.redhat.io/openshift3/ose-kube-state-metrics \
registry.redhat.io/openshift3/ose-metrics-server \
registry.redhat.io/openshift3/ose-node \
registry.redhat.io/openshift3/ose-node-problem-detector \
registry.redhat.io/openshift3/ose-operator-lifecycle-manager \
registry.redhat.io/openshift3/ose-ovn-kubernetes \
registry.redhat.io/openshift3/ose-pod \
registry.redhat.io/openshift3/ose-prometheus-config-reloader \
registry.redhat.io/openshift3/ose-prometheus-operator \
registry.redhat.io/openshift3/ose-recycler \
registry.redhat.io/openshift3/ose-service-catalog \
registry.redhat.io/openshift3/ose-template-service-broker \
registry.redhat.io/openshift3/ose-tests \
registry.redhat.io/openshift3/ose-web-console \
registry.redhat.io/openshift3/postgresql-apb \
registry.redhat.io/openshift3/registry-console \
registry.redhat.io/openshift3/snapshot-controller \
registry.redhat.io/openshift3/snapshot-provisioner \
registry.redhat.io/rhel7/etcd:3.2.28
3. Export the images for the optional components that you pulled, using the same docker save pattern. For on-premise installations on IBM POWER8 or IBM POWER9 servers, export the IBM POWER variants of those images.
4. Export the S2I builder images that you pulled. For example, if you synced only the Jenkins and
Tomcat images:
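A sketch of the opening line of this docker save command, with an assumed archive name:
$ docker save -o ose3-builder-images.tar \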
registry.redhat.io/openshift3/jenkins-slave-base-rhel7:<tag> \
registry.redhat.io/openshift3/jenkins-slave-maven-rhel7:<tag> \
registry.redhat.io/openshift3/jenkins-slave-nodejs-rhel7:<tag> \
5. Copy the compressed files from your Internet-connected host to your internal host.
a. If you need to install a new webserver in your disconnected environment, install a new RHEL
7 system with at least 110 GB of space on your LAN. During RHEL installation, select the
Basic Web Server option.
b. If you are re-using the server where you downloaded the OpenShift Container Platform
software and required images, install Apache on the server:
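For example, you might install and start Apache with:
$ sudo yum install httpd
$ sudo systemctl enable httpd
$ sudo systemctl start httpd
Then move the repository files into the Apache document root: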
$ mv /path/to/repos /var/www/html/
$ chmod -R +r /var/www/html/repos
$ restorecon -vR /var/www/html
If you installed a new server, attach external storage and then copy the files:
$ cp -a /path/to/repos /var/www/html/
$ chmod -R +r /var/www/html/repos
$ restorecon -vR /var/www/html
IMPORTANT
The following steps are a generic guide to loading the images into a registry. You might
need to take more or different actions to load the images.
1. Before you push the images into the registry, re-tag each image.
For images in the openshift3 repository, tag the image as both the major and minor version
number. For example, to tag the OpenShift Container Platform node image:
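A sketch, assuming your internal registry is registry.example.com, as in the oreg_url example later in this chapter:
$ docker tag registry.redhat.io/openshift3/ose-node:<tag> registry.example.com/openshift3/ose-node:<tag>
$ docker tag registry.redhat.io/openshift3/ose-node:<tag> registry.example.com/openshift3/ose-node:v3.11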
For other images, tag the image with the exact version number. For example, to tag the
etcd image:
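Continuing the same assumption about the registry host:
$ docker tag registry.redhat.io/rhel7/etcd:3.2.28 registry.example.com/rhel7/etcd:3.2.28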
2. Push each image into the registry. For example, to push the OpenShift Container Platform node
images:
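For example, again assuming registry.example.com:
$ docker push registry.example.com/openshift3/ose-node:<tag>
$ docker push registry.example.com/openshift3/ose-node:v3.11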
1. Create the hosts for your OpenShift Container Platform cluster. It is recommended to use the
latest version of RHEL 7 and to perform a minimal installation. Ensure that the hosts meet the
system requirements.
2. On each node host, create the repository definitions. Place the following text in the
/etc/yum.repos.d/ose.repo file:
[rhel-7-server-rpms]
name=rhel-7-server-rpms
baseurl=http://<server_IP>/repos/rhel-7-server-rpms 1
enabled=1
gpgcheck=0
[rhel-7-server-extras-rpms]
name=rhel-7-server-extras-rpms
baseurl=http://<server_IP>/repos/rhel-7-server-extras-rpms 2
enabled=1
gpgcheck=0
[rhel-7-server-ansible-2.9-rpms]
name=rhel-7-server-ansible-2.9-rpms
baseurl=http://<server_IP>/repos/rhel-7-server-ansible-2.9-rpms 3
enabled=1
gpgcheck=0
[rhel-7-server-ose-3.11-rpms]
name=rhel-7-server-ose-3.11-rpms
baseurl=http://<server_IP>/repos/rhel-7-server-ose-3.11-rpms 4
enabled=1
gpgcheck=0
1 2 3 4 Replace <server_IP> with the IP address or host name of the Apache server that
hosts the software repositories.
3. Finish preparing the hosts for installation. Follow the Preparing your hosts steps, omitting the
steps in the Host Registration section.
In your inventory file, if you serve the cluster images from an internal container image registry, set the following variables:
oreg_url=registry.example.com/openshift3/ose-<component>:<version> 1
openshift_examples_modify_imagestreams=true
If you use a Red Hat Satellite server as your container image registry, set the following variables instead:
oreg_url=satellite.example.com/oreg-prod-openshift3_ose-<component>:<version> 1
osm_etcd_image=satellite.example.com/oreg-prod-rhel7_etcd:3.2.28 2
openshift_examples_modify_imagestreams=true
2 If the URL prefix for the etcd image is different on your Satellite server, you must
specify the location and name of the etcd image in the osm_etcd_image parameter.
CHAPTER 8. INSTALLING A STAND-ALONE DEPLOYMENT OF OPENSHIFT CONTAINER IMAGE REGISTRY
When installing a stand-alone deployment of OCR, a cluster of masters and nodes is still installed, similar
to a typical OpenShift Container Platform installation. Then, the container image registry is deployed to
run on the cluster. This stand-alone deployment option is useful for administrators who want a container
image registry but do not require the full OpenShift Container Platform environment that includes the
developer-focused web console and application build and deployment tools.
The registry provides a project namespace model that enables teams to collaborate through role-based
access control (RBAC) authorization.
Administrators can deploy a stand-alone OCR to manage a registry separately that supports multiple
OpenShift Container Platform clusters. A stand-alone OCR also enables administrators to separate their
registry to satisfy their own security or compliance requirements.
Base OS: RHEL 7.5 or later with the "Minimal" installation option and the latest packages from
the RHEL 7 Extras channel, or RHEL Atomic Host 7.4.5 or later.
2 vCPU.
Minimum 16 GB RAM.
Minimum 15 GB hard disk space for the file system containing /var/.
An additional minimum 15 GB unallocated space for Docker’s storage back end; see Configuring
Docker Storage for details.
IMPORTANT
OpenShift Container Platform supports servers with x86_64 or IBM POWER architecture.
If you use IBM POWER servers to host cluster nodes, you can only use IBM POWER
servers.
NOTE
To meet the /var/ file system sizing requirements in RHEL Atomic Host you must modify
the default configuration. See Managing Storage in Red Hat Enterprise Linux Atomic
Host for instructions on configuring this during or after installation.
All-in-one: A single host that includes the master, node, and registry components.
Multiple masters (Highly-Available): Three hosts with all components (master, node, and registry) included on each, with the masters configured for native high availability.
IMPORTANT
Use the following example inventory files for the different supported system topologies:
# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
etcd
[OSEv3:vars]
ansible_ssh_user=root
openshift_master_default_subdomain=apps.test.example.com
openshift_deployment_type=openshift-enterprise
deployment_subtype=registry 1
openshift_hosted_infra_selector="" 2
2 Allows the registry and its web console to be scheduled on the single host.
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb
[OSEv3:vars]
openshift_master_default_subdomain=apps.test.example.com
# uncomment the following to enable htpasswd authentication; defaults to
# DenyAllPasswordIdentityProvider.
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
4. Install the stand-alone OCR. The process is similar to a full cluster installation process.
IMPORTANT
The host that you run the Ansible playbook on must have at least 75MiB of free
memory per host in the inventory file.
a. Before you deploy a new cluster, change to the cluster directory and run the
prerequisites.yml playbook:
$ cd /usr/share/ansible/openshift-ansible
$ ansible-playbook [-i /path/to/inventory] \ 1
playbooks/prerequisites.yml
1 If your inventory file is not in the /etc/ansible/hosts directory, specify -i and the path
to the inventory file.
b. To initiate installation, change to the playbook directory and run the deploy_cluster.yml
playbook:
$ cd /usr/share/ansible/openshift-ansible
$ ansible-playbook [-i /path/to/inventory] \ 1
playbooks/deploy_cluster.yml
1 If your inventory file is not in the /etc/ansible/hosts directory, specify -i and the path
to the inventory file.
CHAPTER 9. UNINSTALLING OPENSHIFT CONTAINER PLATFORM
You can uninstall OpenShift Container Platform hosts in your cluster by running the uninstall.yml
playbook. This playbook deletes OpenShift Container Platform content installed by Ansible, including:
Configuration
Containers
Images
RPM packages
The playbook deletes content for any hosts defined in the inventory file that you specify when running
the playbook.
IMPORTANT
Before you uninstall your cluster, review the following list of scenarios and make sure that
uninstalling is the best option:
If your installation process failed and you want to continue the process, you can
retry the installation. The installation playbooks are designed so that if they fail to
install your cluster, you can run them again without needing to uninstall the
cluster.
If you want to restart a failed installation from the beginning, you can uninstall the
OpenShift Container Platform hosts in your cluster by running the uninstall.yml
playbook, as described in the following section. This playbook only uninstalls the
OpenShift Container Platform assets for the most recent version that you
installed.
If you must change the host names or certificate names, run the uninstall.yml playbook
before you retry the installation so that the certificates are recreated. Running the
installation playbooks again will not recreate the certificates.
If you want to repurpose hosts that you installed OpenShift Container Platform
on earlier, such as with a proof-of-concept installation, or you want to install a
different minor or asynchronous version of OpenShift Container Platform, you
must reimage the hosts before you use them in a production cluster. After you
run the uninstall.yml playbook, some host assets might remain in an altered
state.
To uninstall OpenShift Container Platform from all hosts in your cluster, run the uninstall.yml playbook
with the inventory file that you used to install the cluster:
# ansible-playbook [-i /path/to/file] \ 1
/usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
1 If your inventory file is not in the /etc/ansible/hosts directory, specify -i and the path to the
inventory file.
WARNING
Use this method only when attempting to uninstall specific node hosts, not specific
masters or etcd hosts. Uninstalling master or etcd hosts requires more
configuration changes in the cluster.
1. Follow the steps in Deleting Nodes to remove the node object from the cluster.
2. Create a different inventory file that references only those hosts. For example, to delete
content from only one node:
[OSEv3:children]
nodes 1
[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise
[nodes]
node3.example.com openshift_node_group_name='node-config-infra' 2
3. Run the uninstall.yml playbook, specifying the new inventory file:
# ansible-playbook -i /path/to/new/file \ 1
/usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
When the playbook completes, all OpenShift Container Platform content is removed from the specified
hosts.