OpenShift Container Platform 4.3: Installing on OpenStack
Installing on OpenStack
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
This document provides instructions for installing and uninstalling OpenShift Container Platform 4.3
clusters on Red Hat OpenStack Platform (RHOSP).
Table of Contents

CHAPTER 1. INSTALLING ON OPENSTACK
1.1. INSTALLING A CLUSTER ON OPENSTACK WITH CUSTOMIZATIONS
1.1.1. Resource guidelines for installing OpenShift Container Platform on OpenStack
1.1.1.1. Control plane and compute machines
1.1.1.2. Bootstrap machine
1.1.2. Internet and Telemetry access for OpenShift Container Platform
1.1.3. Enabling Swift on OpenStack
1.1.4. Verifying external network access
1.1.5. Defining parameters for the installation program
1.1.6. Obtaining the installation program
1.1.7. Creating the installation configuration file
1.1.8. Installation configuration parameters
1.1.8.1. Sample customized install-config.yaml file for OpenStack
1.1.9. Generating an SSH private key and adding it to the agent
1.1.10. Enabling access to the environment
1.1.10.1. Enabling access with floating IP addresses
1.1.10.2. Enabling access without floating IP addresses
1.1.11. Deploy the cluster
1.1.12. Verifying cluster status
1.1.13. Logging in to the cluster
1.1.14. Configuring application access with floating IP addresses
1.2. INSTALLING A CLUSTER ON OPENSTACK WITH KURYR
1.2.1. About Kuryr SDN
1.2.2. Resource guidelines for installing OpenShift Container Platform on OpenStack with Kuryr
1.2.2.1. Increasing quota
1.2.2.2. Configuring Neutron
1.2.2.3. Configuring Octavia
1.2.2.4. Known limitations of installing with Kuryr
1.2.2.5. Control plane and compute machines
1.2.2.6. Bootstrap machine
1.2.3. Internet and Telemetry access for OpenShift Container Platform
1.2.4. Enabling Swift on OpenStack
1.2.5. Verifying external network access
1.2.6. Defining parameters for the installation program
1.2.7. Obtaining the installation program
1.2.8. Creating the installation configuration file
1.2.9. Installation configuration parameters
1.2.9.1. Sample customized install-config.yaml file for OpenStack with Kuryr
1.2.10. Generating an SSH private key and adding it to the agent
1.2.11. Enabling access to the environment
1.2.11.1. Enabling access with floating IP addresses
1.2.11.2. Enabling access without floating IP addresses
1.2.12. Deploy the cluster
1.2.13. Verifying cluster status
1.2.14. Logging in to the cluster
1.2.15. Configuring application access with floating IP addresses
1.3. UNINSTALLING A CLUSTER ON OPENSTACK
1.3.1. Removing a cluster that uses installer-provisioned infrastructure
CHAPTER 1. INSTALLING ON OPENSTACK
Prerequisites
Review details about the OpenShift Container Platform installation and update processes.
IMPORTANT
OpenShift Container Platform 4.3 is supported for use with RHOSP 13 and RHOSP 16.
The latest OpenShift Container Platform release supports both the latest RHOSP long-life
release and the latest intermediate release. The release cycles of OpenShift Container Platform
and RHOSP differ, and the versions that are tested might vary in the future depending on the
release dates of both products.
Table 1.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP

Resource              Value
Floating IP addresses 2
Ports                 15
Routers               1
Subnets               1
RAM                   112 GB
vCPUs                 28
Instances             7
Security groups       3
Swift containers      2
Swift objects         1
NOTE
Swift space requirements vary depending on the size of the bootstrap Ignition file and
image registry.
A cluster might function with fewer than recommended resources, but its performance is not
guaranteed.
NOTE
By default, your security group and security group rule quotas might be low. If you
encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60
<project> to increase them.
An OpenShift Container Platform deployment comprises control plane machines, compute machines,
and a bootstrap machine.
By default, the OpenShift Container Platform installation program stands up three control plane
machines and three compute machines.
TIP
Compute machines host the applications that you run on OpenShift Container Platform; aim to run as
many as you can.
During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After
the production control plane is ready, the bootstrap machine is deprovisioned.
NOTE
The installation program cannot pass certificate authority bundles to Ignition on control
plane machines. Therefore, the bootstrap machine cannot retrieve Ignition configurations
from Swift if your endpoint uses self-signed certificates.
Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management and entitlement. If the cluster has internet access and you
do not disable Telemetry, that service automatically entitles your cluster. If the Telemetry
service cannot entitle your cluster, you must manually entitle it on the Cluster registration page.
Access Quay.io to obtain the packages that are required to install your cluster.
IMPORTANT
If your cluster cannot have direct internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
internet access. Before you update the cluster, you update the content of the mirror
registry.
Prerequisites
Procedure
To enable Swift on RHOSP:
1. As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will
access Swift:
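For example, a typical invocation looks like the following, where <user> and <project> are placeholders for the account and project in your environment:

$ openstack role add --user <user> --project <project> swiftoperator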
Your RHOSP deployment can now use Swift to store and serve files.
Prerequisites
parameter_defaults:
  NeutronDhcpAgentDnsmasqDnsServers: ['<DNS_server_address_1>','<DNS_server_address_2>']
c. Include the environment file in your Overcloud deploy command. For example:
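For example, assuming the parameter_defaults shown above are saved in an environment file named neutron-dhcp-dns.yaml (the file name and path are illustrative), the deploy command might look like this:

$ openstack overcloud deploy --templates \
  -e /home/stack/templates/neutron-dhcp-dns.yaml \
  <other overcloud deploy arguments>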
Procedure
1. Using the RHOSP CLI, verify the name and ID of the 'External' network:
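One way to list the networks together with their router types (a sketch; output columns can vary slightly between client versions) is:

$ openstack network list --long -c ID -c Name -c "Router Type"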
+--------------------------------------+----------------+-------------+
| ID | Name | Router Type |
+--------------------------------------+----------------+-------------+
| 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External |
+--------------------------------------+----------------+-------------+
A network with an External router type must appear in the network list. If none does, see Create an
external network.
IMPORTANT
If the external network’s CIDR range overlaps one of the default network ranges, you
must change the matching network ranges in the install-config.yaml file before you run
the installation program.
Network Range
machineCIDR 10.0.0.0/16
serviceNetwork 172.30.0.0/16
clusterNetwork 10.128.0.0/14
CAUTION
If the installation program finds multiple networks with the same name, it sets one of them at random.
To avoid this behavior, create unique names for resources in RHOSP.
NOTE
If the Neutron trunk service plug-in is enabled, a trunk port is created by default. For
more information, see Neutron trunk port .
Procedure
If your OpenStack distribution includes the Horizon web UI, generate a clouds.yaml file in
it.
IMPORTANT
Remember to add a password to the auth field. You can also keep secrets in
a separate file from clouds.yaml.
If your OpenStack distribution does not include the Horizon web UI, or you do not want to
use Horizon, create the file yourself. For detailed information about clouds.yaml, see
Config files in the RHOSP documentation.
clouds:
  shiftstack:
    auth:
      auth_url: http://10.10.14.42:5000/v3
      project_name: shiftstack
      username: shiftstack_user
      password: XXX
      user_domain_name: Default
      project_domain_name: Default
  dev-env:
    region_name: RegionOne
    auth:
      username: 'devuser'
      password: XXX
      project_name: 'devonly'
      auth_url: 'https://10.10.14.22:5001/v2.0'
2. Place the file that you generate in one of the following locations:
Prerequisites
You must install the cluster from a computer that uses Linux or macOS.
You need 500 MB of local disk space to download the installation program.
Procedure
1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you
have a Red Hat account, log in with your credentials. If you do not, create an account.
2. Navigate to the page for your installation type, download the installation program for your
operating system, and place the file in the directory where you will store the installation
configuration files.
IMPORTANT
The installation program creates several files on the computer that you use to
install your cluster. You must keep both the installation program and the files
that the installation program creates after you finish installing the cluster.
3. Extract the installation program. For example, on a computer that uses a Linux operating
system, run the following command:
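For example, assuming the downloaded archive is named openshift-install-linux.tar.gz:

$ tar xvf openshift-install-linux.tar.gz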
4. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your
installation pull secret as a .txt file. This pull secret allows you to authenticate with the services
that are provided by the included authorities, including Quay.io, which serves the container
images for OpenShift Container Platform components.
Prerequisites
Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.
Procedure
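The configuration file is created by running the installation program; a typical invocation, assuming the program was extracted into your current directory, is:

$ ./openshift-install create install-config --dir=<installation_directory> 1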
1 For <installation_directory>, specify the directory name to store the files that the
installation program creates.
iii. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for
installing the cluster.
iv. Specify the Floating IP address to use for external access to the OpenShift API.
v. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane and compute
nodes.
vi. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of
this base and will also include the cluster name.
vii. Enter a name for your cluster. The name must be 14 or fewer characters long.
viii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat
OpenShift Cluster Manager site.
2. Modify the install-config.yaml file. You can find more information about the available
parameters in the Installation configuration parameters section.
3. Back up the install-config.yaml file so that you can use it to install multiple clusters.
NOTE
You cannot modify these parameters in the install-config.yaml file after installation.
baseDomain: The base domain of your cloud provider. This value is used to create routes to your
OpenShift Container Platform cluster components. The full DNS name for your cluster is a
combination of the baseDomain and metadata.name parameter values that uses the
<metadata.name>.<baseDomain> format. Values: a fully-qualified domain or subdomain name,
such as example.com.

controlPlane.platform: The cloud provider to host the control plane machines. This parameter
value must match the compute.platform parameter value. Values: aws, azure, gcp, openstack, or {}.

compute.platform: The cloud provider to host the worker machines. This parameter value must
match the controlPlane.platform parameter value. Values: aws, azure, gcp, openstack, or {}.

metadata.name: The name of your cluster. Values: a string that contains uppercase or lowercase
letters, such as dev. The string must be 14 characters or fewer.

platform.<platform>.region: The region to deploy your cluster in. Values: a valid region for your
cloud, such as us-east-1 for AWS, centralus for Azure, or region1 for Red Hat OpenStack
Platform (RHOSP).

sshKey: The SSH key to use to access your cluster machines. Values: a valid, local public SSH key
that you added to the ssh-agent process.
NOTE
For production OpenShift Container Platform clusters on which you want to perform installation
debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

compute.hyperthreading: Whether to enable or disable simultaneous multithreading, or
hyperthreading, on compute machines. Values: Enabled or Disabled.
IMPORTANT
If you disable simultaneous multithreading, ensure that your capacity planning accounts for the
dramatically decreased machine performance.

compute.replicas: The number of compute machines, which are also known as worker machines,
to provision. Values: a positive integer greater than or equal to 2. The default value is 3.

controlPlane.hyperthreading: Whether to enable or disable simultaneous multithreading, or
hyperthreading, on control plane machines. Values: Enabled or Disabled.
IMPORTANT
If you disable simultaneous multithreading, ensure that your capacity planning accounts for the
dramatically decreased machine performance.

controlPlane.replicas: The number of control plane machines to provision. Values: a positive
integer greater than or equal to 3. The default value is 3.

compute.platform.openstack.rootVolume.size: For compute machines, the size in gigabytes of the
root volume. If you do not set this value, machines use ephemeral storage. Values: integer, for
example 30.

compute.platform.openstack.rootVolume.type: For compute machines, the root volume's type.
Values: string, for example performance.

controlPlane.platform.openstack.rootVolume.size: For control plane machines, the size in
gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.
Values: integer, for example 30.

controlPlane.platform.openstack.rootVolume.type: For control plane machines, the root volume's
type. Values: string, for example performance.

platform.openstack.region: The region where the RHOSP cluster is created. Values: string, for
example region1.

platform.openstack.cloud: The name of the RHOSP cloud to use from the list of clouds in the
clouds.yaml file. Values: string, for example MyCloud.

platform.openstack.externalNetwork: The RHOSP external network name to be used for
installation. Values: string, for example external.

platform.openstack.computeFlavor: The RHOSP flavor to use for control plane and compute
machines. Values: string, for example m1.xlarge.
This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform
(RHOSP) customization options.
IMPORTANT
This sample file is provided for reference only. You must obtain your install-config.yaml
file by using the installation program.
apiVersion: v1
baseDomain: example.com
clusterID: os-test
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineCIDR: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OpenShiftSDN
platform:
  openstack:
    region: region1
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    lbFloatingIP: 128.0.0.1
fips: false
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
NOTE
You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.
Procedure
1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:
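For example, the following command creates a key without a passphrase (the key type and size are typical choices, not requirements):

$ ssh-keygen -t rsa -b 4096 -N '' -f <path>/<file_name> 1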
1 Specify the path and file name, such as ~/.ssh/id_rsa, of the SSH key.
Running this command generates an SSH key that does not require a password in the location
that you specified.
$ ssh-add <path>/<file_name> 1
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installation
program.
You can configure the OpenShift Container Platform API to be accessible either with or without floating
IP addresses.
Make OpenShift Container Platform API endpoints accessible by attaching two floating IP (FIP)
addresses to them: one for the API load balancer (lb FIP), and one for OpenShift Container Platform
applications (apps FIP).
Procedure
1. Using the Red Hat OpenStack Platform (RHOSP) CLI, create a new external network:
NOTE
If you do not control the DNS server, you can add the record to your /etc/hosts
file instead. This action makes the API accessible to you only, which is not suitable
for production deployment but does allow installation for development and
testing.
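As an illustration of the commands involved (a sketch; the network name <external_network> and the DNS names are assumptions for your environment), you can allocate the two FIPs and then create DNS records that point at them:

$ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>
$ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>

Corresponding DNS records:

api.<cluster_name>.<base_domain>.    IN A <lb_FIP>
*.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>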
TIP
You can make OpenShift Container Platform resources available outside of the cluster by assigning a
floating IP address and updating your firewall configuration.
If you cannot use floating IP addresses, the OpenShift Container Platform installation might still finish.
However, the installation program fails after it times out waiting for API access.
After the installation program times out, the cluster might still initialize. After the bootstrapping
process begins, it must complete. However, you must edit the cluster’s networking configuration
after it is deployed.
IMPORTANT
You can run the create cluster command of the installation program only once, during
initial installation.
Prerequisites
Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.
Procedure
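Deployment is started by running the installation program; a typical invocation, assuming the same <installation_directory> that holds your install-config.yaml file, is:

$ ./openshift-install create cluster --dir=<installation_directory> --log-level=info 2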
2 To view different installation details, specify warn, debug, or error instead of info.
NOTE
If the cloud provider account that you configured on your host does not have
sufficient permissions to deploy the cluster, the installation process stops, and
the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to
its web console and credentials for the kubeadmin user, display in your terminal.
IMPORTANT
The Ignition config files that the installation program generates contain
certificates that expire after 24 hours. You must keep the cluster running for 24
hours in a non-degraded state to ensure that the first certificate rotation has
finished.
IMPORTANT
You must not delete the installation program or the files that the installation
program creates. Both are required to delete the cluster.
Procedure
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
The kubeconfig file contains information about the cluster that is used by the CLI to connect a
client to the correct cluster and API server.
2. View the control plane and compute machines created after a deployment:
$ oc get nodes
$ oc get clusterversion
$ oc get clusteroperator
$ oc get pods -A
Prerequisites
Procedure
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
2. Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
system:admin
Prerequisites
Procedure
After you install the OpenShift Container Platform cluster, attach a floating IP address to the ingress
port:
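For example (a sketch; <ingress_port_ID> is the ID of the cluster's ingress port and <apps_FIP> is the floating IP address reserved for applications):

$ openstack floating ip set --port <ingress_port_ID> <apps_FIP>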
NOTE
If you do not control the DNS server but want to enable application access for non-
production purposes, you can add these hostnames to /etc/hosts:
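For example, entries of the following form (illustrative; substitute your apps FIP, cluster name, and base domain) make the console and OAuth routes resolvable:

<apps_FIP> console-openshift-console.apps.<cluster_name>.<base_domain>
<apps_FIP> oauth-openshift.apps.<cluster_name>.<base_domain>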
Next steps
Prerequisites
Review details about the OpenShift Container Platform installation and update processes.
IMPORTANT
OpenShift Container Platform 4.3 is supported for use with RHOSP 13 and RHOSP 16.
The latest OpenShift Container Platform release supports both the latest RHOSP long-life
release and the latest intermediate release. The release cycles of OpenShift Container Platform
and RHOSP differ, and the versions that are tested might vary in the future depending on the
release dates of both products.
Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container
Platform clusters running on OpenStack VMs. Kuryr improves the network performance by plugging
OpenShift Pods into OpenStack SDN. In addition, it provides interconnectivity between OpenShift Pods
and OpenStack virtual instances.
Kuryr components are installed as Pods in OpenShift Container Platform in the openshift-kuryr
namespace:
kuryr-cni - a container that installs and configures Kuryr as a CNI driver on each OpenShift
Container Platform node. It is modeled in OpenShift Container Platform as a DaemonSet.
The Kuryr controller watches the OpenShift API server for Pod, Service, and namespace create, update,
and delete events. It maps the OpenShift Container Platform API calls to corresponding objects in
Neutron and Octavia. This means that every network solution that implements the Neutron trunk port
functionality can be used to back OpenShift Container Platform via Kuryr. This includes open source
solutions such as Open vSwitch (OVS) and Open Virtual Network (OVN) as well as Neutron-compatible
commercial SDNs.
Your deployment uses many Services on a few hypervisors. Each OpenShift Service creates an
Octavia Amphora virtual machine in OpenStack that hosts a required load balancer.
Table 1.5. Recommended resources for a default OpenShift Container Platform cluster on RHOSP
with Kuryr

Resource         Value
Routers          1
RAM              112 GB
vCPUs            28
Instances        7
Swift containers 2
Swift objects    1
A cluster might function with fewer than recommended resources, but its performance is not
guaranteed.
The number of ports that are required is larger than the number of Pods. Kuryr uses port pools
to keep pre-created ports ready for use by Pods, which speeds up Pod boot times.
Each NetworkPolicy is mapped into an RHOSP security group, and depending on the
NetworkPolicy spec, one or more rules are added to the security group.
Each Service is mapped to an RHOSP load balancer. Each load balancer has a security group
in the user project; therefore, load balancers must be taken into account when you estimate the
number of security groups required for the quota.
Swift space requirements vary depending on the size of the bootstrap Ignition file and image
registry.
Although the quota does not account for load balancer VM resources, they must be considered
when deciding the OpenStack deployment size.
An OpenShift Container Platform deployment comprises control plane machines, compute machines,
and a bootstrap machine.
To enable Kuryr SDN, your environment must meet the following requirements:
Use the openvswitch firewall driver instead of ovs-hybrid if the ML2/OVS Neutron driver is used.
When using Kuryr SDN, you must increase quotas to satisfy the OpenStack resources used by Pods,
Services, namespaces, and network policies.
Procedure
$ sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>
Kuryr CNI leverages the Neutron Trunks extension to plug containers into the OpenStack SDN, so you
must use the trunks extension for Kuryr to properly work.
In addition, if you leverage the default ML2/OVS Neutron driver, the firewall must be set to
openvswitch instead of ovs_hybrid so that security groups are enforced on trunk subports and Kuryr
can properly handle network policies.
Kuryr SDN uses OpenStack Octavia LBaaS to implement OpenShift Services. Thus, you must install and
configure Octavia components in your OpenStack environment to use Kuryr SDN.
To enable Octavia, you must include the Octavia Service during the installation of the OpenStack
Overcloud, or upgrade the Octavia Service if the Overcloud already exists. The following steps for
enabling Octavia apply to both a clean install of the Overcloud and an Overcloud update.
NOTE
The following steps only capture the key pieces required during the deployment of
OpenStack when dealing with Octavia. It is also important to note that registry methods
vary.
Procedure
1. If you are using the local registry, create a template to upload the images to the registry. For
example:
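On a RHOSP 13 undercloud, such a template is typically generated with the openstack overcloud container image prepare command; the opening of the command, assuming the stock Octavia environment file and the Red Hat registry namespace (both are typical values, not requirements), looks like this and continues with the options shown below:

(undercloud) $ sudo openstack overcloud container image prepare \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
  --namespace=registry.access.redhat.com/rhosp13 \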
--push-destination=<local-ip-from-undercloud.conf>:8787 \
--prefix=openstack- \
--tag-from-label {version}-{release} \
--output-env-file=/home/stack/templates/overcloud_images.yaml \
--output-images-file /home/stack/local_registry_images.yaml
2. Verify that the local_registry_images.yaml file contains the Octavia images. For example:
...
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43
  push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45
  push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45
  push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44
  push_destination: <local-ip-from-undercloud.conf>:8787
NOTE
The Octavia container versions vary depending upon the specific RHOSP release
installed.
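The prepared images are then uploaded to the local registry; a typical command for this step (a sketch based on standard RHOSP 13 director usage, with the output file from the previous step) is:

(undercloud) $ sudo openstack overcloud container image upload \
  --config-file /home/stack/local_registry_images.yaml \
  --verbose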
This may take some time depending on the speed of your network and Undercloud disk.
4. Because an Octavia load balancer is used to access the OpenShift API, you must increase its
listeners' default timeouts for connections. The default timeout is 50 seconds. Increase the
timeout to 20 minutes by passing the following file to the Overcloud deploy command:
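A sketch of such an environment file, assuming the standard RHOSP Octavia timeout parameters (values are in milliseconds):

parameter_defaults:
  OctaviaTimeoutClientData: 1200000
  OctaviaTimeoutMemberData: 1200000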
NOTE
This command only includes the files associated with Octavia; it varies based on
your specific installation of OpenStack. See the official OpenStack
documentation for further information. For more information on customizing
your Octavia installation, see installation of Octavia using Director.
NOTE
When leveraging Kuryr SDN, the Overcloud installation requires the Neutron
trunk extension. This is available by default on Director deployments. Use the
openvswitch firewall instead of the default ovs-hybrid when the Neutron
backend is ML2/OVS. There is no need for modifications if the backend is
ML2/OVN.
6. To enforce network policies across Services, such as when traffic goes through the Octavia load
balancer, you must ensure that Octavia creates the Amphora VM security groups in the user project.
To do that, add the project ID to the octavia.conf configuration file after you create the project.
This ensures that required LoadBalancer security groups belong to that project and that they
can be updated to enforce Services isolation.
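The Overcloud nodes and their control-plane addresses can be listed with the RHOSP CLI before you connect to a controller (a sketch; the output below is only an excerpt of the full listing):

$ openstack server list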
+--------------------------------------+--------------+--------+-----------------------+----------------+------------+
| dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0    | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute    |
+--------------------------------------+--------------+--------+-----------------------+----------------+------------+
$ ssh [email protected]
iii. Edit the octavia.conf file to add the project to the list of projects whose Amphora
security groups are created on the user's account.
# List of project IDs that are allowed to have Load balancer security groups
# belonging to them.
amp_secgroup_allowed_projects = PROJECT_ID
NOTE
Depending on your OpenStack environment, Octavia might not support UDP listeners,
which means there is no support for UDP Services if Kuryr SDN is used.
An Amphora load balancer VM is deployed per OpenShift Service with the default Octavia load
balancer driver (Amphora driver). If the environment is resource constrained, creating a large
number of Services could be a problem.
Depending on the Octavia version, UDP listeners are not supported. This means that OpenShift
UDP Services are not supported.
There is a known limitation of Octavia not supporting listeners on different protocols, like UDP
and TCP, on the same port. Thus, Services exposing the same port for different protocols are
not supported.
Due to the UDP limitations of Octavia described above, Kuryr forces Pods to use TCP for DNS
resolution. This is set with the use-vc option in resolv.conf. This might be a problem for Pods that
run Go applications compiled with the CGO_ENABLED flag disabled, because they use the Go
resolver, which only uses UDP and does not consider the use-vc option that Kuryr adds to
resolv.conf. This is also a problem for musl-based containers, because their resolver does not
support the use-vc option. This includes images built from alpine.
By default, the OpenShift Container Platform installation program stands up three control plane
machines and three compute machines.
TIP
Compute machines host the applications that you run on OpenShift Container Platform; aim to run as
many as you can.
During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After
the production control plane is ready, the bootstrap machine is deprovisioned.
NOTE
The installation program cannot pass certificate authority bundles to Ignition on control
plane machines. Therefore, the bootstrap machine cannot retrieve Ignition configurations
from Swift if your endpoint uses self-signed certificates.
Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management and entitlement. If the cluster has internet access and you
do not disable Telemetry, that service automatically entitles your cluster. If the Telemetry
service cannot entitle your cluster, you must manually entitle it on the Cluster registration page.
Access Quay.io to obtain the packages that are required to install your cluster.
IMPORTANT
If your cluster cannot have direct internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
internet access. Before you update the cluster, you update the content of the mirror
registry.
Prerequisites
Procedure
To enable Swift on RHOSP:
1. As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will
access Swift:
Your RHOSP deployment can now use Swift to store and serve files.
Prerequisites
parameter_defaults:
  NeutronDhcpAgentDnsmasqDnsServers: ['<DNS_server_address_1>','<DNS_server_address_2>']
c. Include the environment file in your Overcloud deploy command. For example:
Procedure
1. Using the RHOSP CLI, verify the name and ID of the 'External' network:
+--------------------------------------+----------------+-------------+
| ID | Name | Router Type |
+--------------------------------------+----------------+-------------+
| 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External |
+--------------------------------------+----------------+-------------+
A network with an External router type must appear in the network list. If none does, see Create an
external network.
IMPORTANT
If the external network’s CIDR range overlaps one of the default network ranges, you
must change the matching network ranges in the install-config.yaml file before you run
the installation program.
Network Range
machineCIDR 10.0.0.0/16
serviceNetwork 172.30.0.0/16
clusterNetwork 10.128.0.0/14
CAUTION
If the installation program finds multiple networks with the same name, it sets one of them at random.
To avoid this behavior, create unique names for resources in RHOSP.
NOTE
If the Neutron trunk service plug-in is enabled, a trunk port is created by default. For
more information, see Neutron trunk port .
Procedure
If your OpenStack distribution includes the Horizon web UI, generate a clouds.yaml file in
it.
IMPORTANT
Remember to add a password to the auth field. You can also keep secrets in
a separate file from clouds.yaml.
If your OpenStack distribution does not include the Horizon web UI, or you do not want to
use Horizon, create the file yourself. For detailed information about clouds.yaml, see
Config files in the RHOSP documentation.
clouds:
  shiftstack:
    auth:
      auth_url: http://10.10.14.42:5000/v3
      project_name: shiftstack
      username: shiftstack_user
      password: XXX
      user_domain_name: Default
      project_domain_name: Default
  dev-env:
    region_name: RegionOne
    auth:
      username: 'devuser'
      password: XXX
      project_name: 'devonly'
      auth_url: 'https://10.10.14.22:5001/v2.0'
2. Place the file that you generate in one of the following locations:
Prerequisites
You must install the cluster from a computer that uses Linux or macOS.
You need 500 MB of local disk space to download the installation program.
Procedure
1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you
have a Red Hat account, log in with your credentials. If you do not, create an account.
2. Navigate to the page for your installation type, download the installation program for your
operating system, and place the file in the directory where you will store the installation
configuration files.
IMPORTANT
The installation program creates several files on the computer that you use to
install your cluster. You must keep both the installation program and the files
that the installation program creates after you finish installing the cluster.
3. Extract the installation program. For example, on a computer that uses a Linux operating
system, run the following command:
4. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your
installation pull secret as a .txt file. This pull secret allows you to authenticate with the services
that are provided by the included authorities, including Quay.io, which serves the container
images for OpenShift Container Platform components.
Prerequisites
Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.
Procedure
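As in the earlier customizations section, a typical invocation to create the configuration file is:

$ ./openshift-install create install-config --dir=<installation_directory> 1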
1 For <installation_directory>, specify the directory name to store the files that the
installation program creates.
iii. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for
installing the cluster.
iv. Specify the Floating IP address to use for external access to the OpenShift API.
v. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane and compute
nodes.
vi. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of
this base and will also include the cluster name.
vii. Enter a name for your cluster. The name must be 14 or fewer characters long.
viii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat
OpenShift Cluster Manager site.
2. Modify the install-config.yaml file. You can find more information about the available
parameters in the Installation configuration parameters section.
3. Back up the install-config.yaml file so that you can use it to install multiple clusters.
NOTE
You cannot modify these parameters in the install-config.yaml file after installation.
baseDomain: The base domain of your cloud provider. This value is used to create routes to your
OpenShift Container Platform cluster components. The full DNS name for your cluster is a
combination of the baseDomain and metadata.name parameter values that uses the
<metadata.name>.<baseDomain> format. Values: a fully-qualified domain or subdomain name,
such as example.com.

controlPlane.platform: The cloud provider to host the control plane machines. This parameter
value must match the compute.platform parameter value. Values: aws, azure, gcp, openstack, or {}.

compute.platform: The cloud provider to host the worker machines. This parameter value must
match the controlPlane.platform parameter value. Values: aws, azure, gcp, openstack, or {}.

metadata.name: The name of your cluster. Values: a string that contains uppercase or lowercase
letters, such as dev. The string must be 14 characters or fewer.

platform.<platform>.region: The region to deploy your cluster in. Values: a valid region for your
cloud, such as us-east-1 for AWS, centralus for Azure, or region1 for Red Hat OpenStack
Platform (RHOSP).

sshKey: The SSH key to use to access your cluster machines. Values: a valid, local public SSH key
that you added to the ssh-agent process.
NOTE
For production OpenShift Container Platform clusters on which you want to perform installation
debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

compute.hyperthreading: Whether to enable or disable simultaneous multithreading, or
hyperthreading, on compute machines. Values: Enabled or Disabled.
IMPORTANT
If you disable simultaneous multithreading, ensure that your capacity planning accounts for the
dramatically decreased machine performance.

compute.replicas: The number of compute machines, which are also known as worker machines,
to provision. Values: a positive integer greater than or equal to 2. The default value is 3.

controlPlane.hyperthreading: Whether to enable or disable simultaneous multithreading, or
hyperthreading, on control plane machines. Values: Enabled or Disabled.
IMPORTANT
If you disable simultaneous multithreading, ensure that your capacity planning accounts for the
dramatically decreased machine performance.

controlPlane.replicas: The number of control plane machines to provision. Values: a positive
integer greater than or equal to 3. The default value is 3.

compute.platform.openstack.rootVolume.size: For compute machines, the size in gigabytes of the
root volume. If you do not set this value, machines use ephemeral storage. Values: integer, for
example 30.

compute.platform.openstack.rootVolume.type: For compute machines, the root volume's type.
Values: string, for example performance.

controlPlane.platform.openstack.rootVolume.size: For control plane machines, the size in
gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.
Values: integer, for example 30.

controlPlane.platform.openstack.rootVolume.type: For control plane machines, the root volume's
type. Values: string, for example performance.

platform.openstack.region: The region where the RHOSP cluster is created. Values: string, for
example region1.

platform.openstack.cloud: The name of the RHOSP cloud to use from the list of clouds in the
clouds.yaml file. Values: string, for example MyCloud.

platform.openstack.externalNetwork: The RHOSP external network name to be used for
installation. Values: string, for example external.

platform.openstack.computeFlavor: The RHOSP flavor to use for control plane and compute
machines. Values: string, for example m1.xlarge.
To deploy with Kuryr SDN instead of the default OpenShift SDN, you must modify the install-
config.yaml file to include Kuryr as the desired networking.networkType and proceed with the default
OpenShift SDN installation steps. This sample install-config.yaml demonstrates all of the possible Red
Hat OpenStack Platform (RHOSP) customization options.
IMPORTANT
This sample file is provided for reference only. You must obtain your install-config.yaml
file by using the installation program.
apiVersion: v1
baseDomain: example.com
clusterID: os-test
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineCIDR: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: Kuryr
platform:
  openstack:
    region: region1
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    lbFloatingIP: 128.0.0.1
    trunkSupport: true
    octaviaSupport: true
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
NOTE
You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.
Procedure
1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:
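For example, the following command creates a key without a passphrase (the key type and size are typical choices, not requirements):

$ ssh-keygen -t rsa -b 4096 -N '' -f <path>/<file_name> 1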
1 Specify the path and file name, such as ~/.ssh/id_rsa, of the SSH key.
Running this command generates an SSH key that does not require a password in the location
that you specified.
$ ssh-add <path>/<file_name> 1
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installation
program.
You can configure the OpenShift Container Platform API to be accessible either with or without floating
IP addresses.
Make OpenShift Container Platform API endpoints accessible by attaching two floating IP (FIP)
addresses to them: one for the API load balancer (lb FIP), and one for OpenShift Container Platform
applications (apps FIP).
Procedure
1. Using the Red Hat OpenStack Platform (RHOSP) CLI, create a new external network:
NOTE
If you do not control the DNS server, you can add the record to your /etc/hosts
file instead. This action makes the API accessible to you only, which is not suitable
for production deployment but does allow installation for development and
testing.
TIP
You can make OpenShift Container Platform resources available outside of the cluster by assigning a
floating IP address and updating your firewall configuration.
If you cannot use floating IP addresses, the OpenShift Container Platform installation might still finish.
However, the installation program fails after it times out waiting for API access.
After the installation program times out, the cluster might still initialize. After the bootstrapping
process begins, it must complete. However, you must edit the cluster’s networking configuration
after it is deployed.
IMPORTANT
You can run the create cluster command of the installation program only once, during
initial installation.
Prerequisites
Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.
Procedure
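As before, a typical invocation to start the deployment is:

$ ./openshift-install create cluster --dir=<installation_directory> --log-level=info 2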
2 To view different installation details, specify warn, debug, or error instead of info.
NOTE
If the cloud provider account that you configured on your host does not have
sufficient permissions to deploy the cluster, the installation process stops, and
the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to
its web console and credentials for the kubeadmin user, display in your terminal.
IMPORTANT
The Ignition config files that the installation program generates contain
certificates that expire after 24 hours. You must keep the cluster running for 24
hours in a non-degraded state to ensure that the first certificate rotation has
finished.
IMPORTANT
You must not delete the installation program or the files that the installation
program creates. Both are required to delete the cluster.
Procedure
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
The kubeconfig file contains information about the cluster that is used by the CLI to connect a
client to the correct cluster and API server.
2. View the control plane and compute machines created after a deployment:
$ oc get nodes
$ oc get clusterversion
$ oc get clusteroperator
$ oc get pods -A
Prerequisites
Procedure
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
2. Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
system:admin
Prerequisites
Procedure
After you install the OpenShift Container Platform cluster, attach a floating IP address to the ingress
port:
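For example (a sketch; <ingress_port_ID> and <apps_FIP> are placeholders for your environment):

$ openstack floating ip set --port <ingress_port_ID> <apps_FIP>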
NOTE
If you do not control the DNS server but want to enable application access for non-
production purposes, you can add these hostnames to /etc/hosts:
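For example (illustrative entries; substitute your apps FIP, cluster name, and base domain):

<apps_FIP> console-openshift-console.apps.<cluster_name>.<base_domain>
<apps_FIP> oauth-openshift.apps.<cluster_name>.<base_domain>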
Next steps
Prerequisites
Have a copy of the installation program that you used to deploy the cluster.
Have the files that the installation program generated when you created your cluster.
Procedure
1. From the computer that you used to install the cluster, run the following command:
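The cluster is removed with the destroy cluster subcommand; a typical invocation is:

$ ./openshift-install destroy cluster --dir=<installation_directory> --log-level=info 1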
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
NOTE
You must specify the directory that contains the cluster definition files for your
cluster. The installation program requires the metadata.json file in this directory
to delete the cluster.
2. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform
installation program.