OpenShift Container Platform 4.17 Installing on bare metal
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
This document describes how to install OpenShift Container Platform on bare metal.
Table of Contents
CHAPTER 1. PREPARING FOR BARE METAL CLUSTER INSTALLATION 8
1.1. PREREQUISITES 8
1.2. PLANNING A BARE METAL CLUSTER FOR OPENSHIFT VIRTUALIZATION 8
1.3. NIC PARTITIONING FOR SR-IOV DEVICES 8
1.4. CHOOSING A METHOD TO INSTALL OPENSHIFT CONTAINER PLATFORM ON BARE METAL 9
1.4.1. Installing a cluster on installer-provisioned infrastructure 10
1.4.2. Installing a cluster on user-provisioned infrastructure 10
CHAPTER 2. INSTALLING A USER-PROVISIONED CLUSTER ON BARE METAL 11
2.1. PREREQUISITES 11
2.2. INTERNET ACCESS FOR OPENSHIFT CONTAINER PLATFORM 11
2.3. REQUIREMENTS FOR A CLUSTER WITH USER-PROVISIONED INFRASTRUCTURE 12
2.3.1. Required machines for cluster installation 12
2.3.2. Minimum resource requirements for cluster installation 13
2.3.3. Certificate signing requests management 14
2.3.4. Requirements for baremetal clusters on vSphere 14
2.3.5. Networking requirements for user-provisioned infrastructure 14
2.3.5.1. Setting the cluster node hostnames through DHCP 15
2.3.5.2. Network connectivity requirements 15
NTP configuration for user-provisioned infrastructure 16
2.3.6. User-provisioned DNS requirements 16
2.3.6.1. Example DNS configuration for user-provisioned clusters 18
2.3.7. Load balancing requirements for user-provisioned infrastructure 21
2.3.7.1. Example load balancer configuration for user-provisioned clusters 23
2.3.8. Creating a manifest object that includes a customized br-ex bridge 24
2.3.9. Scaling each machine set to compute nodes 27
2.4. PREPARING THE USER-PROVISIONED INFRASTRUCTURE 28
2.5. VALIDATING DNS RESOLUTION FOR USER-PROVISIONED INFRASTRUCTURE 30
2.6. GENERATING A KEY PAIR FOR CLUSTER NODE SSH ACCESS 33
2.7. OBTAINING THE INSTALLATION PROGRAM 35
2.8. INSTALLING THE OPENSHIFT CLI 36
Installing the OpenShift CLI on Linux 36
Installing the OpenShift CLI on Windows 36
Installing the OpenShift CLI on macOS 37
2.9. MANUALLY CREATING THE INSTALLATION CONFIGURATION FILE 37
2.9.1. Sample install-config.yaml file for bare metal 38
2.9.2. Configuring the cluster-wide proxy during installation 41
2.9.3. Configuring a three-node cluster 43
2.10. CREATING THE KUBERNETES MANIFEST AND IGNITION CONFIG FILES 44
2.11. INSTALLING RHCOS AND STARTING THE OPENSHIFT CONTAINER PLATFORM BOOTSTRAP PROCESS
46
2.11.1. Installing RHCOS by using an ISO image 47
2.11.2. Installing RHCOS by using PXE or iPXE booting 50
2.11.3. Advanced RHCOS installation configuration 55
2.11.3.1. Using advanced networking options for PXE and ISO installations 55
2.11.3.2. Disk partitioning 56
2.11.3.2.1. Creating a separate /var partition 57
2.11.3.2.2. Retaining existing partitions 59
2.11.3.3. Identifying Ignition configs 60
2.11.3.4. Default console configuration 60
2.11.3.5. Enabling the serial console for PXE and ISO installations 61
CHAPTER 4. INSTALLING A USER-PROVISIONED BARE METAL CLUSTER ON A RESTRICTED NETWORK 192
4.1. PREREQUISITES 192
4.2. ABOUT INSTALLATIONS IN RESTRICTED NETWORKS 192
4.2.1. Additional limits 193
4.3. INTERNET ACCESS FOR OPENSHIFT CONTAINER PLATFORM 193
4.4. REQUIREMENTS FOR A CLUSTER WITH USER-PROVISIONED INFRASTRUCTURE 193
4.4.1. Required machines for cluster installation 193
4.4.2. Minimum resource requirements for cluster installation 194
4.4.3. Certificate signing requests management 195
4.4.4. Networking requirements for user-provisioned infrastructure 196
4.4.4.1. Setting the cluster node hostnames through DHCP 196
4.4.4.2. Network connectivity requirements 197
NTP configuration for user-provisioned infrastructure 198
4.4.5. User-provisioned DNS requirements 198
4.4.5.1. Example DNS configuration for user-provisioned clusters 200
4.4.6. Load balancing requirements for user-provisioned infrastructure 202
4.4.6.1. Example load balancer configuration for user-provisioned clusters 204
4.4.7. Creating a manifest object that includes a customized br-ex bridge 206
4.4.8. Scaling each machine set to compute nodes 208
4.5. PREPARING THE USER-PROVISIONED INFRASTRUCTURE 209
4.6. VALIDATING DNS RESOLUTION FOR USER-PROVISIONED INFRASTRUCTURE 212
4.7. GENERATING A KEY PAIR FOR CLUSTER NODE SSH ACCESS 214
4.8. MANUALLY CREATING THE INSTALLATION CONFIGURATION FILE 216
CHAPTER 5. SCALING A USER-PROVISIONED CLUSTER WITH THE BARE METAL OPERATOR 282
5.1. ABOUT SCALING A USER-PROVISIONED CLUSTER WITH THE BARE METAL OPERATOR 282
5.1.1. Prerequisites for scaling a user-provisioned cluster 282
5.1.2. Limitations for scaling a user-provisioned cluster 282
5.2. CONFIGURING A PROVISIONING RESOURCE TO SCALE USER-PROVISIONED CLUSTERS 282
5.3. PROVISIONING NEW HOSTS IN A USER-PROVISIONED CLUSTER BY USING THE BMO 283
5.4. OPTIONAL: MANAGING EXISTING HOSTS IN A USER-PROVISIONED CLUSTER BY USING THE BMO
288
5.5. REMOVING HOSTS FROM A USER-PROVISIONED CLUSTER BY USING THE BMO 290
CHAPTER 6. INSTALLATION CONFIGURATION PARAMETERS FOR BARE METAL 293
6.1. AVAILABLE INSTALLATION CONFIGURATION PARAMETERS FOR BARE METAL 293
6.1.1. Required configuration parameters 293
6.1.2. Network configuration parameters 294
6.1.3. Optional configuration parameters 297
1.1. PREREQUISITES
You reviewed details about the OpenShift Container Platform installation and update
processes.
You have read the documentation on selecting a cluster installation method and preparing it for
users.
If you want to use live migration features, you must have multiple worker nodes at the time of
cluster installation. This is because live migration requires the cluster-level high availability (HA)
flag to be set to true. The HA flag is set when a cluster is installed and cannot be changed
afterwards. If there are fewer than two worker nodes defined when you install your cluster, the
HA flag is set to false for the life of the cluster.
NOTE
Live migration requires shared storage. Storage for OpenShift Virtualization must support and
use the ReadWriteMany (RWX) access mode.
If you plan to use Single Root I/O Virtualization (SR-IOV), ensure that your network interface
controllers (NICs) are supported by OpenShift Container Platform.
This feature supports the use of bonds for high availability with the Link Aggregation Control Protocol
(LACP).
An OpenShift Container Platform cluster can be deployed on a bond interface with 2 VFs on 2 physical
functions (PFs) using the following methods:
Agent-based installer
Interactive: You can deploy a cluster with the web-based Assisted Installer. This is the
recommended approach for clusters with networks connected to the internet. The Assisted
Installer is the easiest way to install OpenShift Container Platform: it provides smart defaults
and performs pre-flight validations before installing the cluster. It also provides a RESTful API
for automation and advanced configuration scenarios.
Local Agent-based: You can deploy a cluster locally with the agent-based installer for air-
gapped or restricted networks. It provides many of the benefits of the Assisted Installer, but you
must download and configure the agent-based installer first. Configuration is done with a
command-line interface.
Automated: You can deploy a cluster on installer-provisioned infrastructure and the cluster it
maintains. The installer uses each cluster host’s baseboard management controller (BMC) for
provisioning. You can deploy clusters in connected, air-gapped, or restricted networks.
Full control: You can deploy a cluster on infrastructure that you prepare and maintain, which
provides maximum customizability. You can deploy clusters in connected, air-gapped, or
restricted networks.
Administrators maintain control over what updates are applied and when.
See Installation process for more information about installer-provisioned and user-provisioned
installation processes.
Installing an installer-provisioned cluster on bare metal: You can install OpenShift Container
Platform on bare metal by using installer provisioning.
Installing a user-provisioned cluster on bare metal: You can install OpenShift Container
Platform on bare metal infrastructure that you provision. For a cluster that contains user-
provisioned infrastructure, you must deploy all of the required machines.
Installing a user-provisioned bare metal cluster with network customizations: You can install
a bare metal cluster on user-provisioned infrastructure with network customizations. By
customizing your network configuration, your cluster can coexist with existing IP address
allocations in your environment and integrate with existing MTU and VXLAN configurations.
Most of the network customizations must be applied at the installation stage.
Installing a user-provisioned bare metal cluster on a restricted network: You can install a
user-provisioned bare metal cluster on a restricted or disconnected network by using a mirror
registry. You can also use this installation method to ensure that your clusters only use
container images that satisfy your organizational controls on external content.
CHAPTER 2. INSTALLING A USER-PROVISIONED CLUSTER ON BARE METAL
IMPORTANT
While you might be able to follow this procedure to deploy a cluster on virtualized or cloud
environments, you must be aware of additional considerations for non-bare metal
platforms. Review the information in the guidelines for deploying OpenShift Container
Platform on non-tested platforms before you attempt to install an OpenShift Container
Platform cluster in such an environment.
2.1. PREREQUISITES
You reviewed details about the OpenShift Container Platform installation and update
processes.
You read the documentation on selecting a cluster installation method and preparing it for
users.
If you use a firewall, you configured it to allow the sites that your cluster requires access to.
NOTE
Be sure to also review this site list if you are configuring a proxy.
Access OpenShift Cluster Manager to download the installation program and perform
subscription management. If the cluster has internet access and you do not disable Telemetry,
that service automatically entitles your cluster.
Access Quay.io to obtain the packages that are required to install your cluster.
IMPORTANT
If your cluster cannot have direct internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the required content and use it to populate a mirror registry with the
installation packages. With some installation types, the environment that you install your
cluster in will not require internet access. Before you update the cluster, you update the
content of the mirror registry.
Additional resources
See Installing a user-provisioned bare metal cluster on a restricted network for more
information about performing a restricted network installation on bare metal infrastructure that
you provision.
This section describes the requirements for deploying OpenShift Container Platform on user-
provisioned infrastructure.
One temporary bootstrap machine: The cluster requires the bootstrap machine to deploy the
OpenShift Container Platform cluster on the three control plane machines. You can remove the
bootstrap machine after you install the cluster.
Three control plane machines: The control plane machines run the Kubernetes and OpenShift
Container Platform services that form the control plane.
At least two compute machines, which are also known as worker machines: The workloads
requested by OpenShift Container Platform users run on the compute machines.
NOTE
As an exception, you can run zero compute machines in a bare metal cluster that consists
of three control plane machines only. This provides smaller, more resource-efficient
clusters for cluster administrators and developers to use for testing, development, and
production. Running one compute machine is not supported.
IMPORTANT
To maintain high availability of your cluster, use separate physical hosts for these cluster
machines.
The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the
operating system. However, the compute machines can use either Red Hat Enterprise Linux
CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 8.6 and later.
Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware
certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits .
1. One CPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-
Threading, is not enabled. When enabled, use the following formula to calculate the
corresponding ratio: (threads per core × cores) × sockets = CPUs. For example, a two-socket host
with eight cores per socket and SMT enabled (two threads per core) counts as (2 × 8) × 2 = 32 CPUs.
2. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster
storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms
p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so
you might need to over-allocate storage volume to obtain sufficient performance.
3. As with all user-provisioned installations, if you choose to use RHEL compute machines in your
cluster, you take responsibility for all operating system life cycle management and maintenance,
including performing system updates, applying patches, and completing all other required tasks.
Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container
Platform 4.10 and later.
NOTE
As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2,
which updates the micro-architecture requirements. The following list contains the
minimum instruction set architectures (ISA) that each architecture requires:
If an instance type for your platform meets the minimum requirements for cluster machines, it is
supported for use in OpenShift Container Platform.
Additional resources
Optimizing storage
Additional resources
See Configuring a three-node cluster for details about deploying three-node clusters in bare
metal environments.
See Approving the certificate signing requests for your machines for more information about
approving cluster certificate signing requests after installation.
Additional resources
See Installing RHCOS and starting the OpenShift Container Platform bootstrap process for
details on setting the disk.EnableUUID parameter’s value to TRUE on VMware vSphere for
user-provisioned infrastructure.
During the initial boot, the machines require an IP address configuration that is set either through a
DHCP server or statically by providing the required boot options. After a network connection is
established, the machines download their Ignition config files from an HTTP or HTTPS server. The
Ignition config files are then used to set the exact state of each machine. The Machine Config Operator
completes more changes to the machines, such as the application of new certificates or keys, after
installation.
It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure
that the DHCP server is configured to provide persistent IP addresses, DNS server information, and
hostnames to the cluster machines.
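For illustration only, a host reservation in an ISC DHCP server (dhcpd.conf) that provides a persistent
address, DNS server, and hostname might look like the following sketch. The MAC address, IP
addresses, and names are placeholders for your environment:
host control-plane0 {
  hardware ethernet 52:54:00:aa:bb:cc;                  # MAC address of the node NIC
  fixed-address 192.168.1.97;                           # persistent IP address for the node
  option host-name "control-plane0.ocp4.example.com";   # hostname provided through DHCP
  option domain-name-servers 192.168.1.5;               # DNS server address provided to the node
}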
NOTE
If a DHCP service is not available for your user-provisioned infrastructure, you can instead
provide the IP networking configuration and the address of the DNS server to the nodes
at RHCOS install time. These can be passed as boot arguments if you are installing from
an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform
bootstrap process section for more information about static IP provisioning and advanced
networking options.
The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API
servers and worker nodes are in different zones, you can configure a default DNS search zone to allow
the API server to resolve the node names. Another supported approach is to always refer to hosts by
their fully-qualified domain names in both the node objects and all DNS requests.
On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through
NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not
provided by DHCP or set statically through kernel arguments or another method, it is obtained through a
reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and
can take time to resolve. Other system services can start prior to this and detect the hostname as
localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node.
Additionally, setting the hostnames through DHCP can bypass any manual DNS record name
configuration errors in environments that have a DNS split-horizon implementation.
You must configure the network connectivity between machines to allow OpenShift Container Platform
cluster components to communicate. Each machine must be able to resolve the hostnames of all other
machines in the cluster.
This section provides details about the ports that are required.
IMPORTANT
In connected OpenShift Container Platform environments, all nodes are required to have
internet access to pull images for platform containers and provide telemetry data to Red
Hat.
9000-9999: Host level services, including the node exporter on ports 9100-9101 and the Cluster
Version Operator on port 9099.
6081: Geneve
9000-9999: Host level services, including the node exporter on ports 9100-9101.
Table 2.5. Ports used for control plane machine to control plane machine communications
If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise
Linux CoreOS (RHCOS) machines reads the information and can sync the clock with the NTP servers.
Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control
plane machines, and the compute machines.
DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse
name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS
(RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are
provided by DHCP. Additionally, the reverse records are used to generate the certificate signing
requests (CSR) that OpenShift Container Platform needs to operate.
NOTE
It is recommended to use a DHCP server to provide the hostnames to each cluster node.
See the DHCP recommendations for user-provisioned infrastructure section for more
information.
The following DNS records are required for a user-provisioned OpenShift Container Platform cluster
and they must be in place before installation. In each record, <cluster_name> is the cluster name and
<base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS
record takes the form: <component>.<cluster_name>.<base_domain>..
Kubernetes API: api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS
PTR record, to identify the API load balancer. These records must be resolvable by both clients
external to the cluster and from all the nodes within the cluster.
Bootstrap machine: bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and
a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes
within the cluster.
Control plane machines: <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME
records and DNS PTR records to identify each machine for the control plane nodes. These records
must be resolvable by the nodes within the cluster.
Compute machines: <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records
and DNS PTR records to identify each machine for the worker nodes. These records must be
resolvable by the nodes within the cluster.
NOTE
In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and
SRV records in your DNS configuration.
TIP
You can use the dig command to verify name and reverse name resolution. See the section on
Validating DNS resolution for user-provisioned infrastructure for detailed validation steps.
This section provides A and PTR record configuration samples that meet the DNS requirements for
deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant
to provide advice for choosing one DNS solution over another.
In the examples, the cluster name is ocp4 and the base domain is example.com.
$TTL 1W
@ IN SOA ns1.example.com. root (
2019070700 ; serial
3H ; refresh (3 hours)
30M ; retry (30 minutes)
2W ; expiry (2 weeks)
1W ) ; minimum (1 week)
IN NS ns1.example.com.
IN MX 10 smtp.example.com.
;
;
ns1.example.com. IN A 192.168.1.5
smtp.example.com. IN A 192.168.1.5
;
helper.example.com. IN A 192.168.1.5
helper.ocp4.example.com. IN A 192.168.1.5
;
api.ocp4.example.com. IN A 192.168.1.5 1
api-int.ocp4.example.com. IN A 192.168.1.5 2
;
*.apps.ocp4.example.com. IN A 192.168.1.5 3
;
bootstrap.ocp4.example.com. IN A 192.168.1.96 4
;
control-plane0.ocp4.example.com. IN A 192.168.1.97 5
control-plane1.ocp4.example.com. IN A 192.168.1.98 6
control-plane2.ocp4.example.com. IN A 192.168.1.99 7
;
compute0.ocp4.example.com. IN A 192.168.1.11 8
compute1.ocp4.example.com. IN A 192.168.1.7 9
;
;EOF
1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API
load balancer.
2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API
load balancer and is used for internal cluster communications.
3 Provides name resolution for the wildcard routes. The record refers to the IP address of the
application ingress load balancer. The application ingress load balancer targets the machines
that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines
by default.
NOTE
In the example, the same load balancer is used for the Kubernetes API and
application ingress traffic. In production scenarios, you can deploy the API and
application ingress load balancers separately so that you can scale the load
balancer infrastructure for each in isolation.
$TTL 1W
@ IN SOA ns1.example.com. root (
2019070700 ; serial
3H ; refresh (3 hours)
30M ; retry (30 minutes)
2W ; expiry (2 weeks)
1W ) ; minimum (1 week)
IN NS ns1.example.com.
;
5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1
5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2
;
96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3
;
97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4
98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5
99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6
;
11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7
7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8
;
;EOF
1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record
name of the API load balancer.
2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record
name of the API load balancer and is used for internal cluster communications.
NOTE
A PTR record is not required for the OpenShift Container Platform application wildcard.
NOTE
If you want to deploy the API and application Ingress load balancers with a Red Hat
Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately.
1. API load balancer: Provides a common endpoint for users, both human and machine, to interact
with and configure the platform. Configure the following conditions:
Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode.
A stateless load balancing algorithm. The options vary based on the load balancer
implementation.
Configure the following ports on both the front and back of the load balancers:
2. Application Ingress load balancer: Provides an ingress point for application traffic flowing in
from outside the cluster. A working configuration for the Ingress router is required for an
OpenShift Container Platform cluster.
Configure the following conditions:
Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode.
TIP
If the true IP address of the client can be seen by the application Ingress load balancer, enabling
source IP-based session persistence can improve performance for applications that use end-
to-end TLS encryption.
Configure the following ports on both the front and back of the load balancers:
NOTE
If you are deploying a three-node cluster with zero compute nodes, the Ingress
Controller pods run on the control plane nodes. In three-node cluster
deployments, you must configure your application Ingress load balancer to route
HTTP and HTTPS traffic to the control plane nodes.
This section provides an example API and application Ingress load balancer configuration that meets the
load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg
configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing
one load balancing solution over another.
In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In
production scenarios, you can deploy the API and application ingress load balancers separately so that
you can scale the load balancer infrastructure for each in isolation.
NOTE
If you are using HAProxy as a load balancer and SELinux is set to enforcing, you must
ensure that the HAProxy service can bind to the configured TCP port by running
setsebool -P haproxy_connect_any=1.
Example 2.3. Sample API and application Ingress load balancer configuration
global
log 127.0.0.1 local2
pidfile /var/run/haproxy.pid
maxconn 4000
daemon
defaults
mode http
log global
option dontlognull
option http-server-close
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
listen api-server-6443 1
bind *:6443
mode tcp
option httpchk GET /readyz HTTP/1.0
option log-health-checks
balance roundrobin
server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2
server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
listen machine-config-server-22623 3
bind *:22623
mode tcp
server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4
server master0 master0.ocp4.example.com:22623 check inter 1s
server master1 master1.ocp4.example.com:22623 check inter 1s
server master2 master2.ocp4.example.com:22623 check inter 1s
listen ingress-router-443 5
bind *:443
mode tcp
balance source
server compute0 compute0.ocp4.example.com:443 check inter 1s
server compute1 compute1.ocp4.example.com:443 check inter 1s
listen ingress-router-80 6
bind *:80
mode tcp
balance source
server compute0 compute0.ocp4.example.com:80 check inter 1s
server compute1 compute1.ocp4.example.com:80 check inter 1s
1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines.
2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster
installation and they must be removed after the bootstrap process is complete.
3 Port 22623 handles the machine config server traffic and points to the control plane machines.
5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller
pods. The Ingress Controller pods run on the compute machines by default.
6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller
pods. The Ingress Controller pods run on the compute machines by default.
NOTE
If you are deploying a three-node cluster with zero compute nodes, the Ingress
Controller pods run on the control plane nodes. In three-node cluster
deployments, you must configure your application Ingress load balancer to route
HTTP and HTTPS traffic to the control plane nodes.
TIP
If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports
6443, 22623, 443, and 80 by running netstat -nltupe on the HAProxy node.
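For example, a check of the following form filters the listener output to those ports; the sudo and grep
usage shown here are illustrative only:
$ sudo netstat -nltupe | grep -E '6443|22623|443|80'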
As an alternative to using the configure-ovs.sh shell script to set a br-ex bridge on a bare-metal
platform, you can create a MachineConfig object that includes an NMState configuration file. The
NMState configuration file creates a customized br-ex bridge network configuration on each node in
your cluster.
IMPORTANT
For more information about the support scope of Red Hat Technology Preview features,
see Technology Preview Features Support Scope .
Consider the following use cases for creating a manifest object that includes a customized br-ex bridge:
You want to make postinstallation changes to the bridge, such as changing the Open vSwitch
(OVS) or OVN-Kubernetes br-ex bridge network. The configure-ovs.sh shell script does not
support making postinstallation changes to the bridge.
You want to deploy the bridge on a different interface than the interface available on a host or
server IP address.
You want to make advanced configurations to the bridge that are not possible with the
configure-ovs.sh shell script. Using the script for these configurations might result in the
bridge failing to connect multiple network interfaces and to forward data between the interfaces.
NOTE
If you require an environment with a single network interface controller (NIC) and default
network settings, use the configure-ovs.sh shell script.
After you install Red Hat Enterprise Linux CoreOS (RHCOS) and the system reboots, the Machine
Config Operator injects Ignition configuration files into each node in your cluster, so that each node
receives the br-ex bridge network configuration. To prevent configuration conflicts, the configure-
ovs.sh shell script receives a signal to not configure the br-ex bridge.
Prerequisites
Optional: You have installed the nmstate API so that you can validate the NMState
configuration.
Procedure
1. Create an NMState configuration file for your customized br-ex bridge network. You base64-encode
this file in a later step and embed it in the MachineConfig manifest:
interfaces:
- name: enp2s0 1
  type: ethernet 2
  state: up 3
  ipv4:
    enabled: false 4
  ipv6:
    enabled: false
- name: br-ex
  type: ovs-bridge
  state: up
  ipv4:
    enabled: false
    dhcp: false
  ipv6:
    enabled: false
    dhcp: false
  bridge:
    port:
    - name: enp2s0 5
    - name: br-ex
- name: br-ex
  type: ovs-interface
  state: up
  copy-mac-from: enp2s0
  ipv4:
    enabled: true
    dhcp: true
  ipv6:
    enabled: false
    dhcp: false
# ...
2. Use the cat command to base64-encode the contents of the NMState configuration:
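For example, on a Linux system, a command like the following produces the encoded string from a
configuration that you saved as a YAML file; the file name is a placeholder:
$ cat <nmstate_configuration>.yaml | base64 -w0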
1 Replace <nmstate_configuration> with the name of your NMState resource YAML file.
3. Create a MachineConfig manifest file and define a customized br-ex bridge network
configuration analogous to the following example:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker 1
  name: 10-br-ex-worker 2
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,<base64_encoded_nmstate_configuration> 3
        mode: 0644
        overwrite: true
        path: /etc/nmstate/openshift/cluster.yml
# ...
1 For each node in your cluster, specify the hostname path to your node and the base-64
encoded Ignition configuration file data for the machine type. If you have a single global
configuration specified in an /etc/nmstate/openshift/cluster.yml configuration file that
you want to apply to all nodes in your cluster, you do not need to specify the hostname
path for each node. The worker role is the default role for nodes in your cluster. The .yaml
extension does not work when specifying the hostname path for each node or all nodes in
the MachineConfig manifest file.
After you configure these resources, you must scale machine sets, so that the machine sets can apply
the resource configuration to each compute node and reboot the nodes.
Prerequisites
You created a MachineConfig manifest object that includes a customized br-ex bridge
configuration.
Procedure
$ oc edit mc <machineconfig_custom_resource_name>
2. Add each compute node configuration to the CR, so that the CR can manage roles for each
defined compute node in your cluster.
3. Create a Secret object named extraworker-secret that has a minimal static IP configuration.
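As an illustration only, such a Secret might look like the following sketch. It assumes that the network
data is supplied as an NMState document under the nmstate key; the secret name, namespace,
interface name, and addressing are placeholders that you must adapt to your environment:
apiVersion: v1
kind: Secret
metadata:
  name: extraworker-secret
  namespace: openshift-machine-api
type: Opaque
stringData:
  nmstate: |                        # assumed key that carries the NMState network data
    interfaces:
    - name: enp2s0                  # placeholder NIC name
      type: ethernet
      state: up
      ipv4:
        enabled: true
        address:
        - ip: 192.168.1.200         # placeholder static IP address
          prefix-length: 24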
4. Apply the extraworker-secret secret to each node in your cluster by entering the following
command. This step provides each compute node access to the Ignition config file.
$ oc apply -f ./extraworker-secret.yaml
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
spec:
# ...
  preprovisioningNetworkDataName: ostest-extraworker-0-network-config-secret
# ...
$ oc project openshift-machine-api
$ oc get machinesets
8. Scale each machine set by entering the following command. You must run this command for
each machine set.
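For example, a command of the following form scales a machine set; substitute your machine set name
and the desired number of compute nodes:
$ oc scale machineset <machineset_name> --replicas=<n>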
1 Where <machineset_name> is the name of the machine set and <n> is the number of
compute nodes.
This section provides details about the high-level steps required to set up your cluster infrastructure in
preparation for an OpenShift Container Platform installation. This includes configuring IP networking
and network connectivity for your cluster nodes, enabling the required ports through your firewall, and
setting up the required DNS and load balancing infrastructure.
After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements
for a cluster with user-provisioned infrastructure section.
Prerequisites
You have reviewed the OpenShift Container Platform 4.x Tested Integrations page.
You have reviewed the infrastructure requirements detailed in the Requirements for a cluster
with user-provisioned infrastructure section.
Procedure
1. If you are using DHCP to provide the IP networking configuration to your cluster nodes,
configure your DHCP service.
a. Add persistent IP addresses for the nodes to your DHCP server configuration. In your
configuration, match the MAC address of the relevant network interface to the intended IP
address for each node.
b. When you use DHCP to configure IP addressing for the cluster machines, the machines also
obtain the DNS server information through DHCP. Define the persistent DNS server
address that is used by the cluster nodes through your DHCP server configuration.
NOTE
If you are not using a DHCP service, you must provide the IP networking
configuration and the address of the DNS server to the nodes at RHCOS
install time. These can be passed as boot arguments if you are installing from
an ISO image. See the Installing RHCOS and starting the OpenShift
Container Platform bootstrap process section for more information about
static IP provisioning and advanced networking options.
c. Define the hostnames of your cluster nodes in your DHCP server configuration. See the
Setting the cluster node hostnames through DHCP section for details about hostname
considerations.
NOTE
If you are not using a DHCP service, the cluster nodes obtain their hostname
through a reverse DNS lookup.
2. Ensure that your network infrastructure provides the required network connectivity between
the cluster components. See the Networking requirements for user-provisioned infrastructure
section for details about the requirements.
3. Configure your firewall to enable the ports required for the OpenShift Container Platform
cluster components to communicate. See the Networking requirements for user-provisioned
infrastructure section for details about the ports that are required.
IMPORTANT
Avoid using the Ingress load balancer to expose this port, because doing so
might result in the exposure of sensitive information, such as statistics and
metrics, related to Ingress Controllers.
a. Configure DNS name resolution for the Kubernetes API, the application wildcard, the
bootstrap machine, the control plane machines, and the compute machines.
b. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the
control plane machines, and the compute machines.
See the User-provisioned DNS requirements section for more information about the
OpenShift Container Platform DNS requirements.
a. From your installation node, run DNS lookups against the record names of the Kubernetes
API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the
responses correspond to the correct components.
b. From your installation node, run reverse DNS lookups against the IP addresses of the load
balancer and the cluster nodes. Validate that the record names in the responses correspond
to the correct components.
See the Validating DNS resolution for user-provisioned infrastructure section for detailed
DNS validation steps.
6. Provision the required API and application ingress load balancing infrastructure. See the Load
balancing requirements for user-provisioned infrastructure section for more information about
the requirements.
NOTE
Some load balancing solutions require the DNS name resolution for the cluster nodes to
be in place before the load balancing is initialized.
Additional resources
Installing RHCOS and starting the OpenShift Container Platform bootstrap process
IMPORTANT
The validation steps detailed in this section must succeed before you install your cluster.
Prerequisites
You have configured the required DNS records for your user-provisioned infrastructure.
Procedure
1. From your installation node, run DNS lookups against the record names of the Kubernetes API,
the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the
responses correspond to the correct components.
a. Perform a lookup against the Kubernetes API record name. Check that the result points to
the IP address of the API load balancer:
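For example, a query of the following form can perform this lookup, where <nameserver_ip> is the
address of your DNS server and the cluster name and base domain match your install-config.yaml
values:
$ dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain>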
Example output
b. Perform a lookup against the Kubernetes internal API record name. Check that the result
points to the IP address of the API load balancer:
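For example, a query similar to the following can check the internal API record:
$ dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>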
Example output
Example output
NOTE
In the example outputs, the same load balancer is used for the Kubernetes
API and application ingress traffic. In production scenarios, you can deploy
the API and application ingress load balancers separately so that you can
scale the load balancer infrastructure for each in isolation.
You can replace random with another wildcard value. For example, you can query the route
to the OpenShift Container Platform console:
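For example, a query similar to the following resolves the console route; the console host name shown
is the default route name and might differ in customized environments:
$ dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>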
Example output
d. Run a lookup against the bootstrap DNS record name. Check that the result points to the IP
address of the bootstrap node:
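For example, a query similar to the following can check the bootstrap record:
$ dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>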
Example output
e. Use this method to perform lookups against the DNS record names for the control plane
and compute nodes. Check that the results correspond to the IP addresses of each node.
2. From your installation node, run reverse DNS lookups against the IP addresses of the load
balancer and the cluster nodes. Validate that the record names contained in the responses
correspond to the correct components.
a. Perform a reverse lookup against the IP address of the API load balancer. Check that the
response includes the record names for the Kubernetes API and the Kubernetes internal
API:
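For example, a reverse query of the following form can be used; <api_load_balancer_ip> is a
placeholder for the load balancer address:
$ dig +noall +answer @<nameserver_ip> -x <api_load_balancer_ip>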
Example output
b. Perform a reverse lookup against the IP address of the bootstrap node. Check that the
result points to the DNS record name of the bootstrap node:
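For example, a reverse query similar to the following can check the bootstrap node; <bootstrap_ip> is a
placeholder:
$ dig +noall +answer @<nameserver_ip> -x <bootstrap_ip>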
Example output
c. Use this method to perform reverse lookups against the IP addresses for the control plane
and compute nodes. Check that the results correspond to the DNS record names of each
node.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user
core. To access the nodes through SSH, the private key identity must be managed by SSH for your local
user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you
must provide the SSH public key during the installation process. The ./openshift-install gather
command also requires the SSH public key to be in place on the cluster nodes.
IMPORTANT
Do not skip this procedure in production environments, where disaster recovery and
debugging is required.
NOTE
You must use a local key, not one that you configured with platform-specific approaches
such as AWS key pairs.
Procedure
1. If you do not have an existing SSH key pair on your local machine to use for authentication onto
your cluster nodes, create one. For example, on a computer that uses a Linux operating system,
run the following command:
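For example, a command of the following form generates an Ed25519 key pair without a passphrase:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name>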
1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have
an existing key pair, ensure your public key is in your ~/.ssh directory.
NOTE
If you plan to install an OpenShift Container Platform cluster that uses the RHEL
cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3
Validation on only the x86_64, ppc64le, and s390x architectures, do not create a
key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or
ecdsa algorithm.
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
3. Add the SSH private key identity to the SSH agent for your local user, if it has not already been
added. SSH agent management of the key is required for password-less SSH authentication
onto your cluster nodes, or if you want to use the ./openshift-install gather command.
a. If the ssh-agent process is not already running for your local user, start it as a background
task:
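For example, on most Linux systems the agent can be started with:
$ eval "$(ssh-agent -s)"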
Example output
$ ssh-add <path>/<file_name> 1
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519
Example output
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installation
program. If you install a cluster on infrastructure that you provision, you must provide the key to
the installation program.
Prerequisites
You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
1. Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat
account, log in with your credentials. If you do not, create an account.
2. Select your infrastructure provider from the Run it yourself section of the page.
3. Select your host operating system and architecture from the dropdown menus under
OpenShift Installer and click Download Installer.
4. Place the downloaded file in the directory where you want to store the installation configuration
files.
IMPORTANT
The installation program creates several files on the computer that you use
to install your cluster. You must keep the installation program and the files
that the installation program creates after you finish installing the cluster.
Both of the files are required to delete the cluster.
Deleting the files created by the installation program does not remove your
cluster, even if the cluster failed during installation. To remove your cluster,
complete the OpenShift Container Platform uninstallation procedures for
your specific cloud provider.
5. Extract the installation program. For example, on a computer that uses a Linux operating
system, run the following command:
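For example, assuming the downloaded archive is named openshift-install-linux.tar.gz:
$ tar -xvf openshift-install-linux.tar.gz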
6. Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret
allows you to authenticate with the services that are provided by the included authorities,
including Quay.io, which serves the container images for OpenShift Container Platform
components.
TIP
Alternatively, you can retrieve the installation program from the Red Hat Customer Portal, where you
can specify a version of the installation program to download. However, you must have an active
subscription to access this page.
IMPORTANT
If you installed an earlier version of oc, you cannot use it to complete all of the commands
in OpenShift Container Platform 4.17. Download and install the new version of oc.
Procedure
1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer
Portal.
4. Click Download Now next to the OpenShift v4.17 Linux Clients entry and save the file.
$ echo $PATH
Verification
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Procedure
1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer
Portal.
3. Click Download Now next to the OpenShift v4.17 Windows Client entry and save the file.
C:\> path
Verification
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Procedure
1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer
Portal.
3. Click Download Now next to the OpenShift v4.17 macOS Clients entry and save the file.
NOTE
For macOS arm64, choose the OpenShift v4.17 macOS arm64 Client entry.
$ echo $PATH
Verification
$ oc <command>
Prerequisites
You have an SSH public key on your local machine to provide to the installation program. The
key will be used for SSH authentication onto your cluster nodes for debugging and disaster
recovery.
You have obtained the OpenShift Container Platform installation program and the pull secret
for your cluster.
Procedure
$ mkdir <installation_directory>
IMPORTANT
You must create a directory. Some installation assets, like bootstrap X.509
certificates, have short expiration intervals, so you must not reuse an installation
directory. If you want to reuse individual files from another cluster installation,
you can copy them into your directory. However, the file names for the
installation assets might change between releases. Use caution when copying
installation files from an earlier OpenShift Container Platform version.
2. Customize the sample install-config.yaml file template that is provided and save it in the
<installation_directory>.
3. Back up the install-config.yaml file so that you can use it to install multiple clusters.
IMPORTANT
The install-config.yaml file is consumed during the next step of the installation
process. You must back it up now.
apiVersion: v1
baseDomain: example.com 1
compute: 2
- hyperthreading: Enabled 3
  name: worker
  replicas: 0 4
controlPlane: 5
  hyperthreading: Enabled 6
  name: master
  replicas: 3 7
metadata:
  name: test 8
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14 9
    hostPrefix: 23 10
  networkType: OVNKubernetes 11
  serviceNetwork: 12
  - 172.30.0.0/16
platform:
  none: {} 13
fips: false 14
pullSecret: '{"auths": ...}' 15
sshKey: 'ssh-ed25519 AAAA...' 16
1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the
cluster name.
2 5 The controlPlane section is a single mapping, but the compute section is a sequence of
mappings. To meet the requirements of the different data structures, the first line of the compute
section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only
one control plane pool is used.
4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned
infrastructure. In installer-provisioned installations, the parameter controls the number of compute
machines that the cluster creates and manages for you. In user-provisioned installations, you must
manually deploy the compute machines before you finish installing the cluster.
NOTE
If you are installing a three-node cluster, do not deploy any compute machines when
you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines.
7 The number of control plane machines that you add to the cluster. Because the cluster uses these
values as the number of etcd endpoints in the cluster, the value must match the number of control
plane machines that you deploy.
9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap
with existing physical networks. These IP addresses are used for the pod network. If you need to
access the pods from an external network, you must configure load balancers and routers to
manage the traffic.
NOTE
The Class E CIDR range is reserved for future use. To use the Class E CIDR range, you
must ensure your networking environment accepts the IP addresses within the Class
E CIDR range.
10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23,
then each node is assigned a /23 subnet out of the given cidr, which allows for 510 (2^(32 - 23) - 2)
pod IP addresses. If you are required to provide access to nodes from an external network,
configure load balancers and routers to manage the traffic.
11 The cluster network plugin to install. The default value OVNKubernetes is the only supported
value.
12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This
block must not overlap with existing physical networks. If you need to access the services from an
external network, configure load balancers and routers to manage the traffic.
13 You must set the platform to none. You cannot provide additional platform configuration variables
for your platform.
IMPORTANT
Clusters that are installed with the platform type none are unable to use some
features, such as managing compute machines with the Machine API. This limitation
applies even if the compute machines that are attached to the cluster are installed
on a platform that would normally support the feature. This parameter cannot be
changed after installation.
14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is
enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container
Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography
modules that are provided with RHCOS instead.
IMPORTANT
To enable FIPS mode for your cluster, you must run the installation program from a
Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode.
For more information about configuring FIPS mode on RHEL, see Switching RHEL
to FIPS mode.
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux
CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core
components use the RHEL cryptographic libraries that have been submitted to NIST
for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x
architectures.
15 The pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to
authenticate with the services that are provided by the included authorities, including Quay.io,
which serves the container images for OpenShift Container Platform components.
16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS).
NOTE
For production OpenShift Container Platform clusters on which you want to perform
installation debugging or disaster recovery, specify an SSH key that your ssh-agent
process uses.
Additional resources
See Load balancing requirements for user-provisioned infrastructure for more information on
the API and application ingress load balancing requirements.
See Cluster capabilities for more information on enabling cluster capabilities that were disabled
before installation.
See Optional cluster capabilities in OpenShift Container Platform 4.17 for more information
about the features provided by each capability.
NOTE
For bare metal installations, if you do not assign node IP addresses from the range that is
specified in the networking.machineNetwork[].cidr field in the install-config.yaml file,
you must include them in the proxy.noProxy field.
Prerequisites
You reviewed the sites that your cluster requires access to and determined whether any of
them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to
hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to
bypass the proxy if necessary.
NOTE
The Proxy object status.noProxy field is populated with the values of the
networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and
networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP),
Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object
status.noProxy field is also populated with the instance metadata endpoint
(169.254.169.254).
Procedure
1. Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme
must be http.
2 A proxy URL to use for creating HTTPS connections outside the cluster.
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying.
4 If provided, the installation program generates a config map that is named user-ca-bundle
in the openshift-config namespace that contains one or more additional CA certificates
that are required for proxying HTTPS connections. The Cluster Network Operator then
creates a trusted-ca-bundle config map that merges these contents with the Red Hat
Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the
trustedCA field of the Proxy object. The additionalTrustBundle field is required unless
the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
5 Optional: The policy to determine the configuration of the Proxy object to reference the
user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and
Always. Use Proxyonly to reference the user-ca-bundle config map only when
http/https proxy is configured. Use Always to always reference the user-ca-bundle
config map. The default value is Proxyonly.
NOTE
The installation program does not support the proxy readinessEndpoints field.
NOTE
If the installer times out, restart and then complete the deployment by using the
wait-for command of the installer. For example:
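A representative invocation, assuming the installer binary is in the current directory, is:
$ ./openshift-install wait-for install-complete --log-level debug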
2. Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings
in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still
created, but it will have a nil spec.
NOTE
Only the Proxy object named cluster is supported, and no additional proxies can be
created.
In three-node OpenShift Container Platform environments, the three control plane machines are
schedulable, which means that your application workloads are scheduled to run on them.
Prerequisites
Procedure
Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown
in the following compute stanza:
compute:
- name: worker
  platform: {}
  replicas: 0
NOTE
You must set the value of the replicas parameter for the compute machines to 0
when you install OpenShift Container Platform on user-provisioned
infrastructure, regardless of the number of compute machines you are deploying.
In installer-provisioned installations, the parameter controls the number of
compute machines that the cluster creates and manages for you. This does not
apply to user-provisioned installations, where the compute machines are
deployed manually.
If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods
run on the control plane nodes. In three-node cluster deployments, you must configure your
application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.
See the Load balancing requirements for user-provisioned infrastructure section for more
information.
When you create the Kubernetes manifest files in the following procedure, ensure that the
mastersSchedulable parameter in the <installation_directory>/manifests/cluster-
scheduler-02-config.yml file is set to true. This enables your application workloads to run on
the control plane nodes.
Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS
(RHCOS) machines.
The installation configuration file is transformed into Kubernetes manifests. The manifests are then wrapped into the Ignition configuration files, which are later used to configure the cluster machines.
IMPORTANT
The Ignition config files that the OpenShift Container Platform installation
program generates contain certificates that expire after 24 hours, which are then
renewed at that time. If the cluster is shut down before renewing the certificates
and the cluster is later restarted after the 24 hours have elapsed, the cluster
automatically recovers the expired certificates. The exception is that you must
manually approve the pending node-bootstrapper certificate signing requests
(CSRs) to recover kubelet certificates. See the documentation for Recovering
from expired control plane certificates for more information.
It is recommended that you use Ignition config files within 12 hours after they are
generated because the 24-hour certificate rotates from 16 to 22 hours after the
cluster is installed. By using the Ignition config files within 12 hours, you can avoid
installation failure if the certificate update runs during installation.
Prerequisites
Procedure
1. Change to the directory that contains the OpenShift Container Platform installation program
and generate the Kubernetes manifests for the cluster:
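A representative command, assuming the installer binary is in the current directory, is:
$ ./openshift-install create manifests --dir <installation_directory> 1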
1 For <installation_directory>, specify the installation directory that contains the install-
config.yaml file you created.
WARNING
If you are installing a three-node cluster, skip the following step to allow the
control plane nodes to be schedulable.
IMPORTANT
When you configure control plane nodes from the default unschedulable to
schedulable, additional subscriptions are required. This is because control plane
nodes then become compute nodes.
3. To create the Ignition configuration files, run the following command from the directory that
contains the installation program:
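A representative command, assuming the installer binary is in the current directory and <installation_directory> is the directory that contains your install-config.yaml file, is:
$ ./openshift-install create ignition-configs --dir <installation_directory>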
Ignition config files are created for the bootstrap, control plane, and compute nodes in the
installation directory. The kubeadmin-password and kubeconfig files are created in the
./<installation_directory>/auth directory:
.
├── auth
│ ├── kubeadmin-password
│ └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
Additional resources
See Recovering from expired control plane certificates for more information about recovering
kubelet certificates.
To install RHCOS on the machines, follow either the steps to use an ISO image or network PXE booting.
NOTE
The compute node deployment steps included in this installation document are RHCOS-
specific. If you choose instead to deploy RHEL-based compute nodes, you take
responsibility for all operating system life cycle management and maintenance, including
performing system updates, applying patches, and completing all other required tasks.
Only RHEL 8 compute machines are supported.
You can configure RHCOS during ISO and PXE installations by using the following methods:
Kernel arguments: You can use kernel arguments to provide installation-specific information.
For example, you can specify the locations of the RHCOS installation files that you uploaded to
your HTTP server and the location of the Ignition config file for the type of node you are
installing. For a PXE installation, you can use the APPEND parameter to pass the arguments to
the kernel of the live installer. For an ISO installation, you can interrupt the live installation boot
process to add the kernel arguments. In both installation cases, you can use special
coreos.inst.* arguments to direct the live installer, as well as standard installation boot
arguments for turning standard kernel services on or off.
Ignition configs: OpenShift Container Platform Ignition config files (*.ign) are specific to the
type of node you are installing. You pass the location of a bootstrap, control plane, or compute
node Ignition config file during the RHCOS installation so that it takes effect on first boot. In
special cases, you can create a separate, limited Ignition config to pass to the live system. That
Ignition config could do a certain set of tasks, such as reporting success to a provisioning system
after completing installation. This special Ignition config is consumed by the coreos-installer to
be applied on first boot of the installed system. Do not provide the standard control plane and
compute node Ignition configs to the live ISO directly.
coreos-installer: You can boot the live ISO installer to a shell prompt, which allows you to
prepare the permanent system in a variety of ways before first boot. In particular, you can run
the coreos-installer command to identify various artifacts to include, work with disk partitions,
and set up networking. In some cases, you can configure features on the live system and copy
them to the installed system.
Whether to use an ISO or PXE install depends on your situation. A PXE install requires an available DHCP
service and more preparation, but can make the installation process more automated. An ISO install is a
more manual process and can be inconvenient if you are setting up more than a few machines.
NOTE
As of OpenShift Container Platform 4.6, the RHCOS ISO and other installation artifacts
provide support for installation on disks with 4K sectors.
Prerequisites
You have created the Ignition config files for your cluster.
You have configured suitable network, DNS and load balancing infrastructure.
You have an HTTP server that can be accessed from your computer, and from the machines
that you create.
You have reviewed the Advanced RHCOS installation configuration section for different ways to
configure features, such as networking and disk partitioning.
Procedure
1. Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the
following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition
config file:
$ sha512sum <installation_directory>/bootstrap.ign
The digests are provided to the coreos-installer in a later step to validate the authenticity of
the Ignition config files on the cluster nodes.
2. Upload the bootstrap, control plane, and compute node Ignition config files that the installation
program created to your HTTP server. Note the URLs of these files.
IMPORTANT
You can add or change configuration settings in your Ignition configs before
saving them to your HTTP server. If you plan to add more compute machines to
your cluster after you finish installation, do not delete these files.
3. From the installation host, validate that the Ignition config files are available on the URLs. The
following example gets the Ignition config file for the bootstrap node:
$ curl -k http://<HTTP_server>/bootstrap.ign 1
Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the
Ignition config files for the control plane and compute nodes are also available.
4. Although it is possible to obtain the RHCOS images that are required for your preferred method
of installing operating system instances from the RHCOS image mirror page, the recommended
way to obtain the correct version of your RHCOS images is from the output of the openshift-install
command:
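For example, you can run the following command to filter the installer's CoreOS stream metadata for the live ISO locations:
$ openshift-install coreos print-stream-json | grep '\.iso[^.]'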
Example output
"location": "<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-
<release>-live.aarch64.iso",
"location": "<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-
<release>-live.ppc64le.iso",
"location": "<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-
live.s390x.iso",
"location": "<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-
live.x86_64.iso",
IMPORTANT
The RHCOS images might not change with every release of OpenShift Container
Platform. You must download images with the highest version that is less than or
equal to the OpenShift Container Platform version that you install. Use the image
versions that match your OpenShift Container Platform version if they are
available. Use only ISO images for this procedure. RHCOS qcow2 images are not
supported for this installation type.
The ISO file names resemble the following example:
rhcos-<version>-live.<architecture>.iso
5. Use the ISO to start the RHCOS installation. Use one of the following installation options:
6. Boot the RHCOS ISO image without specifying any options or interrupting the live boot
sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment.
7. Run the coreos-installer command and specify the options that meet your installation
requirements. At a minimum, you must specify the URL that points to the Ignition config file for
the node type, and the device that you are installing to:
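A representative command, with placeholder values for the HTTP server, node type, target device, and digest, is:
$ sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2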
1 You must run the coreos-installer command by using sudo, because the core user does
not have the required root privileges to perform the installation.
2 The --ignition-hash option is required when the Ignition config file is obtained through an
HTTP URL to validate the authenticity of the Ignition config file on the cluster node.
<digest> is the Ignition config file SHA512 digest obtained in a preceding step.
NOTE
If you want to provide your Ignition config files through an HTTPS server that
uses TLS, you can add the internal certificate authority (CA) to the system trust
store before running coreos-installer.
The following example initializes a bootstrap node installation to the /dev/sda device. The
Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP
address 192.168.1.2:
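A representative command, with the SHA512 digest shown as a placeholder and the exact URL path depending on where you uploaded the file, is:
$ sudo coreos-installer install --ignition-url=http://192.168.1.2/bootstrap.ign /dev/sda --ignition-hash=sha512-<digest>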
8. Monitor the progress of the RHCOS installation on the console of the machine.
IMPORTANT
Be sure that the installation is successful on each node before commencing with
the OpenShift Container Platform installation. Observing the installation process
can also help to determine the cause of RHCOS installation issues that might
arise.
9. After RHCOS installs, you must reboot the system. During the system reboot, it applies the
Ignition config file that you specified.
IMPORTANT
You must create the bootstrap and control plane machines at this time. If the
control plane machines are not made schedulable, also create at least two
compute machines before you install OpenShift Container Platform.
If the required network, DNS, and load balancer infrastructure are in place, the OpenShift
Container Platform bootstrap process begins automatically after the RHCOS nodes have
rebooted.
NOTE
RHCOS nodes do not include a default password for the core user. You can
access the nodes by running ssh core@<node>.<cluster_name>.
<base_domain> as a user with access to the SSH private key that is paired to
the public key that you specified in your install-config.yaml file. OpenShift
Container Platform 4 cluster nodes running RHCOS are immutable and rely on
Operators to apply cluster changes. Accessing cluster nodes by using SSH is not
recommended. However, when investigating installation issues, if the OpenShift
Container Platform API is not available, or the kubelet is not properly functioning
on a target node, SSH access might be required for debugging or disaster
recovery.
Prerequisites
You have created the Ignition config files for your cluster.
You have configured suitable network, DNS and load balancing infrastructure.
You have an HTTP server that can be accessed from your computer, and from the machines
that you create.
You have reviewed the Advanced RHCOS installation configuration section for different ways to
configure features, such as networking and disk partitioning.
Procedure
1. Upload the bootstrap, control plane, and compute node Ignition config files that the installation
program created to your HTTP server. Note the URLs of these files.
IMPORTANT
You can add or change configuration settings in your Ignition configs before
saving them to your HTTP server. If you plan to add more compute machines to
your cluster after you finish installation, do not delete these files.
2. From the installation host, validate that the Ignition config files are available on the URLs. The
following example gets the Ignition config file for the bootstrap node:
$ curl -k http://<HTTP_server>/bootstrap.ign 1
Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the
Ignition config files for the control plane and compute nodes are also available.
3. Although it is possible to obtain the RHCOS kernel, initramfs, and rootfs files that are required
for your preferred method of installing operating system instances from the RHCOS image
mirror page, the recommended way to obtain the correct version of your RHCOS files is from
the output of the openshift-install command:
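For example, you can run the following command to filter the installer's CoreOS stream metadata for the kernel, initramfs, and rootfs locations:
$ openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"'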
Example output
"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-
kernel-aarch64"
"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-
initramfs.aarch64.img"
"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-
rootfs.aarch64.img"
"<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-<release>-live-kernel-ppc64le"
"<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-<release>-live-
initramfs.ppc64le.img"
"<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-<release>-live-
rootfs.ppc64le.img"
"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-kernel-
s390x"
"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-
initramfs.s390x.img"
"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-
rootfs.s390x.img"
"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-kernel-
x86_64"
"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-
initramfs.x86_64.img"
"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-
rootfs.x86_64.img"
IMPORTANT
The RHCOS artifacts might not change with every release of OpenShift
Container Platform. You must download images with the highest version that is
less than or equal to the OpenShift Container Platform version that you install.
Only use the appropriate kernel, initramfs, and rootfs artifacts described below
for this procedure. RHCOS QCOW2 images are not supported for this installation
type.
The file names contain the OpenShift Container Platform version number. They resemble the
following examples:
kernel: rhcos-<version>-live-kernel-<architecture>
initramfs: rhcos-<version>-live-initramfs.<architecture>.img
rootfs: rhcos-<version>-live-rootfs.<architecture>.img
4. Upload the rootfs, kernel, and initramfs files to your HTTP server.
IMPORTANT
If you plan to add more compute machines to your cluster after you finish
installation, do not delete these files.
5. Configure the network boot infrastructure so that the machines boot from their local disks after
RHCOS is installed on them.
6. Configure PXE or iPXE installation for the RHCOS images and begin the installation.
Modify one of the following example menu entries for your environment and verify that the
image and Ignition files are properly accessible:
DEFAULT pxeboot
TIMEOUT 20
PROMPT 0
LABEL pxeboot
    KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1
    APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3
1 Specify the location of the live kernel file that you uploaded to your HTTP server. The
URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported.
2 If you use multiple NICs, specify a single interface in the ip option. For example, to use
DHCP on a NIC that is named eno1, set ip=eno1:dhcp.
3 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The
initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url
parameter value is the location of the rootfs file, and the coreos.inst.ignition_url
parameter value is the location of the bootstrap Ignition config file. You can also add
more kernel arguments to the APPEND line to configure networking or other boot
options.
NOTE
This configuration does not enable serial console access on machines with a
graphical console. To configure a different console, add one or more
console= arguments to the APPEND line. For example, add console=tty0
console=ttyS0 to set the first PC serial port as the primary console and the
graphical console as a secondary console. For more information, see How
does one set up a serial terminal and/or console in Red Hat Enterprise Linux?
and "Enabling the serial console for PXE and ISO installation" in the
"Advanced RHCOS installation configuration" section.
1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The
kernel parameter value is the location of the kernel file, the initrd=main argument is
needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is
the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the
location of the bootstrap Ignition config file.
2 If you use multiple NICs, specify a single interface in the ip option. For example, to use
DHCP on a NIC that is named eno1, set ip=eno1:dhcp.
3 Specify the location of the initramfs file that you uploaded to your HTTP server.
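The preceding callouts refer to an iPXE menu entry. A representative entry, with placeholder values assumed, resembles the following sketch:
kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2
initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3
boot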
NOTE
This configuration does not enable serial console access on machines with a
graphical console. To configure a different console, add one or more
console= arguments to the kernel line. For example, add console=tty0
console=ttyS0 to set the first PC serial port as the primary console and the
graphical console as a secondary console. For more information, see How
does one set up a serial terminal and/or console in Red Hat Enterprise Linux?
and "Enabling the serial console for PXE and ISO installation" in the
"Advanced RHCOS installation configuration" section.
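The following callouts refer to a GRUB menu entry for PXE booting on UEFI systems. A representative entry, with placeholder values assumed, resembles:
menuentry 'Install CoreOS' {
        linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2
        initrd rhcos-<version>-live-initramfs.<architecture>.img 3
}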
1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP
server. The kernel parameter value is the location of the kernel file on your TFTP
server. The coreos.live.rootfs_url parameter value is the location of the rootfs file,
and the coreos.inst.ignition_url parameter value is the location of the bootstrap
Ignition config file on your HTTP Server.
2 If you use multiple NICs, specify a single interface in the ip option. For example, to use
DHCP on a NIC that is named eno1, set ip=eno1:dhcp.
3 Specify the location of the initramfs file that you uploaded to your TFTP server.
7. Monitor the progress of the RHCOS installation on the console of the machine.
IMPORTANT
Be sure that the installation is successful on each node before commencing with
the OpenShift Container Platform installation. Observing the installation process
can also help to determine the cause of RHCOS installation issues that might
arise.
8. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config
file that you specified.
IMPORTANT
You must create the bootstrap and control plane machines at this time. If the
control plane machines are not made schedulable, also create at least two
compute machines before you install the cluster.
If the required network, DNS, and load balancer infrastructure are in place, the OpenShift
Container Platform bootstrap process begins automatically after the RHCOS nodes have
rebooted.
NOTE
RHCOS nodes do not include a default password for the core user. You can
access the nodes by running ssh core@<node>.<cluster_name>.
<base_domain> as a user with access to the SSH private key that is paired to
the public key that you specified in your install-config.yaml file. OpenShift
Container Platform 4 cluster nodes running RHCOS are immutable and rely on
Operators to apply cluster changes. Accessing cluster nodes by using SSH is not
recommended. However, when investigating installation issues, if the OpenShift
Container Platform API is not available, or the kubelet is not properly functioning
on a target node, SSH access might be required for debugging or disaster
recovery.
The advanced configuration topics for manual Red Hat Enterprise Linux CoreOS (RHCOS) installations
detailed in this section relate to disk partitioning, networking, and using Ignition configs in different ways.
2.11.3.1. Using advanced networking options for PXE and ISO installations
Networking for OpenShift Container Platform nodes uses DHCP by default to gather all necessary
configuration settings. To set up static IP addresses or configure special settings, such as bonding, you
can do one of the following:
Pass special kernel parameters when you boot the live installer.
Configure networking from a live installer shell prompt, then copy those settings to the installed
system so that they take effect when the installed system first boots.
Procedure
1. Boot the ISO installer.
2. From the live system shell prompt, configure networking for the live system using available
RHEL tools, such as nmcli or nmtui.
3. Run the coreos-installer command to install the system, adding the --copy-network option to
copy networking configuration. For example:
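A representative command, with placeholder values for the Ignition config URL and target device, is:
$ sudo coreos-installer install --copy-network --ignition-url=http://<HTTP_server>/worker.ign /dev/disk/by-id/scsi-<serial_number>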
IMPORTANT
The --copy-network option only copies networking configuration found under
/etc/NetworkManager/system-connections. In particular, it does not copy the system
hostname.
Additional resources
See Getting started with nmcli and Getting started with nmtui in the RHEL 8 documentation for
more information about the nmcli and nmtui tools.
Disk partitions are created on OpenShift Container Platform cluster nodes during the Red Hat
Enterprise Linux CoreOS (RHCOS) installation. Each RHCOS node of a particular architecture uses the
same partition layout, unless you override the default partitioning configuration. During the RHCOS
installation, the size of the root file system is increased to use any remaining available space on the
target device.
IMPORTANT
The use of a custom partition scheme on your node might result in OpenShift Container
Platform not monitoring or alerting on some node partitions. If you override the default
partitioning, see Understanding OpenShift File System Monitoring (eviction conditions)
for more information about how OpenShift Container Platform monitors your host file
systems.
For the default partition scheme, nodefs and imagefs monitor the same root filesystem, /.
To override the default partitioning when installing RHCOS on an OpenShift Container Platform cluster
node, you must create separate partitions. Consider a situation where you want to add a separate
storage partition for your containers and container images. For example, by mounting
/var/lib/containers in a separate partition, the kubelet separately monitors /var/lib/containers as the
imagefs directory and the root file system as the nodefs directory.
IMPORTANT
If you have resized your disk size to host a larger file system, consider creating a separate
/var/lib/containers partition. Consider resizing a disk that has an xfs format to reduce
CPU time issues caused by a high number of allocation groups.
In general, you should use the default disk partitioning that is created during the RHCOS installation.
However, there are cases where you might want to create a separate partition for a directory that you
expect to grow.
OpenShift Container Platform supports the addition of a single partition to attach storage to either the
/var directory or a subdirectory of /var. For example:
/var/lib/containers: Holds container-related content that can grow as more images and
containers are added to a system.
/var/lib/etcd: Holds data that you might want to keep separate for purposes such as
performance optimization of etcd storage.
/var: Holds data that you might want to keep separate for purposes such as auditing.
IMPORTANT
For disk sizes larger than 100GB, and especially larger than 1TB, create a separate
/var partition.
Storing the contents of a /var directory separately makes it easier to grow storage for those areas as
needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this
method, you will not have to pull all your containers again, nor will you have to copy massive log files
when you update systems.
The use of a separate partition for the /var directory or a subdirectory of /var also prevents data growth
in the partitioned directory from filling up the root file system.
The following procedure sets up a separate /var partition by adding a machine config manifest that is
wrapped into the Ignition config file for a node type during the preparation phase of an installation.
Procedure
1. On your installation host, change to the directory that contains the OpenShift Container
Platform installation program and generate the Kubernetes manifests for the cluster:
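A representative command, assuming <installation_directory> contains your install-config.yaml file, is:
$ openshift-install create manifests --dir <installation_directory>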
2. Create a Butane config that configures the additional partition. For example, name the file
$HOME/clusterconfig/98-var-partition.bu, change the disk device name to the name of the
storage device on the worker systems, and set the storage size as appropriate. This example
places the /var directory on a separate partition:
variant: openshift
version: 4.17.0
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-var-partition
storage:
  disks:
  - device: /dev/disk/by-id/<device_name> 1
    partitions:
    - label: var
      start_mib: <partition_start_offset> 2
      size_mib: <partition_size> 3
      number: 5
  filesystems:
  - device: /dev/disk/by-partlabel/var
    path: /var
    format: xfs
    mount_options: [defaults, prjquota] 4
    with_mount_unit: true
1 The storage device name of the disk that you want to partition.
2 When adding a data partition to the boot disk, a minimum offset value of 25000 mebibytes
is recommended. The root file system is automatically resized to fill all available space up
to the specified offset. If no offset value is specified, or if the specified value is smaller than
the recommended minimum, the resulting root file system will be too small, and future
reinstalls of RHCOS might overwrite the beginning of the data partition.
3 The size of the data partition in mebibytes.
4 The prjquota mount option must be enabled for filesystems used for container storage.
NOTE
When creating a separate /var partition, you cannot use different instance types
for compute nodes, if the different instance types do not have the same device
name.
3. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory.
For example, run the following command:
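A representative command, matching the file names used in this example, is:
$ butane $HOME/clusterconfig/98-var-partition.bu -o $HOME/clusterconfig/openshift/98-var-partition.yaml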
Ignition config files are created for the bootstrap, control plane, and compute nodes in the
installation directory:
.
├── auth
│ ├── kubeadmin-password
│ └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
Next steps
You can apply the custom disk partitioning by referencing the Ignition config files during the
RHCOS installations.
For an ISO installation, you can add options to the coreos-installer command that cause the installer to
maintain one or more existing partitions. For a PXE installation, you can add coreos.inst.* options to the
APPEND parameter to preserve partitions.
Saved partitions might be data partitions from an existing OpenShift Container Platform system. You
can identify the disk partitions you want to keep either by partition label or by number.
NOTE
If you save existing partitions, and those partitions do not leave enough space for
RHCOS, the installation will fail without damaging the saved partitions.
The following example illustrates running the coreos-installer in a way that preserves the sixth (6)
partition on the disk:
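A minimal sketch, with placeholder values assumed for the Ignition config URL and target device, is:
$ sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign --save-partindex 6 <device>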
In the previous examples where partition saving is used, coreos-installer recreates the partition
immediately.
For a PXE installation, the following APPEND options preserve partitions by a partition label glob, by a range of partition indexes, or by a single partition index:
coreos.inst.save_partlabel=data*
coreos.inst.save_partindex=5-
coreos.inst.save_partindex=6
When doing an RHCOS manual installation, there are two types of Ignition configs that you can provide,
with different reasons for providing each one:
Permanent install Ignition config: Every manual RHCOS installation needs to pass one of the
Ignition config files generated by openshift-installer, such as bootstrap.ign, master.ign and
worker.ign, to carry out the installation.
IMPORTANT
It is not recommended to modify these Ignition config files directly. You can
update the manifest files that are wrapped into the Ignition config files, as
outlined in examples in the preceding sections.
For PXE installations, you pass the Ignition configs on the APPEND line using the
coreos.inst.ignition_url= option. For ISO installations, after the ISO boots to the shell prompt,
you identify the Ignition config on the coreos-installer command line with the --ignition-url=
option. In both cases, only HTTP and HTTPS protocols are supported.
Live install Ignition config: This type can be created by using the coreos-installer customize
subcommand and its various options. With this method, the Ignition config passes to the live
install medium, runs immediately upon booting, and performs setup tasks before or after the
RHCOS system installs to disk. This method should only be used for performing tasks that must
be done once and not applied again later, such as with advanced partitioning that cannot be
done using a machine config.
For PXE or ISO boots, you can create the Ignition config and APPEND the ignition.config.url=
option to identify the location of the Ignition config. You also need to append ignition.firstboot
ignition.platform.id=metal or the ignition.config.url option will be ignored.
Red Hat Enterprise Linux CoreOS (RHCOS) nodes installed from an OpenShift Container Platform 4.17
boot image use a default console that is meant to accommodate most virtualized and bare metal setups.
Different cloud and virtualization platforms may use different default settings depending on the chosen
architecture. Bare metal installations use the kernel default settings, which typically means the graphical
console is the primary console and the serial console is disabled.
The default consoles may not match your specific hardware configuration or you might have specific
needs that require you to adjust the default console. For example:
You want to access the emergency shell on the console for debugging purposes.
Your cloud platform does not provide interactive access to the graphical console, but provides a
serial console.
Console configuration is inherited from the boot image. This means that new nodes in existing clusters
are unaffected by changes to the default console.
You can configure the console for bare metal installations as described in the following sections.
2.11.3.5. Enabling the serial console for PXE and ISO installations
By default, the Red Hat Enterprise Linux CoreOS (RHCOS) serial console is disabled and all output is
written to the graphical console. You can enable the serial console for an ISO installation and
reconfigure the bootloader so that output is sent to both the serial console and the graphical console.
Procedure
1. Boot the ISO installer.
2. Run the coreos-installer command to install the system, adding the --console option once to
specify the graphical console, and a second time to specify the serial console:
$ coreos-installer install \
--console=tty0 \ 1
--console=ttyS0,<options> \ 2
--ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>
1 The desired secondary console. In this case, the graphical console. Omitting this option will
disable the graphical console.
2 The desired primary console. In this case the serial console. The options field defines the
baud rate and other settings. A common value for this field is 115200n8. If no options are
provided, the default kernel value of 9600n8 is used. For more information on the format
of this option, see Linux kernel serial console documentation.
To configure a PXE installation, make sure the coreos.inst.install_dev kernel command line option is
omitted, and use the shell prompt to run coreos-installer manually using the above ISO installation
procedure.
You can use the live ISO image or PXE environment to install RHCOS by injecting an Ignition config file
directly into the image. This creates a customized image that you can use to provision your system.
For an ISO image, the mechanism to do this is the coreos-installer iso customize subcommand, which
modifies the .iso file with your configuration. Similarly, the mechanism for a PXE environment is the
coreos-installer pxe customize subcommand, which creates a new initramfs file that includes your
customizations.
The customize subcommand is a general purpose tool that can embed other types of customizations as
well. The following tasks are examples of some of the more common customizations:
Inject custom CA certificates for when corporate security policy requires their use.
2.11.3.7. Customizing a live install ISO image
You can customize a live RHCOS ISO image directly with the coreos-installer iso customize
subcommand. When you boot the ISO image, the customizations are applied automatically.
You can use this feature to configure the ISO image to automatically install RHCOS.
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS ISO image from the RHCOS image mirror page and the Ignition config file,
and then run the following command to inject the Ignition config directly into the ISO image:
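A representative command, with placeholder values assumed, is:
$ coreos-installer iso customize rhcos-<version>-live.x86_64.iso \
    --dest-ignition bootstrap.ign \ 1
    --dest-device /dev/disk/by-id/scsi-<serial_number> 2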
1 The Ignition config file that is generated from the openshift-installer installation program.
2 When you specify this option, the ISO image automatically runs an installation. Otherwise,
the image remains configured for installation, but does not install automatically unless you
specify the coreos.inst.install_dev kernel argument.
3. Optional: To remove the ISO image customizations and return the image to its pristine state,
run:
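A representative command is:
$ coreos-installer iso reset rhcos-<version>-live.x86_64.iso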
You can now re-customize the live ISO image or use it in its pristine state.
2.11.3.7.1. Modifying a live install ISO image to enable the serial console
On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by
default and all output is written to the graphical console. You can enable the serial console with the
following procedure.
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following
command to customize the ISO image to enable the serial console to receive output:
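A representative command, with placeholder values assumed and the numbered options corresponding to the callouts that follow, is:
$ coreos-installer iso customize rhcos-<version>-live.x86_64.iso \
    --dest-ignition <path> \
    --dest-console tty0 \ 2
    --dest-console ttyS0,<options> \ 3
    --dest-device /dev/disk/by-id/scsi-<serial_number> 4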
2 The desired secondary console. In this case, the graphical console. Omitting this option will
disable the graphical console.
3 The desired primary console. In this case, the serial console. The options field defines the
baud rate and other settings. A common value for this field is 115200n8. If no options are
provided, the default kernel value of 9600n8 is used. For more information on the format
of this option, see the Linux kernel serial console documentation.
4 The specified disk to install to. If you omit this option, the ISO image automatically runs the
installation program which will fail unless you also specify the coreos.inst.install_dev
kernel argument.
NOTE
The --dest-console option affects the installed system and not the live ISO
system. To modify the console for a live ISO system, use the --live-karg-append
option and specify the console with console=.
Your customizations are applied and affect every subsequent boot of the ISO image.
3. Optional: To remove the ISO image customizations and return the image to its original state,
run the following command:
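A representative command is:
$ coreos-installer iso reset rhcos-<version>-live.x86_64.iso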
You can now recustomize the live ISO image or use it in its original state.
2.11.3.7.2. Modifying a live install ISO image to use a custom certificate authority
You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the
customize subcommand. You can use the CA certificates during both the installation boot and when
provisioning the installed system.
NOTE
Custom CA certificates affect how Ignition fetches remote resources but they do not
affect the certificates installed onto the system.
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following
command to customize the ISO image for use with a custom CA:
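A representative command, assuming the CA certificate is in a local file named cert.pem, is:
$ coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem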
IMPORTANT
The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag.
You must use the --dest-ignition flag to create a customized image for each cluster.
2.11.3.7.3. Modifying a live install ISO image with customized network settings
You can embed a NetworkManager keyfile into the live ISO image and pass it through to the installed
system with the --network-keyfile flag of the customize subcommand.
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Create a connection profile for a bonded interface. For example, create the
bond0.nmconnection file in your local directory with the following content:
[connection]
id=bond0
type=bond
interface-name=bond0
multi-connect=1
[bond]
miimon=100
mode=active-backup
[ipv4]
method=auto
[ipv6]
method=auto
3. Create a connection profile for a secondary interface to add to the bond. For example, create
the bond0-proxy-em1.nmconnection file in your local directory with the following content:
[connection]
id=em1
type=ethernet
interface-name=em1
master=bond0
multi-connect=1
slave-type=bond
4. Create a connection profile for a secondary interface to add to the bond. For example, create
the bond0-proxy-em2.nmconnection file in your local directory with the following content:
[connection]
id=em2
type=ethernet
interface-name=em2
master=bond0
multi-connect=1
slave-type=bond
5. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following
command to customize the ISO image with your configured networking:
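A representative command, using the connection profiles created in the previous steps, is:
$ coreos-installer iso customize rhcos-<version>-live.x86_64.iso \
    --network-keyfile bond0.nmconnection \
    --network-keyfile bond0-proxy-em1.nmconnection \
    --network-keyfile bond0-proxy-em2.nmconnection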
Network settings are applied to the live system and are carried over to the destination system.
2.11.3.7.4. Customizing a live install ISO image for an iSCSI boot device
You can set the iSCSI target and initiator values for automatic mounting, booting and configuration
using a customized version of the live RHCOS image.
Prerequisites
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following
command to customize the ISO image with the following information:
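A rough sketch of such a command, with placeholder values and script names assumed, follows; the numbered options correspond to the callouts below:
$ coreos-installer iso customize \
    --pre-install mount-iscsi.sh \ 1
    --post-install unmount-iscsi.sh \ 2
    --dest-device /dev/disk/by-path/ip-<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \ 3
    --dest-ignition config.ign \
    --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \ 5
    --dest-karg-append netroot=<target_iqn> \
    -o custom.iso rhcos-<version>-live.x86_64.iso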
1 The script that gets run before installation. It should contain the iscsiadm commands for
mounting the iSCSI target and any commands enabling multipathing.
2 The script that gets run after installation. It should contain the command iscsiadm --mode
node --logout=all.
3 The location of the destination system. You must provide the IP address of the target
portal, the associated port number, the target iSCSI node in IQN format, and the iSCSI
logical unit number (LUN).
5 The iSCSI initiator, or client, name in IQN format. The initiator forms a session to connect
to the iSCSI target.
For more information about the iSCSI options supported by dracut, see the dracut.cmdline
manual page.
2.11.3.7.5. Customizing a live install ISO image for an iSCSI boot device with iBFT
You can set the iSCSI target and initiator values for automatic mounting, booting and configuration
using a customized version of the live RHCOS image.
Prerequisites
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following
command to customize the ISO image with the following information:
$ coreos-installer iso customize \
    --pre-install mount-iscsi.sh \ 1
    --post-install unmount-iscsi.sh \ 2
    --dest-device /dev/mapper/mpatha \ 3
    --dest-ignition config.ign \ 4
    --dest-karg-append rd.iscsi.firmware=1 \ 5
    --dest-karg-append rd.multipath=default \ 6
    -o custom.iso rhcos-<version>-live.x86_64.iso
1 The script that gets run before installation. It should contain the iscsiadm commands for
mounting the iSCSI target and any commands enabling multipathing.
2 The script that gets run after installation. It should contain the command iscsiadm --mode
node --logout=all.
3 The path to the device. If you are using multipath, this is the multipath device,
/dev/mapper/mpatha. If there are multiple multipath devices connected, or to be explicit,
you can use the World Wide Name (WWN) symlink available in /dev/disk/by-path.
For more information about the iSCSI options supported by dracut, see the dracut.cmdline
manual page.
2.11.3.8. Customizing a live install PXE environment
You can customize a live RHCOS PXE environment directly with the coreos-installer pxe customize
subcommand. When you boot the PXE environment, the customizations are applied automatically.
You can use this feature to configure the PXE environment to automatically install RHCOS.
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS kernel, initramfs and rootfs files from the RHCOS image mirror page and
the Ignition config file, and then run the following command to create a new initramfs file that
contains the customizations from your Ignition config:
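A representative command, with placeholder values assumed, is:
$ coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \
    --dest-ignition bootstrap.ign \
    --dest-device /dev/disk/by-id/scsi-<serial_number> \ 2
    -o rhcos-<version>-custom-initramfs.x86_64.img 3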
2 When you specify this option, the PXE environment automatically runs an install.
Otherwise, the image remains configured for installing, but does not do so automatically
unless you specify the coreos.inst.install_dev kernel argument.
3 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot
and ignition.platform.id=metal kernel arguments if they are not already present.
2.11.3.8.1. Modifying a live install PXE environment to enable the serial console
On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by
default and all output is written to the graphical console. You can enable the serial console with the
following procedure.
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS kernel, initramfs and rootfs files from the RHCOS image mirror page and
the Ignition config file, and then run the following command to create a new customized
initramfs file that enables the serial console to receive output:
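A representative command, with placeholder values assumed and the numbered options corresponding to the callouts that follow, is:
$ coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \
    --dest-ignition <path> \
    --dest-console tty0 \ 2
    --dest-console ttyS0,<options> \ 3
    --dest-device /dev/disk/by-id/scsi-<serial_number> \ 4
    -o rhcos-<version>-custom-initramfs.x86_64.img 5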
2 The desired secondary console. In this case, the graphical console. Omitting this option will
disable the graphical console.
3 The desired primary console. In this case, the serial console. The options field defines the
baud rate and other settings. A common value for this field is 115200n8. If no options are
provided, the default kernel value of 9600n8 is used. For more information on the format
of this option, see the Linux kernel serial console documentation.
4 The specified disk to install to. If you omit this option, the PXE environment automatically
runs the installer which will fail unless you also specify the coreos.inst.install_dev kernel
argument.
5 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot
and ignition.platform.id=metal kernel arguments if they are not already present.
Your customizations are applied and affect every subsequent boot of the PXE environment.
2.11.3.8.2. Modifying a live install PXE environment to use a custom certificate authority
You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the
customize subcommand. You can use the CA certificates during both the installation boot and when
provisioning the installed system.
NOTE
Custom CA certificates affect how Ignition fetches remote resources but they do not
affect the certificates installed onto the system.
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS kernel, initramfs and rootfs files from the RHCOS image mirror page and
run the following command to create a new customized initramfs file for use with a custom CA:
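A representative command, assuming the CA certificate is in a local file named cert.pem, is:
$ coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \
    --ignition-ca cert.pem \
    -o rhcos-<version>-custom-initramfs.x86_64.img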
3. Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and
ignition.platform.id=metal kernel arguments if they are not already present.
IMPORTANT
The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag.
You must use the --dest-ignition flag to create a customized image for each cluster.
2.11.3.8.3. Modifying a live install PXE environment with customized network settings
You can embed a NetworkManager keyfile into the live PXE environment and pass it through to the
installed system with the --network-keyfile flag of the customize subcommand.
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Create a connection profile for a bonded interface. For example, create the
bond0.nmconnection file in your local directory with the following content:
[connection]
id=bond0
type=bond
interface-name=bond0
multi-connect=1
[bond]
miimon=100
mode=active-backup
[ipv4]
method=auto
[ipv6]
method=auto
3. Create a connection profile for a secondary interface to add to the bond. For example, create
the bond0-proxy-em1.nmconnection file in your local directory with the following content:
[connection]
id=em1
type=ethernet
interface-name=em1
master=bond0
multi-connect=1
slave-type=bond
4. Create a connection profile for a secondary interface to add to the bond. For example, create
the bond0-proxy-em2.nmconnection file in your local directory with the following content:
[connection]
id=em2
type=ethernet
interface-name=em2
master=bond0
multi-connect=1
slave-type=bond
5. Retrieve the RHCOS kernel, initramfs and rootfs files from the RHCOS image mirror page and
run the following command to create a new customized initramfs file that contains your
configured networking:
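A representative command, using the connection profiles created in the previous steps, is:
$ coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \
    --network-keyfile bond0.nmconnection \
    --network-keyfile bond0-proxy-em1.nmconnection \
    --network-keyfile bond0-proxy-em2.nmconnection \
    -o rhcos-<version>-custom-initramfs.x86_64.img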
6. Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and
ignition.platform.id=metal kernel arguments if they are not already present.
Network settings are applied to the live system and are carried over to the destination system.
2.11.3.8.4. Customizing a live install PXE environment for an iSCSI boot device
You can set the iSCSI target and initiator values for automatic mounting, booting and configuration
using a customized version of the live RHCOS image.
Prerequisites
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS kernel, initramfs and rootfs files from the RHCOS image mirror page and
run the following command to create a new customized initramfs file with the following
information:
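A rough sketch of such a command, with placeholder values and script names assumed, follows; the numbered options correspond to the callouts below:
$ coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \
    --pre-install mount-iscsi.sh \ 1
    --post-install unmount-iscsi.sh \ 2
    --dest-device /dev/disk/by-path/ip-<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \ 3
    --dest-ignition config.ign \
    --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \ 5
    --dest-karg-append netroot=<target_iqn> \
    -o rhcos-<version>-custom-initramfs.x86_64.img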
1 The script that gets run before installation. It should contain the iscsiadm commands for
mounting the iSCSI target and any commands enabling multipathing.
2 The script that gets run after installation. It should contain the command iscsiadm --mode
node --logout=all.
3 The location of the destination system. You must provide the IP address of the target
portal, the associated port number, the target iSCSI node in IQN format, and the iSCSI
logical unit number (LUN).
5 The iSCSI initiator, or client, name in IQN format. The initiator forms a session to connect
to the iSCSI target.
For more information about the iSCSI options supported by dracut, see the dracut.cmdline
manual page.
2.11.3.8.5. Customizing a live install PXE environment for an iSCSI boot device with iBFT
You can set the iSCSI target and initiator values for automatic mounting, booting and configuration
using a customized version of the live RHCOS image.
Prerequisites
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS kernel, initramfs and rootfs files from the RHCOS image mirror page and
run the following command to create a new customized initramfs file with the following
information:
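A rough sketch of such a command, with placeholder values and script names assumed, follows; the numbered options correspond to the callouts below:
$ coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \
    --pre-install mount-iscsi.sh \ 1
    --post-install unmount-iscsi.sh \ 2
    --dest-device /dev/mapper/mpatha \ 3
    --dest-ignition config.ign \
    --dest-karg-append rd.iscsi.firmware=1 \
    --dest-karg-append rd.multipath=default \
    -o rhcos-<version>-custom-initramfs.x86_64.img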
1 The script that gets run before installation. It should contain the iscsiadm commands for
mounting the iSCSI target.
2 The script that gets run after installation. It should contain the command iscsiadm --mode
node --logout=all.
3 The path to the device. If you are using multipath, this is the multipath device,
/dev/mapper/mpatha. If there are multiple multipath devices connected, or to be explicit,
you can use the World Wide Name (WWN) symlink available in /dev/disk/by-path.
For more information about the iSCSI options supported by dracut, see the dracut.cmdline
manual page.
This section illustrates the networking configuration and other advanced options that allow you to
modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables
describe the kernel arguments and command-line options you can use with the RHCOS live installer and
the coreos-installer command.
If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the
image to configure networking for a node. If no networking arguments are specified, DHCP is activated
in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file.
IMPORTANT
When adding networking arguments manually, you must also add the rd.neednet=1
kernel argument to bring the network up in the initramfs.
The following information provides examples for configuring networking and bonding on your RHCOS
nodes for ISO installations. The examples describe how to use the ip=, nameserver=, and bond= kernel
arguments.
NOTE
Ordering is important when adding the kernel arguments: ip=, nameserver=, and then
bond=.
The networking options are passed to the dracut tool during system boot. For more information about
the networking options supported by dracut, see the dracut.cmdline manual page.
The following examples are the networking options for ISO installation.
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none
nameserver=4.4.4.41
NOTE
When you use DHCP to configure IP addressing for the RHCOS machines, the machines
also obtain the DNS server information through DHCP. For DHCP-based deployments,
you can define the DNS server address that is used by the RHCOS nodes through your
DHCP server configuration.
ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none
nameserver=4.4.4.41
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none
ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none
NOTE
When you configure one or multiple networks, one default gateway is required. If the
additional network gateway is different from the primary network gateway, the default
gateway must be the primary network gateway.
ip=::10.10.10.254::::
Enter the following command to configure the route for the additional network:
rd.route=20.20.20.0/24:20.20.20.254:enp2s0
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none
ip=::::core0.example.com:enp2s0:none
ip=enp1s0:dhcp
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none
To configure a VLAN on a network interface and use a static IP address, run the following
command:
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none
vlan=enp2s0.100:enp2s0
To configure a VLAN on a network interface and to use DHCP, run the following command:
ip=enp2s0.100:dhcp
vlan=enp2s0.100:enp2s0
nameserver=1.1.1.1
nameserver=8.8.8.8
When you create a bonded interface using bond=, you must specify how the IP address is
assigned and other information for the bonded interface.
To configure the bonded interface to use DHCP, set the bond’s IP address to dhcp. For
example:
bond=bond0:em1,em2:mode=active-backup
ip=bond0:dhcp
To configure the bonded interface to use a static IP address, enter the specific IP address
you want and related information. For example:
bond=bond0:em1,em2:mode=active-backup
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none
1. Create the SR-IOV virtual functions (VFs) following the guidance in Managing SR-IOV devices.
Follow the procedure in the "Attaching SR-IOV networking devices to virtual machines" section.
2. Create the bond, attach the desired VFs to the bond and set the bond link state up following
the guidance in Configuring network bonding. Follow any of the described procedures to create
the bond.
When you create a bonded interface using bond=, you must specify how the IP address is
assigned and other information for the bonded interface.
To configure the bonded interface to use DHCP, set the bond’s IP address to dhcp. For
example:
bond=bond0:eno1f0,eno2f0:mode=active-backup
ip=bond0:dhcp
To configure the bonded interface to use a static IP address, enter the specific IP address
you want and related information. For example:
bond=bond0:eno1f0,eno2f0:mode=active-backup
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none
NOTE
team=team0:em1,em2
ip=team0:dhcp
You can install RHCOS by running coreos-installer install <options> <device> at the command
prompt, after booting into the RHCOS live environment from an ISO image.
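For example, a minimal invocation that fetches the Ignition config over HTTP and installs to /dev/sda (both values are placeholders) might look like this:
$ sudo coreos-installer install --ignition-url=http://host/worker.ign /dev/sda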
The following table shows the subcommands, options, and arguments you can pass to the coreos-
installer command.
Subcommand Description
Option Description
-f, --image-file <path> Specify a local image file manually. Used for
debugging.
-p, --platform <name> Override the Ignition platform ID for the installed
system.
--console <spec> Set the kernel and bootloader console for the
installed system. For more information about the
format of <spec>, see the Linux kernel serial
console documentation.
IMPORTANT
Argument Description
Subcommand Description
coreos-installer iso reset <options> Restore a RHCOS live ISO image to default settings.
<ISO_image>
coreos-installer iso ignition remove Remove the embedded Ignition config from an ISO
<options> <ISO_image> image.
Option Description
--dest-ignition <path> Merge the specified Ignition config file into a new
configuration fragment for the destination system.
--dest-console <spec> Specify the kernel and bootloader console for the
destination system.
--live-ignition <path> Merge the specified Ignition config file into a new
configuration fragment for the live environment.
--live-karg-delete <arg> Delete a kernel argument from each boot of the live
environment.
Subcommand Description
Note that not all of these options are accepted by all subcommands.
coreos-installer pxe customize <options> Customize a RHCOS live PXE boot config.
<path>
coreos-installer pxe ignition unwrap Show the wrapped Ignition config in an image.
<options> <image_name>
Option Description
Note that not all of these options are accepted by all subcommands.
--dest-ignition <path> Merge the specified Ignition config file into a new
configuration fragment for the destination system.
--dest-console <spec> Specify the kernel and bootloader console for the
destination system.
--live-ignition <path> Merge the specified Ignition config file into a new
configuration fragment for the live environment.
NOTE
You can automatically invoke coreos-installer options at boot time by passing coreos.inst boot
arguments to the RHCOS live installer. These are provided in addition to the standard boot arguments.
For ISO installations, the coreos.inst options can be added by interrupting the automatic boot
at the bootloader menu. You can interrupt the automatic boot by pressing TAB while the RHEL
CoreOS (Live) menu option is highlighted.
For PXE or iPXE installations, the coreos.inst options must be added to the APPEND line
before the RHCOS live installer is booted.
The following table shows the RHCOS live installer coreos.inst boot options for ISO and PXE
installations.
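For a PXE installation, the coreos.inst options are appended to the APPEND line. A representative line, with the HTTP server URLs and the install device as placeholders, looks like this:
APPEND initrd=http://<HTTP_server>/rhcos-live-initramfs.x86_64.img,http://<HTTP_server>/rhcos-live-rootfs.x86_64.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign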
Argument Description
ignition.config.url Optional: The URL of the Ignition config for the live
boot. For example, this can be used to customize
how coreos-installer is invoked, or to run code
before or after the installation. This is different from
coreos.inst.ignition_url, which is the Ignition
config for the installed system.
You can enable multipathing at installation time for nodes that were provisioned in OpenShift Container
Platform 4.8 or later. While postinstallation support is available by activating multipathing via the
machine config, enabling multipathing during installation is recommended.
In setups where any I/O to non-optimized paths results in I/O system errors, you must enable
multipathing at installation time.
IMPORTANT
On IBM Z® and IBM® LinuxONE, you can enable multipathing only if you configured your
cluster for it during installation. For more information, see "Installing RHCOS and starting
the OpenShift Container Platform bootstrap process" in Installing a cluster with z/VM on
IBM Z® and IBM® LinuxONE.
The following procedure enables multipath at installation time and appends kernel arguments to the
coreos-installer install command so that the installed system itself will use multipath beginning from
the first boot.
NOTE
OpenShift Container Platform does not support enabling multipathing as a day-2 activity
on nodes that have been upgraded from 4.6 or earlier.
Prerequisites
You have created the Ignition config files for your cluster.
You have reviewed Installing RHCOS and starting the OpenShift Container Platform bootstrap
process.
Procedure
1. To enable multipath and start the multipathd daemon, run the following command on the
installation host:
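One common form of this command is:
$ mpathconf --enable && systemctl start multipathd.service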
Optional: If you are booting from PXE or an ISO, you can instead enable multipath by adding
rd.multipath=default to the kernel command line.
If there is only one multipath device connected to the machine, it should be available at path
/dev/mapper/mpatha. For example:
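A representative invocation, assuming the Ignition config is served at http://host/worker.ign (a placeholder), is:
$ coreos-installer install /dev/mapper/mpatha \
    --ignition-url=http://host/worker.ign \
    --append-karg rd.multipath=default \
    --append-karg root=/dev/disk/by-label/dm-mpath-root \
    --append-karg rw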
If there are multiple multipath devices connected to the machine, or to be more explicit,
instead of using /dev/mapper/mpatha, it is recommended to use the World Wide Name
(WWN) symlink available in /dev/disk/by-id. For example:
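For example, using a WWN-based symlink, where the WWN value and the Ignition config URL are placeholders:
$ coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \
    --ignition-url=http://host/worker.ign \
    --append-karg rd.multipath=default \
    --append-karg root=/dev/disk/by-label/dm-mpath-root \
    --append-karg rw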
This symlink can also be used as the coreos.inst.install_dev kernel argument when using
special coreos.inst.* arguments to direct the live installer. For more information, see
"Installing RHCOS and starting the OpenShift Container Platform bootstrap process".
4. Check that the kernel arguments worked by going to one of the worker nodes and listing the
kernel command line arguments (in /proc/cmdline on the host):
$ oc debug node/ip-10-0-141-105.ec2.internal
Example output
...
sh-4.2# exit
RHCOS also supports multipathing on a secondary disk. Instead of kernel arguments, you use Ignition to
enable multipathing for the secondary disk at installation time.
Prerequisites
Procedure
Example multipath-config.bu
variant: openshift
version: 4.17.0
systemd:
units:
- name: mpath-configure.service
enabled: true
contents: |
[Unit]
Description=Configure Multipath on Secondary Disk
ConditionFirstBoot=true
ConditionPathExists=!/etc/multipath.conf
Before=multipathd.service 1
DefaultDependencies=no
[Service]
Type=oneshot
ExecStart=/usr/sbin/mpathconf --enable 2
[Install]
WantedBy=multi-user.target
- name: mpath-var-lib-container.service
enabled: true
contents: |
[Unit]
Description=Set Up Multipath On /var/lib/containers
ConditionFirstBoot=true 3
Requires=dev-mapper-mpatha.device
After=dev-mapper-mpatha.device
After=ostree-remount.service
Before=kubelet.service
DefaultDependencies=no
[Service] 4
Type=oneshot
ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha
ExecStart=/usr/bin/mkdir -p /var/lib/containers
[Install]
WantedBy=multi-user.target
- name: var-lib-containers.mount
enabled: true
contents: |
[Unit]
Description=Mount /var/lib/containers
After=mpath-var-lib-container.service
Before=kubelet.service 5
[Mount] 6
What=/dev/disk/by-label/dm-mpath-containers
Where=/var/lib/containers
Type=xfs
[Install]
WantedBy=multi-user.target
6 Mounts the device to the /var/lib/containers mount point. This location cannot be a
symlink.
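2. Convert the Butane config into an Ignition config that you provide at installation time. A common invocation, assuming the file name shown in the example above, is:
$ butane --pretty --strict multipath-config.bu > multipath-config.ign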
3. Continue with the rest of the first boot RHCOS installation process.
IMPORTANT
Prerequisites
2. You have an iSCSI target that you want to install RHCOS on.
Procedure
1. Mount the iSCSI target from the live environment by running the following command:
$ iscsiadm \
--mode discovery \
--type sendtargets \
--portal <IP_address> \ 1
--login
2. Install RHCOS onto the iSCSI target by running the following command and using the necessary
kernel arguments, for example:
$ coreos-installer install \
/dev/disk/by-path/ip-<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \ 1
--append-karg rd.iscsi.initiator=<initiator_iqn> \ 2
--append-karg netroot=<target_iqn> \ 3
--console ttyS0,115200n8 \
--ignition-file <path_to_file>
1 The location you are installing to. You must provide the IP address of the target portal, the
associated port number, the target iSCSI node in IQN format, and the iSCSI logical unit
number (LUN).
2 The iSCSI initiator, or client, name in IQN format. The initiator forms a session to connect
to the iSCSI target.
For more information about the iSCSI options supported by dracut, see the dracut.cmdline
manual page.
This procedure can also be performed using the coreos-installer iso customize or coreos-installer
pxe customize subcommands.
Prerequisites
Procedure
1. Mount the iSCSI target from the live environment by running the following command:
$ iscsiadm \
--mode discovery \
--type sendtargets \
--portal <IP_address> \ 1
--login
2. Optional: enable multipathing and start the daemon with the following command:
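As in the previous multipath procedure, this is typically:
$ mpathconf --enable && systemctl start multipathd.service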
3. Install RHCOS onto the iSCSI target by running the following command and using the necessary
kernel arguments, for example:
$ coreos-installer install \
/dev/mapper/mpatha \ 1
--append-karg rd.iscsi.firmware=1 \ 2
--append-karg rd.multipath=default \ 3
--console ttyS0 \
--ignition-file <path_to_file>
1 The path of a single multipathed device. If there are multiple multipath devices connected,
or to be explicit, you can use the World Wide Name (WWN) symlink available in
/dev/disk/by-path.
For more information about the iSCSI options supported by dracut, see the dracut.cmdline
manual page.
This procedure can also be performed using the coreos-installer iso customize or coreos-installer
pxe customize subcommands.
Additional resources
See Installing RHCOS and starting the OpenShift Container Platform bootstrap process for
more information on using special coreos.inst.* arguments to direct the live installer.
Prerequisites
You have created the Ignition config files for your cluster.
You have configured suitable network, DNS and load balancing infrastructure.
You have obtained the installation program and generated the Ignition config files for your
cluster.
You installed RHCOS on your cluster machines and provided the Ignition config files that the
OpenShift Container Platform installation program generated.
Your machines have direct internet access or have an HTTP or HTTPS proxy available.
Procedure
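To monitor the bootstrap process, one common invocation is the following, where the numbered placeholders correspond to the callouts below:
$ ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1
    --log-level=info 2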
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
2 To view different installation details, specify warn, debug, or error instead of info.
Example output
The command succeeds when the Kubernetes API server signals that it has been bootstrapped
on the control plane machines.
2. After the bootstrap process is complete, remove the bootstrap machine from the load balancer.
IMPORTANT
You must remove the bootstrap machine from the load balancer at this point.
You can also remove or reformat the bootstrap machine itself.
Additional resources
See Monitoring installation progress for more information about monitoring the installation logs
and retrieving diagnostic data if installation issues arise.
Prerequisites
Procedure
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
2. Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
Prerequisites
Procedure
$ oc get nodes
Example output
NOTE
The preceding output might not include the compute nodes, also known as
worker nodes, until some CSRs are approved.
2. Review the pending CSRs and ensure that you see the client requests with the Pending or
Approved status for each machine that you added to the cluster:
$ oc get csr
Example output
In this example, two machines are joining the cluster. You might see more approved CSRs in the
list.
3. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in
Pending status, approve the CSRs for your cluster machines:
NOTE
Because the CSRs rotate automatically, approve your CSRs within an hour of
adding the machines to the cluster. If you do not approve them within an hour, the
certificates will rotate, and more than two certificates will be present for each
node. You must approve all of these certificates. After the client CSR is
approved, the Kubelet creates a secondary CSR for the serving certificate, which
requires manual approval. Then, subsequent serving certificate renewal requests
are automatically approved by the machine-approver if the Kubelet requests a
new certificate with identical parameters.
NOTE
For clusters running on platforms that are not machine API enabled, such as bare
metal and other user-provisioned infrastructure, you must implement a method
of automatically approving the kubelet serving certificate requests (CSRs). If a
request is not approved, then the oc exec, oc rsh, and oc logs commands
cannot succeed, because a serving certificate is required when the API server
connects to the kubelet. Any operation that contacts the Kubelet endpoint
requires this certificate approval to be in place. The method must watch for new
CSRs, confirm that the CSR was submitted by the node-bootstrapper service
account in the system:node or system:admin groups, and confirm the identity
of the node.
To approve them individually, run the following command for each valid CSR:
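For example, where <csr_name> is the name of a CSR from the oc get csr list:
$ oc adm certificate approve <csr_name>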
NOTE
Some Operators might not become available until some CSRs are approved.
4. Now that your client requests are approved, you must review the server requests for each
machine that you added to the cluster:
$ oc get csr
Example output
5. If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for
your cluster machines:
To approve them individually, run the following command for each valid CSR:
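For example, run oc adm certificate approve <csr_name> for each CSR. Alternatively, as a shortcut that is equivalent to approving them one by one, you can approve all pending CSRs at once:
$ oc get csr -o name | xargs oc adm certificate approve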
6. After all client and server CSRs have been approved, the machines have the Ready status.
Verify this by running the following command:
$ oc get nodes
Example output
NOTE
It can take a few minutes after approval of the server CSRs for the machines to
transition to the Ready status.
Additional information
Prerequisites
Procedure
Example output
Additional resources
See Gathering logs from a failed installation for details about gathering data in the event of a
failed OpenShift Container Platform installation.
See Troubleshooting Operator issues for steps to check Operator pod health across the cluster
and gather Operator logs for diagnosis.
After installation, you must edit the Image Registry Operator configuration to switch the
managementState from Removed to Managed. When this has completed, you must configure storage.
Instructions are shown for configuring a persistent volume, which is required for production clusters.
Where applicable, instructions are shown for configuring an empty directory as the storage location,
which is available for only non-production clusters.
Additional instructions are provided for allowing the image registry to use block storage types by using
the Recreate rollout strategy during upgrades.
2.15.2.1. Configuring registry storage for bare metal and other manual installations
As a cluster administrator, following installation you must configure your registry to use storage.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
You have a cluster that uses manually-provisioned Red Hat Enterprise Linux CoreOS (RHCOS)
nodes, such as bare metal.
You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data
Foundation.
IMPORTANT
Procedure
NOTE
When you use shared storage, review your security settings to prevent outside
access.
Example output
NOTE
If you do have a registry pod in your output, you do not need to continue with this
procedure.
$ oc edit configs.imageregistry.operator.openshift.io
Example output
storage:
pvc:
claim:
Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC.
Example output
5. Ensure that your registry is set to Managed to enable building and pushing of images.
Run:
$ oc edit configs.imageregistry/cluster
Then, change the line
managementState: Removed
to
managementState: Managed
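As a non-interactive equivalent to editing the resource, shown here only as a sketch, you can patch the managementState directly:
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'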
You must configure storage for the Image Registry Operator. For non-production clusters, you can set
the image registry to an empty directory. If you do so, all images are lost if you restart the registry.
Procedure
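To set the image registry storage to an empty directory, one common form of the patch command is the following; use it only on non-production clusters:
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'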
WARNING
If you run this command before the Image Registry Operator initializes its components, the oc
patch command fails with the following error:
To allow the image registry to use block storage types during upgrades as a cluster administrator, you
can use the Recreate rollout strategy.
IMPORTANT
Block storage volumes, or block persistent volumes, are supported but not recommended
for use with the image registry on production clusters. An installation where the registry is
configured on block storage is not highly available because the registry cannot have more
than one replica.
If you choose to use a block storage volume with the image registry, you must use a
filesystem persistent volume claim (PVC).
Procedure
1. Enter the following command to set the image registry storage as a block storage type, patch
the registry so that it uses the Recreate rollout strategy, and runs with only one (1) replica:
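A form of this patch that matches the description, assuming the default cluster resource name, is:
$ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}'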
2. Provision the PV for the block storage device, and create a PVC for that volume. The requested
block volume uses the ReadWriteOnce (RWO) access mode.
a. Create a pvc.yaml file with the following contents to define a VMware vSphere
PersistentVolumeClaim object:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: image-registry-storage 1
namespace: openshift-image-registry 2
spec:
accessModes:
- ReadWriteOnce 3
resources:
requests:
storage: 100Gi 4
3 The access mode of the persistent volume claim. With ReadWriteOnce, the volume
can be mounted with read and write permissions by a single node.
b. Enter the following command to create the PersistentVolumeClaim object from the file:
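For example, assuming the file is named pvc.yaml and the PVC belongs in the openshift-image-registry namespace shown above:
$ oc create -f pvc.yaml -n openshift-image-registry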
3. Enter the following command to edit the registry configuration so that it references the correct
PVC:
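One common form of this command is:
$ oc edit config.imageregistry.operator.openshift.io -o yaml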
Example output
storage:
pvc:
claim: 1
1 By creating a custom PVC, you can leave the claim field blank for the default automatic
creation of an image-registry-storage PVC.
Prerequisites
Procedure
1. Confirm that all the cluster components are online with the following command:
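A typical way to watch the cluster Operators come online is:
$ watch -n5 oc get clusteroperators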
Example output
Alternatively, the following command notifies you when the cluster is available. It also
retrieves and displays credentials:
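A representative invocation, using the installation directory placeholder explained in the callout below, is:
$ ./openshift-install --dir <installation_directory> wait-for install-complete 1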
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
Example output
The command succeeds when the Cluster Version Operator finishes deploying the OpenShift
Container Platform cluster from Kubernetes API server.
IMPORTANT
IMPORTANT
The Ignition config files that the installation program generates contain
certificates that expire after 24 hours, which are then renewed at that time. If
the cluster is shut down before renewing the certificates and the cluster is
later restarted after the 24 hours have elapsed, the cluster automatically
recovers the expired certificates. The exception is that you must manually
approve the pending node-bootstrapper certificate signing requests (CSRs)
to recover kubelet certificates. See the documentation for Recovering from
expired control plane certificates for more information.
It is recommended that you use Ignition config files within 12 hours after they
are generated because the 24-hour certificate rotates from 16 to 22 hours
after the cluster is installed. By using the Ignition config files within 12 hours,
you can avoid installation failure if the certificate update runs during
installation.
2. Confirm that the Kubernetes API server is communicating with the pods.
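a. List all pods, for example:
$ oc get pods --all-namespaces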
Example output
b. View the logs for a pod that is listed in the output of the previous command by using the
following command:
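For example, assuming <pod_name> and <namespace> are taken from the previous output:
$ oc logs <pod_name> -n <namespace> 1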
1 Specify the pod name and namespace, as shown in the output of the previous
command.
If the pod logs display, the Kubernetes API server can communicate with the cluster
machines.
3. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable
multipathing. Do not enable multipathing during installation.
See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine
configuration tasks documentation for more information.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained
automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to
track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
Additional resources
See About remote health monitoring for more information about the Telemetry service
CHAPTER 3. INSTALLING A USER-PROVISIONED BARE METAL CLUSTER WITH NETWORK CUSTOMIZATIONS
When you customize OpenShift Container Platform networking, you must set most of the network
configuration parameters during installation. You can modify only kubeProxy network configuration
parameters in a running cluster.
3.1. PREREQUISITES
You reviewed details about the OpenShift Container Platform installation and update
processes.
You read the documentation on selecting a cluster installation method and preparing it for
users.
If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow
the sites that your cluster requires access to.
Access OpenShift Cluster Manager to download the installation program and perform
subscription management. If the cluster has internet access and you do not disable Telemetry,
that service automatically entitles your cluster.
Access Quay.io to obtain the packages that are required to install your cluster.
IMPORTANT
If your cluster cannot have direct internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the required content and use it to populate a mirror registry with the
installation packages. With some installation types, the environment that you install your
cluster in will not require internet access. Before you update the cluster, you update the
content of the mirror registry.
Additional resources
See Installing a user-provisioned bare metal cluster on a restricted network for more
information about performing a restricted network installation on bare metal infrastructure that
you provision.
This section describes the requirements for deploying OpenShift Container Platform on user-
provisioned infrastructure.
Hosts Description
One temporary bootstrap machine The cluster requires the bootstrap machine to deploy
the OpenShift Container Platform cluster on the
three control plane machines. You can remove the
bootstrap machine after you install the cluster.
Three control plane machines The control plane machines run the Kubernetes and
OpenShift Container Platform services that form the
control plane.
At least two compute machines, which are also The workloads requested by OpenShift Container
known as worker machines. Platform users run on the compute machines.
NOTE
As an exception, you can run zero compute machines in a bare metal cluster that consists
of three control plane machines only. This provides smaller, more resource efficient
clusters for cluster administrators and developers to use for testing, development, and
production. Running one compute machine is not supported.
IMPORTANT
To maintain high availability of your cluster, use separate physical hosts for these cluster
machines.
The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the
operating system. However, the compute machines can use either Red Hat Enterprise Linux
CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 8.6 and later.
Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware
certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits .
1. One CPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-
Threading, is not enabled. When enabled, use the following formula to calculate the
corresponding ratio: (threads per core × cores) × sockets = CPUs.
2. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster
storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms
p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so
you might need to over-allocate storage volume to obtain sufficient performance.
3. As with all user-provisioned installations, if you choose to use RHEL compute machines in your
cluster, you take responsibility for all operating system life cycle management and maintenance,
including performing system updates, applying patches, and completing all other required tasks.
Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container
Platform 4.10 and later.
NOTE
As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2,
which updates the micro-architecture requirements. The following list contains the
minimum instruction set architectures (ISA) that each architecture requires:
If an instance type for your platform meets the minimum requirements for cluster machines, it is
supported for use with OpenShift Container Platform.
Additional resources
Optimizing storage
Because your cluster has limited access to automatic machine management when you use infrastructure
that you provision, you must provide a mechanism for approving cluster certificate signing requests
(CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The
machine-approver cannot guarantee the validity of a serving certificate that is requested by using
kubelet credentials because it cannot confirm that the correct machine issued the request. You must
determine and implement a method of verifying the validity of the kubelet serving certificate requests
and approving them.
Additional resources
See Configuring a three-node cluster for details about deploying three-node clusters in bare
metal environments.
See Approving the certificate signing requests for your machines for more information about
approving cluster certificate signing requests after installation.
During the initial boot, the machines require an IP address configuration that is set either through a
DHCP server or statically by providing the required boot options. After a network connection is
established, the machines download their Ignition config files from an HTTP or HTTPS server. The
Ignition config files are then used to set the exact state of each machine. The Machine Config Operator
completes more changes to the machines, such as the application of new certificates or keys, after
installation.
It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure
that the DHCP server is configured to provide persistent IP addresses, DNS server information, and
hostnames to the cluster machines.
NOTE
If a DHCP service is not available for your user-provisioned infrastructure, you can instead
provide the IP networking configuration and the address of the DNS server to the nodes
at RHCOS install time. These can be passed as boot arguments if you are installing from
an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform
bootstrap process section for more information about static IP provisioning and advanced
networking options.
The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API
servers and worker nodes are in different zones, you can configure a default DNS search zone to allow
the API server to resolve the node names. Another supported approach is to always refer to hosts by
their fully-qualified domain names in both the node objects and all DNS requests.
On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through
NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not
provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a
reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and
can take time to resolve. Other system services can start prior to this and detect the hostname as
localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node.
Additionally, setting the hostnames through DHCP can bypass any manual DNS record name
configuration errors in environments that have a DNS split-horizon implementation.
You must configure the network connectivity between machines to allow OpenShift Container Platform
cluster components to communicate. Each machine must be able to resolve the hostnames of all other
machines in the cluster.
This section provides details about the ports that are required.
IMPORTANT
In connected OpenShift Container Platform environments, all nodes are required to have
internet access to pull images for platform containers and provide telemetry data to Red
Hat.
9000-9999 Host level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099.
6081 Geneve
9000-9999 Host level services, including the node exporter on ports 9100-9101.
Table 3.5. Ports used for control plane machine to control plane machine communications
If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise
Linux CoreOS (RHCOS) machines reads the information and can synchronize the clock with the NTP servers.
Additional resources
Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control
plane machines, and the compute machines.
DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse
name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS
(RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are
provided by DHCP. Additionally, the reverse records are used to generate the certificate signing
requests (CSR) that OpenShift Container Platform needs to operate.
NOTE
It is recommended to use a DHCP server to provide the hostnames to each cluster node.
See the DHCP recommendations for user-provisioned infrastructure section for more
information.
The following DNS records are required for a user-provisioned OpenShift Container Platform cluster
and they must be in place before installation. In each record, <cluster_name> is the cluster name and
<base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS
record takes the form: <component>.<cluster_name>.<base_domain>..
Kubernetes API
api.<cluster_name>.<base_domain>.
A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster.
IMPORTANT
Bootstrap machine
bootstrap.<cluster_name>.<base_domain>.
A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster.
Control plane machines
<control_plane><n>.<cluster_name>.<base_domain>.
DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster.
Compute machines
<compute><n>.<cluster_name>.<base_domain>.
DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster.
NOTE
In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and
SRV records in your DNS configuration.
TIP
You can use the dig command to verify name and reverse name resolution. See the section on
Validating DNS resolution for user-provisioned infrastructure for detailed validation steps.
This section provides A and PTR record configuration samples that meet the DNS requirements for
deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant
to provide advice for choosing one DNS solution over another.
In the examples, the cluster name is ocp4 and the base domain is example.com.
$TTL 1W
@ IN SOA ns1.example.com. root (
2019070700 ; serial
3H ; refresh (3 hours)
30M ; retry (30 minutes)
2W ; expiry (2 weeks)
1W ) ; minimum (1 week)
IN NS ns1.example.com.
IN MX 10 smtp.example.com.
;
;
ns1.example.com. IN A 192.168.1.5
smtp.example.com. IN A 192.168.1.5
;
helper.example.com. IN A 192.168.1.5
helper.ocp4.example.com. IN A 192.168.1.5
;
api.ocp4.example.com. IN A 192.168.1.5 1
api-int.ocp4.example.com. IN A 192.168.1.5 2
;
*.apps.ocp4.example.com. IN A 192.168.1.5 3
;
bootstrap.ocp4.example.com. IN A 192.168.1.96 4
;
control-plane0.ocp4.example.com. IN A 192.168.1.97 5
control-plane1.ocp4.example.com. IN A 192.168.1.98 6
control-plane2.ocp4.example.com. IN A 192.168.1.99 7
;
compute0.ocp4.example.com. IN A 192.168.1.11 8
compute1.ocp4.example.com. IN A 192.168.1.7 9
;
;EOF
1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API
load balancer.
2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API
load balancer and is used for internal cluster communications.
3 Provides name resolution for the wildcard routes. The record refers to the IP address of the
application ingress load balancer. The application ingress load balancer targets the machines
that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines
by default.
NOTE
In the example, the same load balancer is used for the Kubernetes API and
application ingress traffic. In production scenarios, you can deploy the API and
application ingress load balancers separately so that you can scale the load
balancer infrastructure for each in isolation.
$TTL 1W
@ IN SOA ns1.example.com. root (
2019070700 ; serial
3H ; refresh (3 hours)
30M ; retry (30 minutes)
2W ; expiry (2 weeks)
1W ) ; minimum (1 week)
IN NS ns1.example.com.
;
5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1
5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2
;
96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3
;
97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4
98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5
99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6
;
11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7
7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8
;
;EOF
1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record
name of the API load balancer.
2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record
name of the API load balancer and is used for internal cluster communications.
NOTE
A PTR record is not required for the OpenShift Container Platform application wildcard.
NOTE
If you want to deploy the API and application Ingress load balancers with a Red Hat
Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately.
1. API load balancer: Provides a common endpoint for users, both human and machine, to interact
with and configure the platform. Configure the following conditions:
Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode.
A stateless load balancing algorithm. The options vary based on the load balancer
implementation.
IMPORTANT
Configure the following ports on both the front and back of the load balancers:
NOTE
2. Application Ingress load balancer: Provides an ingress point for application traffic flowing in
from outside the cluster. A working configuration for the Ingress router is required for an
OpenShift Container Platform cluster.
Configure the following conditions:
Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode.
TIP
If the true IP address of the client can be seen by the application Ingress load balancer, enabling
source IP-based session persistence can improve performance for applications that use end-
to-end TLS encryption.
Configure the following ports on both the front and back of the load balancers:
NOTE
If you are deploying a three-node cluster with zero compute nodes, the Ingress
Controller pods run on the control plane nodes. In three-node cluster
deployments, you must configure your application Ingress load balancer to route
HTTP and HTTPS traffic to the control plane nodes.
This section provides an example API and application Ingress load balancer configuration that meets the
load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg
configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing
one load balancing solution over another.
In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In
production scenarios, you can deploy the API and application ingress load balancers separately so that
you can scale the load balancer infrastructure for each in isolation.
NOTE
If you are using HAProxy as a load balancer and SELinux is set to enforcing, you must
ensure that the HAProxy service can bind to the configured TCP port by running
setsebool -P haproxy_connect_any=1.
Example 3.3. Sample API and application Ingress load balancer configuration
global
log 127.0.0.1 local2
pidfile /var/run/haproxy.pid
maxconn 4000
daemon
defaults
mode http
log global
option dontlognull
option http-server-close
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
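# Illustrative sketch of the listen stanzas that the numbered callouts below refer to;
# host names follow the DNS examples in this chapter and health-check options are simplified.
listen api-server-6443 1
    bind *:6443
    mode tcp
    server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 2
    server control-plane0 control-plane0.ocp4.example.com:6443 check inter 1s
    server control-plane1 control-plane1.ocp4.example.com:6443 check inter 1s
    server control-plane2 control-plane2.ocp4.example.com:6443 check inter 1s
listen machine-config-server-22623 3
    bind *:22623
    mode tcp
    server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4
    server control-plane0 control-plane0.ocp4.example.com:22623 check inter 1s
    server control-plane1 control-plane1.ocp4.example.com:22623 check inter 1s
    server control-plane2 control-plane2.ocp4.example.com:22623 check inter 1s
listen ingress-router-443 5
    bind *:443
    mode tcp
    balance source
    server compute0 compute0.ocp4.example.com:443 check inter 1s
    server compute1 compute1.ocp4.example.com:443 check inter 1s
listen ingress-router-80 6
    bind *:80
    mode tcp
    balance source
    server compute0 compute0.ocp4.example.com:80 check inter 1s
    server compute1 compute1.ocp4.example.com:80 check inter 1s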
1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines.
2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster
installation and they must be removed after the bootstrap process is complete.
3 Port 22623 handles the machine config server traffic and points to the control plane machines.
5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller
pods. The Ingress Controller pods run on the compute machines by default.
6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller
pods. The Ingress Controller pods run on the compute machines by default.
NOTE
If you are deploying a three-node cluster with zero compute nodes, the Ingress
Controller pods run on the control plane nodes. In three-node cluster
deployments, you must configure your application Ingress load balancer to route
HTTP and HTTPS traffic to the control plane nodes.
TIP
If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports
6443, 22623, 443, and 80 by running netstat -nltupe on the HAProxy node.
As an alternative to using the configure-ovs.sh shell script to set a br-ex bridge on a bare-metal
platform, you can create a MachineConfig object that includes an NMState configuration file. The
NMState configuration file creates a customized br-ex bridge network configuration on each node in
your cluster.
IMPORTANT
For more information about the support scope of Red Hat Technology Preview features,
see Technology Preview Features Support Scope .
Consider the following use cases for creating a manifest object that includes a customized br-ex bridge:
You want to make postinstallation changes to the bridge, such as changing the Open vSwitch
(OVS) or OVN-Kubernetes br-ex bridge network. The configure-ovs.sh shell script does not
support making postinstallation changes to the bridge.
You want to deploy the bridge on a different interface than the interface available on a host or
server IP address.
You want to make advanced configurations to the bridge that are not possible with the
configure-ovs.sh shell script. Using the script for these configurations might result in the
bridge failing to connect multiple network interfaces and facilitating data forwarding between
the interfaces.
NOTE
If you require an environment with a single network interface controller (NIC) and default
network settings, use the configure-ovs.sh shell script.
After you install Red Hat Enterprise Linux CoreOS (RHCOS) and the system reboots, the Machine
Config Operator injects Ignition configuration files into each node in your cluster, so that each node
receives the br-ex bridge network configuration. To prevent configuration conflicts, the configure-
ovs.sh shell script receives a signal to not configure the br-ex bridge.
Prerequisites
Optional: You have installed the nmstate API so that you can validate the NMState
configuration.
Procedure
1. Create an NMState configuration file that defines your customized br-ex bridge network. You
base64-encode the contents of this file in a later step:
interfaces:
- name: enp2s0 1
type: ethernet 2
state: up 3
ipv4:
enabled: false 4
ipv6:
enabled: false
- name: br-ex
type: ovs-bridge
state: up
ipv4:
enabled: false
dhcp: false
ipv6:
enabled: false
dhcp: false
bridge:
port:
- name: enp2s0 5
- name: br-ex
- name: br-ex
type: ovs-interface
state: up
copy-mac-from: enp2s0
ipv4:
enabled: true
dhcp: true
ipv6:
enabled: false
dhcp: false
# ...
2. Use the cat command to base64-encode the contents of the NMState configuration:
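For example, assuming your NMState file is named as described in the callout below:
$ cat <nmstate_configuration>.yaml | base64 1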
1 Replace <nmstate_configuration> with the name of your NMState resource YAML file.
3. Create a MachineConfig manifest file and define a customized br-ex bridge network
configuration analogous to the following example:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
labels:
machineconfiguration.openshift.io/role: worker 1
name: 10-br-ex-worker 2
spec:
config:
ignition:
version: 3.2.0
storage:
files:
- contents:
source: data:text/plain;charset=utf-8;base64,
<base64_encoded_nmstate_configuration> 3
mode: 0644
overwrite: true
path: /etc/nmstate/openshift/cluster.yml
# ...
1 For each node in your cluster, specify the hostname path to your node and the base-64
encoded Ignition configuration file data for the machine type. If you have a single global
configuration specified in an /etc/nmstate/openshift/cluster.yml configuration file that
you want to apply to all nodes in your cluster, you do not need to specify the hostname
path for each node. The worker role is the default role for nodes in your cluster. The .yaml
extension does not work when specifying the hostname path for each node or all nodes in
the MachineConfig manifest file.
After you configure these resources, you must scale machine sets, so that the machine sets can apply
the resource configuration to each compute node and reboot the nodes.
Prerequisites
You created a MachineConfig manifest object that includes a customized br-ex bridge
configuration.
Procedure
$ oc edit mc <machineconfig_custom_resource_name>
2. Add each compute node configuration to the CR, so that the CR can manage roles for each
defined compute node in your cluster.
3. Create a Secret object named extraworker-secret that has a minimal static IP configuration.
4. Apply the extraworker-secret secret to each node in your cluster by entering the following
command. This step provides each compute node access to the Ignition config file.
$ oc apply -f ./extraworker-secret.yaml
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
spec:
# ...
preprovisioningNetworkDataName: ostest-extraworker-0-network-config-secret
# ...
$ oc project openshift-machine-api
$ oc get machinesets
8. Scale each machine set by entering the following command. You must run this command for
each machine set.
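A typical form of the command, with the placeholders explained in the callout below, is:
$ oc scale machineset <machineset_name> --replicas=<n> 1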
1 Where <machineset_name> is the name of the machine set and <n> is the number of
compute nodes.
This section provides details about the high-level steps required to set up your cluster infrastructure in
preparation for an OpenShift Container Platform installation. This includes configuring IP networking
and network connectivity for your cluster nodes, enabling the required ports through your firewall, and
setting up the required DNS and load balancing infrastructure.
After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements
for a cluster with user-provisioned infrastructure section.
Prerequisites
You have reviewed the OpenShift Container Platform 4.x Tested Integrations page.
You have reviewed the infrastructure requirements detailed in the Requirements for a cluster
with user-provisioned infrastructure section.
Procedure
1. If you are using DHCP to provide the IP networking configuration to your cluster nodes,
configure your DHCP service.
a. Add persistent IP addresses for the nodes to your DHCP server configuration. In your
configuration, match the MAC address of the relevant network interface to the intended IP
address for each node.
b. When you use DHCP to configure IP addressing for the cluster machines, the machines also
obtain the DNS server information through DHCP. Define the persistent DNS server
address that is used by the cluster nodes through your DHCP server configuration.
NOTE
If you are not using a DHCP service, you must provide the IP networking
configuration and the address of the DNS server to the nodes at RHCOS
install time. These can be passed as boot arguments if you are installing from
an ISO image. See the Installing RHCOS and starting the OpenShift
Container Platform bootstrap process section for more information about
static IP provisioning and advanced networking options.
c. Define the hostnames of your cluster nodes in your DHCP server configuration. See the
Setting the cluster node hostnames through DHCP section for details about hostname
considerations.
NOTE
If you are not using a DHCP service, the cluster nodes obtain their hostname
through a reverse DNS lookup.
2. Ensure that your network infrastructure provides the required network connectivity between
the cluster components. See the Networking requirements for user-provisioned infrastructure
section for details about the requirements.
3. Configure your firewall to enable the ports required for the OpenShift Container Platform
cluster components to communicate. See Networking requirements for user-provisioned
infrastructure section for details about the ports that are required.
IMPORTANT
Avoid using the Ingress load balancer to expose this port, because doing so
might result in the exposure of sensitive information, such as statistics and
metrics, related to Ingress Controllers.
a. Configure DNS name resolution for the Kubernetes API, the application wildcard, the
bootstrap machine, the control plane machines, and the compute machines.
b. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the
control plane machines, and the compute machines.
See the User-provisioned DNS requirements section for more information about the
OpenShift Container Platform DNS requirements.
a. From your installation node, run DNS lookups against the record names of the Kubernetes
API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the
responses correspond to the correct components.
b. From your installation node, run reverse DNS lookups against the IP addresses of the load
balancer and the cluster nodes. Validate that the record names in the responses correspond
to the correct components.
See the Validating DNS resolution for user-provisioned infrastructure section for detailed
DNS validation steps.
6. Provision the required API and application ingress load balancing infrastructure. See the Load
balancing requirements for user-provisioned infrastructure section for more information about
the requirements.
NOTE
Some load balancing solutions require the DNS name resolution for the cluster nodes to
be in place before the load balancing is initialized.
Additional resources
Installing RHCOS and starting the OpenShift Container Platform bootstrap process
IMPORTANT
The validation steps detailed in this section must succeed before you install your cluster.
Prerequisites
You have configured the required DNS records for your user-provisioned infrastructure.
Procedure
1. From your installation node, run DNS lookups against the record names of the Kubernetes API,
the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the
responses correspond to the correct components.
a. Perform a lookup against the Kubernetes API record name. Check that the result points to
the IP address of the API load balancer:
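For example, using dig against your DNS server (the nameserver IP address is a placeholder) and the example cluster name and base domain used elsewhere in this chapter:
$ dig +noall +answer @<nameserver_ip> api.ocp4.example.com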
Example output
b. Perform a lookup against the Kubernetes internal API record name. Check that the result
points to the IP address of the API load balancer:
Example output
c. Perform a lookup against an example *.apps wildcard record, such as
random.apps.ocp4.example.com. Check that the result points to the IP address of the
application ingress load balancer:
Example output
NOTE
In the example outputs, the same load balancer is used for the Kubernetes
API and application ingress traffic. In production scenarios, you can deploy
the API and application ingress load balancers separately so that you can
scale the load balancer infrastructure for each in isolation.
You can replace random with another wildcard value. For example, you can query the route
to the OpenShift Container Platform console:
Example output
d. Run a lookup against the bootstrap DNS record name. Check that the result points to the IP
address of the bootstrap node:
Example output
e. Use this method to perform lookups against the DNS record names for the control plane
and compute nodes. Check that the results correspond to the IP addresses of each node.
2. From your installation node, run reverse DNS lookups against the IP addresses of the load
balancer and the cluster nodes. Validate that the record names contained in the responses
correspond to the correct components.
a. Perform a reverse lookup against the IP address of the API load balancer. Check that the
response includes the record names for the Kubernetes API and the Kubernetes internal
API:
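For example, using the example load balancer address from this chapter:
$ dig +noall +answer @<nameserver_ip> -x 192.168.1.5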
Example output
NOTE
b. Perform a reverse lookup against the IP address of the bootstrap node. Check that the
result points to the DNS record name of the bootstrap node:
Example output
c. Use this method to perform reverse lookups against the IP addresses for the control plane
and compute nodes. Check that the results correspond to the DNS record names of each
node.
Additional resources
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user
core. To access the nodes through SSH, the private key identity must be managed by SSH for your local
user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you
must provide the SSH public key during the installation process. The ./openshift-install gather
command also requires the SSH public key to be in place on the cluster nodes.
IMPORTANT
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
NOTE
You must use a local key, not one that you configured with platform-specific approaches
such as AWS key pairs.
Procedure
1. If you do not have an existing SSH key pair on your local machine to use for authentication onto
your cluster nodes, create one. For example, on a computer that uses a Linux operating system,
run the following command:
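For example, a minimal invocation, assuming the ed25519 algorithm and an empty passphrase (adjust both to your requirements):
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1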
1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory.
NOTE
If you plan to install an OpenShift Container Platform cluster that uses the RHEL
cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3
Validation on only the x86_64, ppc64le, and s390x architectures, do not create a
key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or
ecdsa algorithm.
2. View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
3. Add the SSH private key identity to the SSH agent for your local user, if it has not already been
added. SSH agent management of the key is required for password-less SSH authentication
onto your cluster nodes, or if you want to use the ./openshift-install gather command.
a. If the ssh-agent process is not already running for your local user, start it as a background
task:
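For example, one common way to start the agent in the current shell, assuming a Bourne-compatible shell:
$ eval "$(ssh-agent -s)"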
Example output
$ ssh-add <path>/<file_name> 1
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519
Example output
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installation
program.
Additional resources
Prerequisites
You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
1. Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat
account, log in with your credentials. If you do not, create an account.
2. Select your infrastructure provider from the Run it yourself section of the page.
3. Select your host operating system and architecture from the dropdown menus under
OpenShift Installer and click Download Installer.
4. Place the downloaded file in the directory where you want to store the installation configuration
files.
IMPORTANT
The installation program creates several files on the computer that you use
to install your cluster. You must keep the installation program and the files
that the installation program creates after you finish installing the cluster.
Both of the files are required to delete the cluster.
Deleting the files created by the installation program does not remove your
cluster, even if the cluster failed during installation. To remove your cluster,
complete the OpenShift Container Platform uninstallation procedures for
your specific cloud provider.
5. Extract the installation program. For example, on a computer that uses a Linux operating
system, run the following command:
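For example, assuming the downloaded archive is named openshift-install-linux.tar.gz (the file name is an assumption):
$ tar -xvf openshift-install-linux.tar.gz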
6. Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret
allows you to authenticate with the services that are provided by the included authorities,
including Quay.io, which serves the container images for OpenShift Container Platform
components.
TIP
Alternatively, you can retrieve the installation program from the Red Hat Customer Portal, where you
can specify a version of the installation program to download. However, you must have an active
subscription to access this page.
IMPORTANT
If you installed an earlier version of oc, you cannot use it to complete all of the commands
in OpenShift Container Platform 4.17. Download and install the new version of oc.
Procedure
1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer
Portal.
4. Click Download Now next to the OpenShift v4.17 Linux Clients entry and save the file.
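For example, assuming the downloaded archive is named openshift-client-linux.tar.gz and that /usr/local/bin is on your PATH (both are assumptions), unpack the oc binary, place it on your PATH, and then confirm your PATH with the echo command that follows:
$ tar xvf openshift-client-linux.tar.gz
$ sudo mv oc /usr/local/bin/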
$ echo $PATH
Verification
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Procedure
1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer
Portal.
3. Click Download Now next to the OpenShift v4.17 Windows Client entry and save the file.
C:\> path
Verification
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Procedure
1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer
Portal.
3. Click Download Now next to the OpenShift v4.17 macOS Clients entry and save the file.
NOTE
For macOS arm64, choose the OpenShift v4.17 macOS arm64 Client entry.
$ echo $PATH
Verification
$ oc <command>
Prerequisites
You have an SSH public key on your local machine to provide to the installation program. The
key will be used for SSH authentication onto your cluster nodes for debugging and disaster
recovery.
You have obtained the OpenShift Container Platform installation program and the pull secret
for your cluster.
Procedure
$ mkdir <installation_directory>
IMPORTANT
You must create a directory. Some installation assets, like bootstrap X.509
certificates have short expiration intervals, so you must not reuse an installation
directory. If you want to reuse individual files from another cluster installation,
you can copy them into your directory. However, the file names for the
installation assets might change between releases. Use caution when copying
installation files from an earlier OpenShift Container Platform version.
2. Customize the sample install-config.yaml file template that is provided and save it in the
<installation_directory>.
3. Back up the install-config.yaml file so that you can use it to install multiple clusters.
IMPORTANT
The install-config.yaml file is consumed during the next step of the installation
process. You must back it up now.
Additional resources
apiVersion: v1
baseDomain: example.com 1
compute: 2
- hyperthreading: Enabled 3
  name: worker
  replicas: 0 4
controlPlane: 5
  hyperthreading: Enabled 6
  name: master
  replicas: 3 7
metadata:
  name: test 8
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14 9
    hostPrefix: 23 10
  networkType: OVNKubernetes 11
  serviceNetwork: 12
  - 172.30.0.0/16
platform:
  none: {} 13
fips: false 14
pullSecret: '{"auths": ...}' 15
sshKey: 'ssh-ed25519 AAAA...' 16
1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the
cluster name.
2 5 The controlPlane section is a single mapping, but the compute section is a sequence of
mappings. To meet the requirements of the different data structures, the first line of the compute
section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only
one control plane pool is used.
4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you.
NOTE
If you are installing a three-node cluster, do not deploy any compute machines when
you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines.
7 The number of control plane machines that you add to the cluster. Because the cluster uses these
values as the number of etcd endpoints in the cluster, the value must match the number of control
plane machines that you deploy.
9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap
with existing physical networks. These IP addresses are used for the pod network. If you need to
access the pods from an external network, you must configure load balancers and routers to
manage the traffic.
NOTE
The Class E CIDR range is reserved for future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range.
10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23,
then each node is assigned a /23 subnet out of the given cidr, which allows for 510 (2^(32 - 23) - 2)
pod IP addresses. If you are required to provide access to nodes from an external network,
configure load balancers and routers to manage the traffic.
11 The cluster network plugin to install. The default value OVNKubernetes is the only supported
value.
12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This
block must not overlap with existing physical networks. If you need to access the services from an
external network, configure load balancers and routers to manage the traffic.
13 You must set the platform to none. You cannot provide additional platform configuration variables
for your platform.
IMPORTANT
Clusters that are installed with the platform type none are unable to use some
features, such as managing compute machines with the Machine API. This limitation
applies even if the compute machines that are attached to the cluster are installed
on a platform that would normally support the feature. This parameter cannot be
changed after installation.
14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is
enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container
Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography
modules that are provided with RHCOS instead.
IMPORTANT
To enable FIPS mode for your cluster, you must run the installation program from a
Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode.
For more information about configuring FIPS mode on RHEL, see Switching RHEL
to FIPS mode.
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux
CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core
components use the RHEL cryptographic libraries that have been submitted to NIST
for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x
architectures.
15 The pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to
authenticate with the services that are provided by the included authorities, including Quay.io,
which serves the container images for OpenShift Container Platform components.
16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS).
NOTE
For production OpenShift Container Platform clusters on which you want to perform
installation debugging or disaster recovery, specify an SSH key that your ssh-agent
process uses.
Additional resources
See Load balancing requirements for user-provisioned infrastructure for more information on
the API and application ingress load balancing requirements.
Phase 1
You can customize the following network-related fields in the install-config.yaml file before you
create the manifest files:
networking.networkType
networking.clusterNetwork
networking.serviceNetwork
networking.machineNetwork
For more information, see "Installation configuration parameters".
IMPORTANT
The CIDR range 172.17.0.0/16 is reserved by libVirt. You cannot use any
other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for
networks in your cluster.
Phase 2
After creating the manifest files by running openshift-install create manifests, you can define a
customized Cluster Network Operator manifest with only the fields you want to modify. You can use
the manifest to specify an advanced network configuration.
During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml
file. However, you can customize the network plugin during phase 2.
You can specify advanced network configuration only before you install the cluster.
Prerequisites
You have created the install-config.yaml file and completed any modifications to it.
Procedure
1. Change to the directory that contains the installation program and create the manifests:
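For example, assuming you run the installation program from its extracted location:
$ ./openshift-install create manifests --dir <installation_directory> 1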
1 <installation_directory> specifies the name of the directory that contains the install-
config.yaml file for your cluster.
2. Create a stub manifest file for the advanced network configuration that is named cluster-
network-03-config.yml in the <installation_directory>/manifests/ directory:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
3. Specify the advanced network configuration for your cluster in the cluster-network-03-
config.yml file, such as in the following example:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      ipsecConfig:
        mode: Full
5. Remove the Kubernetes manifest files that define the control plane machines and compute
MachineSets:
$ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-
cluster-api_worker-machineset-*.yaml
Because you create and manage these resources yourself, you do not have to initialize them.
You can preserve the MachineSet files to create compute machines by using the machine
API, but you must update references to them to match your environment.
The CNO configuration inherits the following fields during cluster installation from the Network API in
the Network.config.openshift.io API group:
clusterNetwork
IP address pools from which pod IP addresses are allocated.
serviceNetwork
IP address pool for services.
defaultNetwork.type
Cluster network plugin. OVNKubernetes is the only supported plugin during installation.
You can specify the cluster network plugin configuration for your cluster by setting the fields for the
defaultNetwork object in the CNO object named cluster.
metadata.name string The name of the CNO object. This name is always cluster.
spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example:
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/19
    hostPrefix: 23
  - cidr: 10.128.32.0/19
    hostPrefix: 23
spec.serviceNetwork array A block of IP addresses for services. For example:
spec:
  serviceNetwork:
  - 172.30.0.0/14
spec.defaultNetwork object Configures the network plugin for the cluster network.
spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect.
IMPORTANT
For a cluster that needs to deploy objects across multiple networks, ensure that you
specify the same value for the clusterNetwork.hostPrefix parameter for each network
type that is defined in the install-config.yaml file. Setting a different value for each
clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin,
where the plugin cannot effectively route object traffic among different nodes.
mtu integer The maximum transmission unit (MTU) for the Geneve (Generic
Network Virtualization Encapsulation) overlay network. This is
detected automatically based on the MTU of the primary
network interface. You do not normally need to override the
detected MTU.
genevePort integer The port to use for all Geneve packets. The default value is
6081. This value cannot be changed after cluster installation.
maxFileSize integer The maximum size for the audit log in bytes. The default value is
50000000 or 50 MB.
maxLogFiles integer The maximum number of log files that are retained.
destination string One of the following additional audit log targets:
libc
The libc syslog() function of the journald process on the
host.
udp:<host>:<port>
A syslog server. Replace <host>:<port> with the host and
port of the syslog server.
unix:<file>
A Unix Domain Socket file specified by <file> .
null
Do not send the audit logs to any additional target.
syslogFacility string The syslog facility, such as kern, as defined by RFC5424. The
default value is local0.
routingViaHost boolean Set this field to true to send egress traffic from pods to the
host networking stack. For highly-specialized installations and
applications that rely on manually configured routes in the
kernel routing table, you might want to route egress traffic to
the host networking stack. By default, egress traffic is processed
in OVN to exit the cluster and is not affected by specialized
routes in the kernel routing table. The default value is false.
ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes
managed interfaces by using the ipForwarding specification in
the Network resource. Specify Restricted to only allow IP
forwarding for Kubernetes related traffic. Specify Global to
allow forwarding of all IP traffic. For new installations, the default
is Restricted . For updates to OpenShift Container Platform
4.14 or later, the default is Global.
internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29.
internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125.
defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    mtu: 1400
    genevePort: 6081
    ipsecConfig:
      mode: Full
IMPORTANT
The Ignition config files that the installation program generates contain
certificates that expire after 24 hours, which are then renewed at that time. If the
cluster is shut down before renewing the certificates and the cluster is later
restarted after the 24 hours have elapsed, the cluster automatically recovers the
expired certificates. The exception is that you must manually approve the
pending node-bootstrapper certificate signing requests (CSRs) to recover
kubelet certificates. See the documentation for Recovering from expired control
plane certificates for more information.
It is recommended that you use Ignition config files within 12 hours after they are
generated because the 24-hour certificate rotates from 16 to 22 hours after the
cluster is installed. By using the Ignition config files within 12 hours, you can avoid
installation failure if the certificate update runs during installation.
Prerequisites
Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.
Procedure
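1. Change to the directory that contains the installation program and create the Ignition config files. For example, assuming the program is run from its extracted location:
$ ./openshift-install create ignition-configs --dir <installation_directory> 1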
1 For <installation_directory>, specify the directory name to store the files that the
installation program creates.
IMPORTANT
If you created an install-config.yaml file, specify the directory that contains it.
Otherwise, specify an empty directory. Some installation assets, like bootstrap
X.509 certificates have short expiration intervals, so you must not reuse an
installation directory. If you want to reuse individual files from another cluster
installation, you can copy them into your directory. However, the file names for
the installation assets might change between releases. Use caution when copying
installation files from an earlier OpenShift Container Platform version.
The following files are generated in the directory:
.
├── auth
│ ├── kubeadmin-password
│ └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
To install RHCOS on the machines, follow either the steps to use an ISO image or network PXE booting.
NOTE
The compute node deployment steps included in this installation document are RHCOS-
specific. If you choose instead to deploy RHEL-based compute nodes, you take
responsibility for all operating system life cycle management and maintenance, including
performing system updates, applying patches, and completing all other required tasks.
Only RHEL 8 compute machines are supported.
You can configure RHCOS during ISO and PXE installations by using the following methods:
Kernel arguments: You can use kernel arguments to provide installation-specific information.
For example, you can specify the locations of the RHCOS installation files that you uploaded to
your HTTP server and the location of the Ignition config file for the type of node you are
installing. For a PXE installation, you can use the APPEND parameter to pass the arguments to
the kernel of the live installer. For an ISO installation, you can interrupt the live installation boot
process to add the kernel arguments. In both installation cases, you can use special
coreos.inst.* arguments to direct the live installer, as well as standard installation boot
arguments for turning standard kernel services on or off.
Ignition configs: OpenShift Container Platform Ignition config files (*.ign) are specific to the
type of node you are installing. You pass the location of a bootstrap, control plane, or compute
node Ignition config file during the RHCOS installation so that it takes effect on first boot. In
special cases, you can create a separate, limited Ignition config to pass to the live system. That
Ignition config could do a certain set of tasks, such as reporting success to a provisioning system
after completing installation. This special Ignition config is consumed by the coreos-installer to
be applied on first boot of the installed system. Do not provide the standard control plane and
compute node Ignition configs to the live ISO directly.
coreos-installer: You can boot the live ISO installer to a shell prompt, which allows you to
prepare the permanent system in a variety of ways before first boot. In particular, you can run
the coreos-installer command to identify various artifacts to include, work with disk partitions,
and set up networking. In some cases, you can configure features on the live system and copy
them to the installed system.
Whether to use an ISO or PXE install depends on your situation. A PXE install requires an available DHCP
service and more preparation, but can make the installation process more automated. An ISO install is a
more manual process and can be inconvenient if you are setting up more than a few machines.
NOTE
As of OpenShift Container Platform 4.6, the RHCOS ISO and other installation artifacts
provide support for installation on disks with 4K sectors.
Prerequisites
You have created the Ignition config files for your cluster.
You have configured suitable network, DNS and load balancing infrastructure.
You have an HTTP server that can be accessed from your computer, and from the machines
that you create.
You have reviewed the Advanced RHCOS installation configuration section for different ways to
configure features, such as networking and disk partitioning.
Procedure
1. Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the
following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition
config file:
$ sha512sum <installation_directory>/bootstrap.ign
The digests are provided to the coreos-installer in a later step to validate the authenticity of
the Ignition config files on the cluster nodes.
2. Upload the bootstrap, control plane, and compute node Ignition config files that the installation
program created to your HTTP server. Note the URLs of these files.
IMPORTANT
You can add or change configuration settings in your Ignition configs before
saving them to your HTTP server. If you plan to add more compute machines to
your cluster after you finish installation, do not delete these files.
3. From the installation host, validate that the Ignition config files are available on the URLs. The
following example gets the Ignition config file for the bootstrap node:
$ curl -k http://<HTTP_server>/bootstrap.ign 1
Example output
Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the
Ignition config files for the control plane and compute nodes are also available.
4. Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images is from the output of the openshift-install command:
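For example, you can print the installer's embedded RHCOS stream metadata and filter it for ISO locations (the grep pattern is one possible filter):
$ openshift-install coreos print-stream-json | grep '\.iso[^.]'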
Example output
"location": "<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-
<release>-live.aarch64.iso",
"location": "<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-
<release>-live.ppc64le.iso",
"location": "<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-
live.s390x.iso",
"location": "<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-
live.x86_64.iso",
IMPORTANT
The RHCOS images might not change with every release of OpenShift Container
Platform. You must download images with the highest version that is less than or
equal to the OpenShift Container Platform version that you install. Use the image
versions that match your OpenShift Container Platform version if they are
available. Use only ISO images for this procedure. RHCOS qcow2 images are not
supported for this installation type.
The ISO file name resembles the following example:
rhcos-<version>-live.<architecture>.iso
5. Use the ISO to start the RHCOS installation. Use one of the following installation options:
6. Boot the RHCOS ISO image without specifying any options or interrupting the live boot
sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment.
7. Run the coreos-installer command and specify the options that meet your installation
requirements. At a minimum, you must specify the URL that points to the Ignition config file for
the node type, and the device that you are installing to:
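A sketch of the command, assuming the Ignition config file is served over HTTP; replace <node_type>, <device>, and <digest> with your values:
$ sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2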
1 1 You must run the coreos-installer command by using sudo, because the core user does
not have the required root privileges to perform the installation.
2 The --ignition-hash option is required when the Ignition config file is obtained through an
HTTP URL to validate the authenticity of the Ignition config file on the cluster node.
<digest> is the Ignition config file SHA512 digest obtained in a preceding step.
NOTE
If you want to provide your Ignition config files through an HTTPS server that
uses TLS, you can add the internal certificate authority (CA) to the system trust
store before running coreos-installer.
The following example initializes a bootstrap node installation to the /dev/sda device. The
Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP
address 192.168.1.2:
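A sketch of such an invocation; the path on the web server and the digest are placeholders:
$ sudo coreos-installer install --ignition-url=http://192.168.1.2/bootstrap.ign /dev/sda --ignition-hash=sha512-<digest>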
8. Monitor the progress of the RHCOS installation on the console of the machine.
IMPORTANT
Be sure that the installation is successful on each node before commencing with
the OpenShift Container Platform installation. Observing the installation process
can also help to determine the cause of RHCOS installation issues that might
arise.
9. After RHCOS installs, you must reboot the system. During the system reboot, it applies the
Ignition config file that you specified.
Example command
IMPORTANT
You must create the bootstrap and control plane machines at this time. If the
control plane machines are not made schedulable, also create at least two
compute machines before you install OpenShift Container Platform.
If the required network, DNS, and load balancer infrastructure are in place, the OpenShift
Container Platform bootstrap process begins automatically after the RHCOS nodes have
rebooted.
NOTE
RHCOS nodes do not include a default password for the core user. You can
access the nodes by running ssh core@<node>.<cluster_name>.
<base_domain> as a user with access to the SSH private key that is paired to
the public key that you specified in your install-config.yaml file. OpenShift
Container Platform 4 cluster nodes running RHCOS are immutable and rely on
Operators to apply cluster changes. Accessing cluster nodes by using SSH is not
recommended. However, when investigating installation issues, if the OpenShift
Container Platform API is not available, or the kubelet is not properly functioning
on a target node, SSH access might be required for debugging or disaster
recovery.
Prerequisites
You have created the Ignition config files for your cluster.
You have configured suitable network, DNS and load balancing infrastructure.
You have an HTTP server that can be accessed from your computer, and from the machines
that you create.
You have reviewed the Advanced RHCOS installation configuration section for different ways to
configure features, such as networking and disk partitioning.
Procedure
1. Upload the bootstrap, control plane, and compute node Ignition config files that the installation
program created to your HTTP server. Note the URLs of these files.
IMPORTANT
You can add or change configuration settings in your Ignition configs before
saving them to your HTTP server. If you plan to add more compute machines to
your cluster after you finish installation, do not delete these files.
2. From the installation host, validate that the Ignition config files are available on the URLs. The
following example gets the Ignition config file for the bootstrap node:
$ curl -k http://<HTTP_server>/bootstrap.ign 1
Example output
Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the
Ignition config files for the control plane and compute nodes are also available.
3. Although it is possible to obtain the RHCOS kernel, initramfs, and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files is from the output of the openshift-install command:
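For example, you can print the installer's embedded RHCOS stream metadata and filter it for the PXE artifacts (the grep pattern is one possible filter):
$ openshift-install coreos print-stream-json | grep -E 'live-(kernel|initramfs|rootfs)'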
Example output
"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-
kernel-aarch64"
"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-
initramfs.aarch64.img"
"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-
rootfs.aarch64.img"
"<url>/art/storage/releases/rhcos-4.17-ppc64le/49.84.202110081256-0/ppc64le/rhcos-
<release>-live-kernel-ppc64le"
"<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-<release>-live-
initramfs.ppc64le.img"
"<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-<release>-live-
rootfs.ppc64le.img"
"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-kernel-
s390x"
"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-
initramfs.s390x.img"
"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-
rootfs.s390x.img"
"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-kernel-
x86_64"
"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-
initramfs.x86_64.img"
"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-
rootfs.x86_64.img"
IMPORTANT
The RHCOS artifacts might not change with every release of OpenShift
Container Platform. You must download images with the highest version that is
less than or equal to the OpenShift Container Platform version that you install.
Only use the appropriate kernel, initramfs, and rootfs artifacts described below
for this procedure. RHCOS QCOW2 images are not supported for this installation
type.
The file names contain the OpenShift Container Platform version number. They resemble the
following examples:
kernel: rhcos-<version>-live-kernel-<architecture>
initramfs: rhcos-<version>-live-initramfs.<architecture>.img
rootfs: rhcos-<version>-live-rootfs.<architecture>.img
4. Upload the rootfs, kernel, and initramfs files to your HTTP server.
IMPORTANT
If you plan to add more compute machines to your cluster after you finish
installation, do not delete these files.
5. Configure the network boot infrastructure so that the machines boot from their local disks after
RHCOS is installed on them.
6. Configure PXE or iPXE installation for the RHCOS images and begin the installation.
Modify one of the following example menu entries for your environment and verify that the
image and Ignition files are properly accessible:
DEFAULT pxeboot
TIMEOUT 20
PROMPT 0
LABEL pxeboot
KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1
APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.
<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-
rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda
coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3
1 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The
URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported.
2
If you use multiple NICs, specify a single interface in the ip option. For example, to use
DHCP on a NIC that is named eno1, set ip=eno1:dhcp.
3 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The
initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url
parameter value is the location of the rootfs file, and the coreos.inst.ignition_url
parameter value is the location of the bootstrap Ignition config file. You can also add
more kernel arguments to the APPEND line to configure networking or other boot
options.
NOTE
This configuration does not enable serial console access on machines with a
graphical console. To configure a different console, add one or more
console= arguments to the APPEND line. For example, add console=tty0
console=ttyS0 to set the first PC serial port as the primary console and the
graphical console as a secondary console. For more information, see How
does one set up a serial terminal and/or console in Red Hat Enterprise Linux?
and "Enabling the serial console for PXE and ISO installation" in the
"Advanced RHCOS installation configuration" section.
1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The
kernel parameter value is the location of the kernel file, the initrd=main argument is
needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is
the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the
location of the bootstrap Ignition config file.
2 If you use multiple NICs, specify a single interface in the ip option. For example, to use
DHCP on a NIC that is named eno1, set ip=eno1:dhcp.
3 Specify the location of the initramfs file that you uploaded to your HTTP server.
NOTE
This configuration does not enable serial console access on machines with a
graphical console. To configure a different console, add one or more
console= arguments to the kernel line. For example, add console=tty0
console=ttyS0 to set the first PC serial port as the primary console and the
graphical console as a secondary console. For more information, see How
does one set up a serial terminal and/or console in Red Hat Enterprise Linux?
and "Enabling the serial console for PXE and ISO installation" in the
"Advanced RHCOS installation configuration" section.
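For UEFI environments that boot with GRUB from a TFTP server, an equivalent menu entry is a sketch like the following; the menu title and TFTP-relative paths are assumptions, and the numbered callouts correspond to the notes that follow:
menuentry 'Install CoreOS' {
    linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2
    initrd rhcos-<version>-live-initramfs.<architecture>.img 3
}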
1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP
server. The kernel parameter value is the location of the kernel file on your TFTP
server. The coreos.live.rootfs_url parameter value is the location of the rootfs file,
and the coreos.inst.ignition_url parameter value is the location of the bootstrap
Ignition config file on your HTTP Server.
2 If you use multiple NICs, specify a single interface in the ip option. For example, to use
DHCP on a NIC that is named eno1, set ip=eno1:dhcp.
3 Specify the location of the initramfs file that you uploaded to your TFTP server.
7. Monitor the progress of the RHCOS installation on the console of the machine.
IMPORTANT
Be sure that the installation is successful on each node before commencing with
the OpenShift Container Platform installation. Observing the installation process
can also help to determine the cause of RHCOS installation issues that might
arise.
8. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config
file that you specified.
Example command
IMPORTANT
You must create the bootstrap and control plane machines at this time. If the
control plane machines are not made schedulable, also create at least two
compute machines before you install the cluster.
If the required network, DNS, and load balancer infrastructure are in place, the OpenShift
Container Platform bootstrap process begins automatically after the RHCOS nodes have
rebooted.
NOTE
RHCOS nodes do not include a default password for the core user. You can
access the nodes by running ssh core@<node>.<cluster_name>.
<base_domain> as a user with access to the SSH private key that is paired to
the public key that you specified in your install-config.yaml file. OpenShift
Container Platform 4 cluster nodes running RHCOS are immutable and rely on
Operators to apply cluster changes. Accessing cluster nodes by using SSH is not
recommended. However, when investigating installation issues, if the OpenShift
Container Platform API is not available, or the kubelet is not properly functioning
on a target node, SSH access might be required for debugging or disaster
recovery.
The advanced configuration topics for manual Red Hat Enterprise Linux CoreOS (RHCOS) installations
detailed in this section relate to disk partitioning, networking, and using Ignition configs in different ways.
3.14.3.1. Using advanced networking options for PXE and ISO installations
Networking for OpenShift Container Platform nodes uses DHCP by default to gather all necessary
configuration settings. To set up static IP addresses or configure special settings, such as bonding, you
can do one of the following:
Pass special kernel parameters when you boot the live installer.
Configure networking from a live installer shell prompt, then copy those settings to the installed
system so that they take effect when the installed system first boots.
Procedure
1. Boot the ISO installer.
2. From the live system shell prompt, configure networking for the live system using available
RHEL tools, such as nmcli or nmtui.
3. Run the coreos-installer command to install the system, adding the --copy-network option to
copy networking configuration. For example:
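A sketch of such an invocation; the Ignition URL and target disk are placeholders:
$ sudo coreos-installer install --copy-network --ignition-url=http://<HTTP_server>/worker.ign /dev/disk/by-id/scsi-<serial_number>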
Additional resources
See Getting started with nmcli and Getting started with nmtui in the RHEL 8 documentation for
more information about the nmcli and nmtui tools.
Disk partitions are created on OpenShift Container Platform cluster nodes during the Red Hat
Enterprise Linux CoreOS (RHCOS) installation. Each RHCOS node of a particular architecture uses the
same partition layout, unless you override the default partitioning configuration. During the RHCOS
installation, the size of the root file system is increased to use any remaining available space on the
target device.
IMPORTANT
The use of a custom partition scheme on your node might result in OpenShift Container
Platform not monitoring or alerting on some node partitions. If you override the default
partitioning, see Understanding OpenShift File System Monitoring (eviction conditions)
for more information about how OpenShift Container Platform monitors your host file
systems.
For the default partition scheme, nodefs and imagefs monitor the same root filesystem, /.
To override the default partitioning when installing RHCOS on an OpenShift Container Platform cluster
node, you must create separate partitions. Consider a situation where you want to add a separate
storage partition for your containers and container images. For example, by mounting
/var/lib/containers in a separate partition, the kubelet separately monitors /var/lib/containers as the
imagefs directory and the root file system as the nodefs directory.
IMPORTANT
If you have resized your disk size to host a larger file system, consider creating a separate
/var/lib/containers partition. Consider resizing a disk that has an xfs format to reduce
CPU time issues caused by a high number of allocation groups.
In general, you should use the default disk partitioning that is created during the RHCOS installation.
However, there are cases where you might want to create a separate partition for a directory that you
expect to grow.
OpenShift Container Platform supports the addition of a single partition to attach storage to either the
/var directory or a subdirectory of /var. For example:
/var/lib/containers: Holds container-related content that can grow as more images and
containers are added to a system.
/var/lib/etcd: Holds data that you might want to keep separate for purposes such as
performance optimization of etcd storage.
/var: Holds data that you might want to keep separate for purposes such as auditing.
IMPORTANT
For disk sizes larger than 100GB, and especially larger than 1TB, create a separate
/var partition.
Storing the contents of a /var directory separately makes it easier to grow storage for those areas as
needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this
method, you will not have to pull all your containers again, nor will you have to copy massive log files
when you update systems.
The use of a separate partition for the /var directory or a subdirectory of /var also prevents data growth
in the partitioned directory from filling up the root file system.
The following procedure sets up a separate /var partition by adding a machine config manifest that is
wrapped into the Ignition config file for a node type during the preparation phase of an installation.
Procedure
1. On your installation host, change to the directory that contains the OpenShift Container
Platform installation program and generate the Kubernetes manifests for the cluster:
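For example, assuming $HOME/clusterconfig is the installation directory used throughout this procedure:
$ openshift-install create manifests --dir $HOME/clusterconfig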
2. Create a Butane config that configures the additional partition. For example, name the file
$HOME/clusterconfig/98-var-partition.bu, change the disk device name to the name of the
storage device on the worker systems, and set the storage size as appropriate. This example
places the /var directory on a separate partition:
variant: openshift
version: 4.17.0
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-var-partition
storage:
  disks:
  - device: /dev/disk/by-id/<device_name> 1
    partitions:
    - label: var
      start_mib: <partition_start_offset> 2
      size_mib: <partition_size> 3
      number: 5
  filesystems:
  - device: /dev/disk/by-partlabel/var
    path: /var
    format: xfs
    mount_options: [defaults, prjquota] 4
    with_mount_unit: true
1 The storage device name of the disk that you want to partition.
2 When adding a data partition to the boot disk, a minimum offset value of 25000 mebibytes
is recommended. The root file system is automatically resized to fill all available space up
to the specified offset. If no offset value is specified, or if the specified value is smaller than
the recommended minimum, the resulting root file system will be too small, and future
reinstalls of RHCOS might overwrite the beginning of the data partition.
3 The size of the data partition in mebibytes.
4 The prjquota mount option must be enabled for filesystems used for container storage.
NOTE
When creating a separate /var partition, you cannot use different instance types
for compute nodes, if the different instance types do not have the same device
name.
3. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory.
For example, run the following command:
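For example, assuming the butane binary is on your PATH (the output file name is an assumption):
$ butane $HOME/clusterconfig/98-var-partition.bu -o $HOME/clusterconfig/openshift/98-var-partition.yaml
When you later create the Ignition config files, for example with openshift-install create ignition-configs --dir $HOME/clusterconfig, the manifest is wrapped into the per-node configs listed below.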
Ignition config files are created for the bootstrap, control plane, and compute nodes in the
installation directory:
.
├── auth
│ ├── kubeadmin-password
│ └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
Next steps
You can apply the custom disk partitioning by referencing the Ignition config files during the
RHCOS installations.
For an ISO installation, you can add options to the coreos-installer command that cause the installer to
maintain one or more existing partitions. For a PXE installation, you can add coreos.inst.* options to the
APPEND parameter to preserve partitions.
Saved partitions might be data partitions from an existing OpenShift Container Platform system. You
can identify the disk partitions you want to keep either by partition label or by number.
NOTE
If you save existing partitions, and those partitions do not leave enough space for
RHCOS, the installation will fail without damaging the saved partitions.
The following example illustrates running the coreos-installer in a way that preserves the sixth (6)
partition on the disk:
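A sketch of such an invocation; the Ignition URL and target disk are placeholders:
$ sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign --save-partindex 6 <device>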
In the previous examples where partition saving is used, coreos-installer recreates the partition
immediately.
For a PXE installation, you can preserve partitions by appending coreos.inst.* options to the APPEND line, for example:
coreos.inst.save_partlabel=data*
coreos.inst.save_partindex=5-
coreos.inst.save_partindex=6
The first option preserves partitions whose label begins with data, the second preserves the fifth partition and beyond, and the third preserves only partition 6.
When doing an RHCOS manual installation, there are two types of Ignition configs that you can provide,
with different reasons for providing each one:
Permanent install Ignition config: Every manual RHCOS installation needs to pass one of the
Ignition config files generated by openshift-install, such as bootstrap.ign, master.ign and
worker.ign, to carry out the installation.
IMPORTANT
It is not recommended to modify these Ignition config files directly. You can
update the manifest files that are wrapped into the Ignition config files, as
outlined in examples in the preceding sections.
For PXE installations, you pass the Ignition configs on the APPEND line using the
coreos.inst.ignition_url= option. For ISO installations, after the ISO boots to the shell prompt,
you identify the Ignition config on the coreos-installer command line with the --ignition-url=
option. In both cases, only HTTP and HTTPS protocols are supported.
Live install Ignition config: This type can be created by using the coreos-installer customize
subcommand and its various options. With this method, the Ignition config passes to the live
install medium, runs immediately upon booting, and performs setup tasks before or after the
RHCOS system installs to disk. This method should only be used for performing tasks that must
be done once and not applied again later, such as with advanced partitioning that cannot be
done using a machine config.
For PXE or ISO boots, you can create the Ignition config and add the ignition.config.url= option to the APPEND line to identify the location of the Ignition config. You also need to append ignition.firstboot ignition.platform.id=metal or the ignition.config.url option will be ignored.
Red Hat Enterprise Linux CoreOS (RHCOS) nodes installed from an OpenShift Container Platform 4.17
boot image use a default console that is meant to accommodate most virtualized and bare metal setups.
Different cloud and virtualization platforms may use different default settings depending on the chosen
architecture. Bare metal installations use the kernel default settings which typically means the graphical
console is the primary console and the serial console is disabled.
The default consoles may not match your specific hardware configuration or you might have specific
needs that require you to adjust the default console. For example:
You want to access the emergency shell on the console for debugging purposes.
Your cloud platform does not provide interactive access to the graphical console, but provides a
serial console.
Console configuration is inherited from the boot image. This means that new nodes in existing clusters
are unaffected by changes to the default console.
You can configure the console for bare metal installations in the ways described in the following sections.
3.14.3.5. Enabling the serial console for PXE and ISO installations
By default, the Red Hat Enterprise Linux CoreOS (RHCOS) serial console is disabled and all output is
written to the graphical console. You can enable the serial console for an ISO installation and
reconfigure the bootloader so that output is sent to both the serial console and the graphical console.
Procedure
2. Run the coreos-installer command to install the system, adding the --console option once to
specify the graphical console, and a second time to specify the serial console:
$ coreos-installer install \
--console=tty0 \ 1
--console=ttyS0,<options> \ 2
--ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>
1 The desired secondary console. In this case, the graphical console. Omitting this option will
disable the graphical console.
2 The desired primary console. In this case the serial console. The options field defines the
baud rate and other settings. A common value for this field is 115200n8. If no options are
provided, the default kernel value of 9600n8 is used. For more information on the format
of this option, see Linux kernel serial console documentation.
To configure a PXE installation, make sure the coreos.inst.install_dev kernel command line option is
omitted, and use the shell prompt to run coreos-installer manually using the above ISO installation
procedure.
You can use the live ISO image or PXE environment to install RHCOS by injecting an Ignition config file
directly into the image. This creates a customized image that you can use to provision your system.
For an ISO image, the mechanism to do this is the coreos-installer iso customize subcommand, which
modifies the .iso file with your configuration. Similarly, the mechanism for a PXE environment is the
coreos-installer pxe customize subcommand, which creates a new initramfs file that includes your
customizations.
The customize subcommand is a general purpose tool that can embed other types of customizations as
well. The following tasks are examples of some of the more common customizations:
Inject custom CA certificates for when corporate security policy requires their use.
You can customize a live RHCOS ISO image directly with the coreos-installer iso customize
subcommand. When you boot the ISO image, the customizations are applied automatically.
You can use this feature to configure the ISO image to automatically install RHCOS.
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS ISO image from the RHCOS image mirror page and the Ignition config file,
and then run the following command to inject the Ignition config directly into the ISO image:
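A sketch of the command, assuming the bootstrap Ignition config and an automatic installation to a known disk; the numbered callouts correspond to the notes that follow:
$ coreos-installer iso customize rhcos-<version>-live.x86_64.iso \
--dest-ignition bootstrap.ign \ 1
--dest-device /dev/disk/by-id/scsi-<serial_number> 2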
1 The Ignition config file that is generated by the openshift-install installation program.
2 When you specify this option, the ISO image automatically runs an installation. Otherwise,
the image remains configured for installation, but does not install automatically unless you
specify the coreos.inst.install_dev kernel argument.
3. Optional: To remove the ISO image customizations and return the image to its pristine state,
run:
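For example, assuming the customized image produced in the previous step:
$ coreos-installer iso reset rhcos-<version>-live.x86_64.iso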
You can now re-customize the live ISO image or use it in its pristine state.
3.14.3.7.1. Modifying a live install ISO image to enable the serial console
On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by
default and all output is written to the graphical console. You can enable the serial console with the
following procedure.
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following
command to customize the ISO image to enable the serial console to receive output:
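A sketch of the command; the Ignition config path and target disk are placeholders, and the numbered callouts correspond to the notes that follow:
$ coreos-installer iso customize rhcos-<version>-live.x86_64.iso \
--dest-ignition <path_to_ignition_config> \
--dest-console tty0 \ 2
--dest-console ttyS0,<options> \ 3
--dest-device /dev/disk/by-id/scsi-<serial_number> 4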
2 The desired secondary console. In this case, the graphical console. Omitting this option will
disable the graphical console.
3 The desired primary console. In this case, the serial console. The options field defines the
baud rate and other settings. A common value for this field is 115200n8. If no options are
provided, the default kernel value of 9600n8 is used. For more information on the format
of this option, see the Linux kernel serial console documentation.
4 The specified disk to install to. If you omit this option, the ISO image automatically runs the
installation program which will fail unless you also specify the coreos.inst.install_dev
kernel argument.
NOTE
The --dest-console option affects the installed system and not the live ISO
system. To modify the console for a live ISO system, use the --live-karg-append
option and specify the console with console=.
Your customizations are applied and affect every subsequent boot of the ISO image.
3. Optional: To remove the ISO image customizations and return the image to its original state,
run the following command:
You can now recustomize the live ISO image or use it in its original state.
3.14.3.7.2. Modifying a live install ISO image to use a custom certificate authority
You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the
customize subcommand. You can use the CA certificates during both the installation boot and when
provisioning the installed system.
NOTE
Custom CA certificates affect how Ignition fetches remote resources but they do not
affect the certificates installed onto the system.
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following
command to customize the ISO image for use with a custom CA:
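A sketch of the command, assuming the CA certificate is in a local file named cert.pem (the file name is an assumption):
$ coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem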
IMPORTANT
The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag.
You must use the --dest-ignition flag to create a customized image for each cluster.
3.14.3.7.3. Modifying a live install ISO image with customized network settings
You can embed a NetworkManager keyfile into the live ISO image and pass it through to the installed
system with the --network-keyfile flag of the customize subcommand.
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Create a connection profile for a bonded interface. For example, create the
bond0.nmconnection file in your local directory with the following content:
[connection]
id=bond0
type=bond
interface-name=bond0
multi-connect=1
[bond]
miimon=100
mode=active-backup
[ipv4]
method=auto
[ipv6]
method=auto
3. Create a connection profile for a secondary interface to add to the bond. For example, create
the bond0-proxy-em1.nmconnection file in your local directory with the following content:
[connection]
id=em1
type=ethernet
interface-name=em1
master=bond0
multi-connect=1
slave-type=bond
4. Create a connection profile for a secondary interface to add to the bond. For example, create
the bond0-proxy-em2.nmconnection file in your local directory with the following content:
[connection]
id=em2
type=ethernet
interface-name=em2
master=bond0
multi-connect=1
slave-type=bond
5. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following
command to customize the ISO image with your configured networking:
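Using the keyfile names created in the previous steps, the invocation might look like this:
$ coreos-installer iso customize rhcos-<version>-live.x86_64.iso \
--network-keyfile bond0.nmconnection \
--network-keyfile bond0-proxy-em1.nmconnection \
--network-keyfile bond0-proxy-em2.nmconnection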
Network settings are applied to the live system and are carried over to the destination system.
3.14.3.7.4. Customizing a live install ISO image for an iSCSI boot device
You can set the iSCSI target and initiator values for automatic mounting, booting and configuration
using a customized version of the live RHCOS image.
Prerequisites
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following
command to customize the ISO image with the following information:
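A sketch of the invocation follows; the script names, target path, and IQN values are placeholders:
$ coreos-installer iso customize \
--pre-install mount-iscsi.sh \
--post-install unmount-iscsi.sh \
--dest-device /dev/disk/by-path/ip-<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \
--dest-ignition config.ign \
--dest-karg-append rd.iscsi.initiator=<initiator_iqn> \
--dest-karg-append netroot=<target_iqn> \
-o custom.iso rhcos-<version>-live.x86_64.iso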
1 The script that gets run before installation. It should contain the iscsiadm commands for
mounting the iSCSI target and any commands enabling multipathing.
2 The script that gets run after installation. It should contain the command iscsiadm --mode
node --logout=all.
3 The location of the destination system. You must provide the IP address of the target
portal, the associated port number, the target iSCSI node in IQN format, and the iSCSI
logical unit number (LUN).
5 The iSCSI initiator, or client, name in IQN format. The initiator forms a session to connect
to the iSCSI target.
For more information about the iSCSI options supported by dracut, see the dracut.cmdline
manual page.
3.14.3.7.5. Customizing a live install ISO image for an iSCSI boot device with iBFT
You can set the iSCSI target and initiator values for automatic mounting, booting and configuration
using a customized version of the live RHCOS image.
Prerequisites
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following
command to customize the ISO image with the following information:
$ coreos-installer iso customize \
--pre-install mount-iscsi.sh \ 1
--post-install unmount-iscsi.sh \ 2
--dest-device /dev/mapper/mpatha \ 3
--dest-ignition config.ign \ 4
--dest-karg-append rd.iscsi.firmware=1 \ 5
--dest-karg-append rd.multipath=default \ 6
-o custom.iso rhcos-<version>-live.x86_64.iso
1 The script that gets run before installation. It should contain the iscsiadm commands for
mounting the iSCSI target and any commands enabling multipathing.
2 The script that gets run after installation. It should contain the command iscsiadm --mode
node --logout=all.
3 The path to the device. If you are using multipath, use the multipath device,
/dev/mapper/mpatha. If there are multiple multipath devices connected, or to be explicit,
you can use the World Wide Name (WWN) symlink available in /dev/disk/by-path.
For more information about the iSCSI options supported by dracut, see the dracut.cmdline
manual page.
You can customize a live RHCOS PXE environment directly with the coreos-installer pxe customize
subcommand. When you boot the PXE environment, the customizations are applied automatically.
You can use this feature to configure the PXE environment to automatically install RHCOS.
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS kernel, initramfs and rootfs files from the RHCOS image mirror page and
the Ignition config file, and then run the following command to create a new initramfs file that
contains the customizations from your Ignition config:
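A representative invocation, with placeholder file names and device path, is:
$ coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \
--dest-ignition bootstrap.ign \
--dest-device /dev/disk/by-id/scsi-<serial_number> \
-o rhcos-<version>-custom-initramfs.x86_64.img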
2 When you specify this option, the PXE environment automatically runs an install.
Otherwise, the image remains configured for installing, but does not do so automatically
unless you specify the coreos.inst.install_dev kernel argument.
3 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot
and ignition.platform.id=metal kernel arguments if they are not already present.
3.14.3.8.1. Modifying a live install PXE environment to enable the serial console
On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by
default and all output is written to the graphical console. You can enable the serial console with the
following procedure.
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS kernel, initramfs and rootfs files from the RHCOS image mirror page and
the Ignition config file, and then run the following command to create a new customized
initramfs file that enables the serial console to receive output:
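A representative invocation, with placeholder file names, console values, and target disk, is:
$ coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \
--dest-ignition bootstrap.ign \
--dest-console tty0 \
--dest-console ttyS0,115200n8 \
--dest-device /dev/disk/by-id/scsi-<serial_number> \
-o rhcos-<version>-custom-initramfs.x86_64.img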
2 The desired secondary console. In this case, the graphical console. Omitting this option will
disable the graphical console.
3 The desired primary console. In this case, the serial console. The options field defines the
baud rate and other settings. A common value for this field is 115200n8. If no options are
provided, the default kernel value of 9600n8 is used. For more information on the format
of this option, see the Linux kernel serial console documentation.
4 The specified disk to install to. If you omit this option, the PXE environment automatically
runs the installer which will fail unless you also specify the coreos.inst.install_dev kernel
argument.
5 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot
and ignition.platform.id=metal kernel arguments if they are not already present.
Your customizations are applied and affect every subsequent boot of the PXE environment.
3.14.3.8.2. Modifying a live install PXE environment to use a custom certificate authority
You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the
customize subcommand. You can use the CA certificates during both the installation boot and when
provisioning the installed system.
NOTE
Custom CA certificates affect how Ignition fetches remote resources but they do not
affect the certificates installed onto the system.
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS kernel, initramfs and rootfs files from the RHCOS image mirror page and
run the following command to create a new customized initramfs file for use with a custom CA:
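Assuming the CA bundle is saved locally as cert.pem (a placeholder name), the invocation might look
like this:
$ coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \
--ignition-ca cert.pem \
-o rhcos-<version>-custom-initramfs.x86_64.img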
3. Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and
ignition.platform.id=metal kernel arguments if they are not already present.
IMPORTANT
The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag.
You must use the --dest-ignition flag to create a customized image for each cluster.
3.14.3.8.3. Modifying a live install PXE environment with customized network settings
You can embed a NetworkManager keyfile into the live PXE environment and pass it through to the
installed system with the --network-keyfile flag of the customize subcommand.
WARNING
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Create a connection profile for a bonded interface. For example, create the
bond0.nmconnection file in your local directory with the following content:
[connection]
id=bond0
type=bond
interface-name=bond0
multi-connect=1
[bond]
miimon=100
mode=active-backup
[ipv4]
method=auto
[ipv6]
method=auto
3. Create a connection profile for a secondary interface to add to the bond. For example, create
the bond0-proxy-em1.nmconnection file in your local directory with the following content:
[connection]
id=em1
type=ethernet
interface-name=em1
master=bond0
multi-connect=1
slave-type=bond
4. Create a connection profile for a secondary interface to add to the bond. For example, create
the bond0-proxy-em2.nmconnection file in your local directory with the following content:
[connection]
id=em2
type=ethernet
interface-name=em2
master=bond0
multi-connect=1
slave-type=bond
5. Retrieve the RHCOS kernel, initramfs and rootfs files from the RHCOS image mirror page and
run the following command to create a new customized initramfs file that contains your
configured networking:
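Using the keyfile names created in the previous steps, the invocation might look like this:
$ coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \
--network-keyfile bond0.nmconnection \
--network-keyfile bond0-proxy-em1.nmconnection \
--network-keyfile bond0-proxy-em2.nmconnection \
-o rhcos-<version>-custom-initramfs.x86_64.img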
6. Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and
ignition.platform.id=metal kernel arguments if they are not already present.
Network settings are applied to the live system and are carried over to the destination system.
3.14.3.8.4. Customizing a live install PXE environment for an iSCSI boot device
You can set the iSCSI target and initiator values for automatic mounting, booting and configuration
using a customized version of the live RHCOS image.
Prerequisites
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS kernel, initramfs and rootfs files from the RHCOS image mirror page and
run the following command to create a new customized initramfs file with the following
information:
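A sketch of the invocation follows; the script names, target path, and IQN values are placeholders:
$ coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \
--pre-install mount-iscsi.sh \
--post-install unmount-iscsi.sh \
--dest-device /dev/disk/by-path/ip-<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \
--dest-ignition config.ign \
--dest-karg-append rd.iscsi.initiator=<initiator_iqn> \
--dest-karg-append netroot=<target_iqn> \
-o rhcos-<version>-custom-initramfs.x86_64.img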
1 The script that gets run before installation. It should contain the iscsiadm commands for
mounting the iSCSI target and any commands enabling multipathing.
2 The script that gets run after installation. It should contain the command iscsiadm --mode
node --logout=all.
3 The location of the destination system. You must provide the IP address of the target
portal, the associated port number, the target iSCSI node in IQN format, and the iSCSI
logical unit number (LUN).
5 The iSCSI initiator, or client, name in IQN format. The initiator forms a session to connect
to the iSCSI target.
For more information about the iSCSI options supported by dracut, see the dracut.cmdline
manual page.
3.14.3.8.5. Customizing a live install PXE environment for an iSCSI boot device with iBFT
You can set the iSCSI target and initiator values for automatic mounting, booting and configuration
using a customized version of the live RHCOS image.
Prerequisites
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS kernel, initramfs and rootfs files from the RHCOS image mirror page and
run the following command to create a new customized initramfs file with the following
information:
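A sketch of the invocation follows; the script names and the multipath device path are placeholders:
$ coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \
--pre-install mount-iscsi.sh \
--post-install unmount-iscsi.sh \
--dest-device /dev/mapper/mpatha \
--dest-ignition config.ign \
--dest-karg-append rd.iscsi.firmware=1 \
--dest-karg-append rd.multipath=default \
-o rhcos-<version>-custom-initramfs.x86_64.img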
1 The script that gets run before installation. It should contain the iscsiadm commands for
mounting the iSCSI target.
2 The script that gets run after installation. It should contain the command iscsiadm --mode
node --logout=all.
3 The path to the device. If you are using multipath, use the multipath device,
/dev/mapper/mpatha. If there are multiple multipath devices connected, or to be explicit,
you can use the World Wide Name (WWN) symlink available in /dev/disk/by-path.
For more information about the iSCSI options supported by dracut, see the dracut.cmdline
manual page.
This section illustrates the networking configuration and other advanced options that allow you to
modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables
describe the kernel arguments and command-line options you can use with the RHCOS live installer and
the coreos-installer command.
If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the
image to configure networking for a node. If no networking arguments are specified, DHCP is activated
in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file.
IMPORTANT
When adding networking arguments manually, you must also add the rd.neednet=1
kernel argument to bring the network up in the initramfs.
The following information provides examples for configuring networking and bonding on your RHCOS
nodes for ISO installations. The examples describe how to use the ip=, nameserver=, and bond= kernel
arguments.
NOTE
Ordering is important when adding the kernel arguments: ip=, nameserver=, and then
bond=.
The networking options are passed to the dracut tool during system boot. For more information about
the networking options supported by dracut, see the dracut.cmdline manual page.
The following examples are the networking options for ISO installation.
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none
nameserver=4.4.4.41
NOTE
When you use DHCP to configure IP addressing for the RHCOS machines, the machines
also obtain the DNS server information through DHCP. For DHCP-based deployments,
you can define the DNS server address that is used by the RHCOS nodes through your
DHCP server configuration.
ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none
nameserver=4.4.4.41
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none
ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none
NOTE
When you configure one or multiple networks, one default gateway is required. If the
additional network gateway is different from the primary network gateway, the default
gateway must be the primary network gateway.
ip=::10.10.10.254::::
Enter the following command to configure the route for the additional network:
rd.route=20.20.20.0/24:20.20.20.254:enp2s0
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none
ip=::::core0.example.com:enp2s0:none
ip=enp1s0:dhcp
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none
To configure a VLAN on a network interface and use a static IP address, run the following
command:
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none
vlan=enp2s0.100:enp2s0
To configure a VLAN on a network interface and to use DHCP, run the following command:
ip=enp2s0.100:dhcp
vlan=enp2s0.100:enp2s0
nameserver=1.1.1.1
nameserver=8.8.8.8
When you create a bonded interface using bond=, you must specify how the IP address is
assigned and other information for the bonded interface.
To configure the bonded interface to use DHCP, set the bond’s IP address to dhcp. For
example:
bond=bond0:em1,em2:mode=active-backup
ip=bond0:dhcp
To configure the bonded interface to use a static IP address, enter the specific IP address
you want and related information. For example:
bond=bond0:em1,em2:mode=active-backup
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none
1. Create the SR-IOV virtual functions (VFs) following the guidance in Managing SR-IOV devices.
Follow the procedure in the "Attaching SR-IOV networking devices to virtual machines" section.
2. Create the bond, attach the desired VFs to the bond and set the bond link state up following
the guidance in Configuring network bonding. Follow any of the described procedures to create
the bond.
When you create a bonded interface using bond=, you must specify how the IP address is
assigned and other information for the bonded interface.
To configure the bonded interface to use DHCP, set the bond’s IP address to dhcp. For
example:
bond=bond0:eno1f0,eno2f0:mode=active-backup
ip=bond0:dhcp
To configure the bonded interface to use a static IP address, enter the specific IP address
you want and related information. For example:
bond=bond0:eno1f0,eno2f0:mode=active-backup
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none
NOTE
team=team0:em1,em2
ip=team0:dhcp
You can install RHCOS by running coreos-installer install <options> <device> at the command
prompt, after booting into the RHCOS live environment from an ISO image.
The following table shows the subcommands, options, and arguments you can pass to the coreos-
installer command.
Subcommand Description
Option Description
-f, --image-file <path> Specify a local image file manually. Used for
debugging.
-p, --platform <name> Override the Ignition platform ID for the installed
system.
--console <spec> Set the kernel and bootloader console for the
installed system. For more information about the
format of <spec>, see the Linux kernel serial
console documentation.
IMPORTANT
Argument Description
Subcommand Description
coreos-installer iso reset <options> <ISO_image>    Restore a RHCOS live ISO image to default settings.
coreos-installer iso ignition remove <options> <ISO_image>    Remove the embedded Ignition config from an ISO image.
Option Description
--dest-ignition <path> Merge the specified Ignition config file into a new
configuration fragment for the destination system.
--dest-console <spec> Specify the kernel and bootloader console for the
destination system.
--live-ignition <path> Merge the specified Ignition config file into a new
configuration fragment for the live environment.
--live-karg-delete <arg> Delete a kernel argument from each boot of the live
environment.
Subcommand Description
Note that not all of these options are accepted by all subcommands.
coreos-installer pxe customize <options> <path>    Customize a RHCOS live PXE boot config.
coreos-installer pxe ignition unwrap <options> <image_name>    Show the wrapped Ignition config in an image.
Option Description
Note that not all of these options are accepted by all subcommands.
--dest-ignition <path> Merge the specified Ignition config file into a new
configuration fragment for the destination system.
--dest-console <spec> Specify the kernel and bootloader console for the
destination system.
--live-ignition <path> Merge the specified Ignition config file into a new
configuration fragment for the live environment.
NOTE
You can automatically invoke coreos-installer options at boot time by passing coreos.inst boot
arguments to the RHCOS live installer. These are provided in addition to the standard boot arguments.
For ISO installations, the coreos.inst options can be added by interrupting the automatic boot
at the bootloader menu. You can interrupt the automatic boot by pressing TAB while the RHEL
CoreOS (Live) menu option is highlighted.
For PXE or iPXE installations, the coreos.inst options must be added to the APPEND line
before the RHCOS live installer is booted.
The following table shows the RHCOS live installer coreos.inst boot options for ISO and PXE
installations.
Argument Description
ignition.config.url Optional: The URL of the Ignition config for the live
boot. For example, this can be used to customize
how coreos-installer is invoked, or to run code
before or after the installation. This is different from
coreos.inst.ignition_url, which is the Ignition
config for the installed system.
You can enable multipathing at installation time for nodes that were provisioned in OpenShift Container
Platform 4.8 or later. While postinstallation support is available by activating multipathing via the
machine config, enabling multipathing during installation is recommended.
In setups where any I/O to non-optimized paths results in I/O system errors, you must enable
multipathing at installation time.
IMPORTANT
On IBM Z® and IBM® LinuxONE, you can enable multipathing only if you configured your
cluster for it during installation. For more information, see "Installing RHCOS and starting
the OpenShift Container Platform bootstrap process" in Installing a cluster with z/VM on
IBM Z® and IBM® LinuxONE.
The following procedure enables multipath at installation time and appends kernel arguments to the
coreos-installer install command so that the installed system itself will use multipath beginning from
the first boot.
NOTE
OpenShift Container Platform does not support enabling multipathing as a day-2 activity
on nodes that have been upgraded from 4.6 or earlier.
Prerequisites
You have created the Ignition config files for your cluster.
You have reviewed Installing RHCOS and starting the OpenShift Container Platform bootstrap
process.
Procedure
1. To enable multipath and start the multipathd daemon, run the following command on the
installation host:
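This is typically done with the mpathconf utility, for example:
$ mpathconf --enable && systemctl start multipathd.service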
Optional: If you are booting from PXE or an ISO image, you can instead enable multipath by adding
rd.multipath=default to the kernel command line.
If there is only one multipath device connected to the machine, it should be available at path
/dev/mapper/mpatha. For example:
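An invocation along these lines targets the multipath device; the Ignition URL and the appended
kernel arguments shown here are illustrative and must match your environment:
$ coreos-installer install /dev/mapper/mpatha \
--ignition-url=http://host/worker.ign \
--append-karg rd.multipath=default \
--append-karg root=/dev/disk/by-label/dm-mpath-root \
--append-karg rw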
If there are multiple multipath devices connected to the machine, or to be more explicit,
instead of using /dev/mapper/mpatha, it is recommended to use the World Wide Name
(WWN) symlink available in /dev/disk/by-id. For example:
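The same installation, addressed by the WWN symlink instead of the mapper name, might look like this:
$ coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \
--ignition-url=http://host/worker.ign \
--append-karg rd.multipath=default \
--append-karg root=/dev/disk/by-label/dm-mpath-root \
--append-karg rw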
This symlink can also be used as the coreos.inst.install_dev kernel argument when using
special coreos.inst.* arguments to direct the live installer. For more information, see
"Installing RHCOS and starting the OpenShift Container Platform bootstrap process".
4. Check that the kernel arguments worked by going to one of the worker nodes and listing the
kernel command line arguments (in /proc/cmdline on the host):
$ oc debug node/ip-10-0-141-105.ec2.internal
Example output
...
sh-4.2# exit
RHCOS also supports multipathing on a secondary disk. Instead of kernel arguments, you use Ignition to
enable multipathing for the secondary disk at installation time.
Prerequisites
Procedure
Example multipath-config.bu
variant: openshift
version: 4.17.0
systemd:
units:
- name: mpath-configure.service
enabled: true
contents: |
[Unit]
Description=Configure Multipath on Secondary Disk
ConditionFirstBoot=true
ConditionPathExists=!/etc/multipath.conf
Before=multipathd.service 1
DefaultDependencies=no
[Service]
Type=oneshot
ExecStart=/usr/sbin/mpathconf --enable 2
[Install]
WantedBy=multi-user.target
- name: mpath-var-lib-container.service
enabled: true
contents: |
[Unit]
Description=Set Up Multipath On /var/lib/containers
ConditionFirstBoot=true 3
Requires=dev-mapper-mpatha.device
After=dev-mapper-mpatha.device
After=ostree-remount.service
Before=kubelet.service
DefaultDependencies=no
[Service] 4
Type=oneshot
ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha
ExecStart=/usr/bin/mkdir -p /var/lib/containers
[Install]
WantedBy=multi-user.target
- name: var-lib-containers.mount
enabled: true
contents: |
[Unit]
Description=Mount /var/lib/containers
After=mpath-var-lib-container.service
Before=kubelet.service 5
[Mount] 6
What=/dev/disk/by-label/dm-mpath-containers
Where=/var/lib/containers
Type=xfs
[Install]
WantedBy=multi-user.target
6 Mounts the device to the /var/lib/containers mount point. This location cannot be a
symlink.
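Before continuing, the Butane config shown above must be converted into an Ignition config that you
supply during installation; a typical conversion, with illustrative file names, is:
$ butane --pretty --strict multipath-config.bu > multipath-config.ign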
3. Continue with the rest of the first boot RHCOS installation process.
IMPORTANT
Prerequisites
2. You have an iSCSI target that you want to install RHCOS on.
Procedure
1. Mount the iSCSI target from the live environment by running the following command:
$ iscsiadm \
--mode discovery \
--type sendtargets \
--portal <IP_address> \ 1
--login
2. Install RHCOS onto the iSCSI target by running the following command and using the necessary
kernel arguments, for example:
$ coreos-installer install \
/dev/disk/by-path/ip-<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \ 1
--append-karg rd.iscsi.initiator=<initiator_iqn> \ 2
--append-karg netroot=<target_iqn> \ 3
--console ttyS0,115200n8 \
--ignition-file <path_to_file>
1 The location you are installing to. You must provide the IP address of the target portal, the
associated port number, the target iSCSI node in IQN format, and the iSCSI logical unit
number (LUN).
2 The iSCSI initiator, or client, name in IQN format. The initiator forms a session to connect
to the iSCSI target.
For more information about the iSCSI options supported by dracut, see the dracut.cmdline
manual page.
This procedure can also be performed using the coreos-installer iso customize or coreos-installer
pxe customize subcommands.
Prerequisites
Procedure
1. Mount the iSCSI target from the live environment by running the following command:
$ iscsiadm \
--mode discovery \
--type sendtargets \
--portal <IP_address> \ 1
--login
2. Optional: enable multipathing and start the daemon with the following command:
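As in the earlier multipath procedure, this is typically done with mpathconf, for example:
$ mpathconf --enable && systemctl start multipathd.service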
3. Install RHCOS onto the iSCSI target by running the following command and using the necessary
kernel arguments, for example:
$ coreos-installer install \
/dev/mapper/mpatha \ 1
--append-karg rd.iscsi.firmware=1 \ 2
--append-karg rd.multipath=default \ 3
--console ttyS0 \
--ignition-file <path_to_file>
1 The path of a single multipathed device. If there are multiple multipath devices connected,
or to be explicit, you can use the World Wide Name (WWN) symlink available in
/dev/disk/by-path.
For more information about the iSCSI options supported by dracut, see the dracut.cmdline
manual page.
This procedure can also be performed using the coreos-installer iso customize or coreos-installer
pxe customize subcommands.
The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the
persistent RHCOS environment that has been installed to disk. The configuration information provided
through the Ignition config files is used to initialize the bootstrap process and install OpenShift
Container Platform on the machines. You must wait for the bootstrap process to complete.
Prerequisites
You have created the Ignition config files for your cluster.
You have configured suitable network, DNS and load balancing infrastructure.
You have obtained the installation program and generated the Ignition config files for your
cluster.
You installed RHCOS on your cluster machines and provided the Ignition config files that the
OpenShift Container Platform installation program generated.
Your machines have direct internet access or have an HTTP or HTTPS proxy available.
Procedure
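1. Monitor the bootstrap process. A typical invocation of the installation program's wait-for
command, with the directory path as a placeholder, is:
$ ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \
--log-level=info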
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
2 To view different installation details, specify warn, debug, or error instead of info.
Example output
The command succeeds when the Kubernetes API server signals that it has been bootstrapped
on the control plane machines.
2. After the bootstrap process is complete, remove the bootstrap machine from the load balancer.
IMPORTANT
You must remove the bootstrap machine from the load balancer at this point.
You can also remove or reformat the bootstrap machine itself.
Additional resources
See Monitoring installation progress for more information about monitoring the installation logs
and retrieving diagnostic data if installation issues arise.
Prerequisites
Procedure
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
2. Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
Prerequisites
Procedure
$ oc get nodes
Example output
NOTE
The preceding output might not include the compute nodes, also known as
worker nodes, until some CSRs are approved.
2. Review the pending CSRs and ensure that you see the client requests with the Pending or
Approved status for each machine that you added to the cluster:
$ oc get csr
Example output
In this example, two machines are joining the cluster. You might see more approved CSRs in the
list.
3. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in
Pending status, approve the CSRs for your cluster machines:
NOTE
Because the CSRs rotate automatically, approve your CSRs within an hour of
adding the machines to the cluster. If you do not approve them within an hour, the
certificates will rotate, and more than two certificates will be present for each
node. You must approve all of these certificates. After the client CSR is
approved, the Kubelet creates a secondary CSR for the serving certificate, which
requires manual approval. Then, subsequent serving certificate renewal requests
are automatically approved by the machine-approver if the Kubelet requests a
new certificate with identical parameters.
NOTE
For clusters running on platforms that are not machine API enabled, such as bare
metal and other user-provisioned infrastructure, you must implement a method
of automatically approving the kubelet serving certificate requests (CSRs). If a
request is not approved, then the oc exec, oc rsh, and oc logs commands
cannot succeed, because a serving certificate is required when the API server
connects to the kubelet. Any operation that contacts the Kubelet endpoint
requires this certificate approval to be in place. The method must watch for new
CSRs, confirm that the CSR was submitted by the node-bootstrapper service
account in the system:node or system:admin groups, and confirm the identity
of the node.
To approve them individually, run the following command for each valid CSR:
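For example, where <csr_name> is the name of a CSR from the oc get csr list:
$ oc adm certificate approve <csr_name>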
NOTE
Some Operators might not become available until some CSRs are approved.
4. Now that your client requests are approved, you must review the server requests for each
machine that you added to the cluster:
$ oc get csr
Example output
5. If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for
your cluster machines:
To approve them individually, run the following command for each valid CSR:
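As before, substitute the CSR name from the oc get csr output:
$ oc adm certificate approve <csr_name>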
6. After all client and server CSRs have been approved, the machines have the Ready status.
Verify this by running the following command:
$ oc get nodes
Example output
NOTE
It can take a few minutes after approval of the server CSRs for the machines to
transition to the Ready status.
Additional information
Prerequisites
Procedure
Example output
Additional resources
See Gathering logs from a failed installation for details about gathering data in the event of a
failed OpenShift Container Platform installation.
See Troubleshooting Operator issues for steps to check Operator pod health across the cluster
and gather Operator logs for diagnosis.
After installation, you must edit the Image Registry Operator configuration to switch the
managementState from Removed to Managed. When this has completed, you must configure storage.
Instructions are shown for configuring a persistent volume, which is required for production clusters.
Where applicable, instructions are shown for configuring an empty directory as the storage location,
which is available for only non-production clusters.
Additional instructions are provided for allowing the image registry to use block storage types by using
the Recreate rollout strategy during upgrades.
IMPORTANT
Block storage volumes, or block persistent volumes, are supported but not recommended
for use with the image registry on production clusters. An installation where the registry is
configured on block storage is not highly available because the registry cannot have more
than one replica.
If you choose to use a block storage volume with the image registry, you must use a
filesystem persistent volume claim (PVC).
Procedure
1. Enter the following command to set the image registry storage as a block storage type, patch
the registry so that it uses the Recreate rollout strategy, and runs with only one (1) replica:
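A patch along the following lines accomplishes this; verify the exact fields against your cluster
before applying:
$ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p \
'{"spec":{"rolloutStrategy":"Recreate","replicas":1}}'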
2. Provision the PV for the block storage device, and create a PVC for that volume. The requested
block volume uses the ReadWriteOnce (RWO) access mode.
a. Create a pvc.yaml file with the following contents to define a PersistentVolumeClaim object:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: image-registry-storage 1
namespace: openshift-image-registry 2
spec:
accessModes:
- ReadWriteOnce 3
resources:
requests:
storage: 100Gi 4
3 The access mode of the persistent volume claim. With ReadWriteOnce, the volume
can be mounted with read and write permissions by a single node.
b. Enter the following command to create the PersistentVolumeClaim object from the file:
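For example:
$ oc create -f pvc.yaml -n openshift-image-registry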
3. Enter the following command to edit the registry configuration so that it references the correct
PVC:
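For example, by editing the Image Registry Operator configuration resource:
$ oc edit config.imageregistry.operator.openshift.io -o yaml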
Example output
storage:
pvc:
claim: 1
1 By creating a custom PVC, you can leave the claim field blank for the default automatic
creation of an image-registry-storage PVC.
Prerequisites
Procedure
1. Confirm that all the cluster components are online with the following command:
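For example, by watching the cluster Operators until they are all available:
$ watch -n5 oc get clusteroperators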
Example output
Alternatively, the following command notifies you when all of the clusters are available. It also
retrieves and displays credentials:
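For example:
$ ./openshift-install --dir <installation_directory> wait-for install-complete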
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
Example output
The command succeeds when the Cluster Version Operator finishes deploying the OpenShift
Container Platform cluster from Kubernetes API server.
IMPORTANT
The Ignition config files that the installation program generates contain
certificates that expire after 24 hours, which are then renewed at that time. If
the cluster is shut down before renewing the certificates and the cluster is
later restarted after the 24 hours have elapsed, the cluster automatically
recovers the expired certificates. The exception is that you must manually
approve the pending node-bootstrapper certificate signing requests (CSRs)
to recover kubelet certificates. See the documentation for Recovering from
expired control plane certificates for more information.
It is recommended that you use Ignition config files within 12 hours after they
are generated because the 24-hour certificate rotates from 16 to 22 hours
after the cluster is installed. By using the Ignition config files within 12 hours,
you can avoid installation failure if the certificate update runs during
installation.
2. Confirm that the Kubernetes API server is communicating with the pods.
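a. One way to check this is to list all pods across all namespaces:
$ oc get pods --all-namespaces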
Example output
b. View the logs for a pod that is listed in the output of the previous command by using the
following command:
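A typical form of the command, with the pod name and namespace as placeholders, is:
$ oc logs <pod_name> -n <namespace>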
1 Specify the pod name and namespace, as shown in the output of the previous
command.
If the pod logs display, the Kubernetes API server can communicate with the cluster
machines.
3. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable
multipathing. Do not enable multipathing during installation.
See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine
configuration tasks documentation for more information.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained
automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to
track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
Additional resources
See About remote health monitoring for more information about the Telemetry service
IMPORTANT
While you might be able to follow this procedure to deploy a cluster on virtualized or cloud
environments, you must be aware of additional considerations for non-bare metal
platforms. Review the information in the guidelines for deploying OpenShift Container
Platform on non-tested platforms before you attempt to install an OpenShift Container
Platform cluster in such an environment.
4.1. PREREQUISITES
You reviewed details about the OpenShift Container Platform installation and update
processes.
You read the documentation on selecting a cluster installation method and preparing it for
users.
You created a registry on your mirror host and obtained the imageContentSources data for
your version of OpenShift Container Platform.
IMPORTANT
Because the installation media is on the mirror host, you can use that computer
to complete all installation steps.
You provisioned persistent storage for your cluster. To deploy a private image registry, your
storage must provide ReadWriteMany access modes.
If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow
the sites that your cluster requires access to.
NOTE
Be sure to also review this site list if you are configuring a proxy.
If you choose to perform a restricted network installation on a cloud platform, you still require access to
its cloud APIs. Some cloud functions, like Amazon Web Service’s Route 53 DNS and IAM services, require
internet access. Depending on your network, you might require less internet access for an installation on
bare metal hardware, Nutanix, or on VMware vSphere.
To complete a restricted network installation, you must create a registry that mirrors the contents of the
OpenShift image registry and contains the installation media. You can create this registry on a mirror
host, which can access both the internet and your closed network, or by using other methods that meet
your restrictions.
IMPORTANT
By default, you cannot use the contents of the Developer Catalog because you cannot access
the required image stream tags.
Access OpenShift Cluster Manager to download the installation program and perform
subscription management. If the cluster has internet access and you do not disable Telemetry,
that service automatically entitles your cluster.
Access Quay.io to obtain the packages that are required to install your cluster.
This section describes the requirements for deploying OpenShift Container Platform on user-
provisioned infrastructure.
Hosts Description
One temporary bootstrap machine The cluster requires the bootstrap machine to deploy
the OpenShift Container Platform cluster on the
three control plane machines. You can remove the
bootstrap machine after you install the cluster.
Three control plane machines The control plane machines run the Kubernetes and
OpenShift Container Platform services that form the
control plane.
At least two compute machines, which are also The workloads requested by OpenShift Container
known as worker machines. Platform users run on the compute machines.
NOTE
As an exception, you can run zero compute machines in a bare metal cluster that consists
of three control plane machines only. This provides smaller, more resource efficient
clusters for cluster administrators and developers to use for testing, development, and
production. Running one compute machine is not supported.
IMPORTANT
To maintain high availability of your cluster, use separate physical hosts for these cluster
machines.
The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the
operating system. However, the compute machines can use either Red Hat Enterprise Linux CoreOS
(RHCOS) or Red Hat Enterprise Linux (RHEL) 8.6 and later.
Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware
certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits .
1. One CPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-
Threading, is not enabled. When enabled, use the following formula to calculate the
corresponding ratio: (threads per core × cores) × sockets = CPUs.
2. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster
storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms
p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so
you might need to over-allocate storage volume to obtain sufficient performance.
3. As with all user-provisioned installations, if you choose to use RHEL compute machines in your
cluster, you take responsibility for all operating system life cycle management and maintenance,
including performing system updates, applying patches, and completing all other required tasks.
Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container
Platform 4.10 and later.
NOTE
As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2,
which updates the micro-architecture requirements. The following list contains the
minimum instruction set architectures (ISA) that each architecture requires:
If an instance type for your platform meets the minimum requirements for cluster machines, it is
supported for use in OpenShift Container Platform.
Additional resources
Optimizing storage
(CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The
machine-approver cannot guarantee the validity of a serving certificate that is requested by using
kubelet credentials because it cannot confirm that the correct machine issued the request. You must
determine and implement a method of verifying the validity of the kubelet serving certificate requests
and approving them.
Additional resources
See Configuring a three-node cluster for details about deploying three-node clusters in bare
metal environments.
See Approving the certificate signing requests for your machines for more information about
approving cluster certificate signing requests after installation.
During the initial boot, the machines require an IP address configuration that is set either through a
DHCP server or statically by providing the required boot options. After a network connection is
established, the machines download their Ignition config files from an HTTP or HTTPS server. The
Ignition config files are then used to set the exact state of each machine. The Machine Config Operator
completes more changes to the machines, such as the application of new certificates or keys, after
installation.
It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure
that the DHCP server is configured to provide persistent IP addresses, DNS server information, and
hostnames to the cluster machines.
NOTE
If a DHCP service is not available for your user-provisioned infrastructure, you can instead
provide the IP networking configuration and the address of the DNS server to the nodes
at RHCOS install time. These can be passed as boot arguments if you are installing from
an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform
bootstrap process section for more information about static IP provisioning and advanced
networking options.
The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API
servers and worker nodes are in different zones, you can configure a default DNS search zone to allow
the API server to resolve the node names. Another supported approach is to always refer to hosts by
their fully-qualified domain names in both the node objects and all DNS requests.
On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through
NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not
provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a
reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and
can take time to resolve. Other system services can start prior to this and detect the hostname as
localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node.
Additionally, setting the hostnames through DHCP can bypass any manual DNS record name
configuration errors in environments that have a DNS split-horizon implementation.
You must configure the network connectivity between machines to allow OpenShift Container Platform
cluster components to communicate. Each machine must be able to resolve the hostnames of all other
machines in the cluster.
This section provides details about the ports that are required.
9000-9999    Host level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099.
6081    Geneve
9000-9999    Host level services, including the node exporter on ports 9100-9101.
Table 4.5. Ports used for control plane machine to control plane machine communications
If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise
Linux CoreOS (RHCOS) machines reads the information and can sync the clock with the NTP servers.
Additional resources
Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control
plane machines, and the compute machines.
DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse
name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS
(RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are
provided by DHCP. Additionally, the reverse records are used to generate the certificate signing
requests (CSR) that OpenShift Container Platform needs to operate.
NOTE
It is recommended to use a DHCP server to provide the hostnames to each cluster node.
See the DHCP recommendations for user-provisioned infrastructure section for more
information.
The following DNS records are required for a user-provisioned OpenShift Container Platform cluster
and they must be in place before installation. In each record, <cluster_name> is the cluster name and
<base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS
record takes the form: <component>.<cluster_name>.<base_domain>..
Kubernetes API    api.<cluster_name>.<base_domain>.    A DNS A/AAAA or CNAME record, and a DNS PTR
record, to identify the API load balancer. These records must be resolvable by both clients external
to the cluster and from all the nodes within the cluster.
IMPORTANT
Bootstrap machine    bootstrap.<cluster_name>.<base_domain>.    A DNS A/AAAA or CNAME record, and a
DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes
within the cluster.
Control plane machines    <control_plane><n>.<cluster_name>.<base_domain>.    DNS A/AAAA or CNAME
records and DNS PTR records to identify each machine for the control plane nodes. These records must
be resolvable by the nodes within the cluster.
Compute machines    <compute><n>.<cluster_name>.<base_domain>.    DNS A/AAAA or CNAME records and
DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by
the nodes within the cluster.
NOTE
In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and
SRV records in your DNS configuration.
TIP
You can use the dig command to verify name and reverse name resolution. See the section on
Validating DNS resolution for user-provisioned infrastructure for detailed validation steps.
This section provides A and PTR record configuration samples that meet the DNS requirements for
deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant
to provide advice for choosing one DNS solution over another.
In the examples, the cluster name is ocp4 and the base domain is example.com.
$TTL 1W
@ IN SOA ns1.example.com. root (
2019070700 ; serial
3H ; refresh (3 hours)
30M ; retry (30 minutes)
2W ; expiry (2 weeks)
1W ) ; minimum (1 week)
IN NS ns1.example.com.
IN MX 10 smtp.example.com.
;
;
ns1.example.com. IN A 192.168.1.5
smtp.example.com. IN A 192.168.1.5
;
helper.example.com. IN A 192.168.1.5
helper.ocp4.example.com. IN A 192.168.1.5
;
api.ocp4.example.com. IN A 192.168.1.5 1
api-int.ocp4.example.com. IN A 192.168.1.5 2
;
*.apps.ocp4.example.com. IN A 192.168.1.5 3
;
bootstrap.ocp4.example.com. IN A 192.168.1.96 4
;
control-plane0.ocp4.example.com. IN A 192.168.1.97 5
control-plane1.ocp4.example.com. IN A 192.168.1.98 6
control-plane2.ocp4.example.com. IN A 192.168.1.99 7
;
compute0.ocp4.example.com. IN A 192.168.1.11 8
compute1.ocp4.example.com. IN A 192.168.1.7 9
;
;EOF
1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API
load balancer.
2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API
load balancer and is used for internal cluster communications.
3 Provides name resolution for the wildcard routes. The record refers to the IP address of the
application ingress load balancer. The application ingress load balancer targets the machines
that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines
by default.
NOTE
In the example, the same load balancer is used for the Kubernetes API and
application ingress traffic. In production scenarios, you can deploy the API and
application ingress load balancers separately so that you can scale the load
balancer infrastructure for each in isolation.
$TTL 1W
@ IN SOA ns1.example.com. root (
2019070700 ; serial
3H ; refresh (3 hours)
30M ; retry (30 minutes)
2W ; expiry (2 weeks)
1W ) ; minimum (1 week)
IN NS ns1.example.com.
;
5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1
5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2
;
96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3
;
97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4
98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5
99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6
;
11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7
7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8
;
;EOF
1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record
name of the API load balancer.
2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record
name of the API load balancer and is used for internal cluster communications.
NOTE
A PTR record is not required for the OpenShift Container Platform application wildcard.
Additional resources
NOTE
If you want to deploy the API and application Ingress load balancers with a Red Hat
Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately.
1. API load balancer: Provides a common endpoint for users, both human and machine, to interact
with and configure the platform. Configure the following conditions:
Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode.
A stateless load balancing algorithm. The options vary based on the load balancer
implementation.
IMPORTANT
Configure the following ports on both the front and back of the load balancers:
NOTE
2. Application Ingress load balancer: Provides an ingress point for application traffic flowing in
from outside the cluster. A working configuration for the Ingress router is required for an
OpenShift Container Platform cluster.
Configure the following conditions:
Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode.
TIP
If the true IP address of the client can be seen by the application Ingress load balancer, enabling
source IP-based session persistence can improve performance for applications that use end-
to-end TLS encryption.
Configure the following ports on both the front and back of the load balancers:
NOTE
If you are deploying a three-node cluster with zero compute nodes, the Ingress
Controller pods run on the control plane nodes. In three-node cluster
deployments, you must configure your application Ingress load balancer to route
HTTP and HTTPS traffic to the control plane nodes.
This section provides an example API and application Ingress load balancer configuration that meets the
load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg
configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing
one load balancing solution over another.
In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In
production scenarios, you can deploy the API and application ingress load balancers separately so that
you can scale the load balancer infrastructure for each in isolation.
NOTE
If you are using HAProxy as a load balancer and SELinux is set to enforcing, you must
ensure that the HAProxy service can bind to the configured TCP port by running
setsebool -P haproxy_connect_any=1.
Example 4.3. Sample API and application Ingress load balancer configuration
global
log 127.0.0.1 local2
pidfile /var/run/haproxy.pid
maxconn 4000
daemon
defaults
mode http
log global
option dontlognull
option http-server-close
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
listen api-server-6443 1
bind *:6443
mode tcp
option httpchk GET /readyz HTTP/1.0
option log-health-checks
balance roundrobin
server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2
rise 3 backup 2
server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s
fall 2 rise 3
server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s
fall 2 rise 3
server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s
fall 2 rise 3
listen machine-config-server-22623 3
bind *:22623
mode tcp
server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4
server master0 master0.ocp4.example.com:22623 check inter 1s
server master1 master1.ocp4.example.com:22623 check inter 1s
server master2 master2.ocp4.example.com:22623 check inter 1s
listen ingress-router-443 5
bind *:443
mode tcp
balance source
server compute0 compute0.ocp4.example.com:443 check inter 1s
server compute1 compute1.ocp4.example.com:443 check inter 1s
listen ingress-router-80 6
bind *:80
mode tcp
balance source
server compute0 compute0.ocp4.example.com:80 check inter 1s
server compute1 compute1.ocp4.example.com:80 check inter 1s
1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines.
2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster
installation and they must be removed after the bootstrap process is complete.
3 Port 22623 handles the machine config server traffic and points to the control plane machines.
5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller
pods. The Ingress Controller pods run on the compute machines by default.
6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller
pods. The Ingress Controller pods run on the compute machines by default.
NOTE
If you are deploying a three-node cluster with zero compute nodes, the Ingress
Controller pods run on the control plane nodes. In three-node cluster
deployments, you must configure your application Ingress load balancer to route
HTTP and HTTPS traffic to the control plane nodes.
TIP
If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports
6443, 22623, 443, and 80 by running netstat -nltupe on the HAProxy node.
As an alternative to using the configure-ovs.sh shell script to set a br-ex bridge on a bare-metal
platform, you can create a MachineConfig object that includes an NMState configuration file. The
NMState configuration file creates a customized br-ex bridge network configuration on each node in
your cluster.
IMPORTANT
For more information about the support scope of Red Hat Technology Preview features,
see Technology Preview Features Support Scope .
Consider the following use cases for creating a manifest object that includes a customized br-ex bridge:
You want to make postinstallation changes to the bridge, such as changing the Open vSwitch
(OVS) or OVN-Kubernetes br-ex bridge network. The configure-ovs.sh shell script does not
support making postinstallation changes to the bridge.
You want to deploy the bridge on a different interface than the interface available on a host or
server IP address.
You want to make advanced configurations to the bridge that are not possible with the
configure-ovs.sh shell script. Using the script for these configurations might result in the
bridge failing to connect multiple network interfaces and to facilitate data forwarding between
the interfaces.
NOTE
If you require an environment with a single network interface controller (NIC) and default
network settings, use the configure-ovs.sh shell script.
After you install Red Hat Enterprise Linux CoreOS (RHCOS) and the system reboots, the Machine
Config Operator injects Ignition configuration files into each node in your cluster, so that each node
receives the br-ex bridge network configuration. To prevent configuration conflicts, the configure-
ovs.sh shell script receives a signal to not configure the br-ex bridge.
Prerequisites
Optional: You have installed the nmstate API so that you can validate the NMState
configuration.
Procedure
1. Create an NMState configuration file that defines your customized br-ex bridge network. The file
holds the plain, not yet base64-encoded, configuration:
interfaces:
- name: enp2s0 1
type: ethernet 2
state: up 3
ipv4:
enabled: false 4
ipv6:
enabled: false
- name: br-ex
type: ovs-bridge
state: up
ipv4:
enabled: false
dhcp: false
ipv6:
enabled: false
dhcp: false
bridge:
port:
- name: enp2s0 5
- name: br-ex
- name: br-ex
type: ovs-interface
state: up
copy-mac-from: enp2s0
ipv4:
enabled: true
dhcp: true
ipv6:
enabled: false
dhcp: false
# ...
2. Use the cat command to base64-encode the contents of the NMState configuration:
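The encoding command itself is not reproduced in this extract. A typical invocation, assuming your NMState configuration is saved in the file referenced by the callout below, is:
$ cat <nmstate_configuration> | base64 -w0 1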
1 Replace <nmstate_configuration> with the name of your NMState resource YAML file.
3. Create a MachineConfig manifest file and define a customized br-ex bridge network
configuration analogous to the following example:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
labels:
machineconfiguration.openshift.io/role: worker 1
name: 10-br-ex-worker 2
spec:
config:
ignition:
version: 3.2.0
storage:
files:
- contents:
source: data:text/plain;charset=utf-8;base64,
<base64_encoded_nmstate_configuration> 3
mode: 0644
overwrite: true
path: /etc/nmstate/openshift/cluster.yml
# ...
1 For each node in your cluster, specify the hostname path to your node and the base-64
encoded Ignition configuration file data for the machine type. If you have a single global
configuration specified in an /etc/nmstate/openshift/cluster.yml configuration file that
you want to apply to all nodes in your cluster, you do not need to specify the hostname
path for each node. The worker role is the default role for nodes in your cluster. The .yaml
extension does not work when specifying the hostname path for each node or all nodes in
the MachineConfig manifest file.
After you configure these resources, you must scale machine sets, so that the machine sets can apply
the resource configuration to each compute node and reboot the nodes.
Prerequisites
You created a MachineConfig manifest object that includes a customized br-ex bridge
configuration.
Procedure
1. Edit the MachineConfig custom resource (CR) by entering the following command:
$ oc edit mc <machineconfig_custom_resource_name>
2. Add each compute node configuration to the CR, so that the CR can manage roles for each
defined compute node in your cluster.
3. Create a Secret object named extraworker-secret that has a minimal static IP configuration.
4. Apply the extraworker-secret secret to each node in your cluster by entering the following
command. This step provides each compute node access to the Ignition config file.
$ oc apply -f ./extraworker-secret.yaml
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
spec:
# ...
preprovisioningNetworkDataName: ostest-extraworker-0-network-config-secret
# ...
$ oc project openshift-machine-api
$ oc get machinesets
8. Scale each machine set by entering the following command. You must run this command for
each machine set.
1 Where <machineset_name> is the name of the machine set and <n> is the number of
compute nodes.
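The scale command itself is omitted in this extract. A typical form, using the placeholders described in the callout above, is:
$ oc scale --replicas=<n> machineset <machineset_name> -n openshift-machine-api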
This section provides details about the high-level steps required to set up your cluster infrastructure in
preparation for an OpenShift Container Platform installation. This includes configuring IP networking
and network connectivity for your cluster nodes, enabling the required ports through your firewall, and
setting up the required DNS and load balancing infrastructure.
After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements
for a cluster with user-provisioned infrastructure section.
Prerequisites
You have reviewed the OpenShift Container Platform 4.x Tested Integrations page.
You have reviewed the infrastructure requirements detailed in the Requirements for a cluster
with user-provisioned infrastructure section.
Procedure
1. If you are using DHCP to provide the IP networking configuration to your cluster nodes,
configure your DHCP service.
a. Add persistent IP addresses for the nodes to your DHCP server configuration. In your
configuration, match the MAC address of the relevant network interface to the intended IP
address for each node.
b. When you use DHCP to configure IP addressing for the cluster machines, the machines also
obtain the DNS server information through DHCP. Define the persistent DNS server
address that is used by the cluster nodes through your DHCP server configuration.
NOTE
If you are not using a DHCP service, you must provide the IP networking
configuration and the address of the DNS server to the nodes at RHCOS
install time. These can be passed as boot arguments if you are installing from
an ISO image. See the Installing RHCOS and starting the OpenShift
Container Platform bootstrap process section for more information about
static IP provisioning and advanced networking options.
c. Define the hostnames of your cluster nodes in your DHCP server configuration. See the
Setting the cluster node hostnames through DHCP section for details about hostname
considerations.
NOTE
If you are not using a DHCP service, the cluster nodes obtain their hostname
through a reverse DNS lookup.
2. Ensure that your network infrastructure provides the required network connectivity between
the cluster components. See the Networking requirements for user-provisioned infrastructure
section for details about the requirements.
3. Configure your firewall to enable the ports required for the OpenShift Container Platform
cluster components to communicate. See Networking requirements for user-provisioned
infrastructure section for details about the ports that are required.
IMPORTANT
Avoid using the Ingress load balancer to expose the Ingress Controller statistics port, because
doing so might result in the exposure of sensitive information, such as statistics and
metrics, related to Ingress Controllers.
4. Set up the required DNS infrastructure for your cluster.
a. Configure DNS name resolution for the Kubernetes API, the application wildcard, the
bootstrap machine, the control plane machines, and the compute machines.
b. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the
control plane machines, and the compute machines.
See the User-provisioned DNS requirements section for more information about the
OpenShift Container Platform DNS requirements.
5. Validate your DNS configuration.
a. From your installation node, run DNS lookups against the record names of the Kubernetes
API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the
responses correspond to the correct components.
b. From your installation node, run reverse DNS lookups against the IP addresses of the load
balancer and the cluster nodes. Validate that the record names in the responses correspond
to the correct components.
See the Validating DNS resolution for user-provisioned infrastructure section for detailed
DNS validation steps.
6. Provision the required API and application ingress load balancing infrastructure. See the Load
balancing requirements for user-provisioned infrastructure section for more information about
the requirements.
NOTE
Some load balancing solutions require the DNS name resolution for the cluster nodes to
be in place before the load balancing is initialized.
Additional resources
Installing RHCOS and starting the OpenShift Container Platform bootstrap process
IMPORTANT
The validation steps detailed in this section must succeed before you install your cluster.
Prerequisites
You have configured the required DNS records for your user-provisioned infrastructure.
Procedure
1. From your installation node, run DNS lookups against the record names of the Kubernetes API,
the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the
responses correspond to the correct components.
a. Perform a lookup against the Kubernetes API record name. Check that the result points to
the IP address of the API load balancer:
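The lookup command is not shown in this extract. A dig query of the following form is typical, where <nameserver_ip> is the address of your DNS server:
$ dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain>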
Example output
b. Perform a lookup against the Kubernetes internal API record name. Check that the result
points to the IP address of the API load balancer:
Example output
c. Perform a lookup against an example *.apps wildcard record, such as
random.apps.<cluster_name>.<base_domain>. Check that the result points to the IP address of
the application Ingress load balancer:
Example output
NOTE
In the example outputs, the same load balancer is used for the Kubernetes
API and application ingress traffic. In production scenarios, you can deploy
the API and application ingress load balancers separately so that you can
scale the load balancer infrastructure for each in isolation.
You can replace random with another wildcard value. For example, you can query the route
to the OpenShift Container Platform console:
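For example, a query similar to the following checks the console route (the route host name shown here is an assumption for illustration):
$ dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>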
Example output
d. Run a lookup against the bootstrap DNS record name. Check that the result points to the IP
address of the bootstrap node:
Example output
e. Use this method to perform lookups against the DNS record names for the control plane
and compute nodes. Check that the results correspond to the IP addresses of each node.
2. From your installation node, run reverse DNS lookups against the IP addresses of the load
balancer and the cluster nodes. Validate that the record names contained in the responses
correspond to the correct components.
a. Perform a reverse lookup against the IP address of the API load balancer. Check that the
response includes the record names for the Kubernetes API and the Kubernetes internal
API:
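The reverse lookup command is not shown in this extract. A typical form, where <api_load_balancer_ip> is the IP address of your API load balancer, is:
$ dig +noall +answer @<nameserver_ip> -x <api_load_balancer_ip>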
Example output
b. Perform a reverse lookup against the IP address of the bootstrap node. Check that the
result points to the DNS record name of the bootstrap node:
Example output
c. Use this method to perform reverse lookups against the IP addresses for the control plane
and compute nodes. Check that the results correspond to the DNS record names of each
node.
Additional resources
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user
core. To access the nodes through SSH, the private key identity must be managed by SSH for your local
user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you
must provide the SSH public key during the installation process. The ./openshift-install gather
command also requires the SSH public key to be in place on the cluster nodes.
IMPORTANT
Do not skip this procedure in production environments, where disaster recovery and
debugging are required.
NOTE
You must use a local key, not one that you configured with platform-specific approaches
such as AWS key pairs.
Procedure
1. If you do not have an existing SSH key pair on your local machine to use for authentication onto
your cluster nodes, create one. For example, on a computer that uses a Linux operating system,
run the following command:
1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have
an existing key pair, ensure your public key is in your ~/.ssh directory.
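The key-generation command is omitted in this extract. A typical invocation, using the path and file name from the callout above, is:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name>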
NOTE
If you plan to install an OpenShift Container Platform cluster that uses the RHEL
cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3
Validation on only the x86_64, ppc64le, and s390x architectures, do not create a
key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or
ecdsa algorithm.
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
3. Add the SSH private key identity to the SSH agent for your local user, if it has not already been
added. SSH agent management of the key is required for password-less SSH authentication
onto your cluster nodes, or if you want to use the ./openshift-install gather command.
a. If the ssh-agent process is not already running for your local user, start it as a background
task:
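A common way to start the agent and export its environment variables is:
$ eval "$(ssh-agent -s)"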
Example output
$ ssh-add <path>/<file_name> 1
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519
Example output
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installation
program. If you install a cluster on infrastructure that you provision, you must provide the key to
the installation program.
Additional resources
Prerequisites
You have an SSH public key on your local machine to provide to the installation program. The
key will be used for SSH authentication onto your cluster nodes for debugging and disaster
recovery.
You have obtained the OpenShift Container Platform installation program and the pull secret
for your cluster.
Obtain the imageContentSources section from the output of the command to mirror the
repository.
Procedure
$ mkdir <installation_directory>
IMPORTANT
You must create a directory. Some installation assets, like bootstrap X.509
certificates, have short expiration intervals, so you must not reuse an installation
directory. If you want to reuse individual files from another cluster installation,
you can copy them into your directory. However, the file names for the
installation assets might change between releases. Use caution when copying
installation files from an earlier OpenShift Container Platform version.
2. Customize the sample install-config.yaml file template that is provided and save it in the
<installation_directory>.
NOTE
Unless you use a registry that RHCOS trusts by default, such as docker.io, you must provide
the contents of the certificate for your mirror repository in the additionalTrustBundle
section. In most cases, you must provide the certificate for your mirror.
You must include the imageContentSources section from the output of the command to
mirror the repository.
IMPORTANT
You must run the 'oc mirror' command twice. The first time you run the oc
mirror command, you get a full ImageContentSourcePolicy file. The second
time you run the oc mirror command, you only get the difference between
the first and second run. Because of this behavior, you must always keep a
backup of these files in case you need to merge them into one complete
ImageContentSourcePolicy file. Keeping a backup of these two output files
ensures that you have a complete ImageContentSourcePolicy file.
3. Back up the install-config.yaml file so that you can use it to install multiple clusters.
IMPORTANT
The install-config.yaml file is consumed during the next step of the installation
process. You must back it up now.
Additional resources
apiVersion: v1
baseDomain: example.com 1
compute: 2
- hyperthreading: Enabled 3
name: worker
replicas: 0 4
controlPlane: 5
hyperthreading: Enabled 6
name: master
replicas: 3 7
metadata:
name: test 8
networking:
clusterNetwork:
- cidr: 10.128.0.0/14 9
hostPrefix: 23 10
networkType: OVNKubernetes 11
serviceNetwork: 12
- 172.30.0.0/16
platform:
none: {} 13
fips: false 14
pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 15
sshKey: 'ssh-ed25519 AAAA...' 16
additionalTrustBundle: | 17
-----BEGIN CERTIFICATE-----
ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
-----END CERTIFICATE-----
imageContentSources: 18
- mirrors:
- <local_registry>/<local_repository_name>/release
source: quay.io/openshift-release-dev/ocp-release
- mirrors:
- <local_registry>/<local_repository_name>/release
source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the
cluster name.
2 5 The controlPlane section is a single mapping, but the compute section is a sequence of
mappings. To meet the requirements of the different data structures, the first line of the compute
section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only
one control plane pool is used.
4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned
infrastructure. In installer-provisioned installations, the parameter controls the number of compute
machines that the cluster creates and manages for you.
NOTE
If you are installing a three-node cluster, do not deploy any compute machines when
you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines.
7 The number of control plane machines that you add to the cluster. Because the cluster uses these
values as the number of etcd endpoints in the cluster, the value must match the number of control
plane machines that you deploy.
9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap
with existing physical networks. These IP addresses are used for the pod network. If you need to
access the pods from an external network, you must configure load balancers and routers to
manage the traffic.
NOTE
The Class E CIDR range is reserved for future use. To use the Class E CIDR range, you
must ensure your networking environment accepts the IP addresses within the Class
E CIDR range.
10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23,
then each node is assigned a /23 subnet out of the given cidr, which allows for 510 (2^(32 - 23) - 2)
pod IP addresses. If you are required to provide access to nodes from an external network,
configure load balancers and routers to manage the traffic.
11 The cluster network plugin to install. The default value OVNKubernetes is the only supported
value.
12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This
block must not overlap with existing physical networks. If you need to access the services from an
external network, configure load balancers and routers to manage the traffic.
13 You must set the platform to none. You cannot provide additional platform configuration variables
for your platform.
IMPORTANT
Clusters that are installed with the platform type none are unable to use some
features, such as managing compute machines with the Machine API. This limitation
applies even if the compute machines that are attached to the cluster are installed
on a platform that would normally support the feature. This parameter cannot be
changed after installation.
14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is
enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container
Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography
modules that are provided with RHCOS instead.
IMPORTANT
To enable FIPS mode for your cluster, you must run the installation program from a
Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode.
For more information about configuring FIPS mode on RHEL, see Switching RHEL
to FIPS mode.
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux
CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core
components use the RHEL cryptographic libraries that have been submitted to NIST
for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x
architectures.
15 For <local_registry>, specify the registry domain name, and optionally the port, that your mirror
registry uses to serve content. For example, registry.example.com or
registry.example.com:5000. For <credentials>, specify the base64-encoded user name and
password for your mirror registry.
16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS).
NOTE
For production OpenShift Container Platform clusters on which you want to perform
installation debugging or disaster recovery, specify an SSH key that your ssh-agent
process uses.
17 Provide the contents of the certificate file that you used for your mirror registry.
18 Provide the imageContentSources section according to the output of the command that you
used to mirror the repository.
IMPORTANT
When using the oc adm release mirror command, use the output from the
imageContentSources section.
Additional resources
See Load balancing requirements for user-provisioned infrastructure for more information on
the API and application ingress load balancing requirements.
NOTE
For bare metal installations, if you do not assign node IP addresses from the range that is
specified in the networking.machineNetwork[].cidr field in the install-config.yaml file,
you must include them in the proxy.noProxy field.
Prerequisites
You reviewed the sites that your cluster requires access to and determined whether any of
them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to
hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to
bypass the proxy if necessary.
NOTE
The Proxy object status.noProxy field is populated with the values of the
networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and
networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP),
Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object
status.noProxy field is also populated with the instance metadata endpoint
(169.254.169.254).
Procedure
1. Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
httpProxy: http://<username>:<pswd>@<ip>:<port> 1
httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
noProxy: example.com 3
additionalTrustBundle: | 4
-----BEGIN CERTIFICATE-----
<MY_TRUSTED_CA_CERT>
-----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme
must be http.
2 A proxy URL to use for creating HTTPS connections outside the cluster.
4 If provided, the installation program generates a config map that is named user-ca-bundle
in the openshift-config namespace that contains one or more additional CA certificates
that are required for proxying HTTPS connections. The Cluster Network Operator then
creates a trusted-ca-bundle config map that merges these contents with the Red Hat
Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the
trustedCA field of the Proxy object. The additionalTrustBundle field is required unless
the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
5 Optional: The policy to determine the configuration of the Proxy object to reference the
user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and
Always. Use Proxyonly to reference the user-ca-bundle config map only when
http/https proxy is configured. Use Always to always reference the user-ca-bundle
config map. The default value is Proxyonly.
NOTE
The installation program does not support the proxy readinessEndpoints field.
NOTE
If the installer times out, restart and then complete the deployment by using the
wait-for command of the installer. For example:
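A typical invocation, with the log level shown here as an example, is:
$ ./openshift-install wait-for install-complete --log-level debug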
2. Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings
in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still
created, but it will have a nil spec.
NOTE
Only the Proxy object named cluster is supported, and no additional proxies can be
created.
In three-node OpenShift Container Platform environments, the three control plane machines are
schedulable, which means that your application workloads are scheduled to run on them.
Prerequisites
Procedure
Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown
in the following compute stanza:
compute:
- name: worker
platform: {}
replicas: 0
NOTE
You must set the value of the replicas parameter for the compute machines to 0
when you install OpenShift Container Platform on user-provisioned
infrastructure, regardless of the number of compute machines you are deploying.
In installer-provisioned installations, the parameter controls the number of
compute machines that the cluster creates and manages for you. This does not
apply to user-provisioned installations, where the compute machines are
deployed manually.
If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods
run on the control plane nodes. In three-node cluster deployments, you must configure your
application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.
See the Load balancing requirements for user-provisioned infrastructure section for more
information.
When you create the Kubernetes manifest files in the following procedure, ensure that the
mastersSchedulable parameter in the <installation_directory>/manifests/cluster-
scheduler-02-config.yml file is set to true. This enables your application workloads to run on
the control plane nodes.
Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS
(RHCOS) machines.
The installation configuration file is transformed into Kubernetes manifests. The manifests are wrapped
into the Ignition configuration files, which are later used to configure the cluster machines.
IMPORTANT
The Ignition config files that the OpenShift Container Platform installation
program generates contain certificates that expire after 24 hours, which are then
renewed at that time. If the cluster is shut down before renewing the certificates
and the cluster is later restarted after the 24 hours have elapsed, the cluster
automatically recovers the expired certificates. The exception is that you must
manually approve the pending node-bootstrapper certificate signing requests
(CSRs) to recover kubelet certificates. See the documentation for Recovering
from expired control plane certificates for more information.
It is recommended that you use Ignition config files within 12 hours after they are
generated because the 24-hour certificate rotates from 16 to 22 hours after the
cluster is installed. By using the Ignition config files within 12 hours, you can avoid
installation failure if the certificate update runs during installation.
Prerequisites
You obtained the OpenShift Container Platform installation program. For a restricted network
installation, these files are on your mirror host.
Procedure
1. Change to the directory that contains the OpenShift Container Platform installation program
and generate the Kubernetes manifests for the cluster:
1 For <installation_directory>, specify the installation directory that contains the install-
config.yaml file you created.
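The command referenced by this step is typically of the following form, where <installation_directory> matches the callout above:
$ ./openshift-install create manifests --dir <installation_directory>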
WARNING
If you are installing a three-node cluster, skip the following step to allow the
control plane nodes to be schedulable.
IMPORTANT
When you configure control plane nodes from the default unschedulable to
schedulable, additional subscriptions are required. This is because control plane
nodes then become compute nodes.
3. To create the Ignition configuration files, run the following command from the directory that
contains the installation program:
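A typical invocation, assuming the same <installation_directory> as the previous step, is:
$ ./openshift-install create ignition-configs --dir <installation_directory>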
Ignition config files are created for the bootstrap, control plane, and compute nodes in the
installation directory. The kubeadmin-password and kubeconfig files are created in the
./<installation_directory>/auth directory:
.
├── auth
│ ├── kubeadmin-password
│ └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
Additional resources
See Recovering from expired control plane certificates for more information about recovering
kubelet certificates.
Procedure
1. Create a Butane config including the contents of the chrony.conf file. For example, to
configure chrony on worker nodes, create a 99-worker-chrony.bu file.
NOTE
See "Creating machine configs with Butane" for information about Butane.
variant: openshift
version: 4.17.0
metadata:
name: 99-worker-chrony 1
labels:
machineconfiguration.openshift.io/role: worker 2
storage:
files:
- path: /etc/chrony.conf
mode: 0644 3
overwrite: true
contents:
inline: |
pool 0.rhel.pool.ntp.org iburst 4
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
1 2 On control plane nodes, substitute master for worker in both of these locations.
3 Specify an octal value mode for the mode field in the machine config file. After creating
the file and applying the changes, the mode is converted to a decimal value. You can check
the YAML file with the command oc get mc <mc-name> -o yaml.
4 Specify any valid, reachable time source, such as the one provided by your DHCP server.
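The conversion from the Butane config to a MachineConfig manifest is not shown in this extract; with the butane CLI, it is typically:
$ butane 99-worker-chrony.bu -o 99-worker-chrony.yaml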
If the cluster is not running yet, after you generate manifest files, add the MachineConfig
object file to the <installation_directory>/openshift directory, and then continue to create
the cluster.
If the cluster is already running, apply the MachineConfig object file by entering the following command:
$ oc apply -f ./99-worker-chrony.yaml
To install RHCOS on the machines, follow either the steps to use an ISO image or network PXE booting.
NOTE
The compute node deployment steps included in this installation document are RHCOS-
specific. If you choose instead to deploy RHEL-based compute nodes, you take
responsibility for all operating system life cycle management and maintenance, including
performing system updates, applying patches, and completing all other required tasks.
Only RHEL 8 compute machines are supported.
You can configure RHCOS during ISO and PXE installations by using the following methods:
Kernel arguments: You can use kernel arguments to provide installation-specific information.
For example, you can specify the locations of the RHCOS installation files that you uploaded to
your HTTP server and the location of the Ignition config file for the type of node you are
installing. For a PXE installation, you can use the APPEND parameter to pass the arguments to
the kernel of the live installer. For an ISO installation, you can interrupt the live installation boot
process to add the kernel arguments. In both installation cases, you can use special
coreos.inst.* arguments to direct the live installer, as well as standard installation boot
arguments for turning standard kernel services on or off.
Ignition configs: OpenShift Container Platform Ignition config files (*.ign) are specific to the
type of node you are installing. You pass the location of a bootstrap, control plane, or compute
node Ignition config file during the RHCOS installation so that it takes effect on first boot. In
special cases, you can create a separate, limited Ignition config to pass to the live system. That
Ignition config could do a certain set of tasks, such as reporting success to a provisioning system
after completing installation. This special Ignition config is consumed by the coreos-installer to
be applied on first boot of the installed system. Do not provide the standard control plane and
compute node Ignition configs to the live ISO directly.
coreos-installer: You can boot the live ISO installer to a shell prompt, which allows you to
prepare the permanent system in a variety of ways before first boot. In particular, you can run
the coreos-installer command to identify various artifacts to include, work with disk partitions,
and set up networking. In some cases, you can configure features on the live system and copy
them to the installed system.
Whether to use an ISO or PXE install depends on your situation. A PXE install requires an available DHCP
service and more preparation, but can make the installation process more automated. An ISO install is a
more manual process and can be inconvenient if you are setting up more than a few machines.
NOTE
As of OpenShift Container Platform 4.6, the RHCOS ISO and other installation artifacts
provide support for installation on disks with 4K sectors.
Prerequisites
You have created the Ignition config files for your cluster.
You have configured suitable network, DNS and load balancing infrastructure.
You have an HTTP server that can be accessed from your computer, and from the machines
that you create.
You have reviewed the Advanced RHCOS installation configuration section for different ways to
configure features, such as networking and disk partitioning.
Procedure
1. Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the
following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition
config file:
$ sha512sum <installation_directory>/bootstrap.ign
The digests are provided to the coreos-installer in a later step to validate the authenticity of
the Ignition config files on the cluster nodes.
2. Upload the bootstrap, control plane, and compute node Ignition config files that the installation
program created to your HTTP server. Note the URLs of these files.
IMPORTANT
You can add or change configuration settings in your Ignition configs before
saving them to your HTTP server. If you plan to add more compute machines to
your cluster after you finish installation, do not delete these files.
3. From the installation host, validate that the Ignition config files are available on the URLs. The
following example gets the Ignition config file for the bootstrap node:
$ curl -k http://<HTTP_server>/bootstrap.ign 1
Example output
Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the
Ignition config files for the control plane and compute nodes are also available.
4. Although it is possible to obtain the RHCOS images that are required for your preferred method
of installing operating system instances from the RHCOS image mirror page, the recommended
way to obtain the correct version of your RHCOS images is from the output of the openshift-
install command:
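A typical query, assuming the openshift-install binary is in your working directory, is:
$ openshift-install coreos print-stream-json | grep '\.iso[^.]'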
Example output
"location": "<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-
<release>-live.aarch64.iso",
"location": "<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-
<release>-live.ppc64le.iso",
"location": "<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-
228
CHAPTER 4. INSTALLING A USER-PROVISIONED BARE METAL CLUSTER ON A RESTRICTED NETWORK
live.s390x.iso",
"location": "<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-
live.x86_64.iso",
IMPORTANT
The RHCOS images might not change with every release of OpenShift Container
Platform. You must download images with the highest version that is less than or
equal to the OpenShift Container Platform version that you install. Use the image
versions that match your OpenShift Container Platform version if they are
available. Use only ISO images for this procedure. RHCOS qcow2 images are not
supported for this installation type.
rhcos-<version>-live.<architecture>.iso
5. Use the ISO to start the RHCOS installation. Use one of the following installation options:
6. Boot the RHCOS ISO image without specifying any options or interrupting the live boot
sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment.
7. Run the coreos-installer command and specify the options that meet your installation
requirements. At a minimum, you must specify the URL that points to the Ignition config file for
the node type, and the device that you are installing to:
1 1 You must run the coreos-installer command by using sudo, because the core user does
not have the required root privileges to perform the installation.
2 The --ignition-hash option is required when the Ignition config file is obtained through an
HTTP URL to validate the authenticity of the Ignition config file on the cluster node.
<digest> is the Ignition config file SHA512 digest obtained in a preceding step.
NOTE
If you want to provide your Ignition config files through an HTTPS server that
uses TLS, you can add the internal certificate authority (CA) to the system trust
store before running coreos-installer.
The following example initializes a bootstrap node installation to the /dev/sda device. The
Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP
address 192.168.1.2:
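A sketch of such a command, with the digest and the web server path as placeholders, is:
$ sudo coreos-installer install --ignition-url=http://192.168.1.2/bootstrap.ign --ignition-hash=sha512-<digest> /dev/sda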
8. Monitor the progress of the RHCOS installation on the console of the machine.
IMPORTANT
Be sure that the installation is successful on each node before commencing with
the OpenShift Container Platform installation. Observing the installation process
can also help to determine the cause of RHCOS installation issues that might
arise.
9. After RHCOS installs, you must reboot the system. During the system reboot, it applies the
Ignition config file that you specified.
Example command
IMPORTANT
You must create the bootstrap and control plane machines at this time. If the
control plane machines are not made schedulable, also create at least two
compute machines before you install OpenShift Container Platform.
If the required network, DNS, and load balancer infrastructure are in place, the OpenShift
Container Platform bootstrap process begins automatically after the RHCOS nodes have
rebooted.
NOTE
RHCOS nodes do not include a default password for the core user. You can
access the nodes by running ssh core@<node>.<cluster_name>.
<base_domain> as a user with access to the SSH private key that is paired to
the public key that you specified in your install_config.yaml file. OpenShift
Container Platform 4 cluster nodes running RHCOS are immutable and rely on
Operators to apply cluster changes. Accessing cluster nodes by using SSH is not
recommended. However, when investigating installation issues, if the OpenShift
Container Platform API is not available, or the kubelet is not properly functioning
on a target node, SSH access might be required for debugging or disaster
recovery.
Prerequisites
You have created the Ignition config files for your cluster.
You have configured suitable network, DNS and load balancing infrastructure.
You have an HTTP server that can be accessed from your computer, and from the machines
that you create.
You have reviewed the Advanced RHCOS installation configuration section for different ways to
configure features, such as networking and disk partitioning.
Procedure
1. Upload the bootstrap, control plane, and compute node Ignition config files that the installation
program created to your HTTP server. Note the URLs of these files.
IMPORTANT
You can add or change configuration settings in your Ignition configs before
saving them to your HTTP server. If you plan to add more compute machines to
your cluster after you finish installation, do not delete these files.
2. From the installation host, validate that the Ignition config files are available on the URLs. The
following example gets the Ignition config file for the bootstrap node:
$ curl -k http://<HTTP_server>/bootstrap.ign 1
Example output
Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the
Ignition config files for the control plane and compute nodes are also available.
3. Although it is possible to obtain the RHCOS kernel, initramfs, and rootfs files that are required
for your preferred method of installing operating system instances from the RHCOS image
mirror page, the recommended way to obtain the correct version of your RHCOS files is from
the output of the openshift-install command:
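A typical query, assuming the openshift-install binary is in your working directory, is:
$ openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"'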
Example output
"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-
kernel-aarch64"
"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-
initramfs.aarch64.img"
"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-
rootfs.aarch64.img"
"<url>/art/storage/releases/rhcos-4.17-ppc64le/49.84.202110081256-0/ppc64le/rhcos-
<release>-live-kernel-ppc64le"
"<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-<release>-live-
initramfs.ppc64le.img"
"<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-<release>-live-
rootfs.ppc64le.img"
"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-kernel-
s390x"
"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-
initramfs.s390x.img"
"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-
rootfs.s390x.img"
"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-kernel-
x86_64"
"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-
initramfs.x86_64.img"
"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-
rootfs.x86_64.img"
IMPORTANT
The RHCOS artifacts might not change with every release of OpenShift
Container Platform. You must download images with the highest version that is
less than or equal to the OpenShift Container Platform version that you install.
Only use the appropriate kernel, initramfs, and rootfs artifacts described below
for this procedure. RHCOS QCOW2 images are not supported for this installation
type.
The file names contain the OpenShift Container Platform version number. They resemble the
following examples:
kernel: rhcos-<version>-live-kernel-<architecture>
initramfs: rhcos-<version>-live-initramfs.<architecture>.img
rootfs: rhcos-<version>-live-rootfs.<architecture>.img
4. Upload the rootfs, kernel, and initramfs files to your HTTP server.
IMPORTANT
If you plan to add more compute machines to your cluster after you finish
installation, do not delete these files.
5. Configure the network boot infrastructure so that the machines boot from their local disks after
RHCOS is installed on them.
6. Configure PXE or iPXE installation for the RHCOS images and begin the installation.
Modify one of the following example menu entries for your environment and verify that the
image and Ignition files are properly accessible:
DEFAULT pxeboot
TIMEOUT 20
PROMPT 0
LABEL pxeboot
KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1
APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3
1 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The
URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported.
2 If you use multiple NICs, specify a single interface in the ip option. For example, to use
DHCP on a NIC that is named eno1, set ip=eno1:dhcp.
3 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The
initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url
parameter value is the location of the rootfs file, and the coreos.inst.ignition_url
parameter value is the location of the bootstrap Ignition config file. You can also add
more kernel arguments to the APPEND line to configure networking or other boot
options.
NOTE
This configuration does not enable serial console access on machines with a
graphical console. To configure a different console, add one or more
console= arguments to the APPEND line. For example, add console=tty0
console=ttyS0 to set the first PC serial port as the primary console and the
graphical console as a secondary console. For more information, see How
does one set up a serial terminal and/or console in Red Hat Enterprise Linux?
and "Enabling the serial console for PXE and ISO installation" in the
"Advanced RHCOS installation configuration" section.
1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The
kernel parameter value is the location of the kernel file, the initrd=main argument is
needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is
the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the
2 If you use multiple NICs, specify a single interface in the ip option. For example, to use
DHCP on a NIC that is named eno1, set ip=eno1:dhcp.
3 Specify the location of the initramfs file that you uploaded to your HTTP server.
NOTE
This configuration does not enable serial console access on machines with a
graphical console. To configure a different console, add one or more
console= arguments to the kernel line. For example, add console=tty0
console=ttyS0 to set the first PC serial port as the primary console and the
graphical console as a secondary console. For more information, see How
does one set up a serial terminal and/or console in Red Hat Enterprise Linux?
and "Enabling the serial console for PXE and ISO installation" in the
"Advanced RHCOS installation configuration" section.
1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP
server. The kernel parameter value is the location of the kernel file on your TFTP
server. The coreos.live.rootfs_url parameter value is the location of the rootfs file,
and the coreos.inst.ignition_url parameter value is the location of the bootstrap
Ignition config file on your HTTP Server.
2 If you use multiple NICs, specify a single interface in the ip option. For example, to use
DHCP on a NIC that is named eno1, set ip=eno1:dhcp.
3 Specify the location of the initramfs file that you uploaded to your TFTP server.
7. Monitor the progress of the RHCOS installation on the console of the machine.
IMPORTANT
Be sure that the installation is successful on each node before commencing with
the OpenShift Container Platform installation. Observing the installation process
can also help to determine the cause of RHCOS installation issues that might
arise.
8. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config
file that you specified.
Example command
IMPORTANT
You must create the bootstrap and control plane machines at this time. If the
control plane machines are not made schedulable, also create at least two
compute machines before you install the cluster.
If the required network, DNS, and load balancer infrastructure are in place, the OpenShift
Container Platform bootstrap process begins automatically after the RHCOS nodes have
rebooted.
NOTE
RHCOS nodes do not include a default password for the core user. You can
access the nodes by running ssh core@<node>.<cluster_name>.
<base_domain> as a user with access to the SSH private key that is paired to
the public key that you specified in your install_config.yaml file. OpenShift
Container Platform 4 cluster nodes running RHCOS are immutable and rely on
Operators to apply cluster changes. Accessing cluster nodes by using SSH is not
recommended. However, when investigating installation issues, if the OpenShift
Container Platform API is not available, or the kubelet is not properly functioning
on a target node, SSH access might be required for debugging or disaster
recovery.
The advanced configuration topics for manual Red Hat Enterprise Linux CoreOS (RHCOS) installations
detailed in this section relate to disk partitioning, networking, and using Ignition configs in different ways.
4.11.3.1. Using advanced networking options for PXE and ISO installations
Networking for OpenShift Container Platform nodes uses DHCP by default to gather all necessary
configuration settings. To set up static IP addresses or configure special settings, such as bonding, you
can do one of the following:
Pass special kernel parameters when you boot the live installer.
Configure networking from a live installer shell prompt, then copy those settings to the installed
system so that they take effect when the installed system first boots.
Procedure
1. Boot the ISO installer.
2. From the live system shell prompt, configure networking for the live system using available
RHEL tools, such as nmcli or nmtui.
3. Run the coreos-installer command to install the system, adding the --copy-network option to
copy networking configuration. For example:
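A sketch of such an invocation, assuming a compute node Ignition config served from an HTTP host:
$ sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>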
Additional resources
See Getting started with nmcli and Getting started with nmtui in the RHEL 8 documentation for
more information about the nmcli and nmtui tools.
Disk partitions are created on OpenShift Container Platform cluster nodes during the Red Hat
Enterprise Linux CoreOS (RHCOS) installation. Each RHCOS node of a particular architecture uses the
same partition layout, unless you override the default partitioning configuration. During the RHCOS
installation, the size of the root file system is increased to use any remaining available space on the
target device.
IMPORTANT
The use of a custom partition scheme on your node might result in OpenShift Container
Platform not monitoring or alerting on some node partitions. If you override the default
partitioning, see Understanding OpenShift File System Monitoring (eviction conditions)
for more information about how OpenShift Container Platform monitors your host file
systems.
For the default partition scheme, nodefs and imagefs monitor the same root filesystem, /.
To override the default partitioning when installing RHCOS on an OpenShift Container Platform cluster
node, you must create separate partitions. Consider a situation where you want to add a separate
storage partition for your containers and container images. For example, by mounting
/var/lib/containers in a separate partition, the kubelet separately monitors /var/lib/containers as the
imagefs directory and the root file system as the nodefs directory.
IMPORTANT
If you have resized your disk size to host a larger file system, consider creating a separate
/var/lib/containers partition. Consider resizing a disk that has an xfs format to reduce
CPU time issues caused by a high number of allocation groups.
In general, you should use the default disk partitioning that is created during the RHCOS installation.
However, there are cases where you might want to create a separate partition for a directory that you
expect to grow.
OpenShift Container Platform supports the addition of a single partition to attach storage to either the
/var directory or a subdirectory of /var. For example:
/var/lib/containers: Holds container-related content that can grow as more images and
containers are added to a system.
/var/lib/etcd: Holds data that you might want to keep separate for purposes such as
performance optimization of etcd storage.
/var: Holds data that you might want to keep separate for purposes such as auditing.
IMPORTANT
For disk sizes larger than 100GB, and especially larger than 1TB, create a separate
/var partition.
Storing the contents of a /var directory separately makes it easier to grow storage for those areas as
needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this
method, you will not have to pull all your containers again, nor will you have to copy massive log files
when you update systems.
The use of a separate partition for the /var directory or a subdirectory of /var also prevents data growth
in the partitioned directory from filling up the root file system.
The following procedure sets up a separate /var partition by adding a machine config manifest that is
wrapped into the Ignition config file for a node type during the preparation phase of an installation.
Procedure
1. On your installation host, change to the directory that contains the OpenShift Container
Platform installation program and generate the Kubernetes manifests for the cluster:
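As in the earlier procedure, this is typically:
$ ./openshift-install create manifests --dir <installation_directory>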
2. Create a Butane config that configures the additional partition. For example, name the file
$HOME/clusterconfig/98-var-partition.bu, change the disk device name to the name of the
storage device on the worker systems, and set the storage size as appropriate. This example
places the /var directory on a separate partition:
variant: openshift
version: 4.17.0
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-var-partition
storage:
  disks:
  - device: /dev/disk/by-id/<device_name> 1
    partitions:
    - label: var
      start_mib: <partition_start_offset> 2
      size_mib: <partition_size> 3
      number: 5
  filesystems:
  - device: /dev/disk/by-partlabel/var
    path: /var
    format: xfs
    mount_options: [defaults, prjquota] 4
    with_mount_unit: true
1 The storage device name of the disk that you want to partition.
2 When adding a data partition to the boot disk, a minimum offset value of 25000 mebibytes
is recommended. The root file system is automatically resized to fill all available space up
to the specified offset. If no offset value is specified, or if the specified value is smaller than
the recommended minimum, the resulting root file system will be too small, and future
reinstalls of RHCOS might overwrite the beginning of the data partition.
4 The prjquota mount option must be enabled for filesystems used for container storage.
NOTE
When creating a separate /var partition, you cannot use different instance types
for compute nodes if the different instance types do not have the same device
name.
3. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory.
For example, run the following command:
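For example, assuming the Butane binary is installed on your installation host:
$ butane $HOME/clusterconfig/98-var-partition.bu -o $HOME/clusterconfig/openshift/98-var-partition.yaml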
Ignition config files are created for the bootstrap, control plane, and compute nodes in the
installation directory:
.
├── auth
│ ├── kubeadmin-password
│ └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
Next steps
You can apply the custom disk partitioning by referencing the Ignition config files during the
RHCOS installations.
For an ISO installation, you can add options to the coreos-installer command that cause the installer to
maintain one or more existing partitions. For a PXE installation, you can add coreos.inst.* options to the
APPEND parameter to preserve partitions.
Saved partitions might be data partitions from an existing OpenShift Container Platform system. You
can identify the disk partitions you want to keep either by partition label or by number.
NOTE
If you save existing partitions, and those partitions do not leave enough space for
RHCOS, the installation will fail without damaging the saved partitions.
The following example illustrates running the coreos-installer in a way that preserves the sixth (6)
partition on the disk:
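A representative invocation, where the Ignition URL and device path are placeholders:
$ coreos-installer install --ignition-url http://10.20.30.40/worker.ign \
    --save-partlabel 'data*' \
    --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>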
The following coreos.inst.* options preserve partitions during a PXE installation:
coreos.inst.save_partlabel=data*
coreos.inst.save_partindex=5-
coreos.inst.save_partindex=6
In the previous examples where partition saving is used, coreos-installer recreates the partition immediately.
When doing an RHCOS manual installation, there are two types of Ignition configs that you can provide,
with different reasons for providing each one:
Permanent install Ignition config: Every manual RHCOS installation needs to pass one of the
Ignition config files generated by openshift-installer, such as bootstrap.ign, master.ign and
worker.ign, to carry out the installation.
IMPORTANT
It is not recommended to modify these Ignition config files directly. You can
update the manifest files that are wrapped into the Ignition config files, as
outlined in examples in the preceding sections.
For PXE installations, you pass the Ignition configs on the APPEND line using the
coreos.inst.ignition_url= option. For ISO installations, after the ISO boots to the shell prompt,
you identify the Ignition config on the coreos-installer command line with the --ignition-url=
option. In both cases, only HTTP and HTTPS protocols are supported.
Live install Ignition config: This type can be created by using the coreos-installer customize
subcommand and its various options. With this method, the Ignition config passes to the live
install medium, runs immediately upon booting, and performs setup tasks before or after the
RHCOS system installs to disk. This method should only be used for performing tasks that must
be done once and not applied again later, such as with advanced partitioning that cannot be
done using a machine config.
For PXE or ISO boots, you can create the Ignition config and APPEND the ignition.config.url=
option to identify the location of the Ignition config. You also need to append the ignition.firstboot
and ignition.platform.id=metal kernel arguments, or the ignition.config.url option will be ignored.
Red Hat Enterprise Linux CoreOS (RHCOS) nodes installed from an OpenShift Container Platform 4.17
boot image use a default console that is meant to accommodate most virtualized and bare metal setups.
Different cloud and virtualization platforms may use different default settings depending on the chosen
architecture. Bare metal installations use the kernel default settings, which typically means the graphical
console is the primary console and the serial console is disabled.
The default consoles may not match your specific hardware configuration or you might have specific
needs that require you to adjust the default console. For example:
You want to access the emergency shell on the console for debugging purposes.
Your cloud platform does not provide interactive access to the graphical console, but provides a
serial console.
Console configuration is inherited from the boot image. This means that new nodes in existing clusters
are unaffected by changes to the default console.
You can configure the console for bare metal installations in the ways described in the following sections.
4.11.3.5. Enabling the serial console for PXE and ISO installations
By default, the Red Hat Enterprise Linux CoreOS (RHCOS) serial console is disabled and all output is
written to the graphical console. You can enable the serial console for an ISO installation and
reconfigure the bootloader so that output is sent to both the serial console and the graphical console.
Procedure
1. Boot the ISO installer.
2. Run the coreos-installer command to install the system, adding the --console option once to
specify the graphical console, and a second time to specify the serial console:
$ coreos-installer install \
--console=tty0 \ 1
--console=ttyS0,<options> \ 2
--ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>
1 The desired secondary console. In this case, the graphical console. Omitting this option will
disable the graphical console.
2 The desired primary console. In this case, the serial console. The options field defines the
baud rate and other settings. A common value for this field is 115200n8. If no options are
provided, the default kernel value of 9600n8 is used. For more information on the format
of this option, see Linux kernel serial console documentation.
NOTE
To configure a PXE installation, make sure the coreos.inst.install_dev kernel command line option is
omitted, and use the shell prompt to run coreos-installer manually using the above ISO installation
procedure.
You can use the live ISO image or PXE environment to install RHCOS by injecting an Ignition config file
directly into the image. This creates a customized image that you can use to provision your system.
For an ISO image, the mechanism to do this is the coreos-installer iso customize subcommand, which
modifies the .iso file with your configuration. Similarly, the mechanism for a PXE environment is the
coreos-installer pxe customize subcommand, which creates a new initramfs file that includes your
customizations.
The customize subcommand is a general purpose tool that can embed other types of customizations as
well. The following tasks are examples of some of the more common customizations:
Inject custom CA certificates when corporate security policy requires their use.
You can customize a live RHCOS ISO image directly with the coreos-installer iso customize
subcommand. When you boot the ISO image, the customizations are applied automatically.
You can use this feature to configure the ISO image to automatically install RHCOS.
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS ISO image from the RHCOS image mirror page and the Ignition config file,
and then run the following command to inject the Ignition config directly into the ISO image:
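For example, where the ISO file name and Ignition config path are placeholders:
$ coreos-installer iso customize rhcos-<version>-live.x86_64.iso \
    --dest-ignition bootstrap.ign \ 1
    --dest-device /dev/disk/by-id/scsi-<serial_number> 2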
1 The Ignition config file that is generated from the openshift-installer installation program.
2 When you specify this option, the ISO image automatically runs an installation. Otherwise,
the image remains configured for installation, but does not install automatically unless you
specify the coreos.inst.install_dev kernel argument.
3. Optional: To remove the ISO image customizations and return the image to its pristine state,
run:
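For example, where the ISO file name is a placeholder:
$ coreos-installer iso reset rhcos-<version>-live.x86_64.iso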
You can now re-customize the live ISO image or use it in its pristine state.
4.11.3.7.1. Modifying a live install ISO image to enable the serial console
On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by
default and all output is written to the graphical console. You can enable the serial console with the
following procedure.
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following
command to customize the ISO image to enable the serial console to receive output:
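A representative command, where the ISO file name and Ignition config path are placeholders:
$ coreos-installer iso customize rhcos-<version>-live.x86_64.iso \
    --dest-ignition <path> \ 1
    --dest-console tty0 \ 2
    --dest-console ttyS0,<options> \ 3
    --dest-device /dev/disk/by-id/scsi-<serial_number> 4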
2 The desired secondary console. In this case, the graphical console. Omitting this option will
disable the graphical console.
3 The desired primary console. In this case, the serial console. The options field defines the
baud rate and other settings. A common value for this field is 115200n8. If no options are
provided, the default kernel value of 9600n8 is used. For more information on the format
of this option, see the Linux kernel serial console documentation.
4 The specified disk to install to. If you omit this option, the ISO image automatically runs the
installation program, which will fail unless you also specify the coreos.inst.install_dev kernel argument.
NOTE
The --dest-console option affects the installed system and not the live ISO
system. To modify the console for a live ISO system, use the --live-karg-append
option and specify the console with console=.
Your customizations are applied and affect every subsequent boot of the ISO image.
3. Optional: To remove the ISO image customizations and return the image to its original state,
run the following command:
You can now recustomize the live ISO image or use it in its original state.
4.11.3.7.2. Modifying a live install ISO image to use a custom certificate authority
You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the
customize subcommand. You can use the CA certificates during both the installation boot and when
provisioning the installed system.
NOTE
Custom CA certificates affect how Ignition fetches remote resources but they do not
affect the certificates installed onto the system.
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following
command to customize the ISO image for use with a custom CA:
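For example, where cert.pem is a placeholder for your CA certificate bundle:
$ coreos-installer iso customize rhcos-<version>-live.x86_64.iso \
    --ignition-ca cert.pem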
IMPORTANT
The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag.
You must use the --dest-ignition flag to create a customized image for each cluster.
4.11.3.7.3. Modifying a live install ISO image with customized network settings
You can embed a NetworkManager keyfile into the live ISO image and pass it through to the installed
system with the --network-keyfile flag of the customize subcommand.
WARNING
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Create a connection profile for a bonded interface. For example, create the
bond0.nmconnection file in your local directory with the following content:
[connection]
id=bond0
type=bond
interface-name=bond0
multi-connect=1
[bond]
miimon=100
mode=active-backup
[ipv4]
method=auto
[ipv6]
method=auto
3. Create a connection profile for a secondary interface to add to the bond. For example, create
the bond0-proxy-em1.nmconnection file in your local directory with the following content:
[connection]
id=em1
type=ethernet
interface-name=em1
master=bond0
multi-connect=1
slave-type=bond
4. Create a connection profile for a secondary interface to add to the bond. For example, create
the bond0-proxy-em2.nmconnection file in your local directory with the following content:
[connection]
id=em2
type=ethernet
interface-name=em2
master=bond0
multi-connect=1
slave-type=bond
5. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following
command to customize the ISO image with your configured networking:
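For example, passing the keyfiles created in the previous steps (the ISO file name is a placeholder):
$ coreos-installer iso customize rhcos-<version>-live.x86_64.iso \
    --network-keyfile bond0.nmconnection \
    --network-keyfile bond0-proxy-em1.nmconnection \
    --network-keyfile bond0-proxy-em2.nmconnection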
Network settings are applied to the live system and are carried over to the destination system.
4.11.3.7.4. Customizing a live install ISO image for an iSCSI boot device
You can set the iSCSI target and initiator values for automatic mounting, booting and configuration
using a customized version of the live RHCOS image.
Prerequisites
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following
command to customize the ISO image with the following information:
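A sketch of such a command, where mount-iscsi.sh and unmount-iscsi.sh are hypothetical script names and the output ISO name is a placeholder:
$ coreos-installer iso customize \
    --pre-install mount-iscsi.sh \ 1
    --post-install unmount-iscsi.sh \ 2
    --dest-device /dev/disk/by-path/ip-<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \ 3
    --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \ 5
    --dest-karg-append netroot=<target_iqn> \
    -o custom.iso rhcos-<version>-live.x86_64.iso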
1 The script that gets run before installation. It should contain the iscsiadm commands for
mounting the iSCSI target and any commands enabling multipathing.
2 The script that gets run after installation. It should contain the command iscsiadm --mode
node --logout=all.
3 The location of the destination system. You must provide the IP address of the target
portal, the associated port number, the target iSCSI node in IQN format, and the iSCSI
logical unit number (LUN).
5 The iSCSI initiator, or client, name in IQN format. The initiator forms a session to connect
to the iSCSI target.
For more information about the iSCSI options supported by dracut, see the dracut.cmdline
manual page.
4.11.3.7.5. Customizing a live install ISO image for an iSCSI boot device with iBFT
You can set the iSCSI target and initiator values for automatic mounting, booting and configuration
using a customized version of the live RHCOS image.
Prerequisites
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following
command to customize the ISO image with the following information:
1 The script that gets run before installation. It should contain the iscsiadm commands for
mounting the iSCSI target and any commands enabling multipathing.
2 The script that gets run after installation. It should contain the command iscsiadm --mode
node --logout=all.
3 The path to the device. If you are using multipath, use the multipath device,
/dev/mapper/mpatha. If there are multiple multipath devices connected, or to be explicit,
you can use the World Wide Name (WWN) symlink available in /dev/disk/by-path.
For more information about the iSCSI options supported by dracut, see the dracut.cmdline
manual page.
You can customize a live RHCOS PXE environment directly with the coreos-installer pxe customize
subcommand. When you boot the PXE environment, the customizations are applied automatically.
You can use this feature to configure the PXE environment to automatically install RHCOS.
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS kernel, initramfs and rootfs files from the RHCOS image mirror page and
the Ignition config file, and then run the following command to create a new initramfs file that
contains the customizations from your Ignition config:
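For example, where the initramfs and Ignition config file names are placeholders:
$ coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \
    --dest-ignition bootstrap.ign \ 1
    --dest-device /dev/disk/by-id/scsi-<serial_number> \ 2
    -o rhcos-<version>-custom-initramfs.x86_64.img 3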
2 When you specify this option, the PXE environment automatically runs an install.
Otherwise, the image remains configured for installing, but does not do so automatically
unless you specify the coreos.inst.install_dev kernel argument.
3 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot
and ignition.platform.id=metal kernel arguments if they are not already present.
4.11.3.8.1. Modifying a live install PXE environment to enable the serial console
On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by
default and all output is written to the graphical console. You can enable the serial console with the
following procedure.
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS kernel, initramfs and rootfs files from the RHCOS image mirror page and
the Ignition config file, and then run the following command to create a new customized
initramfs file that enables the serial console to receive output:
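A representative command, where the initramfs and Ignition config file names are placeholders:
$ coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \
    --dest-ignition bootstrap.ign \ 1
    --dest-console tty0 \ 2
    --dest-console ttyS0,<options> \ 3
    --dest-device /dev/disk/by-id/scsi-<serial_number> \ 4
    -o rhcos-<version>-custom-initramfs.x86_64.img 5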
2 The desired secondary console. In this case, the graphical console. Omitting this option will
disable the graphical console.
3 The desired primary console. In this case, the serial console. The options field defines the
baud rate and other settings. A common value for this field is 115200n8. If no options are
provided, the default kernel value of 9600n8 is used. For more information on the format
of this option, see the Linux kernel serial console documentation.
4 The specified disk to install to. If you omit this option, the PXE environment automatically
runs the installer which will fail unless you also specify the coreos.inst.install_dev kernel
argument.
5 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot
and ignition.platform.id=metal kernel arguments if they are not already present.
Your customizations are applied and affect every subsequent boot of the PXE environment.
4.11.3.8.2. Modifying a live install PXE environment to use a custom certificate authority
You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the
customize subcommand. You can use the CA certificates during both the installation boot and when
provisioning the installed system.
NOTE
Custom CA certificates affect how Ignition fetches remote resources but they do not
affect the certificates installed onto the system.
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS kernel, initramfs and rootfs files from the RHCOS image mirror page and
run the following command to create a new customized initramfs file for use with a custom CA:
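For example, where cert.pem is a placeholder for your CA certificate bundle:
$ coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \
    --ignition-ca cert.pem \
    -o rhcos-<version>-custom-initramfs.x86_64.img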
3. Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and
ignition.platform.id=metal kernel arguments if they are not already present.
IMPORTANT
The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag.
You must use the --dest-ignition flag to create a customized image for each cluster.
4.11.3.8.3. Modifying a live install PXE environment with customized network settings
You can embed a NetworkManager keyfile into the live PXE environment and pass it through to the
installed system with the --network-keyfile flag of the customize subcommand.
WARNING
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Create a connection profile for a bonded interface. For example, create the
bond0.nmconnection file in your local directory with the following content:
[connection]
id=bond0
type=bond
interface-name=bond0
multi-connect=1
[bond]
miimon=100
mode=active-backup
[ipv4]
method=auto
[ipv6]
method=auto
3. Create a connection profile for a secondary interface to add to the bond. For example, create
the bond0-proxy-em1.nmconnection file in your local directory with the following content:
[connection]
id=em1
type=ethernet
interface-name=em1
master=bond0
multi-connect=1
slave-type=bond
4. Create a connection profile for a secondary interface to add to the bond. For example, create
the bond0-proxy-em2.nmconnection file in your local directory with the following content:
[connection]
id=em2
type=ethernet
interface-name=em2
master=bond0
multi-connect=1
slave-type=bond
5. Retrieve the RHCOS kernel, initramfs and rootfs files from the RHCOS image mirror page and
run the following command to create a new customized initramfs file that contains your
configured networking:
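For example, passing the keyfiles created in the previous steps (the initramfs file names are placeholders):
$ coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \
    --network-keyfile bond0.nmconnection \
    --network-keyfile bond0-proxy-em1.nmconnection \
    --network-keyfile bond0-proxy-em2.nmconnection \
    -o rhcos-<version>-custom-initramfs.x86_64.img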
6. Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and
ignition.platform.id=metal kernel arguments if they are not already present.
Network settings are applied to the live system and are carried over to the destination system.
4.11.3.8.4. Customizing a live install PXE environment for an iSCSI boot device
You can set the iSCSI target and initiator values for automatic mounting, booting and configuration
using a customized version of the live RHCOS image.
Prerequisites
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS kernel, initramfs and rootfs files from the RHCOS image mirror page and
run the following command to create a new customized initramfs file with the following
information:
1 The script that gets run before installation. It should contain the iscsiadm commands for
mounting the iSCSI target and any commands enabling multipathing.
2 The script that gets run after installation. It should contain the command iscsiadm --mode
node --logout=all.
3 The location of the destination system. You must provide the IP address of the target
portal, the associated port number, the target iSCSI node in IQN format, and the iSCSI
logical unit number (LUN).
5 The iSCSI initiator, or client, name in IQN format. The initiator forms a session to connect
to the iSCSI target.
For more information about the iSCSI options supported by dracut, see the dracut.cmdline
manual page.
4.11.3.8.5. Customizing a live install PXE environment for an iSCSI boot device with iBFT
You can set the iSCSI target and initiator values for automatic mounting, booting and configuration
using a customized version of the live RHCOS image.
Prerequisites
Procedure
1. Download the coreos-installer binary from the coreos-installer image mirror page.
2. Retrieve the RHCOS kernel, initramfs and rootfs files from the RHCOS image mirror page and
run the following command to create a new customized initramfs file with the following
information:
1 The script that gets run before installation. It should contain the iscsiadm commands for
mounting the iSCSI target.
2 The script that gets run after installation. It should contain the command iscsiadm --mode
node --logout=all.
3 The path to the device. If you are using multipath, use the multipath device,
/dev/mapper/mpatha. If there are multiple multipath devices connected, or to be explicit,
you can use the World Wide Name (WWN) symlink available in /dev/disk/by-path.
For more information about the iSCSI options supported by dracut, see the dracut.cmdline
manual page.
This section illustrates the networking configuration and other advanced options that allow you to
modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables
describe the kernel arguments and command-line options you can use with the RHCOS live installer and
the coreos-installer command.
If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the
image to configure networking for a node. If no networking arguments are specified, DHCP is activated
in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file.
IMPORTANT
When adding networking arguments manually, you must also add the rd.neednet=1
kernel argument to bring the network up in the initramfs.
The following information provides examples for configuring networking and bonding on your RHCOS
nodes for ISO installations. The examples describe how to use the ip=, nameserver=, and bond= kernel
arguments.
NOTE
Ordering is important when adding the kernel arguments: ip=, nameserver=, and then
bond=.
The networking options are passed to the dracut tool during system boot. For more information about
the networking options supported by dracut, see the dracut.cmdline manual page.
The following examples are the networking options for ISO installation.
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none
nameserver=4.4.4.41
NOTE
When you use DHCP to configure IP addressing for the RHCOS machines, the machines
also obtain the DNS server information through DHCP. For DHCP-based deployments,
you can define the DNS server address that is used by the RHCOS nodes through your
DHCP server configuration.
ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none
nameserver=4.4.4.41
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none
ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none
NOTE
When you configure one or multiple networks, one default gateway is required. If the
additional network gateway is different from the primary network gateway, the default
gateway must be the primary network gateway.
ip=::10.10.10.254::::
Enter the following command to configure the route for the additional network:
rd.route=20.20.20.0/24:20.20.20.254:enp2s0
You can disable DHCP on a single interface, such as when there are two or more network interfaces and
only one interface is being used. In the example, the enp1s0 interface has a static networking
configuration and DHCP is disabled for enp2s0, which is not used:
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none
ip=::::core0.example.com:enp2s0:none
ip=enp1s0:dhcp
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none
To configure a VLAN on a network interface and use a static IP address, run the following
command:
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none
vlan=enp2s0.100:enp2s0
To configure a VLAN on a network interface and to use DHCP, run the following command:
ip=enp2s0.100:dhcp
vlan=enp2s0.100:enp2s0
nameserver=1.1.1.1
nameserver=8.8.8.8
When you create a bonded interface using bond=, you must specify how the IP address is
assigned and other information for the bonded interface.
To configure the bonded interface to use DHCP, set the bond’s IP address to dhcp. For
example:
bond=bond0:em1,em2:mode=active-backup
ip=bond0:dhcp
To configure the bonded interface to use a static IP address, enter the specific IP address
you want and related information. For example:
bond=bond0:em1,em2:mode=active-backup
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none
1. Create the SR-IOV virtual functions (VFs) following the guidance in Managing SR-IOV devices.
Follow the procedure in the "Attaching SR-IOV networking devices to virtual machines" section.
2. Create the bond, attach the desired VFs to the bond and set the bond link state up following
the guidance in Configuring network bonding. Follow any of the described procedures to create
the bond.
When you create a bonded interface using bond=, you must specify how the IP address is
assigned and other information for the bonded interface.
To configure the bonded interface to use DHCP, set the bond’s IP address to dhcp. For
example:
bond=bond0:eno1f0,eno2f0:mode=active-backup
ip=bond0:dhcp
To configure the bonded interface to use a static IP address, enter the specific IP address
you want and related information. For example:
bond=bond0:eno1f0,eno2f0:mode=active-backup
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none
NOTE
team=team0:em1,em2
ip=team0:dhcp
You can install RHCOS by running coreos-installer install <options> <device> at the command
prompt, after booting into the RHCOS live environment from an ISO image.
The following table shows the subcommands, options, and arguments you can pass to the coreos-
installer command.
Subcommand Description
Option Description
-f, --image-file <path> Specify a local image file manually. Used for
debugging.
-p, --platform <name> Override the Ignition platform ID for the installed
system.
--console <spec> Set the kernel and bootloader console for the
installed system. For more information about the
format of <spec>, see the Linux kernel serial
console documentation.
IMPORTANT
Argument Description
Subcommand Description
coreos-installer iso reset <options> <ISO_image>
Restore a RHCOS live ISO image to default settings.
coreos-installer iso ignition remove <options> <ISO_image>
Remove the embedded Ignition config from an ISO image.
Option Description
--dest-ignition <path> Merge the specified Ignition config file into a new
configuration fragment for the destination system.
--dest-console <spec> Specify the kernel and bootloader console for the
destination system.
--live-ignition <path> Merge the specified Ignition config file into a new
configuration fragment for the live environment.
--live-karg-delete <arg> Delete a kernel argument from each boot of the live
environment.
Subcommand Description
Note that not all of these options are accepted by all subcommands.
coreos-installer pxe customize <options> <path>
Customize a RHCOS live PXE boot config.
coreos-installer pxe ignition unwrap <options> <image_name>
Show the wrapped Ignition config in an image.
Option Description
Note that not all of these options are accepted by all subcommands.
--dest-ignition <path> Merge the specified Ignition config file into a new
configuration fragment for the destination system.
--dest-console <spec> Specify the kernel and bootloader console for the
destination system.
--live-ignition <path> Merge the specified Ignition config file into a new
configuration fragment for the live environment.
NOTE
You can automatically invoke coreos-installer options at boot time by passing coreos.inst boot
arguments to the RHCOS live installer. These are provided in addition to the standard boot arguments.
For ISO installations, the coreos.inst options can be added by interrupting the automatic boot
at the bootloader menu. You can interrupt the automatic boot by pressing TAB while the RHEL
CoreOS (Live) menu option is highlighted.
For PXE or iPXE installations, the coreos.inst options must be added to the APPEND line
before the RHCOS live installer is booted.
The following table shows the RHCOS live installer coreos.inst boot options for ISO and PXE
installations.
Argument Description
ignition.config.url Optional: The URL of the Ignition config for the live
boot. For example, this can be used to customize
how coreos-installer is invoked, or to run code
before or after the installation. This is different from
coreos.inst.ignition_url, which is the Ignition
config for the installed system.
You can enable multipathing at installation time for nodes that were provisioned in OpenShift Container
Platform 4.8 or later. While postinstallation support is available by activating multipathing via the
machine config, enabling multipathing during installation is recommended.
In setups where any I/O to non-optimized paths results in I/O system errors, you must enable
multipathing at installation time.
IMPORTANT
On IBM Z® and IBM® LinuxONE, you can enable multipathing only if you configured your
cluster for it during installation. For more information, see "Installing RHCOS and starting
the OpenShift Container Platform bootstrap process" in Installing a cluster with z/VM on
IBM Z® and IBM® LinuxONE.
The following procedure enables multipath at installation time and appends kernel arguments to the
coreos-installer install command so that the installed system itself will use multipath beginning from
the first boot.
NOTE
OpenShift Container Platform does not support enabling multipathing as a day-2 activity
on nodes that have been upgraded from 4.6 or earlier.
Prerequisites
You have created the Ignition config files for your cluster.
You have reviewed Installing RHCOS and starting the OpenShift Container Platform bootstrap
process.
Procedure
1. To enable multipath and start the multipathd daemon, run the following command on the
installation host:
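A typical command, assuming the device-mapper multipath tools are available on the host, is:
$ mpathconf --enable && systemctl start multipathd.service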
Optional: If you are booting from PXE or an ISO, you can instead enable multipath by adding
rd.multipath=default to the kernel command line.
If there is only one multipath device connected to the machine, it should be available at path
/dev/mapper/mpatha. For example:
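A sketch of the installation command in that case, where the Ignition URL is a placeholder and the appended kernel arguments make the installed system boot from the multipathed root device:
$ coreos-installer install /dev/mapper/mpatha \
    --ignition-url=http://host/worker.ign \
    --append-karg rd.multipath=default \
    --append-karg root=/dev/disk/by-label/dm-mpath-root \
    --append-karg rw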
If there are multiple multipath devices connected to the machine, or to be more explicit,
instead of using /dev/mapper/mpatha, it is recommended to use the World Wide Name
(WWN) symlink available in /dev/disk/by-id. For example:
This symlink can also be used as the coreos.inst.install_dev kernel argument when using
special coreos.inst.* arguments to direct the live installer. For more information, see
"Installing RHCOS and starting the OpenShift Container Platform bootstrap process".
4. Check that the kernel arguments worked by going to one of the worker nodes and listing the
kernel command line arguments (in /proc/cmdline on the host):
$ oc debug node/ip-10-0-141-105.ec2.internal
Example output
sh-4.2# exit
RHCOS also supports multipathing on a secondary disk. Instead of kernel arguments, you use Ignition to
enable multipathing for the secondary disk at installation time.
Prerequisites
Procedure
Example multipath-config.bu
variant: openshift
version: 4.17.0
systemd:
  units:
    - name: mpath-configure.service
      enabled: true
      contents: |
        [Unit]
        Description=Configure Multipath on Secondary Disk
        ConditionFirstBoot=true
        ConditionPathExists=!/etc/multipath.conf
        Before=multipathd.service 1
        DefaultDependencies=no
        [Service]
        Type=oneshot
        ExecStart=/usr/sbin/mpathconf --enable 2
        [Install]
        WantedBy=multi-user.target
    - name: mpath-var-lib-containers.service
      enabled: true
      contents: |
        [Unit]
        Description=Set Up Multipath On /var/lib/containers
        ConditionFirstBoot=true 3
        Requires=dev-mapper-mpatha.device
        After=dev-mapper-mpatha.device
        After=ostree-remount.service
        Before=kubelet.service
        DefaultDependencies=no
        [Service] 4
        Type=oneshot
        ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha
        ExecStart=/usr/bin/mkdir -p /var/lib/containers
        [Install]
        WantedBy=multi-user.target
    - name: var-lib-containers.mount
      enabled: true
      contents: |
        [Unit]
        Description=Mount /var/lib/containers
        After=mpath-var-lib-containers.service
        Before=kubelet.service 5
        [Mount] 6
        What=/dev/disk/by-label/dm-mpath-containers
        Where=/var/lib/containers
        Type=xfs
        [Install]
        WantedBy=multi-user.target
6 Mounts the device to the /var/lib/containers mount point. This location cannot be a
symlink.
3. Continue with the rest of the first boot RHCOS installation process.
IMPORTANT
Prerequisites
2. You have an iSCSI target that you want to install RHCOS on.
Procedure
1. Mount the iSCSI target from the live environment by running the following command:
$ iscsiadm \
--mode discovery \
--type sendtargets \
--portal <IP_address> \ 1
--login
2. Install RHCOS onto the iSCSI target by running the following command and using the necessary
kernel arguments, for example:
$ coreos-installer install \
/dev/disk/by-path/ip-<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \ 1
--append-karg rd.iscsi.initiator=<initiator_iqn> \ 2
--append-karg netroot=<target_iqn> \ 3
--console ttyS0,115200n8 \
--ignition-file <path_to_file>
1 The location you are installing to. You must provide the IP address of the target portal, the
associated port number, the target iSCSI node in IQN format, and the iSCSI logical unit
number (LUN).
2 The iSCSI initiator, or client, name in IQN format. The initiator forms a session to connect
to the iSCSI target.
For more information about the iSCSI options supported by dracut, see the dracut.cmdline
manual page.
This procedure can also be performed using the coreos-installer iso customize or coreos-installer
pxe customize subcommands.
Prerequisites
Procedure
1. Mount the iSCSI target from the live environment by running the following command:
$ iscsiadm \
--mode discovery \
--type sendtargets \
--portal <IP_address> \ 1
--login
2. Optional: enable multipathing and start the daemon with the following command:
3. Install RHCOS onto the iSCSI target by running the following command and using the necessary
kernel arguments, for example:
$ coreos-installer install \
/dev/mapper/mpatha \ 1
--append-karg rd.iscsi.firmware=1 \ 2
--append-karg rd.multipath=default \ 3
--console ttyS0 \
--ignition-file <path_to_file>
1 The path of a single multipathed device. If there are multiple multipath devices connected,
or to be explicit, you can use the World Wide Name (WWN) symlink available in
/dev/disk/by-path.
For more information about the iSCSI options supported by dracut, see the dracut.cmdline
manual page.
This procedure can also be performed using the coreos-installer iso customize or coreos-installer
pxe customize subcommands.
Prerequisites
You have created the Ignition config files for your cluster.
You have configured suitable network, DNS and load balancing infrastructure.
You have obtained the installation program and generated the Ignition config files for your
cluster.
You installed RHCOS on your cluster machines and provided the Ignition config files that the
OpenShift Container Platform installation program generated.
Procedure
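For example, assuming the installation program binary is in your current working directory:
$ ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1
    --log-level=info 2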
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
2 To view different installation details, specify warn, debug, or error instead of info.
Example output
The command succeeds when the Kubernetes API server signals that it has been bootstrapped
on the control plane machines.
2. After the bootstrap process is complete, remove the bootstrap machine from the load balancer.
IMPORTANT
You must remove the bootstrap machine from the load balancer at this point.
You can also remove or reformat the bootstrap machine itself.
Additional resources
See Monitoring installation progress for more information about monitoring the installation logs
and retrieving diagnostic data if installation issues arise.
Prerequisites
Procedure
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
2. Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
Prerequisites
Procedure
$ oc get nodes
Example output
NOTE
The preceding output might not include the compute nodes, also known as
worker nodes, until some CSRs are approved.
2. Review the pending CSRs and ensure that you see the client requests with the Pending or
Approved status for each machine that you added to the cluster:
$ oc get csr
Example output
In this example, two machines are joining the cluster. You might see more approved CSRs in the
list.
3. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in
Pending status, approve the CSRs for your cluster machines:
NOTE
Because the CSRs rotate automatically, approve your CSRs within an hour of
adding the machines to the cluster. If you do not approve them within an hour, the
certificates will rotate, and more than two certificates will be present for each
node. You must approve all of these certificates. After the client CSR is
approved, the Kubelet creates a secondary CSR for the serving certificate, which
requires manual approval. Then, subsequent serving certificate renewal requests
are automatically approved by the machine-approver if the Kubelet requests a
new certificate with identical parameters.
NOTE
For clusters running on platforms that are not machine API enabled, such as bare
metal and other user-provisioned infrastructure, you must implement a method
of automatically approving the kubelet serving certificate requests (CSRs). If a
request is not approved, then the oc exec, oc rsh, and oc logs commands
cannot succeed, because a serving certificate is required when the API server
connects to the kubelet. Any operation that contacts the Kubelet endpoint
requires this certificate approval to be in place. The method must watch for new
CSRs, confirm that the CSR was submitted by the node-bootstrapper service
account in the system:node or system:admin groups, and confirm the identity
of the node.
To approve them individually, run the following command for each valid CSR:
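For example, where <csr_name> is the name of a CSR from the oc get csr output:
$ oc adm certificate approve <csr_name>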
NOTE
Some Operators might not become available until some CSRs are approved.
4. Now that your client requests are approved, you must review the server requests for each
machine that you added to the cluster:
$ oc get csr
Example output
5. If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for
your cluster machines:
To approve them individually, run the following command for each valid CSR:
6. After all client and server CSRs have been approved, the machines have the Ready status.
Verify this by running the following command:
$ oc get nodes
Example output
NOTE
It can take a few minutes after approval of the server CSRs for the machines to
transition to the Ready status.
Additional information
After the control plane initializes, you must immediately configure some Operators so that they all
become available.
Prerequisites
Procedure
Example output
Additional resources
See Gathering logs from a failed installation for details about gathering data in the event of a
failed OpenShift Container Platform installation.
See Troubleshooting Operator issues for steps to check Operator pod health across the cluster
and gather Operator logs for diagnosis.
Procedure
Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the
OperatorHub object:
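For example, assuming the default OperatorHub object named cluster:
$ oc patch OperatorHub cluster --type json \
    -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'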
TIP
Alternatively, you can use the web console to manage catalog sources. From the Administration →
Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create,
update, delete, disable, and enable individual sources.
Instructions are shown for configuring a persistent volume, which is required for production clusters.
Where applicable, instructions are shown for configuring an empty directory as the storage location,
which is available for only non-production clusters.
Additional instructions are provided for allowing the image registry to use block storage types by using
the Recreate rollout strategy during upgrades.
To start the image registry, you must change the Image Registry Operator configuration’s
managementState from Removed to Managed.
Procedure
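For example, you can patch the default cluster configuration object:
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
    --patch '{"spec":{"managementState":"Managed"}}'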
4.15.2.2. Configuring registry storage for bare metal and other manual installations
As a cluster administrator, following installation, you must configure your registry to use storage.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
You have a cluster that uses manually-provisioned Red Hat Enterprise Linux CoreOS (RHCOS)
nodes, such as bare metal.
You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data
Foundation.
IMPORTANT
Procedure
NOTE
When you use shared storage, review your security settings to prevent outside
access.
Example output
NOTE
If you do have a registry pod in your output, you do not need to continue with this
procedure.
$ oc edit configs.imageregistry.operator.openshift.io
Example output
storage:
  pvc:
    claim:
Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC.
Example output
5. Ensure that your registry is set to managed to enable building and pushing of images.
Run:
$ oc edit configs.imageregistry/cluster
Then, change managementState: Removed to managementState: Managed.
You must configure storage for the Image Registry Operator. For non-production clusters, you can set
the image registry to an empty directory. If you do so, all images are lost if you restart the registry.
Procedure
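For example, the following patch sets the registry storage to an empty directory on the default cluster configuration object:
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
    --patch '{"spec":{"storage":{"emptyDir":{}}}}'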
WARNING
If you run this command before the Image Registry Operator initializes its components, the oc
patch command fails with the following error:
To allow the image registry to use block storage types during upgrades as a cluster administrator, you
can use the Recreate rollout strategy.
IMPORTANT
Block storage volumes, or block persistent volumes, are supported but not recommended
for use with the image registry on production clusters. An installation where the registry is
configured on block storage is not highly available because the registry cannot have more
than one replica.
If you choose to use a block storage volume with the image registry, you must use a
filesystem persistent volume claim (PVC).
Procedure
1. Enter the following command to set the image registry storage as a block storage type, patch
the registry so that it uses the Recreate rollout strategy, and runs with only one (1) replica:
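A representative patch against the default cluster configuration object is:
$ oc patch configs.imageregistry.operator.openshift.io/cluster --type=merge \
    -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}'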
2. Provision the PV for the block storage device, and create a PVC for that volume. The requested
block volume uses the ReadWriteOnce (RWO) access mode.
a. Create a pvc.yaml file with the following contents to define a VMware vSphere
PersistentVolumeClaim object:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: image-registry-storage 1
  namespace: openshift-image-registry 2
spec:
  accessModes:
  - ReadWriteOnce 3
  resources:
    requests:
      storage: 100Gi 4
3
The access mode of the persistent volume claim. With ReadWriteOnce, the volume
can be mounted with read and write permissions by a single node.
b. Enter the following command to create the PersistentVolumeClaim object from the file:
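For example:
$ oc create -f pvc.yaml -n openshift-image-registry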
3. Enter the following command to edit the registry configuration so that it references the correct
PVC:
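For example, assuming the default cluster configuration object:
$ oc edit configs.imageregistry.operator.openshift.io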
Example output
storage:
  pvc:
    claim: 1
1 By creating a custom PVC, you can leave the claim field blank for the default automatic
creation of an image-registry-storage PVC.
Prerequisites
Procedure
1. Confirm that all the cluster components are online with the following command:
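For example:
$ watch -n5 oc get clusteroperators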
Example output
Alternatively, the following command notifies you when the cluster is available. It also
retrieves and displays credentials:
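For example, assuming the installation program binary is in your current working directory:
$ ./openshift-install --dir <installation_directory> wait-for install-complete 1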
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
Example output
The command succeeds when the Cluster Version Operator finishes deploying the OpenShift
Container Platform cluster from the Kubernetes API server.
IMPORTANT
The Ignition config files that the installation program generates contain
certificates that expire after 24 hours, which are then renewed at that time. If
the cluster is shut down before renewing the certificates and the cluster is
later restarted after the 24 hours have elapsed, the cluster automatically
recovers the expired certificates. The exception is that you must manually
approve the pending node-bootstrapper certificate signing requests (CSRs)
to recover kubelet certificates. See the documentation for Recovering from
expired control plane certificates for more information.
It is recommended that you use Ignition config files within 12 hours after they
are generated because the 24-hour certificate rotates from 16 to 22 hours
after the cluster is installed. By using the Ignition config files within 12 hours,
you can avoid installation failure if the certificate update runs during
installation.
2. Confirm that the Kubernetes API server is communicating with the pods.
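a. For example, you can view a list of pods across all namespaces by running the following command:
$ oc get pods --all-namespaces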
Example output
b. View the logs for a pod that is listed in the output of the previous command by using the
following command:
1 Specify the pod name and namespace, as shown in the output of the previous
command.
If the pod logs display, the Kubernetes API server can communicate with the cluster
machines.
3. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable
multipathing. Do not enable multipathing during installation.
See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine
configuration tasks documentation for more information.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained
automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to
track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
Additional resources
See About remote health monitoring for more information about the Telemetry service
Configure image streams for the Cluster Samples Operator and the must-gather tool.
If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by
configuring additional trust stores.
Consequently, you can use only bare-metal host drivers that support virtual media
network booting, for example redfish-virtualmedia and idrac-virtualmedia.
You cannot scale MachineSet objects in user-provisioned infrastructure clusters by using the
BMO.
Prerequisites
Procedure
apiVersion: metal3.io/v1alpha1
kind: Provisioning
metadata:
  name: provisioning-configuration
spec:
  provisioningNetwork: "Disabled"
  watchAllNamespaces: false
NOTE
$ oc create -f provisioning.yaml
Example output
provisioning.metal3.io/provisioning-configuration created
Verification
Verify that the provisioning service is running by running the following command:
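For example, you can check that the metal3 pod is running in the openshift-machine-api namespace; the exact pod name varies by cluster:
$ oc get pods -n openshift-machine-api | grep metal3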
Example output
NOTE
Provisioning bare-metal hosts to the cluster by using the BMO sets the
spec.externallyProvisioned specification in the BareMetalHost custom resource to
false by default. Do not set the spec.externallyProvisioned specification to true,
because this setting results in unexpected behavior.
Prerequisites
Procedure
1. Create a configuration file for the bare-metal node. Depending on whether you use a static
configuration or a DHCP server, choose one of the following example bmh.yaml files and
configure it to your needs by replacing values in the YAML to match your environment:
---
apiVersion: v1
kind: Secret
metadata:
  name: openshift-worker-<num>-network-config-secret 1
  namespace: openshift-machine-api
type: Opaque
stringData:
  nmstate: | 2
    interfaces: 3
    - name: <nic1_name> 4
      type: ethernet
      state: up
      ipv4:
        address:
        - ip: <ip_address> 5
          prefix-length: 24
        enabled: true
    dns-resolver:
      config:
        server:
        - <dns_ip_address> 6
    routes:
      config:
      - destination: 0.0.0.0/0
        next-hop-address: <next_hop_ip_address> 7
        next-hop-interface: <next_hop_nic1_name> 8
---
apiVersion: v1
kind: Secret
metadata:
  name: openshift-worker-<num>-bmc-secret
  namespace: openshift-machine-api
type: Opaque
data:
  username: <base64_of_uid> 9
  password: <base64_of_pwd>
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: openshift-worker-<num>
  namespace: openshift-machine-api
spec:
  online: true
  bootMACAddress: <nic1_mac_address> 10
  bmc:
    address: <protocol>://<bmc_url> 11
    credentialsName: openshift-worker-<num>-bmc-secret
    disableCertificateVerification: false
  customDeploy:
    method: install_coreos
  userData:
    name: worker-user-data-managed
    namespace: openshift-machine-api
  rootDeviceHints:
    deviceName: <root_device_hint> 12
  preprovisioningNetworkDataName: openshift-worker-<num>-network-config-secret
1 Replace all instances of <num> with a unique compute node number for the bare-
metal nodes in the name, credentialsName, and preprovisioningNetworkDataName
fields.
2 Add the NMState YAML syntax to configure the host interfaces. To configure the
network interface for a newly created node, specify the name of the secret that has
the network configuration. Follow the nmstate syntax to define the network
configuration for your node. See "Preparing the bare-metal node" for details on
configuring NMState syntax.
3 Optional: If you have configured the network interface with nmstate, and you want to
disable an interface, set state: up with the IP addresses set to enabled: false.
4 Replace <nic1_name> with the name of the bare-metal node’s first network interface
controller (NIC).
9 Replace <base64_of_uid> and <base64_of_pwd> with the base64 string of the user
name and password.
10 Replace <nic1_mac_address> with the MAC address of the bare-metal node’s first
NIC. See the "BMC addressing" section for additional BMC configuration options.
11 Replace <protocol> with the BMC protocol, such as IPMI, Redfish, or others. Replace
<bmc_url> with the URL of the bare-metal node’s baseboard management controller.
When configuring the network interface with a static configuration by using nmstate, set
state: up with the IP addresses set to enabled: false:
---
apiVersion: v1
kind: Secret
metadata:
  name: openshift-worker-<num>-network-config-secret
  namespace: openshift-machine-api
# ...
interfaces:
- name: <nic_name>
  type: ethernet
  state: up
  ipv4:
    enabled: false
  ipv6:
    enabled: false
# ...
---
apiVersion: v1
kind: Secret
metadata:
  name: openshift-worker-<num>-bmc-secret 1
  namespace: openshift-machine-api
type: Opaque
data:
  username: <base64_of_uid> 2
  password: <base64_of_pwd>
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: openshift-worker-<num>
  namespace: openshift-machine-api
spec:
  online: true
  bootMACAddress: <nic1_mac_address> 3
  bmc:
    address: <protocol>://<bmc_url> 4
    credentialsName: openshift-worker-<num>-bmc
    disableCertificateVerification: false
  customDeploy:
    method: install_coreos
  userData:
    name: worker-user-data-managed
    namespace: openshift-machine-api
  rootDeviceHints:
    deviceName: <root_device_hint> 5
1 Replace <num> with a unique compute node number for the bare-metal nodes in the
name and credentialsName fields.
2 Replace <base64_of_uid> and <base64_of_pwd> with the base64 string of the user
name and password.
3 Replace <nic1_mac_address> with the MAC address of the bare-metal node’s first
NIC. See the "BMC addressing" section for additional BMC configuration options.
4 Replace <protocol> with the BMC protocol, such as IPMI, Redfish, or others. Replace
<bmc_url> with the URL of the bare-metal node’s baseboard management controller.
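As an illustration of callout 4, a Redfish virtual media BMC address might look like the following sketch. The IP address and Redfish system path are hypothetical; see the "BMC addressing" section for the address formats that your hardware supports:
bmc:
  address: redfish-virtualmedia://192.168.10.50/redfish/v1/Systems/1
  credentialsName: openshift-worker-<num>-bmc-secret
  disableCertificateVerification: false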
2. Create the bare-metal node by running the following command:
$ oc create -f bmh.yaml
Example output
secret/openshift-worker-<num>-network-config-secret created
secret/openshift-worker-<num>-bmc-secret created
baremetalhost.metal3.io/openshift-worker-<num> created
where:
<num>
Specifies the compute node number.
Check for pending certificate signing requests (CSRs) by running the following command:
$ oc get csr
Approve each pending CSR by running the following command:
$ oc adm certificate approve <csr_name>
Example output
certificatesigningrequest.certificates.k8s.io/<csr_name> approved
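If several CSRs are pending, you can approve them in one pass instead of approving each by name. This is a convenience sketch rather than a required step:
$ oc get csr -o name | xargs oc adm certificate approve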
Verification
Verify that the node joined the cluster and is ready by running the following command:
$ oc get nodes
Example output
Additional resources
Diagnosing a duplicate MAC address when provisioning a new host in the cluster
IMPORTANT
To manage existing hosts by using the BMO, you must set the
spec.externallyProvisioned specification in the BareMetalHost custom resource to
true to prevent the BMO from re-provisioning the host.
Prerequisites
Procedure
1. Create a configuration file, such as controller.yaml, that defines a Secret with the BMC
credentials and a BareMetalHost custom resource for the existing host:
---
apiVersion: v1
kind: Secret
metadata:
  name: controller1-bmc
  namespace: openshift-machine-api
type: Opaque
data:
  username: <base64_of_uid>
  password: <base64_of_pwd>
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: controller1
  namespace: openshift-machine-api
spec:
  bmc:
    address: <protocol>://<bmc_url> 1
    credentialsName: "controller1-bmc"
  bootMACAddress: <nic1_mac_address>
  customDeploy:
    method: install_coreos
  externallyProvisioned: true 2
  online: true
  userData:
    name: controller-user-data-managed
    namespace: openshift-machine-api
1 You can only use bare-metal host drivers that support virtual media network booting,
for example redfish-virtualmedia and idrac-virtualmedia.
2 You must set the value to true to prevent the BMO from re-provisioning the bare-
metal controller host.
2. Create the bare-metal host object by running the following command:
$ oc create -f controller.yaml
Example output
secret/controller1-bmc created
baremetalhost.metal3.io/controller1 created
Verification
Verify that the BMO created the bare-metal host object by running the following command:
$ oc get bmh -A
Example output
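You can also inspect a single host to confirm how the BMO records it. The following command is a sketch that assumes the controller1 host name used in the example; for a host managed this way, the reported state is typically externally provisioned:
$ oc get bmh controller1 -n openshift-machine-api -o jsonpath='{.status.provisioning.state}'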
Prerequisites
Procedure
Example output
node/app1 cordoned
WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-node-tuning-operator/tuned-tvthg, openshift-dns/dns-default-9q6rz, openshift-dns/node-resolver-zvt42, openshift-image-registry/node-ca-mzxth, openshift-ingress-canary/ingress-canary-qq5lf, openshift-machine-config-operator/machine-config-daemon-v79dm, openshift-monitoring/node-exporter-2vn59, openshift-multus/multus-additional-cni-plugins-wssvj, openshift-multus/multus-fn8tg, openshift-multus/network-metrics-daemon-5qv55, openshift-network-diagnostics/network-check-target-jqxn2, openshift-ovn-kubernetes/ovnkube-node-rsvqg
evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766965-258vp
evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766950-kg5mk
evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766935-stf4s
pod/collect-profiles-27766965-258vp evicted
pod/collect-profiles-27766950-kg5mk evicted
pod/collect-profiles-27766935-stf4s evicted
node/app1 drained
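The drained output shown above is typically produced by cordoning and draining the node before you deprovision it. The following commands are a sketch; the node name app1 matches the example output, and the exact drain flags depend on your workloads:
$ oc adm cordon app1
$ oc adm drain app1 --ignore-daemonsets --delete-emptydir-data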
a. Edit the BareMetalHost CR for the host by running the following command:
$ oc edit bmh <host_name> -n openshift-machine-api
b. Delete the spec.customDeploy and spec.customDeploy.method lines from the CR:
...
customDeploy:
  method: install_coreos
c. Verify that the provisioning state of the host changes to deprovisioning by running the
following command:
$ oc get bmh -A
Example output
3. Delete the host by running the following command when the BareMetalHost state changes to
available:
NOTE
You can run this step without having to edit the BareMetalHost CR. It might take
some time for the BareMetalHost state to change from deprovisioning to
available.
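A minimal sketch of the deletion command, assuming the BareMetalHost object follows the openshift-worker-<num> naming used earlier in this chapter:
$ oc delete bmh -n openshift-machine-api openshift-worker-<num>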
Verification
Verify that you deleted the node by running the following command:
$ oc get nodes
Example output
CHAPTER 6. INSTALLATION CONFIGURATION PARAMETERS FOR BARE METAL
NOTE
After installation, you cannot modify these parameters in the install-config.yaml file.
Parameter:
metadata:
  name:
Description: The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.
Values: String of lowercase letters and hyphens (-), such as dev.
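To illustrate how these two parameters combine, the following install-config.yaml fragment is a sketch with assumed values; with it, the DNS records for the cluster become subdomains of dev.example.com:
apiVersion: v1
baseDomain: example.com
metadata:
  name: dev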
Consider the following information before you configure network parameters for your cluster:
If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and
IPv6 address families are supported.
If you deployed nodes in an OpenShift Container Platform cluster with a network that supports
both IPv4 and non-link-local IPv6 addresses, configure your cluster to use a dual-stack network.
For clusters configured for dual-stack networking, both IPv4 and IPv6 traffic must use the
same network interface as the default gateway. This ensures that in a multiple network
interface controller (NIC) environment, a cluster can detect which NIC to use based on the
available network interface. For more information, see "OVN-Kubernetes IPv6 and dual-
stack limitations" in About the OVN-Kubernetes network plugin.
To prevent network connectivity issues, do not install a single-stack IPv4 cluster on a host
that supports dual-stack networking.
If you configure your cluster to use both IP address families, review the following requirements:
Both IP families must use the same network interface for the default gateway.
You must specify IPv4 and IPv6 addresses in the same order for all network configuration
parameters. For example, in the following configuration, IPv4 addresses are listed before
IPv6 addresses.
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  - cidr: fd00:10:128::/56
    hostPrefix: 64
  serviceNetwork:
  - 172.30.0.0/16
  - fd00:172:16::/112
Parameter:
networking:
  serviceNetwork:
Description: The IP address block for services. The default value is 172.30.0.0/16. The OVN-Kubernetes network plugin supports only a single IP address block for the service network. If you use the OVN-Kubernetes network plugin, you can specify an IP address block for both the IPv4 and IPv6 address families.
Values: An array with an IP address block in CIDR format. For example:
networking:
  serviceNetwork:
  - 172.30.0.0/16
  - fd02::/112
IMPORTANT
If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
Parameter:
compute:
  platform:
Description: Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.
Values: aws, azure, gcp, ibmcloud, nutanix, openstack, powervs, vsphere, or {}
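For a bare-metal cluster, both machine pools usually leave the platform empty. The following install-config.yaml fragment is a sketch showing the matching values; the pool names worker and master are the conventional ones:
compute:
- name: worker
  platform: {}
controlPlane:
  name: master
  platform: {}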
Parameter:
featureSet:
Description: Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates".
Values: String. The name of the feature set to enable, such as TechPreviewNoUpgrade.
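As a sketch, the parameter is a top-level key in install-config.yaml; the value shown here is the example feature set named above:
featureSet: TechPreviewNoUpgrade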
IMPORTANT
If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
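As a sketch of where this setting lives, simultaneous multithreading is controlled by the hyperthreading field of the compute and controlPlane machine pools in install-config.yaml; the Disabled value here is an assumption for illustration only:
controlPlane:
  name: master
  hyperthreading: Disabled
compute:
- name: worker
  hyperthreading: Disabled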
NOTE
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
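As a sketch of that recommendation, you might create a key, load it into your ssh-agent process, and then paste the public key into the sshKey field of install-config.yaml. The key path and type below are assumptions:
$ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
$ eval "$(ssh-agent -s)"
$ ssh-add ~/.ssh/id_ed25519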
Additional resources