OpenShift Container Platform 4.6

Installing on Azure

Installing OpenShift Container Platform Azure clusters

Last Updated: 2021-02-18


Legal Notice
Copyright © 2021 Red Hat, Inc.

The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution-Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.

Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.

Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.

Java ® is a registered trademark of Oracle and/or its affiliates.

XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.

MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.

Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.

The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.

Abstract
This document provides instructions for installing and uninstalling OpenShift Container Platform
clusters on Microsoft Azure.
Table of Contents

CHAPTER 1. INSTALLING ON AZURE
  1.1. CONFIGURING AN AZURE ACCOUNT
    1.1.1. Azure account limits
    1.1.2. Configuring a public DNS zone in Azure
    1.1.3. Increasing Azure account limits
    1.1.4. Required Azure roles
    1.1.5. Creating a service principal
    1.1.6. Supported Azure regions
      Supported Azure public regions
      Supported Azure Government regions
    1.1.7. Next steps
  1.2. MANUALLY CREATING IAM FOR AZURE
    1.2.1. Manually create IAM
    1.2.2. Admin credentials root secret format
    1.2.3. Upgrading clusters with manually maintained credentials
    1.2.4. Mint mode
  1.3. INSTALLING A CLUSTER QUICKLY ON AZURE
    1.3.1. Prerequisites
    1.3.2. Internet and Telemetry access for OpenShift Container Platform
    1.3.3. Generating an SSH private key and adding it to the agent
    1.3.4. Obtaining the installation program
    1.3.5. Deploying the cluster
    1.3.6. Installing the OpenShift CLI by downloading the binary
      1.3.6.1. Installing the OpenShift CLI on Linux
      1.3.6.2. Installing the OpenShift CLI on Windows
      1.3.6.3. Installing the OpenShift CLI on macOS
    1.3.7. Logging in to the cluster by using the CLI
    1.3.8. Next steps
  1.4. INSTALLING A CLUSTER ON AZURE WITH CUSTOMIZATIONS
    1.4.1. Prerequisites
    1.4.2. Internet and Telemetry access for OpenShift Container Platform
    1.4.3. Generating an SSH private key and adding it to the agent
    1.4.4. Obtaining the installation program
    1.4.5. Creating the installation configuration file
      1.4.5.1. Installation configuration parameters
      1.4.5.2. Sample customized install-config.yaml file for Azure
    1.4.6. Deploying the cluster
    1.4.7. Installing the OpenShift CLI by downloading the binary
      1.4.7.1. Installing the OpenShift CLI on Linux
      1.4.7.2. Installing the OpenShift CLI on Windows
      1.4.7.3. Installing the OpenShift CLI on macOS
    1.4.8. Logging in to the cluster by using the CLI
    1.4.9. Next steps
  1.5. INSTALLING A CLUSTER ON AZURE WITH NETWORK CUSTOMIZATIONS
    1.5.1. Prerequisites
    1.5.2. Internet and Telemetry access for OpenShift Container Platform
    1.5.3. Generating an SSH private key and adding it to the agent
    1.5.4. Obtaining the installation program
    1.5.5. Creating the installation configuration file
      1.5.5.1. Installation configuration parameters
      1.5.5.2. Network configuration parameters
      1.5.5.3. Sample customized install-config.yaml file for Azure
    1.5.6. Modifying advanced network configuration parameters
    1.5.7. Cluster Network Operator configuration
      1.5.7.1. Configuration parameters for the OpenShift SDN default CNI network provider
      1.5.7.2. Configuration parameters for the OVN-Kubernetes default CNI network provider
      1.5.7.3. Cluster Network Operator example configuration
    1.5.8. Configuring hybrid networking with OVN-Kubernetes
    1.5.9. Deploying the cluster
    1.5.10. Installing the OpenShift CLI by downloading the binary
      1.5.10.1. Installing the OpenShift CLI on Linux
      1.5.10.2. Installing the OpenShift CLI on Windows
      1.5.10.3. Installing the OpenShift CLI on macOS
    1.5.11. Logging in to the cluster by using the CLI
    1.5.12. Next steps
  1.6. INSTALLING A CLUSTER ON AZURE INTO AN EXISTING VNET
    1.6.1. Prerequisites
    1.6.2. About reusing a VNet for your OpenShift Container Platform cluster
      1.6.2.1. Requirements for using your VNet
        1.6.2.1.1. Network security group requirements
      1.6.2.2. Division of permissions
      1.6.2.3. Isolation between clusters
    1.6.3. Internet and Telemetry access for OpenShift Container Platform
    1.6.4. Generating an SSH private key and adding it to the agent
    1.6.5. Obtaining the installation program
    1.6.6. Creating the installation configuration file
      1.6.6.1. Installation configuration parameters
      1.6.6.2. Sample customized install-config.yaml file for Azure
      1.6.6.3. Configuring the cluster-wide proxy during installation
    1.6.7. Deploying the cluster
    1.6.8. Installing the OpenShift CLI by downloading the binary
      1.6.8.1. Installing the OpenShift CLI on Linux
      1.6.8.2. Installing the OpenShift CLI on Windows
      1.6.8.3. Installing the OpenShift CLI on macOS
    1.6.9. Logging in to the cluster by using the CLI
    1.6.10. Next steps
  1.7. INSTALLING A PRIVATE CLUSTER ON AZURE
    1.7.1. Prerequisites
    1.7.2. Private clusters
      1.7.2.1. Private clusters in Azure
        1.7.2.1.1. Limitations
      1.7.2.2. User-defined outbound routing
        Private cluster with network address translation
        Private cluster with Azure Firewall
        Private cluster with a proxy configuration
        Private cluster with no Internet access
    1.7.3. About reusing a VNet for your OpenShift Container Platform cluster
      1.7.3.1. Requirements for using your VNet
        1.7.3.1.1. Network security group requirements
      1.7.3.2. Division of permissions
      1.7.3.3. Isolation between clusters
    1.7.4. Internet and Telemetry access for OpenShift Container Platform
    1.7.5. Generating an SSH private key and adding it to the agent
    1.7.6. Obtaining the installation program
    1.7.7. Manually creating the installation configuration file
      1.7.7.1. Installation configuration parameters
      1.7.7.2. Sample customized install-config.yaml file for Azure
      1.7.7.3. Configuring the cluster-wide proxy during installation
    1.7.8. Deploying the cluster
    1.7.9. Installing the OpenShift CLI by downloading the binary
      1.7.9.1. Installing the OpenShift CLI on Linux
      1.7.9.2. Installing the OpenShift CLI on Windows
      1.7.9.3. Installing the OpenShift CLI on macOS
    1.7.10. Logging in to the cluster by using the CLI
    1.7.11. Next steps
  1.8. INSTALLING A CLUSTER ON AZURE INTO A GOVERNMENT REGION
    1.8.1. Prerequisites
    1.8.2. Azure government regions
    1.8.3. Private clusters
      1.8.3.1. Private clusters in Azure
        1.8.3.1.1. Limitations
      1.8.3.2. User-defined outbound routing
        Private cluster with network address translation
        Private cluster with Azure Firewall
        Private cluster with a proxy configuration
        Private cluster with no Internet access
    1.8.4. About reusing a VNet for your OpenShift Container Platform cluster
      1.8.4.1. Requirements for using your VNet
        1.8.4.1.1. Network security group requirements
      1.8.4.2. Division of permissions
      1.8.4.3. Isolation between clusters
    1.8.5. Internet and Telemetry access for OpenShift Container Platform
    1.8.6. Generating an SSH private key and adding it to the agent
    1.8.7. Obtaining the installation program
    1.8.8. Manually creating the installation configuration file
      1.8.8.1. Installation configuration parameters
      1.8.8.2. Sample customized install-config.yaml file for Azure
      1.8.8.3. Configuring the cluster-wide proxy during installation
    1.8.9. Deploying the cluster
    1.8.10. Installing the OpenShift CLI by downloading the binary
      1.8.10.1. Installing the OpenShift CLI on Linux
      1.8.10.2. Installing the OpenShift CLI on Windows
      1.8.10.3. Installing the OpenShift CLI on macOS
    1.8.11. Logging in to the cluster by using the CLI
    1.8.12. Next steps
  1.9. INSTALLING A CLUSTER ON AZURE USING ARM TEMPLATES
    1.9.1. Prerequisites
    1.9.2. Internet and Telemetry access for OpenShift Container Platform
    1.9.3. Configuring your Azure project
      1.9.3.1. Azure account limits
      1.9.3.2. Configuring a public DNS zone in Azure
      1.9.3.3. Increasing Azure account limits
      1.9.3.4. Certificate signing requests management
      1.9.3.5. Required Azure roles
      1.9.3.6. Creating a service principal
      1.9.3.7. Supported Azure regions
        Supported Azure public regions
        Supported Azure Government regions
    1.9.4. Obtaining the installation program
    1.9.5. Generating an SSH private key and adding it to the agent
    1.9.6. Creating the installation files for Azure
      1.9.6.1. Optional: Creating a separate /var partition
      1.9.6.2. Creating the installation configuration file
      1.9.6.3. Configuring the cluster-wide proxy during installation
      1.9.6.4. Exporting common variables for ARM templates
      1.9.6.5. Creating the Kubernetes manifest and Ignition config files
    1.9.7. Creating the Azure resource group and identity
    1.9.8. Uploading the RHCOS cluster image and bootstrap Ignition config file
    1.9.9. Example for creating DNS zones
    1.9.10. Creating a VNet in Azure
      1.9.10.1. ARM template for the VNet
    1.9.11. Deploying the RHCOS cluster image for the Azure infrastructure
      1.9.11.1. ARM template for image storage
    1.9.12. Creating networking and load balancing components in Azure
      1.9.12.1. ARM template for the network and load balancers
    1.9.13. Creating the bootstrap machine in Azure
      1.9.13.1. ARM template for the bootstrap machine
    1.9.14. Creating the control plane machines in Azure
      1.9.14.1. ARM template for control plane machines
    1.9.15. Wait for bootstrap completion and remove bootstrap resources in Azure
    1.9.16. Creating additional worker machines in Azure
      1.9.16.1. ARM template for worker machines
    1.9.17. Installing the OpenShift CLI by downloading the binary
      1.9.17.1. Installing the OpenShift CLI on Linux
      1.9.17.2. Installing the OpenShift CLI on Windows
      1.9.17.3. Installing the OpenShift CLI on macOS
    1.9.18. Logging in to the cluster by using the CLI
    1.9.19. Approving the certificate signing requests for your machines
    1.9.20. Adding the Ingress DNS records
    1.9.21. Completing an Azure installation on user-provisioned infrastructure
  1.10. UNINSTALLING A CLUSTER ON AZURE
    1.10.1. Removing a cluster that uses installer-provisioned infrastructure

CHAPTER 1. INSTALLING ON AZURE

1.1. CONFIGURING AN AZURE ACCOUNT


Before you can install OpenShift Container Platform, you must configure a Microsoft Azure account.

IMPORTANT

All Azure resources that are available through public endpoints are subject to resource
name restrictions, and you cannot create resources that use certain terms. For a list of
terms that Azure restricts, see Resolve reserved resource name errors in the Azure
documentation.

1.1.1. Azure account limits


The OpenShift Container Platform cluster uses a number of Microsoft Azure components, and the
default Azure subscription and service limits, quotas, and constraints affect your ability to install
OpenShift Container Platform clusters.

IMPORTANT

Default limits vary by offer category types, such as Free Trial and Pay-As-You-Go, and by
series, such as Dv2, F, and G. For example, the default for Enterprise Agreement
subscriptions is 350 cores.

Check the limits for your subscription type and if necessary, increase quota limits for your
account before you install a default cluster on Azure.

The following table summarizes the Azure components whose limits can impact your ability to install and
run OpenShift Container Platform clusters.

Component: vCPU
Number required by default: 40
Default Azure limit: 20 per region
Description: A default cluster requires 40 vCPUs, so you must increase the account limit.

By default, each cluster creates the following instances:

One bootstrap machine, which is removed after installation

Three control plane machines

Three compute machines

Because the bootstrap machine uses Standard_D4s_v3 machines, which use 4 vCPUs, the control plane machines use Standard_D8s_v3 virtual machines, which use 8 vCPUs, and the worker machines use Standard_D4s_v3 virtual machines, which use 4 vCPUs, a default cluster requires 40 vCPUs. The bootstrap node VM, which uses 4 vCPUs, is used only during installation.

To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require.

By default, the installation program distributes control plane and compute machines across all availability zones within a region. To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones.

Component: VNet
Number required by default: 1
Default Azure limit: 1000 per region
Description: Each default cluster requires one Virtual Network (VNet), which contains two subnets.

Component: Network interfaces
Number required by default: 6
Default Azure limit: 65,536 per region
Description: Each default cluster requires six network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces.

Component: Network security groups
Number required by default: 2
Default Azure limit: 5000
Description: Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets:

controlplane: Allows the control plane machines to be reached on port 6443 from anywhere

node: Allows worker nodes to be reached from the Internet on ports 80 and 443

Component: Network load balancers
Number required by default: 3
Default Azure limit: 1000 per region
Description: Each cluster creates the following load balancers:

default: Public IP address that load balances requests to ports 80 and 443 across worker machines

internal: Private IP address that load balances requests to ports 6443 and 22623 across control plane machines

external: Public IP address that load balances requests to port 6443 across control plane machines

If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers.

Component: Public IP addresses
Number required by default: 3
Description: Each of the two public load balancers uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation.

Component: Private IP addresses
Number required by default: 7
Description: The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address.
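For reference, the default 40 vCPU requirement breaks down as follows:

(1 bootstrap x 4 vCPUs) + (3 control plane x 8 vCPUs) + (3 compute x 4 vCPUs) = 4 + 24 + 12 = 40 vCPUs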

1.1.2. Configuring a public DNS zone in Azure

To install OpenShift Container Platform, the Microsoft Azure account you use must have a dedicated public hosted DNS zone in your account. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster.

Procedure

1. Identify your domain, or subdomain, and registrar. You can transfer an existing domain and
registrar or obtain a new one through Azure or another source.

NOTE

For more information about purchasing domains through Azure, see Buy a
custom domain name for Azure App Service in the Azure documentation.

2. If you are using an existing domain and registrar, migrate its DNS to Azure. See Migrate an active
DNS name to Azure App Service in the Azure documentation.

3. Configure DNS for your domain. Follow the steps in the Tutorial: Host your domain in Azure
DNS in the Azure documentation to create a public hosted zone for your domain or subdomain,
extract the new authoritative name servers, and update the registrar records for the name
servers that your domain uses.
Use an appropriate root domain, such as openshiftcorp.com, or subdomain, such as
clusters.openshiftcorp.com.

4. If you use a subdomain, follow your company’s procedures to add its delegation records to the
parent domain.
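If you prefer the Azure CLI to the portal tutorial, the following minimal sketch creates a public zone and prints its authoritative name servers. The resource group name <resource_group> is a placeholder, and the zone name reuses the example subdomain above:

$ az network dns zone create --resource-group <resource_group> --name clusters.openshiftcorp.com
$ az network dns zone show --resource-group <resource_group> --name clusters.openshiftcorp.com --query nameServers

You still must update your registrar or parent domain with the name servers that the second command returns.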

1.1.3. Increasing Azure account limits


To increase an account limit, file a support request on the Azure portal.

NOTE

You can increase only one type of quota per support request.

Procedure

1. From the Azure portal, click Help + support in the lower left corner.

2. Click New support request and then select the required values:

a. From the Issue type list, select Service and subscription limits (quotas).

b. From the Subscription list, select the subscription to modify.

c. From the Quota type list, select the quota to increase. For example, select Compute-VM
(cores-vCPUs) subscription limit increases to increase the number of vCPUs, which is
required to install a cluster.

d. Click Next: Solutions.

3. On the Problem Details page, provide the required information for your quota increase:

a. Click Provide details and provide the required details in the Quota details window.

b. In the SUPPORT METHOD and CONTACT INFO sections, provide the issue severity and
your contact details.

4. Click Next: Review + create and then click Create.

1.1.4. Required Azure roles


Your Microsoft Azure account must have the following roles for the subscription that you use:

User Access Administrator

To set roles on the Azure portal, see the Manage access to Azure resources using RBAC and the Azure
portal in the Azure documentation.
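To confirm that your account has the required roles before you install, one option is to list your role assignments with the Azure CLI. This is a sketch; replace <user_object_id> with the object ID of your Azure Active Directory user:

$ az role assignment list --assignee <user_object_id> --query "[].roleDefinitionName" -o tsv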

1.1.5. Creating a service principal


Because OpenShift Container Platform and its installation program must create Microsoft Azure
resources through Azure Resource Manager, you must create a service principal to represent it.

Prerequisites

Install or update the Azure CLI.

Install the jq package.

Your Azure account has the required roles for the subscription that you use.

Procedure

1. Log in to the Azure CLI:

$ az login

Log in to Azure in the web console by using your credentials.

2. If your Azure account uses subscriptions, ensure that you are using the right subscription.

a. View the list of available accounts and record the tenantId value for the subscription you
want to use for your cluster:

$ az account list --refresh

Example output

[
  {
    "cloudName": "AzureCloud",
    "id": "9bab1460-96d5-40b3-a78e-17b15e978a80",
    "isDefault": true,
    "name": "Subscription Name",
    "state": "Enabled",
    "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee",
    "user": {
      "name": "you@example.com",
      "type": "user"
    }
  }
]

b. View your active account details and confirm that the tenantId value matches the
subscription you want to use:

$ az account show

Example output

{
  "environmentName": "AzureCloud",
  "id": "9bab1460-96d5-40b3-a78e-17b15e978a80",
  "isDefault": true,
  "name": "Subscription Name",
  "state": "Enabled",
  "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1
  "user": {
    "name": "you@example.com",
    "type": "user"
  }
}

1 Ensure that the value of the tenantId parameter is the UUID of the correct subscription.

c. If you are not using the right subscription, change the active subscription:

$ az account set -s <id> 1

1 Substitute the value of the id for the subscription that you want to use for <id>.

d. If you changed the active subscription, display your account information again:

$ az account show

Example output

{
  "environmentName": "AzureCloud",
  "id": "33212d16-bdf6-45cb-b038-f6565b61edda",
  "isDefault": true,
  "name": "Subscription Name",
  "state": "Enabled",
  "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee",
  "user": {
    "name": "you@example.com",
    "type": "user"
  }
}


3. Record the values of the tenantId and id parameters from the previous output. You need these
values during OpenShift Container Platform installation.

4. Create the service principal for your account:

$ az ad sp create-for-rbac --role Contributor --name <service_principal> 1

1 Replace <service_principal> with the name to assign to the service principal.

Example output

Changing "<service_principal>" to a valid URI of "http://<service_principal>", which is the required format used for service principal names
Retrying role assignment creation: 1/36
Retrying role assignment creation: 2/36
Retrying role assignment creation: 3/36
Retrying role assignment creation: 4/36
{
  "appId": "8bd0d04d-0ac2-43a8-928d-705c598c6956",
  "displayName": "<service_principal>",
  "name": "http://<service_principal>",
  "password": "ac461d78-bf4b-4387-ad16-7e32e328aec6",
  "tenant": "6048c7e9-b2ad-488d-a54e-dc3f6be6a7ee"
}

5. Record the values of the appId and password parameters from the previous output. You need
these values during OpenShift Container Platform installation.

6. Grant additional permissions to the service principal.

You must always add the Contributor and User Access Administrator roles to the app
registration service principal so the cluster can assign credentials for its components.

To operate the Cloud Credential Operator (CCO) in mint mode, the app registration service
principal also requires the Azure Active Directory
Graph/Application.ReadWrite.OwnedBy API permission.

To operate the CCO in passthrough mode, the app registration service principal does not
require additional API permissions.

For more information about CCO modes, see the Cloud Credential Operator entry in the Red
Hat Operators reference content.

a. To assign the User Access Administrator role, run the following command:

$ az role assignment create --role "User Access Administrator" \
    --assignee-object-id $(az ad sp list --filter "appId eq '<appId>'" \ 1
    | jq '.[0].objectId' -r)

1 Replace <appId> with the appId parameter value for your service principal.

b. To assign the Azure Active Directory Graph permission, run the following command:

$ az ad app permission add --id <appId> \ 1
    --api 00000002-0000-0000-c000-000000000000 \
    --api-permissions 824c81eb-e3f8-4ee6-8f6d-de7f50d565b7=Role

1 Replace <appId> with the appId parameter value for your service principal.

Example output

Invoking "az ad app permission grant --id 46d33abc-b8a3-46d8-8c84-f0fd58177435 --api 00000002-0000-0000-c000-000000000000" is needed to make the change effective

For more information about the specific permissions that you grant with this command, see the GUID Table for Windows Azure Active Directory Permissions.

c. Approve the permissions request. If your account does not have the Azure Active Directory
tenant administrator role, follow the guidelines for your organization to request that the
tenant administrator approve your permissions request.

$ az ad app permission grant --id <appId> \ 1
    --api 00000002-0000-0000-c000-000000000000

1 Replace <appId> with the appId parameter value for your service principal.

1.1.6. Supported Azure regions


The installation program dynamically generates the list of available Microsoft Azure regions based on
your subscription. The following Azure regions were tested and validated in OpenShift Container
Platform version 4.6.1:

Supported Azure public regions

australiacentral (Australia Central)

australiaeast (Australia East)

australiasoutheast (Australia South East)

brazilsouth (Brazil South)

canadacentral (Canada Central)

canadaeast (Canada East)

centralindia (Central India)

centralus (Central US)

eastasia (East Asia)

eastus (East US)

eastus2 (East US 2)


francecentral (France Central)

germanywestcentral (Germany West Central)

japaneast (Japan East)

japanwest (Japan West)

koreacentral (Korea Central)

koreasouth (Korea South)

northcentralus (North Central US)

northeurope (North Europe)

norwayeast (Norway East)

southafricanorth (South Africa North)

southcentralus (South Central US)

southeastasia (Southeast Asia)

southindia (South India)

switzerlandnorth (Switzerland North)

uaenorth (UAE North)

uksouth (UK South)

ukwest (UK West)

westcentralus (West Central US)

westeurope (West Europe)

westindia (West India)

westus (West US)

westus2 (West US 2)

Supported Azure Government regions


Support for the following Microsoft Azure Government (MAG) regions was added in OpenShift
Container Platform version 4.6:

usgovtexas (US Gov Texas)

usgovvirginia (US Gov Virginia)

You can reference all available MAG regions in the Azure documentation. Other provided MAG regions
are expected to work with OpenShift Container Platform, but have not been tested.

1.1.7. Next steps

Install an OpenShift Container Platform cluster on Azure. You can install a customized cluster or quickly install a cluster with default options.

1.2. MANUALLY CREATING IAM FOR AZURE

1.2.1. Manually create IAM


The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in
environments where the cloud identity and access management (IAM) APIs are not reachable, or the
administrator prefers not to store an administrator-level credential secret in the cluster kube-system
namespace.

Procedure

1. To generate the manifests, run the following command from the directory that contains the
installation program:

$ openshift-install create manifests --dir=<installation_directory> 1

1 For <installation_directory>, specify the directory name to store the files that the
installation program creates.

2. Insert a config map into the manifests directory so that the Cloud Credential Operator is placed
in manual mode:

$ cat <<EOF > mycluster/manifests/cco-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cloud-credential-operator-config
  namespace: openshift-cloud-credential-operator
  annotations:
    release.openshift.io/create-only: "true"
data:
  disabled: "true"
EOF

3. Remove the admin credential secret created using your local cloud credentials. This removal
prevents your admin credential from being stored in the cluster:

$ rm mycluster/openshift/99_cloud-creds-secret.yaml

4. From the directory that contains the installation program, obtain details of the OpenShift
Container Platform release image that your openshift-install binary is built to use:

$ openshift-install version

Example output

release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64

5. Locate all CredentialsRequest objects in this release image that target the cloud you are deploying on:

$ oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=azure

This displays the details for each request.

Sample CredentialsRequest object

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: openshift-image-registry-azure
  namespace: openshift-cloud-credential-operator
spec:
  secretRef:
    name: installer-cloud-credentials
    namespace: openshift-image-registry
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    roleBindings:
    - role: Contributor

6. Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. The format for the secret data varies for each cloud provider.
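For Azure, a minimal sketch of such a secret for the sample CredentialsRequest object above, using the Microsoft Azure secret format shown in the next section; every value is a placeholder:

apiVersion: v1
kind: Secret
metadata:
  namespace: openshift-image-registry
  name: installer-cloud-credentials
stringData:
  azure_subscription_id: <SubscriptionID>
  azure_client_id: <ClientID>
  azure_client_secret: <ClientSecret>
  azure_tenant_id: <TenantID>
  azure_resource_prefix: <ResourcePrefix>
  azure_resourcegroup: <ResourceGroup>
  azure_region: <Region>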

7. From the directory that contains the installation program, proceed with your cluster creation:

$ openshift-install create cluster --dir=<installation_directory>

IMPORTANT

Before upgrading a cluster that uses manually maintained credentials, you must
ensure that the CCO is in an upgradeable state. For details, see the Upgrading
clusters with manually maintained credentials section of the installation content
for your cloud provider.

1.2.2. Admin credentials root secret format


Each cloud provider uses a credentials root secret in the kube-system namespace by convention, which
is then used to satisfy all credentials requests and create their respective secrets. This is done either by
minting new credentials, with mint mode, or by copying the credentials root secret, with passthrough
mode.

The format for the secret varies by cloud, and is also used for each CredentialsRequest secret.

Microsoft Azure secret format


apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: azure-credentials
stringData:
  azure_subscription_id: <SubscriptionID>
  azure_client_id: <ClientID>
  azure_client_secret: <ClientSecret>
  azure_tenant_id: <TenantID>
  azure_resource_prefix: <ResourcePrefix>
  azure_resourcegroup: <ResourceGroup>
  azure_region: <Region>

On Microsoft Azure, the credentials secret format includes two properties that must contain the
cluster’s infrastructure ID, generated randomly for each cluster installation. This value can be found
after running create manifests:

$ cat .openshift_install_state.json | jq '."*installconfig.ClusterID".InfraID' -r

Example output

mycluster-2mpcn

This value would be used in the secret data as follows:

azure_resource_prefix: mycluster-2mpcn
azure_resourcegroup: mycluster-2mpcn-rg

1.2.3. Upgrading clusters with manually maintained credentials


If credentials are added in a future release, the Cloud Credential Operator (CCO) upgradable status for
a cluster with manually maintained credentials changes to false. For a minor release, for example, from
4.5 to 4.6, this status prevents you from upgrading until you have addressed any updated permissions.
For z-stream releases, for example, from 4.5.10 to 4.5.11, the upgrade is not blocked, but the credentials
must still be updated for the new release.

Use the Administrator perspective of the web console to determine if the CCO is upgradeable.

1. Navigate to Administration → Cluster Settings.

2. To view the CCO status details, click cloud-credential in the Cluster Operators list.

3. If the Upgradeable status in the Conditions section is False, examine the credentialsRequests for the new release and update the manually maintained credentials on your cluster to match before upgrading. A CLI check for this status is sketched below.
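If you prefer the command line, a minimal check of the same condition, assuming the oc CLI is installed and you are logged in with cluster administrator privileges:

$ oc get clusteroperator cloud-credential \
    -o jsonpath='{.status.conditions[?(@.type=="Upgradeable")].status}'

A result of False means you must update the manually maintained credentials before a minor-version upgrade.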

In addition to creating new credentials for the release image that you are upgrading to, you must review
the required permissions for existing credentials and accommodate any new permissions requirements
for existing components in the new release. The CCO cannot detect these mismatches and will not set
upgradable to false in this case.

The Manually creating IAM section of the installation content for your cloud provider explains how to
obtain and use the credentials required for your cloud.


1.2.4. Mint mode


Mint mode is the default and recommended Cloud Credential Operator (CCO) credentials mode for
OpenShift Container Platform. In this mode, the CCO uses the provided administrator-level cloud
credential to run the cluster. Mint mode is supported for AWS, GCP, and Azure.

In mint mode, the admin credential is stored in the kube-system namespace and then used by the CCO
to process the CredentialsRequest objects in the cluster and create users for each with specific
permissions.

The benefits of mint mode include:

Each cluster component has only the permissions it requires

Automatic, on-going reconciliation for cloud credentials, including additional credentials or permissions that might be required for upgrades

One drawback is that mint mode requires admin credential storage in a cluster kube-system secret.

1.3. INSTALLING A CLUSTER QUICKLY ON AZURE


In OpenShift Container Platform version 4.6, you can install a cluster on Microsoft Azure that uses the
default configuration options.

1.3.1. Prerequisites
Review details about the OpenShift Container Platform installation and update processes.

Configure an Azure account to host the cluster and determine the tested and validated region
to deploy the cluster to.

If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster
administrator can manually create and maintain IAM credentials. Manual mode can also be used
in environments where the cloud IAM APIs are not reachable.

1.3.2. Internet and Telemetry access for OpenShift Container Platform


In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The
Telemetry service, which runs by default to provide metrics about cluster health and the success of
updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs
automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained
automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift
Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management. If the cluster has Internet access and you do not disable
Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

18
CHAPTER 1. INSTALLING ON AZURE

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
Internet access. Before you update the cluster, you update the content of the mirror
registry.

1.3.3. Generating an SSH private key and adding it to the agent


If you want to perform installation debugging or disaster recovery on your cluster, you must provide an
SSH key to both your ssh-agent and the installation program. You can use this key to access the
bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches
such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:

$ ssh-keygen -t ed25519 -N '' \
    -f <path>/<file_name> 1

1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location
that you specified.

2. Start the ssh-agent process as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874


3. Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa
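Optionally, confirm that the key was added; ssh-add -l lists the identities that the agent currently holds:

$ ssh-add -l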

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation
program.

1.3.4. Obtaining the installation program


Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you
have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your
operating system, and place the file in the directory where you will store the installation
configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to
install your cluster. You must keep the installation program and the files that the
installation program creates after you finish installing the cluster. Both files are
required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your
cluster, even if the cluster failed during installation. To remove your cluster,
complete the OpenShift Container Platform uninstallation procedures for your
specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating
system, run the following command:

$ tar xvf openshift-install-linux.tar.gz


5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your
installation pull secret as a .txt file. This pull secret allows you to authenticate with the services
that are provided by the included authorities, including Quay.io, which serves the container
images for OpenShift Container Platform components.

1.3.5. Deploying the cluster


You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during
initial installation.

Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster
deployment:

$ ./openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2

1 For <installation_directory>, specify the directory name to store the files that the installation program creates.

2 To view different installation details, specify warn, debug, or error instead of info.

IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

Provide values at the prompts:

a. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.


b. Select azure as the platform to target.

c. If you do not have a Microsoft Azure profile stored on your computer, specify the following
Azure parameter values for your subscription and service principal:

azure subscription id: The subscription ID to use for the cluster. Specify the id value in
your account output.

azure tenant id: The tenant ID. Specify the tenantId value in your account output.

azure service principal client id: The value of the appId parameter for the service
principal.

azure service principal client secret: The value of the password parameter for the
service principal.

d. Select the region to deploy the cluster to.

e. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure
DNS Zone that you created for your cluster.

f. Enter a descriptive name for your cluster.

IMPORTANT

All Azure resources that are available through public endpoints are subject to
resource name restrictions, and you cannot create resources that use certain
terms. For a list of terms that Azure restricts, see Resolve reserved resource
name errors in the Azure documentation.

g. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift
Cluster Manager site.

NOTE

If the cloud provider account that you configured on your host does not have
sufficient permissions to deploy the cluster, the installation process stops, and
the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to
its web console and credentials for the kubeadmin user, display in your terminal.

Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export
KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-
Wt5AL"
INFO Time elapsed: 36m22s

NOTE

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

IMPORTANT

The Ignition config files that the installation program generates contain
certificates that expire after 24 hours, which are then renewed at that time. If the
cluster is shut down before renewing the certificates and the cluster is later
restarted after the 24 hours have elapsed, the cluster automatically recovers the
expired certificates. The exception is that you must manually approve the
pending node-bootstrapper certificate signing requests (CSRs) to recover
kubelet certificates. See the documentation for Recovering from expired control
plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation
program creates. Both are required to delete the cluster.

1.3.6. Installing the OpenShift CLI by downloading the binary


You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a
command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands
in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.3.6.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Linux from the drop-down menu and click
Download command-line tools.

4. Unpack the archive:

$ tar xvzf <file>

5. Place the oc binary in a directory that is on your PATH.


To check your PATH, execute the following command:

$ echo $PATH


After you install the CLI, it is available using the oc command:

$ oc <command>

1.3.6.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click
Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH.


To check your PATH, open the command prompt and execute the following command:

C:\> path

After you install the CLI, it is available using the oc command:

C:\> oc <command>

1.3.6.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select MacOS from the drop-down menu and click
Download command-line tools.

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH.


To check your PATH, open a terminal and execute the following command:

$ echo $PATH

After you install the CLI, it is available using the oc command:

$ oc <command>


1.3.7. Logging in to the cluster by using the CLI


You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The
kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the
correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container
Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin

1.3.8. Next steps


Customize your cluster.

If necessary, you can opt out of remote health reporting.

1.4. INSTALLING A CLUSTER ON AZURE WITH CUSTOMIZATIONS


In OpenShift Container Platform version 4.6, you can install a customized cluster on infrastructure that
the installation program provisions on Microsoft Azure. To customize the installation, you modify
parameters in the install-config.yaml file before you install the cluster.

1.4.1. Prerequisites
Review details about the OpenShift Container Platform installation and update processes.

Configure an Azure account to host the cluster and determine the tested and validated region
to deploy the cluster to.

If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.4.2. Internet and Telemetry access for OpenShift Container Platform


In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The
Telemetry service, which runs by default to provide metrics about cluster health and the success of
updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs
automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained
automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift
Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management. If the cluster has Internet access and you do not disable
Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
Internet access. Before you update the cluster, you update the content of the mirror
registry.

1.4.3. Generating an SSH private key and adding it to the agent


If you want to perform installation debugging or disaster recovery on your cluster, you must provide an
SSH key to both your ssh-agent and the installation program. You can use this key to access the
bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches
such as AWS key pairs.

Procedure


1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:

$ ssh-keygen -t ed25519 -N '' \
    -f <path>/<file_name> 1

1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location
that you specified.

2. Start the ssh-agent process as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

3. Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation
program.

1.4.4. Obtaining the installation program


Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you
have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to
install your cluster. You must keep the installation program and the files that the
installation program creates after you finish installing the cluster. Both files are
required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your
cluster, even if the cluster failed during installation. To remove your cluster,
complete the OpenShift Container Platform uninstallation procedures for your
specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating
system, run the following command:

$ tar xvf openshift-install-linux.tar.gz

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your
installation pull secret as a .txt file. This pull secret allows you to authenticate with the services
that are provided by the included authorities, including Quay.io, which serves the container
images for OpenShift Container Platform components.

1.4.5. Creating the installation configuration file


You can customize the OpenShift Container Platform cluster you install on Microsoft Azure.

Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.

Procedure

1. Create the install-config.yaml file.

a. Change to the directory that contains the installation program and run the following
command:

$ ./openshift-install create install-config --dir=<installation_directory> 1

1 For <installation_directory>, specify the directory name to store the files that the
installation program creates.

IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

ii. Select azure as the platform to target.

iii. If you do not have a Microsoft Azure profile stored on your computer, specify the
following Azure parameter values for your subscription and service principal:

azure subscription id: The subscription ID to use for the cluster. Specify the id
value in your account output.

azure tenant id: The tenant ID. Specify the tenantId value in your account output.

azure service principal client id: The value of the appId parameter for the service
principal.

azure service principal client secret: The value of the password parameter for the
service principal.

iv. Select the region to deploy the cluster to.

v. Select the base domain to deploy the cluster to. The base domain corresponds to the
Azure DNS Zone that you created for your cluster.

vi. Enter a descriptive name for your cluster.

IMPORTANT

All Azure resources that are available through public endpoints are
subject to resource name restrictions, and you cannot create resources
that use certain terms. For a list of terms that Azure restricts, see
Resolve reserved resource name errors in the Azure documentation.

vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat
OpenShift Cluster Manager site.

2. Modify the install-config.yaml file. You can find more information about the available
parameters in the Installation configuration parameters section.


3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
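
If you do not have the subscription and tenant IDs from the prompts at hand, you can look them up with the Azure CLI. The following is a minimal sketch, assuming that the az CLI is installed and that you are logged in to the correct account; it is not part of the documented procedure:

$ az account show --query '{id: id, tenantId: tenantId}'

The appId and password values for the service principal come from the output of the command that you used when creating the service principal.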

1.4.5.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe
your account on the cloud platform that hosts your cluster and optionally customize your cluster’s
platform. When you create the install-config.yaml installation configuration file, you provide values for
the required parameters through the command line. If you customize your cluster, you can modify the
install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

Table 1.1. Required parameters

apiVersion
    The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.
    Values: String

baseDomain
    The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.
    Values: A fully-qualified domain or subdomain name, such as example.com.

metadata
    Kubernetes resource ObjectMeta, from which only the name parameter is consumed.
    Values: Object

metadata.name
    The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.
    Values: String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform
    The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.
    Values: Object

pullSecret
    Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.
    Values: For example:
    {
      "auths":{
        "cloud.openshift.com":{
          "auth":"b3Blb=",
          "email":"you@example.com"
        },
        "quay.io":{
          "auth":"b3Blb=",
          "email":"you@example.com"
        }
      }
    }

Table 1.2. Optional parameters

additionalTrustBundle
    A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.
    Values: String

compute
    The configuration for the machines that comprise the compute nodes.
    Values: Array of machine-pool objects. For details, see the following "Machine-pool" table.

compute.architecture
    Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
    Values: String

compute.hyperthreading
    Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
    Values: Enabled or Disabled

compute.name
    Required if you use compute. The name of the machine pool.
    Values: worker

compute.platform
    Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.
    Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas
    The number of compute machines, which are also known as worker machines, to provision.
    Values: A positive integer greater than or equal to 2. The default value is 3.

controlPlane
    The configuration for the machines that comprise the control plane.
    Values: Array of MachinePool objects. For details, see the following "Machine-pool" table.

controlPlane.architecture
    Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
    Values: String

controlPlane.hyperthreading
    Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
    Values: Enabled or Disabled

controlPlane.name
    Required if you use controlPlane. The name of the machine pool.
    Values: master

controlPlane.platform
    Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.
    Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas
    The number of control plane machines to provision.
    Values: The only supported value is 3, which is the default value.

credentialsMode
    The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. NOTE: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.
    Values: Mint, Passthrough, Manual, or an empty string ("").

fips
    Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
    Values: false or true

imageContentSources
    Sources and repositories for the release-image content.
    Values: Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source
    Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.
    Values: String

imageContentSources.mirrors
    Specify one or more repositories that may also contain the same images.
    Values: Array of strings

networking
    The configuration for the pod network provider in the cluster.
    Values: Object

networking.clusterNetwork
    The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.
    Values: Array of objects

networking.clusterNetwork.cidr
    Required if you use networking.clusterNetwork. The IP block address pool.
    Values: IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix
    Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.
    Values: Integer

networking.machineNetwork
    The IP address pools for machines.
    Values: Array of objects

networking.machineNetwork.cidr
    Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.
    Values: IP network, in CIDR notation as described for networking.clusterNetwork.cidr.

networking.networkType
    The type of network to install. The default is OpenShiftSDN.
    Values: String

networking.serviceNetwork
    The IP address pools for services. The default is 172.30.0.0/16.
    Values: Array of IP networks, in CIDR notation as described for networking.clusterNetwork.cidr.

publish
    How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes.
    Values: Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey
    The SSH key or keys to authenticate access to your cluster machines. NOTE: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
    Values: One or more keys. For example:
    sshKey:
      <key1>
      <key2>
      <key3>

Table 1.3. Additional Azure parameters

machines.platform.azure.type
    The Azure VM instance type.
    Values: VMs that use Windows or Linux as the operating system. See the Guest operating systems supported on Azure Stack in the Azure documentation.

machines.platform.azure.osDisk.diskSizeGB
    The Azure disk size for the VM.
    Values: Integer that represents the size of the disk in GB, for example 512. The minimum supported disk size is 120.

platform.azure.baseDomainResourceGroupName
    The name of the resource group that contains the DNS zone for your base domain.
    Values: String, for example production_cluster.

platform.azure.outboundType
    The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing.
    Values: LoadBalancer or UserDefinedRouting. The default is LoadBalancer.

platform.azure.region
    The name of the Azure region that hosts your cluster.
    Values: Any valid region name, such as centralus.

platform.azure.zone
    List of availability zones to place machines in. For high availability, specify at least two zones.
    Values: List of zones, for example ["1", "2", "3"].

platform.azure.networkResourceGroupName
    The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName.
    Values: String.

platform.azure.virtualNetwork
    The name of the existing VNet that you want to deploy your cluster to.
    Values: String.

platform.azure.controlPlaneSubnet
    The name of the existing subnet in your VNet that you want to deploy your control plane machines to.
    Values: Valid CIDR, for example 10.0.0.0/16.

platform.azure.computeSubnet
    The name of the existing subnet in your VNet that you want to deploy your compute machines to.
    Values: Valid CIDR, for example 10.0.0.0/16.

platform.azure.cloudName
    The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used.
    Values: Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud.

NOTE

You cannot customize Azure Availability Zones or use tags to organize your Azure resources with an Azure cluster.

1.4.5.2. Sample customized install-config.yaml file for Azure

You can customize the install-config.yaml file to specify more details about your OpenShift Container
Platform cluster’s platform or modify the values of the required parameters.

IMPORTANT

This sample YAML file is provided for reference only. You must obtain your install-
config.yaml file by using the installation program and modify it.


apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
  hyperthreading: Enabled 3 4
  name: master
  platform:
    azure:
      osDisk:
        diskSizeGB: 1024 5
      type: Standard_D8s_v3
  replicas: 3
compute: 6
- hyperthreading: Enabled 7
  name: worker
  platform:
    azure:
      type: Standard_D2s_v3
      osDisk:
        diskSizeGB: 512 8
      zones: 9
      - "1"
      - "2"
      - "3"
  replicas: 5
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    region: centralus 11
    baseDomainResourceGroupName: resource_group 12
pullSecret: '{"auths": ...}' 13
fips: false 14
sshKey: ssh-ed25519 AAAA... 15

1 10 11 13 Required. The installation program prompts you for this value.

2 6 If you do not provide these parameters and values, the installation program provides the default
value.

3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings.
To meet the requirements of the different data structures, the first line of the compute section
must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both
sections currently define a single machine pool, it is possible that future versions of OpenShift
Container Platform will support defining multiple compute pools during installation. Only one
control plane pool is used.


4 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3, for your machines if you disable simultaneous multithreading.

5 8 You can specify the size of the disk to use in GB. Minimum recommendation for master nodes is
1024 GB.

9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones.

12 Specify the name of the resource group that contains the DNS zone for your base domain.

14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is
enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container
Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography
modules that are provided with RHCOS instead.

15 You can optionally provide the sshKey value that you use to access the machines in your cluster.

NOTE

For production OpenShift Container Platform clusters on which you want to perform
installation debugging or disaster recovery, specify an SSH key that your ssh-agent
process uses.

1.4.6. Deploying the cluster


You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during
initial installation.

Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster
deployment:


$ ./openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2

1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.

2 To view different installation details, specify warn, debug, or error instead of info.

NOTE

If the cloud provider account that you configured on your host does not have
sufficient permissions to deploy the cluster, the installation process stops, and
the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to
its web console and credentials for the kubeadmin user, display in your terminal.

Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export
KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-
Wt5AL"
INFO Time elapsed: 36m22s

NOTE

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

IMPORTANT

The Ignition config files that the installation program generates contain
certificates that expire after 24 hours, which are then renewed at that time. If the
cluster is shut down before renewing the certificates and the cluster is later
restarted after the 24 hours have elapsed, the cluster automatically recovers the
expired certificates. The exception is that you must manually approve the
pending node-bootstrapper certificate signing requests (CSRs) to recover
kubelet certificates. See the documentation for Recovering from expired control
plane certificates for more information.
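
A minimal sketch of approving the pending CSRs after such a restart, assuming that the oc CLI is installed and your KUBECONFIG points at the cluster; <csr_name> is a placeholder for a name from the first command's output:

$ oc get csr

$ oc adm certificate approve <csr_name>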

IMPORTANT

You must not delete the installation program or the files that the installation
program creates. Both are required to delete the cluster.

1.4.7. Installing the OpenShift CLI by downloading the binary


You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a
command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands
in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.4.7.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Linux from the drop-down menu and click
Download command-line tools.

4. Unpack the archive:

$ tar xvzf <file>

5. Place the oc binary in a directory that is on your PATH.


To check your PATH, execute the following command:

$ echo $PATH

After you install the CLI, it is available using the oc command:

$ oc <command>
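
As an optional check that the binary is found on your PATH, you can print the client version. This is a minimal sketch; the exact output format varies by release:

$ oc version --client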

1.4.7.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click
Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH.


To check your PATH, open the command prompt and execute the following command:

C:\> path


After you install the CLI, it is available using the oc command:

C:\> oc <command>

1.4.7.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select MacOS from the drop-down menu and click
Download command-line tools.

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH.


To check your PATH, open a terminal and execute the following command:

$ echo $PATH

After you install the CLI, it is available using the oc command:

$ oc <command>

1.4.8. Logging in to the cluster by using the CLI


You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The
kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the
correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container
Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.

2. Verify you can run oc commands successfully using the exported configuration:


$ oc whoami

Example output

system:admin

1.4.9. Next steps


Customize your cluster.

If necessary, you can opt out of remote health reporting.

1.5. INSTALLING A CLUSTER ON AZURE WITH NETWORK CUSTOMIZATIONS

In OpenShift Container Platform version 4.6, you can install a cluster with a customized network
configuration on infrastructure that the installation program provisions on Microsoft Azure. By
customizing your network configuration, your cluster can coexist with existing IP address allocations in
your environment and integrate with existing MTU and VXLAN configurations.

You must set most of the network configuration parameters during installation, and you can modify only
kubeProxy configuration parameters in a running cluster.

1.5.1. Prerequisites

Review details about the OpenShift Container Platform installation and update processes.

Configure an Azure account to host the cluster and determine the tested and validated region
to deploy the cluster to.

If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.5.2. Internet and Telemetry access for OpenShift Container Platform


In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained
automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift
Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management. If the cluster has Internet access and you do not disable
Telemetry, that service automatically entitles your cluster.


Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
Internet access. Before you update the cluster, you update the content of the mirror
registry.

1.5.3. Generating an SSH private key and adding it to the agent


If you want to perform installation debugging or disaster recovery on your cluster, you must provide an
SSH key to both your ssh-agent and the installation program. You can use this key to access the
bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches
such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:

$ ssh-keygen -t ed25519 -N '' \
    -f <path>/<file_name> 1

1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location
that you specified.

2. Start the ssh-agent process as a background task:

$ eval "$(ssh-agent -s)"

Example output


Agent pid 31874

3. Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation
program.
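
The installation program expects the public half of the key pair. Because ssh-keygen writes the public key next to the private key with a .pub suffix, you can view it as follows, using the same placeholder path as above:

$ cat <path>/<file_name>.pub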

1.5.4. Obtaining the installation program


Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you
have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your
operating system, and place the file in the directory where you will store the installation
configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to
install your cluster. You must keep the installation program and the files that the
installation program creates after you finish installing the cluster. Both files are
required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your
cluster, even if the cluster failed during installation. To remove your cluster,
complete the OpenShift Container Platform uninstallation procedures for your
specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating
system, run the following command:

$ tar xvf openshift-install-linux.tar.gz

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your
installation pull secret as a .txt file. This pull secret allows you to authenticate with the services
that are provided by the included authorities, including Quay.io, which serves the container
images for OpenShift Container Platform components.

1.5.5. Creating the installation configuration file


You can customize the OpenShift Container Platform cluster you install on Microsoft Azure.

Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.

Procedure

1. Create the install-config.yaml file.

a. Change to the directory that contains the installation program and run the following
command:

$ ./openshift-install create install-config --dir=<installation_directory> 1

1 For <installation_directory>, specify the directory name to store the files that the
installation program creates.

IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

ii. Select azure as the platform to target.


iii. If you do not have a Microsoft Azure profile stored on your computer, specify the
following Azure parameter values for your subscription and service principal:

azure subscription id: The subscription ID to use for the cluster. Specify the id
value in your account output.

azure tenant id: The tenant ID. Specify the tenantId value in your account output.

azure service principal client id: The value of the appId parameter for the service
principal.

azure service principal client secret: The value of the password parameter for the
service principal.

iv. Select the region to deploy the cluster to.

v. Select the base domain to deploy the cluster to. The base domain corresponds to the
Azure DNS Zone that you created for your cluster.

vi. Enter a descriptive name for your cluster.

IMPORTANT

All Azure resources that are available through public endpoints are
subject to resource name restrictions, and you cannot create resources
that use certain terms. For a list of terms that Azure restricts, see
Resolve reserved resource name errors in the Azure documentation.

vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat
OpenShift Cluster Manager site.

2. Modify the install-config.yaml file. You can find more information about the available
parameters in the Installation configuration parameters section.

3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
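
One minimal way to make such a backup, assuming a hypothetical installation directory named mycluster; the destination file name is arbitrary:

$ cp mycluster/install-config.yaml mycluster-install-config.backup.yaml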

1.5.5.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe
your account on the cloud platform that hosts your cluster and optionally customize your cluster’s
platform. When you create the install-config.yaml installation configuration file, you provide values for
the required parameters through the command line. If you customize your cluster, you can modify the
install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

Table 1.4. Required parameters

apiVersion
    The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.
    Values: String

baseDomain
    The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.
    Values: A fully-qualified domain or subdomain name, such as example.com.

metadata
    Kubernetes resource ObjectMeta, from which only the name parameter is consumed.
    Values: Object

metadata.name
    The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.
    Values: String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform
    The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.
    Values: Object

pullSecret
    Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.
    Values: For example:
    {
      "auths":{
        "cloud.openshift.com":{
          "auth":"b3Blb=",
          "email":"you@example.com"
        },
        "quay.io":{
          "auth":"b3Blb=",
          "email":"you@example.com"
        }
      }
    }

Table 1.5. Optional parameters

additionalTrustBundle
    A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.
    Values: String

compute
    The configuration for the machines that comprise the compute nodes.
    Values: Array of machine-pool objects. For details, see the following "Machine-pool" table.

compute.architecture
    Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
    Values: String

compute.hyperthreading
    Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
    Values: Enabled or Disabled

compute.name
    Required if you use compute. The name of the machine pool.
    Values: worker

compute.platform
    Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.
    Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas
    The number of compute machines, which are also known as worker machines, to provision.
    Values: A positive integer greater than or equal to 2. The default value is 3.

controlPlane
    The configuration for the machines that comprise the control plane.
    Values: Array of MachinePool objects. For details, see the following "Machine-pool" table.

controlPlane.architecture
    Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
    Values: String

controlPlane.hyperthreading
    Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
    Values: Enabled or Disabled

controlPlane.name
    Required if you use controlPlane. The name of the machine pool.
    Values: master

controlPlane.platform
    Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.
    Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas
    The number of control plane machines to provision.
    Values: The only supported value is 3, which is the default value.

credentialsMode
    The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. NOTE: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.
    Values: Mint, Passthrough, Manual, or an empty string ("").

fips
    Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
    Values: false or true

imageContentSources
    Sources and repositories for the release-image content.
    Values: Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source
    Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.
    Values: String

imageContentSources.mirrors
    Specify one or more repositories that may also contain the same images.
    Values: Array of strings

networking
    The configuration for the pod network provider in the cluster.
    Values: Object

networking.clusterNetwork
    The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.
    Values: Array of objects

networking.clusterNetwork.cidr
    Required if you use networking.clusterNetwork. The IP block address pool.
    Values: IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix
    Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.
    Values: Integer

networking.machineNetwork
    The IP address pools for machines.
    Values: Array of objects

networking.machineNetwork.cidr
    Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.
    Values: IP network, in CIDR notation as described for networking.clusterNetwork.cidr.

networking.networkType
    The type of network to install. The default is OpenShiftSDN.
    Values: String

networking.serviceNetwork
    The IP address pools for services. The default is 172.30.0.0/16.
    Values: Array of IP networks, in CIDR notation as described for networking.clusterNetwork.cidr.

publish
    How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes.
    Values: Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey
    The SSH key or keys to authenticate access to your cluster machines. NOTE: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
    Values: One or more keys. For example:
    sshKey:
      <key1>
      <key2>
      <key3>


Table 1.6. Additional Azure parameters

machines.platform.azure.type
    The Azure VM instance type.
    Values: VMs that use Windows or Linux as the operating system. See the Guest operating systems supported on Azure Stack in the Azure documentation.

machines.platform.azure.osDisk.diskSizeGB
    The Azure disk size for the VM.
    Values: Integer that represents the size of the disk in GB, for example 512. The minimum supported disk size is 120.

platform.azure.baseDomainResourceGroupName
    The name of the resource group that contains the DNS zone for your base domain.
    Values: String, for example production_cluster.

platform.azure.outboundType
    The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing.
    Values: LoadBalancer or UserDefinedRouting. The default is LoadBalancer.

platform.azure.region
    The name of the Azure region that hosts your cluster.
    Values: Any valid region name, such as centralus.

platform.azure.zone
    List of availability zones to place machines in. For high availability, specify at least two zones.
    Values: List of zones, for example ["1", "2", "3"].

platform.azure.networkResourceGroupName
    The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName.
    Values: String.

platform.azure.virtualNetwork
    The name of the existing VNet that you want to deploy your cluster to.
    Values: String.

platform.azure.controlPlaneSubnet
    The name of the existing subnet in your VNet that you want to deploy your control plane machines to.
    Values: Valid CIDR, for example 10.0.0.0/16.

platform.azure.computeSubnet
    The name of the existing subnet in your VNet that you want to deploy your compute machines to.
    Values: Valid CIDR, for example 10.0.0.0/16.

platform.azure.cloudName
    The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used.
    Values: Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud.

NOTE

You cannot customize Azure Availability Zones or use tags to organize your Azure resources with an Azure cluster.

1.5.5.2. Network configuration parameters

You can modify your cluster network configuration parameters in the install-config.yaml configuration
file. The following table describes the parameters.

NOTE

You cannot modify these parameters in the install-config.yaml file after installation.

Table 1.7. Required network parameters

networking.networkType
    The default Container Network Interface (CNI) network provider plug-in to deploy.
    Values: Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN.

networking.clusterNetwork[].cidr
    A block of IP addresses from which pod IP addresses are allocated. The OpenShiftSDN network plug-in supports multiple cluster networks. The address blocks for multiple cluster networks must not overlap. Select address pools large enough to fit your anticipated workload.
    Values: An IP address allocation in CIDR format. The default value is 10.128.0.0/14.

networking.clusterNetwork[].hostPrefix
    The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23, then each node is assigned a /23 subnet out of the given cidr, allowing for 510 (2^(32 - 23) - 2) pod IP addresses.
    Values: A subnet prefix. The default value is 23.

networking.serviceNetwork[]
    A block of IP addresses for services. OpenShiftSDN allows only one serviceNetwork block. The address block must not overlap with any other network block.
    Values: An IP address allocation in CIDR format. The default value is 172.30.0.0/16.

networking.machineNetwork[].cidr
    A block of IP addresses assigned to nodes created by the OpenShift Container Platform installation program while installing the cluster. The address block must not overlap with any other network block. Multiple CIDR ranges may be specified.
    Values: An IP address allocation in CIDR format. The default value is 10.0.0.0/16.
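
As a quick sanity check of the hostPrefix arithmetic in the table above, you can evaluate the pod-address formula in a shell. This sketch reproduces the 2^(32 - 23) - 2 = 510 calculation for the default host prefix of 23:

$ echo $(( 2 ** (32 - 23) - 2 ))

Example output

510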

1.5.5.3. Sample customized install-config.yaml file for Azure

You can customize the install-config.yaml file to specify more details about your OpenShift Container
Platform cluster’s platform or modify the values of the required parameters.

IMPORTANT

This sample YAML file is provided for reference only. You must obtain your install-
config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
  hyperthreading: Enabled 3 4
  name: master
  platform:
    azure:
      osDisk:
        diskSizeGB: 1024 5
      type: Standard_D8s_v3
  replicas: 3
compute: 6
- hyperthreading: Enabled 7
  name: worker
  platform:
    azure:
      type: Standard_D2s_v3
      osDisk:
        diskSizeGB: 512 8
      zones: 9
      - "1"
      - "2"
      - "3"
  replicas: 5
metadata:
  name: test-cluster 10
networking: 11
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    region: centralus 12
    baseDomainResourceGroupName: resource_group 13
pullSecret: '{"auths": ...}' 14
fips: false 15
sshKey: ssh-ed25519 AAAA... 16

1 10 12 14 Required. The installation program prompts you for this value.

2 6 11 If you do not provide these parameters and values, the installation program provides the
default value.

3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings.
To meet the requirements of the different data structures, the first line of the compute section
must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both
sections currently define a single machine pool, it is possible that future versions of OpenShift
Container Platform will support defining multiple compute pools during installation. Only one
control plane pool is used.

4 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3, for your machines if you disable simultaneous multithreading.

5 8 You can specify the size of the disk to use in GB. Minimum recommendation for master nodes is
1024 GB.


9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones.

13 Specify the name of the resource group that contains the DNS zone for your base domain.

15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is
enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container
Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography
modules that are provided with RHCOS instead.

16 You can optionally provide the sshKey value that you use to access the machines in your cluster.

NOTE

For production OpenShift Container Platform clusters on which you want to perform
installation debugging or disaster recovery, specify an SSH key that your ssh-agent
process uses.

1.5.6. Modifying advanced network configuration parameters


You can modify the advanced network configuration parameters only before you install the cluster.
Advanced configuration customization lets you integrate your cluster into your existing network
environment by specifying an MTU or VXLAN port, by allowing customization of kube-proxy settings,
and by specifying a different mode for the openshiftSDNConfig parameter.

IMPORTANT

Modifying the OpenShift Container Platform manifest files directly is not supported.

Prerequisites

Create the install-config.yaml file and complete any modifications to it.

Procedure

1. Change to the directory that contains the installation program and create the manifests:

$ ./openshift-install create manifests --dir=<installation_directory> 1

1 For <installation_directory>, specify the name of the directory that contains the install-
config.yaml file for your cluster.

2. Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

$ touch <installation_directory>/manifests/cluster-network-03-config.yml 1

1 For <installation_directory>, specify the directory name that contains the manifests/
directory for your cluster.

After creating the file, several network configuration files are in the manifests/ directory, as
shown:


$ ls <installation_directory>/manifests/cluster-network-*

Example output

cluster-network-01-crd.yml
cluster-network-02-config.yml
cluster-network-03-config.yml

3. Open the cluster-network-03-config.yml file in an editor and enter a CR that describes the
Operator configuration you want:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec: 1
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      mode: NetworkPolicy
      mtu: 1450
      vxlanPort: 4789

1 The parameters for the spec parameter are only an example. Specify your configuration
for the Cluster Network Operator in the CR.

The CNO provides default values for the parameters in the CR, so you must specify only the
parameters that you want to change.

4. Save the cluster-network-03-config.yml file and quit the text editor.

5. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster.
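
A minimal sketch of such a backup, copying the manifest to a location outside the installation directory; the destination path is arbitrary:

$ cp <installation_directory>/manifests/cluster-network-03-config.yml ~/cluster-network-03-config.yml.bak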

1.5.7. Cluster Network Operator configuration


The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO)
configuration and stored in a CR object that is named cluster. The CR specifies the parameters for the
Network API in the operator.openshift.io API group.

You can specify the cluster network configuration for your OpenShift Container Platform cluster by
setting the parameter values for the defaultNetwork parameter in the CNO CR. The following CR
displays the default configuration for the CNO and explains both the parameters you can configure and
the valid parameter values:

Cluster Network Operator CR

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork: 1
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork: 2
  - 172.30.0.0/16
  defaultNetwork: 3
    ...
  kubeProxyConfig: 4
    iptablesSyncPeriod: 30s 5
    proxyArguments:
      iptables-min-sync-period: 6
      - 0s

1 2 Specified in the install-config.yaml file.

3 Configures the default Container Network Interface (CNI) network provider for the cluster
network.

4 The parameters for this object specify the kube-proxy configuration. If you do not specify the
parameter values, the Cluster Network Operator applies the displayed default parameter values. If
you are using the OVN-Kubernetes default CNI network provider, the kube-proxy configuration has
no effect.

5 The refresh period for iptables rules. The default value is 30s. Valid suffixes include s, m, and h
and are described in the Go time package documentation.

NOTE

Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary.

6 The minimum duration before refreshing iptables rules. This parameter ensures that the refresh
does not happen too frequently. Valid suffixes include s, m, and h and are described in the Go time
package.

1.5.7.1. Configuration parameters for the OpenShift SDN default CNI network provider

The following YAML object describes the configuration parameters for the OpenShift SDN default
Container Network Interface (CNI) network provider.

defaultNetwork:
  type: OpenShiftSDN 1
  openshiftSDNConfig: 2
    mode: NetworkPolicy 3
    mtu: 1450 4
    vxlanPort: 4789 5


1 Specified in the install-config.yaml file.

2 Specify only if you want to override part of the OpenShift SDN configuration.

3 Configures the network isolation mode for OpenShift SDN. The allowed values are Multitenant,
Subnet, or NetworkPolicy. The default value is NetworkPolicy.

4 The maximum transmission unit (MTU) for the VXLAN overlay network. This value is normally
configured automatically, but if the nodes in your cluster do not all use the same MTU, then you
must set this explicitly to 50 less than the smallest node MTU value.

5 The port to use for all VXLAN packets. The default value is 4789. If you are running in a virtualized
environment with existing nodes that are part of another VXLAN network, then you might be
required to change this. For example, when running an OpenShift SDN overlay on top of VMware
NSX-T, you must select an alternate port for VXLAN, since both SDNs use the same default
VXLAN port number.

On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port
9000 and port 9999.
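
For example, a cluster-network-03-config.yml that overrides only the VXLAN port might look like the following sketch. The port value 9000 is an illustrative choice, not a recommendation; the CNO supplies defaults for the parameters that you omit:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      vxlanPort: 9000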

1.5.7.2. Configuration parameters for the OVN-Kubernetes default CNI network provider

The following YAML object describes the configuration parameters for the OVN-Kubernetes default
CNI network provider.

defaultNetwork:
  type: OVNKubernetes 1
  ovnKubernetesConfig: 2
    mtu: 1400 3
    genevePort: 6081 4

1 Specified in the install-config.yaml file.

2 Specify only if you want to override part of the OVN-Kubernetes configuration.

3 The MTU for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This
value is normally configured automatically, but if the nodes in your cluster do not all use the same
MTU, then you must set this explicitly to 100 less than the smallest node MTU value.

4 The UDP port for the Geneve overlay network.
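
Similarly, a minimal sketch of an override that sets only the Geneve port for OVN-Kubernetes; replace 6081, which is the default shown above, with the port that your environment requires:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      genevePort: 6081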

1.5.7.3. Cluster Network Operator example configuration

A complete CR object for the CNO is displayed in the following example:

Cluster Network Operator example CR

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      mode: NetworkPolicy
      mtu: 1450
      vxlanPort: 4789
  kubeProxyConfig:
    iptablesSyncPeriod: 30s
    proxyArguments:
      iptables-min-sync-period:
      - 0s

1.5.8. Configuring hybrid networking with OVN-Kubernetes


You can configure your cluster to use hybrid networking with OVN-Kubernetes. This allows a hybrid
cluster that supports different node networking configurations. For example, this is necessary to run
both Linux and Windows nodes in a cluster.

IMPORTANT

You must configure hybrid networking with OVN-Kubernetes during the installation of
your cluster. You cannot switch to hybrid networking after the installation process.

Prerequisites

You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information.

Procedure

1. Create the manifests from the directory that contains the installation program:

$ ./openshift-install create manifests --dir=<installation_directory> 1

1 For <installation_directory>, specify the name of the directory that contains the install-
config.yaml file for your cluster.

2. Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

$ touch <installation_directory>/manifests/cluster-network-03-config.yml 1

1 For <installation_directory>, specify the directory name that contains the manifests/
directory for your cluster.

After creating the file, several network configuration files are in the manifests/ directory, as
shown:


$ ls -1 <installation_directory>/manifests/cluster-network-*

Example output

cluster-network-01-crd.yml
cluster-network-02-config.yml
cluster-network-03-config.yml

3. Open the cluster-network-03-config.yml file and configure OVN-Kubernetes with hybrid


networking. For example:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  creationTimestamp: null
  name: cluster
spec: 1
  clusterNetwork: 2
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  externalIP:
    policy: {}
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: OVNKubernetes 3
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork: 4
        - cidr: 10.132.0.0/14
          hostPrefix: 23
status: {}

1 The parameters for the spec parameter are only an example. Specify your configuration
for the Cluster Network Operator in the custom resource.

2 Specify the CIDR configuration used when adding nodes.

3 Specify OVNKubernetes as the Container Network Interface (CNI) cluster network provider.

4 Specify the CIDR configuration used for nodes on the additional overlay network. The
hybridClusterNetwork CIDR cannot overlap with the clusterNetwork CIDR.

4. Optional: Back up the <installation_directory>/manifests/cluster-network-03-config.yml file.


The installation program deletes the manifests/ directory when creating the cluster.

NOTE

For more information on using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads.
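
After the cluster is installed, you can read the configuration back from the Cluster Network Operator to confirm that the hybrid overlay settings were applied. This is a read-only check:

$ oc get networks.operator.openshift.io cluster -o yaml

Look for the hybridOverlayConfig stanza under spec.defaultNetwork.ovnKubernetesConfig in the output.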


1.5.9. Deploying the cluster


You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during
initial installation.

Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster
deployment:

$ ./openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2

1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.

2 To view different installation details, specify warn, debug, or error instead of info.

NOTE

If the cloud provider account that you configured on your host does not have
sufficient permissions to deploy the cluster, the installation process stops, and
the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to
its web console and credentials for the kubeadmin user, display in your terminal.

Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export
KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-
console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-
Wt5AL"
INFO Time elapsed: 36m22s

NOTE

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

IMPORTANT

The Ignition config files that the installation program generates contain
certificates that expire after 24 hours, which are then renewed at that time. If the
cluster is shut down before renewing the certificates and the cluster is later
restarted after the 24 hours have elapsed, the cluster automatically recovers the
expired certificates. The exception is that you must manually approve the
pending node-bootstrapper certificate signing requests (CSRs) to recover
kubelet certificates. See the documentation for Recovering from expired control
plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation
program creates. Both are required to delete the cluster.

1.5.10. Installing the OpenShift CLI by downloading the binary


You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a
command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands
in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.5.10.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Linux from the drop-down menu and click
Download command-line tools.

4. Unpack the archive:

$ tar xvzf <file>

5. Place the oc binary in a directory that is on your PATH.


To check your PATH, execute the following command:

$ echo $PATH


After you install the CLI, it is available using the oc command:

$ oc <command>
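
For example, you can check the client version without connecting to a cluster:

$ oc version --client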

1.5.10.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click
Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH.


To check your PATH, open the command prompt and execute the following command:

C:\> path

After you install the CLI, it is available using the oc command:

C:\> oc <command>

1.5.10.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select MacOS from the drop-down menu and click
Download command-line tools.

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH.


To check your PATH, open a terminal and execute the following command:

$ echo $PATH

After you install the CLI, it is available using the oc command:

$ oc <command>


1.5.11. Logging in to the cluster by using the CLI


You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The
kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the
correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container
Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin
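
You can also list the cluster nodes to confirm that the exported kubeconfig points to the new cluster:

$ oc get nodes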

1.5.12. Next steps


Customize your cluster.

If necessary, you can opt out of remote health reporting.

1.6. INSTALLING A CLUSTER ON AZURE INTO AN EXISTING VNET


In OpenShift Container Platform version 4.6, you can install a cluster into an existing Azure Virtual
Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required
infrastructure, which you can further customize. To customize the installation, you modify parameters in
the install-config.yaml file before you install the cluster.

1.6.1. Prerequisites
Review details about the OpenShift Container Platform installation and update processes.

Configure an Azure account to host the cluster and determine the tested and validated region
to deploy the cluster to.

If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster
administrator can manually create and maintain IAM credentials. Manual mode can also be used
in environments where the cloud IAM APIs are not reachable.

1.6.2. About reusing a VNet for your OpenShift Container Platform cluster
In OpenShift Container Platform 4.6, you can deploy a cluster into an existing Azure Virtual Network
(VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing
rules.

By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid
service limit constraints in new accounts or more easily abide by the operational constraints that your
company’s guidelines set. This is a good option to use if you cannot obtain the infrastructure creation
permissions that are required to create the VNet.

IMPORTANT

The use of an existing VNet requires the use of the updated Azure Private DNS (preview)
feature. See Announcing Preview Refresh for Azure DNS Private Zones for more
information about the limitations of this feature.

1.6.2.1. Requirements for using your VNet

When you deploy a cluster by using an existing VNet, you must perform additional network configuration
before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates
the following components, but it does not create them when you install into an existing VNet:

Subnets

Route tables

VNets

Network Security Groups

If you use a custom VNet, you must correctly configure it and its subnets for the installation program
and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use,
set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the
cluster.

The cluster must be able to access the resource group that contains the existing VNet and subnets.
While all of the resources that the cluster creates are placed in a separate resource group that it
creates, some network resources are used from a separate group. Some cluster Operators must be able
to access resources in both resource groups. For example, the Machine API controller attaches NICs for
the virtual machines that it creates to subnets from the networking resource group.

Your VNet must meet the following characteristics:

The VNet’s CIDR block must contain the Networking.MachineCIDR range, which is the IP
address pool for cluster machines.

The VNet and its subnets must belong to the same resource group, and the subnets must be
configured to use Azure-assigned DHCP IP addresses instead of static IP addresses.

You must provide two subnets within your VNet, one for the control plane machines and one for the
compute machines. Because Azure distributes machines in different availability zones within the region
that you specify, your cluster will have high availability by default.

To ensure that the subnets that you provide are suitable, the installation program confirms the following
data:

All the subnets that you specify exist.

You provide two private subnets for each availability zone.

The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned
in availability zones that you do not provide private subnets for. If required, the installation
program creates public load balancers that manage the control plane and worker nodes, and
Azure allocates a public IP address to them.

If you destroy a cluster that uses an existing VNet, the VNet is not deleted.
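
If you need to create a VNet and subnets that satisfy these requirements, one option is the Azure CLI. The following is a sketch only; the resource group name, VNet name, subnet names, and CIDR ranges are illustrative assumptions and must match the values that you later provide in the install-config.yaml file:

$ az network vnet create \
    --resource-group vnet_resource_group \
    --name vnet \
    --address-prefix 10.0.0.0/16

$ az network vnet subnet create \
    --resource-group vnet_resource_group \
    --vnet-name vnet \
    --name control_plane_subnet \
    --address-prefixes 10.0.0.0/24

$ az network vnet subnet create \
    --resource-group vnet_resource_group \
    --vnet-name vnet \
    --name compute_subnet \
    --address-prefixes 10.0.1.0/24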

1.6.2.1.1. Network security group requirements

The network security groups for the subnets that host the compute and control plane machines require
specific access to ensure that the cluster communication is correct. You must create rules to allow
access to the required cluster communication ports.

IMPORTANT

The network security group rules must be in place before you install the cluster. If you
attempt to install a cluster without the required access, the installation program cannot
reach the Azure APIs, and installation fails.

Table 1.8. Required ports

Port    Description                                          Control plane    Compute

80      Allows HTTP traffic                                                   x

443     Allows HTTPS traffic                                                  x

6443    Allows communication to the control plane machines   x                x

22623   Allows communication to the machine config server    x                x
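
As an illustration, you can add an inbound rule for the API port to an existing network security group by using the Azure CLI. The group name, rule name, and priority below are hypothetical:

$ az network nsg rule create \
    --resource-group vnet_resource_group \
    --nsg-name cluster-nsg \
    --name allow-openshift-api \
    --priority 101 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --destination-port-ranges 6443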

1.6.2.2. Division of permissions

Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required
for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics
the division of permissions that you might have at your company: some individuals can create different
resources in your clouds than others. For example, you might be able to create application-specific
items, like instances, storage, and load balancers, but not networking-related components such as
VNets, subnet, or ingress rules.

The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes.

1.6.2.3. Isolation between clusters

Because the cluster is unable to modify network security groups in an existing subnet, there is no way to
isolate clusters from each other on the VNet.

1.6.3. Internet and Telemetry access for OpenShift Container Platform


In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The
Telemetry service, which runs by default to provide metrics about cluster health and the success of
updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs
automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM) .

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained
automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift
Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management. If the cluster has Internet access and you do not disable
Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
Internet access. Before you update the cluster, you update the content of the mirror
registry.

1.6.4. Generating an SSH private key and adding it to the agent


If you want to perform installation debugging or disaster recovery on your cluster, you must provide an
SSH key to both your ssh-agent and the installation program. You can use this key to access the
bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches
such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:

$ ssh-keygen -t ed25519 -N '' \


-f <path>/<file_name> 1

1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location
that you specified.

2. Start the ssh-agent process as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

3. Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation
program.
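
You can optionally confirm which keys the agent holds before you run the installation program:

$ ssh-add -l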

1.6.5. Obtaining the installation program


Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure


1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you
have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your
operating system, and place the file in the directory where you will store the installation
configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to
install your cluster. You must keep the installation program and the files that the
installation program creates after you finish installing the cluster. Both files are
required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your
cluster, even if the cluster failed during installation. To remove your cluster,
complete the OpenShift Container Platform uninstallation procedures for your
specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating
system, run the following command:

$ tar xvf openshift-install-linux.tar.gz

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your
installation pull secret as a .txt file. This pull secret allows you to authenticate with the services
that are provided by the included authorities, including Quay.io, which serves the container
images for OpenShift Container Platform components.

1.6.6. Creating the installation configuration file


You can customize the OpenShift Container Platform cluster you install on Microsoft Azure.

Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.

Procedure

1. Create the install-config.yaml file.

a. Change to the directory that contains the installation program and run the following
command:

$ ./openshift-install create install-config --dir=<installation_directory> 1

1 For <installation_directory>, specify the directory name to store the files that the
installation program creates.

IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

ii. Select azure as the platform to target.

iii. If you do not have a Microsoft Azure profile stored on your computer, specify the
following Azure parameter values for your subscription and service principal:

azure subscription id: The subscription ID to use for the cluster. Specify the id
value in your account output.

azure tenant id: The tenant ID. Specify the tenantId value in your account output.

azure service principal client id: The value of the appId parameter for the service
principal.

azure service principal client secret: The value of the password parameter for the
service principal.

iv. Select the region to deploy the cluster to.

v. Select the base domain to deploy the cluster to. The base domain corresponds to the
Azure DNS Zone that you created for your cluster.

vi. Enter a descriptive name for your cluster.

IMPORTANT

All Azure resources that are available through public endpoints are
subject to resource name restrictions, and you cannot create resources
that use certain terms. For a list of terms that Azure restricts, see
Resolve reserved resource name errors in the Azure documentation.

vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat
OpenShift Cluster Manager site.

2. Modify the install-config.yaml file. You can find more information about the available
parameters in the Installation configuration parameters section.


3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you


want to reuse the file, you must back it up now.

1.6.6.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe
your account on the cloud platform that hosts your cluster and optionally customize your cluster’s
platform. When you create the install-config.yaml installation configuration file, you provide values for
the required parameters through the command line. If you customize your cluster, you can modify the
install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

Table 1.9. Required parameters

apiVersion
    Description: The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.
    Values: String

baseDomain
    Description: The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.
    Values: A fully-qualified domain or subdomain name, such as example.com.

metadata
    Description: Kubernetes resource ObjectMeta, from which only the name parameter is consumed.
    Values: Object

metadata.name
    Description: The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.
    Values: String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform
    Description: The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.
    Values: Object

pullSecret
    Description: Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.
    Values: For example:

    {
      "auths":{
        "cloud.openshift.com":{
          "auth":"b3Blb=",
          "email":"[email protected]"
        },
        "quay.io":{
          "auth":"b3Blb=",
          "email":"[email protected]"
        }
      }
    }

Table 1.10. Optional parameters

additionalTrustBundle
    Description: A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.
    Values: String

compute
    Description: The configuration for the machines that comprise the compute nodes.
    Values: Array of machine-pool objects. For details, see the following "Machine-pool" table.

compute.architecture
    Description: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
    Values: String

compute.hyperthreading
    Description: Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
    Values: Enabled or Disabled

compute.name
    Description: Required if you use compute. The name of the machine pool.
    Values: worker

compute.platform
    Description: Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.
    Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas
    Description: The number of compute machines, which are also known as worker machines, to provision.
    Values: A positive integer greater than or equal to 2. The default value is 3.

controlPlane
    Description: The configuration for the machines that comprise the control plane.
    Values: Array of MachinePool objects. For details, see the following "Machine-pool" table.

controlPlane.architecture
    Description: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
    Values: String

controlPlane.hyperthreading
    Description: Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
    Values: Enabled or Disabled

controlPlane.name
    Description: Required if you use controlPlane. The name of the machine pool.
    Values: master

controlPlane.platform
    Description: Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.
    Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas
    Description: The number of control plane machines to provision.
    Values: The only supported value is 3, which is the default value.

credentialsMode
    Description: The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. NOTE: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.
    Values: Mint, Passthrough, Manual, or an empty string ("")

fips
    Description: Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
    Values: false or true

imageContentSources
    Description: Sources and repositories for the release-image content.
    Values: Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source
    Description: Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.
    Values: String

imageContentSources.mirrors
    Description: Specify one or more repositories that may also contain the same images.
    Values: Array of strings

networking
    Description: The configuration for the pod network provider in the cluster.
    Values: Object

networking.clusterNetwork
    Description: The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.
    Values: Array of objects

networking.clusterNetwork.cidr
    Description: Required if you use networking.clusterNetwork. The IP block address pool.
    Values: IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix
    Description: Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.
    Values: Integer

networking.machineNetwork
    Description: The IP address pools for machines.
    Values: Array of objects

networking.machineNetwork.cidr
    Description: Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.
    Values: IP network, in CIDR notation as described for networking.clusterNetwork.cidr.

networking.networkType
    Description: The type of network to install. The default is OpenShiftSDN.
    Values: String

networking.serviceNetwork
    Description: The IP address pools for services. The default is 172.30.0.0/16.
    Values: Array of IP networks, in CIDR notation as described for networking.clusterNetwork.cidr.

publish
    Description: How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes.
    Values: Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey
    Description: The SSH key or keys to authenticate access to your cluster machines. NOTE: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
    Values: One or more keys. For example:

    sshKey:
      <key1>
      <key2>
      <key3>

Table 1.11. Additional Azure parameters

machines.platform.azure.type
    Description: The Azure VM instance type.
    Values: VMs that use Windows or Linux as the operating system. See the Guest operating systems supported on Azure Stack in the Azure documentation.

machines.platform.azure.osDisk.diskSizeGB
    Description: The Azure disk size for the VM.
    Values: Integer that represents the size of the disk in GB, for example 512. The minimum supported disk size is 120.

platform.azure.baseDomainResourceGroupName
    Description: The name of the resource group that contains the DNS zone for your base domain.
    Values: String, for example production_cluster.

platform.azure.outboundType
    Description: The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing.
    Values: LoadBalancer or UserDefinedRouting. The default is LoadBalancer.

platform.azure.region
    Description: The name of the Azure region that hosts your cluster.
    Values: Any valid region name, such as centralus.

platform.azure.zone
    Description: List of availability zones to place machines in. For high availability, specify at least two zones.
    Values: List of zones, for example ["1", "2", "3"].

platform.azure.networkResourceGroupName
    Description: The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName.
    Values: String

platform.azure.virtualNetwork
    Description: The name of the existing VNet that you want to deploy your cluster to.
    Values: String

platform.azure.controlPlaneSubnet
    Description: The name of the existing subnet in your VNet that you want to deploy your control plane machines to.
    Values: Valid CIDR, for example 10.0.0.0/16.

platform.azure.computeSubnet
    Description: The name of the existing subnet in your VNet that you want to deploy your compute machines to.
    Values: Valid CIDR, for example 10.0.0.0/16.

platform.azure.cloudName
    Description: The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used.
    Values: Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud.

NOTE

You cannot customize Azure Availability Zones or Use tags to organize your Azure
resources with an Azure cluster.

1.6.6.2. Sample customized install-config.yaml file for Azure

You can customize the install-config.yaml file to specify more details about your OpenShift Container
Platform cluster’s platform or modify the values of the required parameters.

IMPORTANT

This sample YAML file is provided for reference only. You must obtain your install-
config.yaml file by using the installation program and modify it.


apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
  hyperthreading: Enabled 3 4
  name: master
  platform:
    azure:
      osDisk:
        diskSizeGB: 1024 5
      type: Standard_D8s_v3
  replicas: 3
compute: 6
- hyperthreading: Enabled 7
  name: worker
  platform:
    azure:
      type: Standard_D2s_v3
      osDisk:
        diskSizeGB: 512 8
      zones: 9
      - "1"
      - "2"
      - "3"
  replicas: 5
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    region: centralus 11
    baseDomainResourceGroupName: resource_group 12
    networkResourceGroupName: vnet_resource_group 13
    virtualNetwork: vnet 14
    controlPlaneSubnet: control_plane_subnet 15
    computeSubnet: compute_subnet 16
pullSecret: '{"auths": ...}' 17
fips: false 18
sshKey: ssh-ed25519 AAAA... 19

1 10 11 17 Required. The installation program prompts you for this value.

2 6 If you do not provide these parameters and values, the installation program provides the default
value.

3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings.
To meet the requirements of the different data structures, the first line of the compute section
must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both
sections currently define a single machine pool, it is possible that future versions of OpenShift
Container Platform will support defining multiple compute pools during installation. Only one
control plane pool is used.

4 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3, for your machines if you disable simultaneous multithreading.

5 8 You can specify the size of the disk to use in GB. Minimum recommendation for master nodes is
1024 GB.

9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones.

12 Specify the name of the resource group that contains the DNS zone for your base domain.

13 If you use an existing VNet, specify the name of the resource group that contains it.

14 If you use an existing VNet, specify its name.

15 If you use an existing VNet, specify the name of the subnet to host the control plane machines.

16 If you use an existing VNet, specify the name of the subnet to host the compute machines.

18 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is
enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container
Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography
modules that are provided with RHCOS instead.

19 You can optionally provide the sshKey value that you use to access the machines in your cluster.

NOTE

For production OpenShift Container Platform clusters on which you want to perform
installation debugging or disaster recovery, specify an SSH key that your ssh-agent
process uses.

1.6.6.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS
proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by
configuring the proxy settings in the install-config.yaml file.

Prerequisites


You have an existing install-config.yaml file.

You reviewed the sites that your cluster requires access to and determined whether any of
them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to
hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to
bypass the proxy if necessary.

NOTE

The Proxy object status.noProxy field is populated with the values of the
networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and
networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP),
Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object
status.noProxy field is also populated with the instance metadata endpoint
(169.254.169.254).

Procedure

1. Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
...

1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme
must be http. If you use an MITM transparent proxy network that does not require
additional proxy configuration but requires additional CAs, you must not specify an
httpProxy value.

2 A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not
specified, then httpProxy is used for both HTTP and HTTPS connections. If you use an
MITM transparent proxy network that does not require additional proxy configuration but
requires additional CAs, you must not specify an httpsProxy value.

3 A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to include all subdomains of that domain. Use * to bypass proxy for all destinations.

4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the Proxy object's trustedCA field. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must provide the MITM CA certificate.

NOTE

The installation program does not support the proxy readinessEndpoints field.

2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings
in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still
created, but it will have a nil spec.

NOTE

Only the Proxy object named cluster is supported, and no additional proxies can be
created.
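
After the installation completes, you can review the proxy configuration that was applied:

$ oc get proxy cluster -o yaml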

1.6.7. Deploying the cluster


You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during
initial installation.

Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster
deployment:

$ ./openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2

1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.

2 To view different installation details, specify warn, debug, or error instead of info.

NOTE

If the cloud provider account that you configured on your host does not have
sufficient permissions to deploy the cluster, the installation process stops, and
the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to
its web console and credentials for the kubeadmin user, display in your terminal.

Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export
KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-
console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-
Wt5AL"
INFO Time elapsed: 36m22s

NOTE

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

IMPORTANT

The Ignition config files that the installation program generates contain
certificates that expire after 24 hours, which are then renewed at that time. If the
cluster is shut down before renewing the certificates and the cluster is later
restarted after the 24 hours have elapsed, the cluster automatically recovers the
expired certificates. The exception is that you must manually approve the
pending node-bootstrapper certificate signing requests (CSRs) to recover
kubelet certificates. See the documentation for Recovering from expired control
plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation
program creates. Both are required to delete the cluster.

1.6.8. Installing the OpenShift CLI by downloading the binary


You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a
command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands
in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.6.8.1. Installing the OpenShift CLI on Linux


You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Linux from the drop-down menu and click
Download command-line tools.

4. Unpack the archive:

$ tar xvzf <file>

5. Place the oc binary in a directory that is on your PATH.


To check your PATH, execute the following command:

$ echo $PATH

After you install the CLI, it is available using the oc command:

$ oc <command>

1.6.8.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click
Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH.


To check your PATH, open the command prompt and execute the following command:

C:\> path

After you install the CLI, it is available using the oc command:

C:\> oc <command>

1.6.8.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.


Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select MacOS from the drop-down menu and click
Download command-line tools.

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH.


To check your PATH, open a terminal and execute the following command:

$ echo $PATH

After you install the CLI, it is available using the oc command:

$ oc <command>

1.6.9. Logging in to the cluster by using the CLI


You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The
kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the
correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container
Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin

1.6.10. Next steps


Customize your cluster.

If necessary, you can opt out of remote health reporting.

1.7. INSTALLING A PRIVATE CLUSTER ON AZURE


In OpenShift Container Platform version 4.6, you can install a private cluster into an existing Azure
Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required
infrastructure, which you can further customize. To customize the installation, you modify parameters in
the install-config.yaml file before you install the cluster.

1.7.1. Prerequisites
Review details about the OpenShift Container Platform installation and update processes.

Configure an Azure account to host the cluster and determine the tested and validated region
to deploy the cluster to.

If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster
administrator can manually create and maintain IAM credentials. Manual mode can also be used
in environments where the cloud IAM APIs are not reachable.

1.7.2. Private clusters


You can deploy a private OpenShift Container Platform cluster that does not expose external
endpoints. Private clusters are accessible from only an internal network and are not visible to the
Internet.

By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints.
A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your
cluster. This means that the cluster resources are only accessible from your internal network and are not
visible to the internet.

To deploy a private cluster, you must use existing networking that meets your requirements. Your cluster
resources might be shared between other clusters on the network.

Additionally, you must deploy a private cluster from a machine that has access to the API services for the cloud that you provision to, the hosts on the network that you provision, and to the internet to obtain
installation media. You can use any machine that meets these access requirements and follows your
company’s guidelines. For example, this machine can be a bastion host on your cloud network or a
machine that has access to the network through a VPN.

1.7.2.1. Private clusters in Azure

To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to
host the cluster. The installation program must also be able to resolve the DNS records that the cluster
requires. The installation program configures the Ingress Operator and API server for only internal
traffic.

Depending on how your network connects to the private VNet, you might need to use a DNS forwarder in
order to resolve the cluster’s private DNS records. The cluster’s machines use 168.63.129.16 internally
for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address
168.63.129.16? in the Azure documentation.


The cluster still requires access to the Internet to access the Azure APIs.

The following items are not required or created when you install a private cluster:

A BaseDomainResourceGroup, since the cluster does not create public records

Public IP addresses

Public DNS records

Public endpoints

The cluster is configured so that the Operators do not create public records for the cluster
and all cluster machines are placed in the private subnets that you specify.

1.7.2.1.1. Limitations

Private clusters on Azure are subject to only the limitations that are associated with the use of an
existing VNet.

1.7.2.2. User-defined outbound routing

In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to
the Internet. This allows you to skip the creation of public IP addresses and the public load balancer.

You can configure user-defined routing by modifying parameters in the install-config.yaml file before
installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster;
the installation program is not responsible for configuring this.

When configuring a cluster to use user-defined routing, the installation program does not create the
following resources:

Outbound rules for access to the Internet.

Public IPs for the public load balancer.

Kubernetes Service object to add the cluster machines to the public load balancer for outbound
requests.

You must ensure the following items are available before setting user-defined routing:

Egress to the Internet is possible to pull container images, unless using an internal registry
mirror.

The cluster can access Azure APIs.

Various allowlist endpoints are configured. You can reference these endpoints in the
Configuring your firewall section.

There are several pre-existing networking setups that are supported for Internet access using user-
defined routing.

Private cluster with network address translation


You can use Azure VNet network address translation (NAT) to provide outbound Internet access for
the subnets in your cluster. You can reference Create a NAT gateway using Azure CLI in the Azure
documentation for configuration instructions.
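
As a sketch of that approach, the following Azure CLI commands create a NAT gateway with a public IP address and attach it to a subnet. All names are illustrative assumptions; see the linked Azure documentation for the authoritative procedure:

$ az network public-ip create \
    --resource-group vnet_resource_group \
    --name nat-gateway-ip \
    --sku Standard

$ az network nat gateway create \
    --resource-group vnet_resource_group \
    --name cluster-nat-gateway \
    --public-ip-addresses nat-gateway-ip

$ az network vnet subnet update \
    --resource-group vnet_resource_group \
    --vnet-name vnet \
    --name compute_subnet \
    --nat-gateway cluster-nat-gateway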


When using a VNet setup with Azure NAT and user-defined routing configured, you can create a private
cluster with no public endpoints.

Private cluster with Azure Firewall


You can use Azure Firewall to provide outbound routing for the VNet used to install the cluster. You can
learn more about providing user-defined routing with Azure Firewall in the Azure documentation.

When using a VNet setup with Azure Firewall and user-defined routing configured, you can create a
private cluster with no public endpoints.

Private cluster with a proxy configuration


You can use a proxy with user-defined routing to allow egress to the Internet. You must ensure that
cluster Operators do not access Azure APIs using a proxy; Operators must have access to Azure APIs
outside of the proxy.

When using the default route table for subnets, with 0.0.0.0/0 populated automatically by Azure, all
Azure API requests are routed over Azure’s internal network even though the IP addresses are public. As
long as the Network Security Group rules allow egress to Azure API endpoints, proxies with user-defined
routing configured allow you to create private clusters with no public endpoints.

Private cluster with no Internet access


You can have VNets with no access to the Internet if your cluster has access to the following:

An internal registry mirror that allows for pulling container images

Access to Azure APIs

With these requirements available, you can use user-defined routing to create private clusters with no
public endpoints.
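If you use a registry mirror, you describe it to the cluster through the imageContentSources parameter in the install-config.yaml file, which is listed in the parameter tables later in this section. A sketch follows; the mirror host, port, and repository name are placeholders:

imageContentSources:
- mirrors:
  - <mirror_host>:<port>/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_host>:<port>/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev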

1.7.3. About reusing a VNet for your OpenShift Container Platform cluster
In OpenShift Container Platform 4.6, you can deploy a cluster into an existing Azure Virtual Network
(VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing
rules.

By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid
service limit constraints in new accounts or more easily abide by the operational constraints that your
company’s guidelines set. This is a good option to use if you cannot obtain the infrastructure creation
permissions that are required to create the VNet.

IMPORTANT

The use of an existing VNet requires the use of the updated Azure Private DNS (preview)
feature. See Announcing Preview Refresh for Azure DNS Private Zones for more
information about the limitations of this feature.

1.7.3.1. Requirements for using your VNet

When you deploy a cluster by using an existing VNet, you must perform additional network configuration
before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates
the following components, but it does not create them when you install into an existing VNet:

Subnets

Route tables


VNets

Network Security Groups

If you use a custom VNet, you must correctly configure it and its subnets for the installation program
and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use,
set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the
cluster.

The cluster must be able to access the resource group that contains the existing VNet and subnets.
While all of the resources that the cluster creates are placed in a separate resource group that it
creates, some network resources are used from a separate group. Some cluster Operators must be able
to access resources in both resource groups. For example, the Machine API controller attaches NICs for
the virtual machines that it creates to subnets from the networking resource group.

Your VNet must meet the following characteristics:

The VNet’s CIDR block must contain the Networking.MachineCIDR range, which is the IP
address pool for cluster machines.

The VNet and its subnets must belong to the same resource group, and the subnets must be
configured to use Azure-assigned DHCP IP addresses instead of static IP addresses.

You must provide two subnets within your VNet, one for the control plane machines and one for the
compute machines. Because Azure distributes machines in different availability zones within the region
that you specify, your cluster will have high availability by default.

To ensure that the subnets that you provide are suitable, the installation program confirms the following
data:

All the subnets that you specify exist.

You provide two private subnets for each availability zone.

The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned
in availability zones that you do not provide private subnets for. If required, the installation
program creates public load balancers that manage the control plane and worker nodes, and
Azure allocates a public IP address to them.
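Before you run the installation program, you can sanity-check the existing network by listing the subnets and their address prefixes; this sketch assumes the Azure CLI and placeholder resource names:

$ az network vnet subnet list --resource-group <network_resource_group> --vnet-name <vnet> \
    --query '[].{name:name, prefix:addressPrefix}' --output table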

If you destroy a cluster that uses an existing VNet, the VNet is not deleted.

1.7.3.1.1. Network security group requirements

The network security groups for the subnets that host the compute and control plane machines require
specific access to ensure that the cluster communication is correct. You must create rules to allow
access to the required cluster communication ports.

IMPORTANT

The network security group rules must be in place before you install the cluster. If you
attempt to install a cluster without the required access, the installation program cannot
reach the Azure APIs, and installation fails.

Table 1.12. Required ports

Port     Description                                           Control plane    Compute

80       Allows HTTP traffic                                                    x

443      Allows HTTPS traffic                                                   x

6443     Allows communication to the control plane machines    x

22623    Allows communication to the machine config server     x
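For example, a rule that admits API traffic on port 6443 might be created with the Azure CLI as in the following sketch; the network security group name, priority, and any source restrictions are placeholders that must match your environment:

$ az network nsg rule create --resource-group <resource_group> --nsg-name <nsg_name> \
    --name allow-openshift-api --priority 100 --direction Inbound --access Allow \
    --protocol Tcp --destination-port-ranges 6443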

1.7.3.2. Division of permissions

Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required
for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics
the division of permissions that you might have at your company: some individuals can create different
resources in your clouds than others. For example, you might be able to create application-specific
items, like instances, storage, and load balancers, but not networking-related components such as
VNets, subnets, or ingress rules.

The Azure credentials that you use when you create your cluster do not need the networking permissions
that are required to make VNets and core networking components within the VNet, such as subnets,
route tables, NAT gateways, and VPN gateways. You still need permission to make the application
resources that the machines within the cluster require, such as load balancers, network security groups,
storage accounts, and nodes.

1.7.3.3. Isolation between clusters

Because the cluster is unable to modify network security groups in an existing subnet, there is no way to
isolate clusters from each other on the VNet.

1.7.4. Internet and Telemetry access for OpenShift Container Platform


In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The
Telemetry service, which runs by default to provide metrics about cluster health and the success of
updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs
automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained
automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift
Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management. If the cluster has Internet access and you do not disable
Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
Internet access. Before you update the cluster, you update the content of the mirror
registry.

1.7.5. Generating an SSH private key and adding it to the agent


If you want to perform installation debugging or disaster recovery on your cluster, you must provide an
SSH key to both your ssh-agent and the installation program. You can use this key to access the
bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, disaster recovery and debugging are required.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches
such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:

$ ssh-keygen -t ed25519 -N '' \
    -f <path>/<file_name> 1

1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location
that you specified.

2. Start the ssh-agent process as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

3. Add your SSH private key to the ssh-agent:


$ ssh-add <path>/<file_name> 1

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation
program.

1.7.6. Obtaining the installation program


Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you
have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your
operating system, and place the file in the directory where you will store the installation
configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to
install your cluster. You must keep the installation program and the files that the
installation program creates after you finish installing the cluster. Both are
required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your
cluster, even if the cluster failed during installation. To remove your cluster,
complete the OpenShift Container Platform uninstallation procedures for your
specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating
system, run the following command:

$ tar xvf openshift-install-linux.tar.gz


5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your
installation pull secret as a .txt file. This pull secret allows you to authenticate with the services
that are provided by the included authorities, including Quay.io, which serves the container
images for OpenShift Container Platform components.

1.7.7. Manually creating the installation configuration file


For installations of a private OpenShift Container Platform cluster that are only accessible from an
internal network and are not visible to the Internet, you must manually generate your installation
configuration file.

Prerequisites

Obtain the OpenShift Container Platform installation program and the access token for your
cluster.

Procedure

1. Create an installation directory to store your required installation assets in:

$ mkdir <installation_directory>

IMPORTANT

You must create a directory. Some installation assets, like bootstrap X.509
certificates, have short expiration intervals, so you must not reuse an installation
directory. If you want to reuse individual files from another cluster installation,
you can copy them into your directory. However, the file names for the
installation assets might change between releases. Use caution when copying
installation files from an earlier OpenShift Container Platform version.

2. Customize the following install-config.yaml file template and save it in the <installation_directory>.

NOTE

You must name this configuration file install-config.yaml.

3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the next step of the installation
process. You must back it up now.
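A plain copy is sufficient for the backup; for example, with a placeholder backup location:

$ cp <installation_directory>/install-config.yaml <backup_directory>/install-config.yaml.backup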

1.7.7.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe
your account on the cloud platform that hosts your cluster and optionally customize your cluster’s
platform. When you create the install-config.yaml installation configuration file, you provide values for
the required parameters through the command line. If you customize your cluster, you can modify the
install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

Table 1.13. Required parameters

apiVersion
    Description: The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.
    Values: String

baseDomain
    Description: The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.
    Values: A fully-qualified domain or subdomain name, such as example.com.

metadata
    Description: Kubernetes resource ObjectMeta, from which only the name parameter is consumed.
    Values: Object

metadata.name
    Description: The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.
    Values: String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform
    Description: The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.
    Values: Object

pullSecret
    Description: Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.
    Values: For example:

    {
       "auths":{
          "cloud.openshift.com":{
             "auth":"b3Blb=",
             "email":"[email protected]"
          },
          "quay.io":{
             "auth":"b3Blb=",
             "email":"[email protected]"
          }
       }
    }

Table 1.14. Optional parameters

additionalTrustBundle
    Description: A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.
    Values: String

compute
    Description: The configuration for the machines that comprise the compute nodes.
    Values: Array of machine-pool objects. For details, see the following "Machine-pool" table.

compute.architecture
    Description: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
    Values: String

compute.hyperthreading
    Description: Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

    IMPORTANT

    If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

    Values: Enabled or Disabled

compute.name
    Description: Required if you use compute. The name of the machine pool.
    Values: worker

compute.platform
    Description: Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.
    Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas
    Description: The number of compute machines, which are also known as worker machines, to provision.
    Values: A positive integer greater than or equal to 2. The default value is 3.

controlPlane
    Description: The configuration for the machines that comprise the control plane.
    Values: Array of MachinePool objects. For details, see the following "Machine-pool" table.

controlPlane.architecture
    Description: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
    Values: String

controlPlane.hyperthreading
    Description: Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

    IMPORTANT

    If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

    Values: Enabled or Disabled

controlPlane.name
    Description: Required if you use controlPlane. The name of the machine pool.
    Values: master

controlPlane.platform
    Description: Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.
    Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas
    Description: The number of control plane machines to provision.
    Values: The only supported value is 3, which is the default value.

credentialsMode
    Description: The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

    NOTE

    Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

    Values: Mint, Passthrough, Manual, or an empty string ("").

fips
    Description: Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
    Values: false or true

imageContentSources
    Description: Sources and repositories for the release-image content.
    Values: Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source
    Description: Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.
    Values: String

imageContentSources.mirrors
    Description: Specify one or more repositories that may also contain the same images.
    Values: Array of strings

networking
    Description: The configuration for the pod network provider in the cluster.
    Values: Object

networking.clusterNetwork
    Description: The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.
    Values: Array of objects

networking.clusterNetwork.cidr
    Description: Required if you use networking.clusterNetwork. The IP block address pool.
    Values: IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix
    Description: Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.
    Values: Integer

networking.machineNetwork
    Description: The IP address pools for machines.
    Values: Array of objects

networking.machineNetwork.cidr
    Description: Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.
    Values: IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.networkType
    Description: The type of network to install. The default is OpenShiftSDN.
    Values: String

networking.serviceNetwork
    Description: The IP address pools for services. The default is 172.30.0.0/16.
    Values: Array of IP networks. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

publish
    Description: How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.
    Values: Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey
    Description: The SSH key or keys to authenticate access to your cluster machines.

    NOTE

    For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

    Values: One or more keys. For example:

    sshKey:
      <key1>
      <key2>
      <key3>

Table 1.15. Additional Azure parameters

machines.platform.azure.type
    Description: The Azure VM instance type.
    Values: VMs that use Windows or Linux as the operating system. See the Guest operating systems supported on Azure Stack in the Azure documentation.

machines.platform.azure.osDisk.diskSizeGB
    Description: The Azure disk size for the VM.
    Values: Integer that represents the size of the disk in GB, for example 512. The minimum supported disk size is 120.

platform.azure.baseDomainResourceGroupName
    Description: The name of the resource group that contains the DNS zone for your base domain.
    Values: String, for example production_cluster.

platform.azure.outboundType
    Description: The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing.
    Values: LoadBalancer or UserDefinedRouting. The default is LoadBalancer.

platform.azure.region
    Description: The name of the Azure region that hosts your cluster.
    Values: Any valid region name, such as centralus.

platform.azure.zone
    Description: List of availability zones to place machines in. For high availability, specify at least two zones.
    Values: List of zones, for example ["1", "2", "3"].

platform.azure.networkResourceGroupName
    Description: The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName.
    Values: String.

platform.azure.virtualNetwork
    Description: The name of the existing VNet that you want to deploy your cluster to.
    Values: String.

platform.azure.controlPlaneSubnet
    Description: The name of the existing subnet in your VNet that you want to deploy your control plane machines to.
    Values: String.

platform.azure.computeSubnet
    Description: The name of the existing subnet in your VNet that you want to deploy your compute machines to.
    Values: String.

platform.azure.cloudName
    Description: The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used.
    Values: Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud.

NOTE

You cannot customize Azure availability zones or use tags to organize your Azure
resources with an Azure cluster.

1.7.7.2. Sample customized install-config.yaml file for Azure

You can customize the install-config.yaml file to specify more details about your OpenShift Container
Platform cluster’s platform or modify the values of the required parameters.

IMPORTANT

This sample YAML file is provided for reference only. You must obtain your install-
config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
  hyperthreading: Enabled 3 4
  name: master
  platform:
    azure:
      osDisk:
        diskSizeGB: 1024 5
      type: Standard_D8s_v3
  replicas: 3
compute: 6
- hyperthreading: Enabled 7
  name: worker
  platform:
    azure:
      type: Standard_D2s_v3
      osDisk:
        diskSizeGB: 512 8
      zones: 9
      - "1"
      - "2"
      - "3"
  replicas: 5
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    region: centralus 11
    baseDomainResourceGroupName: resource_group 12
    networkResourceGroupName: vnet_resource_group 13
    virtualNetwork: vnet 14
    controlPlaneSubnet: control_plane_subnet 15
    computeSubnet: compute_subnet 16
    outboundType: UserDefinedRouting 17
pullSecret: '{"auths": ...}' 18
fips: false 19
sshKey: ssh-ed25519 AAAA... 20
publish: Internal 21

1 10 11 18 Required. The installation program prompts you for this value.

2 6 If you do not provide these parameters and values, the installation program provides the default
value.

3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings.
To meet the requirements of the different data structures, the first line of the compute section
must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both
sections currently define a single machine pool, it is possible that future versions of OpenShift
Container Platform will support defining multiple compute pools during installation. Only one
control plane pool is used.

4 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default,
simultaneous multithreading is enabled to increase the performance of your machines' cores. You
can disable it by setting the parameter value to Disabled. If you disable simultaneous
multithreading in some cluster machines, you must disable it in all cluster machines.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning
accounts for the dramatically decreased machine performance. Use larger virtual
machine types, such as Standard_D8s_v3, for your machines if you disable
simultaneous multithreading.

5 8 You can specify the size of the disk to use in GB. The minimum recommendation for control plane
(master) nodes is 1024 GB.

9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones.

12 Specify the name of the resource group that contains the DNS zone for your base domain.


13 If you use an existing VNet, specify the name of the resource group that contains it.

14 If you use an existing VNet, specify its name.

15 If you use an existing VNet, specify the name of the subnet to host the control plane machines.

16 If you use an existing VNet, specify the name of the subnet to host the compute machines.

17 You can customize your own outbound routing. Configuring user-defined routing prevents
exposing external endpoints in your cluster. User-defined routing for egress requires deploying
your cluster to an existing VNet.

19 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is
enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container
Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography
modules that are provided with RHCOS instead.

20 You can optionally provide the sshKey value that you use to access the machines in your cluster.

NOTE

For production OpenShift Container Platform clusters on which you want to perform
installation debugging or disaster recovery, specify an SSH key that your ssh-agent
process uses.

21 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a
private cluster, which cannot be accessed from the Internet. The default value is External.

1.7.7.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS
proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by
configuring the proxy settings in the install-config.yaml file.

Prerequisites

You have an existing install-config.yaml file.

You reviewed the sites that your cluster requires access to and determined whether any of
them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to
hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to
bypass the proxy if necessary.

NOTE

The Proxy object status.noProxy field is populated with the values of the
networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and
networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP),
Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object
status.noProxy field is also populated with the instance metadata endpoint
(169.254.169.254).


Procedure

1. Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----
...

1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme
must be http. If you use an MITM transparent proxy network that does not require
additional proxy configuration but requires additional CAs, you must not specify an
httpProxy value.

2 A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not
specified, then httpProxy is used for both HTTP and HTTPS connections. If you use an
MITM transparent proxy network that does not require additional proxy configuration but
requires additional CAs, you must not specify an httpsProxy value.

3 A comma-separated list of destination domain names, domains, IP addresses, or other
network CIDRs to exclude proxying. Preface a domain with . to include all subdomains of
that domain. Use * to bypass proxy for all destinations.

4 If provided, the installation program generates a config map that is named user-ca-bundle
in the openshift-config namespace that contains one or more additional CA certificates
that are required for proxying HTTPS connections. The Cluster Network Operator then
creates a trusted-ca-bundle config map that merges these contents with the Red Hat
Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the
Proxy object’s trustedCA field. The additionalTrustBundle field is required unless the
proxy’s identity certificate is signed by an authority from the RHCOS trust bundle. If you
use an MITM transparent proxy network that does not require additional proxy
configuration but requires additional CAs, you must provide the MITM CA certificate.

NOTE

The installation program does not support the proxy readinessEndpoints field.

2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings
in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still
created, but it will have a nil spec.

NOTE

Only the Proxy object named cluster is supported, and no additional proxies can be
created.
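After installation, you can inspect the resulting object to confirm which proxy settings were applied; for example:

$ oc get proxy/cluster -o yaml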


1.7.8. Deploying the cluster


You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during
initial installation.

Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster
deployment:

$ ./openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2

1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.

2 To view different installation details, specify warn, debug, or error instead of info.

NOTE

If the cloud provider account that you configured on your host does not have
sufficient permissions to deploy the cluster, the installation process stops, and
the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to
its web console and credentials for the kubeadmin user, display in your terminal.

Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export
KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-
console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-
Wt5AL"
INFO Time elapsed: 36m22s

NOTE

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

IMPORTANT

The Ignition config files that the installation program generates contain
certificates that expire after 24 hours, which are then renewed at that time. If the
cluster is shut down before renewing the certificates and the cluster is later
restarted after the 24 hours have elapsed, the cluster automatically recovers the
expired certificates. The exception is that you must manually approve the
pending node-bootstrapper certificate signing requests (CSRs) to recover
kubelet certificates. See the documentation for Recovering from expired control
plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation
program creates. Both are required to delete the cluster.

1.7.9. Installing the OpenShift CLI by downloading the binary


You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a
command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands
in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.7.9.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Linux from the drop-down menu and click
Download command-line tools.

4. Unpack the archive:

$ tar xvzf <file>

5. Place the oc binary in a directory that is on your PATH.


To check your PATH, execute the following command:

$ echo $PATH


After you install the CLI, it is available using the oc command:

$ oc <command>
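For example, to print the client version and confirm that the binary on your PATH is the one you just installed:

$ oc version --client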

1.7.9.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click
Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH.


To check your PATH, open the command prompt and execute the following command:

C:\> path

After you install the CLI, it is available using the oc command:

C:\> oc <command>

1.7.9.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select MacOS from the drop-down menu and click
Download command-line tools.

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH.


To check your PATH, open a terminal and execute the following command:

$ echo $PATH

After you install the CLI, it is available using the oc command:

$ oc <command>


1.7.10. Logging in to the cluster by using the CLI


You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The
kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the
correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container
Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin

1.7.11. Next steps


Customize your cluster.

If necessary, you can opt out of remote health reporting .

1.8. INSTALLING A CLUSTER ON AZURE INTO A GOVERNMENT REGION

In OpenShift Container Platform version 4.6, you can install a cluster on Microsoft Azure into a
government region. To configure the government region, you modify parameters in the
install-config.yaml file before you install the cluster.

1.8.1. Prerequisites
Review details about the OpenShift Container Platform installation and update processes.

Configure an Azure account to host the cluster and determine the tested and validated
government region to deploy the cluster to.

If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster
administrator can manually create and maintain IAM credentials. Manual mode can also be used
in environments where the cloud IAM APIs are not reachable.

1.8.2. Azure government regions


OpenShift Container Platform supports deploying a cluster to Microsoft Azure Government (MAG)
regions. MAG is specifically designed for US government agencies at the federal, state, and local level,
as well as contractors, educational institutions, and other US customers that must run sensitive
workloads on Azure. MAG is composed of government-only data center regions, all granted an Impact
Level 5 Provisional Authorization.

Installing to a MAG region requires manually configuring the Azure Government dedicated cloud
instance and region in the install-config.yaml file. You must also update your service principal to
reference the appropriate government environment.

NOTE

The Azure government region cannot be selected using the guided terminal prompts
from the installation program. You must define the region manually in the install-
config.yaml file. Remember to also set the dedicated cloud instance, like
AzureUSGovernmentCloud, based on the region specified.
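For example, assuming the usgovvirginia region, the platform stanza of the install-config.yaml file might look like the following sketch, and you can point the Azure CLI at the government environment before you create or update your service principal:

platform:
  azure:
    region: usgovvirginia
    cloudName: AzureUSGovernmentCloud

$ az cloud set --name AzureUSGovernment
$ az login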

1.8.3. Private clusters


You can deploy a private OpenShift Container Platform cluster that does not expose external
endpoints. Private clusters are accessible from only an internal network and are not visible to the
Internet.

By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints.
A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your
cluster. This means that the cluster resources are only accessible from your internal network and are not
visible to the internet.

To deploy a private cluster, you must use existing networking that meets your requirements. Your cluster
resources might be shared with other clusters on the network.

Additionally, you must deploy a private cluster from a machine that has access to the API services for the
cloud that you provision to, to the hosts on the network that you provision, and to the Internet to obtain
installation media. You can use any machine that meets these access requirements and follows your
company's guidelines. For example, this machine can be a bastion host on your cloud network or a
machine that has access to the network through a VPN.

1.8.3.1. Private clusters in Azure

To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to
host the cluster. The installation program must also be able to resolve the DNS records that the cluster
requires. The installation program configures the Ingress Operator and API server for only internal
traffic.

Depending on how your network connects to the private VNet, you might need to use a DNS forwarder in
order to resolve the cluster’s private DNS records. The cluster’s machines use 168.63.129.16 internally
for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address
168.63.129.16? in the Azure documentation.


The cluster still requires access to the Internet to reach the Azure APIs.

The following items are not required or created when you install a private cluster:

A BaseDomainResourceGroup, since the cluster does not create public records

Public IP addresses

Public DNS records

Public endpoints

The cluster is configured so that the Operators do not create public records for the cluster
and all cluster machines are placed in the private subnets that you specify.

1.8.3.1.1. Limitations

Private clusters on Azure are subject to only the limitations that are associated with the use of an
existing VNet.

1.8.3.2. User-defined outbound routing

In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to
the Internet. This allows you to skip the creation of public IP addresses and the public load balancer.

You can configure user-defined routing by modifying parameters in the install-config.yaml file before
installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster;
the installation program is not responsible for configuring this.

When configuring a cluster to use user-defined routing, the installation program does not create the
following resources:

Outbound rules for access to the Internet.

Public IPs for the public load balancer.

Kubernetes Service object to add the cluster machines to the public load balancer for outbound
requests.

You must ensure the following items are available before setting user-defined routing:

Egress to the Internet is possible to pull container images, unless using an internal registry
mirror.

The cluster can access Azure APIs.

Various allowlist endpoints are configured. You can reference these endpoints in the
Configuring your firewall section.

There are several pre-existing networking setups that are supported for Internet access using user-
defined routing.

Private cluster with network address translation


You can use Azure VNet network address translation (NAT) to provide outbound Internet access for
the subnets in your cluster. You can reference Create a NAT gateway using Azure CLI in the Azure
documentation for configuration instructions.


When using a VNet setup with Azure NAT and user-defined routing configured, you can create a private
cluster with no public endpoints.

Private cluster with Azure Firewall


You can use Azure Firewall to provide outbound routing for the VNet used to install the cluster. You can
learn more about providing user-defined routing with Azure Firewall in the Azure documentation.

When using a VNet setup with Azure Firewall and user-defined routing configured, you can create a
private cluster with no public endpoints.

Private cluster with a proxy configuration


You can use a proxy with user-defined routing to allow egress to the Internet. You must ensure that
cluster Operators do not access Azure APIs using a proxy; Operators must have access to Azure APIs
outside of the proxy.

When using the default route table for subnets, with 0.0.0.0/0 populated automatically by Azure, all
Azure API requests are routed over Azure’s internal network even though the IP addresses are public. As
long as the Network Security Group rules allow egress to Azure API endpoints, proxies with user-defined
routing configured allow you to create private clusters with no public endpoints.

Private cluster with no Internet access


You can have VNets with no access to the Internet if your cluster has access to the following:

An internal registry mirror that allows for pulling container images

Access to Azure APIs

With these requirements available, you can use user-defined routing to create private clusters with no
public endpoints.

1.8.4. About reusing a VNet for your OpenShift Container Platform cluster
In OpenShift Container Platform 4.6, you can deploy a cluster into an existing Azure Virtual Network
(VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing
rules.

By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid
service limit constraints in new accounts or more easily abide by the operational constraints that your
company’s guidelines set. This is a good option to use if you cannot obtain the infrastructure creation
permissions that are required to create the VNet.

IMPORTANT

The use of an existing VNet requires the use of the updated Azure Private DNS (preview)
feature. See Announcing Preview Refresh for Azure DNS Private Zones for more
information about the limitations of this feature.

1.8.4.1. Requirements for using your VNet

When you deploy a cluster by using an existing VNet, you must perform additional network configuration
before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates
the following components, but it does not create them when you install into an existing VNet:

Subnets

Route tables


VNets

Network Security Groups

If you use a custom VNet, you must correctly configure it and its subnets for the installation program
and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use,
set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the
cluster.

The cluster must be able to access the resource group that contains the existing VNet and subnets.
While all of the resources that the cluster creates are placed in a separate resource group that it
creates, some network resources are used from a separate group. Some cluster Operators must be able
to access resources in both resource groups. For example, the Machine API controller attaches NICs for
the virtual machines that it creates to subnets from the networking resource group.

Your VNet must meet the following characteristics:

The VNet’s CIDR block must contain the Networking.MachineCIDR range, which is the IP
address pool for cluster machines.

The VNet and its subnets must belong to the same resource group, and the subnets must be
configured to use Azure-assigned DHCP IP addresses instead of static IP addresses.

You must provide two subnets within your VNet, one for the control plane machines and one for the
compute machines. Because Azure distributes machines in different availability zones within the region
that you specify, your cluster will have high availability by default.

To ensure that the subnets that you provide are suitable, the installation program confirms the following
data:

All the subnets that you specify exist.

You provide two private subnets for each availability zone.

The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned
in availability zones that you do not provide private subnets for. If required, the installation
program creates public load balancers that manage the control plane and worker nodes, and
Azure allocates a public IP address to them.

If you destroy a cluster that uses an existing VNet, the VNet is not deleted.

1.8.4.1.1. Network security group requirements

The network security groups for the subnets that host the compute and control plane machines require
specific access to ensure that the cluster communication is correct. You must create rules to allow
access to the required cluster communication ports.

IMPORTANT

The network security group rules must be in place before you install the cluster. If you
attempt to install a cluster without the required access, the installation program cannot
reach the Azure APIs, and installation fails.

Table 1.16. Required ports

Port     Description                                           Control plane    Compute

80       Allows HTTP traffic                                                    x

443      Allows HTTPS traffic                                                   x

6443     Allows communication to the control plane machines    x

22623    Allows communication to the machine config server     x

1.8.4.2. Division of permissions

Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required
for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics
the division of permissions that you might have at your company: some individuals can create different
resources in your clouds than others. For example, you might be able to create application-specific
items, like instances, storage, and load balancers, but not networking-related components such as
VNets, subnets, or ingress rules.

The Azure credentials that you use when you create your cluster do not need the networking permissions
that are required to make VNets and core networking components within the VNet, such as subnets,
route tables, NAT gateways, and VPN gateways. You still need permission to make the application
resources that the machines within the cluster require, such as load balancers, network security groups,
storage accounts, and nodes.

1.8.4.3. Isolation between clusters

Because the cluster is unable to modify network security groups in an existing subnet, there is no way to
isolate clusters from each other on the VNet.

1.8.5. Internet and Telemetry access for OpenShift Container Platform


In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The
Telemetry service, which runs by default to provide metrics about cluster health and the success of
updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs
automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained
automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift
Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management. If the cluster has Internet access and you do not disable
Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
Internet access. Before you update the cluster, you update the content of the mirror
registry.

1.8.6. Generating an SSH private key and adding it to the agent


If you want to perform installation debugging or disaster recovery on your cluster, you must provide an
SSH key to both your ssh-agent and the installation program. You can use this key to access the
bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, disaster recovery and debugging are required.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches
such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:

$ ssh-keygen -t ed25519 -N '' \
    -f <path>/<file_name> 1

1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location
that you specified.

2. Start the ssh-agent process as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

3. Add your SSH private key to the ssh-agent:


$ ssh-add <path>/<file_name> 1

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation
program.

1.8.7. Obtaining the installation program


Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you
have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your
operating system, and place the file in the directory where you will store the installation
configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to
install your cluster. You must keep the installation program and the files that the
installation program creates after you finish installing the cluster. Both are
required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your
cluster, even if the cluster failed during installation. To remove your cluster,
complete the OpenShift Container Platform uninstallation procedures for your
specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating
system, run the following command:

$ tar xvf openshift-install-linux.tar.gz


5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your
installation pull secret as a .txt file. This pull secret allows you to authenticate with the services
that are provided by the included authorities, including Quay.io, which serves the container
images for OpenShift Container Platform components.

1.8.8. Manually creating the installation configuration file


When installing OpenShift Container Platform on Microsoft Azure into a government region, you must
manually generate your installation configuration file.

Prerequisites

Obtain the OpenShift Container Platform installation program and the access token for your
cluster.

Procedure

1. Create an installation directory to store your required installation assets in:

$ mkdir <installation_directory>

IMPORTANT

You must create a directory. Some installation assets, like bootstrap X.509
certificates, have short expiration intervals, so you must not reuse an installation
directory. If you want to reuse individual files from another cluster installation,
you can copy them into your directory. However, the file names for the
installation assets might change between releases. Use caution when copying
installation files from an earlier OpenShift Container Platform version.

2. Customize the following install-config.yaml file template and save it in the
<installation_directory>.

NOTE

You must name this configuration file install-config.yaml.

3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the next step of the installation
process. You must back it up now.
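
For example, one simple way to keep a reusable copy outside the installation directory; the backup path shown is only illustrative:

$ cp <installation_directory>/install-config.yaml ~/install-config.yaml.backup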

1.8.8.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe
your account on the cloud platform that hosts your cluster and optionally customize your cluster’s
platform. When you create the install-config.yaml installation configuration file, you provide values for
the required parameters through the command line. If you customize your cluster, you can modify the
install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

Table 1.17. Required parameters

apiVersion
    Description: The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.
    Values: String

baseDomain
    Description: The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.
    Values: A fully-qualified domain or subdomain name, such as example.com.

metadata
    Description: Kubernetes resource ObjectMeta, from which only the name parameter is consumed.
    Values: Object

metadata.name
    Description: The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.
    Values: String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform
    Description: The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.
    Values: Object

pullSecret
    Description: Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.
    Values: For example:

    {
       "auths":{
          "cloud.openshift.com":{
             "auth":"b3Blb=",
             "email":"[email protected]"
          },
          "quay.io":{
             "auth":"b3Blb=",
             "email":"[email protected]"
          }
       }
    }

Table 1.18. Optional parameters

additionalTrustBundle
    Description: A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.
    Values: String

compute
    Description: The configuration for the machines that comprise the compute nodes.
    Values: Array of machine-pool objects. For details, see the following "Machine-pool" table.

compute.architecture
    Description: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
    Values: String

compute.hyperthreading
    Description: Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

    IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

    Values: Enabled or Disabled

compute.name
    Description: Required if you use compute. The name of the machine pool.
    Values: worker

compute.platform
    Description: Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.
    Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas
    Description: The number of compute machines, which are also known as worker machines, to provision.
    Values: A positive integer greater than or equal to 2. The default value is 3.

controlPlane
    Description: The configuration for the machines that comprise the control plane.
    Values: Array of MachinePool objects. For details, see the following "Machine-pool" table.

controlPlane.architecture
    Description: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
    Values: String

controlPlane.hyperthreading
    Description: Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

    IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

    Values: Enabled or Disabled

controlPlane.name
    Description: Required if you use controlPlane. The name of the machine pool.
    Values: master

controlPlane.platform
    Description: Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.
    Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas
    Description: The number of control plane machines to provision.
    Values: The only supported value is 3, which is the default value.

credentialsMode
    Description: The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

    NOTE: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

    Values: Mint, Passthrough, Manual, or an empty string ("").

fips
    Description: Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
    Values: false or true

imageContentSources
    Description: Sources and repositories for the release-image content.
    Values: Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source
    Description: Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.
    Values: String

imageContentSources.mirrors
    Description: Specify one or more repositories that may also contain the same images.
    Values: Array of strings

networking
    Description: The configuration for the pod network provider in the cluster.
    Values: Object

networking.clusterNetwork
    Description: The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.
    Values: Array of objects

networking.clusterNetwork.cidr
    Description: Required if you use networking.clusterNetwork. The IP block address pool.
    Values: IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix
    Description: Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.
    Values: Integer

networking.machineNetwork
    Description: The IP address pools for machines.
    Values: Array of objects

networking.machineNetwork.cidr
    Description: Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.
    Values: IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.networkType
    Description: The type of network to install. The default is OpenShiftSDN.
    Values: String

networking.serviceNetwork
    Description: The IP address pools for services. The default is 172.30.0.0/16.
    Values: Array of IP networks. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

publish
    Description: How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.
    Values: Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey
    Description: The SSH key or keys to authenticate access to your cluster machines.

    NOTE: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

    Values: One or more keys. For example:

    sshKey:
      <key1>
      <key2>
      <key3>
Table 1.19. Additional Azure parameters

machines.platform.azure.type
    Description: The Azure VM instance type.
    Values: VMs that use Windows or Linux as the operating system. See the Guest operating systems supported on Azure Stack in the Azure documentation.

machines.platform.azure.osDisk.diskSizeGB
    Description: The Azure disk size for the VM.
    Values: Integer that represents the size of the disk in GB, for example 512. The minimum supported disk size is 120.

platform.azure.baseDomainResourceGroupName
    Description: The name of the resource group that contains the DNS zone for your base domain.
    Values: String, for example production_cluster.

platform.azure.outboundType
    Description: The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing.
    Values: LoadBalancer or UserDefinedRouting. The default is LoadBalancer.

platform.azure.region
    Description: The name of the Azure region that hosts your cluster.
    Values: Any valid region name, such as centralus.

platform.azure.zone
    Description: List of availability zones to place machines in. For high availability, specify at least two zones.
    Values: List of zones, for example ["1", "2", "3"].

platform.azure.networkResourceGroupName
    Description: The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName.
    Values: String.

platform.azure.virtualNetwork
    Description: The name of the existing VNet that you want to deploy your cluster to.
    Values: String.

platform.azure.controlPlaneSubnet
    Description: The name of the existing subnet in your VNet that you want to deploy your control plane machines to.
    Values: Valid CIDR, for example 10.0.0.0/16.

platform.azure.computeSubnet
    Description: The name of the existing subnet in your VNet that you want to deploy your compute machines to.
    Values: Valid CIDR, for example 10.0.0.0/16.

platform.azure.cloudName
    Description: The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used.
    Values: Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud.

NOTE

You cannot customize Azure Availability Zones or use tags to organize your Azure
resources with an Azure cluster.

1.8.8.2. Sample customized install-config.yaml file for Azure

You can customize the install-config.yaml file to specify more details about your OpenShift Container
Platform cluster’s platform or modify the values of the required parameters.

IMPORTANT

This sample YAML file is provided for reference only. You must obtain your install-
config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
  hyperthreading: Enabled 3 4
  name: master
  platform:
    azure:
      osDisk:
        diskSizeGB: 1024 5
      type: Standard_D8s_v3
  replicas: 3
compute: 6
- hyperthreading: Enabled 7
  name: worker
  platform:
    azure:
      type: Standard_D2s_v3
      osDisk:
        diskSizeGB: 512 8
      zones: 9
      - "1"
      - "2"
      - "3"
  replicas: 5
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    region: usgovvirginia
    baseDomainResourceGroupName: resource_group 11
    networkResourceGroupName: vnet_resource_group 12
    virtualNetwork: vnet 13
    controlPlaneSubnet: control_plane_subnet 14
    computeSubnet: compute_subnet 15
    outboundType: UserDefinedRouting 16
    cloudName: AzureUSGovernmentCloud 17
pullSecret: '{"auths": ...}' 18
fips: false 19
sshKey: ssh-ed25519 AAAA... 20
publish: Internal 21

1 10 18 Required.

2 6 If you do not provide these parameters and values, the installation program provides the default
value.

3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings.
To meet the requirements of the different data structures, the first line of the compute section
must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both
sections currently define a single machine pool, it is possible that future versions of OpenShift
Container Platform will support defining multiple compute pools during installation. Only one
control plane pool is used.

4 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default,
simultaneous multithreading is enabled to increase the performance of your machines' cores. You
can disable it by setting the parameter value to Disabled. If you disable simultaneous
multithreading in some cluster machines, you must disable it in all cluster machines.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning
accounts for the dramatically decreased machine performance. Use larger virtual
machine types, such as Standard_D8s_v3, for your machines if you disable
simultaneous multithreading.

5 8 You can specify the size of the disk to use in GB. Minimum recommendation for master nodes is
1024 GB.

9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones.


11 Specify the name of the resource group that contains the DNS zone for your base domain.

12 If you use an existing VNet, specify the name of the resource group that contains it.

13 If you use an existing VNet, specify its name.

14 If you use an existing VNet, specify the name of the subnet to host the control plane machines.

15 If you use an existing VNet, specify the name of the subnet to host the compute machines.

16 You can customize your own outbound routing. Configuring user-defined routing prevents
exposing external endpoints in your cluster. User-defined routing for egress requires deploying
your cluster to an existing VNet.

17 Specify the name of the Azure cloud environment to deploy your cluster to. Set
AzureUSGovernmentCloud to deploy to a Microsoft Azure Government (MAG) region. The
default value is AzurePublicCloud.

19 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is
enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container
Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography
modules that are provided with RHCOS instead.

20 You can optionally provide the sshKey value that you use to access the machines in your cluster.

NOTE

For production OpenShift Container Platform clusters on which you want to perform
installation debugging or disaster recovery, specify an SSH key that your ssh-agent
process uses.

21 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a
private cluster, which cannot be accessed from the Internet. The default value is External.

1.8.8.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS
proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by
configuring the proxy settings in the install-config.yaml file.

Prerequisites

You have an existing install-config.yaml file.

You reviewed the sites that your cluster requires access to and determined whether any of
them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to
hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to
bypass the proxy if necessary.

NOTE

The Proxy object status.noProxy field is populated with the values of the
networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and
networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP),
Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object
status.noProxy field is also populated with the instance metadata endpoint
(169.254.169.254).

Procedure

1. Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----
...

1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme
must be http. If you use an MITM transparent proxy network that does not require
additional proxy configuration but requires additional CAs, you must not specify an
httpProxy value.

2 A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not
specified, then httpProxy is used for both HTTP and HTTPS connections. If you use an
MITM transparent proxy network that does not require additional proxy configuration but
requires additional CAs, you must not specify an httpsProxy value.

3 A comma-separated list of destination domain names, domains, IP addresses, or other
network CIDRs to exclude proxying. Preface a domain with . to include all subdomains of
that domain. Use * to bypass proxy for all destinations. See the example after this list.

4 If provided, the installation program generates a config map that is named user-ca-bundle
in the openshift-config namespace that contains one or more additional CA certificates
that are required for proxying HTTPS connections. The Cluster Network Operator then
creates a trusted-ca-bundle config map that merges these contents with the Red Hat
Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the
Proxy object’s trustedCA field. The additionalTrustBundle field is required unless the
proxy’s identity certificate is signed by an authority from the RHCOS trust bundle. If you
use an MITM transparent proxy network that does not require additional proxy
configuration but requires additional CAs, you must provide the MITM CA certificate.
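
For illustration only, a noProxy value that combines these forms might look like the following in install-config.yaml; the domain names and CIDR are hypothetical:

proxy:
  noProxy: .example.com,10.0.0.0/16,registry.internal.example.com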

NOTE

The installation program does not support the proxy readinessEndpoints field.


2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings
in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still
created, but it will have a nil spec.

NOTE

Only the Proxy object named cluster is supported, and no additional proxies can be
created.

1.8.9. Deploying the cluster


You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during
initial installation.

Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster
deployment:

$ ./openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2

1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.

2 To view different installation details, specify warn, debug, or error instead of info.

NOTE

If the cloud provider account that you configured on your host does not have
sufficient permissions to deploy the cluster, the installation process stops, and
the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to
its web console and credentials for the kubeadmin user, display in your terminal.

Example output

...

INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export
KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-
console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-
Wt5AL"
INFO Time elapsed: 36m22s

NOTE

The cluster access and credential information also outputs to
<installation_directory>/.openshift_install.log when an installation succeeds.

IMPORTANT

The Ignition config files that the installation program generates contain
certificates that expire after 24 hours, which are then renewed at that time. If the
cluster is shut down before renewing the certificates and the cluster is later
restarted after the 24 hours have elapsed, the cluster automatically recovers the
expired certificates. The exception is that you must manually approve the
pending node-bootstrapper certificate signing requests (CSRs) to recover
kubelet certificates. See the documentation for Recovering from expired control
plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation
program creates. Both are required to delete the cluster.

1.8.10. Installing the OpenShift CLI by downloading the binary


You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a
command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands
in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.8.10.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Linux from the drop-down menu and click
Download command-line tools.

4. Unpack the archive:


$ tar xvzf <file>

5. Place the oc binary in a directory that is on your PATH.


To check your PATH, execute the following command:

$ echo $PATH

After you install the CLI, it is available using the oc command:

$ oc <command>

1.8.10.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click
Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH.


To check your PATH, open the command prompt and execute the following command:

C:\> path

After you install the CLI, it is available using the oc command:

C:\> oc <command>

1.8.10.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select MacOS from the drop-down menu and click
Download command-line tools.

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH.


To check your PATH, open a terminal and execute the following command:


$ echo $PATH

After you install the CLI, it is available using the oc command:

$ oc <command>

1.8.11. Logging in to the cluster by using the CLI


You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The
kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the
correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container
Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin

1.8.12. Next steps


Customize your cluster.

If necessary, you can opt out of remote health reporting.

1.9. INSTALLING A CLUSTER ON AZURE USING ARM TEMPLATES


In OpenShift Container Platform version 4.6, you can install a cluster on Microsoft Azure by using
infrastructure that you provide.

Several Azure Resource Manager (ARM) templates are provided to assist in completing these steps or
to help model your own. You can also create the required resources through other methods; the
templates are just an example.
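
As an illustrative sketch only, an ARM template is typically deployed into a resource group with the Azure CLI; the resource group, template file name, and parameter shown here are placeholders, not values defined by this guide:

$ az deployment group create -g <resource_group> \
    --template-file <template>.json \
    --parameters <parameter_name>="<value>"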


1.9.1. Prerequisites
Review details about the OpenShift Container Platform installation and update processes.

Configure an Azure account to host the cluster.

Download the Azure CLI and install it on your computer. See Install the Azure CLI in the Azure
documentation. The documentation below was last tested using version 2.2.0 of the Azure CLI.
Azure CLI commands might perform differently based on the version you use.

If you use a firewall and plan to use telemetry, you must configure the firewall to allow the sites
that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster
administrator can manually create and maintain IAM credentials. Manual mode can also be used
in environments where the cloud IAM APIs are not reachable.

NOTE

Be sure to also review this site list if you are configuring a proxy.

1.9.2. Internet and Telemetry access for OpenShift Container Platform


In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The
Telemetry service, which runs by default to provide metrics about cluster health and the success of
updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs
automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained
automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift
Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management. If the cluster has Internet access and you do not disable
Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
Internet access. Before you update the cluster, you update the content of the mirror
registry.

1.9.3. Configuring your Azure project


Before you can install OpenShift Container Platform, you must configure an Azure project to host it.


IMPORTANT

All Azure resources that are available through public endpoints are subject to resource
name restrictions, and you cannot create resources that use certain terms. For a list of
terms that Azure restricts, see Resolve reserved resource name errors in the Azure
documentation.

1.9.3.1. Azure account limits

The OpenShift Container Platform cluster uses a number of Microsoft Azure components, and the
default Azure subscription and service limits, quotas, and constraints affect your ability to install
OpenShift Container Platform clusters.

IMPORTANT

Default limits vary by offer category types, such as Free Trial and Pay-As-You-Go, and by
series, such as Dv2, F, and G. For example, the default for Enterprise Agreement
subscriptions is 350 cores.

Check the limits for your subscription type and if necessary, increase quota limits for your
account before you install a default cluster on Azure.

The following table summarizes the Azure components whose limits can impact your ability to install and
run OpenShift Container Platform clusters.

Component: vCPU
    Number required by default: 40
    Default Azure limit: 20 per region
    Description: A default cluster requires 40 vCPUs, so you must increase the account limit.

    By default, each cluster creates the following instances:

        One bootstrap machine, which is removed after installation

        Three control plane machines

        Three compute machines

    Because the bootstrap machine uses Standard_D4s_v3 machines, which use 4 vCPUs, the control plane machines use Standard_D8s_v3 virtual machines, which use 8 vCPUs, and the worker machines use Standard_D4s_v3 virtual machines, which use 4 vCPUs, a default cluster requires 40 vCPUs. The bootstrap node VM, which uses 4 vCPUs, is used only during installation.

    To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require.

    By default, the installation program distributes control plane and compute machines across all availability zones within a region. To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones.

Component: VNet
    Number required by default: 1
    Default Azure limit: 1000 per region
    Description: Each default cluster requires one Virtual Network (VNet), which contains two subnets.

Component: Network interfaces
    Number required by default: 6
    Default Azure limit: 65,536 per region
    Description: Each default cluster requires six network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces.

Component: Network security groups
    Number required by default: 2
    Default Azure limit: 5000
    Description: Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets:

        controlplane: Allows the control plane machines to be reached on port 6443 from anywhere

        node: Allows worker nodes to be reached from the Internet on ports 80 and 443

Component: Network load balancers
    Number required by default: 3
    Default Azure limit: 1000 per region
    Description: Each cluster creates the following load balancers:

        default: Public IP address that load balances requests to ports 80 and 443 across worker machines

        internal: Private IP address that load balances requests to ports 6443 and 22623 across control plane machines

        external: Public IP address that load balances requests to port 6443 across control plane machines

    If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers.

Component: Public IP addresses
    Number required by default: 3
    Description: Each of the two public load balancers uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation.

Component: Private IP addresses
    Number required by default: 7
    Description: The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address.
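
Before you install, you can compare current usage against these limits with the Azure CLI; this is a sketch, and the region name is only an example:

$ az vm list-usage --location centralus --output table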

1.9.3.2. Configuring a public DNS zone in Azure


To install OpenShift Container Platform, the Microsoft Azure account you use must have a dedicated
public hosted DNS zone in your account. This zone must be authoritative for the domain. This service
provides cluster DNS resolution and name lookup for external connections to the cluster.

Procedure

1. Identify your domain, or subdomain, and registrar. You can transfer an existing domain and
registrar or obtain a new one through Azure or another source.

NOTE

For more information about purchasing domains through Azure, see Buy a
custom domain name for Azure App Service in the Azure documentation.

2. If you are using an existing domain and registrar, migrate its DNS to Azure. See Migrate an active
DNS name to Azure App Service in the Azure documentation.

3. Configure DNS for your domain. Follow the steps in the Tutorial: Host your domain in Azure
DNS in the Azure documentation to create a public hosted zone for your domain or subdomain,
extract the new authoritative name servers, and update the registrar records for the name
servers that your domain uses.
Use an appropriate root domain, such as openshiftcorp.com, or subdomain, such as
clusters.openshiftcorp.com.

4. If you use a subdomain, follow your company’s procedures to add its delegation records to the
parent domain.

You can view Azure’s DNS solution by visiting this example for creating DNS zones.
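
As a sketch of these steps with the Azure CLI, you might create the public zone and read back its authoritative name servers as follows; the resource group name is a placeholder:

$ az network dns zone create -g <resource_group> -n clusters.openshiftcorp.com
$ az network dns zone show -g <resource_group> -n clusters.openshiftcorp.com --query nameServers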

1.9.3.3. Increasing Azure account limits

To increase an account limit, file a support request on the Azure portal.

NOTE

You can increase only one type of quota per support request.

Procedure

1. From the Azure portal, click Help + support in the lower left corner.

2. Click New support request and then select the required values:

a. From the Issue type list, select Service and subscription limits (quotas).

b. From the Subscription list, select the subscription to modify.

c. From the Quota type list, select the quota to increase. For example, select Compute-VM
(cores-vCPUs) subscription limit increases to increase the number of vCPUs, which is
required to install a cluster.

d. Click Next: Solutions.

3. On the Problem Details page, provide the required information for your quota increase:


a. Click Provide details and provide the required details in the Quota details window.

b. In the SUPPORT METHOD and CONTACT INFO sections, provide the issue severity and
your contact details.

4. Click Next: Review + create and then click Create.

1.9.3.4. Certificate signing requests management

Because your cluster has limited access to automatic machine management when you use infrastructure
that you provision, you must provide a mechanism for approving cluster certificate signing requests
(CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The
machine-approver cannot guarantee the validity of a serving certificate that is requested by using
kubelet credentials because it cannot confirm that the correct machine issued the request. You must
determine and implement a method of verifying the validity of the kubelet serving certificate requests
and approving them.
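
For example, after installation you can list pending requests and approve an individual CSR with the OpenShift CLI once you have verified its validity; <csr_name> is a placeholder:

$ oc get csr
$ oc adm certificate approve <csr_name>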

1.9.3.5. Required Azure roles

Your Microsoft Azure account must have the following roles for the subscription that you use:

User Access Administrator

To set roles on the Azure portal, see the Manage access to Azure resources using RBAC and the Azure
portal in the Azure documentation.

1.9.3.6. Creating a service principal

Because OpenShift Container Platform and its installation program must create Microsoft Azure
resources through Azure Resource Manager, you must create a service principal to represent it.

Prerequisites

Install or update the Azure CLI.

Install the jq package.

Your Azure account has the required roles for the subscription that you use.

Procedure

1. Log in to the Azure CLI:

$ az login

Log in to Azure in the web console by using your credentials.

2. If your Azure account uses subscriptions, ensure that you are using the right subscription.

a. View the list of available accounts and record the tenantId value for the subscription you
want to use for your cluster:

$ az account list --refresh

Example output


[
  {
    "cloudName": "AzureCloud",
    "id": "9bab1460-96d5-40b3-a78e-17b15e978a80",
    "isDefault": true,
    "name": "Subscription Name",
    "state": "Enabled",
    "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee",
    "user": {
      "name": "[email protected]",
      "type": "user"
    }
  }
]

b. View your active account details and confirm that the tenantId value matches the
subscription you want to use:

$ az account show

Example output

{
  "environmentName": "AzureCloud",
  "id": "9bab1460-96d5-40b3-a78e-17b15e978a80",
  "isDefault": true,
  "name": "Subscription Name",
  "state": "Enabled",
  "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1
  "user": {
    "name": "[email protected]",
    "type": "user"
  }
}

1 Ensure that the value of the tenantId parameter is the UUID of the correct
subscription.

c. If you are not using the right subscription, change the active subscription:

$ az account set -s <id> 1

1 Substitute the value of the id for the subscription that you want to use for <id>.

d. If you changed the active subscription, display your account information again:

$ az account show

Example output

{
  "environmentName": "AzureCloud",
  "id": "33212d16-bdf6-45cb-b038-f6565b61edda",
  "isDefault": true,
  "name": "Subscription Name",
  "state": "Enabled",
  "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee",
  "user": {
    "name": "[email protected]",
    "type": "user"
  }
}

3. Record the values of the tenantId and id parameters from the previous output. You need these
values during OpenShift Container Platform installation.

4. Create the service principal for your account:

$ az ad sp create-for-rbac --role Contributor --name <service_principal> 1

1 Replace <service_principal> with the name to assign to the service principal.

Example output

Changing "<service_principal>" to a valid URI of "http://<service_principal>", which is the required format used for service principal names
Retrying role assignment creation: 1/36
Retrying role assignment creation: 2/36
Retrying role assignment creation: 3/36
Retrying role assignment creation: 4/36
{
  "appId": "8bd0d04d-0ac2-43a8-928d-705c598c6956",
  "displayName": "<service_principal>",
  "name": "http://<service_principal>",
  "password": "ac461d78-bf4b-4387-ad16-7e32e328aec6",
  "tenant": "6048c7e9-b2ad-488d-a54e-dc3f6be6a7ee"
}

5. Record the values of the appId and password parameters from the previous output. You need
these values during OpenShift Container Platform installation.

6. Grant additional permissions to the service principal.

You must always add the Contributor and User Access Administrator roles to the app
registration service principal so the cluster can assign credentials for its components.

To operate the Cloud Credential Operator (CCO) in mint mode, the app registration service
principal also requires the Azure Active Directory
Graph/Application.ReadWrite.OwnedBy API permission.

To operate the CCO in passthrough mode, the app registration service principal does not
require additional API permissions.

For more information about CCO modes, see the Cloud Credential Operator entry in the Red
Hat Operators reference content.


a. To assign the User Access Administrator role, run the following command:

$ az role assignment create --role "User Access Administrator" \
    --assignee-object-id $(az ad sp list --filter "appId eq '<appId>'" \ 1
    | jq '.[0].objectId' -r)

1 Replace <appId> with the appId parameter value for your service principal.

b. To assign the Azure Active Directory Graph permission, run the following command:

$ az ad app permission add --id <appId> \ 1
    --api 00000002-0000-0000-c000-000000000000 \
    --api-permissions 824c81eb-e3f8-4ee6-8f6d-de7f50d565b7=Role

1 Replace <appId> with the appId parameter value for your service principal.

Example output

Invoking "az ad app permission grant --id 46d33abc-b8a3-46d8-8c84-f0fd58177435 --api 00000002-0000-0000-c000-000000000000" is needed to make the change effective

For more information about the specific permissions that you grant with this command, see
the GUID Table for Windows Azure Active Directory Permissions.

c. Approve the permissions request. If your account does not have the Azure Active Directory
tenant administrator role, follow the guidelines for your organization to request that the
tenant administrator approve your permissions request.

$ az ad app permission grant --id <appId> \ 1
    --api 00000002-0000-0000-c000-000000000000

1 Replace <appId> with the appId parameter value for your service principal.

1.9.3.7. Supported Azure regions

The installation program dynamically generates the list of available Microsoft Azure regions based on
your subscription. The following Azure regions were tested and validated in OpenShift Container
Platform version 4.6.1:

Supported Azure public regions

australiacentral (Australia Central)

australiaeast (Australia East)

australiasoutheast (Australia South East)

brazilsouth (Brazil South)

canadacentral (Canada Central)

canadaeast (Canada East)


centralindia (Central India)

centralus (Central US)

eastasia (East Asia)

eastus (East US)

eastus2 (East US 2)

francecentral (France Central)

germanywestcentral (Germany West Central)

japaneast (Japan East)

japanwest (Japan West)

koreacentral (Korea Central)

koreasouth (Korea South)

northcentralus (North Central US)

northeurope (North Europe)

norwayeast (Norway East)

southafricanorth (South Africa North)

southcentralus (South Central US)

southeastasia (Southeast Asia)

southindia (South India)

switzerlandnorth (Switzerland North)

uaenorth (UAE North)

uksouth (UK South)

ukwest (UK West)

westcentralus (West Central US)

westeurope (West Europe)

westindia (West India)

westus (West US)

westus2 (West US 2)

Supported Azure Government regions


Support for the following Microsoft Azure Government (MAG) regions was added in OpenShift
Container Platform version 4.6:


usgovtexas (US Gov Texas)

usgovvirginia (US Gov Virginia)

You can reference all available MAG regions in the Azure documentation. Other provided MAG regions
are expected to work with OpenShift Container Platform, but have not been tested.

1.9.4. Obtaining the installation program


Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you
have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your
operating system, and place the file in the directory where you will store the installation
configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to
install your cluster. You must keep the installation program and the files that the
installation program creates after you finish installing the cluster. Both files are
required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your
cluster, even if the cluster failed during installation. To remove your cluster,
complete the OpenShift Container Platform uninstallation procedures for your
specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating
system, run the following command:

$ tar xvf openshift-install-linux.tar.gz

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your
installation pull secret as a .txt file. This pull secret allows you to authenticate with the services
that are provided by the included authorities, including Quay.io, which serves the container
images for OpenShift Container Platform components.

1.9.5. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an
SSH key to both your ssh-agent and the installation program. You can use this key to access the
bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches
such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:

$ ssh-keygen -t ed25519 -N '' \
    -f <path>/<file_name> 1

1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location
that you specified.

2. Start the ssh-agent process as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

3. Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation
program. If you install a cluster on infrastructure that you provision, you must provide this key to
your cluster’s machines.

1.9.6. Creating the installation files for Azure


To install OpenShift Container Platform on Microsoft Azure using user-provisioned infrastructure, you
must generate the files that the installation program needs to deploy your cluster and modify them so
that the cluster creates only the machines that it will use. You generate and customize the install-
config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up
a separate var partition during the preparation phases of installation.
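
At a high level, the asset generation described in the following sections runs the installation program's create targets in sequence; the directory name is a placeholder:

$ ./openshift-install create install-config --dir=<installation_directory>
$ ./openshift-install create manifests --dir=<installation_directory>
$ ./openshift-install create ignition-configs --dir=<installation_directory>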

1.9.6.1. Optional: Creating a separate /var partition

It is recommended that disk partitioning for OpenShift Container Platform be left to the installer.
However, there are cases where you might want to create separate partitions in a part of the filesystem
that you expect to grow.

OpenShift Container Platform supports the addition of a single partition to attach storage to either the
/var partition or a subdirectory of /var. For example:

/var/lib/containers: Holds container-related content that can grow as more images and
containers are added to a system.

/var/lib/etcd: Holds data that you might want to keep separate for purposes such as
performance optimization of etcd storage.

/var: Holds data that you might want to keep separate for purposes such as auditing.

Storing the contents of a /var directory separately makes it easier to grow storage for those areas as
needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this
method, you will not have to pull all your containers again, nor will you have to copy massive log files
when you update systems.

Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS),
the following procedure sets up the separate /var partition by creating a machine config that is inserted
during the openshift-install preparation phases of an OpenShift Container Platform installation.

IMPORTANT

If you follow the steps to create a separate /var partition in this procedure, it is not
necessary to create the Kubernetes manifest and Ignition config files again as described
later in this section.

Procedure

1. Create a directory to hold the OpenShift Container Platform installation files:

$ mkdir $HOME/clusterconfig

2. Run openshift-install to create a set of files in the manifest and openshift subdirectories.
Answer the system questions as you are prompted:

$ openshift-install create manifests --dir $HOME/clusterconfig
? SSH Public Key ...

$ ls $HOME/clusterconfig/openshift/
99_kubeadmin-password-secret.yaml
99_openshift-cluster-api_master-machines-0.yaml
99_openshift-cluster-api_master-machines-1.yaml
99_openshift-cluster-api_master-machines-2.yaml
...

3. Create a MachineConfig object and add it to a file in the openshift directory. For example,
name the file 98-var-partition.yaml, change the disk device name to the name of the storage
device on the worker systems, and set the storage size as appropriate. This attaches storage to
a separate /var directory.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-var-partition
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      disks:
      - device: /dev/<device_name> 1
        partitions:
        - sizeMiB: <partition_size>
          startMiB: <partition_start_offset> 2
          label: var
      filesystems:
      - path: /var
        device: /dev/disk/by-partlabel/var
        format: xfs
    systemd:
      units:
      - name: var.mount
        enabled: true
        contents: |
          [Unit]
          Before=local-fs.target
          [Mount]
          Where=/var
          What=/dev/disk/by-partlabel/var
          [Install]
          WantedBy=local-fs.target

1 The storage device name of the disk that you want to partition.

2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes)
is recommended. The root file system is automatically resized to fill all available space up
to the specified offset. If no value is specified, or if the specified value is smaller than the
recommended minimum, the resulting root file system will be too small, and future
reinstalls of RHCOS might overwrite the beginning of the data partition.

4. Run openshift-install again to create Ignition configs from a set of files in the manifest and
openshift subdirectories:

$ openshift-install create ignition-configs --dir $HOME/clusterconfig


$ ls $HOME/clusterconfig/
auth bootstrap.ign master.ign metadata.json worker.ign

Now you can use the Ignition config files as input to the installation procedures to install Red Hat
Enterprise Linux CoreOS (RHCOS) systems.

1.9.6.2. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Microsoft Azure.

Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.

Procedure

1. Create the install-config.yaml file.

a. Change to the directory that contains the installation program and run the following
command:

$ ./openshift-install create install-config --dir=<installation_directory> 1

1 For <installation_directory>, specify the directory name to store the files that the
installation program creates.

IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509
certificates, have short expiration intervals, so you must not reuse an installation
directory. If you want to reuse individual files from another cluster installation,
you can copy them into your directory. However, the file names for the installation
assets might change between releases. Use caution when copying installation files
from an earlier OpenShift Container Platform version.

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want
to perform installation debugging or disaster recovery, specify an SSH
key that your ssh-agent process uses.

ii. Select azure as the platform to target.

iii. If you do not have a Microsoft Azure profile stored on your computer, specify the
following Azure parameter values for your subscription and service principal:

azure subscription id: The subscription ID to use for the cluster. Specify the id
value in your account output.

azure tenant id: The tenant ID. Specify the tenantId value in your account output.

azure service principal client id: The value of the appId parameter for the service
principal.

azure service principal client secret: The value of the password parameter for the
service principal.

iv. Select the region to deploy the cluster to.

v. Select the base domain to deploy the cluster to. The base domain corresponds to the
Azure DNS Zone that you created for your cluster.

vi. Enter a descriptive name for your cluster.

IMPORTANT

All Azure resources that are available through public endpoints are
subject to resource name restrictions, and you cannot create resources
that use certain terms. For a list of terms that Azure restricts, see
Resolve reserved resource name errors in the Azure documentation.

vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat
OpenShift Cluster Manager site.

2. Modify the install-config.yaml file. You can find more information about the available
parameters in the Installation configuration parameters section.

3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you
want to reuse the file, you must back it up now.

1.9.6.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS
proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by
configuring the proxy settings in the install-config.yaml file.

Prerequisites

You have an existing install-config.yaml file.

You reviewed the sites that your cluster requires access to and determined whether any of
them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to
hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to
bypass the proxy if necessary.

NOTE

The Proxy object status.noProxy field is populated with the values of the
networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and
networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP),
Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object
status.noProxy field is also populated with the instance metadata endpoint
(169.254.169.254).

Procedure

1. Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
httpProxy: http://<username>:<pswd>@<ip>:<port> 1
httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
noProxy: example.com 3
additionalTrustBundle: | 4
-----BEGIN CERTIFICATE-----
<MY_TRUSTED_CA_CERT>
-----END CERTIFICATE-----
...

1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme
must be http. If you use an MITM transparent proxy network that does not require
additional proxy configuration but requires additional CAs, you must not specify an
httpProxy value.

2 A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not
specified, then httpProxy is used for both HTTP and HTTPS connections. If you use an
MITM transparent proxy network that does not require additional proxy configuration but
requires additional CAs, you must not specify an httpsProxy value.

3 A comma-separated list of destination domain names, IP addresses, or other
network CIDRs to exclude from proxying. Preface a domain with . to include all subdomains of
that domain. Use * to bypass the proxy for all destinations.

4 If provided, the installation program generates a config map that is named user-ca-bundle
in the openshift-config namespace that contains one or more additional CA certificates
that are required for proxying HTTPS connections. The Cluster Network Operator then
creates a trusted-ca-bundle config map that merges these contents with the Red Hat
Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the
Proxy object’s trustedCA field. The additionalTrustBundle field is required unless the
proxy’s identity certificate is signed by an authority from the RHCOS trust bundle. If you
use an MITM transparent proxy network that does not require additional proxy
configuration but requires additional CAs, you must provide the MITM CA certificate.

NOTE

The installation program does not support the proxy readinessEndpoints field.


2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings
in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still
created, but it will have a nil spec.

NOTE

Only the Proxy object named cluster is supported, and no additional proxies can be
created.
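
After the cluster is running, you can review the proxy configuration that the installation program
created; for example:

$ oc get proxy/cluster -o yaml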

1.9.6.4. Exporting common variables for ARM templates

You must export a common set of variables that are used with the provided Azure Resource Manager
(ARM) templates to assist in completing a user-provisioned infrastructure installation on Microsoft Azure.

NOTE

Specific ARM templates can also require additional exported variables, which are detailed
in their related procedures.

Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.

Procedure

1. Export common variables found in the install-config.yaml to be used by the provided ARM
templates:

$ export CLUSTER_NAME=<cluster_name> 1
$ export AZURE_REGION=<azure_region> 2
$ export SSH_KEY=<ssh_key> 3
$ export BASE_DOMAIN=<base_domain> 4
$ export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5

1 The value of the .metadata.name attribute from the install-config.yaml file.

2 The region to deploy the cluster into, for example centralus. This is the value of the
.platform.azure.region attribute from the install-config.yaml file.

3 The SSH RSA public key file as a string. You must enclose the SSH key in quotes since it
contains spaces. This is the value of the .sshKey attribute from the install-config.yaml
file.

4 The base domain to deploy the cluster to. The base domain corresponds to the public
DNS zone that you created for your cluster. This is the value of the .baseDomain attribute
from the install-config.yaml file.

5 The resource group where the public DNS zone exists. This is the value of the
.platform.azure.baseDomainResourceGroupName attribute from the install-
config.yaml file.


For example:

$ export CLUSTER_NAME=test-cluster
$ export AZURE_REGION=centralus
$ export SSH_KEY="ssh-rsa xxx/xxx/xxx= [email protected]"
$ export BASE_DOMAIN=example.com
$ export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster

2. Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.

1.9.6.5. Creating the Kubernetes manifest and Ignition config files

Because you must modify some cluster definition files and manually start the cluster machines, you must
generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines.

The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the
Ignition configuration files, which are later used to create the cluster.

IMPORTANT

The Ignition config files that the installation program generates contain certificates that
expire after 24 hours, which are then renewed at that time. If the cluster is shut down
before renewing the certificates and the cluster is later restarted after the 24 hours have
elapsed, the cluster automatically recovers the expired certificates. The exception is that
you must manually approve the pending node-bootstrapper certificate signing requests
(CSRs) to recover kubelet certificates. See the documentation for Recovering from
expired control plane certificates for more information.

Prerequisites

You obtained the OpenShift Container Platform installation program.

You created the install-config.yaml installation configuration file.

Procedure

1. Change to the directory that contains the installation program and generate the Kubernetes
manifests for the cluster:

$ ./openshift-install create manifests --dir=<installation_directory> 1

Example output

INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials"
INFO Consuming Install Config from target directory
INFO Manifests created in: install_dir/manifests and install_dir/openshift


1 For <installation_directory>, specify the installation directory that contains the install-
config.yaml file you created.

2. Remove the Kubernetes manifest files that define the control plane machines:

$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml

By removing these files, you prevent the cluster from automatically generating control plane
machines.

3. Remove the Kubernetes manifest files that define the worker machines:

$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml

Because you create and manage the worker machines yourself, you do not need to initialize
these machines.

4. Check that the mastersSchedulable parameter in the
<installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest
file is set to false. This setting prevents pods from being scheduled on the control plane
machines:

a. Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.

b. Locate the mastersSchedulable parameter and ensure that it is set to false.

c. Save and exit the file.
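
You can also verify the value from the command line; for example, the following check
assumes standard grep:

$ grep mastersSchedulable <installation_directory>/manifests/cluster-scheduler-02-config.yml

Example output

mastersSchedulable: false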

5. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove
the privateZone and publicZone sections from the
<installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file:

apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: null
  name: cluster
spec:
  baseDomain: example.openshift.com
  privateZone: 1
    id: mycluster-100419-private-zone
  publicZone: 2
    id: example.openshift.com
status: {}

1 2 Remove this section completely.

If you do so, you must add ingress DNS records manually in a later step.
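
Alternatively, you can remove both sections from the command line. The following is a
minimal sketch that assumes GNU sed and the two-line privateZone and publicZone
sections shown in the previous example:

$ sed -i '/privateZone:/,+1d;/publicZone:/,+1d' <installation_directory>/manifests/cluster-dns-02-config.yml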

6. When configuring Azure on user-provisioned infrastructure, you must export some common
variables defined in the manifest files to use later in the Azure Resource Manager (ARM)
templates:

a. Export the infrastructure ID by using the following command:


$ export INFRA_ID=<infra_id> 1

1 The OpenShift Container Platform cluster has been assigned an identifier (INFRA_ID)
in the form of <cluster_name>-<random_string>. This will be used as the base name
for most resources created using the provided ARM templates. This is the value of the
.status.infrastructureName attribute from the manifests/cluster-infrastructure-02-
config.yml file.

b. Export the resource group by using the following command:

$ export RESOURCE_GROUP=<resource_group> 1

1 All resources created in this Azure deployment exist as part of a resource group. The
resource group name is also based on the INFRA_ID, in the form of <cluster_name>-
<random_string>-rg. This is the value of the
.status.platformStatus.azure.resourceGroupName attribute from the
manifests/cluster-infrastructure-02-config.yml file.
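
Both values can also be extracted directly from the manifest file. The following is a
minimal sketch that assumes standard grep and awk and the manifest layout described
above:

$ export INFRA_ID=$(grep infrastructureName: <installation_directory>/manifests/cluster-infrastructure-02-config.yml | awk '{print $2}')
$ export RESOURCE_GROUP=$(grep resourceGroupName: <installation_directory>/manifests/cluster-infrastructure-02-config.yml | awk '{print $2}')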

7. To create the Ignition configuration files, run the following command from the directory that
contains the installation program:

$ ./openshift-install create ignition-configs --dir=<installation_directory> 1

1 For <installation_directory>, specify the same installation directory.

The following files are generated in the directory:

.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign

1.9.7. Creating the Azure resource group and identity


You must create a Microsoft Azure resource group and an identity for that resource group. These are
both used during the installation of your OpenShift Container Platform cluster on Azure.

Prerequisites

Configure an Azure account.

Generate the Ignition config files for your cluster.

Procedure

1. Create the resource group in a supported Azure region:


$ az group create --name ${RESOURCE_GROUP} --location ${AZURE_REGION}

2. Create an Azure identity for the resource group:

$ az identity create -g ${RESOURCE_GROUP} -n ${INFRA_ID}-identity

This is used to grant the required access to Operators in your cluster. For example, this allows
the Ingress Operator to create a public IP and its load balancer. You must assign the Azure
identity to a role.

3. Grant the Contributor role to the Azure identity:

a. Export the following variables required by the Azure role assignment:

$ export PRINCIPAL_ID=`az identity show -g ${RESOURCE_GROUP} -n ${INFRA_ID}-identity --query principalId --out tsv`

$ export RESOURCE_GROUP_ID=`az group show -g ${RESOURCE_GROUP} --query id --out tsv`

b. Assign the Contributor role to the identity:

$ az role assignment create --assignee "${PRINCIPAL_ID}" --role 'Contributor' --scope "${RESOURCE_GROUP_ID}"

1.9.8. Uploading the RHCOS cluster image and bootstrap Ignition config file

The Azure client does not support deployments based on files existing locally; therefore, you must copy
and store the RHCOS virtual hard disk (VHD) cluster image and bootstrap Ignition config file in a
storage container so they are accessible during deployment.

Prerequisites

Configure an Azure account.

Generate the Ignition config files for your cluster.

Procedure

1. Create an Azure storage account to store the VHD cluster image:

$ az storage account create -g ${RESOURCE_GROUP} --location ${AZURE_REGION} --name ${CLUSTER_NAME}sa --kind Storage --sku Standard_LRS


WARNING

The Azure storage account name must be between 3 and 24 characters in
length and use numbers and lower-case letters only. If your
CLUSTER_NAME variable does not follow these restrictions, you must
manually define the Azure storage account name. For more information on
Azure storage account name restrictions, see Resolve errors for storage
account names in the Azure documentation.
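
For example, the following sketch derives a conforming name from your CLUSTER_NAME
value by lowercasing it, removing invalid characters, and truncating the result to 24
characters. The SA_NAME variable is hypothetical and used here only for illustration; if you
use it, substitute ${SA_NAME} for ${CLUSTER_NAME}sa in the subsequent commands:

$ export SA_NAME=$(echo "${CLUSTER_NAME}sa" | tr '[:upper:]' '[:lower:]' | tr -cd 'a-z0-9' | cut -c1-24)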

2. Export the storage account key as an environment variable:

$ export ACCOUNT_KEY=`az storage account keys list -g ${RESOURCE_GROUP} --account-name ${CLUSTER_NAME}sa --query "[0].value" -o tsv`

3. Choose the RHCOS version to use and export the URL of its VHD to an environment variable:

$ export VHD_URL=`curl -s https://raw.githubusercontent.com/openshift/installer/release-4.6/data/data/rhcos.json | jq -r .azure.url`

IMPORTANT

The RHCOS images might not change with every release of OpenShift Container
Platform. You must specify an image with the highest version that is less than or
equal to the OpenShift Container Platform version that you install. Use the image
version that matches your OpenShift Container Platform version if it is available.

4. Copy the chosen VHD to a blob:

$ az storage container create --name vhd --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY}

$ az storage blob copy start --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY} --destination-blob "rhcos.vhd" --destination-container vhd --source-uri "${VHD_URL}"

To track the progress of the VHD copy task, run this script:

status="unknown"
while [ "$status" != "success" ]
do
status=`az storage blob show --container-name vhd --name "rhcos.vhd" --account-name
${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY} -o tsv --query
properties.copy.status`
echo $status
done

5. Create a blob storage container and upload the generated bootstrap.ign file:


$ az storage container create --name files --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY} --public-access blob

$ az storage blob upload --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY} -c "files" -f "<installation_directory>/bootstrap.ign" -n "bootstrap.ign"

1.9.9. Example for creating DNS zones


DNS records are required for clusters that use user-provisioned infrastructure. You should choose the
DNS strategy that fits your scenario.

For this example, Azure’s DNS solution is used, so you will create a new public DNS zone for external
(internet) visibility and a private DNS zone for internal cluster resolution.

NOTE

The public DNS zone is not required to exist in the same resource group as the cluster
deployment and might already exist in your organization for the desired base domain. If
that is the case, you can skip creating the public DNS zone; be sure the installation config
you generated earlier reflects that scenario.

Prerequisites

Configure an Azure account.

Generate the Ignition config files for your cluster.

Procedure

1. Create the new public DNS zone in the resource group exported in the
BASE_DOMAIN_RESOURCE_GROUP environment variable:

$ az network dns zone create -g ${BASE_DOMAIN_RESOURCE_GROUP} -n ${CLUSTER_NAME}.${BASE_DOMAIN}

You can skip this step if you are using a public DNS zone that already exists.

2. Create the private DNS zone in the same resource group as the rest of this deployment:

$ az network private-dns zone create -g ${RESOURCE_GROUP} -n ${CLUSTER_NAME}.${BASE_DOMAIN}

For more information about configuring a public DNS zone in Azure, see the Configuring a public DNS zone in Azure section.

1.9.10. Creating a VNet in Azure


You must create a virtual network (VNet) in Microsoft Azure for your OpenShift Container Platform
cluster to use. You can customize the VNet to meet your requirements. One way to create the VNet is to
modify the provided Azure Resource Manager (ARM) template.

NOTE

If you do not use the provided ARM template to create your Azure infrastructure, you
must review the provided information and manually create the infrastructure. If your
cluster does not initialize correctly, you might have to contact Red Hat support with your
installation logs.

Prerequisites

Configure an Azure account.

Generate the Ignition config files for your cluster.

Procedure

1. Copy the template from the ARM template for the VNet section of this topic and save it as
01_vnet.json in your cluster’s installation directory. This template describes the VNet that your
cluster requires.

2. Create the deployment by using the az CLI:

$ az deployment group create -g ${RESOURCE_GROUP} \
  --template-file "<installation_directory>/01_vnet.json" \
  --parameters baseName="${INFRA_ID}" 1

1 The base name to be used in resource names; this is usually the cluster’s infrastructure ID.

3. Link the VNet template to the private DNS zone:

$ az network private-dns link vnet create -g ${RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n ${INFRA_ID}-network-link -v "${INFRA_ID}-vnet" -e false
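
Optionally, verify that the link was created; for example:

$ az network private-dns link vnet list -g ${RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -o table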

1.9.10.1. ARM template for the VNet

You can use the following Azure Resource Manager (ARM) template to deploy the VNet that you need
for your OpenShift Container Platform cluster:

Example 1.1. 01_vnet.json ARM template

{
"$schema" : "https://schema.management.azure.com/schemas/2015-01-
01/deploymentTemplate.json#",
"contentVersion" : "1.0.0.0",
"parameters" : {
"baseName" : {
"type" : "string",
"minLength" : 1,
"metadata" : {
"description" : "Base name to be used in resource names (usually the cluster's Infra ID)"
}
}
},


"variables" : {
"location" : "[resourceGroup().location]",
"virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]",
"addressPrefix" : "10.0.0.0/16",
"masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]",
"masterSubnetPrefix" : "10.0.0.0/24",
"nodeSubnetName" : "[concat(parameters('baseName'), '-worker-subnet')]",
"nodeSubnetPrefix" : "10.0.1.0/24",
"clusterNsgName" : "[concat(parameters('baseName'), '-nsg')]"
},
"resources" : [
{
"apiVersion" : "2018-12-01",
"type" : "Microsoft.Network/virtualNetworks",
"name" : "[variables('virtualNetworkName')]",
"location" : "[variables('location')]",
"dependsOn" : [
"[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]"
],
"properties" : {
"addressSpace" : {
"addressPrefixes" : [
"[variables('addressPrefix')]"
]
},
"subnets" : [
{
"name" : "[variables('masterSubnetName')]",
"properties" : {
"addressPrefix" : "[variables('masterSubnetPrefix')]",
"serviceEndpoints": [],
"networkSecurityGroup" : {
"id" : "[resourceId('Microsoft.Network/networkSecurityGroups',
variables('clusterNsgName'))]"
}
}
},
{
"name" : "[variables('nodeSubnetName')]",
"properties" : {
"addressPrefix" : "[variables('nodeSubnetPrefix')]",
"serviceEndpoints": [],
"networkSecurityGroup" : {
"id" : "[resourceId('Microsoft.Network/networkSecurityGroups',
variables('clusterNsgName'))]"
}
}
}
]
}
},
{
"type" : "Microsoft.Network/networkSecurityGroups",
"name" : "[variables('clusterNsgName')]",
"apiVersion" : "2018-10-01",
"location" : "[variables('location')]",


"properties" : {
"securityRules" : [
{
"name" : "apiserver_in",
"properties" : {
"protocol" : "Tcp",
"sourcePortRange" : "*",
"destinationPortRange" : "6443",
"sourceAddressPrefix" : "*",
"destinationAddressPrefix" : "*",
"access" : "Allow",
"priority" : 101,
"direction" : "Inbound"
}
}
]
}
}
]
}

1.9.11. Deploying the RHCOS cluster image for the Azure infrastructure

You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Microsoft Azure for your
OpenShift Container Platform nodes.

Prerequisites

Configure an Azure account.

Generate the Ignition config files for your cluster.

Store the RHCOS virtual hard disk (VHD) cluster image in an Azure storage container.

Store the bootstrap Ignition config file in an Azure storage container.

Procedure

1. Copy the template from the ARM template for image storage section of this topic and save it
as 02_storage.json in your cluster’s installation directory. This template describes the image
storage that your cluster requires.

2. Export the RHCOS VHD blob URL as a variable:

$ export VHD_BLOB_URL=`az storage blob url --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY} -c vhd -n "rhcos.vhd" -o tsv`

3. Deploy the cluster image:

$ az deployment group create -g ${RESOURCE_GROUP} \
  --template-file "<installation_directory>/02_storage.json" \
  --parameters vhdBlobURL="${VHD_BLOB_URL}" \ 1
  --parameters baseName="${INFRA_ID}" 2


1 The blob URL of the RHCOS VHD to be used to create master and worker machines.

2 The base name to be used in resource names; this is usually the cluster’s infrastructure ID.

1.9.11.1. ARM template for image storage

You can use the following Azure Resource Manager (ARM) template to deploy the stored Red Hat
Enterprise Linux CoreOS (RHCOS) image that you need for your OpenShift Container Platform cluster:

Example 1.2. 02_storage.json ARM template

{
"$schema" : "https://schema.management.azure.com/schemas/2015-01-
01/deploymentTemplate.json#",
"contentVersion" : "1.0.0.0",
"parameters" : {
"baseName" : {
"type" : "string",
"minLength" : 1,
"metadata" : {
"description" : "Base name to be used in resource names (usually the cluster's Infra ID)"
}
},
"vhdBlobURL" : {
"type" : "string",
"metadata" : {
"description" : "URL pointing to the blob where the VHD to be used to create master and
worker machines is located"
}
}
},
"variables" : {
"location" : "[resourceGroup().location]",
"imageName" : "[concat(parameters('baseName'), '-image')]"
},
"resources" : [
{
"apiVersion" : "2018-06-01",
"type": "Microsoft.Compute/images",
"name": "[variables('imageName')]",
"location" : "[variables('location')]",
"properties": {
"storageProfile": {
"osDisk": {
"osType": "Linux",
"osState": "Generalized",
"blobUri": "[parameters('vhdBlobURL')]",
"storageAccountType": "Standard_LRS"
}
}
}
}
]
}


1.9.12. Creating networking and load balancing components in Azure


You must configure networking and load balancing in Microsoft Azure for your OpenShift Container
Platform cluster to use. One way to create these components is to modify the provided Azure Resource
Manager (ARM) template.

NOTE

If you do not use the provided ARM template to create your Azure infrastructure, you
must review the provided information and manually create the infrastructure. If your
cluster does not initialize correctly, you might have to contact Red Hat support with your
installation logs.

Prerequisites

Configure an Azure account.

Generate the Ignition config files for your cluster.

Create and configure a VNet and associated subnets in Azure.

Procedure

1. Copy the template from the ARM template for the network and load balancers section of this
topic and save it as 03_infra.json in your cluster’s installation directory. This template describes
the networking and load balancing objects that your cluster requires.

2. Create the deployment by using the az CLI:

$ az deployment group create -g ${RESOURCE_GROUP} \
  --template-file "<installation_directory>/03_infra.json" \
  --parameters privateDNSZoneName="${CLUSTER_NAME}.${BASE_DOMAIN}" \ 1
  --parameters baseName="${INFRA_ID}" 2

1 The name of the private DNS zone.

2 The base name to be used in resource names; this is usually the cluster’s infrastructure ID.

3. Create an api DNS record in the public zone for the API public load balancer. The
${BASE_DOMAIN_RESOURCE_GROUP} variable must point to the resource group where the
public DNS zone exists.

a. Export the following variable:

$ export PUBLIC_IP=`az network public-ip list -g ${RESOURCE_GROUP} --query "[?name=='${INFRA_ID}-master-pip'] | [0].ipAddress" -o tsv`

b. Create the DNS record in a new public zone:

$ az network dns record-set a add-record -g ${BASE_DOMAIN_RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n api -a ${PUBLIC_IP} --ttl 60

c. If you are adding the cluster to an existing public zone, you can create the DNS record in it
instead:

$ az network dns record-set a add-record -g ${BASE_DOMAIN_RESOURCE_GROUP} -z ${BASE_DOMAIN} -n api.${CLUSTER_NAME} -a ${PUBLIC_IP} --ttl 60

1.9.12.1. ARM template for the network and load balancers

You can use the following Azure Resource Manager (ARM) template to deploy the networking objects
and load balancers that you need for your OpenShift Container Platform cluster:

Example 1.3. 03_infra.json ARM template

{
"$schema" : "https://schema.management.azure.com/schemas/2015-01-
01/deploymentTemplate.json#",
"contentVersion" : "1.0.0.0",
"parameters" : {
"baseName" : {
"type" : "string",
"minLength" : 1,
"metadata" : {
"description" : "Base name to be used in resource names (usually the cluster's Infra ID)"
}
},
"privateDNSZoneName" : {
"type" : "string",
"metadata" : {
"description" : "Name of the private DNS zone"
}
}
},
"variables" : {
"location" : "[resourceGroup().location]",
"virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]",
"virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks',
variables('virtualNetworkName'))]",
"masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]",
"masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/',
variables('masterSubnetName'))]",
"masterPublicIpAddressName" : "[concat(parameters('baseName'), '-master-pip')]",
"masterPublicIpAddressID" : "[resourceId('Microsoft.Network/publicIPAddresses',
variables('masterPublicIpAddressName'))]",
"masterLoadBalancerName" : "[concat(parameters('baseName'), '-public-lb')]",
"masterLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers',
variables('masterLoadBalancerName'))]",
"internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]",
"internalLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers',
variables('internalLoadBalancerName'))]",
"skuName": "Standard"
},
"resources" : [
{
"apiVersion" : "2018-12-01",
"type" : "Microsoft.Network/publicIPAddresses",


"name" : "[variables('masterPublicIpAddressName')]",
"location" : "[variables('location')]",
"sku": {
"name": "[variables('skuName')]"
},
"properties" : {
"publicIPAllocationMethod" : "Static",
"dnsSettings" : {
"domainNameLabel" : "[variables('masterPublicIpAddressName')]"
}
}
},
{
"apiVersion" : "2018-12-01",
"type" : "Microsoft.Network/loadBalancers",
"name" : "[variables('masterLoadBalancerName')]",
"location" : "[variables('location')]",
"sku": {
"name": "[variables('skuName')]"
},
"dependsOn" : [
"[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]"
],
"properties" : {
"frontendIPConfigurations" : [
{
"name" : "public-lb-ip",
"properties" : {
"publicIPAddress" : {
"id" : "[variables('masterPublicIpAddressID')]"
}
}
}
],
"backendAddressPools" : [
{
"name" : "public-lb-backend"
}
],
"loadBalancingRules" : [
{
"name" : "api-internal",
"properties" : {
"frontendIPConfiguration" : {
"id" :"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-
ip')]"
},
"backendAddressPool" : {
"id" : "[concat(variables('masterLoadBalancerID'), '/backendAddressPools/public-lb-
backend')]"
},
"protocol" : "Tcp",
"loadDistribution" : "Default",
"idleTimeoutInMinutes" : 30,
"frontendPort" : 6443,
"backendPort" : 6443,


"probe" : {
"id" : "[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]"
}
}
}
],
"probes" : [
{
"name" : "api-internal-probe",
"properties" : {
"protocol" : "Https",
"port" : 6443,
"requestPath": "/readyz",
"intervalInSeconds" : 10,
"numberOfProbes" : 3
}
}
]
}
},
{
"apiVersion" : "2018-12-01",
"type" : "Microsoft.Network/loadBalancers",
"name" : "[variables('internalLoadBalancerName')]",
"location" : "[variables('location')]",
"sku": {
"name": "[variables('skuName')]"
},
"properties" : {
"frontendIPConfigurations" : [
{
"name" : "internal-lb-ip",
"properties" : {
"privateIPAllocationMethod" : "Dynamic",
"subnet" : {
"id" : "[variables('masterSubnetRef')]"
},
"privateIPAddressVersion" : "IPv4"
}
}
],
"backendAddressPools" : [
{
"name" : "internal-lb-backend"
}
],
"loadBalancingRules" : [
{
"name" : "api-internal",
"properties" : {
"frontendIPConfiguration" : {
"id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-
ip')]"
},
"frontendPort" : 6443,
"backendPort" : 6443,


"enableFloatingIP" : false,
"idleTimeoutInMinutes" : 30,
"protocol" : "Tcp",
"enableTcpReset" : false,
"loadDistribution" : "Default",
"backendAddressPool" : {
"id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-
backend')]"
},
"probe" : {
"id" : "[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]"
}
}
},
{
"name" : "sint",
"properties" : {
"frontendIPConfiguration" : {
"id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-
ip')]"
},
"frontendPort" : 22623,
"backendPort" : 22623,
"enableFloatingIP" : false,
"idleTimeoutInMinutes" : 30,
"protocol" : "Tcp",
"enableTcpReset" : false,
"loadDistribution" : "Default",
"backendAddressPool" : {
"id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-
backend')]"
},
"probe" : {
"id" : "[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]"
}
}
}
],
"probes" : [
{
"name" : "api-internal-probe",
"properties" : {
"protocol" : "Https",
"port" : 6443,
"requestPath": "/readyz",
"intervalInSeconds" : 10,
"numberOfProbes" : 3
}
},
{
"name" : "sint-probe",
"properties" : {
"protocol" : "Https",
"port" : 22623,
"requestPath": "/healthz",
"intervalInSeconds" : 10,


"numberOfProbes" : 3
}
}
]
}
},
{
"apiVersion": "2018-09-01",
"type": "Microsoft.Network/privateDnsZones/A",
"name": "[concat(parameters('privateDNSZoneName'), '/api')]",
"location" : "[variables('location')]",
"dependsOn" : [
"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]"
],
"properties": {
"ttl": 60,
"aRecords": [
{
"ipv4Address": "
[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIP
Address]"
}
]
}
},
{
"apiVersion": "2018-09-01",
"type": "Microsoft.Network/privateDnsZones/A",
"name": "[concat(parameters('privateDNSZoneName'), '/api-int')]",
"location" : "[variables('location')]",
"dependsOn" : [
"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]"
],
"properties": {
"ttl": 60,
"aRecords": [
{
"ipv4Address": "
[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIP
Address]"
}
]
}
}
]
}

1.9.13. Creating the bootstrap machine in Azure


You must create the bootstrap machine in Microsoft Azure to use during OpenShift Container Platform
cluster initialization. One way to create this machine is to modify the provided Azure Resource Manager
(ARM) template.

NOTE

If you do not use the provided ARM template to create your bootstrap machine, you must
review the provided information and manually create the infrastructure. If your cluster
does not initialize correctly, you might have to contact Red Hat support with your
installation logs.

Prerequisites

Configure an Azure account.

Generate the Ignition config files for your cluster.

Create and configure a VNet and associated subnets in Azure.

Create and configure networking and load balancers in Azure.

Create control plane and compute roles.

Procedure

1. Copy the template from the ARM template for the bootstrap machine section of this topic
and save it as 04_bootstrap.json in your cluster’s installation directory. This template describes
the bootstrap machine that your cluster requires.

2. Export the following variables required by the bootstrap machine deployment:

$ export BOOTSTRAP_URL=`az storage blob url --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY} -c "files" -n "bootstrap.ign" -o tsv`
$ export BOOTSTRAP_IGNITION=`jq -rcnM --arg v "3.1.0" --arg url ${BOOTSTRAP_URL} '{ignition:{version:$v,config:{replace:{source:$url}}}}' | base64 | tr -d '\n'`
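
The BOOTSTRAP_IGNITION variable holds a base64-encoded stub Ignition config that
redirects the bootstrap machine to the uploaded bootstrap.ign file. To inspect it, you can
decode it; the output is similar to the following, with the source URL elided:

$ echo ${BOOTSTRAP_IGNITION} | base64 -d | jq .

Example output

{
  "ignition": {
    "version": "3.1.0",
    "config": {
      "replace": {
        "source": "https://.../bootstrap.ign"
      }
    }
  }
}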

3. Create the deployment by using the az CLI:

$ az deployment group create -g ${RESOURCE_GROUP} \
  --template-file "<installation_directory>/04_bootstrap.json" \
  --parameters bootstrapIgnition="${BOOTSTRAP_IGNITION}" \ 1
  --parameters sshKeyData="${SSH_KEY}" \ 2
  --parameters baseName="${INFRA_ID}" 3

1 The bootstrap Ignition content for the bootstrap cluster.

2 The SSH RSA public key file as a string.

3 The base name to be used in resource names; this is usually the cluster’s infrastructure ID.

1.9.13.1. ARM template for the bootstrap machine

You can use the following Azure Resource Manager (ARM) template to deploy the bootstrap machine
that you need for your OpenShift Container Platform cluster:

Example 1.4. 04_bootstrap.json ARM template

{
"$schema" : "https://schema.management.azure.com/schemas/2015-01-
01/deploymentTemplate.json#",
"contentVersion" : "1.0.0.0",
"parameters" : {
"baseName" : {
"type" : "string",
"minLength" : 1,
"metadata" : {
"description" : "Base name to be used in resource names (usually the cluster's Infra ID)"
}
},
"bootstrapIgnition" : {
"type" : "string",
"minLength" : 1,
"metadata" : {
"description" : "Bootstrap ignition content for the bootstrap cluster"
}
},
"sshKeyData" : {
"type" : "securestring",
"metadata" : {
"description" : "SSH RSA public key file as a string."
}
},
"bootstrapVMSize" : {
"type" : "string",
"defaultValue" : "Standard_D4s_v3",
"allowedValues" : [
"Standard_A2",
"Standard_A3",
"Standard_A4",
"Standard_A5",
"Standard_A6",
"Standard_A7",
"Standard_A8",
"Standard_A9",
"Standard_A10",
"Standard_A11",
"Standard_D2",
"Standard_D3",
"Standard_D4",
"Standard_D11",
"Standard_D12",
"Standard_D13",
"Standard_D14",
"Standard_D2_v2",
"Standard_D3_v2",
"Standard_D4_v2",
"Standard_D5_v2",
"Standard_D8_v3",
"Standard_D11_v2",
"Standard_D12_v2",
"Standard_D13_v2",
"Standard_D14_v2",
"Standard_E2_v3",
"Standard_E4_v3",


"Standard_E8_v3",
"Standard_E16_v3",
"Standard_E32_v3",
"Standard_E64_v3",
"Standard_E2s_v3",
"Standard_E4s_v3",
"Standard_E8s_v3",
"Standard_E16s_v3",
"Standard_E32s_v3",
"Standard_E64s_v3",
"Standard_G1",
"Standard_G2",
"Standard_G3",
"Standard_G4",
"Standard_G5",
"Standard_DS2",
"Standard_DS3",
"Standard_DS4",
"Standard_DS11",
"Standard_DS12",
"Standard_DS13",
"Standard_DS14",
"Standard_DS2_v2",
"Standard_DS3_v2",
"Standard_DS4_v2",
"Standard_DS5_v2",
"Standard_DS11_v2",
"Standard_DS12_v2",
"Standard_DS13_v2",
"Standard_DS14_v2",
"Standard_GS1",
"Standard_GS2",
"Standard_GS3",
"Standard_GS4",
"Standard_GS5",
"Standard_D2s_v3",
"Standard_D4s_v3",
"Standard_D8s_v3"
],
"metadata" : {
"description" : "The size of the Bootstrap Virtual Machine"
}
}
},
"variables" : {
"location" : "[resourceGroup().location]",
"virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]",
"virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks',
variables('virtualNetworkName'))]",
"masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]",
"masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/',
variables('masterSubnetName'))]",
"masterLoadBalancerName" : "[concat(parameters('baseName'), '-public-lb')]",
"internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]",
"sshKeyPath" : "/home/core/.ssh/authorized_keys",
"identityName" : "[concat(parameters('baseName'), '-identity')]",


"vmName" : "[concat(parameters('baseName'), '-bootstrap')]",


"nicName" : "[concat(variables('vmName'), '-nic')]",
"imageName" : "[concat(parameters('baseName'), '-image')]",
"clusterNsgName" : "[concat(parameters('baseName'), '-nsg')]",
"sshPublicIpAddressName" : "[concat(variables('vmName'), '-ssh-pip')]"
},
"resources" : [
{
"apiVersion" : "2018-12-01",
"type" : "Microsoft.Network/publicIPAddresses",
"name" : "[variables('sshPublicIpAddressName')]",
"location" : "[variables('location')]",
"sku": {
"name": "Standard"
},
"properties" : {
"publicIPAllocationMethod" : "Static",
"dnsSettings" : {
"domainNameLabel" : "[variables('sshPublicIpAddressName')]"
}
}
},
{
"apiVersion" : "2018-06-01",
"type" : "Microsoft.Network/networkInterfaces",
"name" : "[variables('nicName')]",
"location" : "[variables('location')]",
"dependsOn" : [
"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]"
],
"properties" : {
"ipConfigurations" : [
{
"name" : "pipConfig",
"properties" : {
"privateIPAllocationMethod" : "Dynamic",
"publicIPAddress": {
"id": "[resourceId('Microsoft.Network/publicIPAddresses',
variables('sshPublicIpAddressName'))]"
},
"subnet" : {
"id" : "[variables('masterSubnetRef')]"
},
"loadBalancerBackendAddressPools" : [
{
"id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/',
resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/',
variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]"
},
{
"id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/',
resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/',
variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]"
}
]
}


}
]
}
},
{
"apiVersion" : "2018-06-01",
"type" : "Microsoft.Compute/virtualMachines",
"name" : "[variables('vmName')]",
"location" : "[variables('location')]",
"identity" : {
"type" : "userAssigned",
"userAssignedIdentities" : {
"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/',
variables('identityName'))]" : {}
}
},
"dependsOn" : [
"[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
],
"properties" : {
"hardwareProfile" : {
"vmSize" : "[parameters('bootstrapVMSize')]"
},
"osProfile" : {
"computerName" : "[variables('vmName')]",
"adminUsername" : "core",
"customData" : "[parameters('bootstrapIgnition')]",
"linuxConfiguration" : {
"disablePasswordAuthentication" : true,
"ssh" : {
"publicKeys" : [
{
"path" : "[variables('sshKeyPath')]",
"keyData" : "[parameters('sshKeyData')]"
}
]
}
}
},
"storageProfile" : {
"imageReference": {
"id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]"
},
"osDisk" : {
"name": "[concat(variables('vmName'),'_OSDisk')]",
"osType" : "Linux",
"createOption" : "FromImage",
"managedDisk": {
"storageAccountType": "Premium_LRS"
},
"diskSizeGB" : 100
}
},
"networkProfile" : {
"networkInterfaces" : [
{


"id" : "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]"


}
]
}
}
},
{
"apiVersion" : "2018-06-01",
"type": "Microsoft.Network/networkSecurityGroups/securityRules",
"name" : "[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]",
"location" : "[variables('location')]",
"dependsOn" : [
"[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]"
],
"properties": {
"protocol" : "Tcp",
"sourcePortRange" : "*",
"destinationPortRange" : "22",
"sourceAddressPrefix" : "*",
"destinationAddressPrefix" : "*",
"access" : "Allow",
"priority" : 100,
"direction" : "Inbound"
}
}
]
}

1.9.14. Creating the control plane machines in Azure


You must create the control plane machines in Microsoft Azure for your cluster to use. One way to
create these machines is to modify the provided Azure Resource Manager (ARM) template.

NOTE

If you do not use the provided ARM template to create your control plane machines, you
must review the provided information and manually create the infrastructure. If your
cluster does not initialize correctly, you might have to contact Red Hat support with your
installation logs.

Prerequisites

Configure an Azure account.

Generate the Ignition config files for your cluster.

Create and configure a VNet and associated subnets in Azure.

Create and configure networking and load balancers in Azure.

Create control plane and compute roles.

Create the bootstrap machine.

Procedure

1. Copy the template from the ARM template for control plane machines section of this topic
and save it as 05_masters.json in your cluster’s installation directory. This template describes
the control plane machines that your cluster requires.

2. Export the following variable needed by the control plane machine deployment:

$ export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\n'`

3. Create the deployment by using the az CLI:

$ az deployment group create -g ${RESOURCE_GROUP} \
  --template-file "<installation_directory>/05_masters.json" \
  --parameters masterIgnition="${MASTER_IGNITION}" \ 1
  --parameters sshKeyData="${SSH_KEY}" \ 2
  --parameters privateDNSZoneName="${CLUSTER_NAME}.${BASE_DOMAIN}" \ 3
  --parameters baseName="${INFRA_ID}" 4

1 The Ignition content for the master nodes.

2 The SSH RSA public key file as a string.

3 The name of the private DNS zone to which the master nodes are attached.

4 The base name to be used in resource names; this is usually the cluster’s infrastructure ID.

1.9.14.1. ARM template for control plane machines

You can use the following Azure Resource Manager (ARM) template to deploy the control plane
machines that you need for your OpenShift Container Platform cluster:

Example 1.5. 05_masters.json ARM template

{
"$schema" : "https://schema.management.azure.com/schemas/2015-01-
01/deploymentTemplate.json#",
"contentVersion" : "1.0.0.0",
"parameters" : {
"baseName" : {
"type" : "string",
"minLength" : 1,
"metadata" : {
"description" : "Base name to be used in resource names (usually the cluster's Infra ID)"
}
},
"masterIgnition" : {
"type" : "string",
"metadata" : {
"description" : "Ignition content for the master nodes"
}
},
"numberOfMasters" : {
"type" : "int",


"defaultValue" : 3,
"minValue" : 2,
"maxValue" : 30,
"metadata" : {
"description" : "Number of OpenShift masters to deploy"
}
},
"sshKeyData" : {
"type" : "securestring",
"metadata" : {
"description" : "SSH RSA public key file as a string"
}
},
"privateDNSZoneName" : {
"type" : "string",
"metadata" : {
"description" : "Name of the private DNS zone the master nodes are going to be attached to"
}
},
"masterVMSize" : {
"type" : "string",
"defaultValue" : "Standard_D8s_v3",
"allowedValues" : [
"Standard_A2",
"Standard_A3",
"Standard_A4",
"Standard_A5",
"Standard_A6",
"Standard_A7",
"Standard_A8",
"Standard_A9",
"Standard_A10",
"Standard_A11",
"Standard_D2",
"Standard_D3",
"Standard_D4",
"Standard_D11",
"Standard_D12",
"Standard_D13",
"Standard_D14",
"Standard_D2_v2",
"Standard_D3_v2",
"Standard_D4_v2",
"Standard_D5_v2",
"Standard_D8_v3",
"Standard_D11_v2",
"Standard_D12_v2",
"Standard_D13_v2",
"Standard_D14_v2",
"Standard_E2_v3",
"Standard_E4_v3",
"Standard_E8_v3",
"Standard_E16_v3",
"Standard_E32_v3",
"Standard_E64_v3",
"Standard_E2s_v3",


"Standard_E4s_v3",
"Standard_E8s_v3",
"Standard_E16s_v3",
"Standard_E32s_v3",
"Standard_E64s_v3",
"Standard_G1",
"Standard_G2",
"Standard_G3",
"Standard_G4",
"Standard_G5",
"Standard_DS2",
"Standard_DS3",
"Standard_DS4",
"Standard_DS11",
"Standard_DS12",
"Standard_DS13",
"Standard_DS14",
"Standard_DS2_v2",
"Standard_DS3_v2",
"Standard_DS4_v2",
"Standard_DS5_v2",
"Standard_DS11_v2",
"Standard_DS12_v2",
"Standard_DS13_v2",
"Standard_DS14_v2",
"Standard_GS1",
"Standard_GS2",
"Standard_GS3",
"Standard_GS4",
"Standard_GS5",
"Standard_D2s_v3",
"Standard_D4s_v3",
"Standard_D8s_v3"
],
"metadata" : {
"description" : "The size of the Master Virtual Machines"
}
},
"diskSizeGB" : {
"type" : "int",
"defaultValue" : 1024,
"metadata" : {
"description" : "Size of the Master VM OS disk, in GB"
}
}
},
"variables" : {
"location" : "[resourceGroup().location]",
"virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]",
"virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks',
variables('virtualNetworkName'))]",
"masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]",
"masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/',
variables('masterSubnetName'))]",
"masterLoadBalancerName" : "[concat(parameters('baseName'), '-public-lb')]",
"internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]",


"sshKeyPath" : "/home/core/.ssh/authorized_keys",
"identityName" : "[concat(parameters('baseName'), '-identity')]",
"imageName" : "[concat(parameters('baseName'), '-image')]",
"copy" : [
{
"name" : "vmNames",
"count" : "[parameters('numberOfMasters')]",
"input" : "[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]"
}
]
},
"resources" : [
{
"apiVersion" : "2018-06-01",
"type" : "Microsoft.Network/networkInterfaces",
"copy" : {
"name" : "nicCopy",
"count" : "[length(variables('vmNames'))]"
},
"name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]",
"location" : "[variables('location')]",
"properties" : {
"ipConfigurations" : [
{
"name" : "pipConfig",
"properties" : {
"privateIPAllocationMethod" : "Dynamic",
"subnet" : {
"id" : "[variables('masterSubnetRef')]"
},
"loadBalancerBackendAddressPools" : [
{
"id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/',
resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/',
variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]"
},
{
"id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/',
resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/',
variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]"
}
]
}
}
]
}
},
{
"apiVersion": "2018-09-01",
"type": "Microsoft.Network/privateDnsZones/SRV",
"name": "[concat(parameters('privateDNSZoneName'), '/_etcd-server-ssl._tcp')]",
"location" : "[variables('location')]",
"properties": {
"ttl": 60,
"copy": [{
"name": "srvRecords",


"count": "[length(variables('vmNames'))]",
"input": {
"priority": 0,
"weight" : 10,
"port" : 2380,
"target" : "[concat('etcd-', copyIndex('srvRecords'), '.',
parameters('privateDNSZoneName'))]"
}
}]
}
},
{
"apiVersion": "2018-09-01",
"type": "Microsoft.Network/privateDnsZones/A",
"copy" : {
"name" : "dnsCopy",
"count" : "[length(variables('vmNames'))]"
},
"name": "[concat(parameters('privateDNSZoneName'), '/etcd-', copyIndex())]",
"location" : "[variables('location')]",
"dependsOn" : [
"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-
nic'))]"
],
"properties": {
"ttl": 60,
"aRecords": [
{
"ipv4Address": "[reference(concat(variables('vmNames')[copyIndex()], '-
nic')).ipConfigurations[0].properties.privateIPAddress]"
}
]
}
},
{
"apiVersion" : "2018-06-01",
"type" : "Microsoft.Compute/virtualMachines",
"copy" : {
"name" : "vmCopy",
"count" : "[length(variables('vmNames'))]"
},
"name" : "[variables('vmNames')[copyIndex()]]",
"location" : "[variables('location')]",
"identity" : {
"type" : "userAssigned",
"userAssignedIdentities" : {
"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/',
variables('identityName'))]" : {}
}
},
"dependsOn" : [
"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-
nic'))]",
"[concat('Microsoft.Network/privateDnsZones/', parameters('privateDNSZoneName'),
'/A/etcd-', copyIndex())]",
"[concat('Microsoft.Network/privateDnsZones/', parameters('privateDNSZoneName'),


'/SRV/_etcd-server-ssl._tcp')]"
],
"properties" : {
"hardwareProfile" : {
"vmSize" : "[parameters('masterVMSize')]"
},
"osProfile" : {
"computerName" : "[variables('vmNames')[copyIndex()]]",
"adminUsername" : "core",
"customData" : "[parameters('masterIgnition')]",
"linuxConfiguration" : {
"disablePasswordAuthentication" : true,
"ssh" : {
"publicKeys" : [
{
"path" : "[variables('sshKeyPath')]",
"keyData" : "[parameters('sshKeyData')]"
}
]
}
}
},
"storageProfile" : {
"imageReference": {
"id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]"
},
"osDisk" : {
"name": "[concat(variables('vmNames')[copyIndex()], '_OSDisk')]",
"osType" : "Linux",
"createOption" : "FromImage",
"caching": "ReadOnly",
"writeAcceleratorEnabled": false,
"managedDisk": {
"storageAccountType": "Premium_LRS"
},
"diskSizeGB" : "[parameters('diskSizeGB')]"
}
},
"networkProfile" : {
"networkInterfaces" : [
{
"id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')
[copyIndex()], '-nic'))]",
"properties": {
"primary": false
}
}
]
}
}
}
]
}


1.9.15. Wait for bootstrap completion and remove bootstrap resources in Azure

After you create all of the required infrastructure in Microsoft Azure, wait for the bootstrap process to
complete on the machines that you provisioned by using the Ignition config files that you generated
with the installation program.

Prerequisites

Configure an Azure account.

Generate the Ignition config files for your cluster.

Create and configure a VNet and associated subnets in Azure.

Create and configure networking and load balancers in Azure.

Create control plane and compute roles.

Create the bootstrap machine.

Create the control plane machines.

Procedure

1. Change to the directory that contains the installation program and run the following command:

$ ./openshift-install wait-for bootstrap-complete --dir=<installation_directory> \ 1
  --log-level info 2

1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.

2 To view different installation details, specify warn, debug, or error instead of info.

If the command exits without a FATAL warning, your production control plane has initialized.
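
The output is similar to the following example, although the exact API URL and version
reflect your cluster:

INFO Waiting up to 20m0s for the Kubernetes API at https://api.cluster.basedomain.com:6443...
INFO API v1.19.0 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO It is now safe to remove the bootstrap resources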

2. Delete the bootstrap resources:

$ az network nsg rule delete -g ${RESOURCE_GROUP} --nsg-name ${INFRA_ID}-nsg --name bootstrap_ssh_in
$ az vm stop -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap
$ az vm deallocate -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap
$ az vm delete -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap --yes
$ az disk delete -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap_OSDisk --no-wait --yes
$ az network nic delete -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap-nic --no-wait
$ az storage blob delete --account-key ${ACCOUNT_KEY} --account-name ${CLUSTER_NAME}sa --container-name files --name bootstrap.ign
$ az network public-ip delete -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap-ssh-pip
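
Optionally, confirm that the bootstrap virtual machine was removed by listing the virtual
machines that remain in the resource group; for example:

$ az vm list -g ${RESOURCE_GROUP} -o table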

1.9.16. Creating additional worker machines in Azure

You can create worker machines in Microsoft Azure for your cluster to use by launching individual
instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can
also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift
Container Platform.

In this example, you manually launch one instance by using the Azure Resource Manager (ARM)
template. Additional instances can be launched by including additional resources of type
06_workers.json in the file.

NOTE

If you do not use the provided ARM template to create your worker machines, you must
review the provided information and manually create the infrastructure. If your cluster
does not initialize correctly, you might have to contact Red Hat support with your
installation logs.

Prerequisites

Configure an Azure account.

Generate the Ignition config files for your cluster.

Create and configure a VNet and associated subnets in Azure.

Create and configure networking and load balancers in Azure.

Create control plane and compute roles.

Create the bootstrap machine.

Create the control plane machines.

Procedure

1. Copy the template from the ARM template for worker machines section of this topic and save
it as 06_workers.json in your cluster’s installation directory. This template describes the worker
machines that your cluster requires.

2. Export the following variable needed by the worker machine deployment:

$ export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\n'`

3. Create the deployment by using the az CLI:

$ az deployment group create -g ${RESOURCE_GROUP} \
  --template-file "<installation_directory>/06_workers.json" \
  --parameters workerIgnition="${WORKER_IGNITION}" \ 1
  --parameters sshKeyData="${SSH_KEY}" \ 2
  --parameters baseName="${INFRA_ID}" 3

1 The Ignition content for the worker nodes.

2 The SSH RSA public key file as a string.

3 The base name to be used in resource names; this is usually the cluster’s infrastructure ID.


1.9.16.1. ARM template for worker machines

You can use the following Azure Resource Manager (ARM) template to deploy the worker machines
that you need for your OpenShift Container Platform cluster:

Example 1.6. 06_workers.json ARM template

{
"$schema" : "https://schema.management.azure.com/schemas/2015-01-
01/deploymentTemplate.json#",
"contentVersion" : "1.0.0.0",
"parameters" : {
"baseName" : {
"type" : "string",
"minLength" : 1,
"metadata" : {
"description" : "Base name to be used in resource names (usually the cluster's Infra ID)"
}
},
"workerIgnition" : {
"type" : "string",
"metadata" : {
"description" : "Ignition content for the worker nodes"
}
},
"numberOfNodes" : {
"type" : "int",
"defaultValue" : 3,
"minValue" : 2,
"maxValue" : 30,
"metadata" : {
"description" : "Number of OpenShift compute nodes to deploy"
}
},
"sshKeyData" : {
"type" : "securestring",
"metadata" : {
"description" : "SSH RSA public key file as a string"
}
},
"nodeVMSize" : {
"type" : "string",
"defaultValue" : "Standard_D4s_v3",
"allowedValues" : [
"Standard_A2",
"Standard_A3",
"Standard_A4",
"Standard_A5",
"Standard_A6",
"Standard_A7",
"Standard_A8",
"Standard_A9",
"Standard_A10",
"Standard_A11",
"Standard_D2",
"Standard_D3",

"Standard_D4",
"Standard_D11",
"Standard_D12",
"Standard_D13",
"Standard_D14",
"Standard_D2_v2",
"Standard_D3_v2",
"Standard_D4_v2",
"Standard_D5_v2",
"Standard_D8_v3",
"Standard_D11_v2",
"Standard_D12_v2",
"Standard_D13_v2",
"Standard_D14_v2",
"Standard_E2_v3",
"Standard_E4_v3",
"Standard_E8_v3",
"Standard_E16_v3",
"Standard_E32_v3",
"Standard_E64_v3",
"Standard_E2s_v3",
"Standard_E4s_v3",
"Standard_E8s_v3",
"Standard_E16s_v3",
"Standard_E32s_v3",
"Standard_E64s_v3",
"Standard_G1",
"Standard_G2",
"Standard_G3",
"Standard_G4",
"Standard_G5",
"Standard_DS2",
"Standard_DS3",
"Standard_DS4",
"Standard_DS11",
"Standard_DS12",
"Standard_DS13",
"Standard_DS14",
"Standard_DS2_v2",
"Standard_DS3_v2",
"Standard_DS4_v2",
"Standard_DS5_v2",
"Standard_DS11_v2",
"Standard_DS12_v2",
"Standard_DS13_v2",
"Standard_DS14_v2",
"Standard_GS1",
"Standard_GS2",
"Standard_GS3",
"Standard_GS4",
"Standard_GS5",
"Standard_D2s_v3",
"Standard_D4s_v3",
"Standard_D8s_v3"
],
"metadata" : {

"description" : "The size of each Node Virtual Machine"
}
}
},
"variables" : {
"location" : "[resourceGroup().location]",
"virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]",
"virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks',
variables('virtualNetworkName'))]",
"nodeSubnetName" : "[concat(parameters('baseName'), '-worker-subnet')]",
"nodeSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/',
variables('nodeSubnetName'))]",
"infraLoadBalancerName" : "[parameters('baseName')]",
"sshKeyPath" : "/home/capi/.ssh/authorized_keys",
"identityName" : "[concat(parameters('baseName'), '-identity')]",
"imageName" : "[concat(parameters('baseName'), '-image')]",
"copy" : [
{
"name" : "vmNames",
"count" : "[parameters('numberOfNodes')]",
"input" : "[concat(parameters('baseName'), '-worker-', variables('location'), '-',
copyIndex('vmNames', 1))]"
}
]
},
"resources" : [
{
"apiVersion" : "2019-05-01",
"name" : "[concat('node', copyIndex())]",
"type" : "Microsoft.Resources/deployments",
"copy" : {
"name" : "nodeCopy",
"count" : "[length(variables('vmNames'))]"
},
"properties" : {
"mode" : "Incremental",
"template" : {
"$schema" : "http://schema.management.azure.com/schemas/2015-01-
01/deploymentTemplate.json#",
"contentVersion" : "1.0.0.0",
"resources" : [
{
"apiVersion" : "2018-06-01",
"type" : "Microsoft.Network/networkInterfaces",
"name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]",
"location" : "[variables('location')]",
"properties" : {
"ipConfigurations" : [
{
"name" : "pipConfig",
"properties" : {
"privateIPAllocationMethod" : "Dynamic",
"subnet" : {
"id" : "[variables('nodeSubnetRef')]"
}
}

}
]
}
},
{
"apiVersion" : "2018-06-01",
"type" : "Microsoft.Compute/virtualMachines",
"name" : "[variables('vmNames')[copyIndex()]]",
"location" : "[variables('location')]",
"tags" : {
"kubernetes.io-cluster-ffranzupi": "owned"
},
"identity" : {
"type" : "userAssigned",
"userAssignedIdentities" : {
"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/',
variables('identityName'))]" : {}
}
},
"dependsOn" : [
"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')
[copyIndex()], '-nic'))]"
],
"properties" : {
"hardwareProfile" : {
"vmSize" : "[parameters('nodeVMSize')]"
},
"osProfile" : {
"computerName" : "[variables('vmNames')[copyIndex()]]",
"adminUsername" : "capi",
"customData" : "[parameters('workerIgnition')]",
"linuxConfiguration" : {
"disablePasswordAuthentication" : true,
"ssh" : {
"publicKeys" : [
{
"path" : "[variables('sshKeyPath')]",
"keyData" : "[parameters('sshKeyData')]"
}
]
}
}
},
"storageProfile" : {
"imageReference": {
"id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]"
},
"osDisk" : {
"name": "[concat(variables('vmNames')[copyIndex()],'_OSDisk')]",
"osType" : "Linux",
"createOption" : "FromImage",
"managedDisk": {
"storageAccountType": "Premium_LRS"
},
"diskSizeGB": 128
}

},
"networkProfile" : {
"networkInterfaces" : [
{
"id" : "[resourceId('Microsoft.Network/networkInterfaces',
concat(variables('vmNames')[copyIndex()], '-nic'))]",
"properties": {
"primary": true
}
}
]
}
}
}
]
}
}
}
]
}
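
If you want to catch template or parameter errors before deploying, the az CLI provides a validation subcommand. This optional check is not part of the documented procedure:

$ az deployment group validate -g ${RESOURCE_GROUP} \
  --template-file "<installation_directory>/06_workers.json" \
  --parameters workerIgnition="${WORKER_IGNITION}" \
  --parameters sshKeyData="${SSH_KEY}" \
  --parameters baseName="${INFRA_ID}"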

1.9.17. Installing the OpenShift CLI by downloading the binary


You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a
command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands
in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.9.17.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Linux from the drop-down menu and click
Download command-line tools.

4. Unpack the archive:

$ tar xvzf <file>

5. Place the oc binary in a directory that is on your PATH.


To check your PATH, execute the following command:

$ echo $PATH
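
For example, assuming that /usr/local/bin is on your PATH, you could place the binary as follows:

$ chmod +x oc
$ sudo mv oc /usr/local/bin/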


After you install the CLI, it is available using the oc command:

$ oc <command>

1.9.17.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click
Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH.


To check your PATH, open the command prompt and execute the following command:

C:\> path

After you install the CLI, it is available using the oc command:

C:\> oc <command>

1.9.17.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select MacOS from the drop-down menu and click
Download command-line tools.

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH.


To check your PATH, open a terminal and execute the following command:

$ echo $PATH

After you install the CLI, it is available using the oc command:

$ oc <command>


1.9.18. Logging in to the cluster by using the CLI


You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The
kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the
correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container
Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin

1.9.19. Approving the certificate signing requests for your machines


When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for
each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve
them yourself. The client requests must be approved first, followed by the server requests.

Prerequisites

You added machines to your cluster.

Procedure

1. Confirm that the cluster recognizes the machines:

$ oc get nodes

Example output

NAME       STATUS     ROLES    AGE   VERSION
master-0   Ready      master   63m   v1.19.0
master-1   Ready      master   63m   v1.19.0
master-2   Ready      master   64m   v1.19.0
worker-0   NotReady   worker   76s   v1.19.0
worker-1   NotReady   worker   70s   v1.19.0

The output lists all of the machines that you created.

NOTE

The preceding output might not include the compute nodes, also known as
worker nodes, until some CSRs are approved.

2. Review the pending CSRs and ensure that you see the client requests with the Pending or
Approved status for each machine that you added to the cluster:

$ oc get csr

Example output

NAME        AGE   REQUESTOR                                                                    CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
...

In this example, two machines are joining the cluster. You might see more approved CSRs in the
list.

3. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in
Pending status, approve the CSRs for your cluster machines:

NOTE

Because the CSRs rotate automatically, approve your CSRs within an hour of
adding the machines to the cluster. If you do not approve them within an hour, the
certificates will rotate, and more than two certificates will be present for each
node. You must approve all of these certificates. After you approve the initial
CSRs, the subsequent node client CSRs are automatically approved by the
cluster kube-controller-manager.

NOTE

For clusters running on platforms that are not machine API enabled, such as bare
metal and other user-provisioned infrastructure, you must implement a method
of automatically approving the kubelet serving certificate requests (CSRs). If a
request is not approved, then the oc exec, oc rsh, and oc logs commands
cannot succeed, because a serving certificate is required when the API server
connects to the kubelet. Any operation that contacts the Kubelet endpoint
requires this certificate approval to be in place. The method must watch for new
CSRs, confirm that the CSR was submitted by the node-bootstrapper service
account in the system:node or system:admin groups, and confirm the identity
of the node.
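
As a rough illustration only, such a method could periodically run the blanket approval command shown later in this step. This sketch assumes GNU xargs for the -r flag, and it omits the per-node identity verification that the preceding note requires, so treat it as a starting point rather than a complete solution:

$ while true; do oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs -r oc adm certificate approve; sleep 60; done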


To approve them individually, run the following command for each valid CSR:

$ oc adm certificate approve <csr_name> 1

1 <csr_name> is the name of a CSR from the list of current CSRs.

To approve all pending CSRs, run the following command:

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

NOTE

Some Operators might not become available until some CSRs are approved.

4. Now that your client requests are approved, you must review the server requests for each
machine that you added to the cluster:

$ oc get csr

Example output

NAME        AGE     REQUESTOR                                                 CONDITION
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal     Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal     Pending
...

5. If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for
your cluster machines:

To approve them individually, run the following command for each valid CSR:

$ oc adm certificate approve <csr_name> 1

1 <csr_name> is the name of a CSR from the list of current CSRs.

To approve all pending CSRs, run the following command:

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

6. After all client and server CSRs have been approved, the machines have the Ready status.
Verify this by running the following command:

$ oc get nodes

Example output


NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   73m   v1.19.0
master-1   Ready    master   73m   v1.19.0
master-2   Ready    master   74m   v1.19.0
worker-0   Ready    worker   11m   v1.19.0
worker-1   Ready    worker   11m   v1.19.0

NOTE

It can take a few minutes after approval of the server CSRs for the machines to
transition to the Ready status.

Additional information

For more information on CSRs, see Certificate Signing Requests.

1.9.20. Adding the Ingress DNS records


If you removed the DNS Zone configuration when creating Kubernetes manifests and generating
Ignition configs, you must manually create DNS records that point at the Ingress load balancer. You can
create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other
records per your requirements.

Prerequisites

You deployed an OpenShift Container Platform cluster on Microsoft Azure by using
infrastructure that you provisioned.

Install the OpenShift CLI (oc).

Install the jq package.

Install or update the Azure CLI.

Procedure

1. Confirm the Ingress router has created a load balancer and populated the EXTERNAL-IP field:

$ oc -n openshift-ingress get service router-default

Example output

NAME             TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                      AGE
router-default   LoadBalancer   172.30.20.10   35.130.120.110   80:32288/TCP,443:31215/TCP   20

2. Export the Ingress router IP as a variable:

$ export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}'`
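
Optionally, verify that the variable captured the external IP of the router before you create any records:

$ echo ${PUBLIC_IP_ROUTER}

Example output

35.130.120.110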

3. Add a *.apps record to the public DNS zone.


a. If you are adding this cluster to a new public zone, run:

$ az network dns record-set a add-record -g ${BASE_DOMAIN_RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n *.apps -a ${PUBLIC_IP_ROUTER} --ttl 300

b. If you are adding this cluster to an already existing public zone, run:

$ az network dns record-set a add-record -g ${BASE_DOMAIN_RESOURCE_GROUP} -z ${BASE_DOMAIN} -n *.apps.${CLUSTER_NAME} -a ${PUBLIC_IP_ROUTER} --ttl 300

4. Add a *.apps record to the private DNS zone:

a. Create a *.apps record by using the following command:

$ az network private-dns record-set a create -g ${RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n *.apps --ttl 300

b. Add the *.apps record to the private DNS zone by using the following command:

$ az network private-dns record-set a add-record -g ${RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n *.apps -a ${PUBLIC_IP_ROUTER}

If you prefer to add explicit domains instead of using a wildcard, you can create entries for each of the
cluster’s current routes:

$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes

Example output

oauth-openshift.apps.cluster.basedomain.com
console-openshift-console.apps.cluster.basedomain.com
downloads-openshift-console.apps.cluster.basedomain.com
alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com
grafana-openshift-monitoring.apps.cluster.basedomain.com
prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com
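
For example, to publish only the console route in an existing public zone, you could adapt the add-record command from step 3 by replacing the wildcard with the specific host name from the output above:

$ az network dns record-set a add-record -g ${BASE_DOMAIN_RESOURCE_GROUP} -z ${BASE_DOMAIN} -n console-openshift-console.apps.${CLUSTER_NAME} -a ${PUBLIC_IP_ROUTER} --ttl 300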

1.9.21. Completing an Azure installation on user-provisioned infrastructure


After you start the OpenShift Container Platform installation on Microsoft Azure user-provisioned
infrastructure, you can monitor the cluster events until the cluster is ready.

Prerequisites

Deploy the bootstrap machine for an OpenShift Container Platform cluster on user-provisioned
Azure infrastructure.

Install the oc CLI and log in.

Procedure

Complete the cluster installation:


$ ./openshift-install --dir=<installation_directory> wait-for install-complete 1

1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.

Example output

INFO Waiting up to 30m0s for the cluster to initialize...
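
While the wait-for command runs, you can optionally watch the cluster Operators converge from a second terminal. This monitoring tip is an optional addition, not part of the documented procedure:

$ watch -n5 oc get clusteroperators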

IMPORTANT

The Ignition config files that the installation program generates contain
certificates that expire after 24 hours, which are then renewed at that time. If the
cluster is shut down before renewing the certificates and the cluster is later
restarted after the 24 hours have elapsed, the cluster automatically recovers the
expired certificates. The exception is that you must manually approve the
pending node-bootstrapper certificate signing requests (CSRs) to recover
kubelet certificates. See the documentation for Recovering from expired control
plane certificates for more information.

1.10. UNINSTALLING A CLUSTER ON AZURE


You can remove a cluster that you deployed to Microsoft Azure.

1.10.1. Removing a cluster that uses installer-provisioned infrastructure


You can remove a cluster that uses installer-provisioned infrastructure from your cloud.

Prerequisites

Have a copy of the installation program that you used to deploy the cluster.

Have the files that the installation program generated when you created your cluster.

Procedure

1. From the directory that contains the installation program on the computer that you used to
install the cluster, run the following command:

$ ./openshift-install destroy cluster \
  --dir=<installation_directory> --log-level=info 1 2

1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.

2 To view different details, specify warn, debug, or error instead of info.

NOTE

You must specify the directory that contains the cluster definition files for your
cluster. The installation program requires the metadata.json file in this directory
to delete the cluster.

2. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform
installation program.
