OpenShift Container Platform 4.6: Architecture
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
This document provides an overview of the platform and application architecture in OpenShift
Container Platform.
Table of Contents

CHAPTER 1. OPENSHIFT CONTAINER PLATFORM ARCHITECTURE
  1.1. INTRODUCTION TO OPENSHIFT CONTAINER PLATFORM
    1.1.1. About Kubernetes
    1.1.2. The benefits of containerized applications
      1.1.2.1. Operating system benefits
      1.1.2.2. Deployment and scaling benefits
    1.1.3. OpenShift Container Platform overview
      1.1.3.1. Custom operating system
      1.1.3.2. Simplified installation and update process
      1.1.3.3. Other key features
      1.1.3.4. OpenShift Container Platform lifecycle
    1.1.4. Internet and Telemetry access for OpenShift Container Platform
CHAPTER 2. INSTALLATION AND UPDATE
  2.1. OPENSHIFT CONTAINER PLATFORM INSTALLATION OVERVIEW
    2.1.1. Available platforms
    2.1.2. Installation process
      The installation process with installer-provisioned infrastructure
      The installation process with user-provisioned infrastructure
      Installation process details
      Installation scope
  2.2. ABOUT THE OPENSHIFT CONTAINER PLATFORM UPDATE SERVICE
  2.3. SUPPORT POLICY FOR UNMANAGED OPERATORS
  2.4. NEXT STEPS
CHAPTER 3. THE OPENSHIFT CONTAINER PLATFORM CONTROL PLANE
  3.1. UNDERSTANDING THE OPENSHIFT CONTAINER PLATFORM CONTROL PLANE
    3.1.1. Machine roles in OpenShift Container Platform
      3.1.1.1. Cluster workers
      3.1.1.2. Cluster masters
    3.1.2. Operators in OpenShift Container Platform
      3.1.2.1. Platform Operators in OpenShift Container Platform
      3.1.2.2. Operators managed by OLM
      3.1.2.3. About the OpenShift Container Platform update service
      3.1.2.4. Understanding the Machine Config Operator
CHAPTER 4. UNDERSTANDING OPENSHIFT CONTAINER PLATFORM DEVELOPMENT
  4.1. ABOUT DEVELOPING CONTAINERIZED APPLICATIONS
  4.2. BUILDING A SIMPLE CONTAINER
    4.2.1. Container build tool options
    4.2.2. Base image options
    4.2.3. Registry options
  4.3. CREATING A KUBERNETES MANIFEST FOR OPENSHIFT CONTAINER PLATFORM
    4.3.1. About Kubernetes pods and services
    4.3.2. Application types
    4.3.3. Available supporting components
    4.3.4. Applying the manifest
    4.3.5. Next steps
  4.4. DEVELOP FOR OPERATORS
CHAPTER 5. THE CI/CD METHODOLOGY AND PRACTICE
  5.1. CI/CD FOR CLUSTER ADMINISTRATION AND APPLICATION CONFIGURATION MANAGEMENT
CHAPTER 6. USING ARGOCD WITH OPENSHIFT CONTAINER PLATFORM
  6.1. WHAT DOES ARGOCD DO?
  6.2. STATEMENT OF SUPPORT
  6.3. ARGOCD DOCUMENTATION
CHAPTER 7. ADMISSION PLUG-INS
  7.1. ABOUT ADMISSION PLUG-INS
  7.2. DEFAULT ADMISSION PLUG-INS
  7.3. WEBHOOK ADMISSION PLUG-INS
  7.4. TYPES OF WEBHOOK ADMISSION PLUG-INS
    7.4.1. Mutating admission plug-in
    7.4.2. Validating admission plug-in
  7.5. CONFIGURING DYNAMIC ADMISSION
  7.6. ADDITIONAL RESOURCES
With its foundation in Kubernetes, OpenShift Container Platform incorporates the same technology
that serves as the engine for massive telecommunications, streaming video, gaming, banking, and other
applications. Its implementation in open Red Hat technologies lets you extend your containerized
applications beyond a single cloud to on-premise and multi-cloud environments.
Kubernetes is an open source container orchestration engine for automating deployment, scaling, and
management of containerized applications. The general concept of Kubernetes is fairly simple:
Start with one or more worker nodes to run the container workloads.
Manage the deployment of those workloads from one or more master nodes.
Wrap containers in a deployment unit called a pod. Using pods provides extra metadata with the
container and offers the ability to group several containers in a single deployment entity.
Create special kinds of assets. For example, services are represented by a set of pods and a
policy that defines how they are accessed. This policy allows containers to connect to the
services that they need even if they do not have the specific IP addresses for the services.
Replication controllers are another special asset that indicates how many pod replicas are
required to run at a time. You can use this capability to automatically scale your application to
adapt to its current demand.
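For illustration, the following sketch shows a minimal pod-and-service pairing expressed as a Kubernetes Deployment (the modern successor to a bare replication controller) and a Service, applied with the oc client. The names, image reference, and ports are placeholders rather than values from this document:

oc apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3                          # number of pod replicas to keep running
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: quay.io/myrepo/myapp:latest   # placeholder image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-app
spec:
  selector:
    app: hello-app                     # the Service selects the Deployment's pods by label
  ports:
  - port: 80
    targetPort: 8080
EOF

Other pods can then reach the application through the stable service name, regardless of the IP addresses of the individual pods.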
In only a few years, Kubernetes has seen massive cloud and on-premise adoption. The open source
development model allows many people to extend Kubernetes by implementing different technologies
for components such as networking, storage, and authentication.
Containers use small, dedicated Linux operating systems without a kernel. Their file system, networking,
cgroups, process tables, and namespaces are separate from the host Linux system, but the containers
can integrate with the hosts seamlessly when necessary. Being based on Linux allows containers to use
all the advantages that come with the open source development model of rapid innovation.
Because each container uses a dedicated operating system, you can deploy applications that require
conflicting software dependencies on the same host. Each container carries its own dependent software
and manages its own interfaces, such as networking and file systems, so applications never need to
compete for those assets.
If you employ rolling upgrades between major releases of your application, you can continuously
improve your applications without downtime and still maintain compatibility with the current release.
You can also deploy and test a new version of an application alongside the existing version. If the
container passes your tests, simply deploy more new containers and remove the old ones.
Since all the software dependencies for an application are resolved within the container itself, you can
use a standardized operating system on each host in your data center. You do not need to configure a
specific operating system for each application host. When your data center needs more capacity, you
can deploy another generic host system.
Similarly, scaling containerized applications is simple. OpenShift Container Platform offers a simple,
standard way of scaling any containerized service. For example, if you build applications as a set of
microservices rather than large, monolithic applications, you can scale the individual microservices
individually to meet demand. This capability allows you to scale only the required services instead of the
entire application, which can allow you to meet application demands while using minimal resources.
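As an illustrative sketch, scaling a single microservice can be as simple as one command, or you can let the cluster scale it automatically based on CPU load. The deployment name here is a placeholder:

oc scale deployment/frontend --replicas=5
oc autoscale deployment/frontend --min=2 --max=10 --cpu-percent=75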
Hybrid cloud deployments. You can deploy OpenShift Container Platform clusters to a variety of
public cloud platforms or in your data center.
Integrated Red Hat technology. Major components in OpenShift Container Platform come from
Red Hat Enterprise Linux (RHEL) and related Red Hat technologies. OpenShift Container Platform
benefits from the intense testing and certification initiatives for Red Hat’s enterprise quality
software.
Open source development model. Development is completed in the open, and the source code
is available from public software repositories. This open collaboration fosters rapid innovation
and development.
Although Kubernetes excels at managing your applications, it does not specify or manage platform-level
requirements or deployment processes. Powerful and flexible platform management tools and
processes are important benefits that OpenShift Container Platform 4.6 offers. The following sections
describe some unique features and benefits of OpenShift Container Platform.
Red Hat Enterprise Linux CoreOS (RHCOS) includes:
Ignition, which OpenShift Container Platform uses as a firstboot system configuration for
initially bringing up and configuring machines.
CRI-O, a Kubernetes native container runtime implementation that integrates closely with the
operating system to deliver an efficient and optimized Kubernetes experience. CRI-O provides
facilities for running, stopping, and restarting containers. It fully replaces the Docker Container
Engine, which was used in OpenShift Container Platform 3.
Kubelet, the primary node agent for Kubernetes that is responsible for launching and monitoring
containers.
In OpenShift Container Platform 4.6, you must use RHCOS for all control plane machines, but you
can use Red Hat Enterprise Linux (RHEL) as the operating system for compute machines, which are also
known as worker machines. If you choose to use RHEL workers, you must perform more system
maintenance than if you use RHCOS for all of the cluster machines.
With OpenShift Container Platform 4.6, if you have an account with the right permissions, you can
deploy a production cluster in supported clouds by running a single command and providing a few values.
You can also customize your cloud installation or install your cluster in your data center if you use a
supported platform.
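For example, an installer-provisioned deployment on a supported cloud reduces to a single command once your credentials and pull secret are in place; the directory name here is a placeholder:

openshift-install create cluster --dir=mycluster --log-level=info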
For clusters that use RHCOS for all machines, updating, or upgrading, OpenShift Container
Platform is a simple, highly-automated process. Because OpenShift Container Platform completely
controls the systems and services that run on each machine, including the operating system itself, from
a central control plane, upgrades are designed to become automatic events. If your cluster contains
RHEL worker machines, the control plane benefits from the streamlined update process, but you must
perform more tasks to upgrade the RHEL machines.
Operators are both the fundamental unit of the OpenShift Container Platform 4.6 code base and a
convenient way to deploy applications and software components for your applications to use. In
OpenShift Container Platform, Operators serve as the platform foundation and remove the need for
manual upgrades of operating systems and control plane applications. OpenShift Container Platform
Operators such as the Cluster Version Operator and Machine Config Operator allow simplified, cluster-
wide management of those critical components.
Operator Lifecycle Manager (OLM) and the OperatorHub provide facilities for storing and distributing
Operators to people developing and deploying applications.
The Red Hat Quay Container Registry is a Quay.io container registry that serves most of the container
images and Operators to OpenShift Container Platform clusters. Quay.io is a public registry version of
Red Hat Quay that stores millions of images and tags.
The following figure illustrates the basic OpenShift Container Platform lifecycle:
(Figure: the basic OpenShift Container Platform lifecycle, from creating and managing a cluster through developing, deploying, and scaling up applications.)
Once you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by
Telemetry or manually by using OpenShift Cluster Manager (OCM), use subscription watch to track your OpenShift Container Platform
subscriptions at the account or multi-cluster level.
Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform
subscription management. If the cluster has Internet access and you do not disable Telemetry,
that service automatically entitles your cluster.
Access Quay.io to obtain the packages that are required to install your cluster.
IMPORTANT
If your cluster cannot have direct Internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
Internet access. Before you update the cluster, you update the content of the mirror
registry.
These two basic types of OpenShift Container Platform clusters are frequently called installer-
provisioned infrastructure clusters and user-provisioned infrastructure clusters.
Administrators maintain control over what updates are applied and when
You use the same installation program to deploy both types of clusters. The main assets generated by
the installation program are the Ignition config files for the bootstrap, master, and worker machines.
With these three configurations and correctly configured infrastructure, you can start an OpenShift
Container Platform cluster.
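As a sketch of that workflow (the directory name is a placeholder), you can generate the Ignition config files explicitly when you provision your own infrastructure:

openshift-install create ignition-configs --dir=mycluster
ls mycluster
# auth  bootstrap.ign  master.ign  metadata.json  worker.ign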
The OpenShift Container Platform installation program uses a set of targets and dependencies to
manage cluster installation. The installation program has a set of targets that it must achieve, and each
target has a set of dependencies. Because each target is only concerned with its own dependencies, the
installation program can act to achieve multiple targets in parallel. The ultimate target is a running
cluster. By meeting dependencies instead of running commands, the installation program is able to
recognize and use existing components instead of running the commands to create them again.
The following diagram shows a subset of the installation targets and dependencies:
After installation, each cluster machine uses Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. RHCOS is
the immutable container host version of Red Hat Enterprise Linux (RHEL) and features a RHEL kernel
with SELinux enabled by default. It includes the kubelet, which is the Kubernetes node agent, and the
CRI-O container runtime, which is optimized for Kubernetes.
Every control plane machine in an OpenShift Container Platform 4.6 cluster must use RHCOS,
which includes a critical first-boot provisioning tool called Ignition. This tool enables the cluster to
configure the machines. Operating system updates are delivered as an Atomic OSTree repository that is
embedded in a container image that is rolled out across the cluster by an Operator. Actual operating
system changes are made in-place on each machine as an atomic operation by using rpm-ostree.
Together, these technologies enable OpenShift Container Platform to manage the operating system
like it manages any other application on the cluster, via in-place upgrades that keep the entire platform
up-to-date. These in-place updates can reduce the burden on operations teams.
If you use RHCOS as the operating system for all cluster machines, the cluster manages all aspects
of its components and machines, including the operating system. Because of this, only the installation
program and the Machine Config Operator can change machines. The installation program uses Ignition
config files to set the exact state of each machine, and the Machine Config Operator completes more
changes to the machines, such as the application of new certificates or keys, after installation.
Microsoft Azure
The latest OpenShift Container Platform release supports both the latest Red Hat OpenStack Platform (RHOSP)
long-life release and intermediate release. For complete RHOSP release
compatibility, see the OpenShift Container Platform on RHOSP support matrix.
Red Hat Virtualization (RHV)
VMware vSphere
For these clusters, all machines, including the computer that you run the installation process on, must
have direct internet access to pull images for platform containers and provide telemetry data to Red
Hat.
In OpenShift Container Platform 4.6, you can install a cluster that uses user-provisioned infrastructure
on the following platforms:
AWS
Azure
GCP
Red Hat OpenStack Platform (RHOSP)
Red Hat Virtualization (RHV)
VMware vSphere
Bare metal
IBM Z or LinuxONE
With installations on user-provisioned infrastructure, each machine can have full internet access, you
can place your cluster behind a proxy, or you can perform a restricted network installation . In a restricted
network installation, you can download the images that are required to install a cluster, place them in a
mirror registry, and use that data to install your cluster. While you require internet access to pull images
for platform containers, with a restricted network installation on vSphere or bare metal infrastructure,
your cluster machines do not require direct internet access.
The OpenShift Container Platform 4.x Tested Integrations page contains details about integration
testing for different platforms.
Registry tokens, which are the pull secrets that you use to obtain the required components
Cluster registration, which associates the cluster identity to your Red Hat account to facilitate
the gathering of usage metrics
In OpenShift Container Platform 4.6, the installation program is a Go binary file that performs a series
of file transformations on a set of assets. The way you interact with the installation program differs
depending on your installation type.
If you provision and manage the infrastructure for your cluster, you must provide all of the
cluster infrastructure and resources, including the bootstrap machine, networking, load
balancing, storage, and individual cluster machines.
You use three sets of files during installation: an installation configuration file that is named install-
config.yaml, Kubernetes manifests, and Ignition config files for your machine types.
IMPORTANT
It is possible to modify the Kubernetes manifests and Ignition config files that control the
underlying RHCOS operating system during installation. However, no validation is
available to confirm the suitability of any modifications that you make to these objects. If
you modify these objects, you might render your cluster non-functional. Because of this
risk, modifying Kubernetes and Ignition config files is not supported unless you are
following documented procedures or are instructed to do so by Red Hat support.
The installation configuration file is transformed into Kubernetes manifests, and then the manifests are
wrapped into Ignition config files. The installation program uses these Ignition config files to create the
cluster.
The installation configuration files are all pruned when you run the installation program, so be sure to
back up all configuration files that you want to use again.
IMPORTANT
You cannot modify the parameters that you set during installation, but you can modify
many cluster attributes after installation.
You can install either a standard cluster or a customized cluster. With a standard cluster, you provide
minimum details that are required to install the cluster. With a customized cluster, you can specify more
details about the platform, such as the number of machines that the control plane uses, the type of
virtual machine that the cluster deploys, or the CIDR range for the Kubernetes service network.
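A minimal sketch of that customization flow follows; the directory name and the values shown in the comments are illustrative only and must match your environment:

openshift-install create install-config --dir=mycluster
# Edit mycluster/install-config.yaml before creating the cluster, for example:
#   controlPlane:
#     replicas: 3
#   compute:
#   - name: worker
#     replicas: 5
#   networking:
#     serviceNetwork:
#     - 172.30.0.0/16
openshift-install create cluster --dir=mycluster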
If possible, use installer-provisioned infrastructure to avoid having to provision and maintain the cluster infrastructure. In all
other environments, you use the installation program to generate the assets that you require to
provision your cluster infrastructure.
With installer-provisioned infrastructure clusters, OpenShift Container Platform manages all aspects of
the cluster, including the operating system itself. Each machine boots with a configuration that
references resources hosted in the cluster that it joins. This configuration allows the cluster to manage
itself as updates are applied.
If you do not use infrastructure that the installation program provisioned, you must manage and
maintain the cluster resources yourself, including:
The underlying infrastructure for the control plane and compute machines that make up the
cluster
Load balancers
If your cluster uses user-provisioned infrastructure, you have the option of adding RHEL worker
machines to your cluster.
After the cluster machines initialize, the bootstrap machine is destroyed. All clusters use the bootstrap
process to initialize the cluster, but if you provision the infrastructure for your cluster, you must complete
many of the steps manually.
IMPORTANT
The Ignition config files that the installation program generates contain certificates that
expire after 24 hours, which are then renewed at that time. If the cluster is shut down
before renewing the certificates and the cluster is later restarted after the 24 hours have
elapsed, the cluster automatically recovers the expired certificates. The exception is that
you must manually approve the pending node-bootstrapper certificate signing requests
(CSRs) to recover kubelet certificates. See the documentation for Recovering from
expired control plane certificates for more information.
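For reference, approving the pending node-bootstrapper CSRs in that recovery scenario is a matter of listing and approving them; the CSR name is a placeholder:

oc get csr
oc adm certificate approve <csr_name>
# or approve every pending CSR at once:
oc get csr -o name | xargs oc adm certificate approve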
1. The bootstrap machine boots and starts hosting the remote resources required for the master
machines to boot. (Requires manual intervention if you provision the infrastructure)
2. The master machines fetch the remote resources from the bootstrap machine and finish
booting. (Requires manual intervention if you provision the infrastructure)
3. The master machines use the bootstrap machine to form an etcd cluster.
4. The bootstrap machine starts a temporary Kubernetes control plane using the new etcd cluster.
5. The temporary control plane schedules the production control plane to the master machines.
6. The temporary control plane shuts down and passes control to the production control plane.
7. The bootstrap machine injects OpenShift Container Platform components into the production
control plane.
8. The installation program shuts down the bootstrap machine. (Requires manual intervention if
you provision the infrastructure)
9. The control plane sets up the worker nodes. (Requires manual intervention if you provision the
infrastructure)
10. The control plane installs additional services in the form of a set of Operators.
The result of this bootstrapping process is a fully running OpenShift Container Platform cluster. The
cluster then downloads and configures remaining components needed for the day-to-day operation,
including the creation of worker machines in supported environments.
Installation scope
The scope of the OpenShift Container Platform installation program is intentionally narrow. It is
designed for simplicity and ensured success. You can complete many more configuration tasks after
installation completes.
Additional resources
See Available cluster customizations for details about OpenShift Container Platform
configuration resources.
The Cluster Version Operator (CVO) in your cluster checks with the OpenShift Container Platform
update service to see the valid updates and update paths based on current component versions and
information in the graph. When you request an update, the OpenShift Container Platform CVO uses the
release image for that update to upgrade your cluster. The release artifacts are hosted in Quay as
container images.
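In practice, you can view and request the updates that the update service recommends with the oc client, for example:

oc adm upgrade                  # list the update paths that the update service recommends
oc adm upgrade --to-latest=true # update to the latest recommended version
oc adm upgrade --to=<version>   # update to a specific recommended version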
To allow the OpenShift Container Platform update service to provide only compatible updates, a release
verification pipeline exists to drive automation. Each release artifact is verified for compatibility with
supported cloud platforms and system architectures as well as other component packages. After the
pipeline confirms the suitability of a release, the OpenShift Container Platform update service notifies
you that it is available.
IMPORTANT
Because the update service displays all valid updates, you must not force an update to a
version that the update service does not display.
During continuous update mode, two controllers run. One continuously updates the payload manifests,
applies them to the cluster, and outputs the status of the controlled rollout of the Operators, whether
they are available, upgrading, or failed. The second controller polls the OpenShift Container Platform
update service to determine if updates are available.
During the upgrade process, the Machine Config Operator (MCO) applies the new configuration to your
cluster machines. It cordons the number of nodes that is specified by the maxUnavailable field on the
machine configuration pool and marks them as unavailable. By default, this value is set to 1. It then
applies the new configuration and reboots the machine. If you use Red Hat Enterprise Linux (RHEL)
machines as workers, the MCO does not update the kubelet on these machines because you must
update the OpenShift API on them first. Because the specification for the new version is applied to the
old kubelet, the RHEL machine cannot return to the Ready state. You cannot complete the update until
the machines are available. However, the maximum number of nodes that are unavailable is set to
ensure that normal cluster operations are likely to continue with that number of machines out of service.
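As a sketch, you can watch the rollout and, if appropriate for your environment, raise maxUnavailable on a machine config pool; the pool name and value shown here are examples only:

oc get machineconfigpool
oc patch machineconfigpool worker --type merge -p '{"spec":{"maxUnavailable":2}}'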
While this can be helpful in non-production clusters or during debugging, Operators in an unmanaged
state are unsupported and the cluster administrator assumes full control of the individual component
configurations and upgrades.
Changing the managementState parameter to Unmanaged means that the Operator is not
actively managing its resources and will take no action related to the related component. Some
Operators might not support this management state as it might damage the cluster and require
manual recovery.
The Cluster Version Operator (CVO) spec.overrides parameter allows
administrators to provide a list of overrides to the CVO's behavior for a component. Setting the
spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and
alerts the administrator after a CVO override has been set:
Disabling ownership via cluster version overrides prevents upgrades. Please remove
overrides before continuing.
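A hypothetical example of setting such an override follows; the component identified here (the Cluster Network Operator deployment) is illustrative only, and setting any override places that component outside CVO management:

oc patch clusterversion version --type merge -p '{"spec":{"overrides":[{"kind":"Deployment","group":"apps","name":"network-operator","namespace":"openshift-network-operator","unmanaged":true}]}}'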
NOTE
The cluster also contains the definition for the bootstrap role. Because the bootstrap
machine is used only during cluster installation, its function is explained in the cluster
installation documentation.
In a Kubernetes cluster, the worker nodes are where the actual workloads requested by Kubernetes
users run and are managed. The worker nodes advertise their capacity and the scheduler, which is part
of the master services, determines on which nodes to start containers and pods. Important services run
on each worker node, including CRI-O, which is the container engine, Kubelet, which is the service that
accepts and fulfills requests for running and stopping container workloads, and a service proxy, which
manages communication for pods across workers.
In OpenShift Container Platform, machine sets control the worker machines. Machines with the worker
role drive compute workloads that are governed by a specific machine pool that autoscales them.
Because OpenShift Container Platform has the capacity to support multiple machine types, the worker
machines are classed as compute machines. In this release, the terms worker machine and compute
machine are used interchangeably because the only default type of compute machine is the worker
machine. In future versions of OpenShift Container Platform, different types of compute machines, such
as infrastructure machines, might be used by default.
In a Kubernetes cluster, the master nodes run services that are required to control the Kubernetes
cluster. In OpenShift Container Platform, the master machines are the control plane. They contain more
than just the Kubernetes services for managing the OpenShift Container Platform cluster. Because all of
the machines with the control plane role are master machines, the terms master and control plane are
used interchangeably to describe them. Instead of being grouped into a machine set, master machines
are defined by a series of standalone machine API resources. Extra controls apply to master machines to
prevent you from deleting all master machines and breaking your cluster.
NOTE
Exactly three master nodes must be used for all production deployments.
Services that fall under the Kubernetes category on the master include the Kubernetes API server, etcd,
Kubernetes controller manager, and HAProxy services.
Kubernetes API server: The Kubernetes API server validates and configures the data for pods, services, and replication controllers. It also provides a focal point for the shared state of the cluster.
etcd: etcd stores the persistent master state while other components watch etcd for changes to bring themselves into the specified state.
Kubernetes controller manager: The Kubernetes controller manager watches etcd for changes to objects such as replication, namespace, and service account controller objects, and then uses the API to enforce the specified state. Several such processes create a cluster with one active leader at a time.
There are also OpenShift services that run on the control plane, which include the OpenShift API server,
OpenShift controller manager, and OAuth API server.
OpenShift API server: The OpenShift API server validates and configures the data for OpenShift resources, such as projects, routes, and templates.
OpenShift controller manager: The OpenShift controller manager watches etcd for changes to OpenShift objects, such as project, route, and template controller objects, and then uses the API to enforce the specified state.
OpenShift OAuth API server: The OpenShift OAuth API server validates and configures the data to authenticate to OpenShift Container Platform, such as users, groups, and OAuth tokens.
OpenShift OAuth server: Users request tokens from the OpenShift OAuth server to authenticate themselves to the API.
Some of these services on the master machines run as systemd services, while others run as static pods.
Systemd services are appropriate for services that you need to always come up on that particular
system shortly after it starts. For master machines, those include sshd, which allows remote login. It also
includes services such as:
The CRI-O container engine (crio), which runs and manages the containers. OpenShift
Container Platform 4.6 uses CRI-O instead of the Docker Container Engine.
Kubelet (kubelet), which accepts requests for managing containers on the machine from master
services.
CRI-O and Kubelet must run directly on the host as systemd services because they need to be running
before you can run other containers.
The installer-* and revision-pruner-* control plane pods must run with root permissions because they
write to the /etc/kubernetes directory, which is owned by the root user. These pods are in the following
namespaces:
openshift-etcd
openshift-kube-apiserver
openshift-kube-controller-manager
openshift-kube-scheduler
Because CRI-O and the Kubelet run on every node, almost every other cluster function can be managed
on the control plane by using Operators. Operators are among the most important components of
OpenShift Container Platform 4.6. Components that are added to the control plane by using Operators
include critical networking and credential services.
The Operator that manages the other Operators in an OpenShift Container Platform cluster is the
Cluster Version Operator.
OpenShift Container Platform 4.6 uses different classes of Operators to perform cluster operations and
run services on the cluster for your applications to use.
In OpenShift Container Platform 4.6, all cluster functions are divided into a series of platform Operators.
Platform Operators manage a particular area of cluster functionality, such as cluster-wide application
logging, management of the Kubernetes control plane, or the machine provisioning system.
Each Operator provides you with a simple API for determining cluster functionality. The Operator hides
the details of managing the lifecycle of that component. Operators can manage a single component or
tens of components, but the end goal is always to reduce operational burden by automating common
actions. Operators also offer a more granular configuration experience. You configure each component
by modifying the API that the Operator exposes instead of modifying a global configuration file.
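For example, you can list the platform Operators and inspect the status that each one reports with the oc client:

oc get clusteroperators
oc describe clusteroperator <operator_name>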
The Cluster Operator Lifecycle Management (OLM) component manages Operators that are available
for use in applications. It does not manage the Operators that comprise OpenShift Container Platform.
OLM is a framework that manages Kubernetes-native applications as Operators. Instead of managing
Kubernetes manifests, it manages Kubernetes Operators. OLM manages two classes of Operators, Red
Hat Operators and certified Operators.
Some Red Hat Operators drive the cluster functions, like the scheduler and problem detectors. Others
are provided for you to manage yourself and use in your applications, like etcd. OpenShift Container
Platform also offers certified Operators, which the community built and maintains. These certified
Operators provide an API layer to traditional applications so you can manage the application through
Kubernetes constructs.
The OpenShift Container Platform update service is the hosted service that provides over-the-air
updates to both OpenShift Container Platform and Red Hat Enterprise Linux CoreOS (RHCOS). It provides a graph, or diagram
that contains vertices and the edges that connect them, of component Operators. The edges in the
graph show which versions you can safely update to, and the vertices are update payloads that specify
the intended state of the managed cluster components.
The Cluster Version Operator (CVO) in your cluster checks with the OpenShift Container Platform
update service to see the valid updates and update paths based on current component versions and
information in the graph. When you request an update, the OpenShift Container Platform CVO uses the
release image for that update to upgrade your cluster. The release artifacts are hosted in Quay as
container images.
To allow the OpenShift Container Platform update service to provide only compatible updates, a release
verification pipeline exists to drive automation. Each release artifact is verified for compatibility with
supported cloud platforms and system architectures as well as other component packages. After the
pipeline confirms the suitability of a release, the OpenShift Container Platform update service notifies
you that it is available.
IMPORTANT
Because the update service displays all valid updates, you must not force an update to a
version that the update service does not display.
During continuous update mode, two controllers run. One continuously updates the payload manifests,
applies them to the cluster, and outputs the status of the controlled rollout of the Operators, whether
they are available, upgrading, or failed. The second controller polls the OpenShift Container Platform
update service to determine if updates are available.
During the upgrade process, the Machine Config Operator (MCO) applies the new configuration to your
cluster machines. It cordons the number of nodes that is specified by the maxUnavailable field on the
machine configuration pool and marks them as unavailable. By default, this value is set to 1. It then
applies the new configuration and reboots the machine. If you use Red Hat Enterprise Linux (RHEL)
machines as workers, the MCO does not update the kubelet on these machines because you must
update the OpenShift API on them first. Because the specification for the new version is applied to the
old kubelet, the RHEL machine cannot return to the Ready state. You cannot complete the update until
the machines are available. However, the maximum number of nodes that are unavailable is set to
ensure that normal cluster operations are likely to continue with that number of machines out of service.
OpenShift Container Platform 4.6 integrates both operating system and cluster management. Because
the cluster manages its own updates, including updates to Red Hat Enterprise Linux CoreOS (RHCOS) on cluster nodes,
OpenShift Container Platform provides an opinionated lifecycle management experience that simplifies
the orchestration of node upgrades.
OpenShift Container Platform employs three daemon sets and controllers to simplify node
management. These daemon sets orchestrate operating system updates and configuration changes to
the hosts by using standard Kubernetes-style constructs. They include:
The machine-config-controller, which coordinates machine upgrades from the control plane. It
monitors all of the cluster nodes and orchestrates their configuration updates.
The machine-config-daemon daemon set, which runs on each node in the cluster and updates
a machine to the configuration that is defined by the machine config and as instructed by the
MachineConfigController. When the node detects a change, it drains off its pods, applies the
update, and reboots. These changes come in the form of Ignition configuration files that apply
the specified machine configuration and control the kubelet configuration. The update itself is
delivered in a container. This process is key to the success of managing OpenShift Container
Platform and RHCOS updates together.
The machine-config-server daemon set, which provides the Ignition config files to control
plane nodes as they join the cluster.
The machine configuration is a subset of the Ignition configuration. The machine-config-daemon reads
the machine configuration to see if it needs to do an OSTree update or if it must apply a series of
systemd kubelet file changes, configuration changes, or other changes to the operating system or
OpenShift Container Platform configuration.
When you perform node management operations, you create or modify a KubeletConfig custom
resource (CR).
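A minimal sketch of such a CR follows; the resource name, pool label, and maxPods value are placeholders, and the targeted machine config pool must carry the matching label:

oc apply -f - <<'EOF'
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-pods   # label that the target machine config pool must have
  kubeletConfig:
    maxPods: 250                     # example kubelet setting
EOF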
IMPORTANT
When changes are made to a machine configuration, the Machine Config Operator
automatically reboots all corresponding nodes in order for the changes to take effect.
To prevent the nodes from automatically rebooting after machine configuration changes,
before making the changes, you must pause the autoreboot process by setting the
spec.paused field to true in the corresponding machine config pool. When paused,
machine configuration changes are not applied until you set the spec.paused field to
false and the nodes have rebooted into the new configuration.
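One way to toggle that field, shown here as a sketch with the master pool as an example, is an oc patch command:

oc patch machineconfigpool master --type merge -p '{"spec":{"paused":true}}'
# ...make your machine configuration changes, then resume rollouts:
oc patch machineconfigpool master --type merge -p '{"spec":{"paused":false}}'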
Additional information
For information on preventing the control plane machines from rebooting after the Machine Config Operator
makes changes to the machine config, see Disabling Machine Config Operator from automatically
rebooting.
Created as discrete microservices that can be connected to other containerized, and non-
containerized, services. For example, you might want to join your application with a database or
attach a monitoring application to it.
Automated to pick up code changes automatically and then start and deploy new versions of
themselves.
Scaled up, or replicated, to have more instances serving clients as demand increases and then
spun down to fewer instances as demand declines.
Run in different ways, depending on the type of application. For example, one application might
run once a month to produce a report and then exit. Another application might need to run
constantly and be highly available to clients.
Managed so you can watch the state of your application and react when something goes wrong.
Containers’ widespread acceptance, and the resulting requirements for tools and methods to make
them enterprise-ready, resulted in many options for them.
The rest of this section explains options for assets you can create when you build and deploy
containerized Kubernetes applications in OpenShift Container Platform. It also describes which
approaches you might use for different kinds of applications and development requirements.
First you require a tool for building a container, like buildah or docker, and a file that describes what goes
in your container, which is typically a Dockerfile.
Next, you require a location to push the resulting container image so you can pull it to run anywhere you
want it to run. This location is a container registry.
Some examples of each of these components are installed by default on most Linux operating systems,
except for the Dockerfile, which you provide yourself.
The following diagram displays the process of building and pushing an image:
If you use a computer that runs Red Hat Enterprise Linux (RHEL) as the operating system, the process of creating
a containerized application requires the following steps:
1. Install container build tools: RHEL contains a set of tools that includes podman,
buildah, and skopeo that you use to build and manage containers.
2. Create a Dockerfile to combine base image and software: Information about building your
container goes into a file that is named Dockerfile. In that file, you identify the base image you
build from, the software packages you install, and the software you copy into the container. You
also identify parameter values like network ports that you expose outside the container and
volumes that you mount inside the container. Put your Dockerfile and the software you want to
containerize in a directory on your RHEL system.
3. Run buildah or docker build: Run the buildah build-using-dockerfile or the docker
build command to pull your chosen base image to the local system and create a container
image that is stored locally. You can also build container images without a Dockerfile by using
buildah.
4. Tag and push to a registry: Add a tag to your new container image that identifies the location of
the registry in which you want to store and share your container. Then push that image to the
registry by running the podman push or docker push command.
5. Pull and run the image: From any system that has a container client tool, such as podman or
docker, run a command that identifies your new image. For example, run the podman
run <image_name> or docker run <image_name> command. Here <image_name> is the
name of your new container image, which resembles quay.io/myrepo/myapp:latest. The
registry might require credentials to push and pull images.
For more details on the process of building container images, pushing them to registries, and running
them, see Custom image builds with Buildah .
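In outline, the whole flow looks like the following sketch; the image and registry names are placeholders:

buildah build-using-dockerfile -t myapp:latest .
# or: podman build -t myapp:latest .
podman tag myapp:latest quay.io/myrepo/myapp:latest
podman login quay.io
podman push quay.io/myrepo/myapp:latest
podman run -p 8080:8080 quay.io/myrepo/myapp:latest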
While the Docker Container Engine and docker command are popular tools to work with containers, with
RHEL and many other Linux systems, you can instead choose a different set of container
tools that includes podman, skopeo, and buildah. You can still use Docker Container Engine tools to
create containers that will run in OpenShift Container Platform and any other container platform.
Building and managing containers with buildah, podman, and skopeo results in industry standard
container images that include features tuned specifically for ultimately deploying those containers in
OpenShift Container Platform or other Kubernetes environments. These tools are daemonless and can
be run without root privileges, so there is less overhead in running them.
When you ultimately run your containers in OpenShift Container Platform, you use the CRI-O container
engine. CRI-O runs on every worker and master machine in an OpenShift Container Platform cluster,
but CRI-O is not yet supported as a standalone runtime outside of OpenShift Container Platform.
Red Hat provides a new set of base images referred to as Red Hat Universal Base Images (UBI). These
images are based on Red Hat Enterprise Linux and are similar to base images that Red Hat has offered
in the past, with one major difference: they are freely redistributable without a Red Hat subscription. As
a result, you can build your application on UBI images without having to worry about how they are shared
or the need to create different images for different environments.
These UBI images have standard, init, and minimal versions. You can also use the Red Hat Software
Collections images as a foundation for applications that rely on specific runtime environments such as
Node.js, Perl, or Python. Special versions of some of these runtime base images are referred to as Source-
to-Image (S2I) images. With S2I images, you can insert your code into a base image environment that is
ready to run that code.
S2I images are available for you to use directly from the OpenShift Container Platform web UI by
selecting Catalog → Developer Catalog, as shown in the following figure:
Figure 4.2. Choose S2I base images for apps that need specific runtimes
To get Red Hat images and certified partner images, you can draw from the Red Hat Registry. The Red
Hat Registry is represented by two locations: registry.access.redhat.com, which is unauthenticated
and deprecated, and registry.redhat.io, which requires authentication. You can learn about the Red Hat
and partner images in the Red Hat Registry from the Container images section of the Red Hat
Ecosystem Catalog. Besides listing Red Hat container images, it also shows extensive information about
the contents and quality of those images, including health scores that are based on applied security
updates.
Large, public registries include Docker Hub and Quay.io. The Quay.io registry is owned and managed by
Red Hat. Many of the components used in OpenShift Container Platform are stored in Quay.io, including
container images and the Operators that are used to deploy OpenShift Container Platform itself.
Quay.io also offers the means of storing other types of content, including Helm charts.
If you want your own, private container registry, OpenShift Container Platform itself includes a private
container registry that is installed with OpenShift Container Platform and runs on its cluster. Red Hat
also offers a private version of the Quay.io registry called Red Hat Quay . Red Hat Quay includes geo
replication, Git build triggers, Clair image scanning, and many other features.
All of the registries mentioned here can require credentials to download images from those registries.
Some of those credentials are presented on a cluster-wide basis from OpenShift Container Platform,
while other credentials can be assigned to individuals.
While the container image is the basic building block for a containerized application, more information is
required to manage and deploy that application in a Kubernetes environment such as OpenShift
Container Platform. The typical next steps after you create an image are to:
Make some decisions about what kind of an application you are running
Create a manifest and store that manifest in a Git repository so you can store it in a source
versioning system, audit it, track it, promote and deploy it to the next environment, roll it back to
earlier versions, if necessary, and share it with others
Scalability and namespaces are probably the main items to consider when determining what goes in a
pod. For ease of deployment, you might want to deploy a container in a pod and include its own logging
and monitoring container in the pod. Later, when you run the pod and need to scale up an additional
instance, those other containers are scaled up with it. For namespaces, containers in a pod share the
same network interfaces, shared storage volumes, and resource limitations, such as memory and CPU,
which makes it easier to manage the contents of the pod as a single unit. Containers in a pod can also
communicate with each other by using standard inter-process communications, such as System V
semaphores or POSIX shared memory.
While individual pods represent a scalable unit in Kubernetes, a service provides a means of grouping
together a set of pods to create a complete, stable application that can complete tasks such as load
balancing. A service is also more permanent than a pod because the service remains available from the
same IP address until you delete it. When the service is in use, it is requested by name and the
OpenShift Container Platform cluster resolves that name into the IP addresses and ports where you can
reach the pods that compose the service.
By their nature, containerized applications are separated from the operating systems where they run
and, by extension, their users. Part of your Kubernetes manifest describes how to expose the application
to internal and external networks by defining network policies that allow fine-grained control over
communication with your containerized applications. To connect incoming requests for HTTP, HTTPS,
and other services from outside your cluster to services inside your cluster, you can use an Ingress
resource.
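A minimal Ingress sketch follows; the hostname and service name are placeholders for values that your cluster and application would define:

oc apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-app
spec:
  rules:
  - host: hello.apps.example.com      # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-app           # the Service that fronts the application pods
            port:
              number: 80
EOF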
If your container requires on-disk storage instead of database storage, which might be provided through
a service, you can add volumes to your manifests to make that storage available to your pods. You can
configure the manifests to create persistent volumes (PVs) or dynamically create volumes that are
added to your Pod definitions.
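For example, a persistent volume claim (PVC) that a pod can mount might be sketched like this; the claim name, size, and access mode are placeholders:

oc apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hello-app-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                    # requested size; a matching PV is bound or provisioned
EOF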
After you define a group of pods that compose your application, you can define those pods in
Deployment and DeploymentConfig objects.
Kubernetes defines different types of workloads that are appropriate for different kinds of applications.
To determine the appropriate workload for your application, consider if the application is:
Meant to run to completion and be done. An example is an application that starts up to produce
a report and exits when the report is complete. The application might not run again then for a
month. Suitable OpenShift Container Platform objects for these types of applications include
Job and CronJob objects.
Expected to run continuously. For long-running applications, you can write a deployment.
Required to be highly available. If your application requires high availability, then you want to size
your deployment to have more than one instance. A Deployment or DeploymentConfig object
can incorporate a replica set for that type of application. With replica sets, pods run across
multiple nodes to make sure the application is always available, even if a worker goes down.
Need to run on every node. Some types of Kubernetes applications are intended to run in the
cluster itself on every master or worker node. DNS and monitoring applications are examples of
applications that need to run continuously on every node. You can run this type of application as
a daemon set. You can also run a daemon set on a subset of nodes, based on node labels.
Require life-cycle management. When you want to hand off your application so that others can
use it, consider creating an Operator. Operators let you build in intelligence, so it can handle
things like backups and upgrades automatically. Coupled with the Operator Lifecycle Manager
(OLM), cluster managers can expose Operators to selected namespaces so that users in the
cluster can run them.
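As an example of the run-to-completion case, a CronJob that produces a monthly report could be sketched as follows; the schedule, name, and image are placeholders, and the object is shown with the batch/v1beta1 API that this release provides:

oc apply -f - <<'EOF'
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: monthly-report
spec:
  schedule: "0 0 1 * *"               # run once a month, at midnight on the first day
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: report
            image: quay.io/myrepo/report:latest   # placeholder image
          restartPolicy: OnFailure
EOF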
OperatorHub, which is available in each OpenShift Container Platform 4.6 cluster. The
OperatorHub makes Operators available from Red Hat, certified Red Hat partners, and
community members to the cluster operator. The cluster operator can make those Operators
available in all or selected namespaces in the cluster, so developers can launch them and
configure them with their applications.
Templates, which are useful for a one-off type of application, where the lifecycle of a
component is not important after it is installed. A template provides an easy way to get started
developing a Kubernetes application with minimal overhead. A template can be a list of resource
definitions, which could be Deployment, Service, Route, or other objects. If you want to change
names or resources, you can set these values as parameters in the template.
You can configure the supporting Operators and templates to the specific needs of your development
team and then make them available in the namespaces in which your developers work. Many people add
shared templates to the openshift namespace because it is accessible from all other namespaces.
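For example, a developer can browse the shared templates and instantiate one with parameters; the template name and parameter are placeholders:

oc get templates -n openshift
oc new-app --template=<template_name> -p <PARAMETER_NAME>=<value>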
Kubernetes manifests let you create a more complete picture of the components that make up your
Kubernetes applications. You write these manifests as YAML files and deploy them by applying them to
the cluster, for example, by running the oc apply command.
Day 1: You write some YAML. You then run the oc apply command to apply that YAML to the
cluster and test that it works.
Day 2: You put your YAML container configuration file into your own Git repository. From there,
people who want to install that app, or help you improve it, can pull down the YAML and apply it
to their cluster to run the app.
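A sketch of that Day 2 flow, with a placeholder repository URL, is simply a clone followed by an apply of the stored manifests:

git clone https://example.com/myorg/myapp-manifests.git   # placeholder repository
oc apply -f myapp-manifests/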
When you create an application as an Operator, you can build in your own knowledge of how to run and
maintain the application. You can build in features for upgrading the application, backing it up, scaling it,
or keeping track of its state. If you configure the application correctly, maintenance tasks, like updating
the Operator, can happen automatically and invisibly to the Operator’s users.
An example of a useful Operator is one that is set up to automatically back up data at particular times.
Having an Operator manage an application’s backup at set times can save a system administrator from
remembering to do it.
Any application maintenance that has traditionally been completed manually, like backing up data or
rotating certificates, can be completed automatically with an Operator.
CHAPTER 5. THE CI/CD METHODOLOGY AND PRACTICE
Continuous delivery and continuous deployment are closely related concepts that are sometimes used
interchangeably and refer to automation of the pipeline. Continuous delivery uses automation to ensure
that a developer’s changes to an application are tested and sent to a repository, where an operations
team can deploy them to a production environment. Continuous deployment enables the release of
changes, starting from the repository and ending in production. Continuous deployment speeds up
application delivery and prevents the operations team from getting overloaded.
You can use GitOps tooling to create repeatable and predictable processes for managing and
recreating OpenShift Container Platform clusters and applications. By using GitOps, you can address
the issues of infrastructure and application configuration sprawl. It simplifies the propagation of
infrastructure and application configuration changes across multiple clusters by defining your
infrastructure and applications as code. Implementing GitOps for your cluster configuration
files can make automated installation easier and allow you to configure automated cluster
customizations. You can apply the core principles of developing and maintaining software in a Git
repository to the creation and management of your cluster and application configuration files.
By using OpenShift Container Platform to automate both your cluster configuration and container
development process, you can pick and choose where and when to adopt GitOps practices. Using a CI
pipeline that pairs with your GitOps strategy and execution plan is ideal. OpenShift Container Platform
provides the flexibility to choose when and how you integrate this methodology into your business
practices and pipelines.
GitOps works well with OpenShift Container Platform because you can both declaratively configure
clusters and store the state of the cluster configuration in Git. For more information, see Available
cluster customizations.
With GitOps integration, you can:
Declaratively configure and store your OpenShift Container Platform cluster configuration.
Ensure that the clusters have similar states for configuration, monitoring, or storage.
You can integrate GitOps into OpenShift Container Platform with the following community partners
and third-party integrators:
ArgoCD
CHAPTER 6. USING ARGOCD WITH OPENSHIFT CONTAINER PLATFORM
ArgoCD enables you to deliver global custom resources, like the resources that are used to configure
OpenShift Container Platform clusters.
CHAPTER 7. ADMISSION PLUG-INS
Admission plug-ins run in sequence as an admission chain. If any admission plug-in in the sequence
rejects a request, the whole chain is aborted and an error is returned.
OpenShift Container Platform has a default set of admission plug-ins enabled for each resource type.
These are required for proper functioning of the cluster. Admission plug-ins ignore resources that they
are not responsible for.
In addition to the defaults, the admission chain can be extended dynamically through webhook
admission plug-ins that call out to custom webhook servers. There are two types of webhook admission
plug-ins: a mutating admission plug-in and a validating admission plug-in. The mutating admission plug-
in runs first and can both modify resources and validate requests. The validating admission plug-in
validates requests and runs after the mutating admission plug-in so that modifications triggered by the
mutating admission plug-in can also be validated.
WARNING
Calling webhook servers through a mutating admission plug-in can produce side effects on
resources related to the target object. In such situations, you must take steps to validate that the
end result is as expected.
There are two types of webhook admission plug-ins in OpenShift Container Platform:
During the admission process, the mutating admission plug-in can perform tasks, such as
injecting affinity labels.
At the end of the admission process, the validating admission plug-in can be used to make sure
an object is configured properly, for example ensuring affinity labels are as expected. If the
validation passes, OpenShift Container Platform schedules the object as configured.
When an API request comes in, mutating or validating admission plug-ins use the list of external
webhooks in the configuration and call them in parallel:
If all of the webhooks approve the request, the admission chain continues.
If any of the webhooks deny the request, the admission request is denied and the reason for
doing so is based on the first denial.
If more than one webhook denies the admission request, only the first denial reason is returned
to the user.
If an error is encountered when calling a webhook, the request is either denied or the webhook is
ignored depending on the error policy set. If the error policy is set to Ignore, the request is
unconditionally accepted in the event of a failure. If the policy is set to Fail, failed requests are
denied. Using Ignore can result in unpredictable behavior for all clients.
Communication between the webhook admission plug-in and the webhook server must use TLS.
Generate a CA certificate and use the certificate to sign the server certificate that is used by your
webhook admission server. The PEM-encoded CA certificate is supplied to the webhook admission
plug-in using a mechanism, such as service serving certificate secrets.
The following diagram illustrates the sequential admission chain process within which multiple webhook
servers are called.
Figure 7.1. API admission chain with mutating and validating admission plug-ins
An example webhook admission plug-in use case is where all pods must have a common set of labels. In
this example, the mutating admission plug-in can inject labels and the validating admission plug-in can
check that labels are as expected. OpenShift Container Platform would subsequently schedule pods
that include required labels and reject those that do not.
Other webhook admission plug-in use cases include:
Namespace reservation.
Limiting custom network resources managed by the SR-IOV network device plug-in.
Defining tolerations that enable taints to qualify which pods should be scheduled on a node.
The following is an example MutatingWebhookConfiguration object:
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration 1
metadata:
name: <webhook_name> 2
webhooks:
- name: <webhook_name> 3
clientConfig: 4
service:
namespace: default 5
name: kubernetes 6
path: <webhook_url> 7
caBundle: <ca_signing_certificate> 8
rules: 9
- operations: 10
- <operation>
apiGroups:
- ""
apiVersions:
- "*"
resources:
- <resource>
failurePolicy: <policy> 11
sideEffects: None
2 The name for the MutatingWebhookConfiguration object. Replace <webhook_name> with the
appropriate value.
3 The name of the webhook to call. Replace <webhook_name> with the appropriate value.
4 Information about how to connect to, trust, and send data to the webhook server.
7 The webhook URL used for admission requests. Replace <webhook_url> with the appropriate
value.
8 A PEM-encoded CA certificate that signs the server certificate that is used by the webhook server.
Replace <ca_signing_certificate> with the appropriate certificate in base64 format.
9 Rules that define when the API server should use this webhook admission plug-in.
10 One or more operations that trigger the API server to call this webhook admission plug-in. Possible
values are create, update, delete or connect. Replace <operation> and <resource> with the
appropriate values.
11 Specifies how the policy should proceed if the webhook server is unavailable. Replace <policy>
with either Ignore (to unconditionally accept the request in the event of a failure) or Fail (to deny
the failed request). Using Ignore can result in unpredictable behavior for all clients.
IMPORTANT
In OpenShift Container Platform 4.6, objects created by users or control loops through a
mutating admission plug-in might return unexpected results, especially if values set in an
initial request are overwritten, which is not recommended.
The following is an example ValidatingWebhookConfiguration object:
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration 1
metadata:
name: <webhook_name> 2
webhooks:
- name: <webhook_name> 3
clientConfig: 4
service:
namespace: default 5
name: kubernetes 6
path: <webhook_url> 7
caBundle: <ca_signing_certificate> 8
rules: 9
- operations: 10
- <operation>
apiGroups:
- ""
apiVersions:
- "*"
resources:
- <resource>
failurePolicy: <policy> 11
sideEffects: Unknown
2 The name for the ValidatingWebhookConfiguration object. Replace <webhook_name> with the
appropriate value.
3 The name of the webhook to call. Replace <webhook_name> with the appropriate value.
4 Information about how to connect to, trust, and send data to the webhook server.
7 The webhook URL used for admission requests. Replace <webhook_url> with the appropriate
value.
8 A PEM-encoded CA certificate that signs the server certificate that is used by the webhook server.
Replace <ca_signing_certificate> with the appropriate certificate in base64 format.
9 Rules that define when the API server should use this webhook admission plug-in.
10 One or more operations that trigger the API server to call this webhook admission plug-in. Possible
values are create, update, delete or connect. Replace <operation> and <resource> with the
appropriate values.
11 Specifies how the policy should proceed if the webhook server is unavailable. Replace <policy>
with either Ignore (to unconditionally accept the request in the event of a failure) or Fail (to deny
the failed request). Using Ignore can result in unpredictable behavior for all clients.
The webhook server is also configured as an aggregated API server. This allows other OpenShift
Container Platform components to communicate with the webhook using internal credentials and
facilitates testing using the oc command. Additionally, this enables role based access control (RBAC)
into the webhook and prevents token information from other API servers from being disclosed to the
webhook.
Prerequisites
Procedure
1. Build a webhook server container image and make it available to the cluster using an image
registry.
2. Create a local CA key and certificate and use them to sign the webhook server’s certificate
signing request (CSR).
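A local CA and a signed server certificate might be produced with openssl, as in the following
sketch. The file names and subject values are illustrative assumptions; the server certificate's
subject or subject alternative name should match the webhook service's DNS name, such as
server.my-webhook-namespace.svc:

$ openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
    -keyout ca.key -out ca.crt -subj "/CN=my-webhook-ca"          # create a local CA key and certificate
$ openssl req -newkey rsa:4096 -nodes \
    -keyout server.key -out server.csr -subj "/CN=server.my-webhook-namespace.svc"   # create the server key and CSR
$ openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -days 365 -out server.crt                                     # sign the CSR with the local CA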
3. Create a new project for hosting the webhook server:
$ oc new-project my-webhook-namespace 1
4. Define RBAC rules for the aggregated API service in a file called rbac.yaml:
apiVersion: v1
kind: List
items:
- apiVersion: rbac.authorization.k8s.io/v1 1
kind: ClusterRoleBinding
metadata:
name: auth-delegator-my-webhook-namespace
roleRef:
kind: ClusterRole
apiGroup: rbac.authorization.k8s.io
name: system:auth-delegator
subjects:
- kind: ServiceAccount
namespace: my-webhook-namespace
name: server
- apiVersion: rbac.authorization.k8s.io/v1 2
kind: ClusterRole
metadata:
annotations:
name: system:openshift:online:my-webhook-server
rules:
- apiGroups:
- online.openshift.io
resources:
- namespacereservations 3
verbs:
- get
- list
- watch
- apiVersion: rbac.authorization.k8s.io/v1 4
kind: ClusterRole
metadata:
name: system:openshift:online:my-webhook-requester
rules:
- apiGroups:
- admission.online.openshift.io
resources:
- namespacereservations 5
verbs:
- create
- apiVersion: rbac.authorization.k8s.io/v1 6
kind: ClusterRoleBinding
metadata:
name: my-webhook-server-my-webhook-namespace
roleRef:
kind: ClusterRole
apiGroup: rbac.authorization.k8s.io
name: system:openshift:online:my-webhook-server
subjects:
- kind: ServiceAccount
namespace: my-webhook-namespace
name: server
- apiVersion: rbac.authorization.k8s.io/v1 7
kind: RoleBinding
metadata:
namespace: kube-system
name: extension-server-authentication-reader-my-webhook-namespace
roleRef:
kind: Role
apiGroup: rbac.authorization.k8s.io
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
namespace: my-webhook-namespace
name: server
- apiVersion: rbac.authorization.k8s.io/v1 8
kind: ClusterRole
metadata:
name: my-cluster-role
rules:
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
- mutatingwebhookconfigurations
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- namespaces
verbs:
- get
- list
- watch
- apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: my-cluster-role
roleRef:
kind: ClusterRole
apiGroup: rbac.authorization.k8s.io
name: my-cluster-role
subjects:
- kind: ServiceAccount
namespace: my-webhook-namespace
name: server
8 Default cluster role and cluster role bindings for an aggregated API server.
5. Apply the RBAC rules to the cluster:
$ oc apply -f rbac.yaml
6. Define a daemon set for the webhook server, within a YAML file called webhook-daemonset.yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
namespace: my-webhook-namespace
name: server
labels:
server: "true"
spec:
selector:
matchLabels:
server: "true"
template:
metadata:
name: server
labels:
server: "true"
spec:
serviceAccountName: server
containers:
- name: my-webhook-container 1
image: <image_registry_username>/<image_path>:<tag> 2
imagePullPolicy: IfNotPresent
command:
- <container_commands> 3
ports:
- containerPort: 8443 4
volumeMounts:
- mountPath: /var/serving-cert
name: serving-cert
readinessProbe:
httpGet:
path: /healthz
port: 8443 5
scheme: HTTPS
volumes:
- name: serving-cert
secret:
defaultMode: 420
secretName: server-serving-cert
1 Note that the webhook server might expect a specific container name.
4 Defines the target port within pods. This example uses port 8443.
5 Specifies the port used by the readiness probe. This example uses port 8443.
7. Apply the daemon set to the cluster:
$ oc apply -f webhook-daemonset.yaml
8. Define a secret for the service serving certificate signer, within a YAML file called webhook-
secret.yaml:
apiVersion: v1
kind: Secret
metadata:
namespace: my-webhook-namespace
name: server-serving-cert
type: kubernetes.io/tls
data:
tls.crt: <server_certificate> 1
tls.key: <server_key> 2
1 References the signed webhook server certificate. Replace <server_certificate> with the
appropriate certificate in base64 format.
2 References the signed webhook server key. Replace <server_key> with the appropriate
key in base64 format.
9. Apply the secret to the cluster:
$ oc apply -f webhook-secret.yaml
10. Define a service account and service, within a YAML file called webhook-service.yaml:
apiVersion: v1
kind: List
items:
- apiVersion: v1
kind: ServiceAccount
metadata:
namespace: my-webhook-namespace
name: server
- apiVersion: v1
kind: Service
metadata:
namespace: my-webhook-namespace
name: server
annotations:
service.alpha.openshift.io/serving-cert-secret-name: server-serving-cert
spec:
selector:
server: "true"
ports:
- port: 443 1
targetPort: 8443 2
1 Defines the port that the service listens on. This example uses port 443.
2 Defines the target port within pods that the service forwards connections to. This example
uses port 8443.
11. Apply the service account and service to the cluster:
$ oc apply -f webhook-service.yaml
12. Define a custom resource definition for the webhook server, in a file called webhook-crd.yaml:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: namespacereservations.online.openshift.io 1
spec:
group: online.openshift.io 2
version: v1alpha1 3
scope: Cluster 4
names:
plural: namespacereservations 5
singular: namespacereservation 6
kind: NamespaceReservation 7
13. Apply the custom resource definition to the cluster:
$ oc apply -f webhook-crd.yaml
14. Configure the webhook server also as an aggregated API server, within a file called webhook-
api-service.yaml:
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
name: v1beta1.admission.online.openshift.io
spec:
caBundle: <ca_signing_certificate> 1
group: admission.online.openshift.io
groupPriorityMinimum: 1000
versionPriority: 15
service:
name: server
namespace: my-webhook-namespace
version: v1beta1
1 A PEM-encoded CA certificate that signs the server certificate that is used by the
webhook server. Replace <ca_signing_certificate> with the appropriate certificate in
base64 format.
15. Apply the aggregated API service configuration to the cluster:
$ oc apply -f webhook-api-service.yaml
16. Define the webhook admission plug-in configuration within a file called webhook-config.yaml.
This example uses the validating admission plug-in:
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
name: namespacereservations.admission.online.openshift.io 1
webhooks:
- name: namespacereservations.admission.online.openshift.io 2
clientConfig:
service: 3
namespace: default
name: kubernetes
path: /apis/admission.online.openshift.io/v1beta1/namespacereservations 4
caBundle: <ca_signing_certificate> 5
rules:
- operations:
- CREATE
apiGroups:
- project.openshift.io
apiVersions:
- "*"
resources:
- projectrequests
- operations:
- CREATE
apiGroups:
- ""
apiVersions:
- "*"
resources:
- namespaces
failurePolicy: Fail
2 Name of the webhook to call. This example uses the namespacereservations resource.
4 The webhook URL used for admission requests. This example uses the
namespacereservation resource.
5 A PEM-encoded CA certificate that signs the server certificate that is used by the
webhook server. Replace <ca_signing_certificate> with the appropriate certificate in
base64 format.
17. Apply the webhook admission plug-in configuration to the cluster:
$ oc apply -f webhook-config.yaml
18. Verify that the webhook is functioning as expected. For example, if you have configured
dynamic admission to reserve specific namespaces, confirm that requests to create those
namespaces are rejected and that requests to create non-reserved namespaces succeed.
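For example, assuming a NamespaceReservation resource reserves the name reserved-project
(both namespace names here are illustrative assumptions), the behavior might be checked with:

$ oc new-project reserved-project       # expected to be denied by the validating webhook
$ oc new-project my-unreserved-project  # expected to succeed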
Additional resources
Defining tolerations that enable taints to qualify which pods should be scheduled on a node