DO180 4.12 Student Guide
The contents of this course and all its modules and related materials, including handouts to audience members, are ©
2023 Red Hat, Inc.
No part of this publication may be stored in a retrieval system, transmitted or reproduced in any way, including, but
not limited to, photocopy, photograph, magnetic, electronic or other record, without the prior written permission of
Red Hat, Inc.
This instructional program, including all material provided herein, is supplied without any guarantees from Red Hat,
Inc. Red Hat, Inc. assumes no liability for damages or legal action arising from the use or misuse of contents or details
contained herein.
If you believe Red Hat training materials are being used, copied, or otherwise improperly distributed, please send
email to [email protected] or phone toll-free (USA) +1 (866) 626-2994 or +1 (919) 754-3700.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, JBoss, OpenShift, Fedora, Hibernate, Ansible, RHCA, RHCE,
RHCSA, Ceph, and Gluster are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United
States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS® is a registered trademark of Hewlett Packard Enterprise Development LP or its subsidiaries in the United
States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is a trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open
source or commercial project.
The OpenStack word mark and the Square O Design, together or apart, are trademarks or registered trademarks
of OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's
permission. Red Hat, Inc. is not affiliated with, endorsed by, or sponsored by the OpenStack Foundation or the
OpenStack community.
Externalize the Configuration of Applications
Guided Exercise: Externalize the Configuration of Applications
Provision Persistent Data Volumes
Guided Exercise: Provision Persistent Data Volumes
Select a Storage Class for an Application
Guided Exercise: Select a Storage Class for an Application
Manage Non-Shared Storage with Stateful Sets
Guided Exercise: Manage Non-Shared Storage with Stateful Sets
Lab: Manage Storage for Application Configuration and Data
Summary
6. Configure Applications for Reliability
Application High Availability with Kubernetes
Guided Exercise: Application High Availability with Kubernetes
Application Health Probes
Guided Exercise: Application Health Probes
Reserve Compute Capacity for Applications
Guided Exercise: Reserve Compute Capacity for Applications
Limit Compute Capacity for Applications
Guided Exercise: Limit Compute Capacity for Applications
Application Autoscaling
Guided Exercise: Application Autoscaling
Lab: Configure Applications for Reliability
Quiz: Configure Applications for Reliability
Summary
7. Manage Application Updates
Container Image Identity and Tags
Guided Exercise: Container Image Identity and Tags
Update Application Image and Settings
Guided Exercise: Update Application Image and Settings
Reproducible Deployments with OpenShift Image Streams
Guided Exercise: Reproducible Deployments with OpenShift Image Streams
Automatic Image Updates with OpenShift Image Change Triggers
Guided Exercise: Automatic Image Updates with OpenShift Image Change Triggers
Lab: Manage Application Updates
Summary
8. Comprehensive Review
Comprehensive Review
Lab: Deploy Web Applications
Lab: Troubleshoot and Scale Applications
Document Conventions
This section describes various conventions and practices that are used
throughout all Red Hat Training courses.
Admonitions
Red Hat Training courses use the following admonitions:
References
These describe where to find external documentation that is relevant to
a subject.
Note
Notes are tips, shortcuts, or alternative approaches to the task at hand.
Ignoring a note should have no negative consequences, but you might
miss out on something that makes your life easier.
Important
Important sections provide details of information that is easily missed:
configuration changes that apply only to the current session, or
services that need restarting before an update applies. Ignoring these
admonitions will not cause data loss, but might cause irritation and
frustration.
Warning
Do not ignore warnings. Ignoring these admonitions will most likely
cause data loss.
Inclusive Language
Red Hat Training is currently reviewing its use of language in various areas to help remove any
potentially offensive terms. This is an ongoing process and requires alignment with the products
and services that are covered in Red Hat Training courses. Red Hat appreciates your patience
during this process.
Introduction
Course Objectives
Audience
Prerequisites
A Red Hat OpenShift Container Platform (RHOCP) 4.12 single-node (SNO) bare metal UPI
installation is used in this classroom. Infrastructure systems for the RHOCP cluster are in the
ocp4.example.com DNS domain.
All student computer systems have a standard user account, student, which has the student
password. The root password on all student systems is redhat.
Classroom Machines
The primary function of bastion is to act as a router between the network that connects the
student machines and the classroom network. If bastion is down, then other student machines
do not function properly, or might even hang during boot.
The utility system acts as a router between the network that connects the RHOCP cluster
machines and the student network. If utility is down, then the RHOCP cluster does not
function properly, or might even hang during boot.
Several systems in the classroom provide supporting services. The classroom server hosts
software and lab materials for the hands-on activities. The registry server is a private Red Hat
Quay container registry that hosts the container images for the hands-on activities. Information
about how to use these servers is provided in the instructions for those activities.
The master01 system serves as the control plane and compute node for the RHOCP cluster.
The cluster uses the registry system as its own private container image registry and GitLab
server. The idm system provides LDAP services to the RHOCP cluster for authentication and
authorization support.
Students use the workstation machine to access a dedicated RHOCP cluster, for which they
have cluster administrator privileges.
The cluster API is available at https://api.ocp4.example.com:6443, and the web console is available at https://console-openshift-console.apps.ocp4.example.com.
The RHOCP cluster has a standard user account, developer, which has the developer
password. The administrative account, admin, has the redhatocp password.
Classroom Registry
The DO180 course uses a private Red Hat Quay container image registry that is accessible only
within the classroom environment. The container image registry hosts the container images that
students use in the hands-on activities. By using a private container image registry, the classroom
environment is self-contained to not require internet access.
The following table provides the container image repositories that are used in this course and their
public repositories.
Classroom repository: Public source repositories
redhattraining/docker-nginx: docker.io/library/nginx, quay.io/redhattraining/docker-nginx
redhattraining/bitnami-mysql: docker.io/bitnami/mysql, quay.io/redhattraining/bitnami-mysql
redhattraining/do180-dbinit: quay.io/redhattraining/do180-dbinit
redhattraining/do180-httpd-app: quay.io/redhattraining/do180-httpd-app
redhattraining/do180-roster: quay.io/redhattraining/do180-roster
redhattraining/famous-quotes: quay.io/redhattraining/famous-quotes
redhattraining/hello-world-nginx: quay.io/redhattraining/hello-world-nginx
redhattraining/httpd-noimage: quay.io/redhattraining/httpd-noimage
redhattraining/long-load: quay.io/redhattraining/long-load
redhattraining/loadtest: quay.io/redhattraining/loadtest
redhattraining/mysql-app: quay.io/redhattraining/mysql-app
redhattraining/php-ssl: quay.io/redhattraining/php-ssl
redhattraining/php-webapp: quay.io/redhattraining/php-webapp
redhattraining/php-webapp-mysql: quay.io/redhattraining/php-webapp-mysql
redhattraining/versioned-hello: quay.io/redhattraining/versioned-hello
redhattraining/webphp: quay.io/redhattraining/webphp
rhel8/mysql-80: registry.redhat.io/rhel8/mysql-80
rhel9/mysql-80: registry.redhat.io/rhel9/mysql-80
ubi8/httpd-24: registry.access.redhat.com/ubi8/httpd-24
ubi8/ubi: registry.access.redhat.com/ubi8/ubi
ubi9/ubi: registry.access.redhat.com/ubi9/ubi
Machine States
active: The virtual machine is running and available. If it just started, it still might be starting services.
stopped: The virtual machine is shut down. On starting, the virtual machine boots into the same state it was in before shutdown. The disk state is preserved.
Classroom Actions
CREATE: Create the ROLE classroom. Creates and starts all the virtual machines that are needed for this classroom.
CREATING: The ROLE classroom virtual machines are being created. Creation can take several minutes to complete.
DELETE: Delete the ROLE classroom. Destroys all virtual machines in the classroom. All saved work on those systems' disks is lost.
Machine Actions
OPEN CONSOLE: Connect to the system console of the virtual machine in a new browser tab. You can log in directly to the virtual machine and run commands, when required. Normally, log in to the workstation virtual machine only, and from there, use ssh to connect to the other virtual machines.
ACTION > Shutdown: Gracefully shut down the virtual machine, preserving disk contents.
ACTION > Power Off: Forcefully shut down the virtual machine, while still preserving disk contents. This action is equivalent to removing the power from a physical machine.
ACTION > Reset: Forcefully shut down the virtual machine and reset associated storage to its initial state. All saved work on that system's disks is lost.
At the start of an exercise, if instructed to reset a single virtual machine node, click ACTION >
Reset for only that specific virtual machine.
At the start of an exercise, if instructed to reset all virtual machines, click ACTION > Reset on
every virtual machine in the list.
If you want to return the classroom environment to its original state at the start of the course,
then click DELETE to remove the entire classroom environment. After the classroom is deleted, click
CREATE to provision a new set of classroom systems.
Warning
The DELETE operation cannot be undone. All completed work in the classroom
environment is lost.
To adjust the timers, locate the two + buttons at the bottom of the course management page.
Click the auto-stop + button to add another hour to the auto-stop timer. Click the auto-destroy +
button to add another day to the auto-destroy timer. Auto-stop has a maximum of 11 hours,
and auto-destroy has a maximum of 14 days. Be careful to keep the timers set while you are
working, so that your environment is not unexpectedly shut down. Be careful not to set the timers
unnecessarily high, which could waste your subscription time allotment.
• A guided exercise is a hands-on practice exercise that follows a presentation section. It walks
you through a procedure to perform, step by step.
• A quiz is typically used when checking knowledge-based learning, or when a hands-on activity is
impractical for some other reason.
• An end-of-chapter lab is a gradable hands-on activity to help you to check your learning. You
work through a set of high-level steps, based on the guided exercises in that chapter, but the
steps do not walk you through every command. A solution is provided with a step-by-step walk-
through.
• A comprehensive review lab is used at the end of the course. It is also a gradable hands-on
activity, and might cover content from the entire course. You work through a specification of
what to do in the activity, without receiving the specific steps to do so. Again, a solution is
provided with a step-by-step walk-through that meets the specification.
To prepare your lab environment at the start of each hands-on activity, run the lab start
command with a specified activity name from the activity's instructions. Likewise, at the end of
each hands-on activity, run the lab finish command with that same activity name to clean up
after the activity. Each hands-on activity has a unique name within a course.
The action is a choice of start, grade, or finish. All exercises support start and finish.
Only end-of-chapter labs and comprehensive review labs support grade.
start
The start action verifies the required resources to begin an exercise. It might include
configuring settings, creating resources, confirming prerequisite services, and verifying
necessary outcomes from previous exercises. You can perform an exercise at any time, even
without performing preceding exercises.
grade
For gradable activities, the grade action directs the lab command to evaluate your work, and
shows a list of grading criteria with a PASS or FAIL status for each. To achieve a PASS status
for all criteria, fix the failures and rerun the grade action.
finish
The finish action cleans up resources that were configured during the exercise. You can
perform an exercise as many times as you want.
The lab command supports tab completion. For example, to list all exercises that you can start,
enter lab start and then press the Tab key twice.
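For example, with the updates-rollout activity name that appears later in this introduction, a typical workflow resembles the following; the shell prompt is only illustrative, and gradable labs also support the grade action:

[student@workstation ~]$ lab start updates-rollout
...work through the activity...
[student@workstation ~]$ lab finish updates-rollout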
The lab script copies the necessary files for each course activity to the workspace directory.
For example, the lab start updates-rollout command does the following tasks:
• /tmp/log/labs: This directory contains log files. The lab script creates a unique log file for each activity. For example, the log file for the lab start updates-rollout command is /tmp/log/labs/updates-rollout.
The lab start commands usually verify whether the Red Hat OpenShift Container Platform
(RHOCP) cluster is ready and reachable. If you run the lab start command right after creating
the classroom environment, then you might get errors when the command verifies the cluster API
or the credentials. These errors occur because the RHOCP cluster might take up to 15 minutes
to become available. A convenient solution is to run the lab finish command to clean up the
scenario, wait a few minutes, and then rerun the lab start command.
Important
In this course, the lab start scripts normally create a specific RHOCP project
for each exercise. The lab finish scripts remove the exercise-specific RHOCP
project.
If you are retrying an exercise, then you might need to wait before running the lab
start command again. The project removal process might take up to 10 minutes to
be fully effective.
Chapter 1. Introduction to Kubernetes and OpenShift
Objectives
• Describe the relationship between OpenShift, Kubernetes, and other Open Source projects and
list key features of Red Hat OpenShift products and editions.
Although containerized applications offer clear benefits, delivering on those benefits is challenging.
Understanding the approach can be a significant learning curve for many seasoned developers and
administrators alike. However, when you shift to container-based workflows, the resulting acceleration
of application updates and feature delivery, along with improved portability and stability, is highly
valuable to your business. Although running your first container is a breakthrough on the container
adoption journey, businesses that adopt a containerized approach can encounter the challenges of
running many containers at scale. As container adoption has grown, so has the need for a robust, yet
easy-to-manage, container platform and tool kit. Kubernetes provides a container orchestration platform
to manage containers and containerized applications at scale. The Kubernetes platform solves many of
the issues on the container adoption journey. For example, Kubernetes organizes application
containers into manageable units called pods, creates application or tenant isolation through
namespaces, and integrates with other components of containerized workflows, such as
Continuous Integration/Continuous Delivery (CI/CD) with Jenkins.
Additionally, Red Hat OpenShift Container Platform (RHOCP) builds on Kubernetes to provide the
business-class features that deliver a greater user experience and a wider tool set for enterprise
needs. RHOCP extends the features of Kubernetes by adding robust networking solutions,
platform security, authentication, a full-featured web console, an integrated image registry, and
several other key features in container-based workflows.
By developing in containers, each component of the full application operates in isolation. You
can apply changes to each container, such as new libraries or a software version update, without
affecting the other containers in the application. With this approach, the maintenance for each
aspect of the application does not impact the other component functions. Developers author
each component to perform a unique function for the application within an individual container,
and then describe the relationships and communication between containers to deliver the complete
application functionality.
Maintaining a collection of these containerized applications, such as all the necessary tools to
run a business, is challenging. Kubernetes, the open source container platform, was created to
manage the hosting environment, at scale. Kubernetes is a platform for running applications
as containers in a cluster of computers. Kubernetes provides a robust and scalable set of tools
to deliver applications in this modern way, and alleviates the complexity of orchestrating the
underlying infrastructure. Grouping the containers that compose an application into a logical unit,
which is called a pod, Kubernetes manages the deployment, scaling, and infrastructure details for
the application. Kubernetes provides a resilient and elastic environment that ensures application
delivery without necessitating growth of the administrative teams that maintain the infrastructure.
Red Hat OpenShift is a containerization software distribution that is based on the Kubernetes
container platform, and is coupled with additional features, functions, and support for enterprise-
class deployments. OpenShift expands on Kubernetes to provide added tools to administer
the container platform, and to provide more features and organization through the operator
framework. The operator framework adds extensibility to the platform, to support adding cluster
capabilities, and provides a community of shared operators to add to any deployment.
The RHOCP architecture begins with the Red Hat CoreOS or Red Hat Enterprise Linux operating
systems. By using machines that run one of these operating systems, RHOCP provides the
necessary cluster features to deliver the container platform. Additional automated operations
continue to build on Kubernetes to deliver an enterprise-class container environment.
Cluster services, such as metrics, a container image registry, and event logging, deliver additional
features for the environment. Container images are the collection of defined containers that
are available within the cluster for deployment. Application services, such as service mesh and
other middleware, are also available within the cluster. Additionally, the cluster contains developer
services to aid the ongoing application development and platform administration.
The full stack of an RHOCP cluster delivers not only a container runtime environment, but also the
additional required tools in enterprise-class deployments to perform the full set of tasks that a
modern business application platform requires.
Figure 1.2: Open source projects that are involved in a Kubernetes release
After a new Kubernetes version is released, additional development adds production security
hardening and stability to the subsequent OpenShift release. The community also delivers many
bug fixes and other development efforts before the release of a new OpenShift version.
With any release of Red Hat OpenShift Container Platform, you can deploy the cluster in
many ways. When you initially explore RHOCP, using Red Hat OpenShift Local is a viable
approach that deploys a cluster on a local computer for testing and exploration. Additionally,
an assisted installation method that deploys an OpenShift cluster on a single node for testing
and development is available, and is called Single Node OpenShift (SNO). The previous options
provide access to a cluster and support testing and exploration as you consider adopting
OpenShift, but are not suitable environments for production deployments.
When you are ready to adopt Red Hat OpenShift for production workloads, various environments
are available to suit any business requirement for the cluster deployment.
Public cloud partners, such as AWS, Microsoft Azure, IBM Cloud, and Google Cloud, each provide
quick access to an on-demand Red Hat OpenShift deployment. These managed deployments
offer quick access to a cluster on infrastructure from a trusted Red Hat cloud provider. This
approach is a good option when speed of application delivery is a driving factor, but adopting
cloud infrastructure might not satisfy the most stringent security requirements, if security is an
important factor for your business.
You can also deploy a Red Hat OpenShift cluster by using the available installers on physical or
virtual infrastructure, either on-premise or in a public cloud. These self-managed offerings are
available in several forms.
Red Hat OpenShift Kubernetes Engine includes the latest version of the Kubernetes platform with
the additional security hardening and enterprise stability that Red Hat is famous for delivering.
This deployment runs on the Red Hat Enterprise Linux CoreOS immutable container operating
system, uses Red Hat OpenShift Virtualization for virtual machine management, and provides
an administrator console to aid in operational support.
Red Hat OpenShift Container Platform builds on the features of the Kubernetes Engine platform
to include additional cluster manageability, security, stability, and ease of application development
for businesses. Additional features of this tier include a developer console, as well as log
management, cost management, and metering information. This offering adds Red Hat OpenShift
Serverless (Knative), Red Hat OpenShift Service Mesh (Istio), Red Hat OpenShift Pipelines
(Tekton), and Red Hat OpenShift GitOps (ArgoCD) to the deployment.
Red Hat OpenShift Platform Plus expands further on the offering to deliver the most valuable and
robust features that are available. This offering includes Red Hat Advanced Cluster Management
for Kubernetes, Red Hat Advanced Cluster Security for Kubernetes, and the Red Hat Quay
private registry platform. For the most complete and full-featured container experience, Red Hat
OpenShift Platform Plus bundles all the necessary tools for a complete development and
administrative approach to containerized application platform management.
Note
Downloadable Red Hat OpenShift Container Platform installers are available at
https://developers.redhat.com/products/openshift/download.
No matter which environment or offering is best for your business container needs, Red Hat has an
offering that delivers the features and tooling that your cluster requires.
Operators Overview
Operators are a method of packaging, deploying, and managing a Kubernetes application.
OpenShift uses this method to add capabilities to a Kubernetes cluster. These operators
continuously watch the cluster state to deliver the functionality for the specific operator.
Additionally, the operator framework supports custom development of features that your cluster
uniquely requires.
Cluster operators are often installed by default, or during initial cluster configuration, and are
managed by using the Cluster Version Operator (CVO). The Operator Lifecycle Manager (OLM)
manages the installation, upgrade, and role-based access control (RBAC) for the operators in a
cluster. The Operator Registry maintains information about ClusterServiceVersions (CSV)
resources and the available Custom Resource Definitions (CRD) within the cluster. To find and add
other operators to a cluster, use the OperatorHub web console.
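As an illustrative aside, a cluster administrator can also review these operator resources from the command line. The following are standard oc commands rather than steps that this course prescribes:

[student@workstation ~]$ oc get clusteroperators
[student@workstation ~]$ oc get csv -n openshift-operators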
Note
Product documentation for Red Hat OpenShift Container Platform is found at
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12.
To assist in performing these duties, the Red Hat Insights Advisor is available from the Red Hat
Hybrid Cloud Console. The Insights Advisor helps administrators to identify and remediate cluster
issues by continually analyzing data that the Insights Operator provides. The data from the operator is
uploaded to the Red Hat Hybrid Cloud Console, where you can further inspect the recommendations
and their impact on the cluster.
References
Just What Is Red Hat OpenShift Platform Plus?
https://cloud.redhat.com/blog/just-what-is-red-hat-openshift-platform-plus
For more information about Red Hat OpenShift Container Platform, refer to the
documentation at
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12
Quiz
5. Which public cloud platform does not have a Red Hat OpenShift offering?
a. Amazon Web Services
b. Google Cloud
c. Heroku
d. Microsoft Azure
e. IBM Cloud
Solution
5. Which public cloud platform does not have a Red Hat OpenShift offering?
a. Amazon Web Services
b. Google Cloud
c. Heroku (correct)
d. Microsoft Azure
e. IBM Cloud
Objectives
• Navigate the OpenShift web console to identify running applications and cluster services.
Kubernetes provides a web-based dashboard, which is not deployed by default within a cluster.
The Kubernetes dashboard provides minimal security permissions, and accepts only token-
based authentication. This dashboard also requires a proxy setup that limits access to the dashboard
to only the system terminal that creates the proxy. In contrast to these limitations of the Kubernetes
dashboard, OpenShift includes a fuller-featured web console.
The OpenShift web console is not related to the Kubernetes dashboard, but is a separate tool for
managing OpenShift clusters. Additionally, operators can extend the web console features and
functions to include more menus, views, and forms to aid in cluster administration.
To locate the web console, first log in to the cluster with the oc command-line client. Then, run the
oc whoami --show-console command to retrieve the web console URL.
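In the classroom environment that this course describes, the command and its output resemble the following; the shell prompt is only illustrative:

[student@workstation ~]$ oc whoami --show-console
https://console-openshift-console.apps.ocp4.example.com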
Lastly, use a web browser to navigate to the URL, which displays the authentication page:
Using the credentials for your cluster access brings you to the home page for the web console.
Each perspective presents the user with different menu categories and pages that cater to
the needs of the two separate personas. The Administrator perspective focuses on cluster
configuration, deployments, and operations of the cluster and running workloads. The Developer
perspective pages focus on creating and running applications.
Note
An initial login to the web console presents the option for a short informational tour.
Click Skip Tour if you prefer to dismiss the tour option at this time.
By default, the console displays the Home > Overview page, which provides a quick glimpse of
initial cluster configurations, documentation, and general cluster status. Navigate to Home >
Projects to list all projects in the cluster that are available to the credentials in use.
You might initially peruse the Operators > OperatorHub page, which provides access to the
collection of operators that are available for your cluster.
By adding operators to the cluster, you can extend the features and functions that your OpenShift
cluster provides. Use the search filter to find the available operators to enhance the cluster and to
supply the OpenShift aspects that you require.
By clicking the link on the Operator Hub page, you can peruse the Developer Catalog.
Select any project, or use the search filter to find a specific project, to visit the Developer Catalog
for that project, where shared applications, services, event sources, or source-to-image builders
are available.
After finding the preferred additions for a project, a cluster administrator can further customize
the content that the catalog provides. With this approach, developers can add the necessary
features to a project to produce an ideal application deployment.
• Deployments: The operational unit that provides granular management of a running application.
• Routes: Networking configuration to expose your applications and services to resources outside
the cluster.
These concepts are covered in more detail throughout the course. You can find these concepts
throughout the web console as you explore the features of an OpenShift cluster from the
graphical environment.
References
For more information about the OpenShift web console, refer to Red Hat OpenShift
Container Platform Web Console documentation at
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/web_console/index
Guided Exercise
Outcomes
• Explore the features and components of Red Hat OpenShift by using the web console.
• Create a sample application by using the Developer perspective in the web console.
• Switch to the Administrator perspective and examine the resources that are created for
the sample application.
• Use the web console to describe the cluster nodes, networking, storage, and
authentication.
The lab start command for this exercise ensures that the cluster is validated for the exercise.
Instructions
1. As the developer user, locate and then navigate to the Red Hat OpenShift web console.
1.1. Use the terminal to log in to the OpenShift cluster as the developer user with the
developer password.
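The login command resembles the following, using the developer credentials and the classroom API endpoint that are listed in the course introduction; the shell prompt is only illustrative:

[student@workstation ~]$ oc login -u developer -p developer https://api.ocp4.example.com:6443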
...output omitted...
2.1. Click Red Hat Identity Management and log in as the developer user with the
developer password.
Note
Click Skip Tour to dismiss the option to view a short tour on the first visit.
3. Use the Developer perspective of the web console to create your first project.
3.1. From the Getting Started page, click Create a new project to open the Create
Project wizard.
3.2. Create a project named intro-navigate by using the wizard. Use intro-
navigate for the display name, and add a brief description of the project.
4.1. Select the Start building your application link to browse the available sample
applications.
4.2. Enter Apache into the search bar to see the available sample applications for
deployment.
4.3. Select the Httpd option from the list of available sample applications, and then
click Create from the side panel.
4.4. Examine the default values for the sample application, and then select Create at the
bottom of the page.
5.1. Select the icon on the Topology panel to view details of the httpd-sample
deployment.
5.2. Select the Actions list to view the available controls for the httpd-sample
deployment.
6.1. From the OpenShift web console, locate the left panel. If you do not see the
left panel, then click the main menu icon at the upper left of the web console.
Click Developer and then click Administrator to change to the Administrator
perspective. The web console changes to the new perspective and exposes
additional information through the sidebar.
6.2. Navigate to Home > Projects to view the intro-navigate project in the populated
project list.
6.3. Select the intro-navigate project to open the Project Details page. This page
includes a general overview of the project, such as the project status and resource
utilization details.
7.1. From the OpenShift web console menu, navigate to Workloads > Pods to view the
httpd-sample pods.
7.2. Navigate to Workloads > Deployments to view the list of deployments in the project.
Click httpd-sample to view the deployment details.
8.1. Navigate to Networking > Services and click httpd-sample to view the details of the
httpd-sample service.
8.2. Navigate to Networking > Routes and click httpd-sample to view the details of the
httpd-sample route.
9.1. From the OpenShift web console menu, navigate to Home > Projects. Select Delete
Project from the context menu for the intro-navigate project.
9.2. Enter the project name in the text field and then select Delete.
9.3. Log out of the web console. From the OpenShift web console right panel, click
developer and then select Log out from the account menu.
10. Log in to the OpenShift web console as the admin user to inspect additional cluster
details.
Note
When you use a cluster administrator account, you can browse the cluster
components, but do not alter or remove any components.
10.1. Log in to the web console. Select Red Hat Identity Management and then enter the
admin username and the redhatocp password.
10.2. From the OpenShift web console menu, navigate to Operators > Installed Operators.
Each operator provides a specific function for the cluster. Select an individual
operator to display its details.
10.3. Navigate to Workloads > Pods to view the list of all pods in the cluster. The search
bar at the top can narrow down the list of pods. Select an individual pod to display its
details.
10.4. Navigate to Workloads > Deployments to view the list of all deployments in the
cluster. Select an individual deployment to display its details.
10.5. Navigate to Networking > Services to view the list of all services in the cluster. Select
an individual service to display its details.
11.1. Log out of the web console. From the OpenShift web console right panel, click
Administrator and then select Log out from the account menu.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• Describe the main architectural characteristics of Kubernetes and OpenShift.
The smallest manageable unit in Kubernetes is a pod. A pod consists of one or more containers,
their storage resources, and an IP address that comprise a single application. Kubernetes also uses
pods to orchestrate the containers inside the pod and to manage the resources as a single unit.
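As a minimal sketch of this concept, the following YAML describes a pod with a single container. The pod name is a hypothetical example rather than a resource that this course creates; the image is taken from the course's repository list.

apiVersion: v1
kind: Pod
metadata:
  name: example-pod                                # hypothetical pod name
spec:
  containers:
  - name: app                                      # one of possibly several containers in the pod
    image: registry.access.redhat.com/ubi9/ubi     # image from the course's repository list
    command: ["sleep", "infinity"]                 # keep the container running for illustration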
Containerization offers a new approach to the application development process and delivery
techniques, and identifying applications that benefit from this modular architecture holds
immense value.
Authoring or migrating the first application by using containers unlocks a significant advantage to
the development and release cycle, but also introduces significant challenges. As the container
count grows in an environment, the management of container relationships, connectivity, security,
and the lifecycle increases in complexity.
The following graphic shows the number of tools, mostly open source, that play a part in building a
Kubernetes distribution.
Especially note the sheer volume and diversity of components and contributors, which might be
overwhelming, at first exposure. Each component plays a vital part in building a robust container
platform that handles a wide array of deployments and business models. Additionally, each
component has a distinct community of contributors who collectively build the Kubernetes
ecosystem. Together, the breadth of capabilities of these components gives a robust container
management platform.
Although Kubernetes meets the need to manage containers as a business deployment grows in
scale, it does not handle many essential business functions and integrations. RHOCP builds a
wider feature set to provide a container management experience that considers the more wide-
ranging business requirements.
For example, delivering a fully integrated monitoring solution, based on Prometheus, is just one
advantage that RHOCP provides to meet a business-essential need for a production platform.
Prometheus provides preconfigured alerts to equip administrators with immediate cluster
intelligence. Grafana dashboards show the state and performance of the cluster components and
resources, by visualizing cluster performance, systemic issues, and utilization metrics. RHOCP also
includes a built-in image registry for the application containers that you deploy in the cluster, for
ease of management, storage, and lifecycle management.
Kubernetes Features
Kubernetes offers the following features on top of a container infrastructure:
Horizontal scaling
Applications can scale up and down manually, or automatically based on a configuration that is set
with either the Kubernetes command-line interface (CLI) or the web UI.
Self-healing
Kubernetes can use user-defined health checks to monitor pods, and can restart and reschedule
the pods in the event of failure.
Automated rollout
Kubernetes can gradually roll out updates to your application containers, and monitors the
application status during that process. If something goes wrong during the rollout, then
Kubernetes rolls back to the previous iteration of the deployment. A minimal sketch of these
features follows this list.
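The following deployment manifest is a minimal sketch of how the self-healing and automated rollout features are expressed. The resource name, replica count, probe path, and port are illustrative assumptions rather than values that this course prescribes; the image comes from the course's repository list.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app              # hypothetical deployment name
spec:
  replicas: 3                    # horizontal scaling: desired number of pods
  selector:
    matchLabels:
      app: example-app
  strategy:
    type: RollingUpdate          # automated rollout: replace pods gradually
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: web
        image: registry.access.redhat.com/ubi8/httpd-24   # image from the course's repository list
        livenessProbe:           # self-healing: restart the container if the check fails
          httpGet:
            path: /              # hypothetical health endpoint
            port: 8080
          periodSeconds: 10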
Note
Kubernetes does not encrypt secrets, but instead stores them in Base64 encoding.
Operators
Operators are packaged Kubernetes applications that also bring the knowledge of the
application's lifecycle into the Kubernetes cluster. Applications that are packaged as operators use
the Kubernetes API to update the cluster state in reaction to changes in the application state.
RHOCP extends Kubernetes with capabilities such as remote management, multitenancy, increased
security, monitoring and auditing, application lifecycle management, and self-service interfaces for
developers.
Beginning with RHOCP v4, hosts in an RHOCP cluster use Red Hat Enterprise Linux CoreOS
(RHEL CoreOS) as the underlying operating system. RHEL CoreOS is an immutable operating
system that is optimized for running containerized applications. The entire operating system is
updated as a single image, instead of on a package-by-package basis, and both user applications
and system components such as network services run as containers.
RHOCP controls updates to RHEL CoreOS and its configurations, and so managing an
RHOCP cluster includes managing the operating system on cluster nodes, which frees system
administrators from these tasks and reduces the risk of human error.
Routes
Routes expose services to the outside world (an illustrative command appears below).
RHOCP ships with an advanced aggregated logging solution, based on Elasticsearch, which
supports long-term retention of logs from cluster nodes and containers.
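As an illustration of the routes feature, a route can be created for an existing service from the command line. The service name used here is hypothetical, and these are standard oc commands rather than steps from this course:

[student@workstation ~]$ oc expose service/example-app
[student@workstation ~]$ oc get routes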
As shown in the preceding diagram, control plane communication to cluster nodes is through
the kubelet service that runs on each node. A server can act as both a control plane node and
a compute node, but the two roles are usually separated for increased stability, security, and
manageability.
• kube-scheduler: A watcher service that determines an available compute node for new pod
requests.
• kubelet: The main agent on each cluster compute node, and is primarily responsible for
executing pod requests that are sent from the API and scheduler.
• kube-proxy: The component that provides network configuration and communication for pods
on a node.
• CRI: The Container Runtime Interface (CRI) is a plug-in interface that provides configurable
communication between the kubelet and pod configuration requests.
• cri-o: The CRI-O engine, which represents a small Open Container Initiative (OCI)-compliant
runtime engine, provides configurable communication between the kubelet and pod
configuration requests.
Additional Concepts
• Namespaces: A logical collection and isolation of all resources for a tenant or application.
• API Resources: An endpoint that stores the list of a particular object type, such as pods,
services, or stateful sets.
• Controllers: A cluster control loop that continuously compares the running state of cluster resources
to their declared state.
• Reconciliation Loop: The cyclical process that a controller uses to poll the current cluster state,
assess alignment with the declared state, and remediate disparities.
Another valuable addition that RHOCP delivers is the inclusion of the Operator Lifecycle Manager
(OLM). The OLM aids in user installation, updates, and the lifecycle of Kubernetes operators
and respective services within the cluster. The OLM uses operator manifests to determine the
operators to deploy from a cluster catalog, for each namespace in the cluster.
This brief section is a high-level overview of this concept, which is discussed in more detail in the
DO280: Red Hat OpenShift Administration II: Operating a Production Kubernetes Cluster course.
Installer-Provisioned Infrastructure (IPI) relies on the creation of a bootstrap node that automates
many of the tasks during cluster deployment. This approach is available for on-premise bare metal
hardware or virtual machines, and also through many public cloud providers, such as IBM Cloud,
Amazon Web Services, and Google Cloud Platform.
Both methods of installation aim to deliver a full RHOCP cluster to begin deploying business
applications and workloads. Determining which approach is right for your business environment is
a prerequisite to begin using RHOCP.
References
For more information about the Operator Lifecycle Manager (OLM), refer to the
Operator Lifecycle Manager Concepts and Resources section in the Understanding
Operators chapter in the Red Hat OpenShift Container Platform 4.12 Operators
documentation at
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/operators/index
Quiz
A cluster component that validates data, assigns nodes, and synchronizes pods.
An immutable operating system that provides the foundation for a Red Hat
OpenShift cluster.
Technology Function
etcd
Kubernetes
Operator Lifecycle Manager
API Server
kubelet
Solution
Technology Function
Objectives
• Navigate the Events, Compute, and Observe panels of the OpenShift web console to assess the
overall state of a cluster.
Whereas the node and machine terms are often interchangeable, Red Hat OpenShift Container
Platform (RHOCP) uses the machine term more specifically. In OpenShift, a machine is the
resource that describes a cluster node by using a providerSpec file. This specification file is
a Custom Resource Definition (CRD) that uses the OpenShift API to provision an appropriate
compute instance. Using a machine resource is particularly valuable when using public cloud
providers to provision infrastructure, because the providerSpec requests the correct instance
type from the available infrastructure.
A MachineConfig resource defines the initial state and any changes to files, services, operating
system updates, and critical OpenShift service versions for the kubelet and cri-o services.
OpenShift relies on the Machine Config Operator (MCO) to maintain the operating systems
and configuration of the cluster machines. The MCO is a cluster-level operator that ensures
the correct configuration of each machine. This operator also performs routine administrative
tasks, such as system updates. This operator uses the machine definitions in a MachineConfig
resource to continually validate and remediate the state of cluster machines to the intended state.
After a MachineConfig change, the MCO orchestrates the execution of the changes for all
affected nodes.
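As an illustrative aside, administrators can review these resources from the command line. These are standard oc commands rather than steps that this course prescribes:

[student@workstation ~]$ oc get machineconfigs
[student@workstation ~]$ oc get machineconfigpools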
Note
The orchestration of MachineConfig changes through the MCO is prioritized alphabetically by zone,
by using the topology.kubernetes.io/zone node label.
Click a node's name to navigate to the overview page for the node. On the node overview page,
you can view the node logs or connect to the node by using the terminal.
From the previous page, view the node logs and investigate the system information to aid
troubleshooting and remediation for node issues.
The preceding page shows the web console terminal that is connected to the cluster node.
Although making changes directly on the cluster node from the terminal is not recommended, it
is common practice to connect to the cluster node for diagnostic investigation and remediation.
From this terminal, you can use the same binaries that are available within the cluster node itself.
Additionally, the tabs on the node overview page show metrics, events, and the node's YAML
definition file.
This page lists all pods, which can be filtered by selecting a specific project, by using the search
tools, or ordering the page by using the various column headings. To view the pod details page,
click a pod name in the list.
The pod details page contains links to pod metrics, environment variables, logs, events, a terminal,
and the pod's YAML definition. The pod logs are available on the Pods > Logs page and provide
information about the pod status. The Pods > Terminal page opens a shell connection to the
pod for inspection and issue remediation. It is not recommended to alter a running pod, but
the terminal is useful for diagnosing and remediating pod issues. To fix a pod, update the pod
configuration to reflect the necessary changes, and redeploy the pod.
Depending on the monitor definitions, alerting is then available based on the metric that is polled
and the defined success criteria. The monitor continuously compares the gathered metric, and
creates an alert when the success criteria are no longer met. As an example, a web service monitor
polls on the listening port, port 80, and alerts only if the response from that port becomes invalid.
From the web console, navigate to Observe > Metrics to visualize gathered metrics by using a
Grafana-based data query utility. On this page, users can submit queries to build data graphs
and dashboards, which administrators can view to gather valuable statistics for the cluster and
applications.
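For example, a query such as the following, an illustrative PromQL expression rather than one that the course prescribes, charts container CPU usage by namespace:

sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (namespace)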
For configured monitors, visit Observe > Alerting to view firing alerts, and filter on the alert
severity to view those alerts that need remediation. Alerting data is a key component to help
administrators to deliver cluster and application accessibility and functions.
Kubernetes Events
Administrators are typically familiar with the contents of log files for services, whereas logs tend
to be highly detailed and granular. Events provide a high-level abstraction to log files and to
provide information about more significant changes. Events are useful in understanding the
performance and behavior of the cluster, nodes, projects, or pods, at a glance. Events provide
details to understand general performance and to bring attention to meaningful issues, while logs
provide a deeper level of detail for remediating specific issues.
The Home > Events page shows the events for all projects or for a specific project, which can be
filtered and searched.
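The same event stream is also available from the command line; the following is shown only as an illustrative aside:

[student@workstation ~]$ oc get events --all-namespaces --sort-by=.metadata.creationTimestamp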
References
For more information about Red Hat OpenShift Container Platform machines,
refer to the Overview of Machine Management chapter in the Red Hat OpenShift
Container Platform Machine Management documentation at
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/machine_management/index#overview-of-machine-management
Guided Exercise
Outcomes
• Explore and show the monitoring features and components.
• Use a terminal connection to the master01 node to view the crio and kubelet
services.
• Explore the Monitoring page, alert rule configurations, and the etcd service dashboard.
• Explore the events page, and filter events by resource name, type, and message.
The lab start command for this exercise ensures that the cluster is prepared for the exercise.
Instructions
1. As the developer user, locate and then navigate to the Red Hat OpenShift web console.
1.1. Use the terminal to log in to the OpenShift cluster as the developer user with the
developer password.
2.1. Click Red Hat Identity Management and log in as the admin user with the
redhatocp password.
The top of this section contains links to helpful documentation and an initial cluster
configuration walkthrough.
3.2. Scroll down to view the Status section, which provides a short summary of cluster
performance and health.
Notice that many of the headings are links to sections that contain more detailed
cluster information.
3.3. Continue scrolling to view the Cluster utilization section, which contains
metrics and graphs that show resource consumption.
3.4. Continue scrolling to view the Details section, including information such as the API
version, cluster ID, and Red Hat OpenShift version.
3.5. Scroll to the Cluster Inventory section, which contains links to the Nodes, Pods,
StorageClasses, and PersistentVolumeClaim pages.
3.6. The last part of the page contains the Activity section, which lists ongoing
activities and recent events for the cluster.
4. Use the OpenShift web console to access the terminal of a cluster node. From the terminal,
determine the status of the kubelet node agent service and the CRI-O container runtime
interface service.
4.1. Navigate to Compute > Nodes to view the machine that provides the cluster
resources.
Note
The classroom cluster runs on a single node named master01, which serves as
the control and data planes for the cluster, and is intended for training purposes. A
production cluster uses multiple nodes to ensure stability and to provide a highly
available architecture.
4.2. Click the master01 link to view the details of the cluster node.
4.3. Select the Terminal tab to connect to a shell on the master01 node.
The shell on this page is interactive, and enables you to run commands directly on the
cluster node.
4.4. Run the chroot /host command to enable host binaries on the node.
4.5. View the status of the kubelet node agent service by running the systemctl
status kubelet command.
4.6. View the status of the CRI-O container runtime interface service by running the
systemctl status crio command.
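Taken together, steps 4.4 through 4.6 resemble the following terminal session on the node; the shell prompts are illustrative and the status output is omitted:

sh-4.4# chroot /host
sh-5.1# systemctl status kubelet
...output omitted...
sh-5.1# systemctl status crio
...output omitted...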
5.1. From the OpenShift web console menu, navigate to Observe > Alerting to view
cluster alert information.
5.2. Select the Alerting rules tab to view the various alert definitions.
5.3. Filter the alerting rules by name and search for the etcdDatabase term.
6.1. Navigate to Observe > Metrics to open the cluster metrics utility.
6.2. Click Insert example query to populate the metrics graph with sample data.
6.3. From the graph, hover over any point on the timeline to view the detailed data points.
7.1. Navigate to Home > Events to open the cluster events log.
Note
The event log updates every 15 minutes and can require additional time to populate
entries.
7.2. Scroll down to view a chronologically ordered stream that contains cluster events.
Note
Select an event to open the Details page of the related resource.
8.1. From the Resources drop-down, use the search bar to filter for the job term, and
select the box labeled CronJob to display events that relate to that resource.
8.2. Continue to refine the filter by selecting Warning from the Type drop-down.
8.3. Filter the results by using the Message text field. Enter the missed start time
text to retrieve a single event.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Quiz
As the student user on the workstation machine, open a web browser and navigate
to https://console-openshift-console.apps.ocp4.example.com to access the Red Hat
OpenShift web console. Then, log in as the admin user with the redhatocp password.
2. Which three severity types are available for the alerts in the cluster? (Choose three.)
a. Warning
b. Firing
c. Info
d. Urgent
e. Critical
f. Oops
4. Which two objects are listed as the StorageClasses objects for the cluster? (Choose
two.)
a. ceph-storage
b. nfs-storage
c. k8s-lvm-vg1
d. local-volume
e. lvms-vg1
Solution
As the student user on the workstation machine, open a web browser and navigate
to https://console-openshift-console.apps.ocp4.example.com to access the Red Hat
OpenShift web console. Then, log in as the admin user with the redhatocp password.
2. Which three severity types are available for the alerts in the cluster? (Choose three.)
a. Warning
b. Firing
c. Info
d. Urgent
e. Critical
f. Oops
4. Which two objects are listed as the StorageClasses objects for the cluster? (Choose
two.)
a. ceph-storage
b. nfs-storage
c. k8s-lvm-vg1
d. local-volume
e. lvms-vg1
Lab
Outcomes
You should be able to navigate the Red Hat OpenShift Container Platform web console to
find various information items and configuration details.
This command ensures that the Red Hat OpenShift Container Platform is deployed and
ready for the lab.
Instructions
1. Log in to the Red Hat OpenShift Container Platform web console, with Red Hat Identity
Management as the admin user with the redhatocp password, and review the answers for
the preceding quiz.
2. View the cluster version on the Overview page for the cluster.
3. View the available alert severity types within the filters on the Alerting page.
4. View the labels for the thanos-querier route.
5. View the available storage classes in the cluster.
6. View the installed operators for the cluster.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Solution
Outcomes
You should be able to navigate the Red Hat OpenShift Container Platform web console to
find various information items and configuration details.
This command ensures that the Red Hat OpenShift Container Platform is deployed and
ready for the lab.
Instructions
1. Log in to the Red Hat OpenShift Container Platform web console, with Red Hat Identity
Management as the admin user with the redhatocp password, and review the answers for
the preceding quiz.
1.1. Use a browser to view the login page at the web console address https://console-
openshift-console.apps.ocp4.example.com.
1.2. Click Red Hat Identity Management, supply the admin username and the
redhatocp password, and then click Log in to access the home page.
2. View the cluster version on the Overview page for the cluster.
2.1. From the Home > Overview page, scroll down to view the cluster details.
3. View the available alert severity types within the filters on the Alerting page.
3.2. Click the Filter drop-down to view the available severity options.
4.4. Scroll down on the thanos-querier Route details page to view Labels.
6.2. View the listed operators that are installed in the cluster.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Summary
• A container is an encapsulated process that includes the required runtime dependencies for an
application to run.
• When running containers at scale, it becomes challenging to manage configuration, networking,
and high availability without a container platform, such as Kubernetes.
• Pods are the smallest organizational unit for a containerized application in a Kubernetes cluster.
• Red Hat OpenShift Container Platform (RHOCP) adds enterprise-class functions to the
Kubernetes container platform to meet wider business needs.
• Most administrative tasks that cluster administrators and developers perform are available
through the RHOCP web console.
• Logs, metrics, alerts, terminal connections to the nodes and pods in the cluster, and many other
features are available through the RHOCP web console.
Chapter 2
Kubernetes and OpenShift Command-Line Interfaces and APIs
Objectives
• Access an OpenShift cluster by using the Kubernetes and OpenShift command-line interfaces.
With the oc command, you can create applications and manage Red Hat OpenShift Container
Platform (RHOCP) projects from a terminal. The OpenShift CLI is ideal in the following situations:
You can also install the kubectl CLI independently of the oc CLI. You must use a kubectl CLI
version that is within one minor version difference of your cluster. For example, a v1.26 client can
communicate with v1.25, v1.26, and v1.27 control planes. Using the latest compatible version
of the kubectl CLI can help to avoid unforeseen issues.
To perform a manual installation of the kubectl binary on Linux, you must first download the
latest release by using the curl command.
Then, you must download the kubectl checksum file and then validate the kubectl binary
against the checksum file.
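A sketch of that procedure, following the upstream Kubernetes installation documentation; the
version, architecture, and destination directory are assumptions that might differ in your
environment:
# downloads the latest stable release for linux/amd64; adjust as needed
[user@host ~]$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
[user@host ~]$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
[user@host ~]$ echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
kubectl: OK
[user@host ~]$ sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl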
If the check fails, then the sha256sum command exits with nonzero status, and prints a kubectl:
FAILED message.
Note
If you do not have root access on the target system, you can still install the
kubectl CLI to the ~/.local/bin directory. For more information, refer to
https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/.
Finally, use the kubectl version command to verify the installed version. This command prints
the client and server versions. Use the --client option to view the client version only.
Alternatively, a distribution that is based on Red Hat Enterprise Linux (RHEL) can install the
kubectl CLI with the following command:
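For illustration, and assuming that a Kubernetes yum repository is already configured as described
in the upstream documentation, the installation command would be similar to the following:
# assumes a Kubernetes yum repository is already configured
[user@host ~]$ sudo yum install -y kubectl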
To view a list of the available kubectl commands, use the kubectl --help command.
You can also use the --help option on any command to view detailed information about the
command, including its purpose, examples, available subcommands, and options. For example, the
following command provides information about the kubectl create command and its usage.
# Edit the data in registry.yaml in JSON then create the resource using the
edited data
oc create -f registry.yaml --edit -o json
Available Commands:
build Create a new build
clusterresourcequota Create a cluster resource quota
...output omitted...
Kubernetes uses many resource components to support applications. The kubectl explain
command provides detailed information about the attributes of a given resource. For example, use
the following command to learn more about the attributes of a pod resource.
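A minimal example of such a query:
[user@host ~]$ kubectl explain pod
...output omitted...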
You can download the oc CLI from the OpenShift web console to ensure that the CLI tools
are compatible with the RHOCP cluster. From the OpenShift web console, navigate to Help >
Command line tools. The Help menu is represented by a ? icon. The web console provides several
installation options for the oc client, such as downloads for the following operating systems:
The basic usage of the oc command is through its subcommands in the following syntax:
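A sketch of the general form:
[user@host ~]$ oc <command> [arguments] [options]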
Because the oc CLI is a superset of the kubectl CLI, the version, --help, and explain
commands are the same for both CLIs. However, the oc CLI includes additional commands that
are not included in the kubectl CLI, such as the oc login and oc new-project commands.
Before you can interact with your RHOCP cluster, you must authenticate your requests. Use the
oc login command to authenticate your requests. The oc login command provides role-
based authentication and authorization that protects the RHOCP cluster from unauthorized
access. The syntax to log in is shown below:
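A sketch of the general form, with placeholder values:
[user@host ~]$ oc login -u <username> -p <password> <cluster-API-URL>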
For example, in this course, you can use the following command:
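Assuming the classroom API endpoint, the command would be similar to the following:
[user@host ~]$ oc login -u developer -p developer https://api.ocp4.example.com:6443  # classroom API URL (assumed)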
You don't have any projects. You can try to create a new project, by running
$ oc new-project <projectname>
After authenticating to the RHOCP cluster, you can create a project with the oc new-project
command. Projects provide isolation between your application resources. Projects are Kubernetes
namespaces with additional annotations that provide multitenancy scoping for applications.
Several essential commands can manage RHOCP and Kubernetes resources, as described
here. Unless otherwise specified, the following commands are compatible with both the oc and
kubectl CLIs.
Some commands require a user with cluster administrator access. The following list includes
several useful oc commands for cluster administrators.
oc cluster-info
The cluster-info command prints the address of the control plane and other cluster
services. The oc cluster-info dump command expands the output to include helpful
details for debugging cluster problems.
oc api-versions
The structure of cluster resources has a corresponding API version, which the oc api-
versions command displays. The command prints the supported API versions on the server,
in the form of "group/version".
oc get clusteroperator
The cluster operators that Red Hat ships serve as the architectural foundation for RHOCP.
RHOCP installs cluster operators by default. Use the oc get clusteroperator command
to see a list of the cluster operators:
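For example, as a cluster administrator:
[user@host ~]$ oc get clusteroperators
...output omitted...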
Other useful commands are available to both regular and administrator users:
oc get
Use the get command to retrieve information about resources in the selected project.
Generally, this command shows only the most important characteristics of the resources, and
omits more detailed information.
The oc get RESOURCE_TYPE command displays a summary of all resources of the specified
type.
For example, the following command returns the list of the pod resources in the current
project:
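A sketch:
[user@host ~]$ oc get pods
...output omitted...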
You can use the oc get RESOURCE_TYPE RESOURCE_NAME command to export a resource
definition. Typical use cases include creating a backup or modifying a definition. The -o yaml
option prints the object representation in YAML format. You can change to JSON format by
providing a -o json option.
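For example, to save the definition of a hypothetical myapp deployment to a file:
[user@host ~]$ oc get deployment myapp -o yaml > myapp-deployment.yaml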
oc get all
Use the oc get all command to retrieve a summary of the most important components of
a cluster. This command iterates through the major resource types for the current project, and
prints a summary of their information:
oc describe
If the summaries from the get command are insufficient, then you can use the oc describe
RESOURCE_TYPE RESOURCE_NAME command to retrieve additional information. Unlike
the get command, you can use the describe command to iterate through all the different
resources by type. Although most major resources can be described, this function is not
available across all resources. The following example demonstrates describing a pod resource:
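For instance, for the quotes-ui pod that a later example also uses:
[user@host ~]$ oc describe pod quotes-ui
...output omitted...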
oc explain
To learn about the fields of an API resource object, use the oc explain command. This
command describes the purpose and the fields that are associated with each supported API
resource. You can also use this command to print the documentation of a specific field of a
resource. Fields are identified via a JSONPath identifier. The following example prints the
documentation for the .spec.containers.resources field of the pod resource type:
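The corresponding command is similar to the following:
[user@host ~]$ oc explain pod.spec.containers.resources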
DESCRIPTION:
Compute Resources required by this container. Cannot be updated. More info:
https://kubernetes.io/docs/concepts/configuration/manage-resources-
containers/
FIELDS:
limits <map[string]string>
Limits describes the maximum amount of compute resources allowed. More
info:
https://kubernetes.io/docs/concepts/configuration/manage-resources-
containers/
requests <map[string]string>
Requests describes the minimum amount of compute resources required. If
Requests is omitted for a container, it defaults to Limits if that is
explicitly specified, otherwise to an implementation-defined value. More
info:
https://kubernetes.io/docs/concepts/configuration/manage-resources-
containers/
Add the --recursive flag to display all fields of a resource without descriptions. Information
about each field is retrieved from the server in OpenAPI format.
oc create
Use the create command to create a RHOCP resource in the current project. This command
creates resources from a resource definition. Typically, this command is paired with the
oc get RESOURCE_TYPE RESOURCE_NAME -o yaml command for editing definitions.
Developers commonly use the -f flag to indicate the file that contains the JSON or YAML
representation of an RHOCP resource.
For example, to create resources from the pod.yaml file, use the following command:
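A minimal sketch of that command:
[user@host ~]$ oc create -f pod.yaml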
oc status
The oc status command provides a high-level overview of the current project. The
command shows services, deployments, build configurations, and active deployments.
Information about any misconfigured components is also shown. The --suggest option
shows additional details for any identified issues.
oc delete
Use the delete command to delete an existing RHOCP resource from the current project.
You must specify the resource type and the resource name.
For example, to delete the quotes-ui pod, use the following command:
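That command is:
[user@host ~]$ oc delete pod quotes-ui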
Each of these commands is executed in the currently selected project. To execute commands in a
different project, you must include the --namespace or -n option.
A user in OpenShift is an entity that can make requests to the RHOCP API. An RHOCP User
object represents an actor that can be granted permissions in the system by adding roles to
the user or to the user's groups. Typically, this represents the account of a developer or an
administrator.
Regular users
Most interactive RHOCP users are represented by this user type. An RHOCP User object
represents a regular user.
System users
Infrastructure uses system users to interact with the API securely. Some system users are
automatically created, including the cluster administrator, with access to everything. By
default, unauthenticated requests use an anonymous system user.
Service accounts
ServiceAccount objects represent service accounts. RHOCP creates service accounts
automatically when a project is created. Project administrators can create additional service
accounts to define access to the contents of each project.
Each user must authenticate to access a cluster. After authentication, policy determines what the
user is authorized to do.
Note
Authentication and authorization are covered in greater detail in the "DO280:
Red Hat OpenShift Administration II: Operating a Production Kubernetes Cluster"
course.
The RHOCP control plane includes a built-in OAuth server. To authenticate themselves to the
API, users obtain OAuth access tokens. Token authentication is the only guaranteed method to
work with any OpenShift cluster, because enterprise SSO might replace the login form of the web
console.
When a person requests a new OAuth token, the OAuth server uses the configured identity
provider to determine the identity of the person who makes the request. The OAuth server then
determines the user that the identity maps to; creates an access token for that user; and then
returns the token for use.
To retrieve an OAuth token by using the OpenShift web console, navigate to Help > Command line
tools. The Help menu is represented by a ? icon.
On the Command Line Tools page, navigate to Copy login command. The page that opens
requires you to log in with your OpenShift user credentials. Next, navigate to Display token. Use
the command under the Log in with this token label to log in to the OpenShift API.
Copy the command from the web console and paste it on the command line. The copied
command uses the --token and --server options, similar to the following example.
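An example of the copied command, with the token elided and assuming the classroom API server
URL:
[user@host ~]$ oc login --token=sha256~<token> --server=https://api.ocp4.example.com:6443  # server URL assumed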
References
For more information, refer to the Getting Started with the OpenShift CLI chapter in
the Red Hat OpenShift Container Platform 4.12 CLI Tools documentation at
https://access.redhat.com/documentation/en-us/
openshift_container_platform/4.12/html-single/cli_tools/index#cli-getting-started
For more information, refer to the OpenShift CLI Developer Command Reference chapter in the
Red Hat OpenShift Container Platform 4.12 CLI Tools documentation at
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/cli_tools/index#cli-developer-commands
For more information about authentication, refer to the Understanding Authentication chapter in
the Red Hat OpenShift Container Platform 4.12 Authentication and Authorization documentation at
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/authentication_and_authorization/index#understanding-authentication
Guided Exercise
Outcomes
• Use the OpenShift web console to locate the installation file for the oc OpenShift
command-line interface.
• Get and use a token from the web console to access the cluster from the command line.
This command ensures that all resources are available for this exercise.
Instructions
1. Log in to the OpenShift web console as the developer user. Locate the installation file for
the oc OpenShift command-line interface (CLI).
1.2. Click Red Hat Identity Management and log in as the developer user with the
developer password.
1.3. Locate the installation file for the oc CLI. From the OpenShift web console, select
Help > Command line tools. The Help menu is represented by a ? icon.
The oc binary is available for multiple operating systems and architectures. For each
operating system and architecture, the oc binary also includes the kubectl binary.
Note
You do not need to download or install the oc and kubectl binaries, which are
already installed on the workstation machine.
2. Download an authorization token from the web console. Then, use the token and the oc
command to log in to the OpenShift cluster.
2.1. From the Command Line Tools page, click the Copy login command link.
2.2. The link opens a login page. Click Red Hat Identity Management and log in as the
developer user with the developer password.
2.3. A web page is displayed. Click the Display token link to show your API token and the
login command.
2.4. Copy the oc login command to your clipboard. Open a terminal window and then
use the copied command to log in to the cluster with your token.
3.1. Use the help command to list and review the available commands for the kubectl
command.
Notice that the kubectl command does not provide a login command.
3.2. Examine the available subcommands and options for the kubectl create
command by using the --help option.
Examples:
# Create a pod using the data in pod.json
kubectl create -f ./pod.json
...output omitted....
Available Commands:
clusterrole Create a cluster role
clusterrolebinding Create a cluster role binding for a particular cluster
role
configmap Create a config map from a local file, directory or
literal value
cronjob Create a cron job with the specified name
deployment Create a deployment with the specified name
...output omitted...
Options:
--allow-missing-template-keys=true:
If true, ignore any errors in templates when a field or map key is missing in the
template. Only applies to
golang and jsonpath output formats.
--dry-run='none':
Must be "none", "server", or "client". If client strategy, only print the object
that would be sent, without
sending it. If server strategy, submit server-side request without persisting the
resource.
...output omitted....
Usage:
kubectl create -f FILENAME [options]
Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all
commands).
You can use the --help option with any kubectl command. The --help option
provides information about a command, including the available subcommands and
options, and the command syntax.
3.3. List and review the available commands for the oc binary by using the help
command.
This client helps you develop, build, deploy, and run your applications on any
OpenShift or Kubernetes cluster. It also includes the administrative
commands for managing a cluster under the 'adm' subcommand.
Basic Commands:
login Log in to a server
new-project Request a new project
new-app Create a new application
status Show an overview of the current project
project Switch to another project
projects Display existing projects
explain Get documentation for a resource
...output omitted....
The oc command supports the same capabilities as the kubectl command. The oc
command provides additional commands to natively support an OpenShift cluster.
For example, the new-project command creates a project, which is a Kubernetes
namespace, in the OpenShift cluster. The new-app command is unique to the oc
command. It creates applications by using existing source code or prebuilt images.
3.4. Use the --help option with the oc create command to view the available
subcommands and options.
Examples:
# Create a pod using the data in pod.json
oc create -f ./pod.json
...output omitted...
Available Commands:
build Create a new build
clusterresourcequota Create a cluster resource quota
clusterrole Create a cluster role
clusterrolebinding Create a cluster role binding for a particular cluster
role
configmap Create a config map from a local file, directory or
literal value
cronjob Create a cron job with the specified name
deployment Create a deployment with the specified name
deploymentconfig Create a deployment config with default options that uses
a given image
...output omitted....
Options:
--allow-missing-template-keys=true:
If true, ignore any errors in templates when a field or map key is missing in the
template. Only applies to
golang and jsonpath output formats.
--dry-run='none':
Must be "none", "server", or "client". If client strategy, only print the object
that would be sent, without
sending it. If server strategy, submit server-side request without persisting the
resource.
...output omitted...
Usage:
oc create -f FILENAME [options]
....output omitted....
The oc create command includes the same subcommands and options as the
kubectl create command, and provides additional subcommands for OpenShift
resources. For example, you can use the oc create command to create OpenShift
resources such as a deployment configuration, a route, and an image stream.
4. Identify the components and Kubernetes resources of an OpenShift cluster by using the
terminal. Unless otherwise noted, all commands are available for the oc and kubectl
commands.
4.1. In a terminal, use the oc login command to log in to the cluster as the admin user
with the redhatocp password. Regular cluster users, such as the developer user,
cannot list resources at a cluster scope.
4.3. Use the cluster-info command to identify the URL for the Kubernetes control
plane.
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
4.4. Identify the supported API versions by using the api-versions command.
4.6. Use the get command to list pods in the openshift-api project. Specify the
project with the -n option.
4.7. Use the oc status command to retrieve the status of resources in the openshift-
authentication project.
4.8. Use the explain command to list the description and available fields for services
resources.
DESCRIPTION:
Service is a named abstraction of software service (for example, mysql)
consisting of local port (for example 3306) that the proxy listens on, and
the selector that determines which pods will answer requests sent through
the proxy.
FIELDS:
apiVersion <string>
APIVersion defines the versioned schema of this representation of an
object. Servers should convert recognized schemas to the latest internal
value, and may reject unrecognized values.
...output omitted...
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• Query, format, and filter attributes of Kubernetes resources.
Note
The oc commands in the examples are identical to the equivalent kubectl
commands.
The SHORTNAME for a component helps to minimize typing long CLI commands. For example, you
can use oc get cm instead of oc get configmaps.
The APIVERSION column divides the objects into API groups. The column uses the <API-
Group>/<API-Version> format. The API-Group object is blank for Kubernetes core resource
objects.
Many Kubernetes resources exist within the context of a Kubernetes namespace. Kubernetes
namespaces and OpenShift projects are broadly similar. A 1:1 relationship always exists between a
namespace and an OpenShift project.
The oc api-resources command accepts options that further filter and sort the output.
--api-group apps Limit to resources in the specified API group. Use --api-
group='' to show core resources.
--sort-by name If non-empty, sort list of resources using specified field. The
field can be either 'name' or 'kind'.
For example, use the following oc api-resources command to see all the namespaced
resources in the apps API group, sorted by name.
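A sketch of that command:
[user@host ~]$ oc api-resources --namespaced=true --api-group=apps --sort-by=name
...output omitted...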
Each resource contains fields that identify the resource or that describe the intended
configuration of the resource. Use the oc explain command to get information about valid
fields for an object. For example, execute the oc explain pod command to get information
about possible pod object fields.
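The command reads as follows:
[user@host ~]$ oc explain pod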
DESCRIPTION:
Pod is a collection of containers that can run on a host. This resource is
created by clients and scheduled onto hosts.
FIELDS:
apiVersion <string>
APIVersion defines the versioned schema of this representation of an
...
kind <string>
Kind is a string value representing the REST resource this object
...
metadata <Object>
Standard object's metadata. More info:
...
spec <Object>
Specification of the desired behavior of the pod. More info:
...output omitted...
Every Kubernetes resource contains the kind, apiVersion, spec, and status fields. However,
when you create an object definition, you do not need to provide the status field. Instead,
Kubernetes generates the status field, and it lists information such as runtime status and
readiness. The status field is useful for troubleshooting an error or for verifying the current state
of a resource.
You can use the YAML path to a field and dot-notation to get information about a particular field.
For example, the following oc explain command shows details for the pod.spec fields.
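That query is:
[user@host ~]$ oc explain pod.spec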
DESCRIPTION:
Specification of the desired behavior of the pod. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-
conventions.md#spec-and-status
FIELDS:
activeDeadlineSeconds <integer>
Optional duration in seconds the pod may be active on the node relative to
...output omitted...
The following Kubernetes main resource types can be created and configured by using a YAML or
a JSON manifest file, or by using OpenShift management tools:
Pods (pod)
Represent a collection of containers that share resources, such as IP addresses and persistent
storage volumes. It is the primary unit of work for Kubernetes.
Services (svc)
Define a single IP/port combination that provides access to a pool of pods. By default,
services connect clients to pods in a round-robin fashion.
ReplicaSet (rs)
Ensure that a specified number of pod replicas are running at any given time.
Deployment (deploy)
A representation of a set of containers that are included in a pod, and the deployment
strategies to use. A deployment object contains the configuration to apply to all containers
of each pod replica, such as the base image, tags, storage definitions, and the commands
to execute when the containers start. Although Kubernetes replicas can be created stand-
alone in OpenShift, they are usually created by higher-level resources such as deployment
controllers.
Red Hat OpenShift Container Platform (RHOCP) adds the following main resource types to
Kubernetes:
BuildConfig (bc)
Defines a process to execute in the OpenShift project. The OpenShift Source-to-Image (S2I)
feature uses a BuildConfig to build a container image from application source code that is
stored in a Git repository. A bc works together with a dc to provide extensible continuous
integration and continuous delivery workflows.
DeploymentConfig (dc)
OpenShift 4.5 introduced the Deployment resource concept to replace the
DeploymentConfig default configuration for pods. Both concepts represent a set of
containers that are included in a pod, and the deployment strategies to use.
The Deployment object serves as the improved version of the DeploymentConfig object.
Some functional replacements between both objects are as follows:
• Every change in the pod template that Deployment objects use triggers a new rollout
automatically.
• The deployment process of a Deployment object can be paused at any time without
affecting the deployer process.
• A Deployment object can have as many active replica sets as the user wants, and can scale
down previous replica sets. In contrast, the DeploymentConfig object can have only two
active replication controllers at a time.
Routes
Represent a DNS hostname that the OpenShift router recognizes as an ingress point for
applications and microservices.
Structure of Resources
Almost every Kubernetes object includes two nested object fields that govern the object's
configuration: the object spec and the object status. The spec object describes the intended
state of the resource, and the status object describes the current state. You specify the spec
section of the resource when you create the object. Kubernetes controllers continuously update
the status of the object throughout the existence of the object. The Kubernetes control plane
continuously and actively manages every object's actual state to match the desired state you
supplied.
The status field uses a collection of condition resource objects with the following fields.
For example, the message field provides an optional textual description for the condition, such
as 2/3 containers are running.
For example, in Kubernetes, a Deployment object can represent an application that is running on
your cluster. When you create a Deployment object, you might configure the deployment spec
object to specify that you want three replicas of the application to be running. Kubernetes reads
the deployment spec object and starts three instances of your chosen application, and updates
the status field to match your spec object. If any of those instances fails, then Kubernetes
responds to the difference between the spec and status objects by making a correction, in this
case to start a replacement instance.
Other common fields provide base information in addition to the spec and status fields of a
Kubernetes object.
For example, the metadata.namespace field identifies the namespace, or the RHOCP project,
where the resource resides.
Resources in Kubernetes consist of multiple objects. These objects define the intended state of
a resource. When you create or modify an object, you make a persistent record of the intended
state. Kubernetes reads the object and modifies the current state accordingly.
All RHOCP and Kubernetes objects can be represented as a JSON or YAML structure. Consider
the following pod object in the YAML format:
apiVersion: v1
kind: Pod
metadata:
  name: wildfly
  namespace: my-app
  labels:
    name: wildfly
spec:
  containers:
  - resources:
      limits:
        cpu: 0.5
    image: quay.io/example/todojee:v1
    name: wildfly
    ports:
    - containerPort: 8080
      name: wildfly
    env:
    - name: MYSQL_DATABASE
      value: items
    - name: MYSQL_USER
      value: user1
    - name: MYSQL_PASSWORD
      value: mypa55
...object omitted...
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-08-19T12:59:22Z"
    status: "True"
    type: PodScheduled
Schema identifier. In this example, the object conforms to the pod schema.
Metadata for a given resource, such as annotations, labels, name, and namespace.
A unique name for a pod in Kubernetes that enables administrators to run commands on it.
The namespace, or the RHOCP project that the resource resides in.
Creates a label with a name key that other resources in Kubernetes, usually a service, can use
to find it.
Defines the pod object configuration, or the intended state of the resource.
Name of the container inside a pod. Container names are important for oc commands when a
pod contains multiple containers.
Current state of the object. Kubernetes provides this field, which lists information such as
runtime status, readiness, and container images.
Labels are key-value pairs that you define in the .metadata.labels object path, for example:
kind: Pod
apiVersion: v1
metadata:
  name: example-pod
  labels:
    app: example-pod
    group: developers
...object omitted...
Command Outputs
The kubectl and oc CLI commands provide many output formatting options. By default, many
commands display a small subset of the most useful fields for the given resource type in a tabular
output. Many commands support a -o wide option that shows additional fields.
For example, the wide output for a pod adds fields such as the pod IP address (for example,
10.8.0.60) and the node that runs the pod (for example, master01).
To view all the fields that are associated with a resource, use the describe subcommand, which
shows a detailed description of the selected resource and related resources. You can select a
single object by name, select all objects of that type, provide a name prefix, or provide a label
selector.
For example, the following command first looks for an exact match on TYPE and NAME_PREFIX.
If no such resource exists, then the command outputs details for every resource of that type
whose name starts with the NAME_PREFIX prefix.
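A sketch of that command form:
[user@host ~]$ oc describe TYPE NAME_PREFIX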
The describe subcommand provides detailed human-readable output. However, the format of
the describe output might change between versions, and thus is not recommended for script
development. Any scripts that rely on the output of the describe subcommand might break after
a version update.
Kubernetes provides YAML and JSON-formatted output options that are suitable for parsing or
scripting.
YAML Output
The -o yaml option provides a YAML-formatted output that is parsable and still human-readable.
Note
The reference documentation provides a more detailed introduction to YAML.
Use the yq command-line YAML processor to filter the YAML output for your chosen field. The
yq processor uses a dot notation to separate field names in a query. The yq processor works with
YAML files and JSON files. The following example pipes the YAML output to the yq command to
parse the podIP field.
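A sketch, assuming the yq v3 read syntax that this course uses elsewhere:
[user@host ~]$ oc get pods -o yaml | yq r - 'items[0].status.podIP'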
The [0] in the example specifies the first index in the items array.
JSON Output
Kubernetes uses the JSON format internally to process resource objects. Use the -o json option
to view a resource in the JSON format.
Similar to the yq processor, use the jq processor and dot notation on the fields to query specific
information from the JSON-formatted output.
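For example, a sketch in which the pod index is illustrative:
[user@host ~]$ oc get pods -o json | jq '.items[0].status.podIP'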
Alternatively, the example might have used .items[].status.podIP for the query string. The
empty brackets instruct the jq tool to query all items.
Custom Output
Kubernetes provides a custom output format that combines the convenience of extracting data
via jq styled queries with a tabular output format. Use the -o custom-columns option with
comma-separated <column name> : <jq query string> pairs.
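For instance, a sketch that prints pod names and IP addresses:
[user@host ~]$ oc get pods -o custom-columns=PodName:.metadata.name,PodIP:.status.podIP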
Kubernetes also supports the use of JSONPath expressions to extract formatted output from
JSON objects. In the following example, the JSONPath expression uses the range operator to
iterate over the list of items.
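A sketch of such an expression, printing each pod name with its IP address:
[user@host ~]$ oc get pods -o jsonpath='{range .items[*]}{.metadata.name} {.status.podIP}{"\n"}{end}'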
References
For more information, refer to the OpenShift CLI (oc) chapter in the Red Hat
OpenShift Container Platform 4.12 CLI Tools documentation at
https://access.redhat.com/documentation/en-us/
openshift_container_platform/4.12/html-single/cli_tools/index#cli-using-cli_cli-
developer-commands
For more information about custom columns, refer to the oc get section in the
Red Hat OpenShift Container Platform 4.12 CLI Tools documentation at
https://access.redhat.com/documentation/en-us/
openshift_container_platform/4.12/html-single/cli_tools/index#cli-using-cli_cli-
developer-commands
Labels and selector details are available in the Labels and Selectors section of the Working with
Kubernetes Objects chapter of the Kubernetes documentation at
https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
Guided Exercise
Outcomes
• List and explain the supported API resources for a cluster.
Instructions
1. Log in to the OpenShift cluster as the developer user with the developer password.
Select the cli-resources project.
2. List the available cluster resource types with the api-resources command. Then, use
filters to list namespaced and non-namespaced resources.
2.1. List the available resource types with the api-resources command.
2.2. Use the --namespaced option to limit the output of the api-resources command
to namespaced resources.
Then, determine the number of available namespaced resources. Use the -o name
option to list the resource names, and then pipe the output to the wc -l command.
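A sketch of those commands; the count shown is illustrative:
[student@workstation ~]$ oc api-resources --namespaced=true
...output omitted...
[student@workstation ~]$ oc api-resources --namespaced=true -o name | wc -l
108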
The cluster has 108 namespaced cluster resource types, such as the pods,
deployments, and services resources.
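The equivalent count for non-namespaced resources would be similar to the following
(illustrative output):
[student@workstation ~]$ oc api-resources --namespaced=false -o name | wc -l
114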
The cluster has 114 non-namespaced cluster resource types, such as the nodes,
images, and project resources.
3. Identify and explain the available cluster resource types that the core API group provides.
Then, describe a resource from the core API group in the cli-resources project.
3.1. List the available resource types with the api-resources command.
You can use the APIVERSION field to determine which API group provides the
resource. The field lists the group, followed by the API version of the resource. For
example, the jobs resource type is provided by the batch API group, and v1 is the
API version of the resource.
3.2. Filter the output of the api-resources command to only show resources from the
core API group. Use the --api-group option and set '' as the value.
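That filter looks like the following:
[student@workstation ~]$ oc api-resources --api-group=''
...output omitted...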
The core API group provides many resource types, such as nodes, events, and pods.
3.3. Use the explain command to list a description and the available fields for the pods
resource type.
DESCRIPTION:
Pod is a collection of containers that can run on a host. This resource is
created by clients and scheduled onto hosts.
FIELDS:
apiVersion <string>
APIVersion defines the versioned schema of this representation of an
object. Servers should convert recognized schemas to the latest internal
value, and may reject unrecognized values
...output omitted...
A single pod exists in the cli-resources project. The pod name might differ in
your output.
3.5. Use the describe command to view the configuration and events for the pod.
Specify the pod name from the previous step.
...output omitted...
Status: Running
IP: 10.8.0.127
IPs:
IP: 10.8.0.127
Controlled By: ReplicaSet/myapp-54fcdcd9d7
Containers:
myapp:
Container ID: cri-o://e0da...669d
Image: registry.ocp4.example.com:8443/ubi8/httpd-24:1-215
Image ID: registry.ocp4.example.com:8443/ubi8/
httpd-24@sha256:91ad...fd83
...output omitted...
Limits:
cpu: 500m
memory: 128Mi
Requests:
cpu: 500m
memory: 128Mi
Environment: <none>
...output omitted...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10m default-scheduler Successfully assigned cli-
resources/myapp-54fcdcd9d7-2h5vx to master01
....output omitted...
3.6. Retrieve the details of the pod in a structured format. Use the get command and
specify the output as the YAML format. Compare the results of the describe
command versus the get command.
cpu: 500m
memory: 128Mi
...output omitted...
Using a structured format with the get command provides more details about a
resource than the describe command.
4. Identify and explain the available cluster resource types that the Kubernetes apps
API group provides. Then, describe a resource from the apps API group in the cli-
resources project.
4.1. List the resource types that the apps API group provides.
4.2. Use the explain command to list a description and fields for the deployments
resource type.
DESCRIPTION:
Deployment enables declarative updates for Pods and ReplicaSets.
FIELDS:
apiVersion <string>
APIVersion defines the versioned schema of this representation of an
object. Servers should convert recognized schemas to the latest internal
value, and may reject unrecognized values.
...output omitted...
4.3. Use the get command to identify any deployment resources in the cli-
resources project.
4.4. The myapp deployment exists in the cli-resources project. Use the get
command and the -o wide option to identify the container name and the container
image in the deployment.
4.5. Describe the myapp deployment to view more details about the resource.
5. Identify and explain the available cluster resource types that the OpenShift configuration
API group provides. Then, describe a resource from the OpenShift configuration API group.
5.1. List the resource types that the OpenShift configuration API group provides.
5.2. Use the explain command to list a description and fields for the projects
resource type.
DESCRIPTION:
Projects are the unit of isolation and collaboration in OpenShift. A
project has one or more members, a quota on the resources that the project
may consume, and the security controls on the resources in the project.
Within a project, members may have different roles - project administrators
can set membership, editors can create and manage the resources, and
viewers can see but not access running containers. In a normal cluster
project administrators are not able to alter their quotas - that is
restricted to cluster administrators.
Listing or watching projects will return only projects the user has the
reader role on.
...output omitted...
openshift.io/display-name=
openshift.io/requester=system:admin
openshift.io/sa.scc.mcs=s0:c27,c4
openshift.io/sa.scc.supplemental-groups=1000710000/10000
openshift.io/sa.scc.uid-range=1000710000/10000
Display Name: <none>
Description: <none>
Status: Active
Node Selector: <none>
Quota: <none>
Resource limits: <none>
5.4. Retrieve more details of the cli-resources project. Use the get command, and
format the output to use JSON.
The get command provides additional details, such as the kind and apiVersion
attributes, of the project resource.
6. Verify the cluster status by inspecting cluster services. Format command outputs by using
filters.
6.1. Retrieve the list of pods for the Etcd operator. The Etcd operator is available in the
openshift-etcd namespace. Specify the namespace with the --namespace or -n
option.
6.2. Retrieve the conditions status of the etcd-master01 pod in the openshift-
etcd namespace. Use filters to limit the output to the .status.conditions
attribute of the pod. Compare the outputs of the JSONPath and jq filters.
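A sketch of the two commands; the pod name matches the single-node classroom cluster:
[student@workstation ~]$ oc get pod etcd-master01 -n openshift-etcd -o jsonpath='{.status.conditions}'
[student@workstation ~]$ oc get pod etcd-master01 -n openshift-etcd -o json | jq '.status.conditions'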
"lastTransitionTime": "2023-03-07T18:05:06Z",
"status": "True",
"type": "PodScheduled"
}
]
Using the JSON format and the jq filter provides a structured output for the
.status.conditions attribute.
6.3. Retrieve the condition status of the prometheus-k8s-0 pod in the openshift-
monitoring namespace. Configure the output to use the YAML format, and then
filter the output with the yq filter.
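A sketch, assuming the yq v3 read syntax:
[student@workstation ~]$ oc get pod prometheus-k8s-0 -n openshift-monitoring -o yaml | yq r - 'status.conditions'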
The - argument to the yq r command tells yq to read the YAML output of the get
command from standard input (STDIN).
6.4. Use the get command to retrieve detailed information for the pods in the
openshift-storage namespace. Use the YAML format and custom columns to
filter the output according to the following table:
Column      Resource attribute
Pod         metadata.name
API         apiVersion
Container   spec.containers[].name
Phase       status.phase
IP          status.podIP
Ports       spec.containers[].ports[].containerPort
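A sketch of such a command; the leading dots in the column expressions are added here for the
custom-columns syntax:
[student@workstation ~]$ oc get pods -n openshift-storage -o custom-columns=Pod:.metadata.name,API:.apiVersion,Container:.spec.containers[*].name,Phase:.status.phase,IP:.status.podIP,Ports:.spec.containers[*].ports[*].containerPort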
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• Query the health of essential cluster services and components.
Operators integrate with Kubernetes APIs and CLI tools such as kubectl and oc commands.
Operators provide the means of monitoring applications, performing health checks, managing
over-the-air (OTA) updates, and ensuring that applications remain in your specified state.
Apart from CRI-O and the kubelet, which run on every node, almost every other cluster function is
managed on the control plane by using Operators. Components that are added to the control
plane by using operators include critical networking and credential services.
Operators in RHOCP are managed by two different systems, depending on the purpose of the
operator.
Cluster operators are represented by the clusteroperators resource type, and thus can be
queried by using oc or kubectl commands. As a user with the cluster-admin role, use the oc
get clusteroperators command to list all the cluster operators.
For more details about a cluster operator, use the describe clusteroperators
operator-name command to view the field values that are associated with the operator,
including the current status of the operator. The purpose of the describe command is to
provide a human-readable output format for a resource. As such, the output format might
change with an RHOCP version update.
For an output format that is less likely to change with a version update, use one of
the -o output options of the get command. For example, use the following oc get
clusteroperators command for the YAML-formatted output details for the dns operator.
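That command is:
[user@host ~]$ oc get clusteroperators dns -o yaml
...output omitted...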
As a user with the cluster-admin role, use the get operators command to list all the
add-on operators.
You can likewise use the describe and get commands to query details about the fields that
are associated with the add-on operators.
Operators use one or more pods to provide cluster services. You can find the namespaces for
these pods under the relatedObjects section of the detailed output for the operator. As a user
with a cluster-admin role, use the -n namespace option on the get pod command to view
the pods. For example, use the following get pods command to retrieve the list of pods in the
openshift-dns-operator namespace.
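For example:
[user@host ~]$ oc get pods -n openshift-dns-operator
...output omitted...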
Use the -o yaml or -o json output formats to view or analyze more details about the pods. The
resource conditions, which are found in the status for the resource, track the current state of the
resource object. The following example uses the jq processor to extract the status values from
the JSON output details for the dns pod.
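A sketch; the jq query iterates over the pods in the namespace, because the generated pod name
varies:
[user@host ~]$ oc get pods -n openshift-dns-operator -o json | jq '.items[].status.conditions'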
In addition to listing the pods of a namespace, you can also use the --show-labels option of the
get command to print the labels used by the pods. The following example retrieves the pods and
their labels in the openshift-etcd namespace.
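For instance:
[user@host ~]$ oc get pods -n openshift-etcd --show-labels
...output omitted...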
The -A option shows pods from all namespaces. Use the -n namespace option to filter the
results to show the pods in a single namespace. Use the --containers option to display
the resource usage of containers within a pod. For example, use the following command to list
the resource usage of the containers in the etcd-master01 pod in the openshift-etcd
namespace.
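This paragraph describes options of the oc adm top pods command; a sketch of the example that
it refers to:
[user@host ~]$ oc adm top pods etcd-master01 -n openshift-etcd --containers
...output omitted...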
• Current cluster capacity based on CPU, memory, storage, and network usage
• A time-series graph of total CPU, memory, and disk usage
• The ability to display the top consumers of CPU, memory, and storage
For any of the listed resources in the Cluster Utilization section, administrators can click the link
for current resource usage. The link displays a window with a breakdown of top consumers for that
resource. Top consumers can be sorted by project, by pod, or by node. The list of top consumers
can be useful for identifying problematic pods or nodes. For example, a pod with an unexpected
memory leak might appear at the top of the list.
All metrics are pulled from Prometheus. Click any graph to navigate to the Metrics page. View the
executed query, and inspect the data further.
If a resource quota is created for the project, then the current project request and limits appear on
the Project Details page.
To perform a query, navigate to Observe > Metrics, enter a Prometheus Query Language
expression in the text field, and click Run Queries. The results of the query are displayed as a time-
series graph:
Note
The Prometheus Query Language is not discussed in detail in this course. Refer to
the references section for a link to the official documentation.
To read events, use the get events command. Events that the get events command lists are
not filtered and span the whole RHOCP cluster. The -n namespace option filters the events to
only the pods in a selected RHOCP project (namespace). The following get events command
prints events in the openshift-image-registry namespace.
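A sketch of that command:
[user@host ~]$ oc get events -n openshift-image-registry
...output omitted...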
You can use the describe pod pod-name command to further narrow the results to a single
pod. For example, to retrieve only the events that relate to a mysql pod, you can refer to the
Events field from the output of the oc describe pod mysql command:
Kubernetes Alerts
RHOCP includes a monitoring stack, which is based on the Prometheus open source project. The
monitoring stack is configured to monitor the core RHOCP cluster components, by default. You
can optionally configure the monitoring stack also to monitor user projects.
Use the following get all command to display a list of all resources, their status, and their types
in the openshift-monitoring namespace.
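For example:
[user@host~]$ oc get all -n openshift-monitoring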
The alertmanager-main-0 pod is the Alertmanager for the cluster. The following logs
command shows the logs of the alertmanager-main-0 pod, which displays the received
messages from Prometheus.
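One possible form of this command; the -c option selects the alertmanager container in this multicontainer pod:
[user@host~]$ oc logs alertmanager-main-0 -c alertmanager -n openshift-monitoring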
[user@host~]$ oc cluster-info
The oc cluster-info output is high-level, and can verify that the cluster nodes are running. For
a more detailed view into the cluster nodes, use the get nodes command.
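For example:
[user@host~]$ oc get nodes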
The example shows a single master01 node with multiple roles. The STATUS value of Ready
means that this node is healthy and can accept new pods. A STATUS value of NotReady means
that a node condition triggered that state and that the node is not accepting new pods.
As with any other RHOCP resource, you can drill down into further details of the node resource
with the describe node node-name command. For parsable output of the same information,
use the -o json or the -o yaml output options with the get node node-name command.
For more information about using and parsing these output formats, see Inspect Kubernetes
Resources .
The output of the get node node-name command with the -o json or -o yaml option is
long. The following examples use the -o jsonpath option or the jq processor to parse the get
node node-name command output.
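A JSONPath expression of the following form extracts these values; the exact expression might differ:
[user@host~]$ oc get node master01 -o jsonpath='{"Allocatable:"}{.status.allocatable}{"\n"}{"Capacity:"}{.status.capacity}{"\n"}'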
Capacity:
{"cpu":"8","ephemeral-storage":"125293548Ki","hugepages-1Gi":"0",
"hugepages-2Mi":"0","memory":"20531668Ki","pods":"250"}
The JSONPath expression in the previous command extracts the allocatable and capacity
measures for the master01 node. These measures help to understand the available resources on
a node.
View the status object of a node to understand the current health of the node.
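For example:
[user@host~]$ oc get node master01 -o json | jq '.status.conditions'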
If the status of the MemoryPressure condition is true, then the node is low on memory.
If the status of the DiskPressure condition is true, then the disk capacity of the node is low.
If the status of the PIDPressure condition is true, then too many processes are running on
the node.
If the status of the Ready condition is false, then the node is not healthy and is not accepting
pods.
Condition Description
OutOfDisk If true, then the node has insufficient free space for adding new pods.
NetworkUnavailable If true, then the network for the node is not correctly configured.
To gain deeper insight into a given node, you can view the logs of processes that run on the node.
A cluster administrator can use the oc adm node-logs command to view node logs. Node logs
might contain sensitive output, and thus are limited to privileged node administrators. Use oc
adm node-logs node-name to filter the logs to a single node.
The oc adm node-logs command has other options to further filter the results.
--role master Use the --role option to filter the output to nodes with a
specified role.
For example, to retrieve the most recent log entry for the crio service on the master01 node,
you can use the following command.
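A command of the following form can be used; the -u option selects the crio unit and the --tail option limits the output (assumed options; output omitted):
[user@host~]$ oc adm node-logs master01 -u crio --tail 1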
When you create a pod with the CLI, the oc or kubectl command is sent to the apiserver
service, which then validates the command. The scheduler service reads the YAML or JSON
pod definition, and then assigns pods to compute nodes. Each compute node runs a kubelet
service that converts the pod manifest to one or more containers in the CRI-O container runtime.
Each compute node must have an active kubelet service and an active crio service. To verify
the health of these services, first start a debug session on the node by using the debug command.
Note
The debug command is covered in greater detail in a later section.
Within the debug session, change to the /host root directory so that you can run binaries in the
host's executable path.
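For example, assuming the master01 node that is used elsewhere in this course:
[user@host~]$ oc debug node/master01
...output omitted...
sh-4.4# chroot /host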
Then, use the systemctl is-active command to confirm that the services are active.
For more details about the status of a service, use the systemctl status command.
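For example:
sh-4.4# systemctl is-active kubelet
sh-4.4# systemctl is-active crio
sh-4.4# systemctl status kubelet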
In RHOCP, the following command returns the output for a container within a pod:
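A likely form of this command, using the logs subcommand:
[user@host~]$ oc logs pod-name -c container-name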
Replace pod-name with the name of the target pod, and replace container-name with the
name of the target container. The -c container-name argument is optional, if the pod has
only one container. You must use the -c container-name argument to connect to a specific
container in a multicontainer pod. Otherwise, the command defaults to the only running container
and returns the output.
When debugging images and setup problems, it is useful to get an exact copy of a running pod
configuration, and then troubleshoot it with a shell. If a pod is failing or does not include a shell,
then the rsh and exec commands might not work. To resolve this issue, the debug command
creates a copy of the specified pod and starts a shell in that pod.
By default, the debug command starts a shell inside the first container of the referenced pod.
The debug pod is a copy of your source pod, with some additional modifications. For example, the
pod labels are removed. The executed command is also changed to the '/bin/sh' command for
Linux containers, or the 'cmd.exe' executable for Windows containers. Additionally, readiness and
liveness probes are disabled.
A common problem for containers in pods is security policies that prohibit a container from
running as a root user. You can use the debug command to test running a pod as a non-root user
by using the --as-user option. You can also run a non-root pod as the root user with the --as-
root option.
With the debug command, you can invoke other types of objects besides pods. For example, you
can use any controller resource that creates a pod, such as a deployment, a build, or a job. The
debug command also works with nodes, and with resources that can create pods, such as image
stream tags. You can also use the --image=IMAGE option of the debug command to start a shell
session by using a specified image.
If you do not include a resource type and name, then the debug command starts a shell session
into a pod by using the OpenShift tools image.
[user@host~]$ oc debug
The debug pod is deleted when the remote command completes, or when the user interrupts the
shell.
The oc adm must-gather command collects resource definitions and service logs from
your cluster that are most likely needed for debugging issues. This command creates a pod in
a temporary namespace on your cluster, and the pod then gathers and downloads debugging
information. By default, the oc adm must-gather command uses the default plug-in image,
and writes into the ./must-gather.local. directory on your local system. To write to a specific
local directory, you can also use the --dest-dir option, such as in the following example:
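One possible invocation (output omitted):
[user@host~]$ oc adm must-gather --dest-dir /home/student/must-gather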
Then, create a compressed archive file from the must-gather directory. For example, on a Linux-
based system, you can run the following command:
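One possible invocation, assuming that the archive is created from the directory that was specified with the --dest-dir option:
[user@host~]$ tar cvaf must-gather.tar.gz must-gather/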
Then, attach the compressed archive file to your support case in the Red Hat Customer Portal.
Similar to the oc adm must-gather command, the oc adm inspect command gathers
information on a specified resource. For example, the following command collects debugging data
for the openshift-apiserver and kube-apiserver cluster operators.
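One possible form of this command:
[user@host~]$ oc adm inspect clusteroperator/openshift-apiserver clusteroperator/kube-apiserver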
The oc adm inspect command can also use the --dest-dir option to specify a local
directory to write the gathered information. The command shows all logs by default. Use the --
since option to filter the results to logs that are newer than a relative duration, such as 5s, 2m, or
3h.
References
For more information, refer to the Control Plane Architecture chapter in the Red Hat
OpenShift Container Platform 4.12 Architecture documentation at
https://access.redhat.com/documentation/en-us/
openshift_container_platform/4.12/html-single/architecture/index#control-plane
For more information, refer to the Red Hat OpenShift Container Platform 4.12
Monitoring documentation at
https://access.redhat.com/documentation/en-us/
openshift_container_platform/4.12/html-single/monitoring/index
For more information, refer to the Red Hat OpenShift Container Platform 4.12
Nodes documentation at
https://access.redhat.com/documentation/en-us/
openshift_container_platform/4.12/html-single/nodes/index#working-with-nodes
Querying Prometheus
https://prometheus.io/docs/prometheus/latest/querying/basics/
For more information about gathering diagnostic data about your cluster, refer to
the Gathering data about your cluster chapter in the Red Hat OpenShift Container
Platform 4.12 Support documentation at
https://access.redhat.com/documentation/en-us/
openshift_container_platform/4.12/html-single/support/index#gathering-cluster-
data
Guided Exercise
Outcomes
• View the status and get information about cluster operators.
Instructions
1. Retrieve the status and view information about cluster operators.
1.1. Log in to the OpenShift cluster as the admin user with the redhatocp password.
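One possible form of this command, using the cluster API URL from the lab environment (output omitted):
[user@host~]$ oc login -u admin -p redhatocp https://api.ocp4.example.com:6443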
1.2. List the operators that users installed in the OpenShift cluster.
1.3. List the cluster operators that are installed by default in the OpenShift cluster.
1.4. Use the describe command to view detailed information about the openshift-
apiserver cluster operator, such as related objects, events, and version.
Name: openshift-apiserver
Resource: namespaces
...output omitted...
Versions:
Name: operator
Version: 4.12.0
Name: openshift-apiserver
Version: 4.12.0
Events: <none>
The Related Objects attribute includes information about the name, resource
type, and groups for objects that are related to the operator.
1.5. List the pods in the openshift-apiserver-operator namespace. Then, view the
detailed status of an openshift-apiserver-operator pod by using the JSON
format and the jq command. Your pod names might differ.
"podIP": "10.8.0.5",
...output omitted...
}
2.1. List the memory and CPU usage of all pods in the cluster. Use the --sum option to
print the sum of the resource usage. The resource usage on your system probably
differs.
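A command of the following form produces this listing (output omitted):
[user@host~]$ oc adm top pods -A --sum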
2.2. List the pods and their labels in the openshift-etcd namespace.
2.3. List the resource usage of the containers in the etcd-master01 pod in the
openshift-etcd namespace. The resource usage on your system probably differs.
2.4. Display a list of all resources, their status, and their types in the openshift-
monitoring namespace.
3.2. Retrieve the resource consumption of the master01 node. The resource usage on
your system probably differs.
3.3. Use a JSONPath filter to determine the capacity and allocatable CPU for the
master01 node. The values might differ on your system.
3.5. Use the describe command to view the events, resource requests, and resource
limits for the node. The output might differ on your system.
4. Retrieve the logs and status of the systemd services on the master01 node.
4.1. Display the logs of the node. Filter the logs to show the most recent log for the crio
service. The logs might differ on your system.
4.2. Display the two most recent logs of the kubelet service on the node. The logs might
differ on your system.
4.3. Create a debug session for the node. Then, use the chroot /host command to
access the host binaries.
sh-4.4# exit
exit
sh-4.4# exit
exit
5.1. Retrieve debugging information of the cluster by using the oc adm must-gather
command. Specify the /home/student/must-gather directory as the destination
directory. This command might take several minutes to complete.
Then, confirm that the debugging information exists in the destination directory.
5.2. Generate debugging information for the openshift-api cluster operator. Specify
the /home/student/inspect directory as the destination directory. Limit the
debugging information to the last five minutes.
Then, confirm that the debugging information exists in the destination directory.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Lab
Outcomes
• Use the command line to retrieve information about the cluster resources.
• List the resource types that the OpenShift configuration API group provides.
• Use the JSONPath filter to get the number of allocatable pods and compute resources for
a node.
• List the memory and CPU usage of all pods in the cluster.
Instructions
The API URL of your OpenShift cluster is https://api.ocp4.example.com:6443, and the oc
command is already installed on your workstation machine.
Log in to the OpenShift cluster as the developer user with the developer password.
• List the resource types that the oauth.openshift.io API group provides.
• List the compute resource usage of the containers in the etcd-master01 pod in the
openshift-etcd namespace.
• Get the number of allocatable pods for the master01 node by using a JSONPath filter.
• List the memory and CPU usage of all pods in the cluster.
• Retrieve the capacity and allocatable CPU for the master01 node by using a JSONPath
filter.
5. Retrieve debugging information for the cluster. Specify the /home/student/DO180/
labs/cli-review/debugging directory as the destination directory.
Then, generate debugging information for the kube-apiserver cluster operator. Specify
the /home/student/DO180/labs/cli-review/inspect directory as the destination
directory. Limit the debugging information to the last five minutes.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Solution
Outcomes
• Use the command line to retrieve information about the cluster resources.
• List the resource types that the OpenShift configuration API group provides.
• Use the JSONPath filter to get the number of allocatable pods and compute resources for
a node.
• List the memory and CPU usage of all pods in the cluster.
Instructions
The API URL of your OpenShift cluster is https://api.ocp4.example.com:6443, and the oc
command is already installed on your workstation machine.
Log in to the OpenShift cluster as the developer user with the developer password.
2. Use the oc command to list the following information for the cluster:
DESCRIPTION:
...output omitted...
3. From the terminal, log in to the OpenShift cluster as the admin user with the redhatocp
password. Then, use the command line to identify the following cluster resources:
• List the resource types that the oauth.openshift.io API group provides.
3.4. Identify the resources that belong to the core API group.
3.5. List the resource types that the OpenShift configuration API group provides.
4. Identify the following information about the cluster services and its nodes:
• List the compute resource usage of the containers in the etcd-master01 pod in the
openshift-etcd namespace.
• Get the number of allocatable pods for the master01 node by using a JSONPath filter.
• List the memory and CPU usage of all pods in the cluster.
• Retrieve the capacity and allocatable CPU for the master01 node by using a JSONPath
filter.
4.1. Retrieve the conditions status of the etcd-master01 pod in the openshift-
etcd namespace. Use jq filters to limit the output to the .status.conditions
attribute of the pod.
4.2. List the resource usage of the containers in the etcd-master01 pod in the
openshift-etcd namespace.
4.3. Use a JSONPath filter to determine the number of allocatable pods for the master01
node.
4.4. List the memory and CPU usage of all pods in the cluster. Use the --sum option to
print the sum of the resource usage. The resource usage on your system probably
differs.
505m 8982Mi
4.6. Use a JSONPath filter to determine the capacity and allocatable CPU for the
master01 node.
5.1. Retrieve debugging information for the cluster. Save the output to the /home/
student/DO180/labs/cli-review/debugging directory.
ClusterID: 94ff22c1-88a0-44cf-90f6-0b7b8b545434
ClusterVersion: Stable at "4.12.0"
ClusterOperators:
All healthy and stable
5.2. Generate debugging information for the kube-apiserver cluster operator. Save the
output to the /home/student/DO180/labs/cli-review/inspect directory, and
limit the debugging information to the last five minutes.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Quiz
As the student user on the workstation machine, log in to the OpenShift cluster as the
admin user with the redhatocp password:
3. Which three commands display the conditions of the master01 node? (Choose three.)
a. oc get node/master01 -o json | jq '.status.conditions'
b. oc get node/master01 -o wide
c. oc get node/master01 -o yaml
d. oc get node/master01 -o json
4. Select the two valid condition types for a control plane node. (Choose two.)
a. PIDPressure
b. DiskIOPressure
c. OutOfMemory
d. Ready
5. Select three valid options for the oc adm top pods command. (Choose three.)
a. -A
b. --sum
c. --pod-selector
d. --containers
Solution
As the student user on the workstation machine, log in to the OpenShift cluster as the
admin user with the redhatocp password:
3. Which three commands display the conditions of the master01 node? (Choose three.)
a. oc get node/master01 -o json | jq '.status.conditions'
b. oc get node/master01 -o wide
c. oc get node/master01 -o yaml
d. oc get node/master01 -o json
4. Select the two valid condition types for a control plane node. (Choose two.)
a. PIDPressure
b. DiskIOPressure
c. OutOfMemory
d. Ready
5. Select three valid options for the oc adm top pods command. (Choose three.)
a. -A
b. --sum
c. --pod-selector
d. --containers
Summary
• An RHOCP cluster can be managed from the web console or by using the kubectl or oc
command-line interfaces (CLI).
• Use the --help option on any command to view detailed information about the command.
• Token authentication is the only guaranteed method to work with any RHOCP cluster, because
enterprise SSO might replace the login form of the web console.
• All administrative tasks require creating, viewing, and changing the API resources.
• Kubernetes provides YAML- and JSON-formatted output options, which are ideal for parsing or
scripting.
• Operators provide the means of monitoring applications, performing health checks, managing
over-the-air (OTA) updates, and ensuring that applications remain in your specified state.
• The RHOCP web console incorporates useful graphs to visualize cluster and resource analytics.
• The RHOCP web console provides an interface for executing Prometheus queries, visualizing
metrics, and configuring alerts.
• The monitoring stack is based on the Prometheus project, and it is configured to monitor the
core RHOCP cluster components, by default.
• RHOCP provides the ability to view logs in running containers and pods to ease troubleshooting.
• You can collect resource definitions and service logs from your cluster by using the oc adm
must-gather command.
Chapter 3
Run Applications as Containers and Pods
Objectives
• Run containers inside pods and identify the host OS processes and namespaces that the
containers use.
Containers Overview
In computing, a container is an encapsulated process that includes the required runtime
dependencies for the program to run. In a container, application-specific libraries are independent
of the host operating system libraries. The host kernel provides libraries and functions that are not
specific to the containerized application. The libraries within the container evoke operating system
calls. The provided libraries and functions help to ensure that the container remains compact, and
that it can quickly execute and stop as needed. A container engine creates a union file system
by merging container image layers. Because container image layers are immutable, a container
engine further adds a thin writable layer for runtime file modifications. Containers are ephemeral
by default, which means that the container engine removes the writable layer when you remove
the container.
Containers use Linux kernel features, such as namespaces and cgroups. For example, containers
use control groups (cgroups) for resource management, such as CPU time allocation and system
memory. Namespaces isolate a container's processes and resources from other containers and the
host system. The container environment is Linux-based, regardless of the host operating system.
For containers that run on non-Linux operating systems, the container engine implementation
often virtualizes these Linux-specific features.
Kubernetes uses pods to manage the containers and their resources for your applications. When
an application requires more resources to meet consumption or capacity demands, Kubernetes
can scale up the application by creating more pods. Likewise, Kubernetes can scale down the
application by deleting pods when consumption demands are low.
Note
Pod scaling is discussed in more detail elsewhere in the course.
Kubernetes also uses pods to manage the lifecycle of an application's containers. For example, if a
container in a pod fails, then Kubernetes can restart the container or deploy a new pod to replace
the failed one, thus ensuring that the application is always running and available.
The following figure illustrates the basic lifecycle of an application that is deployed in a Kubernetes
cluster:
• Starts with the definition of a pod and the containers that it is composed of, which contain the
application.
Depending on policy and exit code, pods might be removed after exiting, or might be retained to
enable access to the logs of their containers.
Note
Container images are discussed in more detail elsewhere in the course.
For example, the following command deploys an Apache HTTPD application in a pod named web-
server that uses the registry.access.redhat.com/ubi8/httpd-24 container image.
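A command of the following form creates this pod:
[user@host~]$ oc run web-server --image=registry.access.redhat.com/ubi8/httpd-24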
You can use several options and flags with the run command. The --command option executes
a custom command and its arguments in a container, rather than the default command that is
defined in the container image. You must follow the --command option with a double dash (--) to
separate the custom command and its arguments from the run command options. The following
syntax is used with the --command option:
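A likely form of this syntax; pod-name, image-name, and the trailing command and arguments are placeholders:
[user@host~]$ oc run pod-name --image=image-name --command -- command arg1 arg2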
You can also use the double dash option to provide custom arguments to a default command in
the container image.
To start an interactive session with a container in a pod, include the -it options before the pod
name. The -i option tells Kubernetes to keep open the standard input (stdin) on the container
in the pod. The -t option tells Kubernetes to open a TTY session for the container in the pod. You
can use the -it options to start an interactive, remote shell in a container. From the remote shell,
you can then execute additional commands in the container.
The following example starts an interactive remote shell, /bin/bash, in the default container in
the my-app pod.
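One possible form of this example; the image name is an assumption:
[user@host~]$ oc run my-app -it --image=registry.access.redhat.com/ubi9/ubi -- /bin/bash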
Note
Unless you include the --namespace or -n options, the run command creates
containers in pods in the current selected project.
You can also define a restart policy for containers in a pod by including the --restart option.
A pod restart policy determines how the cluster should respond when containers in that pod exit.
The --restart option has the following accepted values: Always, OnFailure, and Never.
Always
If the restart policy is set to Always, then the cluster continuously tries to restart a
successfully exited container, for up to five minutes. The default pod restart policy is Always.
If the --restart option is omitted, then the pod is configured with the Always policy.
OnFailure
Setting the pod restart policy to OnFailure tells the cluster to restart only failed containers
in the pod, for up to five minutes.
Never
If the restart policy is set to Never, then the cluster does not try to restart exited or failed
containers in a pod. Instead, the pods immediately fail and exit.
The following example command executes the date command in the container of the pod named
my-app, redirects the date command output to the terminal, and defines Never as the pod
restart policy.
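One possible form of this example; the image name is an assumption:
[user@host~]$ oc run my-app -it --restart=Never --image=registry.access.redhat.com/ubi9/ubi -- date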
To automatically delete a pod after it exits, include the --rm option with the run command.
For some containerized applications, you might need to specify environment variables for the
application to work. To specify an environment variable and its value, include the --env= option
with the run command.
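For example, with placeholder names and values:
[user@host~]$ oc run my-app --image=image-name --env=MY_VAR=value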
With OpenShift default security policies, regular cluster users cannot choose the USER or UIDs
for their containers. When a regular cluster user creates a pod, OpenShift ignores the USER
instruction in the container image. Instead, OpenShift assigns the container user a UID and a
supplemental GID from the range that the project annotations identify. The GID of the user
is always 0, which means that the user belongs to the root group. Any files and directories that
the container processes might write to must have read and write permissions by GID=0 and have
the root group as the owner. Although the user in the container belongs to the root group, the
user is an unprivileged account.
In contrast, when a cluster administrator creates a pod, the USER instruction in the container
image is processed. For example, if the USER instruction for the container image is set to 0, then
the user in the container is the root privileged account, with a 0 value for the UID. Executing
a container as a privileged account is a security risk. A privileged account in a container has
unrestricted access to the container's host system. Unrestricted access means that the container
could modify or delete system files, install software, or otherwise compromise its host. Red Hat
therefore recommends that you run containers as rootless, or as an unprivileged user with only
the necessary privileges for the container to run.
Red Hat also recommends that you run containers from different applications with unique user IDs.
Running containers from different applications with the same UID, even an unprivileged one, is a
security risk. If the UID for two containers is the same, then the processes in one container could
access the resources and files of the other container. By assigning a distinct range of UIDs and
GIDs for each project, OpenShift ensures that applications in different projects do not run as the
same UID or GID.
Pod Security
The Kubernetes Pod Security Admission controller issues a warning when a pod is created without
a defined security context. Security contexts grant or deny OS-level privileges to pods. OpenShift
uses the Security Context Constraints controller to provide safe defaults for pod security. You
can ignore pod security warnings in these course exercises. Security Context Constraints (SCC)
are discussed in more detail in course DO280: Red Hat OpenShift Administration II: Operating a
Production Kubernetes Cluster.
The output of the executed command is sent to your terminal. In the following example, the exec
command executes the date command in the my-app pod.
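A command of the following form produces this result:
[user@host~]$ oc exec my-app -- date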
The specified command is executed in the first container of a pod. For multicontainer pods,
include the -c or --container= options to specify which container is used to execute the
command. The following example executes the date command in a container named ruby-
container in the my-app pod.
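For example:
[user@host~]$ oc exec my-app -c ruby-container -- date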
The exec command also accepts the -i and -t options to create an interactive session with a
container in a pod. In the following example, Kubernetes sends stdin to the bash shell in the
ruby-container container from the my-app pod, and sends stdout and stderr from the
bash shell back to the terminal.
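A likely form of this example:
[user@host~]$ oc exec -it my-app -c ruby-container -- bash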
In the previous example, a raw terminal is opened in the ruby-container container. From this
interactive session, you can execute additional commands in the container. To terminate the
interactive session, you must execute the exit command in the raw terminal.
Container Logs
Container logs are the standard output (stdout) and standard error (stderr) output of a
container. You can retrieve logs with the logs pod pod-name command that the kubectl and
oc CLIs provide. The command includes the following options:
-l or --selector=''
Filter objects based on the specified key:value label constraint.
--tail=
Specify the number of recent log lines to display; the default value is -1 with no
selectors, which displays all log lines.
-c or --container=
Print the logs of a particular container in a multicontainer pod.
-f or --follow
Follow, or stream, logs for a container.
-p or --previous=true
Print the logs of a previous container instance in the pod, if it exists. This option is helpful for
troubleshooting a pod that failed to start, because it prints the logs of the last attempt.
The following example restricts oc logs command output to the 10 most recent log lines:
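For example, assuming a pod named my-app:
[user@host~]$ oc logs my-app --tail=10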
You can also use the attach pod-name -c container-name -it command to connect
to and start an interactive session on a running container in a pod. The -c container-name
option is required for multicontainer pods. If the container name is omitted, then Kubernetes uses
the kubectl.kubernetes.io/default-container annotation on the pod to select the
container. Otherwise, the first container in the pod is chosen. You can use the interactive session
to retrieve application log files and to troubleshoot application issues.
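One possible invocation, assuming a single-container pod named my-app:
[user@host~]$ oc attach my-app -it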
bash-4.4$
Deleting Resources
You can delete Kubernetes resources, such as pod resources, with the delete command. The
delete command can delete resources by resource type and name, resource type and label,
standard input (stdin), and with JSON- or YAML-formatted files. The command accepts only
one argument type at a time.
For example, you can supply the resource type and name as a command argument.
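For example:
[user@host~]$ oc delete pod my-app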
To select resources based on labels, you can include the -l option and the key:value label as a
command argument.
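For example, with an assumed app=my-app label:
[user@host~]$ oc delete pod -l app=my-app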
You can also provide the resource type and a JSON- or YAML-formatted file that specifies the
name of the resource. To use a file, you must include the -f option and provide the full path to the
JSON- or YAML-formatted file.
You can also use stdin and a JSON- or YAML-formatted file that includes the resource type and
resource name with the delete command.
Pods support graceful termination, which means that pods try to terminate their processes first
before Kubernetes forcibly terminates the pods. To change the time period before a pod is forcibly
terminated, you can include the --grace-period flag and a time period in seconds in your
delete command. For example, to change the grace period to 10 seconds, use the following
command:
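One possible form of this command:
[user@host~]$ oc delete pod my-app --grace-period=10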
To shut down the pod immediately, set the grace period to 1 second. You can also use the --now
flag to set the grace period to 1 second.
You can also forcibly delete a pod with the --force option. If you forcibly delete a pod,
Kubernetes does not wait for a confirmation that the pod's processes ended, which can leave the
pod's processes running until its node detects the deletion. Therefore, forcibly deleting a pod
could result in inconsistency or data loss. Forcibly delete pods only if you are sure that the pod's
processes are terminated.
To delete all pods in a project, you can include the --all option.
Likewise, you can delete a project and its resources with the oc delete project project-
name command.
Note
For more information about the Kubernetes Container Runtime Interface (CRI)
standards, refer to the CRI-API repository at https://github.com/kubernetes/cri-api.
CRI-O provides a command-line interface to manage containers with the crictl command.
The crictl command includes several subcommands to help you to manage containers. The
following subcommands are commonly used with the crictl command:
crictl pods
Lists all pods on a node.
crictl image
Lists all images on a node.
crictl inspect
Retrieve the status of one or more containers.
crictl exec
Run a command in a running container.
crictl logs
Retrieve the logs of a container.
crictl ps
List running containers on a node.
To manage containers with the crictl command, you must first identify the node that is hosting
your containers.
Next, you must connect to the identified node as a cluster administrator. Cluster administrators
can use SSH to connect to a node or create a debug pod for the node. Regular users cannot
connect to or create debug pods for cluster nodes.
As a cluster administrator, you can create a debug pod for a node with the oc debug
node/node-name command. OpenShift creates the pod/node-name-debug pod in your
currently selected project and automatically connects you to the pod. You must then enable
access to host binaries, such as the crictl command, with the chroot /host command. This
command mounts the host's root file system in the /host directory within the debug pod shell.
By changing the root directory to the /host directory, you can run binaries contained in the host's
executable path.
After enabling host binaries, you can use the crictl command to manage the containers on the
node. For example, you can use the crictl ps and crictl inspect commands to retrieve
the process ID (PID) of a running container. You can then use the PID to retrieve or enter the
namespaces within a container, which is useful for troubleshooting application issues.
To find the PID of a running container, you must first determine the container's ID. You can use
the crictl ps command with the --name option to filter the command output to a specific
container.
The default output of the crictl ps command is a table. You can find the short container ID
under the CONTAINER column. You can also use the -o or --output options to specify the
format of the crictl ps command as JSON or YAML and then parse the output. The parsed
output displays the full container ID.
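A sketch of this approach; the jq path assumes the crictl ps JSON output structure with a containers array, and my-container is a placeholder name:
sh-4.4# crictl ps --name my-container -o json | jq -r '.containers[0].id'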
After identifying the container ID, you can use the crictl inspect command and the container
ID to retrieve the PID of the running container. By default, the crictl inspect command
displays verbose output. You can use the -o or --output options to format the command output
as JSON, YAML, a table, or as a Go template. If you specify the JSON format, you can then parse
the output with the jq command. Likewise, you can use the grep command to limit the command
output.
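For example, where container-id is the ID from the previous step:
sh-4.4# crictl inspect container-id | jq '.info.pid'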
After determining the PID of a running container, you can use the lsns -p PID command to list
the system namespaces of a container.
You can also use the PID of a running container with the nsenter command to enter a specific
namespace of a running container. For example, you can use the nsenter command to execute a
command within a specified namespace on a running container. The following example executes
the ps -ef command within the process namespace of a running container.
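A likely form of this example, where PID is the process ID of the running container:
sh-4.4# nsenter -t PID -p -r ps -ef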
The -t option specifies the PID of the running container as the target PID for the nsenter
command. The -p option directs the nsenter command to enter the process or pid namespace.
The -r option sets the top-level directory of the process namespace as the root directory, thus
enabling commands to execute in the context of the namespace.
You can also use the -a option to execute a command in all of the container's namespaces.
References
Container Runtime Interface (CRI) CLI
https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md
For more information about resource log files, refer to the Viewing Logs for a
Resource chapter in the Red Hat OpenShift Container Platform 4.12 Logging
documentation at
https://access.redhat.com/documentation/en-us/
openshift_container_platform/4.12/html-single/logging/index
Guided Exercise
Outcomes
• Create a pod with a single container, and identify the pod and its container within the
container engine of an OpenShift node.
• Retrieve information inside a container, such as the operating system (OS) release and
running processes.
• Identify the User ID (UID) and supplemental group ID (GID) ranges of a project.
• Inspect a pod with multiple containers, and identify the purpose of each container.
This command ensures that all resources are available for this exercise.
Instructions
1. Log in to the OpenShift cluster and create the pods-containers project. Determine the
UID and GID ranges for pods in the pods-containers project.
1.1. Log in to the OpenShift cluster as the developer user with the oc command.
1.3. Identify the UID and GID ranges for pods in the pods-containers project.
Your UID and GID range values might differ from the previous output.
2. As the developer user, create a pod called ubi9-user from a UBI9 base container
image. The image is available in the registry.ocp4.example.com:8443/ubi9/
ubi container registry. Set the restart policy to Never and start an interactive session.
Configure the pod to execute the whoami and id commands to determine the UIDs,
supplemental groups, and GIDs of the container user in the pod. Delete the pod afterward.
After the ubi9-user pod is deleted, log in as the admin user and then re-create the ubi9-
user pod. Retrieve the UIDs and GIDs of the container user. Compare the values to the
values of the ubi9-user pod that the developer user created.
Afterward, delete the ubi9-user pod.
2.1. Use the oc run command to create the ubi9-user pod. Configure the pod to
execute the whoami and id commands through an interactive bash shell session.
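One possible form of this command; the pod opens a bash prompt, where you can then run the whoami and id commands:
[user@host~]$ oc run ubi9-user -it --restart=Never --image=registry.ocp4.example.com:8443/ubi9/ubi -- /bin/bash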
Any files and directories that the container processes might write to must have read and write permissions by GID=0 and have
the root group as the owner.
Although the user in the container belongs to the root group, a UID value over 1000
means that the user is an unprivileged account. When a regular OpenShift user,
such as the developer user, creates a pod, the containers within the pod run as
unprivileged accounts.
You have access to 71 projects, the list has been suppressed. You can list all
projects with 'oc projects'
2.4. Re-create the ubi9-user pod as the admin user. Configure the pod to execute the
whoami and id commands through an interactive bash shell session. Compare the
values of the UID and GID for the container user to the values of the ubi9-user pod
that the developer user created.
Note
It is safe to ignore pod security warnings for exercises in this course. OpenShift uses
the Security Context Constraints controller to provide safe defaults for pod security.
Notice that the value of the UID is 0, which differs from the UID range value of the
pod-containers project. The user in the container is the privileged account root
user and belongs to the root group. When a cluster administrator creates a pod, the
containers within the pod run as a privileged account by default.
3.2. Create a pod called ubi9-date that executes the date command.
3.3. Wait a few moments for the creation of the pod. Then, retrieve the logs of the ubi9-
date pod.
bash-5.1$ date
Mon Nov 28 15:05:47 UTC 2022
bash-5.1$ exit
exit
Session ended, resume using 'oc attach ubi9-command -c ubi9-command -i -t' command
when the pod is running
5. View the logs for the ubi9-command pod with the oc logs command. Then, connect to
the ubi9-command pod and issue the following command:
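The command to issue, which is shown again in step 5.2:
bash-5.1$ while true; do echo $(date); sleep 2; done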
This command executes the date and sleep commands to generate output to the
console every two seconds. Retrieve the logs of the ubi9-command pod again to confirm that the
logs display the executed command.
5.1. Use the oc logs command to view the logs of the ubi9-command pod.
The pod's command prompt is returned. The oc logs command displays the pod's
current stdout and stderr output in the console. Because you disconnected from
the interactive session, the pod's current stdout is the command prompt, and not
the commands that you executed previously.
5.2. Use the oc attach command to connect to the ubi9-command pod again. In the
shell, execute the while true; do echo $(date); sleep 2; done command
to continuously generate stdout output.
5.3. Open another terminal window and view the logs for the ubi9-command pod with
the oc logs command. Limit the log output to the last 10 entries with the --tail
option. Confirm that the logs display the results of the command that you executed in
the container.
6. Identify the name for the container in the ubi9-command pod. Identify the process ID
(PID) for the container in the ubi9-command pod by using a debug pod for the pod's host
node. Use the crictl command to identify the PID of the container in the ubi9-command
pod. Then, retrieve the PID of the container in the debug pod.
6.1. Identify the container name in the ubi9-command pod with the oc get command.
Specify the JSON format for the command output. Parse the JSON output with the
jq command to retrieve the value of the .status.containerStatuses[].name
object.
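A command of the following form retrieves this value:
[user@host~]$ oc get pod ubi9-command -o json | jq '.status.containerStatuses[].name'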
6.2. Find the host node for the ubi9-command pod. Start a debug pod for the host with
the oc debug command.
The debug pod fails because the developer user does not have the required
permission to debug a host node.
6.3. Log in as the admin user with the redhatocp password. Start a debug pod for the
host with the oc debug command. After connecting to the debug pod, run the
chroot /host command to use host binaries, such as the crictl command-line
tool.
6.4. Use the crictl ps command to retrieve the ubi9-command container ID. Specify
the ubi9-command container with the --name option and use the JSON output
format. Parse the JSON output with the jq -r command to get the RAW JSON
output. Export the container ID as the $CID environment variable.
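A sketch of this step; the jq path assumes the crictl ps JSON output structure with a containers array:
sh-4.4# CID=$(crictl ps --name ubi9-command -o json | jq -r '.containers[0].id')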
Note
When using jq without the -r flag, the container ID is wrapped in double quotes,
which does not work with crictl commands. If the -r flag is not used, then you
can add | tr -d '"' to the end of the command to trim the double quotes.
6.5. The crictl ps command works with container IDs, container names, and pod IDs,
but not with pod names. Execute the crictl pods command to retrieve the pod
ID of the master01-debug pod. Next, use the crictl ps command and the pod
ID to retrieve the master01-debug pod container name. Then, use the crictl
ps command and the container name to retrieve the container ID. Save the debug
container ID as the $DCID environment variable.
Your pod ID and container ID values might differ from the previous output.
6.6. Use the crictl inspect command to find the PID of the ubi9-command
container and the container-00 container. The PID value is in the .info.pid
object in the crictl inspect output. Export the ubi9-command container PID
as the $PID environment variable. Export the container-00 container PID as the
$DPID environment variable.
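Commands of the following form retrieve these values; the PID values shown here match the exported values below and might differ on your system:
sh-4.4# crictl inspect $CID | jq '.info.pid'
365297
sh-4.4# crictl inspect $DCID | jq '.info.pid'
151115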
sh-4.4# PID=365297
sh-4.4# DPID=151115
7. Use the lsns command to list the system namespaces of the ubi9-command container
and the container-00 container. Confirm that the running processes in the containers
are isolated to different system namespaces.
7.1. View the system namespaces of the ubi9-command container with the lsns
command. Specify the PID with the -p option and use the $PID environment
variable. In the resulting table, the NS column contains the namespace values for the
container.
7.2. View the system namespaces of the debug pod container with the lsns command.
Specify the PID with the -p option and use the $DPID environment variable.
Compare the namespace values of the debug pod container versus the ubi9-command
container.
8. Use the host debug pod to retrieve and compare the operating system (OS) and the GNU
C Library (glibc) package version of the ubi9-command container and the host node.
8.1. Retrieve the OS for the host node with the cat /etc/redhat-release command.
8.2. Use the crictl exec command and the $CID container ID variable to retrieve the
OS of the ubi9-command container. Use the -it options to create an interactive
terminal to execute the cat /etc/redhat-release command.
8.3. Use the ldd --version command to retrieve the glibc package version of the
host node.
8.4. Use the crictl exec command and the $CID container ID variable to retrieve the
glibc package version of the ubi9-command container. Use the -it options to
create an interactive terminal to execute the ldd --version command.
The ubi9-command container has a different version of the glibc package from its
host.
9. Use the crictl pods command to view details about the pod in the openshift-
dns-operator namespace. Next, use the crictl ps command to retrieve the list of
containers in the pod. Then, use the crictl inspect command to find the PID of a
container in the pod. Finally, use the lsns and crictl exec commands to view the
running processes and their namespaces in the container.
9.2. Use the crictl ps command and the pod ID to view a list of containers in the dns-
operator pod.
9.3. Use the crictl inspect command and the container ID to retrieve the PID of the
dns-operator container.
9.4. Use the lsns and crictl exec commands to view the running processes and their
namespaces in the dns-operator container.
9.5. Use the crictl inspect command and container ID to retrieve the PID of the
kube-rbac-proxy container.
Your pod ID, container ID, and PID values might differ from the previous output.
9.6. Use the lsns and crictl exec commands to view the running processes and their
namespaces in the kube-rbac-proxy container.
10.1. Exit the master01-debug pod. You must issue the exit command to end the host
binary access. Execute the exit command again to exit and remove the master01-
debug pod.
sh-4.4# exit
exit
sh-4.4# exit
exit
10.2. Return to the terminal window that is connected to the ubi9-command pod. Press
Ctrl+C and then execute the exit command. Confirm that the pod is still running.
...output omitted...
^C
bash-5.1$ exit
exit
Session ended, resume using 'oc attach ubi9-command -c ubi9-command -i -t' command
when the pod is running
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• Find containerized applications in container registries and get information about the runtime
parameters of supported and community container images.
A container image contains a packaged version of your application, with all the necessary
dependencies for the application to run. Images can exist without containers. However, containers
depend on images, because containers use container images to build a runtime environment to
execute applications.
Containers can be split into two similar but distinct concepts: container images and container
instances. A container image contains immutable data that defines an application and its libraries.
You can use container images to create container instances, which are running processes that are
isolated by a set of kernel namespaces.
You can use each container image many times to create many distinct container instances. These
replicas can be split across multiple hosts. The application within a container is independent of the
host environment.
Because the Red Hat Ecosystem Catalog also lists software products other than container
images, navigate to https://catalog.redhat.com/software/containers/explore to search
specifically for container images.
The details page of a container image gives relevant information, such as technical data, the
installed packages within the image, or a security scan. You can navigate through these options by
using the tabs on the website. You can also change the image version by selecting a specific tag.
The Red Hat internal security team vets all images in the container catalog. Red Hat rebuilds all
components to avoid known security vulnerabilities.
• Trusted source: All container images comprise sources that Red Hat knows and trusts.
• Original dependencies: None of the container packages are tampered with, and include only
known libraries.
• Runtime protection: All applications in container images run as non-root users, to minimize the
exposure surface to malicious or faulty applications.
• Red Hat Enterprise Linux (RHEL) compatible: Container images are compatible with all RHEL
platforms, from bare metal to cloud.
• Red Hat support: Red Hat commercially supports the complete stack.
Note
You must log in to the registry.redhat.io registry with a customer portal
account or a Red Hat Developer account to use the stored container images in the
registry.
Quay.io
Although the Red Hat Registry stores only images from Red Hat and certified providers, you
can store your own images with Quay.io, another public image registry that Red Hat sponsors.
Although storing public images in Quay is free of charge, some options are available only for
paying customers. Quay also offers an on-premise version of the product, which you can use to set
up an image registry in your own servers.
Quay.io introduces features such as server-side image building, fine-grained access controls, and
automatic scanning of images for known vulnerabilities.
Quay.io offers live images that creators regularly update. Quay.io users can create their
namespaces, with fine-grained access control, and publish their created images to that
namespace. Container Catalog users rarely or never push new images, but consume trusted
images from the Red Hat team.
Private Registries
Image creators or maintainers might want to make their images publicly available. However, other
image creators might prefer to keep their images private. In some cases, private images are
preferred. Private registries give image creators control over
image placement, distribution, and usage. Private images are more secure than images in public
registries.
Public Registries
Other public registries, such as Docker Hub and Amazon ECR, are also available for storing,
sharing, and consuming container images. These registries can include official images that the
registry owners or the registry community users create and maintain. For example, Docker Hub
hosts a Docker Official Image of a WordPress container image. Although the docker.io/
library/wordpress container image is a Docker Official Image, the container image is not
supported by WordPress, Docker, or Red Hat. Instead, the Docker Community, a global group of
Docker Hub users, supports and maintains the container image. Support for this container image
depends on the availability and skills of the Docker Community users.
Consuming container images from public registries brings risks. For example, a container image
might include malicious code or vulnerabilities, which can compromise the host system that
executes the container image. A host system can also be compromised by public container
images, because the images are often configured with the privileged root user. Additionally, the
software in a container image might not be correctly licensed, or might violate licensing terms.
Before you use a container image from a public registry, review and verify the container image.
Also ensure that you have the correct permissions to use the software in the container image.
Registry
It is a content server, such as registry.access.redhat.com, that is used to store and
share container images. A registry consists of one or more repositories that contain tagged
container images.
Name
It identifies the container image repository; it is a string that is composed of letters, numbers,
and some special characters. This component refers to the name of the directory, or the
container repository, within the container registry where the container image is.
For example, consider the fully qualified domain name (FQDN) of the
registry.access.redhat.com/ubi9/httpd-24:1-233 container image. The container
image is in the ubi9/httpd-24 repository in the registry.access.redhat.com
container registry.
ID/Hash
It is the SHA (Secure Hash Algorithm) code to pull or verify an image. The SHA image ID cannot change, and always references the same container image content. The ID/hash is the true, unique identifier of an image. For example, the sha256:4186a1ead13fc30796f951694c494e7630b82c320b81e20c020b3b07c888985b image ID always refers to the registry.access.redhat.com/ubi9/httpd-24:1-233 container image.
Tag
It is a label for a container image in a repository, to distinguish it from other images, typically for version control. The tag comes after the image repository name and is delimited by a colon (:). When an image tag is omitted, the floating tag, latest, is used as the default tag. A floating tag is an alias to another tag. In contrast, a fixed tag points to a specific container build. For the registry.access.redhat.com/ubi9/httpd-24:1-233.1669634588 container image, 1-233.1669634588 is the fixed tag for the image, and at the time of writing, corresponds to the floating latest tag.
Layers
Container images are created from instructions. Each instruction adds a layer to the container image. Each layer consists of the differences between it and the preceding layer. The layers are then stacked to create a read-only container image.
Metadata
Metadata includes the instructions and documentation for a container image.
ENV
Defines the available environment variables in the container. A container image might include
multiple ENV instructions. Any container can recognize additional environment variables that
are not listed in its metadata.
ARG
It defines build-time variables, typically to make a customizable container build. Developers commonly set ENV instructions from ARG values, which is useful for preserving build-time variables at run time.
USER
Defines the active user in the container. Later instructions run as this user. It is a good practice to define a user other than root for security purposes. For regular cluster users, OpenShift does not honor the user in a container image. Only cluster administrators can run containers (pods) with their chosen user IDs (UIDs) and group IDs (GIDs).
ENTRYPOINT
It defines the executable to run when the container is started.
CMD
It defines the command to execute when the container is started. This command is passed
to the executable that the ENTRYPOINT instruction defines. Base images define a default
ENTRYPOINT executable, which is usually a shell executable, such as Bash.
WORKDIR
It sets the current working directory within the container. Later instructions execute within this
directory.
Some metadata is used only for documentation purposes, and does not affect the state of a running container. You can also override metadata values during container creation. The following metadata is for information only, and does not affect the state of the running container:
EXPOSE
It indicates the network port that the application binds to within the container. This metadata
does not automatically bind the port on the host, and is used only for documentation
purposes.
VOLUME
It defines where to store data outside the container. The value shows the path where your
container runtime mounts the directory inside the container. More than one path can be
defined to create multiple volumes.
LABEL
Adds a key-value pair to the metadata of the image for organization and image selection.
Container engines are not required to honor metadata in a container image, such as USER or
EXPOSE. A container engine can also recognize additional environment variables that are not listed
in the container image metadata.
182 DO180-OCP4.12-en-1-20230406
Chapter 3 | Run Applications as Containers and Pods
Base Images
A base image is the image that your resulting container image is built on. Your chosen base image
determines the Linux distribution, and any of the following components:
• Package manager
• Init system
• File system layout
• Preinstalled dependencies and runtimes
The base image can also influence factors such as image size, vendor support, and processor
compatibility.
Red Hat provides enterprise-grade container images that are engineered to be the base operating
system layer for your containerized applications. These container images are intended as a
common starting point for containers, and are known as universal base images (UBI). Red Hat
UBI container images are Open Container Initiative (OCI) compliant images that contain portions
of Red Hat Enterprise Linux (RHEL). UBI container images include a subset of RHEL content.
They provide a set of prebuilt runtime languages, such as Python and Node.js, and associated
DNF repositories that you can use to add application dependencies. UBI-based images can be
distributed without cost or restriction. They can be deployed to both Red Hat and non-Red Hat
platforms, and be pushed to your chosen container registry.
A Red Hat subscription is not required to use or distribute UBI-based images. However, Red Hat
provides full support only for containers that are built on UBI if the containers are deployed to a
Red Hat platform, such as a Red Hat OpenShift Container Platform (RHOCP) cluster or RHEL.
Red Hat provides four UBI variants: standard, init, minimal, and micro. All UBI variants and
UBI-based images use Red Hat Enterprise Linux (RHEL) at their core and are available from the
Red Hat Container Catalog. The main differences are as follows:
Standard
This image is the primary UBI, which includes DNF, systemd, and utilities such as gzip and
tar.
Init
This image simplifies running multiple applications within a single container by managing them
with systemd.
Minimal
This image is smaller than the init image and omits some nice-to-have features. This image uses the microdnf minimal package manager instead of the full-sized version of DNF.
Micro
This image is the smallest available UBI, and includes only the minimum packages. For
example, this image does not include a package manager.
Skopeo
Skopeo is another tool to inspect and manage remote container images. With Skopeo, you can
copy and sync container images from different container registries and repositories. You can also
copy an image from a remote repository and save it to a local disk. If you have the appropriate
repository permissions, then you can also delete an image from a container registry. You also
can use Skopeo to inspect the configuration and contents of a container image, and to list the
available tags for a container image. Unlike other container image tools, Skopeo can execute
without a privileged account, such as root. Skopeo does not require a running daemon to execute
various operations.
Skopeo is executed with the skopeo command-line utility, which you can install with various
package managers, such as DNF, Brew, and APT. The skopeo utility might already be installed on
some Linux-based distributions. You can install the skopeo utility on Fedora, CentOS Stream 8
and later, and Red Hat Enterprise Linux 8 and later systems by using the DNF package manager.
The skopeo utility is currently not available as a packaged binary for Windows-based systems.
However, the skopeo utility is available as a container image from the quay.io/skopeo/
stable container repository. For more information about the Skopeo container image, refer
to the skopeoimage overview guide in the Skopeo repository (https://github.com/
containers/skopeo/blob/main/contrib/skopeoimage/README.md).
You can also build skopeo from source code in a container, or build it locally without using a
container. Refer to the installation guide in the Skopeo repository (https://github.com/containers/
skopeo/blob/main/install.md#container-images) for more information about installing or building
Skopeo from source code.
The skopeo utility provides commands to help you to manage and inspect container images and
container image registries. For container registries that require authentication, you must first log in
to the registry before you can execute additional skopeo commands.
Note
OpenShift clusters are typically configured with registry credentials. When a pod
is created from a container image in a remote repository, OpenShift authenticates
to the container registry with the configured registry credentials, and then pulls, or
copies, the image. Because OpenShift automatically uses the registry credentials,
you typically do not need to manually authenticate to a container registry when you
create a pod. By contrast, the oc image command and the skopeo utility require
you first to log in to a container registry.
After you log in to a container registry (if required), you can execute additional skopeo commands
against container images in a repository. When you execute a skopeo command, you must specify
the transport and the repository name. A transport is the mechanism to transfer or move container
images between locations. Two common transports are docker and dir. The docker transport is
used for container registries, and the dir transport is used for local directories.
The oc image command and other tools default to the docker transport, and so you do not
need to specify the transport when executing commands. However, the skopeo utility does not
define a default transport; you must specify the transport with the container image name. Most
skopeo commands use the skopeo command [command options] transport://IMAGE-
NAME format. For example, the following skopeo list-tags command lists all available tags in
a registry.access.redhat.com/ubi9/httpd-24 container repository by using the docker
transport:
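A representative invocation follows; the exact tag list depends on the registry content at the time you run the command.
[user@host ~]$ skopeo list-tags docker://registry.access.redhat.com/ubi9/httpd-24
{
"Repository": "registry.access.redhat.com/ubi9/httpd-24",
"Tags": [
"1-233",
"latest",
...output omitted...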
The skopeo utility includes other useful commands for container image management.
skopeo inspect
View low-level information for an image name, such as environment variables and available
tags. Use the skopeo inspect [command options] transport://IMAGE-NAME
command format. You can include the --config flag to view the configuration, metadata,
and history of a container repository. The following example retrieves the configuration
information for the registry.access.redhat.com/ubi9/httpd-24 container
repository:
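A representative invocation and abbreviated output follow:
[user@host ~]$ skopeo inspect --config docker://registry.access.redhat.com/ubi9/httpd-24
{
...output omitted...
"Env": [
...output omitted...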
"HTTPD_MAIN_CONF_D_PATH=/etc/httpd/conf.d",
"HTTPD_TLS_CERT_PATH=/etc/httpd/tls",
"HTTPD_VAR_RUN=/var/run/httpd",
"HTTPD_DATA_PATH=/var/www",
"HTTPD_DATA_ORIG_PATH=/var/www",
"HTTPD_LOG_PATH=/var/log/httpd"
],
"Entrypoint": [
"container-entrypoint"
],
"Cmd": [
"/usr/bin/run-httpd"
],
"WorkingDir": "/opt/app-root/src",
...output omitted...
}
...output omitted...
skopeo copy
Copy an image from one location or repository to another. Use the skopeo copy
transport://SOURCE-IMAGE transport://DESTINATION-IMAGE format. For
example, the following command copies the quay.io/skopeo/stable:latest container
image to the skopeo repository in the registry.example.com container registry:
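The command might take the following form (output omitted):
[user@host ~]$ skopeo copy docker://quay.io/skopeo/stable:latest \
docker://registry.example.com/skopeo:latest
...output omitted...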
skopeo delete
Delete a container image from a repository. You must use the skopeo delete [command
options] transport://IMAGE-NAME format. The following command deletes the
skopeo:latest image from the registry.example.com container registry:
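The command might take the following form:
[user@host ~]$ skopeo delete docker://registry.example.com/skopeo:latest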
skopeo sync
Synchronize one or more images from one location to another. Use this command to copy
all container images from a source to a destination. The command uses the skopeo sync
[command options] --src transport --dest transport SOURCE DESTINATION
format. The following command synchronizes the registry.access.redhat.com/ubi8/
httpd-24 container repository to the registry.example.com/httpd-24 container
repository:
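One possible form of the command follows; refer to the skopeo-sync(1) man page for how the destination repository path is derived from the source.
[user@host ~]$ skopeo sync --src docker --dest docker \
registry.access.redhat.com/ubi8/httpd-24 registry.example.com
...output omitted...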
Registry Credentials
Some registries require users to authenticate. For example, Red Hat containers that are based on
RHEL typically require authenticated access:
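For example, inspecting a RHEL 8 based image on registry.redhat.io without first logging in fails with an authorization error (the image name is shown for illustration):
[user@host ~]$ skopeo inspect docker://registry.redhat.io/rhel8/httpd-24
...output omitted...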
You might choose a different image that does not require authentication, such as the UBI 8 image:
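For example, the ubi8/httpd-24 image on registry.access.redhat.com can be inspected without logging in (image name shown for illustration):
[user@host ~]$ skopeo inspect docker://registry.access.redhat.com/ubi8/httpd-24
...output omitted...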
Alternatively, you must execute the skopeo login command for the registry before you can
access the RHEL 8 image.
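For example, with a placeholder username:
[user@host ~]$ skopeo login registry.redhat.io
Username: myuser
Password:
Login Succeeded!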
Note
For security reasons, the skopeo login command does not show your password
in the interactive session. Although you do not see what you are typing, Skopeo
registers every key stroke. After typing your full password in the interactive session,
press Enter to start the login.
The oc image info command inspects and retrieves information about a container image.
You can use the oc image info command to identify the ID/hash SHA and to list the image
layers of a container image. You can also review container image metadata, such as environment
variables, network ports, and commands. If a container image repository provides a container
image in multiple architectures, such as amd64 or arm64, then you must include the --filter-by-os option. For example, you can execute the following command to retrieve information about
the registry.access.redhat.com/ubi9/httpd-24:1-233 container image that is based
on the amd64 architecture:
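A representative invocation follows (output omitted):
[user@host ~]$ oc image info --filter-by-os linux/amd64 \
registry.access.redhat.com/ubi9/httpd-24:1-233
...output omitted...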
oc image append
Use this command to add layers to container images, and then push the container image to a
registry.
oc image extract
You can use this command to extract or copy files from a container image to a local disk. Use
this command to access the contents of a container image without first running the image as a
container. A running container engine is not required.
oc image mirror
Copy or mirror container images from one container registry or repository to another. For
example, you can use this command to mirror container images between public and private
registries. You can also use this command to copy a container image from a registry to a disk.
The command mirrors the HTTP structure of a container registry to a directory on a disk. The
directory on the disk can then be served as a container registry.
Traditionally, when an attacker gains access to the container file system by using an exploit, the
root user inside the container corresponds to the root user outside the container. If an attacker
escapes the container isolation, then they have elevated privileges on the host system, which
potentially causes more damage.
Containers that do not run as the root user have limitations that might prove unsuitable for use in
your application, such as the following limitations:
Non-trivial Containerization
Some applications might require the root user. Depending on the application architecture,
some applications might not be suitable for non-root containers, or might require a deeper
understanding to containerize.
For example, applications such as HTTPd and Nginx start a bootstrap process and then create
a process with a non-privileged user, which interacts with external users. Such applications are
non-trivial to containerize for rootless use.
Red Hat provides containerized versions of HTTPd and Nginx that do not require root
privileges for production usage. You can find the containers in the Red Hat container registry
(https://catalog.redhat.com/software/containers/explore).
Similarly, non-root containers cannot use the ping utility by default, because it requires
elevated privileges to establish raw sockets.
References
Skopeo GitHub Repository
https://github.com/containers/skopeo
skopeo(1) man page
Guided Exercise
Outcomes
• Locate and run container images from a container registry.
This command ensures that all resources are available for this exercise.
Instructions
1. Log in to the OpenShift cluster and create the pods-images project.
1.1. Log in to the OpenShift cluster as the developer user with the oc command.
2.1. Use the skopeo login command to log in as the developer user with the
developer password.
DO180-OCP4.12-en-1-20230406 191
Chapter 3 | Run Applications as Containers and Pods
2.2. The classroom registry contains a copy and specific tags of the docker.io/
library/nginx container repository. Use the skopeo list-tags command
to retrieve a list of available tags for the registry.ocp4.example.com:8443/
redhattraining/docker-nginx container repository.
3.2. After a few moments, verify the status of the docker-nginx pod.
3.3. Investigate the pod failure. Retrieve the logs of the docker-nginx pod to identify a
possible cause of the pod failure.
192 DO180-OCP4.12-en-1-20230406
Chapter 3 | Run Applications as Containers and Pods
The pod failed to start because of permission issues for the nginx directories.
3.5. From the debug pod, verify the permissions of the /etc/nginx and /var/cache/
nginx directories.
Only the root user has permissions on the nginx directories. The pod must therefore run as the privileged root user to work.
3.6. Retrieve the user ID (UID) of the docker-nginx user to determine whether the user
is a privileged or unprivileged account. Then, exit the debug pod.
$ whoami
1000820000
$ exit
DO180-OCP4.12-en-1-20230406 193
Chapter 3 | Run Applications as Containers and Pods
3.7. Confirm that the docker-nginx:1.23 image requires the root privileged account.
Use the skopeo inspect --config command to view the configuration for the
image.
The image configuration does not define USER metadata, which confirms that the
image must run as the root privileged user.
3.8. The docker-nginx:1.23 container image must run as the root privileged user. OpenShift security policies prevent regular cluster users, such as the developer user, from running containers as the root user. Delete the docker-nginx pod.
4. Create a bitnami-mysql pod, which uses a copy of the Bitnami community MySQL
image. The image is available in the registry.ocp4.example.com:8443/
redhattraining/bitnami-mysql container repository.
194 DO180-OCP4.12-en-1-20230406
Chapter 3 | Run Applications as Containers and Pods
The image defines the 1001 UID, which means that the image does not require a
privileged account.
4.3. Create the bitnami-mysql pod with the oc run command. Use the
registry.ocp4.example.com:8443/redhattraining/bitnami-
mysql:8.0.31 container image. Then, wait a few moments and then retrieve the
pod's status with the oc get command.
4.4. Examine the logs of the bitnami-mysql pod to determine the cause of the failure.
DO180-OCP4.12-en-1-20230406 195
Chapter 3 | Run Applications as Containers and Pods
The MYSQL_ROOT_PASSWORD environment variable must be set for the pod to start.
4.5. Delete and then re-create the bitnami-mysql pod. Specify redhat123 as the
value for the MYSQL_ROOT_PASSWORD environment variable. After a few moments,
verify the status of the pod.
4.6. Determine the UID of the container user in the bitnami-mysql pod. Compare this
value to the UID in the container image and to the UID range of the pods-images
project.
Your values for the UID of the container and the UID range of the project might differ
from the previous output.
The container user UID falls within the UID range that is specified for the namespace. Notice that the container user UID does not match the 1001 UID of the container
image. For a container to use the specified UID of a container image, the pod must be
created with a privileged OpenShift user account, such as the admin user.
5. The private classroom registry hosts a copy of a supported MySQL image from Red Hat.
Retrieve the list of available tags for the registry.ocp4.example.com:8443/rhel9/
mysql-80 container repository. Compare the rhel9/mysql-80 container image release
version that is associated with each tag.
5.1. Use the skopeo list-tags command to list the available tags for the rhel9/
mysql-80 container image.
• The latest and 1 tags are floating tags, which are aliases to other tags, such as
the 1-237 tag.
• The 1-228 and 1-224 tags are fixed tags, which point to a build of a container.
5.2. Use the skopeo inspect command to compare the rhel9/mysql-80 container
image release version and SHA IDs that are associated with the identified tags.
Note
To improve readability, the instructions truncate the SHA-256 strings.
DO180-OCP4.12-en-1-20230406 197
Chapter 3 | Run Applications as Containers and Pods
"name": "rhel9/mysql-80",
"release": "237",
...output omitted...
You can also format the output of the skopeo inspect command with a Go
template. Append the template objects with \n to add new lines between the results.
The latest, 1, and 1-237 tags resolve to the same release versions and SHA IDs.
The latest and 1 tags are floating tags for the 1-237 fixed tag.
6. The classroom registry hosts a copy and certain tags of the registry.redhat.io/
rhel9/mysql-80 container repository. Use the oc run command to create a rhel9-
mysql pod from the registry.ocp4.example.com:8443/rhel9/mysql-80:1-228
container image. Verify the status of the pod and then inspect the container logs for any
errors.
6.2. After a few moments, retrieve the pod's status with the oc get command.
198 DO180-OCP4.12-en-1-20230406
Chapter 3 | Run Applications as Containers and Pods
6.3. Retrieve the logs for the rhel9-mysql pod to determine why the pod failed.
The pod failed because the required environment variables were not set for the
container.
7. Delete the rhel9-mysql pod. Create another rhel9-mysql pod and specify the
necessary environment variables. Retrieve the status of the pod and inspect the container
logs to confirm that the new pod is working.
7.1. Delete the rhel9-mysql pod with the oc delete command. Wait for the pod to
delete before continuing to the next step.
Variable Value
MYSQL_USER redhat
MYSQL_PASSWORD redhat123
MYSQL_DATABASE worldx
7.3. After a few moments, retrieve the status of the rhel9-mysql pod with the oc get
command. View the container logs to confirm that the database on the rhel9-
mysql pod is ready to accept connections.
DO180-OCP4.12-en-1-20230406 199
Chapter 3 | Run Applications as Containers and Pods
8. Determine the location of the MySQL database files for the rhel9-mysql pod. Confirm
that the directory contains the worldx database.
8.1. Use the oc image command to inspect the rhel9/mysql-80:1-228 image in the
registry.ocp4.example.com:8443 classroom registry.
The container manifest sets the HOME environment variable for the container user to
the /var/lib/mysql directory.
8.2. Use the oc exec command to list the contents of the /var/lib/mysql directory.
200 DO180-OCP4.12-en-1-20230406
Chapter 3 | Run Applications as Containers and Pods
8.3. Use the oc exec command again to list the contents of the /var/lib/mysql/
data directory.
9. Determine the IP address of the rhel9-mysql pod. Next, create another MySQL pod,
named mysqlclient, to access the rhel9-mysql pod. Confirm that the mysqlclient
pod can view the available databases on the rhel9-mysql pod with the mysqlshow
command.
Note the IP address. Your IP address might differ from the previous output.
9.2. Use the oc run command to create a pod named mysqlclient that uses the
registry.ocp4.example.com:8443/rhel9/mysql-80:1-237 container
image. Set the value of the MYSQL_ROOT_PASSWORD environment variable to
redhat123, and then confirm that the pod is running.
9.3. Use the oc exec command with the -it options to execute the mysqlshow
command on the mysqlclient pod. Connect as the redhat user and specify the
host as the IP address of the rhel9-mysql pod. When prompted, enter redhat123
for the password.
+--------------------+
| information_schema |
| performance_schema |
| worldx |
+--------------------+
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
202 DO180-OCP4.12-en-1-20230406
Chapter 3 | Run Applications as Containers and Pods
Objectives
• Troubleshoot a pod by starting additional processes on its containers, changing their ephemeral
file systems, and opening short-lived network tunnels.
Custom alterations to a running container are lost when the container is replaced, and they work against a reliable, resilient, and well-designed environment. Reserve such alterations for troubleshooting.
Note
When interacting with the cluster containers, take suitable precautions with actively
running components, services, and applications.
Use these tools to validate the functions and environment for a running container:
DO180-OCP4.12-en-1-20230406 203
Chapter 3 | Run Applications as Containers and Pods
Besides supporting the previous kubectl commands, the oc CLI adds the following commands
for inspecting and troubleshooting running containers:
Editing Resources
Troubleshooting and remediation often begin with a phase of inspection and data gathering. When
solving issues, the describe command can provide helpful details about the running resource,
such as the definition of a container and its purpose.
The following example demonstrates use of the oc describe RESOURCE NAME command to
retrieve information about a pod in the openshift-dns namespace:
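In the following sketch, the pod name is hypothetical; list the pods in the namespace first to find a real name.
[user@host ~]$ oc describe pod dns-default-abcde -n openshift-dns
...output omitted...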
Various CLI tools can apply a change that you determine is needed to a running container. The
edit command opens the specified resource in the default editor for your environment. You specify this editor by setting either the KUBE_EDITOR or the EDITOR environment variable; otherwise, the command defaults to the vi editor on Linux or to the Notepad application on Windows.
The following example demonstrates use of the oc edit RESOURCE NAME command to edit a
running container:
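For example, with a hypothetical pod name:
[user@host ~]$ oc edit pod my-pod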
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Pod
metadata:
annotations:
...output omitted...
You can also use the patch command to update fields of a resource.
The following example uses the patch command to update the container image that a pod uses:
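A sketch with hypothetical pod, container, and image names follows; the patch uses the default strategic merge format.
[user@host ~]$ oc patch pod my-pod \
-p '{"spec":{"containers":[{"name":"my-container","image":"registry.example.com/my-image:v2"}]}}'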
Note
For more information about patching resources and the different merge methods,
refer to Update API Objects in Place Using kubectl patch [https://kubernetes.io/
docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/].
Note
To use the cp command with the kubectl CLI or the oc CLI, the tar binary must
be present in the container. If the binary is absent, then an error message appears
and the operation fails.
The following example demonstrates copying a file from a running container to a local directory by
using the oc cp SOURCE DEST command:
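For example, with hypothetical pod and path names:
[user@host ~]$ oc cp my-pod:/var/log/httpd/error_log /tmp/error_log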
The following example demonstrates use of the oc cp SOURCE DEST command to copy a file
from a local directory to a directory in a running container:
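For example, again with hypothetical names:
[user@host ~]$ oc cp ./config.ini my-pod:/tmp/config.ini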
Note
Targeting a file path within a pod for either the SOURCE or DEST argument uses
the pod_name:path format, and can include the -c container_name option to
specify a container within the pod. If you omit the -c container_name option,
then the command targets the first container in the pod.
Additionally, when using the oc CLI, file and directory synchronization is available by using the oc
rsync command.
The following example demonstrates use of the oc rsync SOURCE_NAME DEST command to
synchronize files from a running container to a local directory.
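For example, with hypothetical pod and directory names:
[user@host ~]$ oc rsync my-pod:/var/log/ ./pod-logs/
...output omitted...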
The oc rsync command uses the rsync client on your local system to copy changed files to and
from a pod container. The rsync binary must be available locally and within the container for this
approach. If the rsync binary is not found, then a tar archive is created on the local system and
is sent to the container. The container then uses the tar utility to extract files from the archive.
Without the rsync and tar binaries, an error message occurs and the oc rsync command fails.
Note
For Linux-based systems, you can install the rsync client and the tar utility on
a local system by using a package manager, such as DNF. For Windows-based
systems, you can install the cwRsync client. For more information about the
cwRsync client, refer to https://www.itefix.net/cwrsync.
When troubleshooting an application that typically runs without a need to connect locally, you can
use the port-forwarding function to expose connectivity to the pod for investigation. With this
function, an administrator can connect on the new port and inspect the problematic application.
After you remediate the issue, the application can be redeployed without the port-forward
connection.
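For example, the following sketch forwards local port 8080 to port 8080 on a hypothetical pod; the command keeps running until you press Ctrl+C.
[user@host ~]$ oc port-forward my-pod 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080
...output omitted...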
The following example demonstrates use of the oc rsh POD_NAME command to connect to a
container via a shell:
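For example, with a hypothetical pod name; the prompt that appears depends on the shell in the container image.
[user@host ~]$ oc rsh my-pod
sh-5.1$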
The oc rsh command does not accept the -n namespace option. Therefore, you must change to the namespace of the pod before you execute the oc rsh command. If you need to connect to a specific container in a pod, then use the -c container_name option to specify the container name. If you omit this option, then the command connects to the first container in the pod.
The following examples demonstrate the use of the oc exec command to execute the ls
command in a container to list the contents of the container's root directory:
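For example, with a hypothetical pod name:
[user@host ~]$ kubectl exec my-pod -- ls /
...output omitted...
[user@host ~]$ oc exec my-pod -- ls /
...output omitted...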
Note
It is common to add the -it flags to the kubectl exec or oc exec commands.
These flags instruct the command to send STDIN to the container and
STDOUT/STDERR back to the terminal. The format of the command output is
impacted by the inclusion of the -it flags.
DO180-OCP4.12-en-1-20230406 207
Chapter 3 | Run Applications as Containers and Pods
For the following commands, use the -c container_name option to specify a container in the pod. If you omit this option, then the command targets the first container in the pod.
The following examples demonstrate use of the oc logs POD_NAME command to retrieve the
logs for a pod:
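For example, with a hypothetical pod name:
[user@host ~]$ oc logs my-pod
...output omitted...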
In Kubernetes, an event resource is a report of an event somewhere in the cluster. You can use
the kubectl get events and oc get events commands to view pod events in a namespace:
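For example (output omitted):
[user@host ~]$ oc get events
...output omitted...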
Before you add tools to a container image, consider how the tools affect your container image.
• Additional tools increase the size of the image, which might impact container performance.
• Tools might require additional update packages and licensing terms, which can impact the ease
of updating and distributing the container image.
208 DO180-OCP4.12-en-1-20230406
Chapter 3 | Run Applications as Containers and Pods
Administrators can alternatively author and deploy a container within the cluster for investigation
and remediation. By creating a container image that includes the cluster troubleshooting tools,
you have a reliable environment to perform these tasks from any computer with access to the
cluster. This approach ensures that an administrator always has access to the tools for reliable
troubleshooting and remediation of issues.
Additionally, administrators should plan to author a container image that provides the most
valuable troubleshooting tools for containerized applications. In this way, you deploy this "toolbox"
container to supplement the forensic process and to provide an environment with the required
commands and tools for troubleshooting problematic containers. For example, the "toolbox"
container can test how resources operate inside a cluster, such as to confirm whether a pod
can connect to resources outside the cluster. Regular cluster users can also create a "toolbox"
container to help with application troubleshooting. For example, a regular user could run a pod with
a MySQL client to connect to another pod that runs a MySQL server.
Although this approach falls outside the focus of this course, because it is more application-level
remediation than container-level troubleshooting, it is important to realize that containers have
such capacity.
DO180-OCP4.12-en-1-20230406 209
Chapter 3 | Run Applications as Containers and Pods
References
Kubernetes Documentation - kubectl edit
https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/
#kubectl-edit
For more information about troubleshooting pod issues, refer to the Investigating
Pod Issues section in the Troubleshooting chapter in the Red Hat OpenShift
Container Platform 4.12 Support documentation at
https://access.redhat.com/documentation/en-us/
openshift_container_platform/4.12/html-single/support/index#investigating-pod-
issues
For more information about how to copy files to and from pods, refer to the Copying
Files to or from an OpenShift Container Platform Container section in the Working
with Containers chapter in the Red Hat OpenShift Container Platform 4.12 Nodes
documentation at
https://access.redhat.com/documentation/en-us/
openshift_container_platform/4.12/html-single/nodes/index#nodes-containers-
copying-files
For more information about port forwarding, refer to the Using Port Forwarding to
Access Applications in a Container section in the Working with Containers chapter in
the Red Hat OpenShift Container Platform 4.12 Nodes documentation at
https://access.redhat.com/documentation/en-us/
openshift_container_platform/4.12/html-single/nodes/index#nodes-containers-
port-forwarding
210 DO180-OCP4.12-en-1-20230406
Chapter 3 | Run Applications as Containers and Pods
Guided Exercise
Outcomes
• Investigate errors with creating a pod.
This command ensures that the cluster and all exercise resources are available.
Instructions
1. Log in to the OpenShift cluster and create the pods-troubleshooting project.
1.1. Log in to the OpenShift cluster as the developer user with the oc command.
2. Create a MySQL pod called mysql-server with the oc run command. Use the
registry.ocp4.example.com:8443/rhel9/mysql-80:1228 container image for
the pod. Specify the environment variables with the following values:
Variable Value
MYSQL_USER redhat
MYSQL_PASSWORD redhat123
MYSQL_DATABASE world
Then, view the status of the pod with the oc get command.
2.1. Create the mysql-server pod with the oc run command. Specify the environment
values with the --env option.
2.2. After a few moments, retrieve the status of the pod. Execute the oc get pods
command a few times to view the status updates for the pod.
The logs state that the pod cannot pull the container image.
3.2. Retrieve the events log with the oc get events command.
212 DO180-OCP4.12-en-1-20230406
Chapter 3 | Run Applications as Containers and Pods
The output states that the image pull failed because the 1228 manifest is unknown.
This failure could mean that the manifest, or image tag, does not exist in the
repository.
The 1228 manifest, or tag, is not available in the repository, which means that the
registry.ocp4.example.com:8443/rhel9/mysql-80:1228 image does not
exist. However, the 1-228 tag does exist.
4. The pod failed to start because of a typing error in the image tag. Update the
pod's configuration to use the registry.ocp4.example.com:8443/rhel9/
mysql-80:1-228 container image. Confirm that the pod is re-created after editing the
resource.
4.1. Edit the pod's configuration with the oc edit command. Locate
the .spec.containers.image object. Update the value to the
registry.ocp4.example.com:8443/rhel9/mysql-80:1-228 container
image and save the change.
...output omitted...
apiVersion: v1
kind: Pod
metadata:
...output omitted...
spec:
containers:
- image: registry.ocp4.example.com:8443/rhel9/mysql-80:1-228
...output omitted...
4.2. Verify the status of the mysql-server pod with the oc get command. The pod's
status might take a few moments to update after the resource edit. Repeat the oc
get command until the pod's status changes.
The mysql-server pod successfully pulled the image and created the container.
The pod now shows a Running status.
5.1. Use the oc cp command to copy the world_x.sql file in the ~/DO180/labs/
pods-troubleshooting directory to the /tmp/ directory on the mysql-server
pod.
5.2. Confirm that the world_x.sql file is accessible within the mysql-server pod with
the oc exec command.
5.3. Connect to the mysql-server pod with the oc rsh command. Then, log in to
MySQL as the redhat user with the redhat123 password.
214 DO180-OCP4.12-en-1-20230406
Chapter 3 | Run Applications as Containers and Pods
5.4. From the MySQL prompt, select the world database. Source the world_x.sql
script inside the pod to initialize and populate the world database.
5.5. Execute the SHOW TABLES; command to confirm that the database now contains
tables. Then, exit the database and the pod.
mysql> exit;
Bye
sh-5.1$ exit
exit
[student@workstation ~]$
6. Configure port forwarding and then use the MySQL client on the workstation machine
to connect to the world database on the mysql-server pod. Confirm that you can
access data within the world database from the workstation machine.
6.1. From the workstation machine, use the oc port-forward command to forward
the 3306 local port to the 3306 port on the mysql-server pod.
6.2. Open another terminal window on the workstation machine. Connect to the
world database with the local MySQL client on the workstation machine. Log in
as the redhat user with the redhat123 password. Specify the 127.0.0.1 localhost IP address as the host, and use 3306 as the port.
DO180-OCP4.12-en-1-20230406 215
Chapter 3 | Run Applications as Containers and Pods
6.3. Select the world database and execute the SHOW TABLES; command.
6.4. Confirm that you can retrieve data from the country table. Execute the SELECT
COUNT(*) FROM country command to retrieve the number of countries within the
country table.
mysql> exit;
Bye
[student@workstation ~]$
6.6. Return to the terminal that is executing the oc port-forward command. Press
Ctrl+C to end the connection.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
216 DO180-OCP4.12-en-1-20230406
Chapter 3 | Run Applications as Containers and Pods
Lab
Outcomes
• Deploy a pod from a container image.
Instructions
The API URL of your OpenShift cluster is https://api.ocp4.example.com:6443, and the oc
command is already installed on your workstation machine.
Log in to the OpenShift cluster as the developer user with the developer password.
218 DO180-OCP4.12-en-1-20230406
Chapter 3 | Run Applications as Containers and Pods
Note
The terminal window that you connect to the webphp pod must remain open for the
remainder of the lab. This connection is necessary for the final lab step and for the
lab grade command.
6. An issue occurs with the PHP application that is running on the webphp pod. To debug the
issue, the application developer requires diagnostic and configuration information for the
PHP instance that is running on the webphp pod.
The ~/DO180/labs/pods-review directory contains a phpinfo.php file to generate
debugging information for a PHP instance. Copy the phpinfo.php file to the /var/www/
html/ directory on the webphp pod.
Then, confirm that the PHP debugging information is displayed when accessing the
127.0.0.1:8080/phpinfo.php from a web browser.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
DO180-OCP4.12-en-1-20230406 219
Chapter 3 | Run Applications as Containers and Pods
Solution
Outcomes
• Deploy a pod from a container image.
Instructions
The API URL of your OpenShift cluster is https://api.ocp4.example.com:6443, and the oc
command is already installed on your workstation machine.
Log in to the OpenShift cluster as the developer user with the developer password.
220 DO180-OCP4.12-en-1-20230406
Chapter 3 | Run Applications as Containers and Pods
2.2. After a few moments, observe the status of the webphp pod.
DO180-OCP4.12-en-1-20230406 221
Chapter 3 | Run Applications as Containers and Pods
The logs indicate permission issues with the /run directory within the pod.
3.2. List the contents of the /run directory to retrieve the permissions, owners, and groups.
The /run/httpd directory grants read, write, and execute permissions to the root
user, but does not provide permissions for the root group.
3.3. Retrieve the UID and GID of the user in the container. Determine whether the user is a
privileged user and belongs to the root group.
sh-4.4$ id
uid=1000680000(1000680000) gid=0(root) groups=0(root),1000680000
Your UID and GID values might differ from the previous output.
The user is an unprivileged, non-root user and belongs to the root group, which
does not have access to the /run directory. Therefore, the user in the container cannot
access the files and directories that the container processes use, which is required for
arbitrarily assigned UIDs.
222 DO180-OCP4.12-en-1-20230406
Chapter 3 | Run Applications as Containers and Pods
sh-4.4$ exit
exit
4.2. Update the .spec.containers.image object value to use the :v2 image tag.
...output omitted...
spec:
containers:
- image: registry.ocp4.example.com:8443/redhattraining/webphp:v2
imagePullPolicy: IfNotPresent
...output omitted...
4.4. Retrieve the UID and GID of the user in the container to confirm that the user is an
unprivileged user.
Your UID and GID values might differ from the previous output.
4.5. Confirm that the permissions for the /run/httpd directory are correct.
DO180-OCP4.12-en-1-20230406 223
Chapter 3 | Run Applications as Containers and Pods
5. Connect port 8080 on the workstation machine to port 8080 on the webphp pod. In a
new terminal window, retrieve the content of the pod's 127.0.0.1:8080/index.php web
page to confirm that the pod is operational.
Note
The terminal window that you connect to the webphp pod must remain open for the
remainder of the lab. This connection is necessary for the final lab step and for the
lab grade command.
5.2. Open another terminal window and then retrieve the 127.0.0.1:8080/index.php
web page on the webphp pod.
6. An issue occurs with the PHP application that is running on the webphp pod. To debug the
issue, the application developer requires diagnostic and configuration information for the
PHP instance that is running on the webphp pod.
The ~/DO180/labs/pods-review directory contains a phpinfo.php file to generate
debugging information for a PHP instance. Copy the phpinfo.php file to the /var/www/
html/ directory on the webphp pod.
Then, confirm that the PHP debugging information is displayed when accessing the
127.0.0.1:8080/phpinfo.php from a web browser.
6.2. Open a web browser and access the 127.0.0.1:8080/phpinfo.php web page.
Confirm that PHP debugging information is displayed.
224 DO180-OCP4.12-en-1-20230406
Chapter 3 | Run Applications as Containers and Pods
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
DO180-OCP4.12-en-1-20230406 225
Chapter 3 | Run Applications as Containers and Pods
Quiz
As the student user on the workstation machine, use Skopeo to log in to the
registry.ocp4.example.com:8443 classroom container registry as the developer
user with the developer password. Then, use the skopeo and oc image commands to
answer the following questions.
226 DO180-OCP4.12-en-1-20230406
Chapter 3 | Run Applications as Containers and Pods
4. Which two environment variables and their values are specified in the
registry.ocp4.example.com:8443/rhel8/postgresql-13:latest container
image? (Choose two.)
a. NAME=PHP 7.4
b. APP_DATA=/opt/app-root/src/bin
c. HOME=/var/lib/pgsql
d. PHP_SYSCONF_FILE=/etc/
e. PGUSER=postgres
6. Which two container images run as a privileged user when an OpenShift cluster
administrator deploys a pod from the image? (Choose two.)
a. registry.ocp4.example.com:8443/rhel8/postgresql-13:latest
b. registry.ocp4.example.com:8443/ubi8/php-74:latest
c. registry.ocp4.example.com:8443/ubi8/nodejs-16:latest
d. registry.ocp4.example.com:8443/rhel8/mysql-80:latest
e. registry.ocp4.example.com:8443/ubi8/python-39
DO180-OCP4.12-en-1-20230406 227
Chapter 3 | Run Applications as Containers and Pods
Solution
As the student user on the workstation machine, use Skopeo to log in to the
registry.ocp4.example.com:8443 classroom container registry as the developer
user with the developer password. Then, use the skopeo and oc image commands to
answer the following questions.
228 DO180-OCP4.12-en-1-20230406
Chapter 3 | Run Applications as Containers and Pods
4. Which two environment variables and their values are specified in the
registry.ocp4.example.com:8443/rhel8/postgresql-13:latest container
image? (Choose two.)
a. NAME=PHP 7.4
b. APP_DATA=/opt/app-root/src/bin
c. HOME=/var/lib/pgsql
d. PHP_SYSCONF_FILE=/etc/
e. PGUSER=postgres
6. Which two container images run as a privileged user when an OpenShift cluster
administrator deploys a pod from the image? (Choose two.)
a. registry.ocp4.example.com:8443/rhel8/postgresql-13:latest
b. registry.ocp4.example.com:8443/ubi8/php-74:latest
c. registry.ocp4.example.com:8443/ubi8/nodejs-16:latest
d. registry.ocp4.example.com:8443/rhel8/mysql-80:latest
e. registry.ocp4.example.com:8443/ubi8/python-39
DO180-OCP4.12-en-1-20230406 229
Chapter 3 | Run Applications as Containers and Pods
Summary
• A container is an encapsulated process that includes the required runtime dependencies for an
application to run.
• OpenShift uses Kubernetes to manage pods. Pods consist of one or more containers that share
resources, such as selected namespaces and networking, and represent a single application.
• Container images can create container instances, which are executable versions of the image,
and include references to networking, disks, and other runtime necessities.
• Container image registries, such as Quay.io and the Red Hat Container Catalog, are the
preferred way to distribute container images to many users and hosts.
• The oc image command and Skopeo, among other tools, can inspect and manage container
images.
• Containers are immutable and ephemeral. Thus, updating a running container is best reserved
for troubleshooting problematic containers.
230 DO180-OCP4.12-en-1-20230406
Chapter 4
DO180-OCP4.12-en-1-20230406 231
Chapter 4 | Deploy Managed and Networked Applications on Kubernetes
Objectives
• Identify the main resources and settings that Kubernetes uses to manage long-lived
applications and demonstrate how OpenShift simplifies common application deployment
workflows.
Deploying Applications
Microservices and DevOps are growing trends in enterprise software. Containers and Kubernetes
gained popularity alongside those trends, but have become categories of their own. Container-
based infrastructures support most types of traditional and modern applications.
The term application can refer to your software system or to a service within it. Given this
ambiguity, it is clearer to refer directly to resources, services, and other components.
A resource type represents a specific component type, such as a pod. Kubernetes ships with many
default resource types, some of which overlap in function. Red Hat OpenShift Container Platform
(RHOCP) includes the default Kubernetes resource types, and provides other resource types of its
own. To add resource types, you can create or import custom resource definitions (CRDs).
Managing Resources
You can add, view, and edit resources in various formats, including YAML and JSON. Traditionally,
YAML is the most common format.
You can delete resources in batch by using label selectors or by deleting the entire project or
namespace. For example, the following command deletes only deployments with the app=my-app
label.
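A command of the following form accomplishes this:
[user@host ~]$ oc delete deployment -l app=my-app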
Similar to creation, deleting a resource is not immediate, but is instead a request for eventual
deletion.
Note
Commands that are executed without specifying a namespace are executed in the
user's current namespace.
232 DO180-OCP4.12-en-1-20230406
Chapter 4 | Deploy Managed and Networked Applications on Kubernetes
Templates
Similar to projects, templates are an RHOCP addition to Kubernetes. A template is a YAML
manifest that contains parameterized definitions of one or more resources. RHOCP provides
predefined templates in the openshift namespace.
Process a template into a list of resources by using the oc process command, which replaces
values and generates resource definitions. The resulting resource definitions create or update
resources in the cluster by supplying them to the oc apply command.
For example, the following command processes a mysql-template.yaml template file and
generates four resource definitions.
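A representative invocation follows (the generated resource definitions are omitted):
[user@host ~]$ oc process -f mysql-template.yaml -o yaml
...output omitted...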
The --parameters option instead displays the parameters of a template. For example, the
following command lists the parameters of the mysql-template.yaml file.
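For example (parameter values omitted):
[user@host ~]$ oc process -f mysql-template.yaml --parameters
NAME DESCRIPTION GENERATOR VALUE
...output omitted...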
You can also use templates with the new-app command from RHOCP. In the following example,
the new-app command uses the mysql-persistent template to create a MySQL application
and its supporting resources.
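A representative invocation follows; the generated values in the output vary on each run.
[user@host ~]$ oc new-app --template=mysql-persistent
...output omitted...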
Username: userQSL
Password: pyf0yElPvFWYQQou
Database Name: sampledb
Connection URL: mysql://mysql:3306/
...output omitted...
* With parameters:
* Memory Limit=512Mi
* Namespace=openshift
* Database Service Name=mysql
* MySQL Connection Username=userQSL # generated
* MySQL Connection Password=pyf0yElPvFWYQQou # generated
* MySQL root user Password=HHbdurqWO5gAog2m # generated
* MySQL Database Name=sampledb
* Volume Capacity=1Gi
* Version of MySQL Image=8.0-el8
Notice that several resources are created to meet the requirements of the deployment,
including a secret, a service, and a persistent volume claim.
Note
You can specify environment variables to configure when creating your application.
Pod
From the RHOCP documentation, a pod is defined as "the smallest compute unit that can be
defined, deployed, and managed". A pod runs one or more containers that represent a single
application. Containers in the pod share resources, such as networking and storage.
apiVersion: v1
kind: Pod
metadata:
annotations: { ... }
labels:
deployment: docker-registry-1
deploymentconfig: docker-registry
name: registry
namespace: pod-registries
spec:
containers:
- env:
- name: OPENSHIFT_CA_DATA
value: ...
image: openshift/origin-docker-registry:v0.6.2
imagePullPolicy: IfNotPresent
name: registry
ports:
- containerPort: 5000
protocol: TCP
resources: {}
securityContext: { ... }
volumeMounts:
- mountPath: /registry
name: registry-storage
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: default-dockercfg-at06w
restartPolicy: Always
serviceAccount: default
volumes:
- emptyDir: {}
name: registry-storage
status:
conditions: { ... }
Information that describes your application, such as the name, project, attached labels, and
annotations.
Section where the application requirements are specified, such as the container name, the
container image, environment variables, volume mounts, network configuration, and volumes.
Indicates the last condition of the pod, such as the last probe time, the last transition time,
the status setting as true or false, and more.
Deployment Configurations
Deployment configurations define the specification of a pod. They manage pods by creating
replication controllers, which manage the number of replicas of a pod. Deployment configurations
and replication controllers are an RHOCP addition to Kubernetes.
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
name: frontend
spec:
replicas: 1
selector:
name: frontend
template: { ... }
triggers:
- type: ConfigChange
- imageChangeParams:
automatic: true
containerNames:
- helloworld
from:
kind: ImageStreamTag
name: hello-openshift:latest
type: ImageChange
strategy:
type: Rolling
Section to define the metadata, labels, and the container information of the deployment
configuration resource, such as the container name, container image, and ports.
A configuration change trigger results in a new replication controller whenever changes are
detected in the pod template of the deployment configuration.
The ImageChange trigger results in a new replication controller whenever the content of an
image stream tag changes (when a new version of the image is pushed).
An image change trigger causes a new deployment to be created each time a new version of
the backing image is available in the named image stream.
Deployment
Similar to deployment configurations, deployments define the intended state of a replica set.
Replica sets maintain a configurable number of pods that match a specification.
Replica sets are generally similar to and a successor to replication controllers. This difference
in intermediate resources is the primary difference between deployments and deployment
configurations.
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-openshift
spec:
replicas: 1
selector:
matchLabels:
app: hello-openshift
template:
metadata:
labels:
app: hello-openshift
spec:
containers:
- name: hello-openshift
image: openshift/hello-openshift:latest
ports:
- containerPort: 80
Section to define the metadata, labels, and the container information of the deployment
resource.
Port configuration, such as the port number, name of the port, and the protocol.
Projects
RHOCP adds projects to enhance the function of Kubernetes namespaces. A project is a
Kubernetes namespace with additional annotations, and is the primary method for managing
access to resources for regular users. Projects can be created from templates and must use Role
Based Access Control (RBAC) for organization and permission management. Administrators must
grant cluster users access to a project. If a cluster user is allowed to create projects, then the user
automatically has access to their created projects.
Projects provide logical and organizational isolation to separate your application component
resources. Resources in one project can access resources in other projects, but not by default.
apiVersion: project.openshift.io/v1
kind: Project
metadata:
name: test
spec:
finalizers:
- kubernetes
A finalizer is a special metadata key that tells Kubernetes to wait until a specific condition is
met before it fully deletes a resource.
Services
You can configure internal pod-to-pod network communication in RHOCP by using the Service
object. Applications send requests to the service name and port. RHOCP provides a virtual
network, which reroutes such requests to the pods that the service targets by using labels.
apiVersion: v1
kind: Service
metadata:
  name: docker-registry
  namespace: test
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
The label selector identifies all pods with the attached app=MyApp label and adds the pods
to the service endpoints.
Port on the backing pods, which the service forwards connections to.
Persistent Volume Claims
A persistent volume claim requests storage for pods. The claim specifies the access mode, the
size, and the storage class of the requested volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-storage
status:
  ...
Secrets
Secrets provide a mechanism to hold sensitive information, such as passwords, private source
repository credentials, sensitive configuration files, and credentials to an external resource, such
as an SSH key or OAuth token. You can mount secrets into containers by using a volume plug-in.
Kubernetes can also use secrets to perform actions, such as declaring environment variables, on
a pod. Secrets can store any type of data. Kubernetes and OpenShift support different types of
secrets, such as service account tokens, SSH keys, and TLS certificates.
apiVersion: v1
kind: Secret
metadata:
  name: example-secret
  namespace: my-app
type: Opaque
data:
  username: bXl1c2VyCg==
  password: bXlQQDU1Cg==
stringData:
  hostname: myapp.mydomain.com
  secret.properties: |
    property1=valueA
    property2=valueB
The resource management commands generally fall into one of two categories: declarative or
imperative. An imperative command instructs the cluster what to do. A declarative command
defines the state that the cluster attempts to match.
Use the set command to define attributes on a resource, such as environment variables. For
example, the following command adds the TEAM=red environment variable to the preceding
deployment.
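For instance, assuming the hello-openshift deployment from the earlier example, the command
might resemble the following sketch:
oc set env deployment/hello-openshift TEAM=red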
Another imperative approach to creating a resource is the run command. In the following example,
the run command creates the example-pod pod.
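A minimal sketch of such a command follows; the container image is an assumption for illustration:
oc run example-pod --image=registry.access.redhat.com/ubi8/httpd-24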
The imperative commands are a faster way of creating pods, because such commands do not
require a pod object definition. However, with imperative commands, developers cannot version
the resource definition or change the pod definition incrementally.
Generally, developers test a deployment by using imperative commands, and then use the
imperative commands to generate the pod object definition. Use the --dry-run=client option
to avoid creating the object in RHOCP. Additionally, use the -o yaml or -o json option to
configure the definition format.
The following command is an example of generating the YAML definition for the example-pod
pod:
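A sketch of such a command, reusing the assumed image from the earlier run example:
oc run example-pod --image=registry.access.redhat.com/ubi8/httpd-24 \
  --dry-run=client -o yaml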
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: example-pod
  name: example-pod
spec:
  containers:
  ...output omitted...
Managing resources in this way is imperative, because you are instructing the cluster what to do
rather than declaring the intended outcomes.
Declarative resource management typically supplies a manifest file to the create or apply
commands. RHOCP also adds the new-app command, which provides another declarative way to
create resources.
This command uses heuristics to automatically determine which types of resources to create
based on the specified parameters. For example, the following command deploys the my-app
application by creating several resources, including a deployment resource, from a YAML manifest
file.
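As a sketch, assuming that the manifest is an OpenShift template file named my-app.yaml, the
command might resemble:
oc new-app --file=./my-app.yaml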
In both of the preceding create and new-app examples, you are declaring the intended state of
the resources, and so they are declarative.
You can also use the new-app command with templates for resource management. A template
describes the intended state of resources that must be created for an application to run, such as
deployment configurations and services. Supplying a template to the new-app command is a form
of declarative resource management.
The new-app command also includes options, such as the --param option, that customize an
application deployment declaratively. For example, when the new-app command is used with a
template, you can include the --param option to override a parameter value in the template.
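A sketch of such a command, assuming the mysql-persistent template and the parameter values
that appear in the following output:
oc new-app --template=mysql-persistent \
  --param=MYSQL_USER=operator --param=MYSQL_PASSWORD=myP@55 \
  --param=MYSQL_DATABASE=mydata --param=DATABASE_SERVICE_NAME=db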
Username: operator
Password: myP@55
Database Name: mydata
Connection URL: mysql://db:3306/
...output omitted...
* With parameters:
* Memory Limit=512Mi
* Namespace=openshift
* Database Service Name=db
* MySQL Connection Username=operator
* MySQL Connection Password=myP@55
* MySQL root user Password=tlH8BThuVgnIrCon # generated
* MySQL Database Name=mydata
* Volume Capacity=1Gi
* Version of MySQL Image=8.0-el8
Similar to the create command, you can use the new-app command imperatively. When you use
the new-app command with a container image, you are instructing the cluster what to do, rather
than declaring the intended outcomes. For example, the following command deploys the
example.com/my-app:dev image by creating several resources, including a deployment
resource.
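A sketch of such a command might resemble the following; the image reference comes from the
preceding paragraph:
oc new-app --name=my-app example.com/my-app:dev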
You can also supply a Git repository to the new-app command. The following command creates
an application named httpd24 by using a Git repository.
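The repository location is not reproduced here; a hypothetical example might resemble:
oc new-app --name=httpd24 https://git.example.com/example/httpd24-app.git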
You can view detailed information about a resource, such as the defined parameters, by using the
describe command. For example, RHOCP provides templates in the openshift project to use
with the oc new-app command. The following example command displays detailed information
about the mysql-ephemeral template:
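The command might take the following form:
oc describe template mysql-ephemeral -n openshift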
Name: NAMESPACE
Display Name: Namespace
Description: The OpenShift Namespace where the ImageStream resides.
Required: false
Value: openshift
...output omitted...
Objects:
Secret ${DATABASE_SERVICE_NAME}
Service ${DATABASE_SERVICE_NAME}
DeploymentConfig.apps.openshift.io ${DATABASE_SERVICE_NAME}
The describe command cannot generate structured output, such as the YAML or JSON formats.
Without a structured format, you cannot parse or filter the output with tools such as JSONPath
or Go templates. Instead, use the get command to generate, and then parse, the structured output
of a resource.
References
OpenShift Container Platform Documentation - Understanding Deployment
and DeploymentConfig Objects
https://access.redhat.com/documentation/en-us/
openshift_container_platform/4.12/html-single/building_applications/index#what-
deployments-are
Guided Exercise
Outcomes
In this exercise, you deploy two MySQL database server pods to compare deployment
methods and the resources that each creates.
This command ensures that resources are available for the exercise.
Instructions
1. As the developer user, create a project and verify that it is not empty after creation.
1.3. Observe that resources for the new project are not returned with the oc get all
command.
Important
Commands that use all for the resource type do not include every available
resource type. Instead, all is a shorthand form for a predefined subset of types.
When you use this command argument, ensure that all includes any types that you
intend to address.
1.4. Observe that the new project contains other types of resources.
2. Create two MySQL instances by using the oc new-app command with different options.
2.1. View the mysql-persistent template definition to see the resources that it
creates. Specify the project that houses the template by using the -n openshift
option.
The objects attribute specifies several resource definitions that are applied when you use
the template. These resources include one of each of the following types:
secret, service (svc), persistent volume claim (pvc), and deployment configuration
(dc).
The template creates resources of the types from the preceding step.
2.3. View and wait for the pod to start, which takes a few minutes to complete. You might
need to run the command several times before the status changes to Running.
2.4. Create an instance by using a container image. Specify a name option and attach a
custom team=blue label to the created resources.
The command creates predefined resources that are needed to deploy an image.
These resource types are image stream (is), deployment, and service (svc). Image
streams and services are discussed in more detail elsewhere in the course.
Note
It is safe to ignore pod security warnings for exercises in this course. OpenShift uses
the Security Context Constraints controller to provide safe defaults for pod security.
2.5. Wait for the pod to start. After a few moments, list all pods that contain team as a
label.
Your pod name might differ from the previous output. Without a readinessProbe,
this pod shows as ready before the MySQL service is ready for requests. Readiness
probes are discussed in more detail elsewhere in the course.
Notice that only the db-image pod has a label that contains the word team. Pods
that the mysql-persistent template creates do not have the team=red label,
because the template does not define this label in its pod specification template.
3. Compare the resources that each image and template method creates.
3.1. View the template-created pod and observe that it contains a readiness probe.
Note
The results of the preceding oc command are passed to the jq command, which
formats the JSON output.
3.2. Observe that the image-based pod does not contain a readiness probe.
3.3. Observe that the template-based pod has a memory resource limit, which restricts
allocated memory to the resulting pods. Resource limits are discussed in more detail
elsewhere in the course.
3.5. Retrieve secrets in the project. Notice that the template produced a secret, whereas
the pod that was created with only an image did not.
4.2. Observe that supplying a label shows only the services with the label.
4.3. Observe that not all resources include the label, such as the pods that are created
with the template.
5. Use labels to delete only the resources that are associated with the template-based
deployment.
5.1. Delete only the resources that use the team=red label by using it with the oc
delete command. List the resource types from the template to ensure that all
relevant resources are deleted.
Note
By using the oc delete all -l team=red command, some resources are
deleted, but the persistent volume claim and secret remain.
5.2. Observe that the resources that the template created are deleted.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• Deploy containerized applications as pods that Kubernetes workload resources manage.
Kubernetes provides several workload resources that manage pods, including the following types:
• Jobs
• Deployments
• Stateful sets
Jobs
A job resource represents a one-time task to perform on the cluster. As with most cluster tasks,
jobs are executed via pods. If a job's pod fails, then the cluster retries a number of times that the
job specifies. The job does not run again after a specified number of successful completions.
Jobs differ from using the kubectl run and oc run commands; both of the latter create only a
pod.
The following example command creates a job that logs the date and time:
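A sketch of such a command follows; the container image is an assumption for illustration:
oc create job date-job --image=registry.access.redhat.com/ubi8/ubi \
  -- /bin/bash -c 'date'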
Cron Jobs
A cron job resource builds on a regular job resource by enabling you to specify how often the
job should run. Cron jobs are useful for creating periodic and recurring tasks, such as backups
or report generation. Cron jobs can also schedule individual tasks for a specific time, such as to
schedule a job for a low activity period. Similar to the crontab (cron table) file on a Linux system,
the CronJob resource uses the Cron format for scheduling. A CronJob resource creates a job
resource based on the configured time zone on the control plane node that runs the cron job
controller.
The following example command creates a cron job named date that prints the system date and
time every minute:
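A sketch of such a command follows; the container image is an assumption for illustration:
oc create cronjob date --image=registry.access.redhat.com/ubi8/ubi \
  --schedule='*/1 * * * *' -- date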
Deployments
A deployment creates a replica set to maintain pods. A replica set maintains a specified number
of replicas of a pod. Replica sets use selectors, such as a label, to identify pods that are part of
the set. Pods are created or removed until the replicas reach the number that the deployment
specifies. Replica sets are not managed directly. Instead, deployments indirectly manage replica
sets.
Unlike a job, a deployment's pods are re-created after crashing or deletion. The reason is that
deployments use replica sets.
Pods in a replica set are identical and match the pod template in the replica set definition. If the
number of replicas is not met, then a new pod is created by using the template. For example, if a
pod crashes or is otherwise deleted, then a new one is created to replace it.
Labels are a type of resource metadata that are represented as string key-value pairs. A
label indicates a common trait for resources with that label. For example, you might attach a
layer=frontend label to resources that relate to an application's UI components.
Many oc and kubectl commands accept a label to filter affected resources. For example, the
following command deletes all pods with the environment=testing label:
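The command might take the following form:
oc delete pods -l environment=testing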
By liberally applying labels to resources, you can cross-reference resources and craft precise
selectors. A selector is a query object that describes the attributes of matching resources.
Certain resources use selectors to find other resources. In the following YAML excerpt, an example
replica set uses a selector to match its pods.
apiVersion: apps/v1
kind: ReplicaSet
...output omitted...
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpd
      pod-template-hash: 7c84fbdb57
...output omitted...
Stateful Sets
Like deployments, stateful sets manage a set of pods based on a container specification. However,
each pod that a stateful set creates is unique. Pod uniqueness is useful when, for example, a pod
needs a unique network identifier or persistent storage.
As their name implies, stateful sets are for pods that require state within the cluster. Deployments
are used for stateless pods.
References
Kubernetes Workloads
https://kubernetes.io/docs/concepts/workloads/
Guided Exercise
Outcomes
In this exercise, you deploy a database server and a batch application that are both managed
by workload resources.
• Create deployments.
This command ensures that resources are available for the exercise.
Instructions
1. As the developer user, create a MySQL deployment in a new project.
Note
It is safe to ignore pod security warnings for exercises in this course. OpenShift uses
the Security Context Constraints controller to provide safe defaults for pod security.
1.5. Retrieve the status of the created pod. Your pod name might differ from the output.
1.6. Review the logs for the pod to determine why it fails to start.
Note that the container fails to start due to missing environment variables.
2. Fix the database deployment and verify that the server is running.
2.2. Retrieve the list of deployments and observe that the my-db deployment has a
running pod.
2.3. Retrieve the internal IP address of the MySQL pod within the list of all pods.
The -o wide option enables additional output, such as IP addresses. Your IP address
value might differ from the previous output.
2.4. Verify that the database server is running, by running a query. Replace the IP address
with the one that you retrieved in the preceding step.
3. Delete the database server pod and observe that the deployment causes the pod to be re-
created.
3.1. Delete the existing MySQL pod by using the label that is associated with the
deployment.
3.2. Retrieve the information for the MySQL pod and observe that it is newly created.
Your pod name might differ in your output.
4. Create and apply a job resource that prints the time and date repeatedly.
4.1. Create a job resource called date-loop that runs a script. Ignore the warning.
The command object, which specifies the defined script to execute within the
pod.
Defines the restart policy for the pod. Kubernetes does not restart the job pod
after the pod exits.
4.3. List the jobs to see that the date-loop job completed successfully.
You might need to wait for the script to finish and run the command again.
4.4. Retrieve the logs for the associated pod. The log values might differ in your output.
5. Delete the pod for the date-loop job and observe that the pod is not created again.
5.2. View the list of pods and observe that the pod is not re-created for the job.
5.3. Verify that the job status is still listed as successfully completed.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• Interconnect application pods inside the same cluster by using Kubernetes services.
With the software-defined network (SDN), you can manage the network traffic and network
resources programmatically, so that the organization teams can decide how to expose their
applications. The SDN implementation
creates a model that is compatible with traditional networking practices. It makes pods akin to
virtual machines in terms of port allocation, IP address leasing, and reservation.
With the SDN design, you do not need to change how application components communicate with
each other, which helps to containerize legacy applications. If your application is composed of
many services that communicate over the TCP/UDP stack, then this approach still works, because
containers in a pod use the same network stack.
The following diagram shows how all pods are connected to a shared network:
Among the many features of SDN is that, because it is based on open standards, vendors can
propose their own solutions for centralized management, dynamic routing, and tenant isolation.
Kubernetes Networking
Networking in Kubernetes provides a scalable means of communication between containers.
Kubernetes networking addresses the following types of communication:
• Pod-to-pod communications
• Pod-to-service communications
Kubernetes automatically assigns an IP address to every pod. However, pod IP addresses are
unstable, because pods are ephemeral. Pods are constantly created and destroyed across the
nodes in the cluster. For example, when you deploy a new version of your application, Kubernetes
destroys the existing pods and then deploys new ones.
All containers within a pod share networking resources. The IP address and MAC address that are
assigned to the pod are shared among all containers in the pod. Thus, all containers within a pod
can reach each other's ports through the loopback address, localhost. Ports that are bound to
localhost are available to all containers that run within the pod, but never to containers outside it.
By default, the pods can communicate with each other even if they run on different cluster nodes
or belong to different Kubernetes namespaces. Every pod is assigned an IP address in a flat shared
networking namespace that has full communication with other physical computers and containers
across the network. All pods are assigned a unique IP address from a Classless Inter-Domain
Routing (CIDR) range of host addresses. The shared address range places all pods in the same
subnet.
Because all the pods are on the same subnet, pods on all nodes can communicate with pods on
any other node without the aid of Network Address Translation (NAT). Kubernetes also provides
a service subnet, which links the stable IP address of a service resource to a set of specified pods.
The traffic is forwarded in a transparent way to the pods; an agent (depending on the network
mode that you use) manages routing rules to route traffic to the pods that match the service
resource selectors. Thus, pods can be treated much like Virtual Machines (VMs) or physical hosts
from the perspective of port allocation, networking, naming, service discovery, load balancing,
application configuration, and migration. Kubernetes implements this infrastructure by managing
the SDN.
The following illustration gives further insight into how the infrastructure components work along
with the pod and service subnets to enable network access between pods inside an OpenShift
instance.
In the diagram, the Before side shows the Front-end container that is running in a pod with
a 10.8.0.1 IP address. The container also refers to a Back-end container that is running in a
pod with a 10.8.0.2 IP address. In this example, an event occurs that causes the Back-end
container to fail. A pod can fail for many reasons. In response to the failure, Kubernetes creates a
pod for the Back-end container that uses a new IP address of 10.8.0.4. From the After side of
the diagram, the Front-end container now has an invalid reference to the Back-end container
because of the IP address change. Kubernetes resolves this problem with service resources.
Using Services
Containers inside Kubernetes pods must not connect directly to each other's dynamic IP address.
Instead, Kubernetes assigns a stable IP address to a service resource that is linked to a set of
specified pods. The service then acts as a virtual network load balancer for the pods that are
linked to the service.
If the pods are restarted, replicated, or rescheduled to different nodes, then the service endpoints
are updated, thus providing scalability and fault tolerance for your applications. Unlike the IP
addresses of pods, the IP addresses of services do not change.
In the diagram, the Before side shows that the Front-end container now holds a reference
to the stable IP address of the Back-end service, instead of to the IP address of the pod that
is running the Back-end container. When the Back-end container fails, Kubernetes creates a
pod with the New back-end container to replace the failed pod. In response to the change,
Kubernetes removes the failed pod from the service's host list, or service endpoints, and then
adds the IP address of the New back-end container pod to the service endpoints. With the
addition of the service, requests from the Front-end container to the Back-end container
continue to work, because the service is dynamically updated with the IP address change. A
service provides a permanent, static IP address for a group of pods that belong to the same
deployment or replica set for an application. Until you delete the service, the assigned IP address
does not change, and the cluster does not reuse it.
Most real-world applications do not run as a single pod. Applications need to scale horizontally.
Multiple pods run the same containers to meet a growing user demand. A Deployment resource
manages multiple pods that execute the same container. A service provides a single IP address for
the whole set, and provides load-balancing for client requests among the member pods.
With services, containers in one pod can open network connections to containers in another pod.
The pods, which the service tracks, are not required to exist on the same compute node or in the
same namespace or project. Because a service provides a stable IP address for other pods to
use, a pod also does not need to discover the new IP address of another pod after a restart. The
service provides a stable IP address to use, no matter which compute node runs the pod after
each restart.
The SERVICE object provides a stable IP address for the CLIENT container on NODE X to send a
request to any one of the API containers.
Kubernetes uses labels on the pods to select the pods that are associated with a service. To
include a pod in a service, the pod labels must include each of the selector fields of the service.
In this example, the selector has a key-value pair of app: myapp. Thus, pods with a matching
label of app: myapp are included in the set that is associated with the service. The selector
attribute of a service is used to identify the set of pods that form the endpoints for the service.
Each pod in the set is an endpoint for the service.
The oc expose command can use the --selector option to specify the label selectors to use.
When the command is used without the --selector option, the command applies a selector to
match the replication controller or replica set.
The --port option of the oc expose command specifies the port that the service listens on.
This port is available only to pods within the cluster. If a port value is not provided, then the port is
copied from the deployment configuration.
The --target-port option of the oc expose command specifies the name or number of the
container port that the service uses to communicate with the pods.
The --protocol option determines the network protocol for the service. TCP is used by default.
The --name option of the oc expose command can explicitly name the service. If not specified,
the service uses the same name that is provided for the deployment.
To view the selector that a service uses, use the -o wide option with the oc get command.
In this example, db-pod is the name of the service. Pods must use the app=db-pod label to be
included in the host list for the db-pod service. To see the endpoints that a service uses, use the
oc get endpoints command.
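The commands might take the following forms, assuming a service named db-pod:
oc get service db-pod -o wide
oc get endpoints db-pod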
This example illustrates a service with two pods in the host list. The oc get endpoints
command returns all of the service endpoints in the currently selected project. Add the name of
the service to the command to show only the endpoints of a single service. Use the --namespace
option to view the endpoints in a different namespace.
Use the oc describe deployment <deployment name> command to view the deployment
selector.
You can view or parse the selector from the YAML or JSON output for the deployment resource
from the spec.selector.matchLabels object. In this example, the -o yaml option of the oc
get command returns the selector label that the deployment uses.
A pod discovers a service by using the internal DNS server, which is visible only to pods. Each
service is dynamically assigned a Fully Qualified Domain Name (FQDN) that
uses the following format:
SVC-NAME.PROJECT-NAME.svc.CLUSTER-DOMAIN
When a pod is created, Kubernetes provides the container with a /etc/resolv.conf file with
similar contents to the following items:
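A representative sketch of such a file follows; the nameserver IP address is an assumption:
search deploy-services.svc.cluster.local svc.cluster.local cluster.local
nameserver 172.30.0.10
options ndots:5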
In this example, deploy-services is the project name for the pod, and cluster.local is the
cluster domain.
The nameserver directive provides the IP address of the Kubernetes internal DNS server. The
options ndots directive specifies the number of dots that must appear in a name to qualify for
an initial absolute query. Alternative hostname values are derived by appending values from the
search directive to the name that is sent to the DNS server.
In the search directive in this example, the svc.cluster.local entry enables any pod to
communicate with another pod in the same cluster by using the service name and project name:
SVC-NAME.PROJECT-NAME
The first entry in the search directive enables a pod to use the service name to specify another
pod in the same project. In RHOCP, a project is also the namespace for the pod. The service name
alone is sufficient for pods in the same RHOCP project:
SVC-NAME
Red Hat provides the following Container Network Interface (CNI) plug-ins for an RHOCP cluster:
• OVN-Kubernetes: The default plug-in for first-time installations of RHOCP, starting with
RHOCP 4.10.
• OpenShift SDN: An earlier plug-in from RHOCP 3.x; it is incompatible with some later features
of RHOCP 4.x.
Certified CNI plug-ins from other vendors are also compatible with an RHOCP cluster.
The SDN uses CNI plug-ins to create Linux namespaces to partition the usage of resources and
processes on physical and virtual hosts. With this implementation, containers inside pods can share
network resources, such as devices, IP stacks, firewall rules, and routing tables. The SDN allocates
a unique routable IP to each pod, so that you can access the pod from any other service in the
same network.
OVN-Kubernetes uses Open Virtual Network (OVN) to manage the cluster network. A cluster that
uses the OVN-Kubernetes plug-in also runs Open vSwitch (OVS) on each node. OVN configures
OVS on each node to implement the declared network configuration.
An administrator configures the cluster network operator at installation time. To see the
configuration, use the following command:
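The command might take the following form:
oc describe network.config/cluster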
Policy:
Network Type: OVNKubernetes
Service Network:
172.30.0.0/16
...output omitted...
The Cluster Network CIDR defines the range of IPs for all pods in the cluster.
The Service Network CIDR defines the range of IPs for all services in the cluster.
References
For more information, refer to the About Kubernetes Pods and Services chapter in
the Red Hat OpenShift Container Platform 4.12 Networking documentation at
https://access.redhat.com/documentation/en-us/
openshift_container_platform/4.12/html-single/architecture/index#building-
simple-container
For more information, refer to the Cluster Network Operator in OpenShift Container
Platform chapter in the Red Hat OpenShift Container Platform 4.12 Networking
documentation at
https://access.redhat.com/documentation/en-us/
openshift_container_platform/4.12/html-single/networking/index#cluster-network-
operator
For more information, refer to the About the OVN-Kubernetes Network Plug-
in chapter in the Red Hat OpenShift Container Platform 4.12 Networking
documentation at
https://access.redhat.com/documentation/en-us/
openshift_container_platform/4.12/html-single/networking/index#about-ovn-
kubernetes
Cluster Networking
https://kubernetes.io/docs/concepts/cluster-administration/networking/
Guided Exercise
Outcomes
You should be able to deploy a database server, and access it indirectly through a
Kubernetes service, and also directly pod-to-pod for troubleshooting.
This command ensures that all resources are available for this exercise. It also creates the
deploy-services project and the /home/student/DO180/labs/deploy-services/
resources.txt file. The resources.txt file contains some of the commands that you
use during the exercise. You can use the file to copy and paste these commands.
Note
It is safe to ignore pod security warnings for exercises in this course.
OpenShift uses the Security Context Constraints controller to provide safe
defaults for pod security.
Instructions
1. Log in to the OpenShift cluster as the developer user with the developer password.
Use the deploy-services project.
4. Validate the service. Confirm that the service selector matches the label on the pod. Then,
confirm that the db-pod service endpoint matches the IP of the pod.
4.1. Identify the selector for the db-pod service. Use the oc get service command
with the -o wide option to retrieve the selector that the service uses.
Notice that the label list includes the app=db-pod key-value pair, which is the
selector for the db-pod service.
4.5. Verify that the service endpoint matches the db-pod IP address. Use the oc get
pods command with the -o wide option to view the pod IP address.
The service endpoint resolves to the IP address that is assigned to the pod.
5. Delete and then re-create the db-pod deployment. Confirm that the db-pod service
endpoint automatically resolves to the IP address of the new pod.
5.2. Verify that the service still exists without the deployment.
5.6. Confirm that the newly created pod has the app=db-pod selector.
Notice the change in the pod name. The pod IP address might also change. Your pod
name and IP address might differ from the previous output.
5.7. Confirm that the endpoints for the db-pod service include the newly created pod.
6. Create a pod to identify the available DNS name assignments for the service.
6.1. Create a pod named shell to use for troubleshooting. Use the oc run command
and the registry.ocp4.example.com:8443/ubi8/ubi container image.
6.2. From the prompt inside the shell pod, view the /etc/resolv.conf file to identify
the cluster-domain name.
The container uses the values from the search directive as suffix values on DNS
searches. The container appends these values to a DNS query, in the written order,
to resolve the search. The cluster-domain name is the last few components of these
values that start after svc.
6.3. Use the timeout command to test the available DNS names for the service.
Note
The ping utility is often used for this test. However, the ping utility is not available
in the shell pod, because the ubi8/ubi container image is configured as a non-
root container. Non-root containers cannot use the ping utility by default, because
it requires elevated privileges to establish raw sockets. However, the widely available
timeout command can test port connectivity.
The <value> object is the timeout value for the poll on the <server> target on the
<port> port.
The long version of the DNS name is required when accessing the service from a
different project. When the pod is in the same project, you can use a shorter version
of the DNS name.
The search directive in the resolv.conf file enables an even shorter form without
the namespace component.
bash-4.4$ exit
Session ended, resume using 'oc attach shell -c shell -i -t' command when the pod
is running
7.2. Execute a timeout command from a pod to test the DNS name access to another
namespace.
The -h option of the mysql command directs the command to communicate with the
DNS short name of the db-pod service. The db-pod short name can be used here,
because the pod for the job is created in the same namespace as the service.
The double dash -- before /bin/bash separates the oc command arguments
from the command in the pod. The -c option of /bin/bash directs the command
interpreter in the container to execute the command string. The /tmp/db-
init.sql file is redirected as input for the command. The db-init.sql file is
included in the image, and contains the following script.
8.2. Confirm the status of the mysql-init job. Wait for the job to complete.
8.3. Retrieve the status of the mysql-init job pod, to confirm that the pod has a
Completed status.
9.1. Create the query-db pod. Configure the pod to use the MySQL client to execute a
query against the db-pod service. You can use the db-pod service short name, which
provides a stable reference.
10. As the cluster administrator, identify the pod and service subnet ranges.
10.3. Retrieve the cluster network configuration for the cluster network operator.
Compare the IP ranges that the pod subnet and the service subnet use.
All pods in this cluster are created in the 10.8.0.0/14 range, and all services are created
in the 172.30.0.0/16 range.
11. It might be necessary to use pod-to-pod communications for troubleshooting. Use the oc
run command to create a pod that executes a network test against the IP address of the
database pod.
11.1. Confirm the IP address of the MySQL database pod. Your pod IP address might differ
from the output.
11.3. Create a test pod named shell with the oc run command. Execute the timeout
command to test against the $POD_IP environment variable and the 3306 port for
the database.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• Expose applications to clients outside the cluster by using Kubernetes ingress and OpenShift
routes.
By default, services connect clients to pods in a round-robin fashion, and each service is assigned
a unique IP address for clients to connect to. This IP address comes from an internal OpenShift
virtual network, which, although distinct from the pods' internal network, is visible only to pods.
Each pod that matches the selector is added to the service resource as an endpoint.
Containers inside Kubernetes pods must not connect to each other's dynamic IP address directly.
Services resolve this problem by linking more stable IP addresses from the SDN to the pods. If
pods are restarted, replicated, or rescheduled to different nodes, then services are updated, to
provide scalability and fault tolerance.
Service Types
You can choose between several service types depending on your application needs, cluster
infrastructure, and security requirements.
ClusterIP
This type is the default, unless you explicitly specify a type for a service. The ClusterIP type
exposes the service on a cluster-internal IP address. If you choose this value, then the service
is reachable only from within the cluster.
The ClusterIP service type is used for pod-to-pod routing within the RHOCP cluster, and
enables pods to communicate with and to access each other. IP addresses for the ClusterIP
services are assigned from a dedicated service network that is accessible only from inside the
cluster. Most applications should use this service type, for which Kubernetes automates the
management.
LoadBalancer
This resource instructs RHOCP to activate a load balancer in a cloud environment. A load
balancer instructs Kubernetes to interact with the cloud provider that the cluster is running
in, to provision a load balancer. The load balancer then provides an externally accessible IP
address to the application.
Take all necessary precautions before deploying this service type. Load balancers are typically
too expensive to assign one for each application in a cluster. Furthermore, applications that
use this service type become accessible from networks outside the cluster. Additional security
configuration is required to prevent unintended access.
ExternalIP
This service type redirects traffic from a virtual IP address on a cluster node to a pod. A
cluster administrator assigns the virtual IP address to a node. Additional infrastructure must be
configured to fail over that virtual IP address to another node in the event of failure.
This method instructs RHOCP to set NAT rules to redirect traffic from a virtual IP address
to a pod. Network Address Translation (NAT) places groups of IP subnets behind public IP
addresses, and masks all requests so that they appear to come from one source instead of from
many sources. You must ensure that the external IP addresses are correctly routed to the nodes.
External IP services require allowing direct network connections to cluster nodes. Most
responsible security policies forbid such connections. Because this service type exposes
cluster nodes to external access, additional security measures must also be in place to protect
the cluster.
NodePort
With this method, Kubernetes exposes a service on a port on the node IP address. The port is
exposed on all cluster nodes, and each node redirects traffic to the endpoints (pods) of the
service.
Similar to the ExternalIP service type, a NodePort service requires allowing direct network
connections to a cluster node, which is a security risk.
ExternalName
This service tells Kubernetes that the DNS name in the externalName field is the location
of the resource that backs the service. When a DNS request is made against the Kubernetes
DNS server, it returns the externalName in a Canonical Name (CNAME) record, and directs
the client to look up the returned name to get the IP address.
RHOCP provides the route resource to expose your applications to external networks. With
routes, you can access your application with a unique hostname that is publicly accessible. Routes
rely on a Kubernetes ingress controller to redirect the traffic from the public IP address to pods.
By default, Kubernetes provides an ingress controller, starting from the 1.24 release. For RHOCP
clusters, the ingress controller is provided by the OpenShift ingress operator. RHOCP clusters can
also use various third-party ingress controllers that can be deployed in parallel with the OpenShift
ingress controller.
Routes provide ingress traffic to services in the cluster. Routes were created before Kubernetes
ingress objects, and provide more features. Routes provide advanced features that Kubernetes
ingress controllers might not support through a standard interface, such as TLS re-encryption,
TLS passthrough, and split traffic for blue-green deployments.
To create a route (secure or insecure) with the oc CLI, use the oc expose service service-
name command. Include the --hostname option to provide a custom hostname for the route.
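For example, assuming a service named api-frontend, the command might resemble:
oc expose service api-frontend --hostname api.apps.acme.com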
If you omit the hostname, then RHOCP automatically generates a hostname with the following
structure: <route-name>-<project-name>.<default-domain>. For example, if you create
a frontend route in an api project, in a cluster that uses apps.example.com as the wildcard
domain, then the route hostname is as follows:
frontend-api.apps.example.com
Important
The DNS server that hosts the wildcard domain is unaware of any route hostnames;
it resolves any name only to the configured IPs. Only the RHOCP router knows
about route hostnames, and treats each one as an HTTP virtual host.
A route requires the following information:
• The name of a service. The route uses the service to determine the pods to direct the traffic to.
• A hostname for the route. A route is always a subdomain of your cluster wildcard domain. For
example, if you are using a wildcard domain of apps.dev-cluster.acme.com, and need to
expose a frontend service through a route, then the route name is as follows:
frontend.apps.dev-cluster.acme.com.
• A target port that the application listens to. The target port corresponds to the port that you
define in the targetPort key of the service.
kind: Route
apiVersion: route.openshift.io/v1
metadata:
name: a-simple-route
labels:
app: API
name: api-frontend
spec:
host: api.apps.acme.com
to:
kind: Service
name: api-frontend
port: 8080
targetPort: 8443
The hostname of the route. This hostname must be a subdomain of your wildcard domain,
because RHOCP routes the wildcard domain to the routers.
The service to redirect the traffic to. Although you use a service name, the route uses this
information only to determine the list of pods that receive the traffic.
Port mapping from a router to an endpoint in the service endpoints. The target port on pods
that are selected by the service that this route points to.
Note
Some ecosystem components have an integration with ingress resources, but not
with route resources. In this case, RHOCP automatically creates managed route
objects when an ingress object is created. These route objects are deleted when the
corresponding ingress objects are deleted.
You can delete a route by using the oc delete route route-name command.
Note
The ingress resource is commonly used for Kubernetes. However, the route resource
is the preferred method for external connectivity in RHOCP.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
  - host: "www.example.com"
    http:
      paths:
      - backend:
          service:
            name: frontend
            port:
              number: 80
        pathType: Prefix
        path: /
  tls:
  - hosts:
    - www.example.com
    secretName: example-com-tls-certificate
The host for the ingress object. Applies the HTTP rule to the inbound HTTP traffic of the
specified host.
The backend to redirect traffic to. Defines the service name, port number, and port names for
the ingress object. To connect to the back end, incoming requests must match the host and
path of the rule.
The configuration of TLS for the ingress object; it is required for secured paths. The host in
the TLS object must match the host in the rules object.
You can delete an ingress object by using the oc delete ingress ingress-name command.
Sticky sessions
Sticky sessions enable stateful application traffic by ensuring that all requests reach the same
endpoint. RHOCP uses cookies to configure session persistence for ingress and route resources.
The ingress controller selects an endpoint to handle any user requests, and creates a cookie for
the session. The cookie is passed back in the response to the request, and the user sends the
cookie back with the next request in the session. The cookie tells the ingress controller which
endpoint is handling the session, which ensures that client requests are routed to the same pod.
RHOCP auto-generates the cookie name for ingress and route resources. You can overwrite
the default cookie name by using the annotate command with either the kubectl or the oc
command. With this annotation, the application that receives route traffic knows the cookie name.
The following example configures a cookie named myapp for a route object:
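A sketch of such a command follows; the route name is an assumption:
oc annotate route a-simple-route router.openshift.io/cookie_name=myapp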
After you annotate the route, capture the route hostname in a variable:
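Continuing the sketch with the same assumed route name:
ROUTE_NAME=$(oc get route a-simple-route -o jsonpath='{.spec.host}')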
Then, use the curl command to save the cookie and access the route:
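For example, continuing the sketch:
curl $ROUTE_NAME -c /tmp/cookie_jar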
The cookie is passed back in response to the request, and is saved to the /tmp/cookie_jar
file. Use the curl command and the cookie that was saved by the previous command to
connect to the route:
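Continuing the sketch:
curl $ROUTE_NAME -b /tmp/cookie_jar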
By using the saved cookie, the request is sent to the same pod as the previous request.
You can change the number of replicas in a deployment resource manually by using the oc scale
command.
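For example, a hypothetical my-app deployment might be scaled to three replicas as follows:
oc scale deployment/my-app --replicas=3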
The deployment resource propagates the change to the replica set. The replica set reacts to the
change by creating pods (replicas) or by deleting existing ones, depending on whether the new
intended replica count is less than or greater than the existing count.
Although you can manipulate a replica set resource directly, the recommended practice is to
manipulate the deployment resource instead. A new deployment creates either a replica set or
a replication controller, and direct changes to a previous replica set or replication controller are
ignored.
A router uses the service selector to find the service and the endpoints, or pods, that back the
service. When both a router and a service provide load balancing, RHOCP uses the router to load-
balance traffic to pods. A router detects relevant changes in the IP addresses of its services, and
adapts its configuration accordingly. Custom routers can thereby communicate modifications of
API objects to an external routing solution.
RHOCP routers map external hostnames, and load-balance service endpoints over protocols that
pass distinguishing information directly to the router. The hostname must exist in the protocol for
the router to determine where to send it.
References
For more information, refer to the About Networking section in the Red Hat
OpenShift Container Platform 4.12 Networking documentation at
https://access.redhat.com/documentation/en-us/
openshift_container_platform/4.12/html-single/networking/index#about-
networking
Guided Exercise
Outcomes
In this exercise, you deploy two web applications, access them through an ingress object
and a route, and scale them to verify the load balancing between the pods.
Instructions
1. Create two web application deployments, named satir-app and sakila-app. Use
the registry.ocp4.example.com:8443/httpd-app:v1 container image for both
deployments.
1.1. Log in to the OpenShift cluster as the developer user with the developer
password.
1.6. Wait a few moments and then verify that the deployment is successful.
2. Create services for the web application deployments. Then, use the services to create
a route for the satir-app application and an ingress object for the sakila-app
application.
2.1. Expose the satir-app deployment. Name the service satir-svc, and specify port
8080 as the port and target port.
2.4. Create a route named satir for the satir-app web application by exposing the
satir-svc service.
2.5. Create an ingress object named ingr-sakila for the sakila-svc service.
Configure the --rule option with the following values:
Field Value
Host ingr-sakila.apps.ocp4.example.com
2.6. Confirm that a route exists for the ingr-sakila ingress object.
A specific port is not assigned to routes that are created from ingress objects. By contrast, a
route that is created by exposing a service is assigned the same ports as the service.
2.7. Use the curl command to access the ingr-sakila ingress object and the satir
route. The output states the name of the pod that is servicing the request.
3. Scale the web application deployments to load-balance their services. Scale the sakila-
app to two replicas, and the satir-app to three replicas.
3.2. Wait a few moments and then verify the status of the replica pods.
3.4. Wait a few moments and then verify the status of the replica pods.
3.5. Retrieve the service endpoints to confirm that the services are load-balanced
between the additional replica pods.
4. Enable the sticky sessions for the sakila-app web application. Then, use the curl
command to confirm that the sticky sessions are working for the ingr-sakila object.
4.2. Use the curl command to access the ingr-sakila ingress object. The output
states the name of the pod that is servicing the request. Notice that the connection is
load-balanced between the replicas.
4.3. Use the curl command to save the ingr-sakila ingress object cookie to the /
tmp/cookie_jar file. Confirm that the cookie exists in the /tmp/cookie_jar file.
4.4. The cookie provides session stickiness for connections to the ingr-sakila route.
Use the curl command and the cookie in the /tmp/cookie_jar file to connect to
the ingr-sakila route again. Confirm that you are connected to the same pod that
handled the request in the previous step.
4.5. Use the curl command to connect to the ingr-sakila route without the cookie.
Observe that session stickiness occurs only with the cookie.
5. Enable the sticky sessions for the satir-app web application. Then, use the curl
command to confirm that sticky sessions are active for the satir route.
5.1. Configure a cookie with a hello value for the satir route.
5.2. Use the curl command to access the satir route. The output states the name of
the pod that is servicing the request. Notice that the connection is load-balanced
between the three replica pods.
5.3. Use the curl command to save the hello cookie to the /tmp/cookie_jar file.
Afterward, confirm that the hello cookie exists in the /tmp/cookie_jar file.
5.4. The hello cookie provides session stickiness for connections to the satir route.
Use the curl command and the hello cookie in the /tmp/cookie_jar file to
connect to the satir route again. Confirm that you are connected to the same pod
that handled the request in the previous step.
5.5. Use the curl command to connect to the satir route without the hello cookie.
Observe that session stickiness occurs only with the cookie.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Lab
Outcomes
• Deploy a MySQL database from a container image.
This command ensures that the cluster is accessible and that all exercise resources are
available. It also creates the database-applications project.
Instructions
The API URL of your OpenShift cluster is https://api.ocp4.example.com:6443, and the oc
command is already installed on your workstation machine.
Log in to the OpenShift cluster as the developer user with the developer password.
Field Value
MYSQL_USER redhat
MYSQL_PASSWORD redhat123
MYSQL_DATABASE world_x
Then, execute the following command in the mysql-app deployment pod to load the
world_x database:
4. Create a service for the mysql-app deployment by using the following information:
Field Value
Name mysql-service
Port 3306
Field Value
Name php-svc
Port 8080
Then, create a route named phpapp to expose the web application to external access.
7. Test the connectivity between the web application and the MySQL database. In a web
browser, navigate to the phpapp-web-app.apps.ocp4.example.com route, and verify
that the application retrieves data from the MySQL database.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Solution
Outcomes
• Deploy a MySQL database from a container image.
This command ensures that the cluster is accessible and that all exercise resources are
available. It also creates the database-applications project.
Instructions
The API URL of your OpenShift cluster is https://api.ocp4.example.com:6443, and the oc
command is already installed on your workstation machine.
Log in to the OpenShift cluster as the developer user with the developer password.
2.1. Create the MySQL database deployment. Ignore the warning message.
2.2. Verify the deployment status. The pod name might differ in your output.
3. Configure the environment variables for the mysql-app deployment by using the following
information:
Field             Value
MYSQL_USER        redhat
MYSQL_PASSWORD    redhat123
MYSQL_DATABASE    world_x
Then, execute the following command in the mysql-app deployment pod to load the
world_x database:
3.2. Verify that the mysql-app application pod is in the RUNNING state. The pod name
might differ in your output.
3.5. Exit the MySQL database, and then exit the container.
mysql> exit
Bye
sh-4.4$ exit
4. Create a service for the mysql-app deployment by using the following information:
Field   Value
Name    mysql-service
Port    3306
4.2. Verify the service configuration. The endpoint IP address might differ in your output.
5.1. Create the web application deployment. Ignore the warning message.
5.2. Verify the deployment status. Verify that the php-app application pod is in the
RUNNING state.
6. Create a service for the php-app deployment by using the following information:
Field   Value
Name    php-svc
Port    8080
Then, create a route named phpapp to expose the web application to external access.
6.2. Verify the service configuration. The endpoint IP address might differ in your output.
7. Test the connectivity between the web application and the MySQL database. In a web
browser, navigate to the phpapp-web-app.apps.ocp4.example.com route, and verify
that the application retrieves data from the MySQL database.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Summary
• Many resources in Kubernetes and RHOCP create or affect pods.
• Resources are created imperatively or declaratively. The imperative strategy instructs the cluster what to do, whereas the declarative strategy defines the desired state that the cluster must match.
• The oc new-app command creates resources whose types are determined by heuristics.
• The workload API includes several resources to create pods. The choice between resources depends on how long and how often the pod needs to run.
• A job resource executes a one-time task on the cluster via a pod. The cluster retries the job until it succeeds or until it reaches a specified number of attempts.
• Resources are organized into projects and are selected via labels.
Chapter 5
Manage Storage for Application Configuration and Data
Objectives
• Configure applications by using Kubernetes secrets and configuration maps to initialize
environment variables and to provide text and binary configuration files.
With Kubernetes, you can use manifests in JSON and YAML formats to specify the intended
configuration for each application. You can define the name of the application, labels, the image
source, storage, environment variables, and more.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-deployment
  template:
    metadata:
      labels:
        app: hello-deployment
    spec:
      containers:
      - env:
        - name: ENV_VARIABLE_1
          valueFrom:
            secretKeyRef:
              key: hello
              name: world
        image: quay.io/hello-image:latest
In this section, you specify the metadata of your application, such as the name.
You can define the general configuration of the resource that is applied to the deployment,
such as the number of replicas (pods), the selector label, and the template data.
In this section, you specify the configuration for your application, such as the image name,
the container name, ports, environment variables, and more.
You can define the environment variables that your application needs.
Sometimes your application requires a combination of configuration files. For example, a database deployment might need preloaded databases and data at creation time. You most commonly configure applications by using environment variables, external files, or command-line arguments.
This process of configuration externalization ensures that the application is portable across
environments when the container image, external files, and environment variables are available in
the environment where the application runs.
You can use configuration maps to inject containers with configuration data. The ConfigMap
(configuration map) namespaced objects provide ways to inject configuration data into
containers, which helps to maintain platform independence of the containers. These objects can
store fine-grained information, such as individual properties, or coarse-grained information, such
as entire configuration files or JSON blobs (JSON sections). The information in configuration
maps does not require protection.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-configmap
  namespace: my-app
data:
  example.property.1: hello
  example.property.2: world
  example.property.file: |-
    property.1=value-1
    property.2=value-2
    property.3=value-3
binaryData:
  bar: L3Jvb3QvMTAw
Points to a Base64-encoded file that contains non-UTF-8 data, for example, a binary Java keystore file. Specify a key followed by the Base64-encoded file content.
Applications often require access to sensitive information. For example, a back-end web
application requires access to database credentials to query a database. Kubernetes and
OpenShift use secrets to hold sensitive information. For example, you can use secrets to store the
following types of sensitive information:
• Passwords
apiVersion: v1
kind: Secret
metadata:
  name: example-secret
  namespace: my-app
type: Opaque
data:
  username: bXl1c2VyCg==
  password: bXlQQDU1Cg==
stringData:
  hostname: myapp.mydomain.com
  secret.properties: |
    property1=valueA
    property2=valueB
A secret is a namespaced object and it can store any type of data. Data in a secret is Base64-
encoded, and is not stored in plain text. Secret data is not encrypted; you can decode the secret
from Base64 format to access the original data. The following example shows the decoded values
for the username and password objects from the example-secret secret:
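For example, you can decode these values with the base64 utility; the following commands reproduce the decoded username and password from the manifest above:
$ echo 'bXl1c2VyCg==' | base64 --decode
myuser
$ echo 'bXlQQDU1Cg==' | base64 --decode
myP@55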
Kubernetes and OpenShift support different types of secrets, such as service account tokens,
SSH keys, Docker registry credentials, and TLS certificates. When you store information in a
specific secret resource type, Kubernetes validates that the data conforms to the type of secret.
Note
By default, configuration maps and secrets are not encrypted. To encrypt your
secret data at rest, you must encrypt the Etcd database. When enabled, Etcd
encrypts the following resources: secrets, configuration maps, routes, OAuth access
tokens, and OAuth authorization tokens. Encrypting the Etcd database is outside
the scope of the course.
For more information, refer to the Encrypting Etcd Data chapter in the Red Hat OpenShift Container Platform 4.12 Security and Compliance documentation at
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/security_and_compliance/index#encrypting-etcd
• Create a generic secret that contains key-value pairs from literal values that are typed on the
command line:
• Create a generic secret by using key names that are specified on the command line and values
from files:
• Create a TLS secret that specifies a certificate and the associated key:
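For example, commands of the following form cover these three cases; the secret names, literal values, and file paths are illustrative only:
$ oc create secret generic mysecret --from-literal=user=demo-user --from-literal=password=demo-pass
$ oc create secret generic ssh-keys --from-file=id_rsa=/path/to/id_rsa --from-file=id_rsa.pub=/path/to/id_rsa.pub
$ oc create secret tls my-tls-secret --cert=/path-to-certs/tls.crt --key=/path-to-certs/tls.key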
The syntax for creating a configuration map and for creating a secret closely match. You can enter
key-value pairs on the command line, or use the content of a file as the value of a specified key.
You can use either the oc or kubectl command-line tools to create a configuration map. The
following command shows how to create a configuration map:
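For example, a command of the following form, run while the example-app project is selected, produces a configuration map like the manifest that follows:
$ oc create configmap config-map-example --from-literal=database.name=sakila --from-literal=database.user=redhat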
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map-example
  namespace: example-app
data:
  database.name: sakila
  database.user: redhat
The project where the configuration map resides. ConfigMap objects can be referenced only
by pods in the same project.
You can then use the configuration map to populate environment variables for your application.
The following example shows a pod resource that populates specific environment variables by
using a configuration map.
apiVersion: v1
kind: Pod
metadata:
  name: config-map-example-pod
  namespace: example-app
spec:
  containers:
  - name: example-container
    image: registry.example.com/mysql-80:1-237
    command: [ "/bin/sh", "-c", "env" ]
    env:
    - name: MYSQL_DATABASE
      valueFrom:
        configMapKeyRef:
          name: config-map-example
          key: database.name
    - name: MYSQL_USER
      valueFrom:
        configMapKeyRef:
          name: config-map-example
          key: database.user
          optional: true
The name of a pod environment variable where you are populating a key's value.
Sets the environment variable as optional. The pod is started even if the specified
ConfigMap object and keys do not exist.
The following example shows a pod resource that injects all environment variables from a
configuration map:
apiVersion: v1
kind: Pod
metadata:
  name: config-map-example-pod2
  namespace: example-app
spec:
  containers:
  - name: example-container
    image: registry.example.com/mysql-80:1-237
    command: [ "/bin/sh", "-c", "env" ]
    envFrom:
    - configMapRef:
        name: config-map-example
  restartPolicy: Never
You can use secrets with other Kubernetes resources such as pods, deployments, builds, and more.
You can specify secret keys or volumes with a mount path to store your secrets. The following
snippet shows an example of a pod that populates environment variables with data from the
test-secret Kubernetes secret:
apiVersion: v1
kind: Pod
metadata:
  name: secret-example-pod
spec:
  containers:
  - name: secret-test-container
    image: busybox
    command: [ "/bin/sh", "-c", "export" ]
    env:
    - name: TEST_SECRET_USERNAME_ENV_VAR
      valueFrom:
        secretKeyRef:
          name: test-secret
          key: username
The key that is extracted from the secret is the username for authentication.
In contrast with configuration maps, the values in secrets are always encoded (not encrypted), and
their access is restricted to fewer authorized users.
The following command creates a generic secret that contains key-value pairs from literal values
that are typed on the command line: user with the demo-user value, and root_password with
the zT1kTgk value.
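A command of the following form achieves this; the demo-secret name matches the secret that later examples in this section reference:
$ oc create secret generic demo-secret --from-literal=user=demo-user --from-literal=root_password=zT1kTgk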
You can also create a generic secret by specifying key names on the command line and values
from files:
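For example, with illustrative file paths:
$ oc create secret generic demo-secret --from-file=user=/tmp/demo/user.txt --from-file=root_password=/tmp/demo/root_password.txt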
You can mount a secret to a directory within a pod. Kubernetes creates a file for each key in the
secret that uses the name of the key. The content of each file is the decoded value of the secret.
The following command shows how to mount secrets in a pod:
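A command of the following form mounts the demo-secret secret; the deployment name demo is illustrative:
$ oc set volume deployment/demo --add --type=secret --secret-name=demo-secret --mount-path=/app-secrets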
Make the secret data available in the /app-secrets directory in the pod. The content of the /app-secrets/user file is demo-user. The content of the /app-secrets/root_password file is zT1kTgk.
Similar to secrets, you must first create a configuration map before a pod can consume it. The
configuration map must exist in the same namespace, or project, as the pod. The following
command shows how to create a configuration map from an external configuration file:
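For example, with an illustrative configuration map name and file path:
$ oc create configmap demo-cm --from-file=/tmp/demo/config.properties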
You can similarly add a configuration map as a volume by using the following command:
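For example, reusing the illustrative demo deployment and demo-cm configuration map, and an illustrative mount path:
$ oc set volume deployment/demo --add --type=configmap --configmap-name=demo-cm --mount-path=/app-config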
To confirm that the volume is attached to the deployment, use the following command:
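For example, listing the volumes of the illustrative demo deployment:
$ oc set volume deployment/demo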
You can also use the oc set env command to set application environment variables from either
secrets or configuration maps. In some cases, you can modify the names of the keys to match
the names of environment variables by using the --prefix option. In the following example,
the user key from the demo-secret secret sets the MYSQL_USER environment variable, and
the root_password key from the demo-secret secret sets the MYSQL_ROOT_PASSWORD
environment variable. If the key name from the secret is lowercase, then the corresponding
environment variable is converted to uppercase to match the pattern that the --prefix option
defines.
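For example, with an illustrative deployment name:
$ oc set env deployment/demo --from=secret/demo-secret --prefix=MYSQL_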
After updating the locally saved files, use the oc set data command to update the secret
or configuration map. For each key that requires an update, specify the name of a key and the
associated value. If a file contains the value, then use the --from-file option.
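For example, with illustrative names, values, and paths:
$ oc set data secret/demo-secret root_password=N3wP@55
$ oc set data configmap/demo-cm --from-file=/tmp/demo/config.properties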
You do not need to rebuild container images when a secret or a configuration map changes. New pods use the updated secrets and configuration maps. To propagate a change to a running application, you can delete the pods that still use the outdated secrets and configuration maps so that their replacement pods pick up the updated values.
References
For more information, refer to the Using Config Maps with Applications chapter in the Red Hat OpenShift Container Platform 4.12 Building Applications documentation at
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/building_applications/index#config-maps
For more information, refer to Providing Sensitive Data to Pods in the Red Hat OpenShift Container Platform 4.12 Working with Pods documentation at
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/nodes/index#nodes-pods-secrets
For more information, refer to the Encrypting Etcd Data chapter in the Red Hat OpenShift Container Platform 4.12 Security and Compliance documentation at
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/security_and_compliance/index#encrypting-etcd
Guided Exercise
Outcomes
In this exercise, you deploy a web application and use a configuration map to mount files that are missing from the container image.
Instructions
1. Create a web application deployment named webconfig. Use the
registry.ocp4.example.com:8443/redhattraining/httpd-noimage:v1
container image.
1.1. Log in to the OpenShift cluster as the developer user with the developer
password.
2. Expose the web application to external access. Use the following information to create a
service and a route for the web application.
3.1. Return to the terminal. Then, create a configuration map named webfiles by using the redhatlogo.png file in the /home/student/DO180/labs/storage-configs directory.
4.1. Mount the webfiles configuration map as a volume. Ignore the warning message.
4.2. Verify the deployment status. Verify that a new pod was created.
The configuration map successfully added the missing image file to the web
application.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• Provide applications with persistent storage volumes for block and file-based data.
Because OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework,
cluster administrators can provision persistent storage for a cluster. Developers can use persistent
volume claims (PVCs) to request PV resources without specific knowledge of the underlying
storage infrastructure.
Two ways exist to provision storage for the cluster: static and dynamic. Static provisioning requires
the cluster administrator to create persistent volumes manually. Dynamic provisioning uses
storage classes to create the persistent volumes on demand.
Administrators can use storage classes to provide persistent storage. Storage classes describe
types of storage for the cluster. Cluster administrators create storage classes to manage storage
services or storage tiers of a service. Rather than specifying provisioned storage, PVCs instead
refer to a storage class.
Developers use PVCs to add persistent volumes to their applications. Developers need not know
details of the storage infrastructure. With static provisioning, developers use previously created
PVs, or ask a cluster administrator to manually create persistent volumes for their applications.
With dynamic provisioning, developers declare the storage requirements of the application, and
the cluster creates a PV to fill the request.
Persistent Volumes
Not all storage is equal. Storage types vary in cost, performance, and reliability. Multiple storage
types are usually available for each Kubernetes cluster.
The following list of commonly used storage volume types and their use cases is not exhaustive.
configMap
The configMap volume externalizes the application configuration data. This use of
the configMap resource ensures that the application configuration is portable across
environments and can be version-controlled.
emptyDir
An emptyDir volume provides a per-pod directory for scratch data. The directory is usually
empty after provisioning. emptyDir volumes are often required for ephemeral storage.
hostPath
A hostPath volume mounts a file or directory from the host node into your pod. To use a
hostPath volume, the cluster administrator must configure pods to run as privileged. This
configuration grants access to other pods in the same node.
Red Hat does not recommend the use of hostPath volumes in production. Instead, Red Hat
supports hostPath mounting for development and testing on a single-node cluster.
Although most pods do not need a hostPath volume, it does offer a quick option for testing
if an application requires it.
iSCSI
Internet Small Computer System Interface (iSCSI) is an IP-based standard that provides
block-level access to storage devices. With iSCSI volumes, Kubernetes workloads can
consume persistent storage from iSCSI targets.
local
You can use Local persistent volumes to access local storage devices, such as a disk or
partition, by using the standard PVC interface. Local volumes are subject to the availability of
the underlying node, and are not suitable for all applications.
NFS
An NFS (Network File System) volume can be accessed from multiple pods at the same
time, and thus provides shared data between pods. The NFS volume type is commonly used
because of its ability to share data safely. Red Hat recommends using NFS only for non-production systems.
Developers must select a volume type that supports the access level that the application requires. The following table shows some example supported access modes:
configMap Yes No No
emptyDir Yes No No
hostPath Yes No No
local Yes No No
Volume Modes
Kubernetes supports two volume modes for persistent volumes: Filesystem and Block. If the
volume mode is not defined for a volume, then Kubernetes assigns the default volume mode,
Filesystem, to the volume.
OpenShift Container Platform can provision raw block volumes. These volumes do not have a file
system, and can provide performance benefits for applications that either write to the disk directly
or that implement their own storage service. Raw block volumes are provisioned by specifying
volumeMode: Block in the PV and PVC specification.
The following table provides examples of storage options with block volume support:
iSCSI Yes No
local Yes No
Manually Creating a PV
Use a PersistentVolume manifest file to manually create a persistent volume. The following
example creates a persistent volume from a fiber channel storage device that uses block mode.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: block-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  persistentVolumeReclaimPolicy: Retain
  fc:
    targetWWNs: ["50060e801049cfd1"]
    lun: 0
    readOnly: false
Provide a name for the PV, which subsequent claims use to access the PV.
The storage device must support the access mode that the PV specifies.
The volumeMode attribute is optional for Filesystem volumes, but is required for Block
volumes.
The remaining attributes are specific to the storage type. In this example, the fc object
specifies the Fiber Channel storage type attributes.
If the previous manifest is in a file named my-fc-volume.yaml, then the following command can
create the PV resource on RHOCP:
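For example, assuming a user with permission to create persistent volumes:
$ oc create -f my-fc-volume.yaml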
The lifecycle of a PVC is not tied to a pod, but to a namespace. Multiple pods from the same
namespace but with potentially different workload controllers can connect to the same PVC. You
can also sequentially connect storage to and detach storage from different application pods, to
initialize, convert, migrate, or back up data.
Kubernetes matches each PVC to a persistent volume (PV) resource that can satisfy the
requirements of the claim. It is not an exact match. A PVC might be bound to a PV with a larger
disk size than is requested. A PVC that specifies single access might be bound to a PV that is
shareable for multiple concurrent accesses. Rather than enforcing policy, PVCs declare what an
application needs, which Kubernetes provides on a best-effort basis.
Creating a PVC
A PVC belongs to a specific project. To create a PVC, you must specify the access mode and
size, among other options. A PVC cannot be shared between projects. Developers use a PVC
to access a persistent volume (PV). Persistent volumes are not exclusive to projects, and are
accessible across the entire OpenShift cluster. When a PV binds to a PVC, the PV cannot be
bound to another PVC.
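A command of the following form creates the claim and attaches it to a deployment. The option names match the descriptions that follow; the deployment name example-application is illustrative, whereas the volume, claim, and mount path names reappear in the deployment excerpt later in this section:
$ oc set volume deployment/example-application \
  --add --name=example-pv-storage \
  --type=persistentVolumeClaim \
  --claim-mode=ReadWriteOnce --claim-size=15Gi \
  --mount-path=/var/lib/example-app \
  --claim-name=example-pv-claim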
Specify the name of the deployment that requires the PVC resource.
Setting the add option to true adds volumes and volume mounts for containers.
The name option specifies a volume name. If not specified, a name is autogenerated.
The supported types, for the add operation, include emptyDir, hostPath, secret,
configMap, and persistentVolumeClaim.
The claim-mode option defaults to ReadWriteOnce. The valid values are ReadWriteOnce
(RWO), ReadWriteMany (RWX), and ReadOnlyMany (ROX).
Create a claim with the given size in bytes, if specified along with the persistent volume type.
The size must use SI notation, for example, 15, 15 G, or 15 Gi.
The mount-path option specifies the mount path inside the container.
The claim-name option provides the name for the PVC, and is required for the
persistentVolumeClaim type.
The command creates a PVC resource and adds it to the application as a volume within the pod.
The command updates the deployment for the application with volumeMounts and volumes
specifications.
apiVersion: apps/v1
kind: Deployment
metadata:
...output omitted...
  namespace: storage-volumes
...output omitted...
spec:
...output omitted...
  template:
...output omitted...
    spec:
...output omitted...
        volumeMounts:
        - mountPath: /var/lib/example-app
          name: example-pv-storage
...output omitted...
      volumes:
      - name: example-pv-storage
        persistentVolumeClaim:
          claimName: example-pv-claim
...output omitted...
The volume name, which is used to specify the volume that is associated with the mount.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pv-claim
  labels:
    app: example-application
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 15Gi
Use this name in the claimName field of the persistentVolumeClaim element in the
volumes section of a deployment manifest.
Specify the access mode that this PVC requests. The storage class provisioner must provide
this access mode. If persistent volumes are created statically, then an eligible persistent
volume must provide this access mode.
The storage class creates a persistent volume that matches this size request. If persistent
volumes are created statically, then an eligible persistent volume must be at least the
requested size.
Use the oc create command to create the PVC from the manifest file.
Use oc get pvc to view the available PVCs in the current namespace.
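For example, assuming that the manifest is saved as example-pv-claim.yaml:
$ oc create -f example-pv-claim.yaml
$ oc get pvc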
When you create a PVC, you specify a storage amount, the required access mode, and a storage
class to describe and classify the storage. The control loop in the RHOCP control node watches
for new PVCs, and binds the new PVC to an appropriate PV. If an appropriate PV does not exist,
then a provisioner for the storage class creates one.
Claims remain unbound indefinitely if a matching volume does not exist or if a volume cannot
be created with any available provisioner that services a storage class. Claims are bound when
matching volumes become available. For example, a cluster with many manually provisioned 50 Gi
volumes would not match a PVC that requests 100 Gi. The PVC can be bound when a 100 Gi PV is
added to the cluster.
Use oc get storageclass to view the storage classes that the cluster provides.
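The output resembles the following listing; the values shown here are illustrative:
$ oc get storageclass
NAME                    PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ...
nfs-storage (default)   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           ...
...output omitted...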
In the example, the nfs-storage storage class is marked as the default storage class. When a default storage class is configured, a PVC that needs a different storage class must name it explicitly, or it can set the storageClassName attribute to "" to be bound to a PV without a storage class.
The following oc set volume command example uses the claim-class option to specify a
dynamically provisioned PV.
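A sketch of such a command, reusing the illustrative names from the earlier example and requesting the nfs-storage class explicitly:
$ oc set volume deployment/example-application \
  --add --name=example-pv-storage \
  --type=persistentVolumeClaim \
  --claim-class=nfs-storage \
  --claim-mode=ReadWriteOnce --claim-size=15Gi \
  --mount-path=/var/lib/example-app \
  --claim-name=example-pv-claim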
Note
Because a cluster administrator can change the default storage class, Red Hat
recommends that you always specify the storage class when you create a PVC.
Available
After a PV is created, it becomes available for any PVC to use in the cluster in any namespace.
Bound
A PV that is bound to a PVC is also bound to the same namespace as the PVC, and no other
PVC can use it.
In Use
You can delete a PVC if no pods actively use it. The Storage Object in Use Protection feature
ensures that PVCs that a pod actively uses and PVs that are bound to PVCs are not removed
from the system, which can result in data loss. Storage Object in Use Protection is enabled by
default.
If a user deletes a PVC that a pod actively uses, then the PVC is not removed immediately.
PVC removal is postponed until no pods actively use the PVC. Also, if a cluster administrator
deletes a PV that is bound to a PVC, then the PV is not removed immediately. PV removal is
postponed until the PV is no longer bound to a PVC.
Released
After the developer deletes the PVC that is bound to a PV, the PV is released, and the storage
that the PV used can be reclaimed.
Reclaimed
The reclaim policy of a persistent volume tells the cluster what to do with the volume after it is
released. A volume's reclaim policy can be Retain or Delete.
Policy Description
Retain Enables manual reclamation of the resource for those volume plug-ins that
support it.
References
For more information, refer to the Understanding Ephemeral Storage chapter in the Red Hat OpenShift Container Platform 4.12 Storage documentation at
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/storage/index#understanding-ephemeral-storage
For more information, refer to the Understanding Persistent Storage chapter in the Red Hat OpenShift Container Platform 4.12 Storage documentation at
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/storage/index#pvcprotection_understanding-persistent-storage
https://developers.redhat.com/articles/2022/10/06/kubernetes-improves-developer-agility#few_cloud_native_applications_are_stateless
https://developers.redhat.com/articles/2022/10/06/kubernetes-storage-concepts#3_concepts_of_kubernetes_storage_for_developers_
https://loft.sh/blog/kubernetes-persistent-volumes-examples-and-best-practices/
Guided Exercise
Outcomes
You should be able to do the following tasks:
This command ensures that all resources are available for this exercise.
Instructions
1. Log in to the OpenShift cluster as the developer user with the developer password.
Use the storage-volumes project.
2.1. Use the oc get storageclass command to identify the default storage class.
4. Use the oc get deployment command to view the pod template specification for the
deployment.
...output omitted...
        }
      ],
      "image": "registry.ocp4.example.com:8443/rhel8/mysql-80",
      "imagePullPolicy": "Always",
      "name": "mysql-80",
      "ports": [
        {
          "containerPort": 3306,
          "protocol": "TCP"
        }
      ],
      "resources": {},
      "terminationMessagePath": "/dev/termination-log",
      "terminationMessagePolicy": "File"
    }
  ]
...output omitted...
5. Add a 1 Gi, RWO PVC named db-pod-pvc to the deployment. Specify the volume name as
nfs-volume-storage, and set the /var/lib/mysql directory as the mount path.
5.1. Use the oc set volume command to create a PVC for the deployment. Ignore the
warning message.
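Based on the values in this step, and assuming that the deployment is named db-pod, the command resembles:
$ oc set volume deployment/db-pod \
  --add --name=nfs-volume-storage \
  --type=persistentVolumeClaim \
  --claim-name=db-pod-pvc \
  --claim-size=1Gi --claim-mode=ReadWriteOnce \
  --mount-path=/var/lib/mysql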
5.2. Use the oc get pvc command to view the status of the PVC. Identify the name of
the PV, and confirm that the PVC uses the nfs-storage default storage class.
6.1. Use the oc get deployment command to view the deployment again.
...output omitted...
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: nfs-volume-storage
...output omitted...
      volumes:
      - name: nfs-volume-storage
        persistentVolumeClaim:
          claimName: db-pod-pvc
...output omitted...
7.1. Observe the contents of the init-db.sql script that initializes the database.
7.2. Use the contents of the init-db.sql file to create a configMap API object named
init-db-cm.
7.6. Use the mysql client to execute the database script in the /tmp/init-db volume.
sh-4.4$ exit
7.8. Use the oc set volume command to remove the init-db-volume volume from the db-pod deployment. Ignore the warning message.
8.1. Create the query-db pod. Configure the pod to use the MySQL client to execute a
query against the db-pod service. Ignore the warning message.
9.2. Verify that the PVC still exists without the deployment.
10. Use the oc set volume command to attach the existing PVC to the deployment. Ignore
the warning message.
11. Create a query-db pod by using the oc run command and the
registry.ocp4.example.com:8443/redhattraining/do180-dbinit container
image. Use the pod to execute a query against the database service.
11.1. Create the query-db pod. Configure the pod to use the MySQL client to execute a
query against the db-pod service. Ignore the warning message.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• Match applications with storage classes that provide storage services to satisfy application
requirements.
Kubernetes supports multiple storage back ends. The storage options differ in cost, performance,
reliability, and function. An administrator can create different storage classes for these options.
As a result, developers can select the storage solution that fits the needs of the application.
Developers do not need to know the storage infrastructure details.
Recall that an administrator selects the default storage class for dynamic provisioning. A default
storage class enables Kubernetes to automatically provision a persistent volume claim (PVC) that
does not specify a storage class. Because an administrator can change the default storage class, a
developer should explicitly set the storage class for an application.
Reclaim Policy
Outside the application function, the developer must also consider the impact of the reclaim
policy on storage requirements. A reclaim policy determines what happens to the data on a PVC
after the PVC is deleted. When you are finished with a volume, you can delete the PVC object from
the API, which enables reclamation of the resource. Kubernetes releases the volume when the
PVC is deleted, but the volume is not yet available for another claim. The previous claimant's data
remains on the volume and must be handled according to the policy. To keep your data, choose a
storage class with a retain reclaim policy.
By using the retain reclaim policy, when you delete a PVC, only the PVC object is deleted from
the cluster. The Persistent Volume (PV) that backed the PVC, the physical storage device that the
PV used, and your data still exist. To reclaim the storage and use it in your cluster again, the cluster
administrator must take manual steps.
The associated asset in the external storage infrastructure, such as an AWS EBS, GCE PD,
Azure Disk, or Cinder volume, still exists after the PV is deleted.
2. At this point, the cluster administrator can create another PV by using the same storage and
data from the previous PV. A developer could then mount the new PV and access the data
from the previous PV.
3. Alternatively, the cluster administrator can remove the data on the storage asset, and then
delete the storage asset.
To automatically delete the PV, the data, and the physical storage for a deleted PVC, you must
choose a storage class that uses the delete reclaim policy. This reclaim policy automatically
reclaims your storage volume when the PVC is deleted. The delete reclaim policy is the default
setting for all storage provisioners that adhere to the Kubernetes Container Storage Interface
(CSI) standards. If you use a storage class that does not specify a reclaim policy, then the delete
reclaim policy is used.
For more information about the Kubernetes Container Storage Interface standards, refer to the
Kubernetes CSI Developer Documentation website at https://kubernetes-csi.github.io/docs/.
Because a PVC is a storage device that your Linux host mounts, an improperly configured
application could behave unexpectedly. For example, you could have an iSCSI LUN, which is
expressed as an RWO PVC that is not supposed to be shared, and then mount that same PVC on
two pods of the same host. Whether this situation is problematic depends on the applications.
Usually, it is fine for two processes on the same host to share a disk. After all, many applications
on your personal machine share a local disk. However, nothing prevents one text editor from
overwriting and losing all edits from another text editor. The use of Kubernetes storage must come
with the same caution.
Single-node access (RWO) and shared access (RWX) do not ensure that files can be shared safely
and reliably. RWO means that only one cluster node can read and write to the PVC. Alternatively,
with RWX, Kubernetes provides a storage volume that any pod can access for reading or writing.
Storage classes can use a combination of these factors and others to best fit the needs of the
developers.
Kubernetes matches PVCs with the best available PV that is not bound to another PVC. The PV
must provide the access mode that is specified in the PVC, and the volume must be at least as
large as the requested size in the PVC. The supported access modes depend on the capabilities of
the storage provider. A PVC can specify additional criteria, such as the name of a storage class. If
a PVC cannot find a PV that matches all criteria, then the PVC enters a pending state and waits
until an appropriate PV becomes available.
PVCs can request a specific storage class by specifying the storageClassName attribute. This
method of selecting a specific storage class ensures that the storage medium is a good fit for the
application requirements. Only PVs of the requested storage class can be bound to the PVC. The
cluster administrator can configure dynamic provisioners to service storage classes. The cluster
administrator can also create a PV on demand that matches the specifications in the PVC.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: io1-gold-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: 'false'
    description: 'Provides RWO and RWOP Filesystem & Block volumes'
...
parameters:
  type: io1
  iopsPerGB: "10"
...
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
(optional) The required parameters for the specific provisioner; this object differs between
plug-ins.
(required) The type of provisioner that is associated with this storage class.
(optional) The selected volume binding mode for the storage class.
Several attributes, such as the API version, API object type, and annotations, are common for
Kubernetes objects, whereas other attributes are specific to storage class objects.
Parameters
Parameters can configure file types, change storage types, enable encryption, enable
replication, and so on. Each provisioner has different parameter options. Accepted parameters
depend on the storage provisioner. For example, the io1 value for the type parameter, and
the iopsPerGB parameter, are specific to EBS. When a parameter is omitted, the storage
provisioner uses the default value.
Provisioners
The provisioner attribute identifies the source of the storage medium plug-in. Provisioners
with names that begin with a kubernetes.io value are available by default in a Kubernetes
cluster.
ReclaimPolicy
The default reclaim policy, Delete, automatically reclaims the storage volume when the PVC
is deleted. Reclaiming storage in this way can reduce the storage costs. The Retain reclaim
policy does not delete the storage volume, so that data is not lost if the wrong PVC is deleted.
This reclaim policy can result in higher storage costs if space is not manually reclaimed.
VolumeBindingMode
The volumeBindingMode attribute determines how volume attachments are handled for a
requesting PVC. Using the default Immediate volume binding mode creates a PV to match
the PVC when the PVC is created. This setting does not wait for the pod to use the PVC, and
thus can be inefficient. The Immediate binding mode can also cause problems for storage
back ends that are topology-constrained or are not globally accessible from all nodes in the
cluster. PVs are also bound without the knowledge of a pod's scheduling requirements, which
might result in unschedulable pods.
With the WaitForFirstConsumer mode, the volume is not created until a pod that uses the PVC is scheduled. With this mode, Kubernetes creates PVs that conform to the pod's scheduling constraints, such as resource requirements and selectors.
AllowVolumeExpansion
When set to a true value, the storage class specifies that the underlying storage volume
can be expanded if more storage is required. Users can resize the volume by editing the
corresponding PVC object. This feature can be used only to grow a volume, not to shrink it.
The cluster administrator can use the create command to create a storage class from a YAML
manifest file. The resulting storage class is non-namespaced, and thus is available to all projects in
the cluster.
A regular cluster user can view the attributes of a storage class by using the describe command.
The following example queries the attributes of the storage class with the name lvms-vg1.
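For example, with an illustrative manifest file name for the storage class:
$ oc create -f io1-gold-storage.yaml
$ oc describe storageclass lvms-vg1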
The describe command can help a developer to decide whether the storage class is a good fit
for an application. If none of the storage classes in the cluster are appropriate for the application,
then the developer can request the cluster administrator to create a PV with the required features.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-block-pvc
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  storageClassName: <storage-class-name>
  resources:
    requests:
      storage: 10Gi
Use the create command to create the resource from the YAML manifest file.
Use the --claim-name option with the set volume command to add the pre-existing PVC to a
deployment.
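For example, assuming that the manifest is saved as my-block-pvc.yaml and using illustrative deployment and mount path names:
$ oc create -f my-block-pvc.yaml
$ oc set volume deployment/example-application \
  --add --type=persistentVolumeClaim \
  --claim-name=my-block-pvc \
  --mount-path=/var/lib/example-app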
References
For more information, refer to the Understanding Persistent Storage section in the Red Hat OpenShift Container Platform 4.12 Storage documentation at
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/storage/index#understanding-persistent-storage
Kubernetes Storage
https://kubernetes.io/docs/concepts/storage/storage-classes/
For more information about the Kubernetes Container Storage Interface standards, refer to the Kubernetes CSI Developer Documentation website at
https://kubernetes-csi.github.io/docs/
Guided Exercise
Outcomes
You should be able to deploy applications with persistent storage and create volumes from a
storage class. The storage class must meet the application storage requirements.
This command ensures that all resources are available for this exercise.
Instructions
1. Log in to the OpenShift cluster as the developer user with the developer password.
Use the storage-classes project.
2. Examine the available storage classes on the cluster. Identify an appropriate storage class
to use for a database application.
2.1. Use the get command to retrieve a list of storage classes in the cluster. You can use
the storageclass short name, sc, in the command.
Because an administrator can change the default storage, applications must specify a
storage class that meets the application requirements.
2.2. Use the oc describe command to view the details of the lvms-vg1 storage class.
The description annotation states that the storage class provides support for block
volumes. For some applications, such as databases, block volumes can provide a
performance advantage over file system volumes. In the lvms-vg1 storage class,
the AllowVolumeExpansion field is set to True. With volume expansion, cluster
users can edit their PVC objects and specify a new size for the PVC. Kubernetes then
uses the storage back end to automatically expand the volume to the requested size.
Kubernetes also expands the file system of pods that use the PVC. Enabling volume
expansion can help to protect an application from failing due to the data growing
too fast. With these features, the lvms-vg1 storage class is a good choice for the
database application.
4. Add a 1 Gi, RWO PVC named db-pod-odf-pvc to the deployment. Specify the volume
name as odf-lvm-storage, and set the /var/lib/mysql directory as the mount path.
Use the lvms-vg1 storage class to create a block mode volume.
4.1. Use the oc set volume command to create a PVC for the deployment. Ignore the
warning message.
4.2. Use the oc get pvc command to view the status of the PVC. Identify the name of
the PV, and confirm that the PVC uses the lvms-vg1 non-default storage class.
4.3. Use the oc describe pvc command to inspect the details of the db-pod-odf-pvc PVC.
VolumeMode: Filesystem
Used By: db-pod-568888457d-qmxn9
Events:
...output omitted...
7.2. Verify that the PVC still exists without the deployment.
8. Create a PVC for an application that requires a shared storage volume from the nfs-storage storage class.
8.1. Create a PVC YAML manifest file named nfs-pvc.yaml with the following contents:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-storage
8.2. Create the PVC by using the oc create -f command and the YAML manifest file.
8.3. Use the oc describe pvc command to view the details of the PVC resource.
volume.beta.kubernetes.io/storage-provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
volume.kubernetes.io/storage-provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       <none>
Events:
...output omitted...
The Used By: <none> attribute shows that no pod is using the PVC. The Status:
Bound value and the Volume attribute assignment confirm that the storage class has
its VolumeBindingMode set to Immediate.
9.3. Expose the service to create a route for the web-pod application. Specify web-pod.apps.ocp4.example.com as the hostname.
10. Verify that the route that is assigned to the web-pod application is accessible.
10.2. Use the curl command to view the index page of the web-pod application.
11.1. Use the oc set volume command to add the PVC to the deployment. Specify
the volume name as nfs-volume, and set the mount path to the /var/www/html
directory. Ignore the warning message.
The volume mount-path is set to the /var/www/html directory. The server uses
this path to serve HTML content.
12.1. Create the app-pod deployment and specify port 9090 as the target port. Ignore the
warning message.
12.3. Expose the service to create a route for the app-pod application. Use app-pod.apps.ocp4.example.com for the hostname.
13.1. Use the oc set volume command to add the PVC to the deployment. Set the
volume name to nfs-volume and the mount path to the /var/tmp directory. Ignore
the warning message.
At this point, the web-pod and the app-pod applications are sharing a PVC.
Kubernetes does not have a mechanism to prevent data conflicts between the
two applications. In this case, the app-pod application is a writer and the web-
pod application is a reader, and thus they do not have a conflict. The application
implementation, not Kubernetes, prevents data corruption from the two applications
that use the same PVC. The RWO access mode does not protect data integrity. The
RWO access mode means that a single node can mount the volume as read/write,
and pods that share the volume must exist on the same node.
14. Use the app-pod application to add content to the shared volume.
14.2. In the form, enter your information and then click save. The application adds your
information to the list after the form.
14.3. Click push to create the /var/tmp/People.html file on the shared volume.
15. Open another tab on the browser and navigate to the http://web-pod.apps.ocp4.example.com/People.html page. The web-pod application displays the People.html file from the shared volume.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• Deploy applications that scale without sharing storage.
Application Clustering
Clustering applications, such as MySQL and Cassandra, typically require persistent storage to
maintain the integrity of the data and files that the application uses. When many applications
require persistent storage at the same time, multi-disk provisioning might not be possible due to
the limited amount of available resources.
Shared storage solves this problem by allocating the same resources from a single device to
multiple services.
Storage Services
File storage solutions provide the directory structure that is found in many environments. Using
file storage is ideal when applications generate or consume reasonable volumes of organized data.
Applications that use file-based implementations are prevalent, easy to manage, and provide an
affordable storage solution.
File-based solutions are a good fit for data backup and archiving, due to their reliability, as are also
file sharing and collaboration services. Most data centers provide file storage solutions, such as a
network-attached storage (NAS) cluster, for these scenarios.
Network-attached storage (NAS) is a file-based storage architecture that makes stored data
accessible to networked devices. NAS gives networks a single access point for storage with built-in
security, management, and fault-tolerant capabilities. Out of the multiple data transfer protocols
that networks can run, two protocols are fundamental to most networks: internet protocol (IP) and
transmission control protocol (TCP).
The files that are transferred across these transport protocols can be accessed by using one of the following file-sharing protocols:
• Network File Systems (NFS): This protocol enables remote hosts to mount file systems over a
network and to interact with those file systems as though they are mounted locally.
• Server Message Blocks (SMB): This protocol implements an application-layer network protocol
that is used to access resources on a server, such as file shares and shared printers.
NAS solutions can provide file-based storage to applications within the same data center. This
approach is common to many application architectures, including the following architectures:
These applications take advantage of data reliability and the ease of file sharing that is available by
using file storage. Additionally, for file storage data, the OS and file system handle the locking and
caching of the files.
Although familiar and prevalent, file storage solutions are not ideal for all application scenarios.
One particular pitfall of file storage is poor handling of large data sets or unstructured data.
Block storage solutions, such as Storage Area Network (SAN) and iSCSI technologies, provide
access to raw block devices for application storage. These block devices function as independent
storage volumes, such as the physical drives in servers, and typically require formatting and
mounting for application access.
Using block storage is ideal when applications require faster access for optimizing computationally
heavy data workloads. Applications that use block-level storage implementations gain efficiencies
by communicating at the raw device level, instead of relying on operating system layer access.
Block-level approaches enable data distribution on blocks across the storage volume. Blocks also
use basic metadata, including a unique identification number for each block of data, for quick
retrieval and reassembly of blocks for reading.
SAN and iSCSI technologies provide applications with block-level volumes from network-
based storage pools. Using block-level access to storage volumes is common for application
architectures, including the following architectures:
Application storage that uses several block devices in a RAID configuration benefits from the data
integrity and performance that the various arrays provide.
With Red Hat OpenShift Container Platform (RHOCP), you can create customized storage classes
for your applications. With the NAS and the SAN storage technologies, RHOCP applications can
use either the NFS protocol for file-based storage, or the block-level protocol for block storage.
A stateful set represents a set of pods with consistent identities. Each identity consists of a stable network identity, with a single stable DNS name and hostname, and of storage from as many volume claims as the stateful set specifies. A stateful set guarantees that a given network identity always maps to the same storage identity.
Deployments represent a set of containers within a pod. Each deployment can have many active
replicas, depending on the user specification. These replicas can be scaled up or down, as needed.
A replica set is a native Kubernetes API object that ensures that the specified number of pod
replicas are running. Deployments are used for stateless applications by default, and they can be used for stateful applications by attaching a persistent volume. All pods in a deployment share a
volume and PVC.
In contrast with deployments, stateful set pods do not share a persistent volume. Instead, stateful
set pods each have their own unique persistent volumes. Pods are created without a replica
set, and each replica records its own transactions. Each replica has its own identifier, which is
maintained in any rescheduling. You must configure application-level clustering so that stateful set
pods have the same data.
Stateful sets are the best option for applications, such as databases, that require consistent
identities and non-shared persistent storage.
The following snippet shows an example of a YAML manifest file for a stateful set:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dbserver
spec:
  selector:
    matchLabels:
      app: database
  replicas: 3
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
      - env:
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              key: user
              name: sakila-cred
        image: registry.ocp4.example.com:8443/redhattraining/mysql-app:v1
        name: database
        ports:
        - containerPort: 3306
          name: database
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: data
      terminationGracePeriodSeconds: 10
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "lvms-vg1"
      resources:
        requests:
          storage: 1Gi
The manifest highlights the following values:
• The application labels.
• The number of replicas.
• The image source.
• The container name.
• The container ports.
• The mount path of the persistent volume for each replica. Each persistent volume has the same configuration.
• The access mode of the persistent volume. You can choose between the ReadWriteOnce, ReadWriteMany, and ReadOnlyMany options.
Note
Stateful sets can be created only by using manifest files. The oc and kubectl CLI
do not have commands to create stateful sets imperatively.
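For example, if the preceding manifest is saved as dbserver-statefulset.yaml (a hypothetical file name), you can create the stateful set declaratively:

oc apply -f dbserver-statefulset.yaml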
Notice that three PVCs were created. Confirm that persistent volumes are attached to each
instance:
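One way to confirm this, as a sketch, is to list the claims and verify that each one is bound to its own persistent volume. Stateful set claims follow the <claim-template>-<statefulset>-<ordinal> naming pattern, such as data-dbserver-0:

oc get pvc
oc get pv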
Note
You must configure application-level clustering for stateful set pods to have the
same data.
You can update the number of replicas of the stateful set by using the scale command:
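A minimal sketch, assuming the dbserver stateful set from the preceding example and an illustrative replica count:

oc scale statefulset dbserver --replicas=5

Scaling up creates additional pods, and the volume claim template provides a new PVC for each new replica.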
Notice that the PVCs are not deleted after the execution of the oc delete statefulset
command:
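A sketch of the sequence, using the dbserver example:

oc delete statefulset dbserver
oc get pvc

The data-dbserver-* claims remain after the stateful set is deleted, so the data is preserved until you delete the PVCs explicitly.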
References
Kubernetes Documentation - StatefulSets
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
For more information, refer to the What Is Network-Attached Storage? section in the
Understanding Data Storage chapter at
https://www.redhat.com/en/topics/data-storage/network-attached-storage#how-does-it-work
Guided Exercise
Outcomes
In this exercise, you deploy a web server with a shared persistent volume between the
replicas, and a database server from a stateful set with dedicated persistent volumes for
each instance.
• Scale the web server deployment and observe the data that is shared with the replicas.
• Create a database server with a stateful set by using a YAML manifest file.
• Verify that each instance from the stateful set has a persistent volume claim.
This command ensures that all resources are available for this exercise.
Instructions
1. Create a web server deployment named web-server. Use the
registry.ocp4.example.com:8443/redhattraining/hello-world-
nginx:latest container image.
1.1. Log in to the OpenShift cluster as the developer user with the developer
password.
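A sketch of the commands for this step, using the classroom cluster URL and credentials that this course describes:

oc login -u developer -p developer https://api.ocp4.example.com:6443
oc create deployment web-server --image=registry.ocp4.example.com:8443/redhattraining/hello-world-nginx:latest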
2. Add the web-pv persistent volume to the web-server deployment. Use the default
storage class and the following information to create the persistent volume:
Field Value
Name web-pv
Type persistentVolumeClaim
2.1. Add the web-pv persistent volume to the web-server deployment. Ignore the
warning message.
Because a storage class was not specified with the --claim-class option, the
command uses the default storage class to create the persistent volume.
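A hedged sketch of such a command; the claim size and mount path shown here are illustrative values that the lab might define differently:

oc set volumes deployment/web-server --add --name=web-pv --type=persistentVolumeClaim --claim-name=web-pv --claim-size=1Gi --mount-path=/var/www/html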
2.2. Verify the deployment status. Notice that a new pod is created.
3.2. Use the exec command to add the pod name that you retrieved from the previous step to the /var/www/html/index.html file on the pod. Then, retrieve the contents of the /var/www/html/index.html file to confirm that the pod name is in the file.
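One possible form of these commands, as a sketch with a placeholder pod name:

oc exec pod/<web-server-pod> -- bash -c 'echo <web-server-pod> > /var/www/html/index.html'
oc exec pod/<web-server-pod> -- cat /var/www/html/index.html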
4. Scale the web-server deployment to two replicas and confirm that an additional pod is
created.
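For example:

oc scale deployment/web-server --replicas=2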
4.2. Verify the replica status and retrieve the pod names.
5.1. Verify that the /var/www/html/index.html file is the same in both pods.
Notice that both files show the name of the first instance, because they share the
persistent volume.
6. Create a database server with a stateful set by using the statefulset-db.yml file in the
/home/student/DO180/labs/storage-statefulsets directory. Update the file with
the following information:
Field Value
metadata.name dbserver
spec.selector.matchLabels.app database
spec.template.metadata.labels.app database
spec.template.spec.containers.name dbserver
spec.template.spec.containers.volumeMounts.name data
spec.template.spec.containers.volumeMounts.mountPath /var/lib/mysql
spec.volumeClaimTemplates.metadata.name data
spec.volumeClaimTemplates.spec.storageClassName lvms-vg1
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dbserver
spec:
  selector:
    matchLabels:
      app: database
  replicas: 2
  template:
    metadata:
      labels:
        app: database
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: dbserver
        image: registry.ocp4.example.com:8443/redhattraining/mysql-app:v1
        ports:
        - name: database
          containerPort: 3306
        env:
        - name: MYSQL_USER
          value: "redhat"
        - name: MYSQL_PASSWORD
          value: "redhat123"
        - name: MYSQL_DATABASE
          value: "sakila"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "lvms-vg1"
      resources:
        requests:
          storage: 1Gi
6.3. Wait a few moments and then verify the status of the stateful set and its instances.
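A sketch of commands that show this status, assuming the app=database label from the manifest:

oc get statefulset dbserver
oc get pods -l app=database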
6.4. Use the exec command to add data to each of the stateful set pods.
7. Confirm that each instance from the dbserver stateful set has a persistent volume claim.
Then, verify that each persistent volume claim contains unique data.
7.1. Confirm that the persistent volume claims have a Bound status.
7.2. Verify that each instance from the dbserver stateful set has its own
persistent volume claim by using the oc get pod pod-name -o json |
jq .spec.volumes[0].persistentVolumeClaim.claimName command.
7.3. Application-level clustering is not enabled for the dbserver stateful set. Verify that
each instance of the dbserver stateful set has unique data.
8. Delete a pod in the dbserver stateful set. Confirm that a new pod is created and that the
pod uses the PVC from the previous pod. Verify that the previously added table exists in
the sakila database.
8.1. Delete the dbserver-0 pod in the dbserver stateful set. Confirm that a new pod
is generated for the stateful set. Then, confirm that the data-dbserver-0 PVC still
exists.
8.2. Use the exec command to verify that the new dbserver-0 pod has the items table
in the sakila database.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Lab
Outcomes
• Deploy a database server.
• Add and remove a volume on the database server and the web application.
This command ensures that the cluster is accessible and that all exercise resources are
available. It also creates the storage-review project, and it creates files that this lab uses,
in the /home/student/DO180/labs/storage-review directory.
Instructions
The API URL of your OpenShift cluster is https://api.ocp4.example.com:6443, and the oc
command is already installed on your workstation machine.
Log in to the OpenShift cluster as the developer user with the developer password.
Field Value
User redhat
Password redhat123
Database world_x
Field Value
Name dbserver-lvm
Type persistentVolumeClaim
6. Create a service for the dbserver deployment by using the following information:
Field Value
Name mysql-service
Port 3306
container image. Scale the deployment to two replicas. Then, expose the deployment by
using the following information:
Field Value
Name file-sharing
Port 8080
Field Value
Name shared-volume
Type persistentVolumeClaim
Next, connect to a file-sharing deployment pod and then use the cp command to
copy the /home/database-files/insertdata.sql file to the /home/sharedfiles
directory. Then, remove the config-map-pvc volume from the file-sharing
deployment.
10. Add the shared-volume PVC to the dbserver deployment. Then, connect to a dbserver
deployment pod and verify the content of the /home/sharedfiles/insertdata.sql
file.
11. Connect to the database server and execute the /home/sharedfiles/insertdata.sql
file to add data to the world_x database. You can execute the file by using the following
command:
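A plausible form of this command, run inside the dbserver pod with the credentials listed earlier (the lab materials provide the exact invocation):

mysql -u redhat -predhat123 world_x < /home/sharedfiles/insertdata.sql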
Then, confirm connectivity between the web application and database server by accessing
the file-sharing route in a web browser.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Solution
Outcomes
• Deploy a database server.
• Add and remove a volume on the database server and the web application.
This command ensures that the cluster is accessible and that all exercise resources are
available. It also creates the storage-review project, and it creates files that this lab uses,
in the /home/student/DO180/labs/storage-review directory.
Instructions
The API URL of your OpenShift cluster is https://api.ocp4.example.com:6443, and the oc
command is already installed on your workstation machine.
Log in to the OpenShift cluster as the developer user with the developer password.
Field Value
User redhat
Password redhat123
Database world_x
3.1. Create a configuration map named dbfiles by using the insertdata.sql file in the
~/DO180/labs/storage-review directory.
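A sketch of one way to create the configuration map from that file:

oc create configmap dbfiles --from-file=/home/student/DO180/labs/storage-review/insertdata.sql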
4.1. Create the database server deployment. Ignore the warning message.
4.3. Verify that the dbserver pod is in the RUNNING state. The pod name might differ in
your output.
Field Value
Name dbserver-lvm
Type persistentVolumeClaim
5.1. Add a volume to the dbserver deployment. Ignore the warning message.
6. Create a service for the dbserver deployment by using the following information:
Field Value
Name mysql-service
Port 3306
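One possible command for this step, as a sketch:

oc expose deployment dbserver --name=mysql-service --port=3306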
6.2. Verify the service configuration. The endpoint IP address might differ in your output.
Field Value
Name file-sharing
Port 8080
7.2. Verify the deployment status. Verify that the file-sharing application pod is in the
RUNNING state. The pod names might differ on your system.
7.4. Verify the replica status and retrieve the pod name. The pod names might differ on
your system.
7.6. Verify the service configuration. The endpoint IP address might differ in your output.
7.8. Test the connectivity between the web application and the database
server. In a web browser, navigate to http://file-sharing-storage-
review.apps.ocp4.example.com, and verify that a Connected successfully
message is displayed.
8.1. Mount the dbfiles configuration map to the file-sharing deployment. Ignore the
warning message.
9. Add a shared volume to the file-sharing deployment. Use the following information to
create the volume:
Field Value
Name shared-volume
Type persistentVolumeClaim
Next, connect to a file-sharing deployment pod and then use the cp command to
copy the /home/database-files/insertdata.sql file to the /home/sharedfiles
directory. Then, remove the config-map-pvc volume from the file-sharing
deployment.
9.1. Add the shared-volume volume to the file-sharing deployment. Ignore the
warning message.
9.2. Verify the deployment status. Your pod names might differ on your system.
9.5. Remove the config-map-pvc volume from the file-sharing deployment. Ignore
the warning message.
10. Add the shared-volume PVC to the dbserver deployment. Then, connect to a dbserver
deployment pod and verify the content of the /home/sharedfiles/insertdata.sql
file.
10.1. Add the shared-volume volume to the dbserver deployment. Ignore the warning
message.
10.2. Verify the deployment status. The pod names might differ on your system.
Then, confirm connectivity between the web application and database server by accessing
the file-sharing route in a web browser.
11.2. Test the connectivity between the web application and the database
server. In a web browser, navigate to http://file-sharing-storage-
review.apps.ocp4.example.com, and verify that the application retrieves data
from the world_x database.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Summary
• Configuration maps are objects that provide mechanisms to inject configuration data into
containers.
• A persistent volume claim (PVC) resource represents a request from an application for storage,
and specifies the minimal storage characteristics, such as the capacity and access mode.
• Kubernetes supports two volume modes for persistent volumes: Filesystem and Block.
• Storage classes are a way to describe types of storage for the cluster and to provision dynamic
storage on demand.
• A reclaim policy determines what happens to the data on a PVC after the PVC is deleted.
• A storage class with block volume mode support can improve performance for applications that
can use raw block devices.
Chapter 6
Configure Applications for Reliability
Objectives
• Describe how Kubernetes tries to keep applications running after failures.
Additionally, HA practices can protect the cluster from misbehaving applications, such as an application with a memory leak.
Applications must work with the cluster so that Kubernetes can best handle failure scenarios.
Kubernetes expects the following behaviors from applications:
• Tolerates restarts
• Responds to health probes, such as the startup, readiness, and liveness probes
Although the cluster can run applications that lack the preceding behaviors, applications with
these behaviors better use the reliability and HA features that Kubernetes provides.
Most HTTP-based applications provide an endpoint to verify application health. The cluster can
be configured to observe this endpoint and mitigate potential issues for the application.
The application is responsible for providing such an endpoint. Developers must decide how the
application determines its state.
For example, if an application depends on a database connection, then the application might
respond with a healthy status only when the database is reachable. However, not all applications
that make database connections need such a check. This decision is at the discretion of the
developers.
• Restarting pods: By configuring a restart policy on a pod, the cluster restarts misbehaving
instances of an application.
• Probing: By using health probes, the cluster knows when applications cannot respond to
requests, and can automatically act to mitigate the issue.
• Horizontal scaling: When the application load changes, the cluster can scale the number of
replicas to match the load.
Guided Exercise
Outcomes
• Explore how the restartPolicy attribute affects crashing pods.
• Use a deployment to scale the application, and observe the behavior of a broken pod.
The long-load container image contains an application with utility endpoints. These
endpoints perform such tasks as crashing the process and toggling the server's health
status.
Instructions
1. As the developer user, create a pod from a YAML manifest in the reliability-ha
project.
1.3. Navigate to the lab materials directory and view the contents of the pod definition. In
particular, restartPolicy is set to Always.
1.5. Send a request to the pod to confirm that it is running and responding.
2. Trigger the application to crash, and observe that the restartPolicy instructs the cluster to restart the container in the pod.
2.1. Observe that the pod is running and has not restarted.
2.2. Send a request to the /destruct endpoint in the application. This request triggers
the process to crash.
2.3. Observe that the pod is running and restarted one time.
The pod is not re-created, because it was created manually, and not via a workload
resource such as a deployment.
3. Use a restart policy of Never to create the pod, and observe that it is not re-created on
crashing.
3.1. Modify the long-load.yaml file so that the restartPolicy is set to Never.
...output omitted...
restartPolicy: Never
3.3. Send a request to the pod to confirm that the pod is running and that the application
is responding.
3.4. Send a request to the /destruct endpoint in the application to crash it.
3.5. Observe that the pod is not restarted and is in an error state.
4. Because the cluster does not know when the application inside the pod is ready to receive
requests, you must add a startup delay to the application. Adding this capability by using
probes is covered in a later exercise.
4.1. Update the long-load.yaml file by adding a startup delay and use a restart policy
of Always. Set the START_DELAY variable to 60,000 milliseconds (one minute) so
that the file looks like the following excerpt:
...output omitted...
spec:
  containers:
  - image: registry.ocp4.example.com:8443/redhattraining/long-load:v1
    imagePullPolicy: Always
    securityContext:
      allowPrivilegeEscalation: false
    name: long-load
    env:
    - name: START_DELAY
      value: "60000"
  restartPolicy: Always
Note
Although numbers are a valid YAML type, environment variables must be passed as
strings. YAML syntax is also indentation-sensitive.
For these reasons, ensure that your file appears exactly as the preceding example.
4.2. Apply the YAML file to create the pod and proceed within one minute to the next
step.
4.3. Within a minute of pod creation, verify the status of the pod. The status shows as
ready even though it is not. Try to send a request to the application, and observe that
it fails.
4.4. After waiting a minute for the application to start, send another request to the pod to confirm that it is running and responding.
5. Use a deployment to scale up the number of deployed pods. Observe that deleting the
pods causes service outages, even though the deployment handles re-creating the pods.
5.1. Review the long-load-deploy.yaml file, which defines a deployment, service, and
route. The deployment creates three replicas of the application pod.
5.2. Start the load test script, which sends a request to the /health API endpoint of the
application every two seconds. Leave the script running in a visible terminal window.
5.4. Watch the output of the load test script as the pods and the application instances
start. After a delay, the requests succeed.
...output omitted...
Ok
Ok
Ok
...output omitted...
5.5. By using the /togglesick API endpoint of the application, put one of the three
pods into a broken state.
5.6. Watch the output of the load test script as some requests start failing. Because of the
load balancer, the exact order of the output is random.
...output omitted...
Ok
app is unhealthy
app is unhealthy
Ok
Ok
...output omitted...
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• Describe how Kubernetes uses health probes during deployment, scaling, and failover of
applications.
Kubernetes Probes
Health probes are an important part of maintaining a robust cluster. Probes enable the cluster to
determine the status of an application by repeatedly probing it for a response.
Because the cluster calls them frequently, health probe endpoints should be quick to perform.
Endpoints should not perform complicated database queries or many network calls.
Probe Types
Kubernetes provides the following types of probes: startup, readiness, and liveness. Depending on
the application, you might configure one or more of these types.
Readiness Probes
A readiness probe determines whether the application is ready to serve requests. If the readiness
probe fails, then Kubernetes prevents client traffic from reaching the application by removing the
pod's IP address from the service resource.
Readiness probes help to detect temporary issues that might affect your applications. For
example, the application might be temporarily unavailable when it starts, because it must establish
initial network connections, load files in a cache, or perform initial tasks that take time to complete.
The application might occasionally need to run long batch jobs, which make it temporarily
unavailable to clients.
Kubernetes continues to run the probe even after the application fails. If the probe succeeds
again, then Kubernetes adds back the pod's IP address to the service resource, and requests are
sent to the pod again.
In such cases, the readiness probe addresses a temporary issue and improves application
availability.
Liveness Probes
Like a readiness probe, a liveness probe is called throughout the lifetime of the application.
Liveness probes determine whether the application container is in a healthy state. If an application
fails its liveness probe enough times, then the cluster restarts the pod according to its restart
policy.
Unlike a startup probe, a liveness probe continues to be called after the application's initial start process completes. When the liveness probe fails repeatedly, the cluster mitigates the issue, usually by restarting or re-creating the pod.
Startup Probes
A startup probe determines when an application's startup is completed. Unlike a liveness probe, a
startup probe is not called after the probe succeeds. If the startup probe does not succeed after a
configurable timeout, then the pod is restarted based on its restartPolicy value.
Consider adding a startup probe to applications with a long start time. By using a startup probe,
the liveness probe can remain short and responsive.
Types of Tests
When defining a probe, you must specify one of the following types of test to perform:
HTTP GET
Each time that the probe runs, the cluster sends a request to the specified HTTP endpoint.
The test is considered a success if the request responds with an HTTP response code
between 200 and 399. Other responses cause the test to fail.
Container command
Each time that the probe runs, the cluster runs the specified command in the container. If the
command exits with a status code of 0, then the test succeeds. Other status codes cause the
test to fail.
TCP socket
Each time that the probe runs, the cluster attempts to open a socket to the container. The
test succeeds only if the connection is established.
For example, a probe with a failure threshold of 3 and period seconds of 5 can fail up to three
times before the overall probe fails. Using this probe configuration means that the issue can exist
for 15 seconds before it is mitigated. However, running probes too often can waste resources.
Consider these values when setting probes.
apiVersion: apps/v1
kind: Deployment
...output omitted...
spec:
  ...output omitted...
  template:
    spec:
      containers:
      - name: web-server
        ...output omitted...
        livenessProbe:
          failureThreshold: 6
          periodSeconds: 10
          httpGet:
            path: /health
            port: 3000
Specifies how many times the probe must fail before mitigating.
Sets the probe as an HTTP request and defines the request port and path.
Note
The set probe command is exclusive to RHOCP and oc.
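A sketch of the command form, using the values from the preceding manifest and assuming a deployment named web-server:

oc set probe deployment/web-server --liveness --get-url=http://:3000/health --period-seconds=10 --failure-threshold=6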
References
Configure Liveness, Readiness and Startup Probes
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes
For more information about health probes, refer to the Monitoring Application
Health by Using Health Checks chapter in the Red Hat OpenShift Container
Platform 4.12 Building Applications documentation at
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/building_applications/index#application-health
Guided Exercise
Outcomes
• Observe potential issues with an application that is not configured with health probes.
The registry.ocp4.example.com:8443/redhattraining/long-load:v1
container image contains an application with utility endpoints. These endpoints perform
such tasks as crashing the process and toggling the server's health status.
Instructions
1. As the developer user, deploy the long-load application in the reliability-probes
project.
1.4. Verify that the pods take several minutes to start by sending a request to a pod in the
deployment.
1.5. Observe that the pods are listed as ready even though the application is not ready.
2. Add a startup probe to the pods so that the cluster knows when the pods are ready.
2.1. Modify the long-load-deploy.yaml YAML file by defining a startup probe. The
probe runs every three seconds and triggers a pod as failed after 30 failed attempts.
The file should match the following excerpt:
...output omitted...
spec:
  ...output omitted...
  template:
    ...output omitted...
    spec:
      containers:
      - image: registry.ocp4.example.com:8443/redhattraining/long-load:v1
        imagePullPolicy: Always
        name: long-load
        startupProbe:
          failureThreshold: 30
          periodSeconds: 3
          httpGet:
            path: /health
            port: 3000
        env:
        ...output omitted...
2.3. Apply the updated long-load-deploy.yaml file. Because the YAML file specifies
the number of replicas, the deployment is scaled up. Move to the next step within one
minute.
2.4. Observe that the pods do not show as ready until the application is ready and the
startup probe succeeds.
3. Add a liveness probe so that broken instances of the application are restarted.
3.1. Start the load test script. The test begins to print Ok as the pods become available.
3.2. In a new terminal window, use the /togglesick endpoint to make one of the pods
unhealthy.
The load test window begins to show app is unhealthy. Because only one pod is
unhealthy, the remaining pods still respond with Ok.
3.3. Update the long-load-deploy.yaml file to add a liveness probe. The probe runs
every three seconds and triggers the pod as failed after three failed attempts. Modify
the spec.template.spec.containers object in the file to match the following
excerpt.
spec:
  ...output omitted...
  template:
    ...output omitted...
    spec:
      containers:
      - image: registry.ocp4.example.com:8443/redhattraining/long-load:v1
        ...output omitted...
        startupProbe:
          failureThreshold: 30
          periodSeconds: 3
          httpGet:
            path: /health
            port: 3000
        livenessProbe:
          failureThreshold: 3
          periodSeconds: 3
          httpGet:
            path: /health
            port: 3000
        env:
        ...output omitted...
The load test script shows that the application is not available.
3.5. Apply the updated long-load-deploy.yaml file to update the deployment, which
triggers the deployment to re-create its pods.
3.6. Wait for the load test window to show Ok for all responses, and then toggle one of the
pods to be unhealthy.
The load test window might show app is unhealthy a number of times before the
pod is restarted.
3.7. Observe that the unhealthy pod is restarted after the liveness probe fails. After the
pod is restarted, the load test window shows only Ok.
4. Add a readiness probe so that traffic goes only to pods that are ready and healthy.
4.2. Use the oc set probe command to add the readiness probe.
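A sketch of such a command, assuming a deployment named long-load and reusing the /health endpoint from the earlier probes; the thresholds shown are illustrative:

oc set probe deployment/long-load --readiness --get-url=http://:3000/health --period-seconds=3 --failure-threshold=3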
The command does not immediately finish, but continues to show updates to the
pods' status. Leave this command running in a visible window.
4.5. Wait for the pods to show as ready. Then, in a new terminal window, make one of the
pods unhealthy for five seconds by using the /hiccup endpoint.
The pod status window shows that one of the pods is no longer ready. After five
seconds, the pod is healthy again and shows as ready.
The load test window might show app is unhealthy one time before the pod is marked as not ready. After the cluster determines that the pod is no longer ready, it stops sending traffic to the pod until either the pod recovers or the liveness probe fails. Because the pod is sick for only five seconds, that period is long enough for the readiness probe to fail, but not for the liveness probe.
Note
Optionally, repeat this step and observe as the temporarily sick pod's status
changes.
4.6. Stop the load test and status commands by pressing Ctrl+c in their respective
windows. Return to the /home/student/ directory.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• Configure an application with resource requests so Kubernetes can make scheduling decisions.
Filtering nodes
A pod can define a node selector that matches the labels on the cluster nodes. Only nodes whose labels match the selector are eligible.
Additionally, the scheduler filters the list of running nodes by evaluating each node against a set of predicates. A pod can define resource requests for compute resources such as CPU, memory, and storage. Only nodes with enough available compute resources are eligible.
The filtering step reduces the list of eligible nodes. In some cases, the pod could run on any
of the nodes. In other cases, all of the nodes are filtered out, so the pod cannot be scheduled
until a node with the appropriate prerequisites becomes available.
If all nodes are filtered out, then a FailedScheduling event is generated for the pod.
The scheduler is flexible and can be customized for advanced scheduling situations. Customizing
the scheduler is outside the scope of this course.
Resource requests specify the minimum required compute resources necessary to schedule a pod.
The scheduler tries to find a node with enough compute resources to satisfy the pod requests.
In Kubernetes, memory resources are measured in bytes, and CPU resources are measured in CPU units. CPU units are allocated in millicores: a millicore is one thousandth of a CPU core, either virtual or physical. A request value of 1000m allocates an entire CPU core to a pod. You can also use fractional values to allocate CPU resources. For example, a CPU resource request of 0.1 represents 100 millicores (100m), and a request of 1.0 represents an entire CPU, or 1000 millicores (1000m).
You can define resource requests for each container in either a deployment or a deployment
configuration resource. If resources are not defined, then the container specification shows a
resources: {} line.
In your deployment, modify the resources: {} line to specify the chosen requests. The following example defines a resource request of 100 millicores (100m) of CPU and one gibibyte (1Gi) of memory.
...output omitted...
spec:
  containers:
  - image: quay.io/redhattraining/hello-world-nginx:v1.0
    name: hello-world-nginx
    resources:
      requests:
        cpu: "100m"
        memory: "1Gi"
If you use the edit command to modify a deployment or a deployment configuration, then ensure
that you use the correct indentation. Indentation mistakes can result in the editor refusing to save
changes. Alternatively, use the set resources command that the kubectl and oc commands
provide, to specify resource requests. The following command sets the same requests as the
preceding example:
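A command of this form, shown as a sketch that assumes the deployment is named hello-world-nginx:

oc set resources deployment hello-world-nginx --requests=cpu=100m,memory=1Gi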
The set resources command works with any resource that includes a pod template, such as deployment and job resources.
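The output that follows comes from inspecting a node; a sketch of the command, with a placeholder node name:

oc describe node <node-name>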
ephemeral-storage: 114396791822
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 19389692Ki
pods: 250
...output omitted...
Non-terminated Pods: (88 in total)
... Name CPU Requests CPU Limits Memory Requests Memory Limits ...
... ---- ------------ ---------- --------------- -------------
... controller-... 10m (0%) 0 (0%) 20Mi (0%) 0 (0%) ...
... metallb-... 50m (0%) 0 (0%) 20Mi (0%) 0 (0%) ...
... metallb-... 0 (0%) 0 (0%) 0 (0%) 0 (0%) ...
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 3183m (42%) 1202m (16%)
memory 12717Mi (67%) 1350Mi (7%)
...output omitted...
RHOCP cluster administrators can also use the oc adm top pods command. This command
shows the compute resource usage for each pod in a project. You must include the --namespace
or -n options to specify a project. Otherwise, the command returns the resource usage for pods in
the currently selected project.
The following command displays the resource usage for pods in the openshift-dns project:
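A command of this form (a sketch):

oc adm top pods -n openshift-dns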
Additionally, cluster administrators can use the oc adm top node command to view the
resource usage of a cluster node. Include the node name to view the resource usage of a particular
node.
References
For more information about pod scheduling, refer to the Controlling Pod Placement
onto Nodes (Scheduling) chapter in the Red Hat OpenShift Container Platform 4.12
Nodes documentation at
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/nodes/index#controlling-pod-placement-onto-nodes-scheduling
Guided Exercise
Outcomes
• Observe that memory resource requests allocate cluster node memory.
• Explore how adjusting resource requests impacts the number of replicas that can be
scheduled on a node.
The registry.ocp4.example.com:8443/redhattraining/long-load:v1
container image contains an application with utility endpoints. These endpoints perform
such tasks as crashing the process and toggling the server's health status.
Instructions
1. As the admin user, deploy the long-load application by applying the long-load-
deploy.yaml file in the reliability-requests project.
Note
In general, use accounts with the least required privileges to perform a task.
In the classroom environment, this account is the developer user. However,
cluster administrator privileges are required to view the cluster node metrics in this
exercise.
1.4. View the total memory request allocation for the node.
Important
Projects and objects from previous exercises can cause the memory usage from
this exercise to mismatch the intended results. Delete any unrelated projects before
continuing.
If you still experience issues, re-create your classroom environment and try this
exercise again.
2. Add a resource request to the pod definition and scale the deployment beyond the cluster's
capacity.
spec:
  ...output omitted...
  template:
    ...output omitted...
    spec:
      containers:
      - image: registry.ocp4.example.com:8443/redhattraining/long-load:v1
        resources:
          requests:
            memory: 1G
...output omitted...
2.2. Apply the YAML file to modify the deployment with the resource request.
2.4. Observe that the cluster cannot schedule all of the pods on the single node. The pods
with a Pending status cannot be scheduled.
2.5. Retrieve the cluster event log, and observe that insufficient memory is the cause of
the failed scheduling.
2.6. Alternatively, view the events for a pending pod to see the reason. In the following
command, replace the pod name with one of the pending pods in your classroom.
3. Reduce the requested memory per pod so that the replicas can run on the node.
3.2. Delete the pods so that they are re-created with the new resource request.
3.3. Observe that all of the pods can start with the lowered memory request. Within a
minute, the pods are marked as Ready and in a Running state, with no pods in a
Pending status.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• Configure an application with resource limits so Kubernetes can protect other applications from
it.
Memory and CPU requests that you define for containers help Red Hat OpenShift Container
Platform (RHOCP) to select a compute node to run your pods. However, these resource requests
do not restrict the memory and CPU that the containers can use. For example, setting a memory
request at 1 GiB does not prevent the container from consuming more memory.
Red Hat recommends that you set the memory and CPU requests to the peak usage of your
application. In contrast, by setting lower values, you overcommit the node resources. If all the
applications that are running on the node start to use resources above the values that they
request, then the compute nodes might run out of memory and CPU.
In addition to requests, you can set memory and CPU limits to prevent your applications from
consuming too many resources.
As soon as the container reaches its memory limit, the compute node selects and then kills a process in the container. When that event occurs, RHOCP detects that the application is no longer working, because the main container process is missing or because the health probes report an error. RHOCP then restarts the container according to the pod restartPolicy attribute, which defaults to Always.
RHOCP relies on Linux kernel features, such as control groups (cgroups) and the out-of-memory (OOM) killer, to implement resource limits and to kill processes in containers that reach their memory limits.
You must set a memory limit when the application has a memory usage pattern that you
must mitigate, such as when the application has a memory leak. A memory leak is a bug in the
application, which occurs when the application uses some memory but does not free it after use. If
the leak appears in an infinite service loop, then the application uses more and more memory over
time, and can end up consuming all the available memory on the system. For these applications,
setting a memory limit prevents them from consuming all the node's memory. The memory limit
also enables OpenShift to regularly restart applications to free up their memory when they reach
the limit.
To set a memory limit for the container in a pod, use the oc set resources command:
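A sketch of the command form, consistent with the memory limit in the YAML example that follows:

oc set resources deployment hello --limits=memory=1Gi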
In addition to the oc set resources command, you can define resource limits from a file in the
YAML format:
apiVersion: apps/v1
kind: Deployment
...output omitted...
spec:
  containers:
  - image: registry.access.redhat.com/ubi9/nginx-120:1-86
    name: hello
    resources:
      requests:
        cpu: 100m
        memory: 500Mi
      limits:
        cpu: 200m
        memory: 1Gi
When RHOCP restarts a pod because of an OOM event, it updates the pod's lastState
attribute, and sets the reason to OOMKilled:
In contrast, if you do not set a CPU limit, then the container can consume as much CPU as is
available on the node. If the node's CPU is under pressure, for example because several containers
are running CPU-intensive tasks, then the Linux kernel shares the CPU resource between all these
containers, according to the CPU requests value for the containers.
You must set a CPU limit when you require a consistent application behavior across clusters
and nodes. For example, if the application runs on a node where the CPU is available, then the
application can execute at full speed. On the other hand, if the application runs on a node with
CPU pressure, then the application executes at a slower pace.
The same behavior can occur between your development and production clusters. Because the
two environments might have different node configurations, the application might run differently
when you move it from development to production.
Note
Clusters can have differences in hardware configuration beyond what limits observe.
For example, two clusters' nodes might have CPUs with equal core count and
unequal clock speeds.
Requests and limits do not account for these hardware differences. If your clusters
differ in such a way, take care that requests and limits are appropriate for both
configurations.
By setting a CPU limit, you mitigate the differences between the configuration of the nodes, and
you experience a more consistent behavior.
To set a CPU limit for the container in a pod, use the oc set resources command:
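A sketch of the command form, consistent with the CPU limit in the YAML example that follows:

oc set resources deployment hello --limits=cpu=200m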
You can also define CPU limits from a file in the YAML format:
apiVersion: apps/v1
kind: Deployment
...output omitted...
spec:
  containers:
  - image: registry.access.redhat.com/ubi9/nginx-120:1-86
    name: hello
    resources:
      requests:
        cpu: 100m
        memory: 500Mi
      limits:
        cpu: 200m
        memory: 1Gi
The oc describe node command displays requests and limits. The oc adm top command
shows resource usage. The oc adm top nodes command shows the resource usage for nodes in
the cluster. You must run this command as the cluster administrator.
The oc adm top pods command shows the resource usage for each pod in a project.
The following command displays the resource usage for the pods in the current project:
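A sketch of the command:

oc adm top pods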
References
cgroups(7) man page
For more information about resource limits, refer to the Configuring Cluster
Memory to Meet Container Memory and Risk Requirements section in the Working
with Clusters chapter in the Red Hat OpenShift Container Platform 4.12 Nodes
documentation at
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/nodes/index#nodes-cluster-resource-configure
Guided Exercise
Outcomes
You should be able to monitor the memory usage of an application, and set a memory limit
for a pod.
This command ensures that all resources are available for this exercise. It also creates the
reliability-limits project and the /home/student/DO180/labs/reliability-
limits/resources.txt file. The resources.txt file contains some commands that
you use during the exercise. You can use the file to copy and paste these commands.
Instructions
1. Log in to the OpenShift cluster as the developer user with the developer password.
Use the reliability-limits project.
...output omitted...
    resources:
      requests:
        memory: 20Mi
      limits:
        memory: 35Mi
2.2. Use the oc apply command to create the application. Ignore the warning message.
2.3. Wait for the pod to start. You might have to rerun the command several times for
the pod to report a Running status. The name of the pod on your system probably
differs.
3.1. Use the watch command to monitor the oc get pods command. Wait for
OpenShift to restart the pod, and then press Ctrl+C to quit the watch command.
3.2. Retrieve the container status to verify that OpenShift restarted the pod due to an
Out-Of-Memory (OOM) event.
4. Observe the pod status for a few minutes, until the CrashLoopBackOff status is
displayed. During this period, OpenShift restarts the pod several times because of the
memory leak.
Between each restart, OpenShift sets the pod status to CrashLoopBackOff, waits an
increasing amount of time between retries, and then restarts the pod. The delay between
restarts gives the operator the opportunity to fix the issue.
After various retries, OpenShift finally sets the CrashLoopBackOff wait timer to five
minutes. During this wait time, the application is not available to your customers.
5. Fixing the memory leak would resolve the issue. However, it might take some time for the
developers to fix the bug. In the meantime, set the memory limit to 600 MiB. With this
setting, the pod can run for ten minutes before the application reaches the limit.
5.1. Use the oc set resources command to set the new limit. Ignore the warning
message.
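A sketch of the command form; the deployment name is a placeholder because the exercise does not show it here:

oc set resources deployment/<deployment-name> --limits=memory=600Mi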
5.2. Wait for the pod to start. You might have to rerun the command several times for
the pod to report a Running status. The name of the pod on your system probably
differs.
5.3. Wait two minutes to verify that OpenShift no longer restarts the pod every 30
seconds.
6. Review the memory that the pod consumes. You might have to rerun the command several
times for the metrics to be available. The memory usage on your system probably differs.
7. Optional. Wait seven more minutes. After this period, OpenShift restarts the pod, because
it reached the 600 MiB memory limit.
7.1. Open a new terminal window, and then run the watch command to monitor the oc
adm top pods command.
Note
You might see a message that metrics are not yet available. If so, wait some time
and try again.
7.2. In the first terminal, run the watch command to monitor the oc get pods
command. Watch the output of the oc adm top pods command in the second
terminal. When the memory usage reaches 600 MiB, the OOM subsystem kills the
process inside the container, and OpenShift restarts the pod.
7.3. Press Ctrl+C to quit the watch command in the second terminal. Close this second
terminal when done.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Application Autoscaling
Objectives
• Configure a horizontal pod autoscaler for an application.
Kubernetes can autoscale a deployment based on current load on the application pods, by means
of a HorizontalPodAutoscaler (HPA) resource type.
A horizontal pod autoscaler resource uses performance metrics that the OpenShift Metrics
subsystem collects. The Metrics subsystem comes preinstalled in OpenShift 4, rather than
requiring a separate installation, as in OpenShift 3. To autoscale a deployment, you must specify
resource requests for pods so that the horizontal pod autoscaler can calculate the percentage of
usage.
The autoscaler works in a loop. Every 15 seconds by default, it performs the following steps:
• The autoscaler retrieves the details of the metric for scaling from the HPA resource.
• For each pod that the HPA resource targets, the autoscaler collects the metric from the metric
subsystem.
• For each targeted pod, the autoscaler computes the usage percentage, from the collected
metric and from the pod resource requests.
• The autoscaler computes the average usage and the average resource requests across all the
targeted pods. It establishes a usage ratio from these values, and then uses the ratio for its
scaling decision.
The simplest way to create a horizontal pod autoscaler resource is by using the oc autoscale
command, for example:
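A sketch that matches the hello deployment and the 80% CPU target described in the next paragraph, with illustrative minimum and maximum replica counts:

oc autoscale deployment/hello --min 1 --max 10 --cpu-percent 80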
The previous command creates a horizontal pod autoscaler resource that changes the number of
replicas on the hello deployment to keep its pods under 80% of their total requested CPU usage.
The oc autoscale command creates a horizontal pod autoscaler resource by using the name of
the deployment as an argument (hello in the previous example).
The maximum and minimum values for the horizontal pod autoscaler resource accommodate
bursts of load and avoid overloading the OpenShift cluster. If the load on the application changes
too quickly, then it might help to keep several spare pods to cope with sudden bursts of user
requests. Conversely, too many pods can use up all cluster capacity and impact other applications
that use the same OpenShift cluster.
To get information about horizontal pod autoscaler resources in the current project, use the oc
get command. For example:
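For example, as a sketch:

oc get hpa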
Important
The horizontal pod autoscaler initially has a value of <unknown> in the TARGETS
column. It might take up to five minutes before <unknown> changes to display a
percentage for current usage.
A persistent value of <unknown> in the TARGETS column might indicate that the
deployment does not define resource requests for the metric. The horizontal pod
autoscaler does not scale these pods.
Pods that are created by using the oc create deployment command do not
define resource requests. Using the OpenShift autoscaler might therefore require
editing the deployment resources, creating custom YAML or JSON resource files for
your application, or adding limit range resources to your project that define default
resource requests.
In addition to the oc autoscale command, you can create a horizontal pod autoscaler resource
from a file in the YAML format.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello
spec:
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - resource:
      name: cpu
      target:
        averageUtilization: 80
        type: Utilization
    type: Resource
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello
Ideal average CPU usage for each pod. If the global average CPU usage is above that value,
then the horizontal pod autoscaler starts new pods. If the global average CPU usage is below
that value, then the horizontal pod autoscaler deletes pods.
Use the oc apply -f hello-hpa.yaml command to create the resource from the file.
The preceding example creates a horizontal pod autoscaler resource that scales based on CPU
usage. Alternatively, it can scale based on memory usage by setting the resource name to
memory, as in the following example:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello
spec:
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - resource:
      name: memory
      target:
        averageUtilization: 80
...output omitted...
Note
If an application uses more overall memory as the number of replicas increases, then
it cannot be used with memory-based autoscaling.
References
For more information, refer to the Automatically Scaling Pods with the Horizontal
Pod Autoscaler section in the Working with Pods chapter in the Red Hat OpenShift
Container Platform 4.12 Nodes documentation at
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/nodes/index#nodes-pods-autoscaling
Guided Exercise
Application Autoscaling
• Configure an autoscaler for an application and then load test that application to observe
scaling up.
Outcomes
You should be able to manually scale up a deployment, configure a horizontal pod autoscaler
resource, and monitor the autoscaler.
This command ensures that all resources are available for this exercise. It also creates the
reliability-autoscaling project.
Instructions
1. Log in to the OpenShift cluster as the developer user with the developer password.
Use the reliability-autoscaling project.
2. Create the loadtest deployment, service, and route. The deployment uses the
registry.ocp4.example.com:8443/redhattraining/loadtest:v1.0 container
image that provides a web application. The web application exposes an API endpoint that
creates a CPU-intensive task when queried.
apiVersion: v1
kind: List
metadata: {}
items:
- apiVersion: apps/v1
  kind: Deployment
  ...output omitted...
      spec:
        containers:
        - image: registry.ocp4.example.com:8443/redhattraining/loadtest:v1.0
          name: loadtest
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /api/loadtest/v1/healthz
              port: 8080
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
- apiVersion: v1
  kind: Service
  ...output omitted...
- apiVersion: route.openshift.io/v1
  kind: Route
  ...output omitted...
2.2. Use the oc apply command to create the application. Ignore the warning message.
2.3. Wait for the pod to start. You might have to rerun the command several times for
the pod to report a Running status. The name of the pod on your system probably
differs.
3. Configure a horizontal pod autoscaler resource for the loadtest deployment. Set the
minimum number of replicas to 2 and the maximum to 20. Set the average CPU usage to
50% of the CPU requests attribute.
The horizontal pod autoscaler does not work, because the loadtest deployment does not
specify requests for CPU usage.
3.1. Use the oc autoscale command to create the horizontal pod autoscaler resource.
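A sketch of the command, using the values that the step describes:

oc autoscale deployment/loadtest --min 2 --max 20 --cpu-percent 50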
3.2. Retrieve the status of the loadtest horizontal pod autoscaler resource. The
unknown value in the TARGETS column indicates that OpenShift cannot compute the
current CPU usage of the loadtest deployment. The deployment must include the
CPU requests attribute for OpenShift to be able to compute the CPU usage.
3.3. Get more details about the resource status. You might have to rerun the command
several times. Wait three minutes for the command to report the warning message.
3.4. Delete the horizontal pod autoscaler resource. You re-create the resource in another
step, after you fix the loadtest deployment.
4. Update the resource file to add CPU requests and limits to the loadtest container, and then redeploy the application.
4.1. Edit the file so that the loadtest container definition includes the resources section:
...output omitted...
      spec:
        containers:
        - image: registry.ocp4.example.com:8443/redhattraining/loadtest:v1.0
          name: loadtest
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /api/loadtest/v1/healthz
              port: 8080
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            requests:
              cpu: 25m
            limits:
              cpu: 100m
...output omitted...
4.2. Use the oc apply command to deploy the application from the file. Ignore the
warning message.
4.3. Wait for the pod to start. You might have to rerun the command several times for
the pod to report a Running status. The name of the pod on your system probably
differs.
5. Manually scale the loadtest deployment by first increasing and then decreasing the
number of running pods.
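For reference, the scale-up and scale-down operations in the following substeps use commands of this form (the replica counts come from the substeps; the exact invocations in your course materials might differ):

oc scale deployment/loadtest --replicas 5
oc scale deployment/loadtest --replicas 1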
5.2. Confirm that all five application pods are running. You might have to rerun the
command several times for all the pods to report a Running status. The names of the pods on your system probably differ.
5.4. Confirm that only one application pod is running. You might have to rerun the
command several times for the pods to terminate.
6. Configure a horizontal pod autoscaler resource for the loadtest deployment. Set the
minimum number of replicas to 2 and the maximum to 20. Set the average CPU usage to
50% of the CPU request attribute.
6.1. Use the oc autoscale command to create the horizontal pod autoscaler resource.
6.2. Open a new terminal window and run the watch command to monitor the oc get
hpa loadtest command. Wait five minutes for the loadtest horizontal pod
autoscaler to report usage in the TARGETS column.
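For reference, the monitoring command typically takes this form:

watch oc get hpa loadtest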
Notice that the horizontal pod autoscaler scales up the deployment to two replicas, to
conform with the minimum number of pods that you configured.
7. Increase the CPU usage by sending requests to the loadtest application API.
7.1. Use the oc get route command to retrieve the URL of the application.
7.2. Send a request to the application API to simulate additional CPU pressure on the
container. Do not wait for the curl command to complete, and continue with the
exercise. After a minute, the command reports a timeout error that you can ignore.
7.3. Watch the output of the oc get hpa loadtest command in the second terminal.
After a minute, the horizontal pod autoscaler detects an increase in the CPU usage
and deploys additional pods.
Note
The increased activity of the application does not immediately trigger the
autoscaler. Wait a few moments if you do not see any changes to the number of
replicas.
You might need to run the curl command multiple times before the application
uses enough CPU to trigger the autoscaler.
The CPU usage and the number of replicas on your system probably differ.
Every 2.0s: oc get hpa loadtest workstation: Fri Mar 3 07:20:19 2023
7.4. Wait five minutes after the curl command completes. The oc get hpa loadtest
command shows that the CPU load decreases.
Note
Although the horizontal pod autoscaler resource can be quick to scale up, it is slower
to scale down.
Every 2.0s: oc get hpa loadtest workstation: Fri Mar 3 07:23:11 2023
7.5. Optional: Wait for the loadtest application to scale down. It takes five additional
minutes for the horizontal pod autoscaler to scale down to two replicas.
Every 2.0s: oc get hpa loadtest workstation: Fri Mar 3 07:29:12 2023
7.6. Press Ctrl+C to quit the watch command. Close that second terminal when done.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Lab
Outcomes
You should be able to add resource requests to a Deployment object, configure probes, and
create a horizontal pod autoscaler resource.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise. This command ensures that all resources are available for this exercise. It also creates the
reliability-review project and deploys the longload application in that project.
Instructions
The API URL of your OpenShift cluster is https://api.ocp4.example.com:6443, and the oc
command is already installed on your workstation machine.
Log in to the OpenShift cluster as the developer user with the developer password.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Solution
Outcomes
You should be able to add resource requests to a Deployment object, configure probes, and
create a horizontal pod autoscaler resource.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise. This command ensures that all resources are available for this exercise. It also creates the
reliability-review project and deploys the longload application in that project.
Instructions
The API URL of your OpenShift cluster is https://api.ocp4.example.com:6443, and the oc
command is already installed on your workstation machine.
Log in to the OpenShift cluster as the developer user with the developer password.
1.3. List the pods in the project. The pod is in the Pending status. The name of the pod on
your system probably differs.
1.4. Retrieve the events for the pod. No compute node has enough memory to
accommodate the pod.
1.5. Review the resource requests for memory. The longload deployment requests 8 GiB
of memory.
1.6. Set the memory requests to 512 MiB. Ignore the warning message.
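One way to apply this change is with the oc set resources command; a minimal sketch, using the longload deployment name from the lab description:

oc set resources deployment/longload --requests=memory=512Mi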
1.7. Wait for the pod to start. You might have to rerun the command several times for the
pod to report a Running status. The name of the pod on your system probably differs.
5 longload-5897c9558f-cx4gt: Ok
6 longload-5897c9558f-cx4gt: Ok
7 longload-5897c9558f-cx4gt: Ok
8 longload-5897c9558f-cx4gt: Ok
...output omitted...
2. When the application scales up, your customers complain that some requests fail. To
replicate the issue, manually scale up the longload application to three replicas, and run the
~/DO180/labs/reliability-review/curl_loop.sh script at the same time.
The application takes seven seconds to initialize. The application exposes the /health API
endpoint on HTTP port 3000. Configure the longload deployment to use this endpoint, to
ensure that the application is ready before serving client requests.
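For reference, the scale-up and the test script can be run as follows; run the script in a second terminal window (a sketch of the substeps, which might differ in your course materials):

oc scale deployment/longload --replicas 3
~/DO180/labs/reliability-review/curl_loop.sh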
2.3. Watch the output of the curl_loop.sh script in the second terminal. Some requests
fail because OpenShift sends requests to the new pods before the application is ready.
...output omitted...
22 longload-5897c9558f-cx4gt: Ok
23 longload-5897c9558f-cx4gt: Ok
24 longload-5897c9558f-cx4gt: Ok
25 curl: (7) Failed to connect to master01.ocp4.example.com port 30372: Connection
refused
26 curl: (7) Failed to connect to master01.ocp4.example.com port 30372: Connection
refused
27 longload-5897c9558f-cx4gt: Ok
28 curl: (7) Failed to connect to master01.ocp4.example.com port 30372: Connection
refused
29 longload-5897c9558f-cx4gt: Ok
30 curl: (7) Failed to connect to master01.ocp4.example.com port 30372: Connection
refused
31 longload-5897c9558f-tpssf: app is still starting
32 longload-5897c9558f-kkvm5: app is still starting
33 longload-5897c9558f-cx4gt: Ok
34 longload-5897c9558f-tpssf: app is still starting
35 longload-5897c9558f-tpssf: app is still starting
2.4. Add a readiness probe to the longload deployment. Ignore the warning message.
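The oc set probe command can add such a probe; a minimal sketch, assuming that the seven-second startup time is handled with an initial delay (the flags shown are standard oc set probe options):

oc set probe deployment/longload --readiness \
  --get-url=http://:3000/health --initial-delay-seconds=7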
2.6. To test your work, scale up the application to three replicas again.
2.7. Watch the output of the curl_loop.sh script in the second terminal. No request fails.
...output omitted...
92 longload-7ddcc9b7fd-72dtm: Ok
93 longload-7ddcc9b7fd-72dtm: Ok
94 longload-7ddcc9b7fd-72dtm: Ok
95 longload-7ddcc9b7fd-qln95: Ok
96 longload-7ddcc9b7fd-wrxrb: Ok
97 longload-7ddcc9b7fd-qln95: Ok
98 longload-7ddcc9b7fd-wrxrb: Ok
99 longload-7ddcc9b7fd-72dtm: Ok
...output omitted...
3. Configure the application so that it automatically scales up when the average memory usage
is above 60% of the memory requests value, and scales down when the usage is below this
percentage. The minimum number of replicas must be one, and the maximum must be three.
The resource that you create for scaling the application must be named longload.
The lab command provides the ~/DO180/labs/reliability-review/hpa.yml
resource file as an example. Use the oc explain command to learn the valid parameters for
the hpa.spec.metrics.resource.target attribute. Because the file is incomplete, you
must update it first if you choose to use it.
To test your work, use the ~/DO180/labs/reliability-review/allocate.sh script
that the lab command prepared. This script sends an HTTP request to the application
/leak API endpoint. Each request consumes an additional 480 MiB of memory. To free this
memory, you can use the ~/DO180/labs/reliability-review/free.sh script.
3.1. Before you create the horizontal pod autoscaler resource, scale down the application to
one pod.
3.2. Edit the ~/DO180/labs/reliability-review/hpa.yml file to complete the horizontal pod autoscaler definition:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: longload
  labels:
    app: longload
spec:
  maxReplicas: 3
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: longload
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 60
3.3. Use the oc apply command to deploy the horizontal pod autoscaler.
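For reference, applying the completed file uses the standard command:

oc apply -f ~/DO180/labs/reliability-review/hpa.yml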
3.4. In the second terminal, run the watch command to monitor the oc get hpa
longload command. Wait for the longload horizontal pod autoscaler to report
usage in the TARGETS column. The percentage on your system probably differs.
3.6. In the second terminal, after two minutes, the oc get hpa longload command
shows the memory increase. The horizontal pod autoscaler scales up the application to
more than one replica. The percentage on your system probably differs.
Every 2.0s: oc get hpa longload workstation: Fri Mar 10 05:19:44 2023
Press Ctrl+C to quit the watch command. Close that second terminal when done.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Quiz
1. In the previous lab, given your node capacity and the application's initial memory
request, how many pods can OpenShift schedule?
a. The request is set to 8 GiB; therefore, no pod can be scheduled.
b. The request is set to 8 GiB; therefore, only one pod can be scheduled.
c. The request is set to 512 MiB; therefore, no pod can be scheduled.
d. The request is set to 512 MiB; therefore, three pods can be scheduled.
2. In the previous lab, which deployment setting is incorrect and prevents the application
from starting?
a. The readiness probe is misconfigured and always fails.
b. The liveness probe is misconfigured and always fails.
c. The memory request is too high. No compute nodes have enough memory to
accommodate the application.
d. The memory limit is too low. The OOM kernel subsystem keeps killing the container
processes.
3. In the previous lab, which setting is initially absent from the deployment, but does not
prevent the application from running?
a. The liveness probe is not defined.
b. The readiness probe is not defined.
c. The memory limit is not set.
d. The CPU limit is not set.
Solution
1. In the previous lab, given your node capacity and the application's initial memory
request, how many pods can OpenShift schedule?
a. The request is set to 8 GiB; therefore, no pod can be scheduled.
b. The request is set to 8 GiB; therefore, only one pod can be scheduled.
c. The request is set to 512 MiB; therefore, no pod can be scheduled.
d. The request is set to 512 MiB; therefore, three pods can be scheduled.
2. In the previous lab, which deployment setting is incorrect and prevents the application
from starting?
a. The readiness probe is misconfigured and always fails.
b. The liveness probe is misconfigured and always fails.
c. The memory request is too high. No compute nodes have enough memory to
accommodate the application.
d. The memory limit is too low. The OOM kernel subsystem keeps killing the container
processes.
3. In the previous lab, which setting is initially absent from the deployment, but does not
prevent the application from running?
a. The liveness probe is not defined.
b. The readiness probe is not defined.
c. The memory limit is not set.
d. The CPU limit is not set.
Summary
• A highly available application is resistant to scenarios that might otherwise make it unavailable.
• Kubernetes and RHOCP provide HA features, such as health probes, that help the cluster to
route traffic only to working pods.
• Resource requests and limits help to keep cluster node resource usage balanced.
• Horizontal pod autoscalers automatically add or remove replicas based on current resource
usage and specified parameters.
Chapter 7
Manage Application Updates
Objectives
• Relate container image tags to their identifier hashes, and identify container images from pods
and containers on Kubernetes nodes.
Consider the image name registry.access.redhat.com/ubi9/nginx-120:1-86:
• The name is nginx-120. In this example, the name of the image includes the version of the software, Nginx version 1.20.
• The tag, which points to a specific version of the image, is 1-86. If you omit the tag, then most
container tools use the latest tag by default.
Multiple tags can refer to the same image version. The following screen capture of the Red Hat
Ecosystem Catalog at https://catalog.redhat.com/software/containers/explore lists the tags for
the ubi9/nginx-120 image:
In this case, the 1-86, latest, and 1 tags point to the same image version. You can use any of
these tags to refer to that version.
The latest and 1 tags are floating tags, because they can point to different image versions over
time. For example, when developers publish a new version of the image, they change the latest
tag to point to that new version. They also update the 1 tag to point to the latest release of that
version, such as 1-87 or 1-88.
As a user of the image, by specifying a floating tag, you ensure that you always consume the up-to-date image version that corresponds to the tag. However, you might not notice when the tag that you were using starts pointing to a different image version.
Suppose that you deploy an application on OpenShift and use the latest tag for the image. The
following series of events might occur:
1. When OpenShift deploys the container, it pulls the image with the latest tag from the
container registry.
2. Later, the image developer pushes a new version of the image, and reassigns the latest tag
to that new version.
3. OpenShift relocates the pod to a different cluster node, for example because the original
node fails.
4. On that new node, OpenShift pulls the image with the latest tag, and thereby retrieves the
new image version.
5. Now the OpenShift deployment runs with a new version of the application, without your
awareness of that version update.
A similar issue is that when you scale up your deployment, OpenShift starts new pods. On the
nodes, OpenShift pulls the latest image version for these new pods. As a result, if a new version
is available, then your deployment runs with containers that use different versions of the image.
Application inconsistencies and unexpected behavior might occur.
To prevent these issues, select an image that is guaranteed not to change over time. You thus gain
control over the lifecycle of your application: you can choose when and how OpenShift deploys a
new image version.
• Use a tag that does not change, instead of relying on floating tags.
• Use OpenShift image streams for tight control over the image versions. Another section in this
course discusses image streams further.
• Use the SHA (Secure Hash Algorithm) image ID instead of a tag when referencing an image
version.
The distinction between a floating and non-floating tag is not a technical one, but a convention.
Although it is discouraged, there is no mechanism to prevent a developer from pushing a different
image to an existing tag. Thus, you must specify the SHA image ID to guarantee that the
referenced container image does not change.
To refer to an image by its SHA ID, replace name:tag with name@SHA-ID in the image name. The
following example uses the SHA image ID instead of a tag.
registry.access.redhat.com/ubi9/nginx-120@sha256:1be2006abd21735e7684eb4cc6eb62...
To retrieve the SHA image ID from the tag, use the oc image info command.
Note
A multi-architecture image references images for several CPU architectures.
Multi-architecture images include an index that points to the images for different
platforms and CPU architectures.
For these images, the oc image info command requires you to select an
architecture by using the --filter-by-os option:
OS DIGEST
linux/amd64 sha256:1be2006abd21735e7684eb4cc6eb6295346a89411a187e37cd4...
linux/arm64 sha256:d765193e823bb89b878d2d2cb8be0e0073839a6c19073a21485...
linux/ppc64le sha256:0dd0036620f525b3ba9a46f9f1c52ac70414f939446b2ba3a07...
linux/s390x sha256:d8d95cc17764b82b19977bc7ef2f60ff56a3944b3c7c14071dd...
The following example displays the SHA ID for the image that the 1-86 tag currently points to.
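The command takes the image reference as its argument, for example (output is not reproduced here):

oc image info registry.access.redhat.com/ubi9/nginx-120:1-86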
You can also use the skopeo inspect command. The output format differs from the oc image
info command, although both commands report similar data.
If you use the oc debug node/node-name command to connect to a compute node, then
you can list the locally available images by running the crictl images --digests --no-
trunc command. The --digests option instructs the command to display the SHA image IDs,
and the --no-trunc option instructs the command to display the full SHA string; otherwise, the
command displays only the first characters.
The IMAGE ID column displays the local image identifier that the container engine assigns to the
image. This identifier is not related to the SHA ID.
The container image format relies on SHA-256 hashes to identify several image components, such
as the image layers or the image metadata. Because some commands also report these SHA-256
strings, ensure that you use the SHA-256 hash that corresponds to the SHA image ID. Commands
often refer to the SHA image ID as the image digest.
By setting the imagePullPolicy attribute in the deployment resource, you can control how OpenShift pulls the image. For example, in the myapp deployment resource, you can set the pull policy of a container to IfNotPresent. The imagePullPolicy attribute accepts the following values:
IfNotPresent
If the image is already on the compute node, because another container is using it or because
OpenShift pulled the image during a preceding pod run, then OpenShift uses that local image.
Otherwise, OpenShift pulls the image from the container registry.
If you use a floating tag in your deployment, and the image with that tag is already on the
node, then OpenShift does not pull the image again, even if the floating tag might point to a
newer image in the source container registry.
OpenShift sets the imagePullPolicy attribute to IfNotPresent by default when you use
a tag or the SHA ID to identify the image.
Always
OpenShift always verifies whether an updated version of the image is available on the source
container registry. To do so, OpenShift retrieves the SHA ID of the image from the registry. If a
local image with that same SHA ID is already on the compute node, then OpenShift uses that
image. Otherwise, OpenShift pulls the image.
If you use a floating tag in your deployment, and an image with that tag is already on the node,
then OpenShift queries the registry anyway to ensure that the tag still points to the same
image version. However, if the developer pushed a new version of the image and updated the
floating tag, then OpenShift retrieves that new image version.
OpenShift sets the imagePullPolicy attribute to Always by default when you use the
latest tag, or when you do not specify a tag.
Never
OpenShift does not pull the image, and expects the image to be already available on the
node. Otherwise, the deployment fails.
To use this option, you must prepopulate your compute nodes with the images that you plan
to use. You use this mechanism to improve speed or to avoid relying on a container registry for
these images.
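As an illustration, and reusing the myapp deployment name from the earlier example, you can display the pull policy of the first container with a command such as:

oc get deployment myapp -o jsonpath='{.spec.template.spec.containers[0].imagePullPolicy}'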
Because the images consume disk space on the compute nodes, OpenShift needs to remove, or prune, the unused images when disk space becomes scarce. The kubelet process, which runs on
the compute nodes, includes a garbage collector that runs every five minutes. If the usage of the
file system that stores the images is above 85%, then the garbage collector removes the oldest
unused images. Garbage collection stops when the file system usage drops below 80%.
The reference documentation at the end of this lecture includes instructions to adjust these
default thresholds.
From a compute node, you can run the crictl imagefsinfo command to retrieve the name of
the file system that stores the images:
"fsId": {
"mountpoint": "/var/lib/containers/storage/overlay-images"
},
"usedBytes": {
"value": "1318560"
},
"inodesUsed": {
"value": "446"
}
}
}
From the preceding command output, the file system that stores the images is /var/lib/
containers/storage/overlay-images. The images consume 1318560 bytes of disk space.
From the compute node, you can use the crictl rmi command to remove an unused image. However,
pruning objects by using the crictl command might interfere with the garbage collector and the
kubelet process.
It is recommended that you rely on the garbage collector to prune unused objects, images, and
containers from the compute nodes. The garbage collector is configurable to better fulfill custom
needs that you might have.
References
skopeo-inspect(1) and podman-system-prune(1) man pages
For more information about image names, refer to the Overview of Images chapter
in the Red Hat OpenShift Container Platform 4.12 Images documentation at
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/images/index#overview-of-images
For more information about pull policies, refer to the Image Pull Policy section in the
Managing Images chapter in the Red Hat OpenShift Container Platform 4.12 Images
documentation at
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/images/index#image-pull-policy
For more information about garbage collection, refer to the Understanding How
Terminated Containers Are Removed Through Garbage Collection section in the
Working with Nodes chapter in the Red Hat OpenShift Container Platform 4.12
Nodes documentation at
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/nodes/index#nodes-nodes-garbage-collection-containers_nodes-nodes-configuring
Guided Exercise
Outcomes
You should be able to inspect container images, list images of containers that run on
compute nodes, and deploy applications by using image tags or SHA IDs.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise. This command ensures that all resources are available for this exercise. It also creates
the updates-ids project and the /home/student/DO180/labs/updates-ids/
resources.txt file. The resources.txt file contains the name of the images and some
commands that you use during the exercise. You can use the file to copy and paste these
image names and commands.
Instructions
1. Log in to the OpenShift cluster as the developer user with the developer password.
Use the updates-ids project.
2.1. Use the oc image info command to inspect the image version that the 1-209 tag
references. Notice the unique SHA ID that identifies the image version.
Note
To improve readability, the instructions truncate the SHA-256 strings.
On your system, the commands return the full SHA-256 strings. Also, you must type
the full SHA-256 string, to provide such a parameter to a command.
2.2. Inspect the image version that the 1-215 tag references. Notice that the SHA ID, or
digest, differs from the preceding image version.
2.3. For inspecting images, you can also use the skopeo inspect command. The output
format differs from the oc image info command, although both commands report
similar data.
Log in to the registry as the developer user with the developer password by
using the skopeo login command. Then, use the skopeo inspect command to
inspect the 1-215 image tag.
The skopeo inspect command also shows other existing image tags.
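For reference, logging in to the classroom registry and inspecting the tag use commands of this form (skopeo requires the docker:// transport prefix):

skopeo login -u developer registry.ocp4.example.com:8443
skopeo inspect docker://registry.ocp4.example.com:8443/ubi8/httpd-24:1-215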
3. Deploy an application from the image version that the 1-209 tag references.
3.1. Use the oc create deployment command to deploy the application. Set the
name of the deployment to httpd1. Ignore the warning message.
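The deployment can be created with a command of this form (the image reference comes from step 2):

oc create deployment httpd1 \
  --image registry.ocp4.example.com:8443/ubi8/httpd-24:1-209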
3.2. Wait for the pod to start, and then retrieve the name of the cluster node that runs it.
You might have to rerun the command several times for the pod to report a Running
status. The name of the pod on your system probably differs.
3.3. Retrieve the name of the container that is running inside the pod. The crictl ps
command that you run in a following step takes the container name as an argument.
4. Access the cluster node and then retrieve the image that the container is using.
4.1. You must log in as the admin user to access the cluster node. Use the redhatocp
password.
4.2. Use the oc debug node command to access the cluster node.
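For reference, the node session is usually opened as follows; master01 stands for the node name that you retrieved in the previous step, sh-4.4# represents the prompt inside the debug pod, and the chroot /host command makes host binaries such as crictl available:

oc debug node/master01
sh-4.4# chroot /host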
4.4. Use the crictl ps command to confirm that the httpd-24 container is running.
Add the -o yaml option to display the container details in YAML format.
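A command of the following form, run from the node session, lists the container details (the --name filter, an assumption here, limits the output to the httpd-24 container):

crictl ps --name httpd-24 -o yaml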
Notice that the command refers to the image by its SHA ID, and not by the tag that
you specified when you created the deployment resource.
4.5. Use the crictl images command to list the locally available images on the node.
The registry.ocp4.example.com:8443/ubi8/httpd-24:1-209 is in that
list, because the local container engine pulled it when you deployed the httpd1
application.
Note
The IMAGE ID column displays the local image identifier that the container engine
assigns to the image. This identifier is not related to the SHA image ID that the
container registry assigned to the image.
4.6. The preceding crictl images command does not display the SHA image IDs
by default. Rerun the command and add the --digests option to display the
SHA IDs. Also add the local image ID to the command to limit the output to the
registry.ocp4.example.com:8443/ubi8/httpd-24:1-209 image.
The command reports only the first characters of the SHA image ID. These
characters match the SHA ID of the image that the httpd-24 container is using.
Therefore, the httpd-24 container is using the expected image.
4.7. Exit the chroot environment, and then exit the debug pod.
sh-4.4# exit
exit
sh-4.4# exit
exit
5. Log in as the developer user and then deploy another application by using the SHA ID of
the image as the digest.
5.2. Rerun the oc image info command to retrieve the SHA ID of the image version
that the 1-209 tag references. Specify the JSON format for the command output.
Parse the JSON output with the jq -r command to retrieve the value of the
.digest object. Export the SHA ID as the $IMAGE environment variable.
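For reference, the SHA ID can be extracted and exported in one step (jq parses the .digest field that the instruction mentions):

export IMAGE=$(oc image info -o json \
  registry.ocp4.example.com:8443/ubi8/httpd-24:1-209 | jq -r '.digest')
echo $IMAGE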
5.3. Use the oc create deployment command to deploy the application. Set the
name of the deployment to httpd2. Ignore the warning message.
5.4. Confirm that the new deployment refers to the image version by its SHA ID.
6.1. In the httpd2 deployment, update the httpd-24 container to use the image version
that the 1-215 tag references. Ignore the warning message.
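A command of the following form performs the update (the httpd-24 container name comes from the earlier steps):

oc set image deployment/httpd2 \
  httpd-24=registry.ocp4.example.com:8443/ubi8/httpd-24:1-215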
6.2. Confirm that the deployment refers to the new image version.
6.3. Confirm that the deployment finished redeploying the pod. You might have to rerun
the command several times for the pod to report a Running status. The pod names
probably differ on your system.
6.4. Inspect the pod to confirm that the container is using the new image. Replace the
pod name with your own from the previous step.
7. Add the latest tag to the image version that the 1-209 tag already references. Deploy an
application from the image with the latest tag.
7.1. Use the skopeo login command to log in to the classroom container registry as
the developer user. Use developer for the password.
7.2. Use the skopeo copy command to add the latest tag to the image.
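Adding a tag with skopeo copy amounts to copying the image onto a new tag in the same repository, for example:

skopeo copy \
  docker://registry.ocp4.example.com:8443/ubi8/httpd-24:1-209 \
  docker://registry.ocp4.example.com:8443/ubi8/httpd-24:latest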
7.3. Use the oc image info command to confirm that both tags refer to the same
image. The two commands report the same SHA image ID, which indicates that the
tags point to the same image version.
7.4. Use the oc create deployment command to deploy another application. Set the
name of the deployment to httpd3. To confirm that by default the command selects
the latest tag, do not provide the tag part in the image name. Ignore the warning
message.
7.5. Confirm that the pod is running. You might have to rerun the command several times
for the pod to report a Running status. The pod names probably differ on your
system.
7.6. Confirm that the pod is using the expected image. Notice that the SHA image ID
corresponds to the image that the 1-209 tag references. You retrieved that SHA
image ID in a preceding step when you ran the oc image info command.
8. Assign the latest tag to a different image version. This operation simulates a developer
who pushes a new version of an image and assigns the latest tag to that new image
version.
8.1. Use the skopeo copy command to add the latest tag to the image version that
the 1-215 tag already references. The command automatically removes the latest
tag from the earlier image.
8.3. Even though the latest tag is now referencing a different image version, OpenShift
does not redeploy the pods that are running with the previous image version.
Rerun the oc describe pod command to confirm that the pod still uses the
preceding image.
9.1. Use the oc scale command to add a new pod to the deployment.
9.2. List the pods to confirm that two pods are running for the httpd3 deployment. The
pod names probably differ on your system.
9.3. Retrieve the SHA image ID for the pod that the deployment initially created. The ID
did not change. The container is still using the original image version.
9.4. Retrieve the SHA image ID for the additional pod. Notice that the ID is different. The
additional pod is using the image that the latest tag is currently referencing.
The state of the deployment is inconsistent. The two replicated pods use a different
image version. Consequently, the scaled application might not behave correctly.
Red Hat recommends that you use a less volatile tag than latest in production
environments, or that you tightly control the tag assignments in your container
registry.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• Update applications with minimal downtime by using deployment strategies.
OpenShift provides configuration map, secret, and volume resources to store the application
configuration and data. The application code is available through container images.
Because OpenShift deploys applications from container images, developers must build a new
version of the image when they update the code of their application. Organizations usually use a
Continuous Integration and Continuous Delivery (CI/CD) pipeline to automatically build the image
from the application source code, and then to push the resulting image to a container registry.
You use OpenShift resources, such as configuration maps and secrets, to update the configuration
of the application. To control the deployment process of a new image version, you use a
Deployment object.
Deployment Strategies
Deploying functional application changes or new versions to users is a significant phase of the CI/
CD pipelines, where you add value to the development process.
Introducing application changes carries risks, such as downtime during the deployment, bugs,
or reduced application performance. You can reduce or mitigate some risks with testing and
validation stages in your pipelines.
Application or service downtime can result in lost business, disruption to other services that
depend on yours, and violations of service level agreements, among others. To reduce downtime
and minimize risks in deployments, use a deployment strategy. A deployment strategy changes or
upgrades an application in a way that minimizes the impact of those changes.
In OpenShift, you use Deployment objects to define deployments and deployment strategies.
The RollingUpdate and the Recreate strategies are the main OpenShift deployment
strategies.
The following snippet shows a Deployment object that uses the Recreate strategy:

apiVersion: apps/v1
kind: Deployment
metadata:
...output omitted...
spec:
  progressDeadlineSeconds: 600
  replicas: 10
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: myapp2
  strategy:
    type: Recreate
  template:
...output omitted...
RollingUpdate Strategy
In this strategy, both versions of the application run simultaneously, and OpenShift scales down instances of the previous version only when the new version is ready. The main drawback is that this strategy requires compatibility between the versions in the deployment.
The following graphic shows the deployment of a new version of an application by using the
RollingUpdate strategy:
1. Some application instances run a code version that needs updating (v1). OpenShift scales
up a new instance with the updated application version (v2). Because the new instance with
version v2 is not ready, only the version v1 instances fulfill customer requests.
2. The instance with v2 is ready and accepts customer requests. OpenShift scales down an
instance with version v1, and scales up a new instance with version v2. Both versions of the
application fulfill customer requests.
3. The new instance with v2 is ready and accepts customer requests. OpenShift scales down the
remaining instance with version v1.
4. No instances remain to replace. The application update was successful, and without
downtime.
Note
The RollingUpdate strategy is the default strategy if you do not specify a
strategy on the Deployment objects.
The following snippet shows a Deployment object that uses the RollingUpdate strategy:
apiVersion: apps/v1
kind: Deployment
metadata:
...output omitted...
spec:
  progressDeadlineSeconds: 600
  replicas: 10
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: myapp2
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 50%
    type: RollingUpdate
  template:
...output omitted...
Out of many parameters to configure the RollingUpdate strategy, the preceding snippet shows
the maxSurge and maxUnavailable parameters.
During a rolling update, the number of pods for the application varies, because OpenShift starts
new pods for the new revision, and removes pods from the previous revision. The maxSurge
parameter indicates how many pods OpenShift can create above the normal number of replicas.
The maxUnavailable parameter indicates how many pods OpenShift can remove below the
normal number of replicas. You can express these parameters as percentages or as a number of
pods.
If you do not configure a readiness probe for your deployment, then during a rolling update,
OpenShift starts sending client traffic to new pods as soon as they are running. However, the
application inside a container might not be immediately ready to accept client requests. The
application might have to load files to cache, establish a network connection to a database, or
perform initial tasks that might take time to complete. Consequently, OpenShift redirects client
requests to a container that is not yet ready, and these requests fail.
Adding a readiness probe to your deployment prevents OpenShift from sending traffic to new
pods that are not ready.
Recreate Strategy
In this strategy, all the instances of an application are killed first, and are then replaced with new
ones. The major drawback of this strategy is that it causes downtime in your services. For a
period, no application instances are available to fulfill requests.
The following graphic shows the deployment of a new version of an application that uses the
Recreate strategy:
1. The application has some instances that run a code version to update (v1).
2. OpenShift scales down the running instances to zero. This action causes application
downtime, because no instances are available to fulfill requests.
3. OpenShift scales up new instances with a new version of the application (v2). When the new
instances are booting, the downtime continues.
4. The new instances finished booting, and are ready to fulfill requests. This step is the last step
of the Recreate strategy, and it resolves the application outage.
You can use this strategy when your application cannot have different simultaneously running
code versions. You might also use it to execute data migrations or data transformations before the
new code starts. This strategy is not recommended for applications that need high availability, for
example, medical systems.
Every change to the pod template of a Deployment object, such as updating the image or an environment variable, triggers a new rollout. If you apply several modifications one after another, then OpenShift performs one rollout for each of them. To prevent these multiple deployments, pause the rollout, apply all your modifications to the Deployment object, and then resume the rollout. OpenShift then performs a single rollout to apply all your modifications:
• Use the oc rollout pause command to pause the rollout of the myapp deployment.
• Apply all your modifications to the Deployment object, such as the image, an environment variable, and the readiness probe.
• Use the oc rollout resume command to resume the rollout. OpenShift then rolls out the application once to apply all your modifications.
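A minimal sketch of this workflow, assuming a myapp deployment with a container named mycontainer, and placeholder image, variable, and probe values (all hypothetical):

oc rollout pause deployment/myapp
oc set image deployment/myapp mycontainer=registry.example.com/myapp:v2
oc set env deployment/myapp MY_VAR=value
oc set probe deployment/myapp --readiness --get-url=http://:8080/healthz
oc rollout resume deployment/myapp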
You can follow a similar process when you create and configure a new deployment:
• Create the deployment, and set the number of replicas to zero. This way, OpenShift does not roll
out your application, and no pods are running.
• Apply the configuration to the Deployment object, such as a readiness probe.
• Scale up the deployment to start the rollout with the complete configuration.
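A sketch of that sequence, again with hypothetical names and values:

oc create deployment myapp --image registry.example.com/myapp:v1 --replicas 0
oc set probe deployment/myapp --readiness --get-url=http://:8080/healthz
oc scale deployment/myapp --replicas 3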
To deploy pods, replica sets use the pod template definition from the Deployment object.
OpenShift copies the template definition from the Deployment object when it creates the
ReplicaSet object.
When you update the Deployment object, OpenShift does not update the existing ReplicaSet
object. Instead, it creates another ReplicaSet object with the new pod template definition.
Then, OpenShift rolls out the application according to the update strategy.
Thus, several ReplicaSet objects for a deployment can exist at the same time on your system.
During a rolling update, the old and the new ReplicaSet objects coexist and coordinate the
rollout of the new application version. After the rollout completes, OpenShift keeps the old
ReplicaSet object so that you can roll back if the new application version does not operate
correctly.
The following graphic shows a Deployment object and two ReplicaSet objects. The
old ReplicaSet object for version 1 of the application does not run any pods. The current
ReplicaSet object for version 2 of the application manages three replicated pods.
Do not directly change or delete ReplicaSet objects, because OpenShift manages them
through the associated Deployment objects. The .spec.revisionHistoryLimit attribute
in Deployment objects specifies how many ReplicaSet objects OpenShift keeps. OpenShift
automatically deletes the extra ReplicaSet objects. Also, when you delete a Deployment
object, OpenShift deletes all the associated ReplicaSet objects.
Run the oc get replicaset command to list the ReplicaSet objects. OpenShift uses the
Deployment object name as a prefix for the ReplicaSet objects.
In this example, the command output shows three ReplicaSet objects for the myapp2 deployment. Whenever
you modified the myapp2 deployment, OpenShift created a ReplicaSet object. The second
object in the list is active and monitors 10 pods. The other ReplicaSet objects do not manage
any pods. They represent the previous versions of the Deployment object.
During a rolling update, two ReplicaSet objects are active. The old ReplicaSet object is
scaling down, and at the same time the new object is scaling up:
The new ReplicaSet object already started four pods, but the READY column shows that
the readiness probe succeeded for only two pods so far. These two pods are likely to receive
client traffic.
Managing Rollout
Because OpenShift preserves ReplicaSet objects from earlier deployment versions, you can roll
back if you notice that the new version of the application does not work.
Use the oc rollout undo command to roll back to the preceding deployment version. The
command uses the existing ReplicaSet object for that version to roll back the pods. The
command also reverts the Deployment object to the preceding version.
If the rollout operation fails, for example because you specified a wrong container image name or because the readiness probe fails, then OpenShift does not automatically roll back your deployment. In this case, run the
oc rollout undo command to revert to the preceding working configuration.
By default, the oc rollout undo command rolls back to the preceding deployment version.
If you need to roll back to an earlier revision, then list the available revisions and add the --to-
revision rev option to the oc rollout undo command.
Note
The CHANGE-CAUSE column provides a user-defined message that describes the
revision. You can store the message in the kubernetes.io/change-cause
deployment annotation after every rollout:
• Add the --revision option to the oc rollout history command for more details about a
specific revision:
• Roll back to a specific revision by adding the --to-revision option to the oc rollout
undo command:
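For reference, these operations use commands of the following form (the myapp2 deployment name and revision number 1 are illustrative):

oc rollout history deployment/myapp2
oc rollout history deployment/myapp2 --revision 1
oc rollout undo deployment/myapp2 --to-revision 1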
If you use floating tags to refer to container image versions in deployments, then the resulting
image when you roll back a deployment might have changed in the container registry. Thus, the
image that you run after the rollback might not be the original one that you used.
To prevent this issue, use OpenShift image streams for referencing images instead of floating
tags. Another section in this course discusses image streams further.
References
For more information about deployment strategies, refer to the Using Deployment
Strategies section in the Deployments chapter in the Red Hat OpenShift Container
Platform 4.12 Building Applications documentation at
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/building_applications/index#deployment-strategies
For more information about readiness probes, refer to the Monitoring Application
Health by Using Health Checks chapter in the Red Hat OpenShift Container
Platform 4.12 Building Applications documentation at
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/building_applications/index#application-health
For more information about replication sets, refer to the Understanding Deployment
and DeploymentConfig Objects section in the Deployments chapter in the Red Hat
OpenShift Container Platform 4.12 Building Applications documentation at
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/building_applications/index#what-deployments-are
Guided Exercise
Outcomes
You should be able to pause, update, and resume a deployment, and roll back a failing
application.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise. This command ensures that all resources are available for this exercise. It also creates the
updates-rollout-db project and deploys a MySQL database in that project. It creates
the updates-rollout-web project and then deploys a web application with 10 replicas.
Instructions
1. Log in to the OpenShift cluster as the developer user with the developer password.
Use the updates-rollout-db project.
2. Review the resources that the lab command created. Confirm that you can connect to the
database. The MySQL database uses ephemeral storage.
2.1. List the Deployment object and confirm that the pod is available. Retrieve the name
of the container. You use that information when you update the container image in
another step.
2.2. List the pods and confirm that the pod is running. The name of the pod on your
system probably differs.
2.3. Retrieve the name of the image that the pod is using. The pod is using the rhel9/
mysql-80 image version 1-224. Replace the pod name with your own from the
previous step.
The classroom setup copied that image from the Red Hat Ecosystem Catalog. The
original image is registry.redhat.io/rhel9/mysql-80.
2.4. Confirm that you can connect to the database system by listing the available
databases. Run the mysql command from inside the pod and connect as the
operator1 user by using test as the password.
3. You must implement several updates to the Deployment object. Pause the deployment
to prevent OpenShift from rolling out the application for each modification that you make.
After you pause the deployment, change the password for the operator1 database user,
update the container image, and then resume the deployment.
3.2. Change the password of the operator1 database user to redhat123. To change
the password, update the MYSQL_PASSWORD environment variable in the pod
template of the Deployment object. Ignore the warning message.
3.3. Because the Deployment object is paused, confirm that the new password is not
yet active. To do so, rerun the mysql command by using the current password. The
database connection succeeds.
3.4. Update the MySQL container image to the 1-228 version. Ignore the warning
message.
3.5. Because the Deployment object is paused, confirm that the pod still uses the 1-224
image version.
3.7. Confirm that the new rollout completes by waiting for the new pod to be running. The
name of the pod on your system probably differs.
4. Verify that OpenShift applied all your modifications to the Deployment object.
4.1. Retrieve the name of the image that the new pod is using. In the following command,
use the name of the new pod as a parameter to the oc get pod command. The pod
is now using the 1-228 image version.
4.2. Confirm that you can connect to the database system by using the new password,
redhat123, for the operator1 database user.
5. In the second part of the exercise, you perform a rolling update of a replicated web
application. Use the updates-rollout-web project and review the resources that the
lab command created.
5.2. List the Deployment object and confirm that the pods are available. Retrieve the
name of the containers. You use that information when you update the container
image in another step.
5.3. List the ReplicaSet objects. Because OpenShift did not yet perform rolling
updates, only one ReplicaSet object exists. The name of the ReplicaSet object
on your system probably differs.
5.4. Retrieve the name and version of the image that the ReplicaSet object uses to
deploy the pods. The pods are using the redhattraining/versioned-hello
image version v1.0.
5.5. Confirm that the version deployment includes a readiness probe. The probe
performs an HTTP GET request on port 8080.
6. To watch the rolling update that you cause in a following step, open a new terminal window
and then run the ~/DO180/labs/updates-rollout/curl_loop.sh script that the
lab command prepared. The script sends web requests to the application in a loop.
7. Change the container image of the version deployment. The new application version
creates a web page with a different message.
7.1. Switch back to the first terminal window, and then use the oc set image command
to update the deployment. Ignore the warning message.
7.2. Changing the image caused a rolling update. Watch the output of the
curl_loop.sh script in the second terminal.
Before the update, only pods that run the v1.0 version of the application reply.
During the rolling updates, both old and new pods are responding. After the update,
only pods that run the v1.1 version of the application reply. The following output
probably differs on your system.
...output omitted...
Hi!
Hi!
Hi!
Hi!
Hi! v1.1
Hi! v1.1
Hi!
Hi! v1.1
Hi!
Hi! v1.1
Hi! v1.1
Hi! v1.1
Hi! v1.1
...output omitted...
8. Confirm that the rollout process is successful. List the ReplicaSet objects and verify that
the new object uses the new image version.
8.1. Use the oc rollout status command to confirm that the rollout process is
successful.
8.2. List the ReplicaSet objects. The initial object scaled down to zero pods. The new
ReplicaSet object scaled up to 10 pods. The names of the ReplicaSet objects on
your system probably differ.
8.3. Retrieve the name and version of the image that the new ReplicaSet object uses.
This image provides the new version of the application.
9.1. Use the oc rollout undo command to roll back to the initial application version.
Ignore the warning message.
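For reference, the rollback uses the standard command with the version deployment name from the earlier steps:

oc rollout undo deployment/version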
9.2. Watch the output of the curl_loop.sh script in the second terminal. The pods that
run the v1.0 version of the application are responding again. The following output
probably differs on your system.
...output omitted...
Hi! v1.1
Hi! v1.1
Hi! v1.1
Hi! v1.1
Hi!
Hi! v1.1
Hi!
Hi! v1.1
Hi! v1.1
Hi!
Hi!
Hi!
...output omitted...
Press Ctrl+C to quit the script. Close that second terminal when done.
9.3. List the ReplicaSet objects. The initial object scaled up to 10 pods. The object for
the new application version scaled down to zero pods.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• Ensure reproducibility of application deployments by using image streams and short image
names.
Image Streams
Image streams are one of the main differentiators between OpenShift and upstream Kubernetes.
Kubernetes resources reference container images directly, but OpenShift resources, such as
deployment configurations and build configurations, reference image streams. OpenShift also
extends Kubernetes resources, such as Kubernetes Deployments, with annotations that make
them work with OpenShift image streams.
With image streams, OpenShift can ensure reproducible, stable deployments of containerized
applications and also rollbacks of deployments to their latest known-good state.
Image streams provide a stable, short name to reference a container image that is independent of
any registry server and container runtime configuration.
As an example, an organization could start by downloading container images directly from the
Red Hat public registry and later set up an enterprise registry as a mirror of those images to
save bandwidth. OpenShift users would not notice any change, because they still refer to these
images by using the same image stream name. Users of the RHEL container tools would notice the
change, because they would be required either to change the registry names in their commands,
or to change their container engine configurations to search for the local mirror first.
In other scenarios, the indirection that an image stream provides can be helpful. Suppose that
you start with a database container image that has security issues, and the vendor takes too long
to update the image with fixes. Later, you find an alternative vendor who provides an alternative
container image for the same database, with those security issues already fixed, and even better,
with a track record of providing timely updates to them. If those container images are compatible
regarding configuration of environment variables and volumes, you could change your image
stream to point to the image from the alternative vendor.
Red Hat provides hardened, supported container images that work mostly as drop-in
replacements of container images from some popular open source projects, such as the MariaDB
database.
An image stream provides default configurations for a set of image stream tags. Each image
stream tag references one stream of container images, and can override most configurations from
its associated image stream.
An image stream tag stores a copy of the metadata about its current container image. Storing
metadata supports faster search and inspection of container images, because you do not need to
reach its source registry server.
You can also configure an image stream tag to store the source image layers in the OpenShift
internal container registry, which acts as a local image cache. Storing image layers locally avoids
the need to fetch these layers from their source registry server. Consumers of the cached image,
such as pods and deployment configurations, just reference the internal registry as the source
registry of the image.
For some other OpenShift resource types that relate to image streams, you can usually dismiss
them as implementation details of the internal registry, and focus only on image streams and
image stream tags.
To better visualize the relationship between image streams and image stream tags, you can
explore the openshift project that is pre-created in all OpenShift clusters. You can see many
image streams in that project, including the php image stream:
Several tags exist for the php image stream, and an image stream tag resource exists for each tag:
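For reference, you can produce these listings with commands of the following form (outputs are omitted here):

oc get is php -n openshift
oc get istag -n openshift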
The oc describe command on an image stream shows information from both the image stream
and its image stream tags:
8.0-ubi9
tagged from registry.access.redhat.com/ubi9/php-80:latest
...output omitted...
8.0-ubi8 (latest)
tagged from registry.access.redhat.com/ubi8/php-80:latest
...output omitted...
7.4-ubi8
tagged from registry.access.redhat.com/ubi8/php-74:latest
...output omitted...
7.3-ubi7
tagged from registry.access.redhat.com/ubi7/php-73:latest
...output omitted...
In the previous example, each of the php image stream tags refers to a different image name.
A SHA image ID is a SHA-256 hash that uniquely identifies an immutable container image. You
cannot modify a container image. Instead, you create a container image that has a new ID. When
you push a new container image to a registry server, the server associates the existing textual
name with the new image ID.
When you start a container from an image name, you download the image that is currently
associated with that image name. The image ID behind that name might change at any moment,
and the next container that you start might have a different image ID. If the image that is
associated with an image name has any issues, and you know only the image name, then you
cannot roll back to an earlier image.
OpenShift image stream tags keep a history of the latest image IDs that they fetched from a
registry server. The history of image IDs is the stream of images from an image stream tag. You
can use the history inside an image stream tag to roll back to a previous image, if for example a
new container image causes a deployment error.
Updating a container image in an external registry does not automatically update an image stream
tag. The image stream tag keeps the reference to the last image ID that it fetched. This behavior
is crucial to scaling applications, because it isolates OpenShift from changes that happen at a
registry server.
Suppose that you deploy an application from an external registry, and after a few days of testing
with a few users, you decide to scale its deployment to enable a larger user population. In the
meantime, your vendor updates the container image on the external registry. If OpenShift had
no image stream tags, then the new pods would get the new container image, which is different
from the image on the original pod. Depending on the changes, this new image could cause your
application to fail. Because OpenShift stores the image ID of the original image in an image stream
tag, it can create new pods by using the same image ID and avoid any incompatibility between the
original and updated image.
OpenShift keeps the image ID of the first pod and ensures that all new pods use that same image ID, so every pod runs the identical image.
To better visualize the relationship between an image stream, an image stream tag, an image
name, and an image ID, refer to the following oc describe is command, which shows the
source image and current image ID for each image stream tag:
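oc describe is php -n openshift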
8.0-ubi9
tagged from registry.access.redhat.com/ubi9/php-80:latest
...output omitted...
* registry.access.redhat.com/ubi9/php-80@sha256:2b82...f544
2 days ago
8.0-ubi8 (latest)
tagged from registry.access.redhat.com/ubi8/php-80:latest
* registry.access.redhat.com/ubi8/php-80@sha256:2c74...5ef4
2 days ago
...output omitted...
If your OpenShift cluster administrator already updated the php:8.0-ubi9 image stream tag,
the oc describe is command shows multiple image IDs for that tag:
8.0-ubi9
tagged from registry.access.redhat.com/ubi9/php-80:latest
...output omitted...
* registry.access.redhat.com/ubi9/php-80@sha256:2b82...f544
2 days ago
registry.access.redhat.com/ubi9/php-80@sha256:8840...94f0
5 days ago
registry.access.redhat.com/ubi9/php-80@sha256:506c...5d90
9 days ago
In the previous example, the asterisk (*) shows which image ID is the current one for each image
stream tag. It is usually the latest one to be imported, and the first one that is listed.
When an OpenShift image stream tag references a container image from an external registry,
you must explicitly update the image stream tag to get new image IDs from the external registry.
By default, OpenShift does not monitor external registries for changes to the image ID that is
associated with an image name.
You can configure an image stream tag to check the external registry for updates on a defined
schedule. By default, new image stream tags do not check for updated images.
Use the oc create is command to create image streams in the current project. The following
example creates an image stream named keycloak:
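oc create is keycloak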
After you create the image stream, use the oc create istag command to add image stream
tags. The following example adds the 20.0 tag to the keycloak image stream. In this example,
the image stream tag refers to the quay.io/keycloak/keycloak:20.0.2 image from the
Quay.io public repository.
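oc create istag keycloak:20.0 --from-image quay.io/keycloak/keycloak:20.0.2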
Repeat the preceding command if you need more image stream tags:
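For instance, the 19.0 tag that appears in the output below can be added in the same way:
oc create istag keycloak:19.0 --from-image quay.io/keycloak/keycloak:19.0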
Use the oc describe is command to verify that the image stream tag points to the SHA ID of
the source image:
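oc describe is keycloak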
20.0
tagged from quay.io/keycloak/keycloak:20.0.3
* quay.io/keycloak/keycloak@sha256:c167...62e9
47 seconds ago
quay.io/keycloak/keycloak@sha256:5569...b311
5 minutes ago
19.0
tagged from quay.io/keycloak/keycloak:19.0
* quay.io/keycloak/keycloak@sha256:40cc...ffde
5 minutes ago
By using image stream tags, you control the images that your applications use. If you want to use a new image version, then you must manually update the image stream tag to point to that new version.
However, for some container registries that you trust, or for some specific images, you might
prefer the image stream tags to automatically refresh.
For example, Red Hat regularly updates the images from the Red Hat Ecosystem Catalog with
bug and security fixes. To benefit from these updates as soon as Red Hat releases them, you can
configure your image stream tags to regularly refresh.
OpenShift can periodically verify whether a new image version is available. When it detects a new
version, it automatically updates the image stream tag. To activate that periodic refresh, add the
--scheduled option to the oc tag command.
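For example, a command of this form enables the periodic check for the tag from the earlier keycloak example:
oc tag quay.io/keycloak/keycloak:20.0.2 keycloak:20.0 --scheduled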
By default, OpenShift verifies the image every 15 minutes. This period is a setting that your cluster
administrators can adapt.
When the image comes from a registry on the internet, pulling the image can take time, or even fail
in case of a network outage. Some public registries have bandwidth throttling rules that can slow
down your downloads further.
To mitigate these issues, you can configure your image stream tags to cache the images in the
OpenShift internal container registry. The first time that OpenShift pulls the image, it downloads
the image from the source repository and then stores the image in its internal registry. After that
initial pull, OpenShift retrieves the image from the internal registry.
To activate image pull-through, add the --reference-policy local option to the oc tag
command.
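For example, reusing the keycloak example:
oc tag quay.io/keycloak/keycloak:20.0.2 keycloak:20.0 --reference-policy local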
To reference an image stream tag from a Kubernetes workload resource, such as a Deployment object, the following conditions apply:
• Create the image stream object in the same project as the Deployment object.
• In the Deployment object, reference the image stream tag by its name, such as keycloak:20.0, and not by the full image name from the source registry.
Use the oc set image-lookup command to enable the local lookup policy for an image stream:
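oc set image-lookup keycloak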
You can also retrieve the local lookup policy status for all the image streams in the current project
by running the oc set image-lookup command without parameters:
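oc set image-lookup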
To disable the local lookup policy, add the --enabled=false option to the oc set image-
lookup command:
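oc set image-lookup keycloak --enabled=false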
When you use a short name, OpenShift looks for a matching image stream in the current project.
OpenShift considers only the image streams that you enabled the local lookup policy for. If it
does not find an image stream, then OpenShift looks for a regular container image in the allowed
container registries. The reference documentation at the end of this lecture describes how to
configure these allowed registries.
You can also use image streams with other Kubernetes workload resources:
• Job objects that you can create by using the following command:
• CronJob objects that you can create by using the following command:
• Pod objects that you can create by using the following command:
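For example, continuing with the keycloak:20.0 image stream tag; the workload names here are placeholders:
oc create job test-job --image keycloak:20.0
oc create cronjob test-cronjob --image keycloak:20.0 --schedule '0 0 * * *'
oc run test-pod --image keycloak:20.0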
Another section in this course discusses how changing an image stream tag can automatically roll
out the associated deployments.
References
For more information about using image streams, refer to the Managing Image
Streams chapter in the Red Hat OpenShift Container Platform 4.12 Images
documentation at
https://access.redhat.com/documentation/en-us/
openshift_container_platform/4.12/html-single/images/index#managing-image-
streams
For more information about using image streams with deployments, refer to the
Using Image Streams with Kubernetes Resources chapter in the Red Hat OpenShift
Container Platform 4.12 Images documentation at
https://access.redhat.com/documentation/en-us/
openshift_container_platform/4.12/html-single/images/index#using-
imagestreams-with-kube-resources
Guided Exercise
Outcomes
You should be able to create image streams and image stream tags, and deploy applications
that use image stream tags.
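As the student user on the workstation machine, use the lab command to prepare your system for this exercise. The exercise ID here is assumed from the lab file paths:
lab start updates-imagestreams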
This command ensures that all resources are available for this exercise. It also creates the
updates-imagestreams project and the /home/student/DO180/labs/updates-
imagestreams/resources.txt file. The resources.txt file contains the name of the
images and some commands that you use during the exercise. You can use the file to copy
and paste these image names and commands.
Instructions
1. Log in to the OpenShift cluster as the developer user with the developer password.
Use the updates-imagestreams project.
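For example, assuming the classroom API URL that this course uses throughout:
oc login -u developer -p developer https://api.ocp4.example.com:6443
oc project updates-imagestreams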
2. Create the versioned-hello image stream and the v1.0 image stream tag from the
registry.ocp4.example.com:8443/redhattraining/versioned-hello:v1.0
image.
2.2. Use the oc create istag command to create the image stream tag.
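oc create istag versioned-hello:v1.0 --from-image registry.ocp4.example.com:8443/redhattraining/versioned-hello:v1.0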
3. Enable image stream resolution for the versioned-hello image stream so that
Kubernetes resources in the current project can use it.
3.1. Use the oc set image-lookup command to enable image lookup resolution.
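oc set image-lookup versioned-hello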
3.2. Run the oc set image-lookup command without any arguments to verify your
work.
4. Review the image stream and confirm that the image stream tag refers to the source image
by its SHA ID. Verify that the source image in the registry.ocp4.example.com:8443
registry has the same SHA ID.
Note
To improve readability, the instructions truncate the SHA-256 strings.
v1.0
tagged from registry.ocp4.example.com:8443/redhattraining/versioned-hello:v1.0
* registry.ocp4.example.com:8443/.../versioned-hello@sha256:66e0...105e
7 minutes ago
4.2. Use the oc image info command to query the image from the classroom
container registry. The SHA image ID is the same as the one from the image stream
tag.
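oc image info registry.ocp4.example.com:8443/redhattraining/versioned-hello:v1.0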
5.1. Use the oc create deployment command to create the object. Ignore the
warning message.
5.2. Wait for the pod to start. You might have to rerun the command several times for
the pod to report a Running status. The name of the pod on your system probably
differs.
6. Confirm that both the deployment and the pod refer to the image by its SHA ID.
6.1. Retrieve the image that the deployment uses. The deployment refers to the image
from the source registry by its SHA ID. The v1.0 image stream tag also points to that
SHA image ID.
6.2. Retrieve the image that the pod is using. The pod is also referring to the image by its
SHA ID.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• Ensure automatic update of application pods by using image streams with Kubernetes workload
resources.
If a new version of the source image becomes available, then you can change the image stream
tag to point to that new image. However, a Deployment object that uses the image stream tag
does not roll out automatically. For an automatic rollout, you must configure the Deployment
object with an image trigger.
If you update an image stream tag to point to a new image version, and you notice that this version
does not work as expected, then you can revert the image stream tag. Deployment objects for
which you configured a trigger automatically roll back to that previous image.
Other Kubernetes workloads also support image triggers, such as Pod, CronJob, and Job objects.
The requirements are the same as for consuming image stream tags from workload resources:
• Create the image stream object in the same project as the Deployment object.
• Enable the local lookup policy in the image stream object by using the oc set image-lookup command.
• In the Deployment object, reference the image stream tags by their names, such as keycloak:20, and not by the full image names from the source registry.
Image triggers apply at the container level. If your Deployment object includes several
containers, then you can specify a trigger for each one. Before you can set triggers, retrieve the
container names:
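One way to list the container names, assuming a Deployment object named keycloak:
oc get deployment keycloak -o jsonpath='{.spec.template.spec.containers[*].name}'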
Use the oc set triggers command to configure an image trigger for the container inside the
Deployment object. Use the --from-image option to specify the image stream tag to watch.
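For example, assuming a Deployment object and a container that are both named keycloak:
oc set triggers deployment/keycloak --from-image keycloak:20 --containers keycloak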
OpenShift DeploymentConfig objects natively support image streams, and support automatic
rollout on image change. DeploymentConfig resources have attributes to support image
streams and triggers.
In contrast, Kubernetes Deployment resources do not natively support image streams, and do
not have attributes to store the related configuration. To provide automatic image rollout for
Deployment objects, OpenShift adds the image.openshift.io/triggers annotation to
store the configuration in JSON format.
The fieldPath attribute is a JSONPath expression that OpenShift uses to locate the attribute
that stores the container image name. OpenShift updates that attribute with the new image name
and SHA ID whenever the image stream tag changes.
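The resulting annotation looks similar to the following example. The names follow the keycloak example, and the JSON is stored as a single string:
image.openshift.io/triggers: '[{"from":{"kind":"ImageStreamTag","name":"keycloak:20"},"fieldPath":"spec.template.spec.containers[?(@.name==\"keycloak\")].image"}]'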
For a more concise view, use the oc set triggers command with the name of the
Deployment object as an argument:
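For example, the following command prints output along these lines; the table here is illustrative, and the exact values come from your own resources:
oc set triggers deployment/keycloak
NAME                   TYPE    VALUE                    AUTO
deployments/keycloak   config                           true
deployments/keycloak   image   keycloak:20 (keycloak)   true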
OpenShift uses the configuration trigger to roll out the deployment whenever you change its
configuration, such as to update environment variables or to configure the readiness probe.
OpenShift watches the keycloak:20 image stream tag that the keycloak container uses.
The true value under the AUTO column indicates that the trigger is enabled.
You can disable the configuration trigger by using the oc rollout pause command, and you
can re-enable it by using the oc rollout resume command.
You can disable the image trigger by adding the --manual option to the oc set triggers
command:
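One form of the command, targeting the image trigger from the keycloak example:
oc set triggers deployment/keycloak --from-image keycloak:20 --containers keycloak --manual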
You can remove the triggers from all the containers in the Deployment object by adding the --
remove-all option to the command:
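oc set triggers deployment/keycloak --remove-all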
The image stream tag might change because you manually update it to point to a new version of
the source image. The image stream tag might also change automatically if you configure it for
periodic refresh, by adding the --scheduled option to the oc tag command. When the image
stream tag automatically changes, all the Deployment objects with a trigger that refers to that
image stream tag also roll out.
For OpenShift DeploymentConfig objects, use the oc rollout undo command. The command rolls back the object and disables the image trigger. Disabling the trigger is necessary because otherwise, after the rollback, OpenShift would notice that a newer image, the malfunctioning one, is available, and would roll the deployment out again. You must manually re-enable the image trigger after you fix the issue.
For Kubernetes Deployment objects, you cannot use the oc rollout undo command in the
event of malfunctioning images, because the command does not disable the image triggers. If you
use the command, then OpenShift notices that the deployment is already using the image that the
trigger points to, and therefore does not roll back the application.
For Kubernetes Deployment objects, instead of rolling back the deployment, you revert the image stream tag. When you revert the image stream tag, OpenShift rolls out the Deployment object so that it uses the previous image that the tag points to.
You can rerun the oc import-image and oc tag commands to update the image stream
tag from the source image. If the source image changes, then the commands update the image
stream tag to point to that new version. However, you can use the oc create istag command
only to initially create the image stream tag. You cannot update tags by using that command.
Use the --help option for more details about the commands.
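For example, the first command re-imports the keycloak:20.0 tag from its current source image, and the second command points it to a different source:
oc import-image keycloak:20.0
oc tag quay.io/keycloak/keycloak:20.0.3 keycloak:20.0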
You can create several image stream tags that point to the same image. The following
command creates the keycloak:20 image stream tag, which points to the same image as the
keycloak:20.0.2 image stream tag. In other words, the keycloak:20 image stream tag is an
alias for the keycloak:20.0.2 image stream tag.
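One way to create the alias is the oc tag command with the --alias option, so that the new tag tracks the existing tag:
oc tag keycloak:20.0.2 keycloak:20 --alias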
The oc describe is command reports that both tags point to the same image:
20.0.2 (20)
tagged from quay.io/keycloak/keycloak:20.0.2
* quay.io/keycloak/keycloak@sha256:5569...b311
3 minutes ago
Using aliases is a similar concept to floating tags for container images. Suppose that a new image
version is available in the Quay.io repository. You could create an image stream tag for that new
image:
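oc create istag keycloak:20.0.3 --from-image quay.io/keycloak/keycloak:20.0.3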
20.0.3
tagged from quay.io/keycloak/keycloak:20.0.3
* quay.io/keycloak/keycloak@sha256:c167...62e9
36 seconds ago
20.0.2 (20)
tagged from quay.io/keycloak/keycloak:20.0.2
* quay.io/keycloak/keycloak@sha256:5569...b311
About an hour ago
The keycloak:20 image stream tag does not change. Therefore, the Deployment objects that
use that tag do not roll out.
After testing the new image, you can move the keycloak:20 tag to point to the new image
stream tag:
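One way, again using the --alias option:
oc tag keycloak:20.0.3 keycloak:20 --alias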
20.0.3 (20)
tagged from quay.io/keycloak/keycloak:20.0.3
* quay.io/keycloak/keycloak@sha256:c167...62e9
10 minutes ago
20.0.2
tagged from quay.io/keycloak/keycloak:20.0.2
* quay.io/keycloak/keycloak@sha256:5569...b311
About an hour ago
Because the keycloak:20 image stream tag points to a new image, OpenShift rolls out all the
Deployment objects that use that tag.
If the new application does not work as expected, then you can roll back the deployments by
resetting the keycloak:20 tag to the previous image stream tag:
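For example, again with the --alias option:
oc tag keycloak:20.0.2 keycloak:20 --alias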
By providing a level of indirection, image streams give you control over managing the container
images that you use in your OpenShift cluster.
References
For more information about image triggers, refer to the Triggering Updates on Image
Stream Changes chapter in the Red Hat OpenShift Container Platform 4.12 Images
documentation at
https://access.redhat.com/documentation/en-us/
openshift_container_platform/4.12/html-single/images/index#triggering-updates-
on-imagestream-changes
For more information about image stream tags, refer to the Tagging Images section
in the Managing Images chapter in the Red Hat OpenShift Container Platform 4.12
Images documentation at
https://access.redhat.com/documentation/en-us/
openshift_container_platform/4.12/html-single/images/index#tagging-images
Guided Exercise
Outcomes
• Add an image trigger to a deployment.
This command ensures that all resources are available for this exercise. It also creates the
updates-triggers project and deploys a web application with 10 replicas.
Instructions
1. Log in to the OpenShift cluster as the developer user with the developer password.
Use the updates-triggers project.
2. Inspect the versioned-hello image stream that the lab command created.
2.1. Verify that the lab command enabled the local lookup policy for the versioned-
hello image stream.
2.2. Verify that the lab command created the versioned-hello:1 image stream tag.
The image stream tag refers to the image in the classroom registry by its SHA ID.
Note
To improve readability, the instructions truncate the SHA-256 strings.
On your system, the commands return the full SHA-256 strings. Also, when a command requires a SHA-256 string as a parameter, you must type the full string.
2.3. Verify that the lab command created the versioned-hello:1 image stream tag
from the registry.ocp4.example.com:8443/redhattraining/versioned-
hello:1-123 image.
3. Inspect the Deployment object that the lab command created. Verify that the application
is available from outside the cluster.
3.1. List the Deployment objects. The version deployment retrieved the SHA image ID
from the versioned-hello:1 image stream tag. The Deployment object includes
a container named versioned-hello. You use that information in a later step, when
you configure the trigger.
4.1. Switch back to the first terminal window, and then use the oc set triggers
command to add the trigger for the versioned-hello:1 image stream tag to the
versioned-hello container. Ignore the warning message.
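A command of this form, with the deployment and container names from the preceding steps, adds the trigger:
oc set triggers deployment/version --from-image versioned-hello:1 --containers versioned-hello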
5. Update the versioned-hello:1 image stream tag to point to the 1-125 tag of the
registry.ocp4.example.com:8443/redhattraining/versioned-hello image.
Watch the output of the curl_loop.sh script to verify that the Deployment object
automatically rolls out.
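5.1. Use the oc tag command to point the image stream tag to the new image; the image name comes from the step description:
oc tag registry.ocp4.example.com:8443/redhattraining/versioned-hello:1-125 versioned-hello:1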
5.2. Changing the image stream tag triggered a rolling update. Watch the output of the
curl_loop.sh script in the second terminal.
Before the update, only pods that use the earlier version of the image reply. During
the rolling updates, both old and new pods respond. After the update, only pods that
run the latest version of the image reply. The following output probably differs on
your system.
...output omitted...
Hi!
Hi!
Hi!
Hi!
Hi! v1.1
Hi! v1.1
Hi!
Hi! v1.1
Hi!
Hi! v1.1
Hi! v1.1
Hi! v1.1
Hi! v1.1
...output omitted...
6.1. List the version deployment and notice that the image changed.
6.2. Display the details of the versioned-hello image stream. The versioned-
hello:1 image stream tag points to the image with the same SHA ID as in the
Deployment object.
Notice that the preceding image is still available. In the following step, you roll back to
that image by specifying its SHA ID.
1
tagged from registry.ocp4.example.com:8443/redhattraining/versioned-hello:1-125
* registry.ocp4.example.com:8443/.../versioned-hello@sha256:834d...fcb4
6 minutes ago
registry.ocp4.example.com:8443/.../versioned-hello@sha256:66e0...105e
37 minutes ago
7. Roll back the Deployment object by reverting the versioned-hello:1 image stream
tag.
7.1. Use the oc tag command. For the source image, copy and paste the old image
name and the SHA ID from the output of the preceding command.
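For example, a command along these lines; the SHA-256 string is truncated here, as in the rest of this exercise, and you must type the full string on your system:
oc tag registry.ocp4.example.com:8443/redhattraining/versioned-hello@sha256:66e0...105e versioned-hello:1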
7.2. Watch the output of the curl_loop.sh script in the second terminal. The pods that
run the v1.0 version of the application are responding again. The following output
probably differs on your system.
...output omitted...
Hi! v1.1
Hi! v1.1
Hi! v1.1
Hi! v1.1
Hi!
Hi! v1.1
Hi! v1.1
Hi! v1.1
Hi!
Hi! v1.1
Hi! v1.1
Hi!
Hi!
Hi!
...output omitted...
Press Ctrl+C to quit the script. Close that second terminal when done.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Lab
Outcomes
You should be able to configure Deployment objects with images and triggers, and
configure image stream tags and aliases.
This command ensures that all resources are available for this exercise. It also creates the
updates-review project and deploys two applications, app1 and app2, in that project.
Instructions
The API URL of your OpenShift cluster is https://api.ocp4.example.com:6443, and the oc
command is already installed on your workstation machine.
Log in to the OpenShift cluster as the developer user with the developer password. Use the
updates-review project for your work.
1. Your team created the app1 deployment in the updates-review project from the
registry.ocp4.example.com:8443/redhattraining/php-ssl:latest container
image. Recently, a developer in your organization pushed a new version of the image and
then reassigned the latest tag to that version.
Reconfigure the app1 deployment to use the 1-222 static tag instead of the latest
floating tag, to prevent accidental redeployment of your application with untested image
versions that your developers can publish at any time.
2. The app2 deployment is using the php-ssl:1 image stream tag, which is an alias for the
php-ssl:1-222 image stream tag.
Enable image triggering for the app2 deployment, so that whenever the php-ssl:1 image
stream tag changes, OpenShift rolls out the application. You test your configuration in a later
step, when you reassign the php-ssl:1 alias to a new image stream tag.
3. A new image version, registry.ocp4.example.com:8443/redhattraining/php-
ssl:1-234, is available in the container registry. Your QA team tested and approved that
version. It is ready for production.
Create the php-ssl:1-234 image stream tag that points to the new image. Move the php-
ssl:1 image stream tag alias to the new php-ssl:1-234 image stream tag. Verify that the
app2 application redeploys.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Solution
Outcomes
You should be able to configure Deployment objects with images and triggers, and
configure image stream tags and aliases.
This command ensures that all resources are available for this exercise. It also creates the
updates-review project and deploys two applications, app1 and app2, in that project.
Instructions
The API URL of your OpenShift cluster is https://api.ocp4.example.com:6443, and the oc
command is already installed on your workstation machine.
Log in to the OpenShift cluster as the developer user with the developer password. Use the
updates-review project for your work.
1. Your team created the app1 deployment in the updates-review project from the
registry.ocp4.example.com:8443/redhattraining/php-ssl:latest container
image. Recently, a developer in your organization pushed a new version of the image and
then reassigned the latest tag to that version.
Reconfigure the app1 deployment to use the 1-222 static tag instead of the latest
floating tag, to prevent accidental redeployment of your application with untested image
versions that your developers can publish at any time.
1.3. Verify that the app1 deployment uses the latest tag. Retrieve the container name.
2. The app2 deployment is using the php-ssl:1 image stream tag, which is an alias for the
php-ssl:1-222 image stream tag.
Enable image triggering for the app2 deployment, so that whenever the php-ssl:1 image
stream tag changes, OpenShift rolls out the application. You test your configuration in a later
step, when you reassign the php-ssl:1 alias to a new image stream tag.
2.2. Add the image trigger to the Deployment object. Ignore the warning message.
3.2. Move the php-ssl:1 alias to the new php-ssl:1-234 image stream tag.
3.3. Verify that the app2 application rolls out. The names of the replica sets on your system
probably differ.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Summary
• You reference container images by tags or by SHA IDs. Image developers assign tags to images.
Container registries compute and assign SHA IDs to images.
• Deployment objects have an imagePullPolicy attribute that specifies how compute nodes
pull the image from the registry.
• Deployment objects support the rolling update and the re-create deployment strategies.
• OpenShift image stream and image stream tag resources provide stable references to container
images.
• Kubernetes workload resources, such as Deployments and Jobs, can use image streams. You
must create the image streams in the same project as the Kubernetes resources, and you must
enable the local lookup policy in the image streams to use them.
• You can configure image monitoring in deployments so that OpenShift rolls out the application
whenever the image stream tag changes.
Chapter 8
Comprehensive Review
Goal: Review tasks from OpenShift Administration I - Managing Containers and Kubernetes.
Comprehensive Review
Objectives
After completing this section, you should have reviewed and refreshed the knowledge and skills
that you learned in OpenShift Administration I - Managing Containers and Kubernetes.
• Describe the relationship between OpenShift, Kubernetes, and other Open Source projects, and
list key features of Red Hat OpenShift products and editions.
• Navigate the OpenShift web console to identify running applications and cluster services.
• Navigate the Events, Compute, and Observe panels of the OpenShift web console to assess the
overall state of a cluster.
• Access an OpenShift cluster by using the Kubernetes and OpenShift command-line interfaces.
• Run containers inside pods and identify the host OS processes and namespaces that the
containers use.
• Find containerized applications in container registries and get information about the runtime
parameters of supported and community container images.
• Troubleshoot a pod by starting additional processes on its containers, changing their ephemeral
file systems, and opening short-lived network tunnels.
• Identify the main resources and settings that Kubernetes uses to manage long-lived
applications and demonstrate how OpenShift simplifies common application deployment
workflows.
• Interconnect applications pods inside the same cluster by using Kubernetes services.
• Expose applications to clients outside the cluster by using Kubernetes ingress and OpenShift
routes.
• Provide applications with persistent storage volumes for block and file-based data.
• Match applications with storage classes that provide storage services to satisfy application
requirements.
• Describe how Kubernetes uses health probes during deployment, scaling, and failover of
applications.
• Configure an application with resource requests so Kubernetes can make scheduling decisions.
• Configure an application with resource limits so Kubernetes can protect other applications from
it.
• Relate container image tags to their identifier hashes, and identify container images from pods
and containers on Kubernetes nodes.
• Ensure reproducibility of application deployments by using image streams and short image
names.
• Ensure automatic update of application pods by using image streams with Kubernetes workload
resources.
Lab
Outcomes
You should be able to create and configure OpenShift and Kubernetes resources, such as
projects, secrets, deployments, persistent volumes, services, and routes.
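As the student user on the workstation machine, use the lab command to prepare your system for this exercise. The exercise ID here is assumed from the lab file path:
lab start compreview-deploy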
This command ensures that all resources are available for this exercise. The command also
creates the /home/student/DO180/labs/compreview-deploy/resources.txt file.
The resources.txt file contains the URLs of your OpenShift cluster and the image names
that you use in the exercise. You can use the file to copy and paste these URLs and image
names.
Specifications
The API URL of your OpenShift cluster is https://api.ocp4.example.com:6443, and the oc
command is already installed on your workstation machine.
Log in to the OpenShift cluster as the developer user with the developer password. The
password for the admin user is redhatocp, although you do not need administrator privileges to
complete the exercise.
In this exercise, you deploy a web application and its database for testing purposes. The resulting
configuration is not ready for production, because you do not configure probes and resource
limits, which are required for production. Another comprehensive review exercise covers these
subjects.
• Configure your project so that its workloads refer to the database image by the mysql8:1 short
name.
The classroom setup copied the image from the Red Hat Ecosystem Catalog. The original
image is registry.redhat.io/rhel9/mysql-80:1-228.
– Ensure that the workload resources in the review project can use the mysql8:1 resource.
You create these workload resources in a later step.
• Create the dbparams secret to store the MySQL database parameters. Both the database
and the front-end deployment need these parameters. The dbparams secret must include the
following variables:
Name Value
user operator1
password redhat123
database quotesdb
– The database must automatically roll out whenever the source container in the mysql8:1
resource changes.
To test your configuration, you can change the mysql8:1 image to point to the
registry.ocp4.example.com:8443/rhel9/mysql-80:1-237 container image that
the classroom provides, and then verify that the quotesdb deployment rolls out. Remember
to reset the mysql8:1 image to the registry.ocp4.example.com:8443/rhel9/
mysql-80:1-228 container image before grading your work.
– Define the following environment variables in the deployment from the keys in the dbparams
secret:
Environment variable    Secret key
MYSQL_USER              user
MYSQL_PASSWORD          password
MYSQL_DATABASE          database
– Ensure that OpenShift preserves the database data between pod restarts. This data does
not consume more than 2 GiB of disk space. The MySQL database stores its data under the
/var/lib/mysql directory. Use the lvms-vg1 storage class for the volume.
• Create a quotesdb service to make the database available to the front-end web application.
The database service is listening on port 3306.
QUOTES_HOSTNAME quotesdb
• You cannot yet test the application from outside the cluster. Expose the frontend deployment
so that the application can be reached at http://frontend-review.apps.ocp4.example.com.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Solution
Outcomes
You should be able to create and configure OpenShift and Kubernetes resources, such as
projects, secrets, deployments, persistent volumes, services, and routes.
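Use the lab command to prepare your system for this exercise; the exercise ID is assumed from the lab file path:
lab start compreview-deploy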
This command ensures that all resources are available for this exercise. The command also
creates the /home/student/DO180/labs/compreview-deploy/resources.txt file.
The resources.txt file contains the URLs of your OpenShift cluster and the image names
that you use in the exercise. You can use the file to copy and paste these URLs and image
names.
1. Log in to the OpenShift cluster from the command line, and then create the review project.
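oc login -u developer -p developer https://api.ocp4.example.com:6443
oc new-project review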
2.1. Use the oc create istag command to create the image stream and the image
stream tag.
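A command along these lines creates both; the image is the classroom copy that the specifications describe:
oc create istag mysql8:1 --from-image registry.ocp4.example.com:8443/rhel9/mysql-80:1-228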
2.2. Use the oc set image-lookup command to enable image lookup resolution.
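oc set image-lookup mysql8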
2.3. Run the oc set image-lookup command without any arguments to verify your
work.
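3. Create the dbparams secret with the keys and values that the specifications list. A command along these lines creates it:
oc create secret generic dbparams --from-literal user=operator1 --from-literal password=redhat123 --from-literal database=quotesdb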
4. Create the quotesdb deployment from the mysql8:1 image stream tag. Set the number
of replicas to zero, to prevent OpenShift from deploying the database before you finish its
configuration. Ignore the warning message.
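One way to create the deployment with zero replicas:
oc create deployment quotesdb --image mysql8:1 --replicas 0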
5.1. Retrieve the name of the container from the quotesdb deployment.
5.2. Use the oc set triggers command to add the trigger for the mysql8:1 image
stream tag to the mysql8 container. Ignore the warning message.
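oc set triggers deployment/quotesdb --from-image mysql8:1 --containers mysql8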
6. Add environment variables to the quotesdb deployment from the dbparams secret. Add
the MYSQL_ prefix to each variable name. Ignore the warning message.
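oc set env deployment/quotesdb --from secret/dbparams --prefix MYSQL_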
7. Add a 2 GiB persistent volume to the quotesdb deployment. Use the lvms-vg1 storage
class. Inside the pods, mount the volume under the /var/lib/mysql directory. Ignore the
warning message.
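A command along these lines adds the volume; the volume and claim names are placeholders:
oc set volumes deployment/quotesdb --add --name quotesdb-storage --type persistentVolumeClaim --claim-name quotesdb-storage --claim-class lvms-vg1 --claim-size 2Gi --mount-path /var/lib/mysql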
8.2. Wait for the pod to start. You might have to rerun the command several times for the
pod to report a Running status. The name of the pod on your system probably differs.
9. Create the quotesdb service for the quotesdb deployment. The database server is
listening on port 3306.
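9.1. Use the oc expose command to create the service:
oc expose deployment quotesdb --port 3306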
9.2. Verify that OpenShift associates the IP address of the MySQL server with the
endpoint. The endpoint IP address on your system probably differs.
11. Add environment variables to the frontend deployment from the dbparams secret, and
add the QUOTES_HOSTNAME variable with the quotesdb value.
11.1. Add the variables from the dbparams secret. Add the QUOTES_ prefix to each variable
name. Ignore the warning message.
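oc set env deployment/frontend --from secret/dbparams --prefix QUOTES_
11.2. Add the QUOTES_HOSTNAME variable with the quotesdb value:
oc set env deployment/frontend QUOTES_HOSTNAME=quotesdb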
12. Start the application by scaling up the frontend deployment to one replica.
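12.1. Scale up the deployment:
oc scale deployment/frontend --replicas 1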
12.2. Wait for the pod to start. You might have to rerun the command several times for the
pod to report a Running status. The name of the pod on your system probably differs.
13. Expose the frontend deployment so that the application is accessible from outside the
cluster. The web application is listening on port 8000.
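One possible sequence, assuming the route keeps the default generated host name that the specifications require:
oc expose deployment frontend --port 8000
oc expose service frontend
curl http://frontend-review.apps.ocp4.example.com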
<h1>Quote List</h1>
<ul>
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Lab
Outcomes
You should be able to troubleshoot malfunctioning workloads, configure deployments, and
scale applications.
This command ensures that all resources are available for this exercise. The command also
creates the compreview-scale project and deploys some applications in that project.
Specifications
The API URL of your OpenShift cluster is https://api.ocp4.example.com:6443, and the oc
command is already installed on your workstation machine.
Log in to the OpenShift cluster as the developer user with the developer password. The
password for the admin user is redhatocp.
• A pod in the cluster is consuming excessive CPU and is interfering with other tasks. Identify the
pod and remove its workload.
The application uses two Kubernetes Deployment objects. The frontend deployment
provides the application web pages, and relies on the quotesdb deployment that runs a
MySQL database. The lab command already created the services and routes that connect the
application components and that make the application available from outside the cluster.
– The quotesdb deployment in the compreview-scale project starts a MySQL server, but
the database is failing. Review the logs of the pod to identify and then fix the issue.
Name Value
Username operator1
Password redhat123
– Your security team validated a new version of the MySQL container image that fixes a
security issue. The new container image is registry.ocp4.example.com:8443/rhel9/
mysql-80:1-237.
Update the quotesdb deployment to use this image. Ensure that the database redeploys.
The classroom setup copied the image from the Red Hat Ecosystem Catalog. The original
image is registry.redhat.io/rhel9/mysql-80:1-237.
– Add a probe to the quotesdb deployment so that OpenShift can detect when the database
is ready to accept requests. Use the mysqladmin ping command for the probe.
– Add a second probe that regularly verifies the status of the database. Use the mysqladmin
ping command as well.
– Configure CPU and memory usage for the quotesdb deployment. The deployment needs
200 millicores of CPU and 256 MiB of memory to run, and you must restrict its CPU usage to
500 millicores and its memory usage to 1 GiB.
– Add a probe to the frontend deployment so that OpenShift can detect when the web
application is ready to accept requests. The application is ready when an HTTP request on
port 8000 to the /status path is successful.
– Add a second probe that regularly verifies the status of the web front end. The front end
works as expected when an HTTP request on port 8000 to the /env path is successful.
– Configure CPU and memory usage for the frontend deployment. The deployment needs
200 millicores of CPU and 256 MiB of memory to run, and you must restrict its CPU usage to
500 millicores and its memory usage to 512 MiB.
– Scale the frontend application to three pods to accommodate for the estimated production
load.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Solution
Outcomes
You should be able to troubleshoot malfunctioning workloads, configure deployments, and
scale applications.
This command ensures that all resources are available for this exercise. The command also
creates the compreview-scale project and deploys some applications in that project.
1. Use the OpenShift web console to identify and then delete the pod that consumes excessive
CPU.
1.2. Select Red Hat Identity Management, and then log in as the admin user with the
redhatocp password. Click Skip tour if the Welcome to the Developer Perspective
message is displayed.
1.3. Switch to the Administrator perspective and then navigate to Observe > Dashboards.
1.4. Select the Kubernetes / Compute Resources / Cluster dashboard, and then click
Inspect in the CPU Usage graph.
1.5. Set the zoom to five minutes and then hover over the graph. Notice that the interface
lists the compreview-scale-load namespace in the first position, which indicates that this namespace is the top CPU consumer.
1.6. Navigate to Observe > Dashboards and then select the Kubernetes / Compute
Resources / Namespace (Workloads) dashboard. Select the compreview-
scale-load namespace and then set the time range to the last five minutes. The
computeprime deployment is the workload that consumes excessive CPU.
1.7. Navigate to Workloads > Deployments and then select the compreview-scale-
load project. Select the menu for the computeprime deployment and then click
Delete Deployment. Click Delete to confirm the operation.
2. Review the logs of the pod that is failing for the quotesdb deployment. Set the missing
environment variables in the quotesdb deployment.
2.3. List the pods to identify the failing pod from the quotesdb deployment. The names of
the pods on your system probably differ.
2.4. Retrieve the logs for the failing pod. Some environment variables are missing.
2.5. Add the missing environment variables to the quotesdb deployment. Ignore the
warning message.
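Assuming that the missing variables are the MySQL user and password from the specifications table, a command along these lines sets them:
oc set env deployment/quotesdb MYSQL_USER=operator1 MYSQL_PASSWORD=redhat123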
3.1. Retrieve the name of the container that is running inside the pod. You need the
container name to update its image.
3.4. Wait for the deployment to roll out. You might have to rerun the command several
times for the pod to report a Running status. The name of the pod on your system
probably differs.
4. Add a readiness and a liveness probe to the quotesdb deployment that runs the
mysqladmin ping command.
4.1. Use the oc set probe command with the --readiness option to add the readiness
probe. Ignore the warning message.
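oc set probe deployment/quotesdb --readiness -- mysqladmin ping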
4.2. Use the oc set probe command with the --liveness option to add the liveness
probe. Ignore the warning message.
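oc set probe deployment/quotesdb --liveness -- mysqladmin ping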
5. Define resource limits for the quotesdb deployment. Set the CPU request to 200 millicores
and the memory request to 256 MiB. Set the CPU limit to 500 millicores and the memory
limit to 1 GiB. Ignore the warning message.
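oc set resources deployment/quotesdb --requests cpu=200m,memory=256Mi --limits cpu=500m,memory=1Gi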
6.1. Use the oc set probe command with the --readiness option to add the readiness
probe that tests the /status path on HTTP port 8000. Ignore the warning message.
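oc set probe deployment/frontend --readiness --get-url http://:8000/status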
6.2. Use the oc set probe command with the --liveness option to add the liveness
probe that tests the /env path on HTTP port 8000. Ignore the warning message.
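oc set probe deployment/frontend --liveness --get-url http://:8000/env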
7. Define resource limits for the frontend deployment. Set the CPU request to 200 millicores
and the memory request to 256 MiB. Set the CPU limit to 500 millicores and the memory
limit to 512 MiB. Ignore the warning message.
8.2. Wait for the deployment to scale up. You might have to rerun the command several
times for the pods to report a Running status. The names of the pods on your system
probably differ.
<h1>Quote List</h1>
<ul>
- William Shakespeare
</li>
...output omitted...
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.