Getting GitOps
A practical platform with OpenShift, Argo CD, and Tekton
Wanja Pernath
Foreword by Florian Heubeck, Principal Engineer at MediaMarktSaturn Technology
& Java User Group Ingolstadt Leader
Contents
Foreword
Introduction
Acknowledgments
About the Examples
Chapter 1 – Creating the Sample Service
First Steps
Creating a Database Client
Java Code for Database Operations
Enabling the Database
Quarkus Dev Services
Testing the Database Client
Finalizing the CRUD REST Service
Creating a New Person
Updating an Existing Person
Deleting an Existing Person
Preparing for CI/CD
Moving the Application to OpenShift
Becoming Native
Summary
Chapter 2 – Deployment Basics
Introduction and Motivation
Basic Kubernetes Files
YAML Parser (yq)
OpenShift Templates
Kustomize
More Sophisticated Kustomize Examples
Summary
Chapter 3 – Packaging with Helm and Kubernetes Operators
Modern Distributions
Using an External Registry with Docker or Podman
The Docker and Podman Build Tools
Docker versus Podman and Buildah
Your first build
Testing the image
Working with Skopeo
Foreword
Introduction
Chapter Overview
This book proceeds the way you would when developing your own services. The initial
question is always (after understanding the business requirements, of course), what
software stack to use. In my case, I have decided to use the Java language with the
Quarkus [2] framework. Chapter 1 explains the benefits of Quarkus and shows you how
to use it to develop your microservice.
Once you’ve developed your code, you need to understand how to move it to your target
platform. That is the subject of Chapter 2: Understanding container images and all of the
Kubernetes manifest files, and easily modifying them for later use.
After you’ve decided on your target platform, you might also want to determine how to distribute your application. This task includes packaging that application, which in turn involves choosing the right package format. Chapter 3 discusses possible solutions in detail, with examples.
Once you understand how to package and distribute your application, we will set up a
process to automate the tasks of building the sources and deploying your service to
your test and production stages. Chapter 4 explains how to use Tekton to set up an
integration pipeline for your service.
And finally, Chapter 5 sets up GitOps for your service.
Acknowledgments
The positive feedback I got from readers around the world motivated me to continue writing on this topic. Then one day, my manager asked me, “Wouldn’t it be great to create a book out of these posts?” And Getting GitOps was born.
I want to thank all of the people who helped make this possible:
• My manager, Günter Herold, who came up with the idea for the book.
• Hubert Schweinesbein, who gave the project the final okay.
• Markus Eisele, who helped me find the right contacts.
• Florian Heubeck, for writing the foreword.
• My editor, Andrew Oram, for putting my words into proper and understandable
English.
• My girlfriend, for her patience.
• My dad, for believing in me.
• The cats, for helping me write—and for all their “feedback.”
• Finally, thanks to all of you for reading.
Summary
This book aims to guide your journey to GitOps with OpenShift and Kubernetes.
References
[1] https://www.opensourcerers.org/2021/04/26/automated-application-packaging-and-distribution-with-openshift-basic-development-principles-part-14/
[2] https://quarkus.io
About the Examples
Prerequisites
You will need the following software:
• Red Hat OpenShift 4.8.x (see the next section for instructions)
• Maven 3.8.3
• Java JDK 11 or later
• git
• Docker
• The OpenShift client (oc) matching the version of the OpenShift cluster
• An editor to work with, such as Visual Studio Code, Eclipse, or IntelliJ
• OpenShift also needs to have the following Operators installed:
• OpenShift GitOps
• OpenShift Pipelines
• Crunchy Postgres for Kubernetes by Crunchy Data
One option is a free hosted cluster: go to the Developer Sandbox for Red Hat OpenShift [2] and register for free.
If you prefer to install your own OpenShift cluster instead, the host should have at least:
• 8 vCPU cores
• 32GB free memory
• 150GB free hard disk space
Visit the Red Hat Console [5] to start the installation process. After installation, look at my OpenShift Config script [6], which you can find on GitHub. This script creates persistent volumes, makes the internal registry non-ephemeral, creates a cluster-admin user, and installs necessary operators and a CI environment with Nexus (a Maven repository) and Gogs (a Git repository).
You will also need a registry for your container images; I use Quay.io [7]. The free account, however, limits you to creating only public repositories. As a result, anybody can read from your repository, but only you are allowed to write and update your image.
Once you’ve created your image in the repository, you have to check the image
properties and make sure that the repository is public. By default, Quay.io [7] creates
private repositories.
Chapter 1
The folder person-service contains the Java sources of the Quarkus example.
If you want to deploy it on OpenShift, make sure to first install a PostgreSQL
server, either via Crunchy Data or by instantiating the postgresql-persistent
template as follows:
$ oc new-app postgresql-persistent \
-p POSTGRESQL_USER=wanja \
-p POSTGRESQL_PASSWORD=wanja \
-p POSTGRESQL_DATABASE=wanjadb \
-p DATABASE_SERVICE_NAME=wanjaserver
Chapter 2
This chapter introduces Kubernetes configuration and the Kustomize tool. The configuration files used in this chapter live in the folders raw-kubernetes, ocp-template, kustomize, and kustomize-ext.
Chapter 3
This chapter is about Helm Charts and Kubernetes Operators. The corresponding
folders are helm-chart and kube-operator.
Chapter 4
This chapter is about Tekton and OpenShift Pipelines. The sources can be found in
the folder tekton. Also have a look at the pipeline.sh script, which installs all the
necessary Tasks and resources if you call it with the init parameter:
$ pipeline.sh init
configmap/maven-settings configured
persistentvolumeclaim/maven-repo-pvc configured
persistentvolumeclaim/builder-pvc configured
task.tekton.dev/kustomize configured
task.tekton.dev/maven-caching configured
pipeline.tekton.dev/build-and-push-image configured
Chapter 5
This chapter is about using Tekton and Argo CD. The sources can be found in the gitops folder. To initialize these tools, call the initialization script provided there. This call (if given the --force flag) creates the required namespaces and Argo CD applications.
Whenever the pipeline is successfully executed, you should see an updated message on the person-service-config Git repository.
You should also see that Argo CD has initiated a synchronization process, which ends
with a redeployment of the Quarkus application.
The release script creates a new branch in Git called release-v1.0.1-testing, uses the current dev image, tags it on Quay.io, and updates the stage config in Git.
In order to apply the changes, you need to either merge the branch directly or create a
pull request and then merge the changes.
References
[1] https://github.com/wpernath/book-example
[2] https://developers.redhat.com/developer-sandbox
[3] https://github.com/code-ready/crc/releases
[4] https://access.redhat.com/documentation/en-us/red_hat_codeready_containers/1.33/html-single/getting_started_guide/index
[5] https://console.redhat.com/openshift/assisted-installer/clusters/~new
[6] https://github.com/wpernath/openshift-config
[7] https://quay.io/
[8] https://github.com/wpernath/book-example
Chapter 1
Creating the Sample Service
This book needs a good example to demonstrate the power of working with Kubernetes and Red Hat OpenShift. Because it’s an application that will be familiar and useful to
most readers, we’ll create a REST-based microservice in Java that reads data from and
writes data to a database.
I have been coding for over two decades with Java and Java Enterprise Edition (Java EE).
I have used most of the frameworks out there. (Does anyone remember Struts or JBoss
Seam or SilverStream?) I’ve even created code generators to make my life easier with
Enterprise JavaBeans (EJB) (1.x and 2.x). Those frameworks and ideas tried to minimize
development effort, but they all had drawbacks.
Then, back in 2020, when I thought that there was nothing out there that could really, positively surprise me, I had a look at Quarkus [1.1]. It enchanted me because it
provided interfaces to all the common open source tools for containerization and
cloud deployment. What’s more, it included a dev mode that took away the tedious
compilation tasks.
Note: For a thorough introduction to Quarkus, check out the e-book Quarkus for Spring Developers [1.2] from Red Hat Developer.
First Steps
Quarkus has a Get Started [1.3] page. Go there to have a look at how to install the
command-line interface (CLI) tool, which is called quarkus.
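Once the CLI is installed, scaffolding a new project is a single command. A sketch of that call (the Maven coordinates are inferred from the generated sources shown below):
$ quarkus create app org.wanja.demo:person-service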
applying codestarts...
java
maven
quarkus
config-properties
dockerfiles
maven-wrapper
resteasy-codestart
-----------
[SUCCESS] quarkus project has been successfully generated in:
--> /Users/wpernath/Devel/quarkus/person-service
-----------
Navigate into this directory and get started: quarkus dev
A successful initialization creates an initial Maven project with the following structure:
$ tree
.
├── README.md
├── mvnw
├── mvnw.cmd
├── pom.xml
└── src
    ├── main
    │   ├── docker
    │   │   ├── Dockerfile.jvm
    │   │   ├── Dockerfile.legacy-jar
    │   │   ├── Dockerfile.native
    │   │   └── Dockerfile.native-distroless
    │   ├── java
    │   │   └── org
    │   │       └── wanja
    │   │           └── demo
    │   │               └── GreetingResource.java
    │   └── resources
    │       ├── META-INF
    │       │   └── resources
    │       │       └── index.html
    │       └── application.properties
    └── test
        └── java
            └── org
                └── wanja
                    └── demo
                        ├── GreetingResourceTest.java
                        └── NativeGreetingResourceIT.java

15 directories, 13 files
Now start the development mode via Maven:
$ mvn quarkus:dev
Or if you prefer to use the Quarkus CLI tool, you can also call:
$ quarkus dev
These commands compile all the sources and start the development mode of your
project, where you don’t need to specify any runtime environment (Tomcat, Red Hat
JBoss, etc.).
Let’s have a look at the generated GreetingResource.java file, which you can find
under src/main/java/org/wanja/demo:
package org.wanja.demo;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/hello")
public class GreetingResource {
    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "Hello RESTEasy";
    }
}
Call the service, here with httpie [1.4]:
$ http :8080/hello
HTTP/1.1 200 OK
Content-Type: text/plain;charset=UTF-8
content-length: 14
Hello RESTEasy
Let’s go a little bit deeper. We’ll change the string Hello RESTEasy (say, to Hi Yay!) and call the service again, without restarting quarkus dev—that’s the key point.
$ http :8080/hello
HTTP/1.1 200 OK
Content-Type: text/plain;charset=UTF-8
content-length: 7
Hi Yay!
OK, this is getting interesting now. You don’t have to recompile or restart Quarkus to
see your changes in action.
Because it’s limiting to put hard-coded strings directly into Java code, let’s switch to feeding in the strings from a configuration file, as described in the Quarkus documentation about configuring your application [1.5]. To reconfigure the application, open src/main/resources/application.properties in your preferred editor and create a new property. For example:
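# the property name is the one read by the code below; the value is illustrative
app.greeting=Hello, dear Quarkus people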
Then go into the GreetingResource and create a new property on the class level:
package org.wanja.demo;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.eclipse.microprofile.config.inject.ConfigProperty;

@Path("/hello")
public class GreetingResource {
    @ConfigProperty(name = "app.greeting")
    String greeting;

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return greeting;
    }
}
$ http :8080/hello
HTTP/1.1 200 OK
Content-Type: text/plain;charset=UTF-8
content-length: 25
Again, you haven’t recompiled or restarted the service. Quarkus watches for any changes in the source tree and takes the required actions automatically.

Creating a Database Client
First, you need to add the required extensions to your project. The following command installs a JDBC driver for PostgreSQL and everything needed for ORM with Panache [1.6]:
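A sketch using the Quarkus CLI (short names for the quarkus-hibernate-orm-panache and quarkus-jdbc-postgresql extensions):
$ quarkus extension add hibernate-orm-panache jdbc-postgresql
With the extensions in place, define the Person entity: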
package org.wanja.demo;

import javax.persistence.Column;
import javax.persistence.Entity;

import io.quarkus.hibernate.orm.panache.PanacheEntity;

@Entity
public class Person extends PanacheEntity {
    @Column(name = "first_name")
    public String firstName;

    @Column(name = "last_name")
    public String lastName;

    // mapped to the salutation column used by the REST examples below
    public String salutation;
}

According to the docs, this should define the Person entity, which maps directly to a person table in our PostgreSQL database. All public properties will be mapped automatically to the corresponding column in the database. If you don’t want a property mapped, you need to mark it with the @Transient annotation.
You also need a PersonResource class to act as a REST endpoint. Let’s create that
simple class:
package org.wanja.demo;

import java.util.List;

import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import io.quarkus.panache.common.Sort;

@Path("/person")
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public class PersonResource {
    @GET
    public List<Person> getAll() throws Exception {
        return Person.findAll(Sort.ascending("last_name")).list();
    }
}
Right now, this class has exactly one method, getAll(), which simply returns a list of all
persons sorted by the last_name column.
Next, configure Hibernate and the datasource in application.properties:
quarkus.hibernate-orm.database.generation=drop-and-create
quarkus.hibernate-orm.log.format-sql=true
quarkus.hibernate-orm.log.sql=true
quarkus.hibernate-orm.sql-load-script=import.sql
quarkus.datasource.db-kind=postgresql
And then make a simple SQL import script to fill some basic data into the database.
Create a new file called src/main/resources/import.sql and put the following
lines in there:
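A sketch of that script; the startup log below shows two of its rows, and the full example contains a few more singers:
insert into person(id, first_name, last_name, salutation) values (nextval('hibernate_sequence'), 'Doro', 'Pesch', 'Mrs');
insert into person(id, first_name, last_name, salutation) values (nextval('hibernate_sequence'), 'Bobby', 'Brown', 'Mr');
-- ...add as many rows as you like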
You can now restart quarkus dev with everything you need:
$ quarkus dev
2021-12-15 13:39:47,725 INFO [io.qua.dat.dep.dev.DevServicesDatasourceProcessor]
(build-26) Dev Services for the default datasource (postgresql) started.
Hibernate:
Hibernate:
    insert into person(id, first_name, last_name, salutation) values
    (nextval('hibernate_sequence'), 'Doro', 'Pesch', 'Mrs')
Hibernate:
    insert into person(id, first_name, last_name, salutation) values
    (nextval('hibernate_sequence'), 'Bobby', 'Brown', 'Mr')
The first time I started Quarkus, I expected exceptions because no PostgreSQL database was installed locally on my laptop. And yet, no exception upon startup. How could that be?
This is where Quarkus Dev Services come into play. As soon as Quarkus detects a
dependency on a third-party component (a database, MQ, cache, etc.) and you have
Docker Desktop installed on your developer machine, Quarkus starts the component
for you. You don’t have to configure anything. It just happens.
Have a look at the official Quarkus documentation [1.7] to see which components are
currently supported in this manner in dev mode.
$ http :8080/person
HTTP/1.1 500 Internal Server Error
Content-Type: text/html;charset=UTF-8
content-length: 113
OK. Well. It does not work. Why? We need a MessageBodyWriter for this response type. Looking at the class PersonResource, you can see that we are directly returning a response of type java.util.List<Person>, and we have a global producer annotation of application/json. We need a component that translates the result into a JSON string.
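The missing piece is a JSON serializer extension; a sketch with the Quarkus CLI (the Jackson-based quarkus-resteasy-jackson is one option; the JSON-B variant works just as well):
$ quarkus extension add resteasy-jackson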
If you now call the endpoint again, you should see the correctly resolved and formatted
output:
$ http :8080/person
HTTP/1.1 200 OK
Content-Type: application/json
content-length: 741
[
  {
    "firstName": "Bobby",
    "id": 2,
    "lastName": "Brown",
    "salutation": "Mr"
  },
  {
    "firstName": "Eric",
    "id": 11,
    "lastName": "Clapton",
    "salutation": "Mr"
  },
  {
    "firstName": "Kurt",
    "id": 4,
    "lastName": "Cobain",
    "salutation": "Mr"
  },
...
Creating a New Person
To create a new person, add a POST method to the PersonResource:
@POST
@Transactional
public Response create(Person p) {
    if (p == null || p.id != null)
        throw new WebApplicationException("id != null");
    p.persist();
    return Response.ok(p).status(200).build();
}
The only Panache-specific call here is persist(), invoked on the given Person instance. This is known as the active record pattern [1.8].
Creating a new person returns the persisted entity, including its generated id:
{
    "firstName": "Carlos",
    "id": 12,
    "lastName": "Santana",
    "salutation": "Mr"
}
Updating an Existing Person
Updating works much the same way:
@PUT
@Transactional
@Path("{id}")
public Person update(@PathParam Long id, Person p) {
    Person entity = Person.findById(id);
    if (entity == null) {
        throw new WebApplicationException("Person with id of " + id
            + " does not exist.", 404);
    }
    if (p.salutation != null) entity.salutation = p.salutation;
    if (p.firstName != null) entity.firstName = p.firstName;
    if (p.lastName != null) entity.lastName = p.lastName;
    return entity;
}
The updated entity is returned as JSON:
{
    "firstName": "Jimi",
    "id": 6,
    "lastName": "Hendrix",
    "salutation": "Mr"
}
Deleting an Existing Person
Finally, the DELETE method:
@DELETE
@Path("{id}")
@Transactional
public Response delete(@PathParam Long id) {
    Person entity = Person.findById(id);
    if (entity == null) {
        throw new WebApplicationException("Person with id of " + id +
            " does not exist.", 404);
    }
    entity.delete();
    return Response.status(204).build();
}
But how can you specify real database properties when entering test or production
stages?
Quarkus supports configuration profiles [1.9]. Properties marked with a given profile
name are used only if the application runs in that particular profile. By default, Quarkus
supports the following profiles:
• dev: Gets activated when you run your application via quarkus dev
• test: Gets activated when you are running tests
• prod: The default profile when the application is not running in dev or test mode
In our case, you want to specify database-specific properties only in prod mode. If
you specified a database URL in dev mode, for example, Quarkus would try to use that
database server instead of starting the corresponding Dev Services as you want.
# only in production
%prod.quarkus.hibernate-orm.database.generation=update
%prod.quarkus.hibernate-orm.sql-load-script=no-file
# Datasource settings...
# note, we only set those props in prod mode
quarkus.datasource.db-kind=postgresql
%prod.quarkus.datasource.username=${DB_USER}
%prod.quarkus.datasource.password=${DB_PASSWORD}
%prod.quarkus.datasource.jdbc.url=jdbc:postgresql://${DB_HOST}/${DB_DATABASE}
Quarkus also supports the use of property expressions [1.10]. For instance, if your application is running on Kubernetes, you might want to specify the datasource username and password via a secret. In this case, use the ${PROP_NAME} expression format to refer to the property that was set in the file. Those expressions are evaluated when they
are read. The property names are either specified in the application.properties
file or come from environment variables.
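Moving the Application to OpenShift
To produce a container image and the necessary OpenShift manifests, add two more extensions. A sketch with the Quarkus CLI (short names for quarkus-container-image-jib and quarkus-openshift):
$ quarkus extension add container-image-jib openshift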
The jib extension helps you generate a container image out of the application. The
openshift extension generates the necessary manifest files to deploy the application
on, well, OpenShift.
You can tune the generated manifests via application.properties. For example:
# resource limits
quarkus.openshift.resources.requests.memory=128Mi
quarkus.openshift.resources.requests.cpu=250m
quarkus.openshift.resources.limits.memory=256Mi
quarkus.openshift.resources.limits.cpu=500m
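A build-and-push invocation might then look like this (the quarkus.container-image.* switches are documented in the Quarkus container-image guide; registry credentials are assumed to be configured):
$ mvn clean package -DskipTests \
    -Dquarkus.container-image.build=true \
    -Dquarkus.container-image.push=true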
After building the image, you can install the application in OpenShift by applying the
manifest file:
$ oc apply -f target/kubernetes/openshift.yml
service/person-service configured
imagestream.image.openshift.io/person-service configured
deployment.apps/person-service configured
route.route.openshift.io/person-service configured
Then, create a PostgreSQL database instance in the same namespace from the corresponding template. You can install the database from the OpenShift console by clicking
+Add → Developer Catalog → Database → PostgreSQL and filling in meaningful properties
for the service name, username, password, and database name. You could alternatively
execute the following command from the shell to instantiate a PostgreSQL server in
the current namespace:
$ oc new-app postgresql-persistent \
-p POSTGRESQL_USER=wanja \
-p POSTGRESQL_PASSWORD=wanja \
-p POSTGRESQL_DATABASE=wanjadb \
-p DATABASE_SERVICE_NAME=wanjaserver
For this test, you can extend the datasource properties with default values:
%prod.quarkus.datasource.username=${DB_USER:wanja}
%prod.quarkus.datasource.password=${DB_PASSWORD:wanja}
%prod.quarkus.datasource.jdbc.url=jdbc:postgresql://${DB_HOST:wanjaserver}/${DB_DATABASE:wanjadb}
Quarkus takes the values after the colon as defaults, which means you don’t have to create those environment variables in the Deployment file for this test. But if you want to use a secret or ConfigMap, have a look at the corresponding extension [1.14] for Quarkus.
After restarting the person-service, you should see that the database is used and that the person table was created. But there is no data in the database, because you’ve defined the corresponding import script property to be used in dev mode only.
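You can add data through the REST API; a sketch with httpie (the route hostname is the one from my test cluster, and the names match the verification output below):
$ http POST http://person-service.apps.art8.ocp.lan/person firstName=Jimi lastName=Hendrix salutation=Mr
$ http POST http://person-service.apps.art8.ocp.lan/person firstName=Joe lastName=Cocker salutation=Mr
$ http POST http://person-service.apps.art8.ocp.lan/person firstName=Carlos lastName=Santana salutation=Mr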
You should now have three singers in the database. To verify, call:
$ http http://person-service.apps.art8.ocp.lan/person
HTTP/1.1 200 OK
[
  {
    "firstName": "Joe",
    "id": 2,
    "lastName": "Cocker",
    "salutation": "Mr"
  },
  {
    "firstName": "Jimi",
    "id": 1,
    "lastName": "Hendrix",
    "salutation": "Mr"
  },
  {
    "firstName": "Carlos",
    "id": 3,
    "lastName": "Santana",
    "salutation": "Mr"
  }
]
Becoming Native
Do you want to create a native executable out of your Quarkus application? You can do
that easily by running:
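For example, with the Quarkus CLI:
$ quarkus build --native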
However, this command requires you to set up GraalVM [1.15] locally. GraalVM is a Java
compiler that creates native executables from Java sources. If you don’t want to install
and set up GraalVM locally or if you’re always building for a container runtime, you
could instruct Quarkus to do a container build [1.16] as follows:
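A sketch of such a container build (quarkus.native.container-build is the documented switch):
$ quarkus build --native -Dquarkus.native.container-build=true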
Try it. And if you’re using OpenShift 4.9, you could have a look at the Observe tab within the Developer Console. This page monitors the resources used by a container image.
My OpenShift 4.9 instance is installed on an Intel NUC with a Core i7 with six cores and 64GB of RAM. Using a native image instead of a JVM-based one changes quite a few things: most notably, startup time and memory consumption drop significantly.
Summary
Using Quarkus dramatically reduces the lines of code you have to write. As you have
seen, creating a simple REST CRUD service is a piece of cake. If you then want to move
your application to Kubernetes, it’s just a matter of adding another extension to the
build process.
Thanks to Dev Services, you’re even able to do fast prototyping without worrying
about installing many third-party applications, such as databases.
Minimizing the amount of boilerplate code makes your application easier to maintain
and lets you focus on what you need to do: Implement the business case.
In Chapter 2, we’ll have a deeper look into working with images on Kubernetes and
OpenShift.
References
[1.1] https://quarkus.io
[1.2] https://red.ht/quarkus-spring-devs
[1.3] https://quarkus.io/get-started/
[1.4] https://httpie.io
[1.5] https://quarkus.io/guides/config
[1.6] https://quarkus.io/guides/hibernate-orm-panache
[1.7] https://quarkus.io/guides/dev-services
[1.8] https://quarkus.io/guides/hibernate-orm-panache#solution-1-using-the-active-record-pattern
[1.9] https://quarkus.io/guides/config-reference#profiles
[1.10] https://quarkus.io/guides/config-reference#property-expressions
[1.11] https://quarkus.io/guides/deploying-to-openshift
[1.12] https://quay.io
[1.13] https://github.com/GoogleContainerTools/jib
[1.14] https://quarkus.io/guides/kubernetes-config
[1.15] https://www.graalvm.org
[1.16] https://quarkus.io/guides/building-native-image#container-runtime
Chapter 2
Deployment Basics
This chapter discusses how applications are deployed in Kubernetes and OpenShift,
what manifest files are involved, and how to change the files so that you can redeploy
your application into a new, clean namespace without rebuilding it.
The chapter also discusses OpenShift Templates and Kustomize, tools that help
automate those necessary file changes.
For a typical application, I need at least the following manifest files:
• ConfigMaps and secrets (well, I have to store my application configuration somehow, anyway)
• The deployment.yaml file
• The service.yaml file
• An ingress.yaml or route.yaml file
• A PersistentVolumeClaim.yaml file
In native Kubernetes, I have to take care of creating and maintaining those artifacts.
Thanks to the Source-to-Image concept in OpenShift, I don’t have to worry about most
of those manifest files because they will be generated for me.
The following commands create a new project named book-dev in OpenShift, followed
by a new application called person-service. The application is based on the Java
builder image openjdk-11-ubi8 and takes its source code from GitHub. The final
command effectively publishes the service so that applications from outside of
OpenShift can interact with it:
$ oc new-project book-dev
$ oc new-app java:openjdk-11-ubi8~https://github.com/wpernath/book-example.git \
    --context-dir=person-service --name=person-service \
    --build-env MAVEN_MIRROR_URL=http://nexus.ci:8081/repository/maven-public/
$ oc expose service/person-service
route.route.openshift.io/person-service exposed
If you don’t have a local Maven mirror, omit the --build-env parameter from the second command. The --context-dir option lets you specify a subfolder within the Git
repository with the actual source files.
OpenShift generates the security settings, deployment, image, route, and service
for you (as well as some OpenShift-specific files, such as DeploymentConfig or
ImageStream). These OpenShift conveniences allow you to focus entirely on application
development.
In order to let the example start successfully, we have to create a PostgreSQL database
server as well. Just execute the following command (we will discuss it later):
$ oc new-app postgresql-persistent \
-p POSTGRESQL_USER=wanja \
-p POSTGRESQL_PASSWORD=wanja \
-p POSTGRESQL_DATABASE=wanjadb \
-p DATABASE_SERVICE_NAME=wanjaserver
Let’s look at the most important generated manifest files:
• Deployment: A deployment connects the image with a container and provides various runtime information, including environment variables, startup scripts, and config maps. This configuration file also defines the ports used by the application.
• DeploymentConfig: This file is specific to OpenShift and contains mainly the same functionality as a Deployment. If you’re starting today with your OpenShift tour, use Deployment instead of this file.
• Service: A service contains the runtime information Kubernetes needs to load balance your application over different instances (pods).
• Route: A route defines the external URL exposed by your application. Requests from clients are received at this URL.
• ConfigMap: ConfigMaps contain, well, configurations for the application.
• Secret: Like a ConfigMap, a secret contains configuration data, but for sensitive values such as passwords (stored base64-encoded).
Once those files are automatically generated, you can get them by using kubectl or oc:
$ oc get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
person-service 1/1 1 1 79m
By specifying the -o yaml option, you can get the complete descriptor:
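$ oc get deployment person-service -o yaml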
Just pipe the output into a new .yaml file, and you’re done. You can directly use this file
to create your application in a new namespace (except for the image section). But the
generated file contains a lot of text you don’t need, so it’s a good idea to pare it down. For
example, you can safely remove the managedFields section, big parts of the metadata
section at the beginning, and the status section at the end of each file. After stripping the
file down to the relevant parts (shown in the following listing), add it to your Git repository:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: person-service
  name: person-service
spec:
  replicas: 1
  selector:
    matchLabels:
      deployment: person-service
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        deployment: person-service
    spec:
      containers:
      - image: image-registry.openshift-image-registry.svc:5000/book-dev/person-service:latest
        imagePullPolicy: IfNotPresent
        name: person-service
        ports:
        - containerPort: 8080
          protocol: TCP
      restartPolicy: Always
Do the same with Route and Service. That’s all for the present. You can now create
your application in a new namespace by entering:
$ oc new-project book-test
$ oc policy add-role-to-user system:image-puller \
    system:serviceaccount:book-test:default --namespace=book-dev
$ oc apply -f raw-kubernetes/service.yaml
$ oc apply -f raw-kubernetes/deployment.yaml
$ oc apply -f raw-kubernetes/route.yaml
This section has described one way of getting the required files. Of course, if you have
more elements to your application, you will need to export the files defining those
elements as well. If you have defined objects of type PersistentVolumeClaim,
ConfigMap, or Secret, you need to export them and strip them down as well.
This simple example has shown how you can export your application’s manifest files to
redeploy it into another clean namespace. Typically, you have to change some fields to
reflect differences between environments, especially for the Deployment file.
For example, it does not make sense to use the latest image from the book-dev
namespace in the book-test namespace. You’d always have the same version of
your application in the development and test environments. To allow the environments
to evolve separately, you have to change the image in the Deployment on every
stage you’re using. You could do this manually, of course. But let’s find some ways to
automate it.
YAML Parser (yq)
yq [2.1] is a lightweight command-line YAML processor that can automate such changes. There are ports available for most operating systems. On macOS, you can install it via Homebrew [2.2]:
$ brew install yq
To read the name of the image out of the Deployment file, you could enter:
$ yq e '.spec.template.spec.containers[0].image' raw-kubernetes/deployment.yaml
image-registry.openshift-image-registry.svc:5000/book-dev/person-service@sha256:…

$ yq e -i '.spec.template.spec.containers[0].image = "image-registry.openshift-image-registry.svc:5000/book-dev/person-service:latest"' \
    raw-kubernetes/deployment.yaml
This command updates the Deployment in place, changing the name of the image to
person-service:latest.
To promote the application to a test stage, you could now:
• Tag the currently used image in book-dev to something more meaningful, such as
person-service:v1.0.0-test.
• Use yq to change the image name in the deployment.
• Create a new namespace.
• Apply the necessary Deployment, Service, and Route configuration files as
shown earlier.
This process could easily be scripted in a shell script, for example:
#!/bin/bash
oc tag book-dev/person-service@sha256:… book-dev/person-service:stage-v1.0.0
yq e -i ...
oc new-project ...
oc apply -f deployment.yaml
You can find more details on this topic in my article Release Management with OpenShift: Under the hood [2.3].
Using a tool such as yq seems to be the easiest way to automate the processing of
Kubernetes manifest files. However, this process forces you to create and maintain a script
with each of your projects. It might be the best solution for small teams and small projects,
but as soon as you’re responsible for more applications, the demands could quickly get
out of control.
OpenShift Templates
OpenShift Templates provides an easy way to create a single file out of the required
configuration files and add customizable parameters to the unified file. As the name
indicates, the service is offered only on OpenShift and is not portable to a generic
Kubernetes environment.
First, create all the standard configurations shown near the beginning of this chapter
(such as route.yaml, deployment.yaml, and service.yaml). However, you don’t
have to separate the configurations into specific files. Next, to create a new template
file, open your preferred editor and create a file called template.yaml. The header of
that file should look like this:
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: service-template
  annotations:
    tags: java
    iconClass: icon-rh-openjdk
    openshift.io/display-name: The person service template
    description: This Template creates a new service
objects:
Then add the configurations you want to combine into this file right under the objects tag. Values that you want to change from one system to another should be specified as parameters in the format ${PARAMETER}. For instance, a typical service configuration might look like this in the example template.yaml:
- apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: ${APPLICATION_NAME}
    name: ${APPLICATION_NAME}
  spec:
    ports:
    - name: 8080-tcp
      port: 8080
      protocol: TCP
    selector:
      app: ${APPLICATION_NAME}
parameters:
- name: APPLICATION_NAME
  description: The name of the application you'd like to create
  displayName: Application Name
  required: true
  value: person-service
- name: IMAGE_REF
  description: The full image path
  displayName: Container Image
  required: true
  value: image-registry.openshift-image-registry.svc:5000/book-dev/person-service:latest
Now for the biggest convenience offered by OpenShift Templates: Once you have
instantiated a template in an OpenShift namespace, you can use the template to create
applications within the graphical user interface (UI):
$ oc new-project book-template
$ oc policy add-role-to-user system:image-puller \
    system:serviceaccount:book-template:default --namespace=book-dev
$ oc apply -f ocp-template/service-template.yaml
template.template.openshift.io/service-template created
Now, open the OpenShift web console, choose the project, click +Add, and choose the
Developer Catalog. You should find a template called service-template (Figure 2.1).
This is the one we’ve created.
Instantiate the template and fill in the required fields (Figure 2.2).
Then click Create. After a short time, you should see the application’s deployment progressing. Once it is finished, you should be able to access the route of the application.
There are also several ways to create an application instance out of a template without
the UI. You can run an oc command to do the work within OpenShift:
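A sketch of such a call (the parameter names are the ones defined in the template above):
$ oc new-app service-template \
    -p APPLICATION_NAME=simple-service \
    -p IMAGE_REF=image-registry.openshift-image-registry.svc:5000/book-dev/person-service:latest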
* With parameters:
    * Application Name=simple-service
    * Container Image=image-registry.openshift-image-registry.svc:5000/book-dev/person-service:latest
Whatever method you choose to process the template, results show up in your Topology view for the project (Figure 2.3).
Figure 2.3: The OpenShift UI after using the template in several ways.
I would like OpenShift Templates even better if I could extend the development process to other teams. I would like to be able to create a template of a standard application (including a BuildConfig, etc.) and import it into the global openshift namespace so that all users could reuse my base—just like the other OpenShift Templates shipped with any OpenShift installation.
Unfortunately, OpenShift Templates is for OpenShift only. If you are using a local Kubernetes installation and a production OpenShift version, the template is not easy to reuse. But if your development and production environments are completely based on
OpenShift, you should give it a try.
Kustomize
Kustomize is a command-line tool that edits Kubernetes YAML configuration files in
place, similar to yq. Kustomize tends to be easy to use because usually, only a few
properties of configuration files have to be changed from stage to stage. Therefore,
you start by creating a base set of files (Deployment, Service, Route etc.) and apply
changes through Kustomize for each stage. Kustomize’s patch mechanism takes care of
merging the files together.
Kustomize is very handy if you don’t want to learn a new templating engine and maintain
a file that could easily contain thousands of lines, as happens with OpenShift Templates.
Kustomize was originally created by Google and is now a subproject of Kubernetes. The
command-line tools, such as kubectl and oc, have most of the necessary functionality
built-in.
A typical Kustomize layout for the example looks like this:
$ tree kustomize
kustomize
├── base
│ ├── deployment.yaml
│ ├── kustomization.yaml
│ ├── route.yaml
│ └── service.yaml
└── overlays
├── dev
│ ├── deployment.yaml
│ ├── kustomization.yaml
│ └── route.yaml
└── stage
├── deployment.yaml
├── kustomization.yaml
└── route.yaml
4 directories, 10 files
The top-level directory contains the base files and an overlays subdirectory. The base
files define the resources that Kubernetes or OpenShift need to deploy your application.
You should be familiar with these files from the previous sections of this chapter.
The base folder’s kustomization.yaml ties the resources together:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  org: wanja.org
resources:
- deployment.yaml
- service.yaml
- route.yaml
This file defines the resources for the deployment (Deployment, Service, and Route)
and adds a section called commonLabels. Those labels will be applied to all resources
generated by Kustomize.
The following commands process the files and deploy our application on OpenShift:
$ oc new-project book-kustomize
$ oc apply -k kustomize/overlays/dev
service/dev-person-service created
deployment.apps/dev-person-service created
route.route.openshift.io/dev-person-service created
If you also install the Kustomize command-line tool (for example, with brew install
kustomize on macOS), you can debug the output:
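$ kustomize build kustomize/overlays/dev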
A significant benefit of Kustomize is that you have to maintain only the differences between each stage, so the overlay files are quite small and clear. If a file does not change between stages, it does not need to be duplicated.
The following command merges the files for the staging overlay:
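$ kustomize build kustomize/overlays/stage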
The following output shows that all resource names now have staging- as a prefix. Additionally, the configuration has a new commonLabel (the variant: staging line) and annotation (note: we are on staging now):
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    note: We are on staging now
    stage: staging
  labels:
    app: person-service
    org: wanja.org
    variant: staging
  name: staging-person-service
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  selector:
    matchLabels:
      deployment: person-service
      org: wanja.org
      variant: staging
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        note: We are on staging now
        stage: staging
      labels:
        deployment: person-service
        org: wanja.org
        variant: staging
    spec:
      containers:
      - env:
        - name: APP_GREETING
          value: Hey, this is the STAGING environment of the App
        image: image-registry.openshift-image-registry.svc:5000/book-dev/person-service:latest
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  annotations:
    note: We are on staging now
    stage: staging
  labels:
    app: person-service
    org: wanja.org
    variant: staging
  name: staging-person-service
spec:
  port:
    targetPort: 8080-tcp
  to:
    kind: Service
    name: staging-person-service
    weight: 100
  wildcardPolicy: None
The global org label is still specified. You can deploy the stage to OpenShift with the
following command:
$ oc apply -k kustomize/overlays/stage
The dev overlay’s kustomization.yaml shows how the overlay mechanism works:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
namePrefix: dev-
commonLabels:
  variant: development
# generate a configmap
configMapGenerator:
- name: app-config
  literals:
  - APP_GREETING=We are in DEVELOPMENT mode
Newer versions of Kustomize (4.x and above) provide specific fields that help you maintain your overlays even better. For example, if all you have to do is change the tag of the target image, you can simply use the images field, as sketched below.
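A minimal sketch of such an entry in the overlay’s kustomization.yaml (the tag value is illustrative):
images:
- name: image-registry.openshift-image-registry.svc:5000/book-dev/person-service
  newTag: v1.0.1-test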
The patches parameter can issue a patch on a list of target files, such as replacing the
target service name of the Route (as shown in the following listing) or adding health
checks for the application in the Deployment file:
# this patch needs to be done, because kustomize does not change the
# route target service name
patches:
- patch: |-
    - op: replace
      path: /spec/to/name
      value: dev-person-service
  target:
    kind: Route
The following file is the patch itself and gets applied to the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: person-service
spec:
  template:
    spec:
      containers:
      - name: person-service
        readinessProbe:
          httpGet:
            path: /q/health/ready
            port: 8080
            scheme: HTTP
          timeoutSeconds: 1
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
        livenessProbe:
          httpGet:
            path: /q/health/live
            port: 8080
            scheme: HTTP
          timeoutSeconds: 2
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
You can even generate the ConfigMap based on fixed parameters or properties files:
# generate a configmap
configMapGenerator:
- name: app-config
  literals:
  - APP_GREETING=We are in DEVELOPMENT mode
Starting with Kubernetes release 1.21 (which is reflected in OpenShift 4.8.x), oc and
kubectl contain advanced Kustomize features from version 4.0.5. Kubernetes 1.22
(OpenShift 4.9.x) will contain features of Kustomize 4.2.0.
Before Kubernetes 1.21 (OpenShift 4.7.x and earlier) oc apply -k does not contain
recent Kustomize features. So if you want to use those features, you need to use the
kustomize command-line tool and pipe the output to oc apply -f.
For more information and even more sophisticated examples, have a look at the Kustomize home page [2.4] as well as the examples in the official GitHub repository [2.5].
Argo CD has built-in support for Kustomize as well, so if you’re doing CI/CD with Argo
CD, you can still use Kustomize.
Summary
In this chapter we saw how to build an application using OpenShift’s Source-to-Image (S2I)
technology, along with the YAML parser, OpenShift Templates, and Kustomize. These are
the base technologies for automating application deployment and packaging.
Now you have an understanding of which artifacts need to be taken into account when
you want to release your application and how to modify those artifacts to make sure
that the new environment is capable of handling your application.
In the next chapter, we’ll discuss Helm charts and Kubernetes Operators for application
packaging and distribution.
References
[2.1] https://github.com/mikefarah/yq
[2.2] https://brew.sh
[2.3] https://www.opensourcerers.org/2017/09/19/release-management-with-openshift-under-the-hood/
[2.4] https://kustomize.io/
[2.5] https://github.com/kubernetes-sigs/kustomize/tree/master/examples
Chapter 3
Packaging with Helm and Kubernetes Operators
In this chapter, you’ll learn more about working with container images. In particular, we
will dive into the concepts of Helm charts and Kubernetes Operators.
Modern Distributions
As mentioned in Chapter 2, the most challenging task when turning code into a useful
product is creating a distributable package from the application. It is no longer simply a
matter of zipping all files together and putting them somewhere. We have to take care
of several meta-artifacts. Some of these might not belong to the developer but to the owner of the application.
You can easily get a free account on Quay.io, but your repositories must then be publicly accessible. That means everybody can read your repositories, but only you or people to whom you specifically grant permissions can write to them.
However, Docker requires a daemon to run on the system hosting the tool and, therefore, must be run with root (privileged) access. This is usually unfeasible in Kubernetes and certainly on Red Hat OpenShift.
Podman thus became a popular replacement for Docker. Podman performs basically the same tasks as Docker and has a compatible interface where almost all the commands and arguments are the same. Podman is very lightweight and—crucially—can be run as an ordinary user account without a daemon.
However, Podman as a full Docker Desktop replacement currently runs only on GNU/Linux. If you are working on a Windows or macOS system, you have to set up a remote Linux system to make full use of Podman [3.6]. You might also have a look at a newer version of CodeReady Containers (version 2.x onwards) [3.7], which aims to become a full Docker Desktop replacement for Windows and macOS using Podman underneath.
Podman 4.x onwards [3.8] massively simplifies the way to set up the required remote Linux system with a new API. The following commands initialize and start a new host:
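$ podman machine init
$ podman machine start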
If you have previously used Podman on your Windows or macOS system, have a look at the output of podman system connection list. If your old system connection is still listed, either delete it or make it non-default.
Podman internally uses Buildah [3.9] to build container images. According to the official
GitHub page [3.10], Buildah is the main tool for building images that conform to the
Open Container Initiative (OCI) [3.11] standard. The documentation states: “Buildah’s
commands replicate all the commands that are found in a Dockerfile. This allows
building images with and without a Dockerfile, without requiring any root privileges.”
To build a container image inside OpenShift (for example, using Source-to-Image (S2I) [3.12]
or a Tekton pipeline), you should directly use Buildah.
But because everything you can do with Buildah is part of Podman anyway, there is no Buildah CLI client for macOS or Windows. The documentation recommends using Podman instead.
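Your first build
Let’s build and tag the example image. A sketch of the workflow (the image name follows the book’s example; the Dockerfile comes with the generated Quarkus project):
$ docker login quay.io
$ mvn clean package -DskipTests
$ docker build -f src/main/docker/Dockerfile.jvm -t quay.io/wpernath/person-service:v1.0.0-test .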
The first command logs into your Quay.io account. The second command builds the application with Maven [3.14]. The third, finally, creates a Docker image out of the application. If you choose to use Podman, simply substitute podman for docker, because Podman supports the same arguments. You can also redirect existing scripts that invoke Docker to Podman by aliasing the string docker to podman:
$ alias docker=podman
Setting up Podman on any non-Linux system is a little bit tricky, as mentioned. You
need access to a Linux system, running either directly on one of your systems or in a
virtual machine. The Linux system basically works as the execution unit for the Podman
client. The documentation mentioned earlier [3.15] shows you how that works.
When your image is ready, you need to push the image to your repository:
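$ docker push -a quay.io/wpernath/person-service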
This will push all (-a) locally stored images to the external repository, including all tags
(Figure 3-1). Now you can use the image in your OpenShift environment.
Figure 3-1. The person-service on quay.io after you’ve pushed it via docker push.
And that’s it. This workflow works for all Docker-compliant registries.
Because Docker and Podman are only incidental to this chapter, I will move on to our
main topics and let you turn to the many good articles out on the Internet to learn
about how to build images.
If you’re on any other operating system than Linux, Podman is a little bit complicated to
use right now. The difficulty is not with the (CLI) tool; in most cases, the podman CLI is
identical to the docker CLI. Instead, the difficulty lies in installation, configuration, and
integration into non-Linux operating systems.
This is unfortunate, because Podman is much more lightweight than Docker. And
Podman does not require root access. So if you have some time—or are already
developing your applications on Linux—try to set up Podman. If not, continue to use
Docker.
Working with Skopeo
Skopeo is a command-line tool for inspecting and copying container images between registries, without needing a daemon or a local image store. For example, the following command creates a new remote tag by copying an image:
$ skopeo copy \
    docker://quay.io/wpernath/person-service:latest \
    docker://quay.io/wpernath/person-service:v1.0.1-test
But you can also use Skopeo to mirror a complete repository by copying all images in
one go.
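A sketch with skopeo sync (the destination namespace is hypothetical):
$ skopeo sync --src docker --dest docker \
    quay.io/wpernath/person-service quay.io/wpernath-mirror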
Helm Charts
Think about Helm charts as a package manager for Kubernetes applications, like RPM or .deb files for Linux. Once a Helm chart is created and hosted on a repository, everyone can install, update, and delete your chart from a running Kubernetes installation.
Let’s first have a look at how to create a Helm chart to install your application on
OpenShift.
First, you need to download and install the Helm CLI [3.19]. On macOS you can easily
do this via:
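$ brew install helm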
Helm has its own directory structure for storing necessary files. Helm allows you to
create a basic template structure with everything you need (and even more). The
following command creates a Helm chart structure for a new chart called foo:
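$ helm create foo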
But I think it’s better not to create a chart from a template, but to start from scratch.
To do so, enter the following commands to create a basic file system structure for your
new Helm chart:
$ mkdir helm-chart
$ mkdir helm-chart/templates
$ touch helm-chart/Chart.yaml
$ touch helm-chart/values.yaml
Done. This creates the directory structure for your chart. You now have to put some
basic data into Chart.yaml, using the following example as a guide to plugging in your
own values:
apiVersion: v2
name: person-service
description: A helm chart for the basic person-service used as example in the book
home: https://github.com/wpernath/book-example
type: application
version: 0.0.1
appVersion: "1.0.0"
sources:
- https://github.com/wpernath/book-example
maintainers:
- name: Wanja Pernath
  email: wanja@example.com
You are now done with your first Helm chart. Of course, right now, it does nothing special. You have to fill the chart with some content. So now, copy the following files from
the previous chapter into the helm-chart/templates folder:
$ cp kustomize-ext/base/*.yaml helm-chart/templates/
The directory structure of your Helm chart now looks like this:
$ tree helm-chart
helm-chart
├── Chart.yaml
├── templates
│ ├── config-map.yaml
│ ├── deployment.yaml
│ ├── route.yaml
│ └── service.yaml
└── values.yaml
1 directory, 6 files
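Before installing, package the chart into a versioned archive; helm package reads the chart name and version from Chart.yaml and produces person-service-0.0.1.tgz:
$ helm package helm-chart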
The following command installs the Helm chart into a newly created OpenShift project
called book-helm1:
$ oc new-project book-helm1
$ helm install person-service person-service-0.0.1.tgz
NAME: person-service
LAST DEPLOYED: Mon Oct 25 17:10:49 2021
NAMESPACE: book-helm1
STATUS: deployed
REVISION: 1
TEST SUITE: None
If you now go to the OpenShift console, you should see the Helm release (Figure 3-3).
You can get the same overview via the command line. The first command that follows shows a list of all installed Helm charts in your namespace. The second command provides the update history of a given Helm chart.
$ helm list
$ helm history person-service
If you create a newer version of the chart, upgrade the release in your Kubernetes
namespace by entering:
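For example, assuming you have packaged a 0.0.2 archive:
$ helm upgrade person-service person-service-0.0.2.tgz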
The rollback argument helps you return to a specified revision of the installed chart.
First, get the history of the installed chart by entering:
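$ helm history person-service
Then roll back to the revision you want:
$ helm rollback person-service 1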
If you are not satisfied with a given chart, you can easily uninstall it by running:
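$ helm uninstall person-service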
For reference, the example chart’s version history:
## Version history
- 0.0.1 Initial release
- 0.0.2 release with some fixed bugs
- 0.0.3 release with image coming from quay.io and better parameter substitution
- 0.0.4 added NOTES.txt and a configmap
- 0.0.5 added a batch Job for post-install and post-upgrade
A values.yaml file for the chart might look like this:
deployment:
  image: quay.io/wpernath/person-service
  version: v1.0.0-test
  replicas: 2
  includeHealthChecks: false
config:
  greeting: 'We are on a newer version now!'
You are just defining your variables in the file. The following YAML configuration file
illustrates how to insert the variables through curly braces:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_GREETING: |-
    {{ .Values.config.greeting | default "Yeah, it's openshift time" }}
This ConfigMap defines a parameter named APP_GREETING with the content of the
variable .Values.config.greeting (the initial period must be present). If this variable
is not defined, the default will be used.
Helm also defines some built-in objects [3.20] with predefined parameters, such as .Release and .Chart; you’ll see them in the hook example below. You can also use conditionals in your templates:
{{- if .Values.deployment.includeHealthChecks }}
<do something here>
{{- end }}
Do you see the hyphen (-) at the beginning of the expression? It is necessary to get a well-formatted YAML file after the template has been processed. If you removed the -, you would have empty lines in the YAML.
Debugging Templates
Typically, after you execute helm install, all the generated files are sent directly to
Kubernetes. A couple of commands help you debug your templates first.
• The helm lint command checks whether your chart follows best practices.
• The helm install --dry-run --debug command renders your files without
sending them to Kubernetes.
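For example:
$ helm lint helm-chart
$ helm install person-service helm-chart --dry-run --debug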
Defining a Hook
It is often valuable to include sample data with an application for mocking and testing.
But if you want to install a database with example data as part of your Helm chart, you
need to find a way of initializing the database. This is where Helm hooks [3.24] come
into play.
A hook is a Kubernetes resource (like a job or a pod) that gets executed when a certain
event is triggered. An event could take place at one of the following points:
• pre-install
• post-install
• pre-upgrade
• post-upgrade
• pre-delete
• post-delete
• pre-rollback
• post-rollback
The type of hook gets configured via the helm.sh/hook annotation. This annotation lets you assign weights to hooks to specify the order in which they run. Lower-numbered weights run before higher-numbered ones for each type of event.
The following listing defines a new hook as a Kubernetes Job with post-install and
post-upgrade triggers:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}"
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Chart.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    metadata:
      name: {{ .Chart.Name }}
      labels:
        "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
    spec:
      restartPolicy: Never
      containers:
      - name: post-install-job
        image: "registry.access.redhat.com/ubi8/ubi-minimal:latest"
        command:
        - /bin/sh
        - -c
        - |
          echo "WELCOME TO '{{ .Chart.Name }}-{{ .Chart.Version }}' "
          echo "-------------------------------------------------"
          echo "Here we could now do initialization work."
          echo "Like filling our DB with some data or what's so ever"
          echo "..."
          sleep 10
As the example shows, you can also execute a script of your design as part of the install
process.
Subcharts must be installable without the main chart. This means that each subchart
has its own values.yaml file. You can override the values in the subchart’s file within
your main chart’s values.yaml.
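As a minimal sketch, if the chart declared a subchart named postgresql (an illustrative name), the main chart's values.yaml could override the subchart's defaults under a key matching the subchart's name:

# main chart's values.yaml
postgresql:              # keys below override the subchart's own values.yaml
  auth:
    database: persondb   # illustrative value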
If your chart requires the installation of a custom resource definition (CRD), for example, to install an Operator, simply put the CRD into the helm/crds folder of your chart. Keep in mind that Helm does not take care of uninstalling any CRDs when you uninstall your chart. So installing CRDs with Helm is a one-shot operation.
If Helm charts do so much, so well, why would you need another package format? Well,
let’s next have a look at Operators.
Kubernetes Operators
Our simple Quarkus application makes few demands of an administrator or of its hosting environment, whether plain Kubernetes or OpenShift. The application is a stateless, web-based application that doesn't require any special treatment by an administrator.
If you use the Helm chart to install the application (or even install it manually into Open-
Shift via oc apply -f), Kubernetes understands how to manage it out of the box
pretty well. Kubernetes’s control loop mechanism knows what the desired state of the
application is (based on its various YAML files) and compares the desired state contin-
ually with the application’s current state. Any deviations from the desired state are fixed
automatically. For instance, if a pod has just died, Kubernetes takes care of restarting it.
If you have uploaded a new version of the image, Kubernetes re-instantiates the whole
application with the new image.
But what happens if your application requires some complex integrations into other
applications not built for Kubernetes, such as a database? Or if you want to back up
and restore stateful data? Or need a clustered database? Such requirements typically
require the special know-how of an administrator.
Of course, this makes an Operator much more complex than a Helm chart, because all the logic needs to be implemented before it is put into the Operator. There are officially three ways to implement an Operator: in Go (with unofficial support for other languages such as Java), with Ansible, or with Helm.
An Operator creates, watches, and maintains CRDs. This means that it provides new API resources to the cluster, such as a Route or BuildConfig. Whenever someone creates a new resource via oc apply based on the CRD, the Operator knows what to do. The Operator handles all the logic behind that mechanism (just think about the work necessary to set up a clustered database or to back up and restore the persistent volume of a database). It makes extensive use of the Kubernetes API.
If you need to have full control over everything, you have to create the Operator with
Go or (unofficially) with Java. Otherwise, you can make use of an Ansible-based or
Helm-based Operator. The Operator SDK and the base packages in Ansible and Helm
take care of the Kubernetes API calls. So you don’t have to learn Go now in order to
build your first Operator.
$ mkdir kube-operator
$ cd kube-operator
$ operator-sdk init \
--plugins=helm --helm-chart=../helm-chart \
--domain wanja.org --group charts \
--kind PersonService \
--project-name person-service-operator
Writing kustomize manifests for you to edit...
Creating the API:
$ operator-sdk create api --group charts --kind PersonService --helm-chart
../helm-chart
Writing kustomize manifests for you to edit...
Created helm-charts/person-service
Generating RBAC rules
I1027 13:19:52.883020 72377 request.go:665] Waited for 1.017668738s due to
client-side throttling, not priority and fairness, request:
GET:https://api.art6.ocp.lan:6443/apis/controlplane.operator.openshift.io/
v1alpha1?timeout=32s
WARN[0002] The RBAC rules generated in config/rbac/role.yaml are based on the
chart’s default manifest. Some rules might be missing for resources that are
only enabled with custom values, and some existing rules may be overly broad.
Double-check the rules generated in config/rbac/role.yaml to ensure they meet
the Operator’s permission requirements.
These commands initialize the project for the Operator based on the chart found in the ../helm-chart folder. A PersonService CRD should also have been created in config/crd/bases. The following listing shows the complete directory structure generated by the commands:
$ tree
.
├── Dockerfile
├── Makefile
├── PROJECT
├── config
│ ├── crd
│ │ ├── bases
│ │ │ └── charts.wanja.org_personservices.yaml
│ │ └── kustomization.yaml
│ ├── default
│ │ ├── kustomization.yaml
│ │ ├── manager_auth_proxy_patch.yaml
│ │ └── manager_config_patch.yaml
│ ├── manager
│ │ ├── controller_manager_config.yaml
│ │ ├── kustomization.yaml
│ │ └── manager.yaml
│ ├── manifests
│ │ └── kustomization.yaml
│ ├── prometheus
│ │ ├── kustomization.yaml
│ │ └── monitor.yaml
│ ├── rbac
│ │ ├── auth_proxy_client_clusterrole.yaml
│ │ ├── auth_proxy_role.yaml
│ │ ├── auth_proxy_role_binding.yaml
│ │ ├── auth_proxy_service.yaml
│ │ ├── kustomization.yaml
│ │ ├── leader_election_role.yaml
│ │ ├── leader_election_role_binding.yaml
│ │ ├── personservice_editor_role.yaml
│ │ ├── personservice_viewer_role.yaml
│ │ ├── role.yaml
│ │ ├── role_binding.yaml
│ │ └── service_account.yaml
│ ├── samples
│ │ ├── charts_v1alpha1_personservice.yaml
│ │ └── kustomization.yaml
│ └── scorecard
│ ├── bases
│ │ └── config.yaml
│ ├── kustomization.yaml
│ └── patches
│ ├── basic.config.yaml
│ └── olm.config.yaml
├── helm-charts
│ └── person-service
│ ├── Chart.yaml
│ ├── templates
│ │ ├── NOTES.txt
│ │ ├── config-map.yaml
│ │ ├── deployment.yaml
│ │ ├── post-install-hook.yaml
│ │ ├── route.yaml
│ │ └── service.yaml
│ └── values.yaml
└── watches.yaml
15 directories, 41 files
The Helm-based Operator uses the watches.yaml file to watch changes on the API.
So whenever you create a new resource based on the CRD, the underlying logic knows
what to do.
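The generated watches.yaml should look roughly like this, mapping our new CRD to the embedded chart:

- group: charts.wanja.org
  version: v1alpha1
  kind: PersonService
  chart: helm-charts/person-service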
Now have a look at the Makefile. There are three parameters in it that you should change:
• VERSION: Whenever you change something in the project file (and have running instances of the Operator somewhere), increment the number.
• IMAGE_TAG_BASE: This is the base name of the images that the Makefile produces. Change the name to something like quay.io/wpernath/person-service-operator.
• IMG: This is the name of the image within our Helm-based Operator. Change the name to something like $(IMAGE_TAG_BASE):$(VERSION).
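After these edits, the relevant lines of the Makefile should look roughly like this:

VERSION ?= 0.0.1
IMAGE_TAG_BASE ?= quay.io/wpernath/person-service-operator
IMG ?= $(IMAGE_TAG_BASE):$(VERSION)

Then build and push the Operator image: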
$ make docker-build
docker build -t quay.io/wpernath/person-service-operator:0.0.1 .
[+] Building 6.3s (9/9) FINISHED
[...]
$ make docker-push
docker push quay.io/wpernath/person-service-operator:0.0.1
The push refers to repository [quay.io/wpernath/person-service-operator]
5f70bf18a086: Pushed
fe540cb9bcd7: Pushed
ddaee5130ba6: Pushed
4ea9a10139f9: Mounted from operator-framework/helm-operator
5a5ce86c51f0: Mounted from operator-framework/helm-operator
3a40d0007ffb: Mounted from operator-framework/helm-operator
0b911edbb97f: Mounted from wpernath/person-service
54e42005468d: Mounted from wpernath/person-service
0.0.1: digest: sha256:acd3f89a7e0788b226d5016df765f61a5429cf5cef6118511a0910b9f4a04aaf size: 1984
In your repository on Quay.io, you now have a new image called person-service-
operator. This image contains the logic to manage the Helm chart. The image exposes
the CRD and the new Kubernetes API.
$ make deploy
cd config/manager && /usr/local/bin/kustomize edit set image controller=quay.io/
wpernath/person-service-operator:0.0.1
/usr/local/bin/kustomize build config/default | kubectl apply -f -
namespace/person-service-operator-system created
customresourcedefinition.apiextensions.k8s.io/personservices.charts.wanja.org created
serviceaccount/person-service-operator-controller-manager created
role.rbac.authorization.k8s.io/person-service-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/person-service-operator-manager-role created
clusterrole.rbac.authorization.k8s.io/person-service-operator-metrics-reader created
clusterrole.rbac.authorization.k8s.io/person-service-operator-proxy-role created
rolebinding.rbac.authorization.k8s.io/person-service-operator-leader-election-
rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/person-service-operator-manager-
rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/person-service-operator-proxy-
rolebinding created
configmap/person-service-operator-manager-config created
service/person-service-operator-controller-manager-metrics-service created
deployment.apps/person-service-operator-controller-manager created
To see your PersonService in action, create a new namespace and apply a new
instance of the PersonService through the following configuration file, located in
config/samples/charts_v1alpha1_personservice:
apiVersion: charts.wanja.org/v1alpha1
kind: PersonService
metadata:
  name: my-person-service1
spec:
  # Default values copied from <project_dir>/helm-charts/person-service/values.yaml
  config:
    greeting: Hello from inside the operator!!!!
  deployment:
    image: quay.io/wpernath/person-service
    includeHealthChecks: false
    replicas: 1
    version: v1.0.0-test
The following commands create the service:
$ oc new-project book-operator
$ oc apply -f config/samples/charts_v1alpha1_personservice
personservice.charts.wanja.org/my-person-service1 configured
This will take a while. But after some time, you will notice a change in the Topology view of OpenShift, showing that the Helm chart was deployed.

To remove the service again and undeploy the Operator, run:

$ oc delete personservice/my-person-service1
$ make undeploy

To distribute the Operator via the Operator Lifecycle Manager, generate a bundle:

$ make bundle
$ make bundle
operator-sdk generate kustomize manifests -q
cd config/manager && /usr/local/bin/kustomize edit set image controller=quay.io/
wpernath/person-service-operator:0.0.3
/usr/local/bin/kustomize build config/manifests | operator-sdk generate bundle
-q --overwrite --version 0.0.3
INFO[0000] Creating bundle.Dockerfile
INFO[0000] Creating bundle/metadata/annotations.yaml
The bundle generator asks you a few questions to configure the bundle. Have a look at Figure 3-4 for the output. Run the generator every time you change the VERSION field in the Makefile.
Installing the Operator follows the standard Operator SDK steps: make bundle-build bundle-push builds and pushes the bundle image, and operator-sdk run bundle with that image then installs the Operator into OpenShift and registers it with the Operator Lifecycle Manager (OLM). After this, you'll be able to watch and manage your Operator via the OpenShift UI, just like the Operators that come with OpenShift (Figure 3-5).
You can create a new instance of the service in the UI by clicking Installed Operators→PersonService→Create PersonService, or by executing the following at the command line:
$ oc apply -f config/samples/charts_v1alpha1_personservice
personservice.charts.wanja.org/my-person-service1 configured
Shortly after requesting the new instance, you should see the deployed Helm chart of
your PersonService.
Cleaning Up
If you want to get rid of this Operator, just enter:

$ operator-sdk cleanup person-service-operator

The cleanup command needs the project name, which you can find in the PROJECT file.
Summary of Operators
Creating an Operator just as a replacement for a Helm chart does not really make
sense because Operators are much more complex to develop and maintain. And
what we’ve done so far is just the tip of the iceberg. We haven’t really touched the
Kubernetes API.
However, as soon as you need more influence over creating and maintaining your application and its associated resources, think about building an Operator. Fortunately, the Operator SDK and the documentation can help you with the first steps.
Summary
This chapter described how to build images. You learned more about the various com-
mand-line tools, including skopeo, podman, and buildah. You also saw how to create a
Helm Chart and a Kubernetes Operator. Finally, you should know how to decide when to
use each tool.
The next chapter of this book will talk about Tekton pipelines as a form of internal
distribution.
References
[3.1] https://quay.io/
[3.2] https://hub.docker.com
[3.3] https://quay.io/
[3.4] https://www.docker.com
[3.5] https://podman.io/getting-started/installation
[3.6] https://github.com/containers/podman/blob/master/docs/tutorials/mac_win_client.md
[3.7] https://crc.dev/crc/#about-presets_gsg
[3.8] https://podman.io/releases/2022/02/22/podman-release-v4.0.0.html
[3.9] https://buildah.io/
[3.10] https://github.com/containers/buildah
[3.11] https://opencontainers.org/
[3.12] https://github.com/openshift/source-to-image
[3.13] https://github.com/wpernath/book-example
[3.14] https://maven.apache.org/what-is-maven.html
[3.15] https://github.com/containers/podman/blob/master/docs/tutorials/mac_win_client.md
[3.16] https://github.com/containers/skopeo
[3.17] https://helm.sh
[3.18] https://operatorframework.io
[3.19] https://helm.sh/docs/intro/install/
[3.20] https://helm.sh/docs/chart_template_guide/builtin_objects/
[3.21] https://helm.sh/docs/chart_template_guide/getting_started/
[3.22] https://pkg.go.dev/text/template?utm_source=godoc
[3.23] https://golang.org
[3.24] https://helm.sh/docs/topics/charts_hooks/
[3.25] https://github.com/quarkiverse/quarkus-operator-sdk
[3.26] https://sdk.operatorframework.io/docs/installation/
[3.27] https://sdk.operatorframework.io/docs/building-operators/helm/tutorial/
Chapter 4 – CI/CD with Tekton Pipelines
Chapters 2 and 3 covered the basics of application packaging with Kustomize, Helm
charts, and Operators. We also saw how to handle images and all the metadata required
for working with Kubernetes.
This chapter will discuss how to integrate complex tasks, such as building and deploying applications, into Kubernetes using Tekton. Continuous integration and continuous delivery (CI/CD) are represented in Tekton as pipelines that combine all the steps you need to accomplish what you want. And Tekton makes it easy to write a general pipeline that you can adapt to many related tasks.
OpenShift Pipelines [4.2] is based on Tekton and adds a nice GUI to the OpenShift
developer console. The Pipelines Operator is free to use for every OpenShift user.
Tekton Concepts
Tekton has numerous objects, but the architecture is quite easy to understand. The key
concepts are:
• Step: A process that runs in its own container and can execute whatever the container image provides. A step does not stand independently but must be embedded in a task.
• Task: A set of steps. Each task runs as a pod in Kubernetes, and each of its steps runs in a separate container within that pod. A task could be, for example, a compilation process using Maven. One step would be to check the Maven settings.xml file. The second step could be to execute the Maven goals (compile, package, etc.).
• Pipeline: A set of tasks executed either in parallel or (in a simpler case) one after another. A pipeline can be customized through parameters.
• PipelineRun: A collection of parameters to submit to a pipeline. For instance, a build-and-deploy pipeline might refer to a PipelineRun that contains technical input (for example, a ConfigMap and PersistentVolumeClaim) as well as non-technical parameters (for example, the URL of the Git repository to clone, the name of the target image, etc.)
Internally, Tekton creates a TaskRun object for each task it finds in a PipelineRun.
To summarize: A pipeline contains a list of tasks, each containing a list of steps. One of
Tekton’s benefits is that you can share tasks and pipelines with other people, because
a pipeline just specifies what to do in a given order. So if most of your projects have a
similar pipeline, share and reuse it.
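To make these concepts concrete, here is a minimal sketch (names and images are illustrative, not taken from the book's repository) of a task with two steps and a pipeline that uses it:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello-task
spec:
  steps:
    - name: say-hello        # first step, runs in its own container
      image: registry.access.redhat.com/ubi8/ubi-minimal:latest
      script: echo "Hello"
    - name: say-world        # second step, another container in the same pod
      image: registry.access.redhat.com/ubi8/ubi-minimal:latest
      script: echo "World"
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: hello-pipeline
spec:
  tasks:
    - name: greet
      taskRef:
        name: hello-task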
Check the official Tekton homepage to see how to install the tool on other operating systems.
After a while, you’ll notice a new GUI entry in both the Administrator and the Developer
UI (Figure 4-2).
Why use Jib to build the container image? Well, that’s easily explained: Right now, there
are three different container image build strategies available with Quarkus:
• Docker
• Source-to-Image (S2I)
• Jib
The Docker strategy uses the docker binary to build the container image. But the docker binary is not available inside a Kubernetes cluster (as mentioned in Chapter 3), because Docker is too heavyweight and requires root privileges to run the daemon. Source-to-Image (S2I), in turn, is specific to OpenShift. So in order to stay vendor-independent, we have to use Jib for this use case.
Of course, you could also use other tools to create your container image inside Kubernetes. But to keep this Tekton example clean and simple, we are reusing what Quarkus provides. So we can simply set a few Quarkus properties in application.properties to define how Quarkus should package the application. Then we'll be able to use exactly one Tekton task to compile, package, and push the application to an external registry.
Make sure that your Quarkus application is using the required Quarkus extension
container-image-jib. If your pom.xml file does not include the quarkus-
container-image-jib dependency, add it by executing:
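$ ./mvnw quarkus:add-extension -Dextensions="container-image-jib"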
Then have a look at Figure 4-3 to see what properties are needed for Quarkus to
build, package, and push the image. The following properties need to be set in
application.properties:
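# a sketch of the needed settings; registry, group, and name follow the
# image used in this chapter (quay.io/wpernath/person-service)
quarkus.container-image.builder=jib
quarkus.container-image.registry=quay.io
quarkus.container-image.group=wpernath
quarkus.container-image.name=person-service

With these properties in place, build the image by calling:

$ ./mvnw package -Dquarkus.container-image.build=true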
This command compiles the sources and builds the container image. If you want to
push the resulting image to your registry, simply call:
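$ ./mvnw package -Dquarkus.container-image.push=true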
After a while, Quarkus will generate and push your image to your registry. In my case,
it’s quay.io/wpernath/person-service.
You’re missing just the kustomize task. You’ll create one later, but first, we want to take
care of the rest of the tasks.
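For example, you can inspect the git-clone ClusterTask with tkn:

$ tkn clustertask describe git-clone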
The output explains all the task parameters, together with other necessary information
such as its inputs and outputs.
By specifying the -o yaml parameter, you can view the YAML source definition of the task.
The git-clone task allows many parameters, but most of them are optional. You just
have to specify git-url and git-revision. You also have to specify a workspace for
the task.
A workspace must be mapped to every task in the pipeline. The PipelineRun (or, in the case of a single running task, the TaskRun) finally creates the mapping between the defined workspace and the corresponding storage. In our case, we need two workspaces:
• A PVC that the git-clone task clones the source code into, and from which the Maven task compiles the source.
• A ConfigMap with the maven-settings file you need in your environment.
You have to provide parameters to each task and link the required workspaces to the tasks. You can easily do that by using the GUI (Figure 4-6).
You need to use the maven task twice, using the package goal. Once you've done all that and have clicked the Save button, you can export the pipeline as a YAML file (for example, via oc get pipeline build-and-push-image -o yaml) and re-apply it later by executing:

$ oc apply -f tekton/pipelines/build-and-push-image.yaml
If you’re providing the necessary parameters directly to each task, you might repeat
the settings over and over again. For example, in our case, we are using the Maven
task for compiling, packaging, image generation, and pushing. Here it makes sense
to take the parameters out of the specification of each task. Instead, put them on the
pipeline level under a property called params (as shown in the following listing) and
refer to them inside the corresponding task by specifying them by their name in the
syntax $(params.parameter-name).
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-push-image
spec:
  params:
    - default: 'https://github.com/wpernath/book-example.git'
      description: Source to the GIT
      name: git-url
      type: string
    - default: main
      description: revision to be used
      name: git-revision
      type: string
  [...]
  tasks:
    - name: git-clone
      params:
        - name: url
          value: $(params.git-url)
        - name: revision
          value: $(params.git-revision)
      [...]
      taskRef:
        kind: ClusterTask
        name: git-clone
      workspaces:
        - name: output
          workspace: shared-workspace
[...]
We first need to have a proper image that contains the kustomize executable. The
Dockerfile for this project is available in the kustomize-ubi repository on GitHub [4.12],
and the image is available in its repository on Quay.io [4.13].
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: kustomize
  labels:
    app.kubernetes.io/version: "0.4"
  annotations:
    tekton.dev/pipelines.minVersion: "0.12.1"
    tekton.dev/tags: build-tool
spec:
  description: >-
    This task can be used to execute kustomize build scripts and to apply the
    changes via oc apply -f
  workspaces:
    - name: source
      description: The workspace holding the cloned and compiled quarkus source.
  params:
    - name: kustomize-dir
      description: Where should kustomize look for kustomization in source?
    - name: target-namespace
      description: Where to apply the kustomization to
    - name: image-name
      description: Which image to use. Kustomize is taking care of it
  steps:
    - name: build
      image: quay.io/wpernath/kustomize-ubi:latest
      workingDir: $(workspaces.source.path)
      script: |
        cd $(workspaces.source.path)/$(params.kustomize-dir)
        DIGEST=$(cat $(workspaces.source.path)/target/jib-image.digest)
        # set the new image and digest, then render the manifests for the apply step
        kustomize edit set image $(params.image-name)=$(params.image-name)@$DIGEST
        kustomize build . > $(workspaces.source.path)/target/kustomized.yaml
    - name: apply
      image: 'image-registry.openshift-image-registry.svc:5000/openshift/cli:latest'
      workingDir: $(workspaces.source.path)
      script: |
        oc apply -f $(workspaces.source.path)/target/kustomized.yaml -n $(params.target-namespace)
Paste this text into a new file called kustomize-task.yaml. As you can see from the
file’s contents, this task requires a workspace called source and three parameters:
kustomize-dir, target-namespace, and image-name. The task contains two steps:
build and apply.
The build step uses the Kustomize image to set the new image and digest. The apply
step uses the internal OpenShift CLI image to apply the Kustomize-created files in the
target-namespace namespace.
To load the kustomize-task.yaml file into your current OpenShift project, simply
execute:
$ oc apply -f kustomize-task.yaml
task.tekton.dev/kustomize configured
If you need to edit the ConfigMap, feel free to do it right now. Then execute the following to import the ConfigMap into your current project:

$ oc apply -f maven-settings-cm.yaml
Creating a PersistentVolumeClaim
Create a new file with the following content and execute oc apply -f to import it into
your project:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: builder-pvc
spec:
  resources:
    requests:
      storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
This file reserves a PVC with the name builder-pvc and a requested storage of
10GB. It’s important to use persistentVolumeReclaimPolicy: Retain here, as
we want to reuse build artifacts from the previous builds—more on this requirement
later in this chapter.
The Logs and Events cards of the OpenShift Pipeline Editor show, well, all the logs and
events. If you prefer to view these things from the command line, use tkn to follow the
logs of the PipelineRun:
$ tkn pr
The output shows the available actions for PipelineRuns. To list each PipelineRun and
its status, enter:
$ tkn pr list
NAME STARTED DURATION STATUS
build-and-push-image-run-20211123-091039 1 minute ago 54 seconds Succeeded
build-and-push-image-run-20211122-200911 13 hours ago 2 minutes Succeeded
build-and-push-image-ru0vni 13 hours ago 8 minutes Failed
$ tkn pr logs -f -L
If you omit the -L option, tkn lets you choose from the list of PipelineRuns. You can
also log, list, cancel, and delete PipelineRuns.
Visual Studio Code has a Tekton Pipelines extension that you can also use to edit, build, and execute pipelines. The following listing shows the PipelineRun template used by the tekton/pipeline.sh script:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: $PIPELINE-run-$(date "+%Y%m%d-%H%M%S")
spec:
  params:
    - name: git-url
      value: https://github.com/wpernath/book-example.git
    - name: git-revision
      value: main
    - name: context-dir
      value: the-source
    - name: image-name
      value: quay.io/wpernath/person-service
    - name: image-username
      value: wpernath
    - name: image-password
      value: *****
    - name: target-namespace
      value: book-tekton
  workspaces:
    - name: shared-workspace
      persistentVolumeClaim:
        claimName: builder-pvc
    - configMap:
        name: maven-settings
      name: maven-settings
  pipelineRef:
    name: build-and-push-image
  serviceAccountName: pipeline
Most of the properties of this object are self-explanatory. Just one word on the serviceAccountName property: Each PipelineRun runs under a given service account, which means that all pods started along the pipeline run inside this security context.
OpenShift Pipelines creates a default service account for you called pipeline. If you
have secrets that you want to make available to your PipelineRun, you have to connect
them with the service account name. But this requirement is out of scope for this chapter of the book; we'll return to secrets in the next chapter.
The tekton/pipeline.sh shell script creates a full version of this PipelineRun based
on input parameters.
But still, the pipelines at this point take way too long.
Looking at the logs, you can see that the maven task takes a long time to finish. This is
because Maven is downloading the necessary artifacts again and again on every run.
Depending on your Internet connection, this takes some time, even if you’re using a
local Maven mirror.
On your developer machine, Maven uses the $HOME/.m2 folder as a cache for the artifacts. The same is true when you run Maven from a task. However, because each PipelineRun runs on a separate set of pods, the $HOME/.m2 cache starts out empty every time, which means all artifacts are downloaded again on every new PipelineRun.
I have created a new Maven task (maven-caching), which you can find in the book's example repository [4.14]. The file was originally just a copy of the one from Tekton Hub. But then I decided to remove the init step, which was building a maven-settings.xml file based on some input parameters. Instead, I removed most of the parameters and added a ConfigMap with must-have Maven settings. I believe this makes everything much easier.
As Figure 4-8 shows, you now have only two parameters: GOALS and CONTEXT_DIR.
The important properties for the maven call are shown in the second red box of Figure 4-8. These properties call Maven with the maven-settings file and with the parameter indicating where to store the downloaded artifacts.
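As a sketch, the resulting Maven invocation inside the task looks roughly like this (the workspace names are assumptions):

mvn $(params.GOALS) \
  -s $(workspaces.maven-settings.path)/settings.xml \
  -Dmaven.repo.local=$(workspaces.source.path)/.m2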
One note on artifact storage: When I was testing this example, I realized that if
the git-clone task clones the source to the root directory of the PVC (when no
subdirectory parameter is given on task execution), the next start of the pipeline
will delete everything from the PVC again. And in that case, we once again have no
artifact cache.
The changes discussed in this section shorten our Maven runs dramatically, in my case from 8 minutes to 54 seconds:
$ tkn pr list
NAME STARTED DURATION STATUS
build-and-push-image-123 1 minute ago 54 seconds Succeeded
build-and-push-image-ru0 13 hours ago 8 minutes Succeeded
Summary
Tekton is a powerful tool for creating CI/CD pipelines. Because it is based on Kubernetes, it builds on familiar Kubernetes concepts and reduces the maintenance of the tool itself. If you want to start your first pipeline quickly, try the OpenShift Developer UI, which you get for free when you install the Operator. This gives you a nice base to start your tests. However, at some point (especially when it comes to optimizations) you need a proper editor to code your pipelines.
One of the biggest advantages of Tekton over other CI/CD tools such as Jenkins is that you can reuse all your work for other projects and applications. If you want to standardize the way your pipelines work, build one pipeline and simply specify different sets of parameters for different situations. PipelineRun objects make this possible. You can easily reuse the pipeline we created in this chapter for all Quarkus-generated applications. Just change the git-url and image-name parameters. Isn't that great?
And even if you’re not satisfied with all the tasks you get from Tekton Hub, use them as
bases and build your own iterations out of them, as we did with the optimized Maven
task and the Kustomize task in this chapter.
Tekton is not the easiest technology available for CI/CD pipelines, but it is definitely
one of the most flexible.
However, we have not even talked about Tekton security [4.15] and how we can provide,
for example, secrets to access your Git repository or the image repository. And we
cheated a little bit with image generation because we used the mechanism Quarkus
provides. There are other ways of creating images using a dedicated Buildah task.
The next chapter discusses Tekton security, GitOps, and Argo CD.
References
[4.1] https://tekton.dev
[4.2] https://cloud.redhat.com/blog/introducing-openshift-pipelines
[4.3] https://github.com/code-ready/crc
[4.4] https://github.com/wpernath/book-example/tree/main/person-service
[4.5] https://github.com/GoogleContainerTools/jib
[4.6] https://quay.io/repository/wpernath/quarkus-simple-wow
[4.7] https://www.opensourcerers.org/2021/04/26/automated-application-packaging-and-distribution-with-openshift-part-12/
[4.8] https://github.com/GoogleContainerTools/jib
[4.9] https://github.com/wpernath/book-example
[4.10] https://quarkus.io/guides/container-image#quarkus-container-image-jib_quarkus.jib.base-jvm-image
[4.11] https://hub.tekton.dev
[4.12] https://github.com/wpernath/kustomize-ubi
[4.13] https://quay.io/repository/wpernath/kustomize-ubi
[4.14] https://raw.githubusercontent.com/wpernath/book-example/main/tekton/tasks/maven-task.yaml
[4.15] https://tekton.dev/docs/pipelines/auth/
Chapter 5 – GitOps and Argo CD
The previous chapters discussed the basics of modern application development with
Kubernetes. This chapter shows you how to integrate a project into Kubernetes-native
pipelines to do your CI/CD and automatically deploy your application out of a pipeline
run. We discuss the risks and benefits of using GitOps and Argo CD [5.1] in your project
as well as how to use it with Red Hat OpenShift.
Introduction to GitOps
I can imagine a reader complaining, "We are still struggling to implement DevOps, and now you're coming to us with yet another fancy new acronym to help solve all the issues we still have?" I heard this the first time I talked about GitOps during a customer engagement.
The short answer is that DevOps is a cultural change in your enterprise, meaning
developers and operations people should talk to each other instead of secretly doing
their work behind big walls.
As described earlier in this book, you should already maintain a formal description of your infrastructure. Each application you're deploying on Kubernetes has a bunch of YAML files that are required to run your application. Adding those files to your project in a Git repository is just a natural step forward. And if you had a tool that could read those files from the repository and apply them to a specified Kubernetes namespace, wouldn't that be great?
Well, that’s what GitOps accomplishes. And Argo CD is one of the available tools to help
you do GitOps.
Without GitOps, a typical developer workflow looks like this:
1. The developer pushes code changes to the source code repository.
2. The push triggers an integration pipeline.
3. This pipeline compiles the code, runs all automated tests, and creates and pushes the image.
4. Finally, the pipeline automatically installs the application on the test system.
With GitOps, the developer workflow looks somewhat different (see Figure 5-1): the pipeline's final step no longer deploys the application; it only updates the configuration repository, and the GitOps tool applies the change to the cluster. So you're still using your pipeline based on Tekton, Jenkins, or whatever to do CI. GitOps then takes care of the CD part.
Argo CD Concepts
Right now (as of version 2.0), the concepts behind Argo CD are quite easy to grasp. You register an Argo CD application that contains pointers to the necessary Git repository, with all the application-specific descriptors such as Deployment, Service, etc., and to the target Kubernetes cluster. You might also define an Argo CD project, which defines various defaults, such as the source repositories applications may pull from, the destination clusters and namespaces they may deploy to, and the kinds of resources they may create.
Once the application is registered, you can manually start a sync to update the actual environment. Alternatively, Argo CD starts "healing" the application automatically, if the synchronization policy is set to do so.
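With the argocd CLI, a manual sync is a one-liner (book-dev is the application name we will create shortly):

$ argocd app sync book-dev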
We are going to set up Argo CD (via the OpenShift GitOps Operator [5.3]) on OpenShift 4.9 (via Red Hat OpenShift Local [5.4]). We are going to use Tekton to build a pipeline [5.5], which updates the person-service-config [5.6] Git repository with the latest image digest of the build. Argo CD should then detect the changes and start synchronizing our application.

Note: Typically, a GitOps pipeline does not directly push changes into the main branch of a configuration repository. Instead, the pipeline should commit files into a feature branch or release branch and should create a pull request so that committers can review changes before they are merged to the main branch.

The Application Configuration Repository
First of all, let's create a new repository for our application configuration: person-service-config.
Create a new remote Git repository and copy the URL (for example, https://github.com/wpernath/person-service-config.git). Then jump to the shell, create a new empty folder somewhere, and issue the following commands:

$ mkdir person-service-config
$ git init -b main
$ git remote add origin https://github.com/wpernath/person-service-config.git
One of the main concepts behind GitOps is to represent your application's configuration and build parameters as a Git repository.
source code repository or separate. As I am a big fan of separation of concerns [5.7],
we will create a new repository containing the artifacts that we built in earlier chapters
using Kustomize:
$ tree
└── config
├── base
│ ├── config-map.yaml
│ ├── deployment.yaml
│ ├── kustomization.yaml
│ ├── route.yaml
│ └── service.yaml
└── overlays
├── dev
│ └── kustomization.yaml
├── prod
│ └── kustomization.yaml
└── stage
├── apply-health-checks.yaml
├── change-env-value.yaml
└── kustomization.yaml
6 directories, 10 files
Of course, there are several ways to structure your config repositories. Some natural
choices include:
1. A single configuration repository with all files covering all services and stages for
your complete environment.
Getting GitOps | 73
Chapter 5 – GitOps and Argo CD
2. A separate configuration repository per service or application, with all files for
all stages.
3. A separate configuration repository for each stage of each service.
This is completely up to you. But option 1 is probably not optimal because combining
all services and stages in one configuration repository might make the repository
hard to read and does not promote separation of concerns. On the other hand,
option 3 might break up information too much, forcing you to maintain hundreds
of repositories for different applications or services. Therefore, option 2 strikes me
as a good balance: One repository per application, containing files that cover all
stages for that application.
For now, create this configuration repository by copying the files from the book-example/kustomize-ext directory into the newly created Git repository:

$ mkdir config
$ cp -r ../book-example/kustomize-ext/ config/
$ git add config
$ git commit -am 'initial commit'
$ git push -u origin main

Note: The original kustomization.yaml file already contains an image section. You should remove this first.

Install the OpenShift GitOps Operator
Because the OpenShift GitOps Operator is offered free of charge to OpenShift users and
comes well preconfigured, I am focusing on its use. If you want to bypass the Operator and
dig into Argo CD installation, feel free to have a look at the official guides [5.8].
The OpenShift GitOps Operator can easily be installed in OpenShift [5.9]. Just log in
as a user with cluster-admin rights and switch to the Administrator perspective of the
OpenShift console. Then go to the Operators menu entry and select OperatorHub
(Figure 5-2). Start typing “gitops” in the search field and select the GitOps Operator
when its panel is shown.
At the time of this writing, Argo CD is not yet configured to use OpenShift authentication, so you have to get the password of the admin user by reading the value of the openshift-gitops-cluster secret in the openshift-gitops namespace:
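$ oc extract secret/openshift-gitops-cluster -n openshift-gitops --to=-

The admin.password key of that secret contains the password.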
Go to the URL of the Argo CD web console (it is exposed as a route in the openshift-gitops namespace) and log in using admin as the user and the password you just retrieved. Click New App and fill in the required fields shown in Figure 5-4 as follows:
• Application name: We’ll use book-dev, the same name as our repository.
• Project: In our case, it’s default, the project created during Argo CD
installation.
• Sync Policy: Choose whether you want automatic synchronization. The SELF HEAL option additionally makes Argo CD revert changes that were made manually in the cluster.
• Repository URL: Specify your directory with the application metadata
(Kubernetes resources).
• Path: This specifies the subdirectory within the repository that points to the
actual files.
• Cluster URL: Specify your Kubernetes instance.
• Namespace: This specifies the OpenShift or Kubernetes namespace to
deploy to.
After filling out the fields, click Create. All Argo CD objects of the default Argo CD instance are stored in the openshift-gitops namespace, from which you can export them, for example, via:

$ oc get application book-dev -o yaml -n openshift-gitops
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: book-dev
  namespace: openshift-gitops
spec:
  destination:
    namespace: book-dev
    server: https://kubernetes.default.svc
  project: default
  source:
    path: config/overlays/dev
    repoURL: https://github.com/wpernath/person-service-config.git
    targetRevision: HEAD
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - PruneLast=true
Remove the runtime metadata from the exported file so that it looks like the listing just shown. Then import the application into the predefined Argo CD instance (for example, via oc apply -f <your-application-file>.yaml). Please note that you have to import the application into the openshift-gitops namespace. Otherwise, it won't be recognized by the default Argo CD instance that runs after you've installed the OpenShift GitOps Operator.
First Synchronization
As you've chosen automatic synchronization, Argo CD will immediately start synchronizing the configuration repository with your OpenShift target server. However, you might notice that the first synchronization takes quite a while and then fails without doing anything except issuing an error message (Figure 5-5).
The error arises because the Argo CD service account does not have the necessary authority to create typical resources in a new Kubernetes namespace. You have to enter the following command for each namespace Argo CD is taking care of:
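$ oc policy add-role-to-user admin \
  system:serviceaccount:openshift-gitops:openshift-gitops-argocd-application-controller \
  -n book-dev

This grants the admin role in the book-dev namespace to the Argo CD application controller's service account, matching the RoleBinding shown next.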
Alternatively, if you prefer to use a YAML description file for this task, create something like the following:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: book-dev-role-binding
  namespace: book-dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- kind: ServiceAccount
  name: openshift-gitops-argocd-application-controller
  namespace: openshift-gitops

Note: You could also grant cluster-admin rights to the Argo CD service account. This would have the benefit of giving Argo CD access to everything on its own. The drawback is that Argo CD is then a superuser of your Kubernetes cluster. This might not be very secure.
After giving the service account the necessary role, you can safely click Sync, and Argo
CD will do the synchronization (Figure 5-6).
If you chose automatic sync during configuration, any change to a file in the application's Git repository will cause Argo CD to check what has changed and start the necessary actions to keep the environment in sync.
Automated Setup
To automatically create everything you need to let Argo CD start synchronizing your
config repository with a Kubernetes cluster, you must create the files described in the
following sections. See book-example/gitops/argocd.
First, the Argo CD application (book-apps.yaml):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: book-dev
  namespace: openshift-gitops
spec:
  destination:
    namespace: book-dev
    server: https://kubernetes.default.svc
  project: default
  source:
    path: config/overlays/dev
    repoURL: https://github.com/wpernath/person-service-config.git
    targetRevision: HEAD
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - PruneLast=true
The namespace definition (ns.yaml):

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    openshift.io/description: ""
    openshift.io/display-name: "DEV"
  labels:
    kubernetes.io/metadata.name: book-dev
  name: book-dev
spec:
  finalizers:
    - kubernetes
The RoleBinding for the Argo CD service account (roles.yaml):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: book-dev-role-binding
  namespace: book-dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- kind: ServiceAccount
  name: openshift-gitops-argocd-application-controller
  namespace: openshift-gitops
And a kustomization.yaml that ties the files together:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ns.yaml
  - roles.yaml
  - book-apps.yaml
To install everything in one go, you simply have to execute the following command:
$ oc apply -k book-example/gitops/argocd
Recall what the pipeline from Chapter 4 does:
• Compile, test, and package the application, and build the container image.
• Push that image to an external registry (in our case, quay.io [5.10]).
• Use Kustomize to change the image target.
• Apply the changes via the OpenShift CLI to a given namespace.
In GitOps, we don't do pipeline-centric deployments anymore. As explained earlier, the final step of our pipeline just updates our person-service-config Git repository with the new version of the image. Instead of the apply-kustomize task, we create and use the git-update-deployment task as the final step. This task should clone the config repository, use Kustomize to apply the image changes, and finally push the changes back to GitHub [5.11]. For the pipeline to be able to push to GitHub, create a basic-auth secret with the Tekton git annotation [5.12]:
apiVersion: v1
kind: Secret
metadata:
  name: git-user-pass
  annotations:
    tekton.dev/git-0: https://github.com
type: kubernetes.io/basic-auth
stringData:
  username: <cleartext username>
  password: <cleartext password>
Once you’ve filled in the username and password, you can apply the secret into the
namespace where you want to run your newly created pipeline.
$ oc new-project book-ci
$ oc apply -f secret.yaml
Now you need to either create a new ServiceAccount for your pipeline or update the existing one, which was generated by the OpenShift Pipelines Operator. The pipeline runs entirely within the security context of the provided ServiceAccount. Let's link the secret to a new ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pipeline-bot
secrets:
  - name: git-user-pass
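Alternatively, you can link the secret to the ServiceAccount on the command line:

$ oc secrets link pipeline-bot git-user-pass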
You don't need to copy the following generated secrets to your ServiceAccount, because they will be linked automatically with the new ServiceAccount by the Operator:
• pipeline-dockercfg-: The default secret for reading and writing images from and to the internal OpenShift registry.
• pipeline-token-: The default secret for the pipeline ServiceAccount. This is used internally.
You also have to create two RoleBindings for the ServiceAccount. Otherwise, you can't reuse the PersistentVolumes we've been using so far:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: piplinebot-rolebinding1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pipelines-scc-clusterrole
subjects:
- kind: ServiceAccount
  name: pipeline-bot
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: piplinebot-rolebinding2
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- kind: ServiceAccount
  name: pipeline-bot
The edit role is mainly needed if your pipeline has to change Kubernetes metadata in the given namespace. If your pipeline doesn't do things like that, you can safely ignore that role. In our case, we don't necessarily need the edit role.
The following excerpt shows the two steps of the git-update-deployment task:

  steps:
    - name: update-digest
      image: quay.io/wpernath/kustomize-ubi:latest
      workingDir: $(workspaces.workspace.path)/the-config
      script: |
        cd $(params.KUSTOMIZATION_PATH)
        kustomize edit set image \
          $(params.CURRENT_IMAGE)=$(params.NEW_IMAGE)@$(params.NEW_DIGEST)
        cat kustomization.yaml
    - name: git-commit
      image: docker.io/alpine/git:v2.26.2
      workingDir: $(workspaces.workspace.path)/the-config
      script: |
        git config user.email "something@some-other-thing"
        git config user.name "My Tekton Bot"
        # stage, commit, and push the updated kustomization.yaml
        # (pushing to main here; a feature branch would be safer, see the note above)
        git add $(params.KUSTOMIZATION_PATH)/kustomization.yaml
        git commit -m "updated image digest in kustomization.yaml"
        git push origin HEAD:main
Nothing special here. It's the same thing we would do via the CLI. The full task and everything related can be found, as always, in the gitops/tekton/tasks folder of the repository on GitHub.
You can define a results property within the spec section of a task. Each result is
stored in $(results.<result-name>.path), where the <result-name> component
is a string that refers to the data in that result. Results are available in all tasks and on
the pipeline level through strings in the format:
$(tasks.<task-name>.results.<result-name>)
The following configuration defines the step that extracts the image digest and stores
it into a result:
spec:
  params:
    - name: image-digest-path
      default: target
  results:
    - name: DIGEST
      description: The image digest of the last quarkus maven build with JIB image creation
  steps:
    - name: extract-digest
      image: quay.io/wpernath/kustomize-ubi:latest
      script: |
        # extract DIGEST
        DIGEST=$(cat $(workspaces.source.path)/$(params.image-digest-path)/jib-image.digest)
        # write the digest into the task result so other tasks can read it
        echo -n $DIGEST > $(results.DIGEST.path)
Now it’s time to summarize everything in a new pipeline. Figure 5-8 shows the tasks.
The first three are the same as in the Chapter 4 version of this pipeline. We have added
the extract-digest step as described in the previous section and end by updating
our repository.
Start by using the previous non-GitOps pipeline [5.16], which we created in Chapter 4. Remove the last task and add extract-digest and git-update-deployment as new tasks.
Add a new git-clone task at the beginning by hovering over clone-source and
pressing the plus sign below it to create a new parallel task. Name the new task
clone-config and fill in the necessary parameters:
To create the necessary namespaces and Argo CD applications, run the pipeline.sh script from the example repository, passing your username and password. If the --force option is included, the script also creates the necessary namespaces and Argo CD applications for you.
Refer back to Chapter 4 for more information on starting and testing pipelines.
These tasks are followed by a manual process where a test specialist accepts the pull
request and merges the content from the branch back into the main branch. Then Argo
CD takes the changes and updates the running staging instance in Kubernetes.
You can use the Bash script I created to start the staging pipeline, creating, for example, release 1.2.5.
Pipeline Setup
The git-clone and git-branch steps use existing ClusterTasks, so there is nothing to explain here except one new Tekton feature: conditional execution of a task [5.17] by using a "When" expression.
when:
  - input: $(params.release-name)
    operator: notin
    values:
      - ""
The new extract-digest task uses yq to extract the digest from the kustomization.yaml file. The command looks like:

$ yq eval '.images[0].digest' \
  $(workspaces.source.path)/$(params.kustomize-dir)/kustomization.yaml
- name: tag-image
  params:
    - name: srcImageURL
      value: >-
        docker://$(params.target-image)@$(tasks.extract-digest.results.DIGEST)
    - name: destImageURL
      value: >-
        docker://$(params.target-image):$(params.release-name)
    - name: srcTLSverify
      value: 'false'
    - name: destTLSverify
      value: 'false'
  runAfter:
    - extract-digest
  taskRef:
    kind: ClusterTask
    name: skopeo-copy
[...]
skopeo uses an existing Docker configuration if it finds one in the home directory of
the current user. For us, this means that we have to create another secret with the
following content:
apiVersion: v1
kind: Secret
metadata:
  name: quay-push-secret
  annotations:
    tekton.dev/docker-0: https://quay.io
type: kubernetes.io/basic-auth
stringData:
  username: <use your quay.io user>
  password: <use your quay.io token>
Also, update the ServiceAccount accordingly:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pipeline-bot
secrets:
  - name: git-user-pass
  - name: quay-push-secret
After you’ve applied the configuration changes, the skopeo task can authenticate to
Quay.io and do its work.
Challenges
Of course, you still face challenges if you're completely switching over to GitOps. Some challenges spring from the way Kubernetes intrinsically works, which has pros and cons of its own. Other challenges exist because Git was intended to be used by people who analyze merge conflicts and resolve them manually.
However, nobody says GitOps is the one and only method of doing CD nowadays. If GitOps is not right for your environment, simply don't use it.
Order-Dependent Deployments
Although order-dependent deployments are not necessarily a best practice, the reality is that nearly every large application does have dependencies, and these must be fulfilled before the installation process can continue. Two examples:
• Before my application can start, I must have a properly installed and configured
database.
• Before I can install an instance of a Kafka service, an Operator must be installed
on Kubernetes.
Fortunately, Argo CD has a solution for those scenarios. Similar to Helm hooks, you're able to define sync phases and so-called waves [5.18].
Waves can be defined by annotating your Kubernetes resources with the following
annotation:
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: <+/- number>
Argo CD sorts all resources first by the phase, then by the wave, and finally by type and
name. If you know that some resources need to be applied before others, simply group
them via the annotation. By default, Argo CD uses wave zero for any resources and hooks.
Nondeclarative Deployments
Nondeclarative deployments are simply lists of steps; they are also called imperative
deployments. This is probably the most familiar type of deployment. For instance, if
you’re creating a hand-over document for the OPS department, you are providing
imperative instructions about how to install the application. And most of us are used to
creating installation scripts for more complex applications.
However, the preferred way of installing applications with Kubernetes is through declarative deployments. These specify the service, the deployment, persistent volume claims, secrets, config maps, etc.
If this declaration is not enough, you have to provide a script to configure a special resource (for example, to update the structure of a database or to back up the data first). For such cases, Argo CD offers resource hooks [5.19]. The following listing defines a PostSync hook:
apiVersion: batch/v1
kind: Job
metadata:
  generateName: post-sync-run
  name: my-final-run
  annotations:
    argocd.argoproj.io/hook: PostSync
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: post-install-job
          env:
            - name: NAMESPACE
              value: book-dev
          image: "quay.io/wpernath/kustomize-ubi:v4.4.1"
          command:
            - /bin/sh
            - -c
            - |
              echo "WELCOME TO the post installation hook for Argo CD "
              echo "-------------------------------------------------"
              SERVICE_URL=person-service.${NAMESPACE}.svc:8080/person
              NUM_PERSONS=$(curl http://$SERVICE_URL/count)
              if [ "$NUM_PERSONS" = "0" ]; then
                echo "Database is empty; creating some sample data..."
                # create a couple of persons via the REST API (field names are assumptions)
                http --ignore-stdin POST ${SERVICE_URL} name="Carlos Santana"
                http --ignore-stdin POST ${SERVICE_URL} name="Jimi Hendrix"
              else
                echo "There are already $NUM_PERSONS persons in the database."
                http --ignore-stdin ${SERVICE_URL}
              fi
This hook runs a simple job that checks whether there is any data in the PostgreSQL database and, if not, uses the person-service to create some. The only difference between this job and a "normal" Kubernetes Job is the argocd.argoproj.io/hook annotation.
Figure 5-11: The synchronization log of Argo CD with a presync and a postsync hook.
If you want to use a single configuration Git repository for all your applications and
stages, keep in mind that Git was never meant to resolve merge conflicts automatically.
Instead, conflict resolution sometimes needs to be done manually. Be careful in such a
case and plan your releases thoroughly. Otherwise, you might end up not being able to
create and merge release branches automatically.
Managing Secrets
Another important task requiring a lot of thought is proper secret management. A secret contains access tokens for mission-critical external applications, such as databases, SAP systems, or GitHub and Quay.io. You don't want to make that confidential information publicly accessible in a Git repository.
Instead, think about a solution like HashiCorp Vault [5.20], which lets you manage your secrets centrally.
Summary
The desire to automatically deploy the latest code into production is as old as information technology, I suppose. Automation is important, as it makes everybody in the release chain more productive and helps create reproducible and well-documented deployments.
There are many ideas and even more tools out there for how to implement automation.
The easiest way, of course, is to create scripts for deployment that do exactly what you
need. The downside of scripts is that they might become unmanageable after some
time. You also tend to do a lot of copying, and then if you want to make a change, you
need to find and correct each script.
With Kubernetes and GitOps, you can define everything as a file, which helps you store
everything in a version control system and use the conveniences it provides.
Kubernetes bases its infrastructure on YAML files, making it easy to reuse what you
already know as a developer or administrator: Just use Git and store the complete
infrastructure description in a repository. You work through the repository to create
releases, roll the latest release back if there was a bug in it, keep track of any changes,
and create an automated process around deployments.
I hope this book has helped you grasp the basic concepts and will support your work with DevOps and GitOps.
References
[5.1] https://argoproj.github.io/argo-cd/
[5.2] https://github.com/wpernath/book-example/tree/main/person-service
[5.3] https://docs.openshift.com/container-platform/4.8/cicd/gitops/installing-openshift-gitops.html
[5.4] https://github.com/code-ready/crc
[5.5] https://www.opensourcerers.org/2021/07/26/automated-application-packaging-and-distribution-with-openshift-tekton-pipelines-part-34-2/
[5.6] https://github.com/wpernath/person-service-config
[5.7] https://deviq.com/principles/separation-of-concerns
[5.8] https://argo-cd.readthedocs.io/en/latest/getting_started/
[5.9] https://docs.openshift.com/container-platform/4.8/cicd/gitops/installing-openshift-gitops.html
[5.10] https://quay.io
[5.11] http://github.com
[5.12] https://tekton.dev/docs/pipelines/auth/
[5.13] https://www.opensourcerers.org/2021/07/26/automated-application-packaging-and-distribution-with-openshift-tekton-pipelines-part-34-2/
[5.14] https://cloud.google.com/blog/products/application-development/introducing-jib-build-java-docker-images-better
[5.15] https://tekton.dev/docs/pipelines/tasks/#emitting-results
[5.16] https://raw.githubusercontent.com/wpernath/book-example/main/tekton/pipelines/tekton-pipeline.yaml
[5.17] https://tekton.dev/docs/pipelines/pipelines/#guard-task-execution-using-whenexpressions
[5.18] https://argo-cd.readthedocs.io/en/stable/user-guide/sync-waves/
[5.19] https://argo-cd.readthedocs.io/en/stable/user-guide/resource_hooks/
[5.20] https://www.hashicorp.com/products/vault/secrets-management
About the Author