DevOps Bootcamp Exercises
If you are a new user on those platforms, you usually get a free starter package. On
DigitalOcean and Linode you get $100 credit to use within 60 days (this can change in the
future, so check for yourself).
On AWS you get a free tier that allows you to run one small server for free for a year.
However, the EKS service and all other EC2 instances still cost money.
On all platforms you will be charged for exactly what you use, based on the size of the
resources and how long you use them.
Context: We have a ready NodeJS application that needs to run on a server. The app is
already configured to read in environment variables.
Install NodeJS and NPM and print out which versions were installed
Download an artifact file from the URL: https://node-envvars-artifact.s3.eu-west-2.amazonaws.com/bootcamp-node-envvars-project-1.0.0.tgz
(Hint: use curl or wget)
Unzip the downloaded file
Set the following needed environment variables: APP_ENV=dev,
DB_USER=myuser, DB_PWD=mysecret
Change into the unzipped package directory
Run the NodeJS application by executing the following commands: npm install
and node server.js
Notes:
Make sure to run the application in the background so that it doesn’t block the
terminal session where you execute the shell script
If any of the variables is not set, the node app will print an error message that the
env vars are not set, and exit
It will give you a warning about the LOG_DIR variable not being set. You can ignore it
for now.
The script will check whether the parameter value is a directory name that doesn’t exist
yet, and will create the directory if needed. It then sets the env var LOG_DIR to the
directory’s absolute path before running the application, so the application can read the
LOG_DIR environment variable and write its logs there.
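A minimal sketch of that directory check in shell, assuming the directory name comes in as the first script argument (the function name is made up for illustration):

```shell
#!/bin/bash
# Sketch of the LOG_DIR handling described above; function name is an assumption.

set_log_dir() {
  local dir=$1
  # create the directory if it doesn't exist yet
  if [ ! -d "$dir" ]; then
    mkdir -p "$dir"
  fi
  # export the directory's absolute path so the node app can read LOG_DIR
  export LOG_DIR=$(cd "$dir" && pwd)
}

set_log_dir "${1:-./logs}"
echo "LOG_DIR set to $LOG_DIR"
```

The script would start the application only after this point, so the app can write its logs into $LOG_DIR.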
Exercise Solutions
EXERCISE 2: .gitignore
You see that build folders and editor-specific folders are in the repository and decide to
ignore them as a best practice.
Note: There is a standard in your team to name commits with descriptive text.
Some time went by since you opened your bugfix branch, so you want the up-to-date
master state to avoid major conflicts.
Merge the master branch in your bugfix branch - fix the merge conflict!
Your team members tell you the previous image was the correct one, so you want to
undo it. But since you already pushed to remote, you must revert the change.
Revert the last commit and push your changes to remote repository
However, after talking to a colleague, you find out it has already been fixed in another
branch. So you want to undo your local commit.
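The difference between the two undo strategies can be tried out in a throwaway repository; the file name and commit messages below are made up for the demo:

```shell
#!/bin/bash
# Demo repo: revert (for pushed commits) vs reset (for local-only commits).
set -e
demo=$(mktemp -d)
cd "$demo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "correct image" > config.yaml
git add config.yaml
git commit -q -m "set correct image"
echo "wrong image" > config.yaml
git commit -q -am "set wrong image"

# already pushed: create a new commit that undoes the last one
git revert --no-edit HEAD

# only local: you could instead drop the commit entirely
# git reset --hard HEAD~1

cat config.yaml
```

After the revert, a normal `git push` publishes the undo without rewriting remote history.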
EXERCISE 9: Merge
This time you want to merge your branch directly into master without a merge request. So:
merge your bugfix branch into master using git CLI (Hint: master branch must
be up-to-date before the merge)
Being on the master branch now, push your merge commit to the remote
repository
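A sketch of that merge flow in a throwaway repository (the branch name and commit messages are assumptions):

```shell
#!/bin/bash
# Demo of merging a bugfix branch into master; names are made up.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b master
git config user.email "demo@example.com"
git config user.name "Demo"
git commit -q --allow-empty -m "initial commit"

git checkout -q -b bugfix/ticket-123
git commit -q --allow-empty -m "fix: resolve ticket-123"

# master must be up to date before the merge (with a remote: git pull)
git checkout -q master
git merge --no-ff --no-edit bugfix/ticket-123

git log --oneline
# with a remote you would now push the merge commit:
# git push origin master
```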
Exercise Solutions
You can find example solutions
here: https://gitlab.com/devops-bootcamp3/git-project/-/blob/feature/solutions/Solutions.md
The build will fail because of a compile error in a test, so you can’t build the jar.
Add parameter input to the Java code (see code snippet below, which you can
copy)
Rebuild the jar file
Execute the jar file again with 2 params
Copy your simple Nodejs app tar file and package.json to the droplet
Start the node application in detached mode ( npm install and node
server.js commands)
Exercise Solutions
So they ask you to set up Nexus in the company and create repositories for 2 different
projects.
If not, you can watch the module demo video to install Nexus.
You create a Nexus user for the project 1 team to have access to this npm
repository
Hint:
You create a Nexus user for project 2 team to have access to this maven
repository
build and publish the jar file to the new repository using the team 2 user.
Create a new user for the droplet server that has access to both repositories
On a DigitalOcean droplet, using the Nexus REST API, fetch the download URL info
for the latest NodeJS app artifact
Execute a command to fetch the latest artifact itself with the download URL
Untar it and run on the server!
Hint:
Write a script that fetches the latest version from npm repository. Untar it and
run on the server!
Execute the script on the droplet
Hints:
# save the artifact details in a json file
curl -u {user}:{password} -X GET 'http://{nexus-ip}:8081/service/rest/v1/components?repository={node-repo}&sort=version' | jq "." > artifact.json

# grab the download url from the saved artifact details using 'jq' json processor tool
artifactDownloadUrl=$(jq '.items[].assets[].downloadUrl' artifact.json --raw-output)

# fetch the artifact with the extracted download url using 'wget' tool
wget --http-user={user} --http-password={password} $artifactDownloadUrl
7 - Containers with Docker
They ask you to configure and run the application with a MySQL database on a server using
docker-compose.
You can check out the code changes and notice that we are using environment variables
for the database and its credentials inside the application.
Start a MySQL container locally using the official Docker image. Set all needed
environment variables.
Export all needed environment variables for your application for connecting
with the database (check variable names inside the code)
Build jar file and start the application. Test access from browser. Make some
change.
And since your DB and DB UI are running as docker containers, you want to make your
app also run as a docker container, so you can start them all using one docker-compose
file on the server. So you do the following:
Now your app and Mysql containers in your docker-compose are using environment
variables.
INFO: Again, since docker-compose is part of your application and checked in to the
repo, it shouldn’t contain any sensitive data. But it should still allow configuring these
values from outside, based on the environment
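A hedged sketch of such a docker-compose file, reading the sensitive values from the shell environment at deploy time (service and variable names are assumptions, not the project’s actual ones):

```yaml
version: '3'
services:
  my-java-app:
    image: ${IMAGE_NAME}            # set via environment, not hardcoded
    ports:
      - 8080:8080
    environment:
      - DB_USER=${DB_USER}
      - DB_PWD=${DB_PWD}
      - DB_SERVER=mysql
  mysql:
    image: mysql
    ports:
      - 3306:3306
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_DATABASE=${MYSQL_DATABASE}
    volumes:
      - mysql-data:/var/lib/mysql
volumes:
  mysql-data:
```

Docker Compose substitutes the ${...} references from the environment of the shell that runs docker-compose up, so the file itself stays free of secrets.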
Exercise Solutions
You can find example solutions here: https://gitlab.com/devops-bootcamp3/bootcamp-java-mysql/-/blob/feature/solutions/Solutions.md
Also, you think it’s a good idea to add tests, to ensure that no one accidentally breaks the
existing code.
Moreover, you all decide every change should be immediately built and pushed to the
Docker repository, so everyone can access it right away.
Increment version
Run tests
You want to test the code, to be sure to deploy only working code. When tests fail, the
pipeline should abort.
The application version increment must be committed and pushed to a remote Git
repository.
9 - AWS Services
So you need to deploy the previous NodeJS application on an EC2 instance now. This
means you need to create and prepare an EC2 server with the AWS Command Line
Tool to run your NodeJS app container on it.
You know there are many steps to set this up, so you go through it with step by step
exercises.
Create a new IAM user “your name” with “devops” user-group with
all needed permissions to execute the tasks below - with login and CLI
credentials
As a solution, you want to automate this to save you and your team members time
and energy.
The reason is that you want to have the whole configuration for starting the docker container
in a file, in case you need to make changes to it, instead of a plain docker command
with parameters. Also in case you add a database later.
Configure EC2 security group to access your application from browser (using
AWS CLI)
So when this happens, the users write you that the app is down and ask you to fix it. You
ssh into the server, restart containers with docker-compose and containers start again.
But this is annoying work, plus it doesn’t look good for your company that your clients
often can’t access the app. So you want to make your application more reliable and
highly available. You want to replicate both the database and the app, so if one container
goes down, there is always a backup. Also you don’t want to rely on a single server, but
have multiple, in case one whole server goes down or gets rebooted etc.
So you look into different solutions and decide to use the container orchestration tool
Kubernetes to solve the issue. For now you want to configure it and deploy your
application manually, since it’s a new tool and you want to try it out manually before
automating.
Deploy Mysql database with 3 replicas and volumes for data persistence
With docker-compose, you were setting env vars on the server. K8s has its own
components for that, so
create ConfigMap and Secret with the values and reference them in the
application deployment config file.
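As a sketch, the ConfigMap and Secret could look like this (names and keys are assumptions; Secret values must be base64-encoded):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-config
data:
  db_server: mysql-service        # internal service name of the DB (assumption)
---
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
data:
  db_user: bXl1c2Vy               # base64 of "myuser"
  db_pwd: bXlzZWNyZXQ=            # base64 of "mysecret"
```

In the deployment config file they are then referenced per env var with valueFrom.configMapKeyRef and valueFrom.secretKeyRef.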
For this deployment you just need 1 replica, since this is only for your own use, so it
doesn’t have to be High Availability. A simple deployment.yaml file and internal service
will be enough.
Now your application setup is running in the cluster, but you still need a proper way to
access the application. Also, you don’t want users to access the application using the IP
address; instead they should use a domain name. For that, you want to install an Ingress
controller in the cluster and configure ingress access for your application.
All config files: service, deployment, ingress, configMap, secret, will be part of
the chart
Create custom values file as an example for developers to use when
deploying the application
Deploy the java application using the chart with helmfile
Host the chart in its own git repository
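A minimal helmfile.yaml sketch for that (release name, chart path and values file are assumptions):

```yaml
releases:
  - name: my-java-app
    chart: ./java-app-chart       # the chart from its own git repository, checked out locally
    values:
      - values-dev.yaml           # the custom values file for developers
```

Running `helmfile sync` then deploys (or updates) the release.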
Mysql, phpmyadmin and the nginx controller will run on the EC2 instances
You deploy mysql and phpmyadmin on EC2 nodes with the same setup as
before.
You deploy your Java application using Fargate with 3 replicas and same
setup as before
So, the company wants to use ECR instead, again to have everything on one platform and
also to let AWS manage the repository with storage, cleanups etc.
Therefore, you need to replace the docker repository in your pipeline with ECR
So you suggest to your manager, that you will be able to save the company some
infrastructure costs, by configuring autoscaling. Your manager is happy about that and
asks you to configure it.
But you don’t want to do that manually 3 times, so you decide it would be much more
efficient to script creating the EKS cluster and execute that same script 3 times to create
3 more identical environments.
Create EKS cluster with 3 Nodes and 1 Fargate profile only for your java
application
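One way to script this is an eksctl config file, so the same command can be replayed per environment (cluster name, region and instance type are assumptions):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: eu-west-2
nodeGroups:
  - name: ng-1
    instanceType: t2.small
    desiredCapacity: 3            # the 3 worker nodes
fargateProfiles:
  - name: java-app-profile
    selectors:
      - namespace: my-app         # only the java application runs on Fargate
```

The cluster is then created with `eksctl create cluster -f cluster.yaml`; re-running the script with different metadata.name values gives the identical environments.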
Deploy Mysql with 3 replicas with volumes for data persistence using helm
Deploy your Java application with 3 replicas with ConfigMap and Secret
Configure remote state with a remote data store for your terraform project
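For example, with an S3 bucket as the remote data store (bucket and key names are assumptions):

```hcl
terraform {
  backend "s3" {
    bucket = "my-tfstate-bucket"          # must exist before terraform init
    key    = "myapp/terraform.tfstate"
    region = "eu-west-2"
  }
}
```

After adding the backend block, `terraform init` migrates the existing local state to the remote store.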
Now the team wants to make their own changes in Terraform based on their project and
adjust the infrastructure from time to time. They ask you to help them integrate TF in the
code and make it part of the CI/CD pipeline.
Integrate Terraform provisioning the EKS cluster in the Java Gradle app
Jenkins pipeline
Write a program that prints out all the elements of the list that are higher than
or equal to 10.
Instead of printing the elements one by one, make a new list that has all the
elements higher than or equal to 10 from this list in it and print out this new list.
Ask the user for a number as input and return a list that contains only those
elements from my_list that are higher than the number given by the user.
dict_one = {'a': 100, 'b': 400}
dict_two = {'x': 300, 'y': 200}
Prints out the name, job and city of each employee using a loop. The
program must work for any number of employees in the list, not just 2.
Prints the country of the second employee in the list by accessing it directly
without the loop.
Write a function that accepts a list of dictionaries with employee age (see the
example list from Exercise 3) and prints out the name and age of the
youngest employee.
Write a function that accepts a string and calculates the number of upper case
letters and lower case letters.
Write a function that prints the even numbers from a provided list.
For cleaner code, declare these functions in their own helper module and use
them in the main.py file
Concepts covered: working with different data types, conditionals, type conversion,
user input, user input validation
Concepts covered: Built-In Module, User Input, Comparison Operator, While loop
with properties:
first name
last name
age
lectures he/she attends
with methods:
with properties:
first name
last name
age
subjects he/she teaches
with methods:
with properties:
name
max number of students
duration
list of professors giving this lecture
with methods:
d) Bonus task
As both students and professors have a first name, last name and age, you think of a
cleaner solution:
Inheritance allows us to define a class that inherits all the methods and properties from
another class.
Create a Person class, which is the parent class of Student and Professor
classes
This Person class has the following properties: “first_name”, “last_name” and
“age”
and following method: “print_name”, which can print the full name
So you don’t need these properties and this method in the other two classes; you
can simply inherit them.
Change Student and Professor classes to inherit “first_name”, “last_name”,
“age” properties and “print_name” method from the Person class
Instructions:
Developers will execute the Ansible script by specifying their first name as the Linux user,
which will start the application on a remote server. If the Linux user for that name
doesn’t exist yet on the remote server, the Ansible playbook will create it.
Also consider that the application may already be running from the previous jar file
deployment, so make sure to stop the application and remove the old jar file from the
remote server first, before copying and deploying the new one, also using Ansible.
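A hedged sketch of those playbook tasks (the variable names, jar path and process match are assumptions):

```yaml
- name: Create the Linux user if it doesn't exist yet
  ansible.builtin.user:
    name: "{{ linux_user }}"
    state: present

- name: Stop the application from the previous deployment, if running
  ansible.builtin.shell: pkill -f java-app.jar || true

- name: Remove the old jar file from the remote server
  ansible.builtin.file:
    path: "/home/{{ linux_user }}/java-app.jar"
    state: absent

- name: Copy the new jar to the remote server
  ansible.builtin.copy:
    src: build/libs/java-app.jar
    dest: "/home/{{ linux_user }}/java-app.jar"
```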
Now your team can use this project to spin up a new Jenkins server with 1 Ansible
command.
Here is a reference of a full docker command for starting Jenkins container, which you
should map to Ansible playbook:
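The referenced docker command itself isn’t reproduced here, but as a hedged sketch, a typical Jenkins container start maps to an Ansible task like this (image, ports and volume follow the standard Jenkins image and are assumptions):

```yaml
- name: Start Jenkins container
  community.docker.docker_container:
    name: jenkins
    image: jenkins/jenkins:lts
    state: started
    ports:
      - "8080:8080"               # web UI
      - "50000:50000"             # agent communication
    volumes:
      - jenkins_home:/var/jenkins_home
```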
Your team is happy, because they can now use Ansible to quickly spin up a Jenkins
server for different needs.
Use repository: https://gitlab.com/devops-bootcamp3/bootcamp-java-mysql
The setup you and the team agreed on is the following: You create a dedicated Ansible
server on AWS. In the same VPC as the Ansible server, you create 2 servers, 1 for
deploying your Java application and another one for running a MySQL database. Also,
the database should not be accessible from outside, only within the VPC, so the DB
server shouldn’t have a public IP address.
Now your task is to write an Ansible playbook that creates these servers. Then it installs
and starts the MySQL server on the EC2 instance without a public IP address. And finally it
deploys and runs the Java web application on the other EC2 instance.
The playbook also tests accessing the deployed web application from the browser and
prints the result.
You expect this Ansible project to grow in size and complexity later, and you also expect
other DevOps team members to add more tasks to the project. So you decide to refactor
it right away with roles to keep the code clean and structured, and also so that each team
member can develop and test their own role functionality separately without affecting the
work of others. So create roles in your Ansible project for the web server and DB server
tasks.
However, K8s is a very new tool for them, and they don’t want to learn kubectl and K8s
configuration syntax and how all that works, so they want the deployment process to be
automated so that it’s much easier for them to deploy the application to the cluster
without much K8s knowledge.
So they ask you to help them in the process. You create K8s configuration files for
deployments, services for Java and MySQL and PHP MyAdmin applications as well as
configMap and Secret for the Database connectivity. And you deploy everything in a
cluster using Ansible automated script.
Note: MySQL application will run as 1 replica and for the Java Application you will need
to create and push an image to a Docker repo. You can create the K8s cluster with TF
script or any other way you prefer.
You consult with your DevOps team members, and you decide that you will use one of
the automated Jenkins deployment scripts you wrote for EC2 or Droplet servers. Plus
you will add installing Ansible to the playbook, so that you can execute Ansible
commands directly on a Jenkins server.
In your Jenkinsfile, you will execute Ansible playbooks for building the application and
docker image from it, pushing it to the Docker registry and deploying the new application
version to the cluster.
Context
You and your team are running the following setup in the K8s cluster:
Java application that uses Mysql DB and is accessible from browser using Ingress. It’s
all running fine, but sometimes you have issues where Mysql DB is not accessible or
Ingress has some issues and users can’t access your Java application. And when this
happens, you and your team spend a lot of time figuring out what the issue is and
troubleshooting within the cluster. Also, most of the time when these issues happen, you
are not even aware of them until an application user writes to the support team that the
application isn’t working or developers write you an email that things need to be fixed.
Your manager suggested using Prometheus, since it’s a well known tool with a large
community and is widely used, especially in K8s environment.
So you and your team are super motivated to improve the application observability using
Prometheus monitoring.
Note: as you’ve learned, we deploy separate exporter applications for different services
to monitor third-party applications. But some cloud native applications may have the
metrics scraping configuration built in and not require an additional exporter application.
So check whether the chart of that application supports scraping configuration before
deploying a separate exporter for it.
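For example, many community charts (e.g. the Bitnami ones) expose scraping via values like the following; the exact keys vary per chart, so treat these as an assumption and check the chart’s documentation:

```yaml
metrics:
  enabled: true                   # start the built-in exporter/metrics endpoint
  serviceMonitor:
    enabled: true                 # let the Prometheus operator pick it up
```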
Now it’s time to configure alerts for critical issues that may happen with any of the
applications.
Configure alert manager to send all issues related to the Java or Mysql application
to the developer team’s Slack channel. (Hint: You can use the following guide
to set up a Slack channel for the
notifications: https://www.freecodecamp.org/news/what-are-github-actions-and-how-can-you-automate-tests-and-slack-notifications/#part-2-post-new-pull-requests-to-slack)
Configure alert manager to send all issues related to the Nginx Ingress Controller or
K8s components to the K8s administrator’s email address.
Note: Of course, in your case, this can be your own email address or your own Slack
channel.
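A sketch of the Alertmanager routing for those two channels (label matchers, receiver names, channel and address are assumptions; the SMTP settings required for email are omitted and would go under global):

```yaml
route:
  receiver: admin-email           # catch-all
  routes:
    - match:
        team: developers          # assumed label set by your alert rules
      receiver: slack-notifications
    - match:
        team: admins              # assumed label set by your alert rules
      receiver: admin-email
receivers:
  - name: slack-notifications
    slack_configs:
      - channel: '#devops-alerts'
        api_url: https://hooks.slack.com/services/...   # your webhook URL
  - name: admin-email
    email_configs:
      - to: admin@example.com
```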
Of course, you want to check now that your whole setup works, so try to simulate issues
and trigger one alert for each notification channel (Slack and E-mail).
Awesome! You and your team are super happy with the results. Now, as the final step,
you want to configure Grafana dashboards for additional visibility of what’s going on in
the cluster.