Building and Deploying An Application To Multiple Clusters
1. Introduction
This document outlines the process for building and deploying an application to multiple
environments, including development and production clusters.
2. Workflow Overview
1. Client Request:
o The client creates a Jira ticket for any updates or new features.
2. Developer Process:
o Local Development:
The developer clones the repository to their local machine.
They implement the new feature or update.
The code is tested locally to ensure functionality.
o Code Push:
The developer pushes the code to the dev branch on GitHub.
3. Continuous Integration and Deployment (CI/CD):
o GitHub Actions Pipeline:
Compilation:
The pipeline triggers automatically on code push.
The code is compiled to check for syntax errors.
Unit Testing:
Unit tests are executed to verify the functionality of the
application.
Dependency Check:
Aqua Trivy scans the application dependencies for
vulnerabilities and outdated packages.
Code Quality Check:
SonarQube analyzes the code for bugs, vulnerabilities, and
code smells.
Code coverage is assessed to ensure a minimum quality
threshold.
Build Artifacts:
The application is built into an executable file (e.g., JAR for
Java applications).
A Docker image is created and scanned with Aqua Trivy for
vulnerabilities.
Deployment to Development Cluster:
The Docker image is pushed to Docker Hub.
YAML manifest files are used to deploy the application to the
development cluster.
4. Production Deployment:
o Pre-Deployment:
After successful deployment and verification in the development
cluster, the Docker image is tagged (e.g., prod-latest).
o Production Deployment:
The tagged Docker image is used to create new YAML manifest files
for the production cluster.
The application is deployed to the production cluster.
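For example, the image promotion in step 4 could look like this (the image names follow the ones used later in this guide; adjust them to your own repositories and tags):
docker pull kviondocker/devtaskmaster:latest
docker tag kviondocker/devtaskmaster:latest kviondocker/prodtaskmaster:latest
docker push kviondocker/prodtaskmaster:latest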
We need to set up two clusters: one for Production and one for Development.
Once both clusters are set up, we can start writing the CI/CD pipelines for each of them.
Ensure that the necessary ports are open in your security group's inbound rules before proceeding.
Creating a Self-Hosted Kubernetes Cluster (Development Cluster)
Master Node:
o Create a virtual machine with the following specifications:
OS: Ubuntu 20.04 LTS
Instance type: t2.medium
Key Pair: Use an existing key pair or create a new one. If creating a
new key, name it and ensure the format is PEM.
Worker Node:
Create another virtual machine with the same specifications as the
Master node.
Network Settings:
Select the security group configured with the necessary open ports.
Storage:
Launch VMs:
Launch the virtual machines and wait for them to be in the running state. Name one as
the Master node and the other as the Worker node.
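On both the Master and Worker nodes, begin by updating the package index; the command described below is the standard:
sudo apt update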
This command updates the list of available packages and their versions, ensuring you have
the latest information on the newest versions and their dependencies.
2. Install Docker:
sudo apt install docker.io -y
This command installs Docker, a containerization platform, which is essential for running
Kubernetes containers. The -y flag automatically confirms the installation.
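The permission change described next is commonly done as follows (a convenience for lab setups rather than a hardened configuration):
sudo chmod 666 /var/run/docker.sock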
This command changes the permissions of the Docker socket file, allowing non-root users to
run Docker commands.
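A typical form of the command described below is:
sudo apt-get install -y apt-transport-https curl gnupg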
This command installs packages necessary for allowing the apt package manager to use
HTTPS for retrieving packages. It also installs curl for downloading files and gnupg for
handling encryption and signing.
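The directory creation described below is typically:
sudo mkdir -p -m 755 /etc/apt/keyrings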
This command creates the /etc/apt/keyrings directory with permissions set to 755, which
is required for storing the Kubernetes signing key.
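A sketch of downloading the signing key and adding the repository entry, assuming the pkgs.k8s.io repository and Kubernetes v1.30 (substitute the minor version you plan to install):
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list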
This command adds the Kubernetes repository to the list of sources from which apt can
install packages. It specifies that packages from this repository should be verified using the
previously downloaded GPG key.
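Then refresh the package index:
sudo apt update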
This command refreshes the package list to include the newly added Kubernetes repository.
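The install step described below typically pins matching versions (replace <version> with the kubeadm/kubelet/kubectl version you intend to run):
sudo apt install -y kubeadm=<version> kubelet=<version> kubectl=<version>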
This command installs specific versions of kubeadm, kubelet, and kubectl, which are
essential components for setting up and managing a Kubernetes cluster. The -y flag confirms
the installation automatically.
Run the following command on master node to initialize the Kubernetes master node with a
specific pod network CIDR:
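The initialization command takes this form:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16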
This command initializes the Kubernetes control plane on the master node and sets the pod
network CIDR to 192.168.0.0/16. It also generates a token that will be used by the worker
nodes to join the cluster.
After running the above command, a kubeadm join command will be provided. This
command must be run on the worker node(s) to join them to the master node.
Joining Worker Nodes to the Master Node
Run the kubeadm join Command on the Worker Node(s):
Use the kubeadm join command generated by the master node initialization process on
worker node to join them to the cluster. The command will look similar to this:
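For example (placeholders shown; use the exact token and hash printed by kubeadm init):
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>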
Run the following commands on the master node to set up the kubeconfig file, which is
required for kubectl to interact with the Kubernetes cluster:
mkdir -p $HOME/.kube
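Then copy the admin kubeconfig and adjust its ownership (the standard commands printed by kubeadm init):
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config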
This command changes the ownership of the config file to the current user, allowing
kubectl to use it for cluster management.
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.0/manifests/calico.yaml
This command applies the Calico manifest, which configures the Calico network plugin in
your Kubernetes cluster, providing networking and network policy capabilities.
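To verify the cluster, run:
kubectl get nodes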
This command lists all the nodes in your Kubernetes cluster along with their status. If
everything is set up correctly, you should see the master and worker nodes listed with their
respective statuses (e.g., Ready).
Master Node: This node should show a Ready status, indicating it is properly
initialized and operational.
Worker Nodes: These nodes should also show a Ready status if they have successfully
joined the cluster.
To run Terraform commands, we need a dedicated virtual machine. Although you can run
Terraform from your local machine, using a separate virtual machine is often more
convenient.
Create and launch the virtual machine, and wait for it to come online.
Once the virtual machine is ready, you can proceed with running Terraform commands to set up the EKS cluster.
Connecting to the Virtual Machine and Setting Up AWS CLI
1. Connect to the Virtual Machine:
o Connect to the VM via SSH.
2. Install the AWS CLI:
o To interact with AWS resources, we need to install the AWS CLI. Execute the following commands to install it:
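The installer commands (the same ones used later in this guide for the production setup) are:
curl "https://d1uj6qtbmh3dt5.cloudfront.net/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install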
Run the following command to configure the AWS CLI and provide your AWS credentials:
aws configure
After entering the required details, the AWS CLI will be configured, and you will be
connected to your AWS account.
Setting Up Terraform for EKS Cluster Creation
1. Install Terraform:
Before creating an EKS cluster on AWS, ensure that Terraform is installed on your machine.
Run the following command to check if Terraform is already installed:
terraform --version
If Terraform is not present, you will need to install it. Use the following command to install
Terraform via Snap:
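Assuming Snap is available on the VM, the install command would be:
sudo snap install terraform --classic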
Make sure to include the --classic flag; otherwise, you may encounter errors.
2. Verify Installation:
terraform --version
3. Prepare Terraform Files:
To execute Terraform commands, you need the appropriate Terraform configuration files.
Make sure you have these files prepared before running Terraform commands to create the
EKS cluster.
Here are the generalized Terraform configuration files (please update them according to your environment) for setting up an EKS cluster on AWS, including both the VPC and the necessary resources for the cluster and nodes.
Feel free to adjust the names, regions, and any other parameters according to your specific
requirements.
mkdir -p EKS
cd EKS
vi main.tf
Paste the following contents into main.tf and save the file:
provider "aws" {
region = "us-east-1" # Change this to your preferred region
}
resource "aws_vpc" "my_vpc" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "my-vpc"
}
}
resource "aws_subnet" "my_subnet" {
  count                   = 2   # Example values; adjust the count, CIDR blocks, and AZs to your environment
  vpc_id                  = aws_vpc.my_vpc.id
  cidr_block              = "10.0.${count.index}.0/24"
  availability_zone       = element(["us-east-1a", "us-east-1b"], count.index)
  map_public_ip_on_launch = true

  tags = {
    Name = "my-subnet-${count.index}"
  }
}
resource "aws_internet_gateway" "my_igw" {
  vpc_id = aws_vpc.my_vpc.id

  tags = {
    Name = "my-igw"
  }
}
resource "aws_route_table" "my_route_table" {
  vpc_id = aws_vpc.my_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.my_igw.id
  }

  tags = {
    Name = "my-route-table"
  }
}
resource "aws_security_group" "my_cluster_sg" {
  vpc_id = aws_vpc.my_vpc.id

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "my-cluster-sg"
  }
}
resource "aws_security_group" "my_node_sg" {
  vpc_id = aws_vpc.my_vpc.id

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]   # Open to the world for simplicity; tighten this for real environments
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "my-node-sg"
  }
}
resource "aws_eks_cluster" "my_cluster" {
  name     = "my-cluster"   # Change this to your preferred cluster name
  role_arn = aws_iam_role.my_cluster_role.arn

  vpc_config {
    subnet_ids         = aws_subnet.my_subnet[*].id
    security_group_ids = [aws_security_group.my_cluster_sg.id]
  }

  depends_on = [aws_iam_role_policy_attachment.my_cluster_role_policy]
}
resource "aws_eks_node_group" "my_node_group" {
  cluster_name    = aws_eks_cluster.my_cluster.name
  node_group_name = "my-node-group"
  node_role_arn   = aws_iam_role.my_node_group_role.arn
  subnet_ids      = aws_subnet.my_subnet[*].id

  scaling_config {
    desired_size = 3
    max_size     = 3
    min_size     = 3
  }

  instance_types = ["t2.medium"]

  remote_access {
    ec2_ssh_key               = var.ssh_key_name
    source_security_group_ids = [aws_security_group.my_node_sg.id]
  }

  depends_on = [
    aws_iam_role_policy_attachment.my_node_group_role_policy_attachment,
    aws_iam_role_policy_attachment.my_node_group_cni_policy,
    aws_iam_role_policy_attachment.my_node_group_registry_policy
  ]
}
resource "aws_iam_role" "my_cluster_role" {
  name = "my-cluster-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}
resource "aws_iam_role" "my_node_group_role" {
  name = "my-node-group-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}
# Managed policy attachments referenced by the depends_on blocks above
resource "aws_iam_role_policy_attachment" "my_cluster_role_policy" {
  role       = aws_iam_role.my_cluster_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

resource "aws_iam_role_policy_attachment" "my_node_group_role_policy_attachment" {
  role       = aws_iam_role.my_node_group_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
}

resource "aws_iam_role_policy_attachment" "my_node_group_cni_policy" {
  role       = aws_iam_role.my_node_group_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
}

resource "aws_iam_role_policy_attachment" "my_node_group_registry_policy" {
  role       = aws_iam_role.my_node_group_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
}
vi output.tf
Paste the following contents into output.tf and save the file:
output "vpc_id" {
description = "The ID of the VPC"
value = aws_vpc.my_vpc.id
}
output "subnet_ids" {
description = "The IDs of the subnets"
value = aws_subnet.my_subnet[*].id
}
output "eks_cluster_id" {
description = "The ID of the EKS cluster"
value = aws_eks_cluster.my_cluster.id
}
output "eks_node_group_id" {
description = "The ID of the EKS node group"
value = aws_eks_node_group.my_node_group.id
}
vi variables.tf
Paste the following contents into variables.tf and save the file:
variable "ssh_key_name" {
  description = "The name of the SSH key pair to use for the instances"
  type        = string
  default     = "my_projects_key"
}
First, ensure all three files (main.tf, output.tf, and variables.tf) are created.
terraform init
This command initializes the working directory, downloads necessary plugins, and
prepares Terraform to execute commands.
terraform apply
Terraform will prompt you to confirm the actions it plans to take. You will see a message asking if you want to perform these actions. Type yes to proceed.
Alternatively, run terraform apply -auto-approve; this option automatically approves the execution without prompting for confirmation.
Terraform will then start creating resources such as virtual machines, clusters, VPCs,
subnets, etc. This process typically takes 5 to 10 minutes.
This VM will act as a runner for GitHub Actions, allowing you to perform CI/CD tasks.
Connect to the VM via SSH. First, always remember to run:
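This is most likely the usual package index refresh before installing anything:
sudo apt update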
curl -o actions-runner-linux-x64-2.317.0.tar.gz -L https://github.com/actions/runner/releases/download/v2.317.0/actions-runner-linux-x64-2.317.0.tar.gz
echo "9e883d210df8c6028aff475475a457d380353f9d01877d51cc01a17b2a91161d  actions-runner-linux-x64-2.317.0.tar.gz" | shasum -a 256 -c
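Next comes extracting the runner and configuring it against your repository; a sketch with placeholder values (take the exact URL and registration token from the "New self-hosted runner" page in your repository settings):
tar xzf ./actions-runner-linux-x64-2.317.0.tar.gz
./config.sh --url https://github.com/<owner>/<repo> --token <registration-token>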
1. Runner Group: By default, no runner group is created. You can use the default
group.
2. Runner Name: Enter a name for the runner (e.g., Runner-1).
3. Labels: Optionally, add labels to help identify where the runner should execute jobs.
4. Work Folder: The default work folder is _work. You can change this if desired.
2. Set Up Trivy: Install Trivy by adding its repository and then installing the package:
# Install dependencies
sudo apt-get install wget apt-transport-https gnupg lsb-release
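The remaining repository setup and install follow the standard Trivy documentation (verify the key URL and your distribution codename against the Trivy docs):
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
echo "deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main" | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install trivy -y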
Start the runner:
./run.sh
1. Check the Runner Status: After starting the runner, check the GitHub Actions
settings. Initially, the runner may appear as "offline." To bring it online, ensure the
run.sh script is running.
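To point kubectl at the EKS cluster from the runner (or any machine with the AWS CLI configured), the command referred to below is:
aws eks update-kubeconfig --region <region> --name <cluster_name>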
Replace <region> with your AWS region (e.g., ap-south-1) and <cluster_name> with
your EKS cluster name. This command updates the kubeconfig file used for authentication.
Next, verify that kubectl is installed. If it's not installed, you can install it using Snap:
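Assuming Snap is available, the install and verification commands would be:
sudo snap install kubectl --classic
kubectl get nodes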
This will show the nodes in your cluster and their status. If the nodes are in the "Ready" state,
your cluster is up and running correctly.
name: CICD
on:
push:
branches: ["Dev"]
pull_request:
branches: ["Dev"]
jobs:
build:
runs-on: self-hosted
steps:
- uses: actions/checkout@v4
- name: Set up JDK 17
uses: actions/setup-java@v4
with:
java-version: '17'
distribution: 'temurin'
cache: maven
- name: Build with Maven
run: mvn package --file pom.xml
Explanation of the Configuration:
Workflow Name:
o name: CICD specifies the name of the workflow.
Triggers:
o The workflow triggers on pushes and pull requests to the Dev branch.
Jobs:
o The build job runs on a self-hosted runner.
Steps:
o Checkout Repository: Uses actions/checkout@v4 to check out the repository
code to the runner.
o Set up JDK 17: Uses actions/setup-java@v4 to set up JDK 17 with the Temurin
distribution and enables caching for Maven dependencies.
o Build with Maven: Runs the mvn package command, specifying the pom.xml file.
Additional Note:
One interesting part of GitHub Actions is that you don't need to specifically install multiple
tools on the virtual machine. You can directly call the action in your pipeline and utilize it.
For example, we haven't installed Java on our virtual machine, but using the setup-java
action, we can directly use the required Java version in our pipeline.
1. Go to Actions:
o Navigate to the Actions tab in your GitHub repository.
2. Monitor the Pipeline:
o You should see the pipeline has started. Click on it to monitor the stages and verify
that they are running correctly.
3. Check Success:
o Once complete, verify that the application has been built successfully.
Now, we need to add multiple stages in our pipeline. We'll add a file system scan using Trivy
and a SonarQube analysis.
name: CICD
on:
push:
branches:
- Dev
pull_request:
branches:
- Dev
jobs:
build:
runs-on: self-hosted
steps:
- uses: actions/checkout@v4
- name: Set up JDK 17
uses: actions/setup-java@v4
with:
java-version: "17"
distribution: temurin
cache: maven
- name: Build with Maven
run: mvn package --file pom.xml
- name: Trivy FS Scan
run: trivy fs --format table -o fs.html .
Adding SonarQube Analysis to the CI/CD Pipeline
To perform SonarQube analysis, we need to ensure that a SonarQube server is set up. For
simplicity, we'll set up the SonarQube server directly on the runner using a Docker container.
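A minimal way to do this, assuming Docker is already installed on the runner (the image tag is an example; pick the SonarQube version you want):
docker run -d --name sonarqube -p 9000:9000 sonarqube:lts-community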
1. SonarQube Server: This is where the reports are published and analyzed.
2. SonarQube Scanner: This performs the analysis, generates the report, and publishes it to the
server.
We've already set up the SonarQube server. Now, we need to set up the SonarQube scanner.
The benefit of using GitHub Actions is that we can simply call an action to set up the
SonarQube scanner.
First, let's access the SonarQube server, which will be running on port 9000; wait for it to start.
1. Access the SonarQube Server: Copy the public IP of the machine where the
SonarQube Docker container is running. Open your browser and paste the IP followed
by :9000.
2. Change Default Credentials: By default, SonarQube uses the username admin and
password admin. Log in with these credentials, then change the password to
something secure of your choice. Click on "Update" to save the new password.
Meanwhile, to define the stages in our pipeline, we can utilize different actions by calling them directly in the pipeline configuration. Here, we use the sonarqube-scan-action, which sets up the SonarQube scanner and performs the scanning.
To perform SonarQube analysis in our CI/CD pipeline, we need to add the necessary steps for
SonarQube. Here’s how to set it up:
name: CICD
on:
push:
branches:
- Dev
pull_request:
branches:
- Dev
jobs:
build:
runs-on: self-hosted
steps:
- uses: actions/checkout@v4
- name: Set up JDK 17
uses: actions/setup-java@v4
with:
java-version: "17"
distribution: temurin
cache: maven
- name: Build with Maven
run: mvn package --file pom.xml
- name: Trivy FS Scan
run: trivy fs --format table -o fs.html .
- name: SonarQube Scan
uses: sonarsource/sonarqube-scan-action@master
env:
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
Explanation
Set up Secrets: Ensure the SONAR_HOST_URL and SONAR_TOKEN are added as secrets in
your repository settings.
Workflow Triggers: The pipeline triggers on pushes and pull requests to the Dev branch.
Jobs: The build job runs on a self-hosted runner and includes steps to check out the
repository, set up JDK 17, build with Maven, perform a Trivy FS scan, and execute a
SonarQube scan.
To run a SonarQube analysis, you need to provide information about the project name, project key, and the location of Java binaries. This information is specified in a sonar-project.properties file in your repository. Here's how you can set this up:
1. Create/Edit sonar-project.properties:
o Navigate to the dev branch of your repository.
o Create or edit the sonar-project.properties file and add the following details:
sonar.projectKey=TaskMaster
sonar.projectName=TaskMaster
sonar.java.binaries=target
name: CICD
on:
push:
branches:
- Dev
pull_request:
branches:
- Dev
jobs:
build:
runs-on: self-hosted
steps:
- uses: actions/checkout@v4
- name: Set up JDK 17
uses: actions/setup-java@v4
with:
java-version: "17"
distribution: temurin
cache: maven
- name: Build with Maven
run: mvn package --file pom.xml
- name: Trivy FS Scan
run: trivy fs --format table -o fs.html .
- name: SonarQube Scan
uses: sonarsource/sonarqube-scan-action@master
env:
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
Monitoring Steps
As before, navigate to the Actions tab, open the latest run, and verify that the new stages complete successfully.
To extend your CI/CD pipeline to include Docker image building, scanning, and pushing,
follow these steps:
(Make sure to adjust the Docker image name and tags as per your specific configuration or requirements.)
name: CICD
on:
push:
branches:
- Dev
pull_request:
branches:
- Dev
jobs:
build:
runs-on: self-hosted
steps:
- uses: actions/checkout@v4
- name: Set up JDK 17
uses: actions/setup-java@v4
with:
java-version: "17"
distribution: temurin
cache: maven
- name: Build with Maven
run: mvn package --file pom.xml
- name: Trivy FS Scan
run: trivy fs --format table -o fs.html .
- name: SonarQube Scan
uses: sonarsource/sonarqube-scan-action@master
env:
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Build Docker Image
run: |
docker build -t kviondocker/devtaskmaster:latest .
- name: Trivy Image Scan
run: trivy image --format table -o image.html kviondocker/devtaskmaster:latest
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Push Docker Image
run: |
docker push kviondocker/devtaskmaster:latest
Explanation of Docker Stages:
1. Set Up QEMU and Buildx: Prepares the runner to build Docker images.
2. Build, Scan, and Push: Builds the Docker image, scans it with Trivy, and pushes it to Docker Hub.
3. Set Up Secrets: Ensure that the following secret variables are set up in your GitHub repository:
o DOCKERHUB_USERNAME: Your Docker Hub username.
o DOCKERHUB_TOKEN: Your Docker Hub access token (password).
By following these steps, you'll extend your CI/CD pipeline to handle Docker image
management, ensuring that your Docker images are built, scanned for vulnerabilities, and
deployed as part of your automated workflow.
1. Locate the Kubernetes Configuration File:
o On the Master node, change to the directory that holds the kubeconfig file:
cd ~/.kube/
o This file contains the configuration required for authenticating with your Kubernetes cluster.
2. Encode the Kubernetes Configuration File:
o Convert the config file to base64 format using the following command:
base64 ~/.kube/config
o Copy the base64 output and add it to your repository as the KUBE_CONFIG secret so the pipeline can authenticate with the cluster.
To add the final stage for deploying to Kubernetes, follow these steps:
In your deployment-service.yml manifest, reference the image pushed by the pipeline:
image: kviondocker/devtaskmaster:latest
(Make sure to adjust the Docker image name and tags as per your specific configuration or requirement.)
name: CICD
on:
push:
branches:
- Dev
pull_request:
branches:
- Dev
jobs:
build:
runs-on: self-hosted
steps:
- uses: actions/checkout@v4
- name: Set up JDK 17
uses: actions/setup-java@v4
with:
java-version: "17"
distribution: temurin
cache: maven
- name: Build with Maven
run: mvn package --file pom.xml
- name: Trivy FS Scan
run: trivy fs --format table -o fs.html .
- name: SonarQube Scan
uses: sonarsource/sonarqube-scan-action@master
env:
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Build Docker Image
run: |
docker build -t kviondocker/devtaskmaster:latest .
- name: Trivy Image Scan
run: trivy image --format table -o image.html kviondocker/devtaskmaster:latest
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Push Docker Image
run: |
docker push kviondocker/devtaskmaster:latest
- name: Kubectl Action
uses: tale/kubectl-action@v1
with:
base64-kube-config: ${{ secrets.KUBE_CONFIG }}
- run: |
kubectl apply -f deployment-service.yml
After the pipeline completes successfully, you can verify the deployment with the following
steps:
1. Check Service Status:
o On the Master node, run the command:
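The command meant here is presumably the standard service listing:
kubectl get svc
Note the NodePort assigned to the application's service (32560 in the example below) and combine it with the node's public IP: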
http://X.X.X.X:32560
o Open this URL in your browser to see if the application is deployed correctly.
By following these steps, you can confirm that your application is successfully deployed and
accessible.
For the Production (EKS) cluster, repeat the kubeconfig steps on the machine that has access to the cluster and store the base64 output as a repository secret:
cd ~/.kube/
base64 ~/.kube/config
In the production manifest, update the image to the production tag:
image: kviondocker/prodtaskmaster:latest
Install and configure the AWS CLI on that machine:
curl "https://d1uj6qtbmh3dt5.cloudfront.net/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws configure
Enter your AWS Access Key ID, Secret Access Key, and default region (e.g., ap-south-1).
name: CDPROD
on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]
jobs:
  build:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - name: Set up JDK 17
        uses: actions/setup-java@v4
        with:
          java-version: "17"
          distribution: temurin
          cache: maven
      - name: Build with Maven
        run: mvn -B package --file pom.xml
      - name: Kubectl Action
        uses: tale/kubectl-action@v1
        with:
          # Assumption: the production cluster's base64-encoded kubeconfig is stored as a separate repository secret
          base64-kube-config: ${{ secrets.EKS_KUBE_CONFIG }}
      - run: |
          kubectl apply -f deployment-service.yml
Summary:
After committing these changes, monitor the pipeline to ensure the deployment is successful
and the application is running as expected in the Production cluster.
You can see that the deployment and service have been created, and the workflow has
completed successfully.