
Aws & devops by veera nareshit

DevOps three tier project

Project Overview:

In this project, we will cover the following key aspects:

1. IAM User Setup: Create an IAM user on AWS with the necessary permissions to facilitate deployment and management activities.

2. Infrastructure as Code (IaC): Use Terraform and AWS CLI to set up the Jenkins server (EC2 instance) on AWS.

3. Jenkins Server Configuration: Install and configure essential tools on the Jenkins server, including Jenkins itself, Docker, Sonarqube, Terraform, Kubectl, AWS CLI, and Trivy.

4. EKS Cluster Deployment: Utilize eksctl commands to create an Amazon EKS cluster, a managed Kubernetes service on AWS.

5. Load Balancer Configuration: Configure AWS Application Load Balancer (ALB) for the EKS cluster.

6. Amazon ECR Repositories: Create private repositories for both frontend and backend Docker images on Amazon Elastic Container Registry (ECR).

7. ArgoCD Installation: Install and set up ArgoCD for continuous delivery and GitOps.

8. Sonarqube Integration: Integrate Sonarqube for code quality analysis in the DevSecOps pipeline.

9. Jenkins Pipelines: Create Jenkins pipelines for deploying backend and frontend code to the EKS cluster.

10. Monitoring Setup: Implement monitoring for the EKS cluster using Helm, Prometheus, and Grafana.

11. ArgoCD Application Deployment: Use ArgoCD to deploy the Three-Tier application, including database, backend, frontend, and ingress components.

12. DNS Configuration: Configure DNS settings to make the application accessible via custom subdomains.

13. Data Persistence: Implement persistent volumes and persistent volume claims for database pods to ensure data persistence.

14. Route 53 Configuration: Create records for the ingress load balancer.

15. Conclusion and Monitoring: Conclude the project by summarizing key achievements and monitoring the EKS cluster's performance using Grafana.

Prerequisites

We need to create an IAM user and generate an AWS access key.

Create a new IAM user on AWS and give it AdministratorAccess for testing purposes (not recommended for your organization's projects).

Go to the AWS IAM service and click on Users.

Click on Create user.

Provide a name for your user and click on Next.

Select the Attach policies directly option, search for AdministratorAccess, and select it.

Click on Next.


Click on Create user.

Now, select your created user, then click on Security credentials and generate an access key by clicking on Create access key.


Select Command Line Interface (CLI), tick the confirmation checkbox, and click on Next.

Provide the description and click on Create access key.

Here, you will see the generated credentials; you can also download the CSV file for future reference.


Configure the keys by running the aws configure command and start the process.
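The exchange looks roughly like the sketch below; the values are placeholders for the credentials generated above:

aws configure
# AWS Access Key ID [None]: <your-access-key-id>
# AWS Secret Access Key [None]: <your-secret-access-key>
# Default region name [None]: us-east-1
# Default output format [None]: json

# Verify which identity the CLI is now using
aws sts get-caller-identity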

Create a jump host server (main server)

This server will host all the tools like Jenkins, SonarQube, Trivy, Terraform, Git, Docker, kubectl etc.

Link for the main server IaC code:

https://github.com/CloudTechDevOps/2nd10WeeksofCloudOps-main/tree/main/terraform_main_ec2

Let's create the main server using Terraform, log in to Jenkins, and create the EKS cluster using a Jenkins Terraform job.

After the jump host server has been created, check whether the tools below are installed. Note: tool installation is provisioned via user data in the same Terraform script.

# Check all installed versions

• jenkins --version
• docker --version
• docker ps
• terraform --version
• kubectl version
• aws --version
• trivy --version
• eksctl --version

Jenkins Job Configuration for Terraform EKS


Step 1: EKS provision job

https://github.com/CloudTechDevOps/2nd10WeeksofCloudOps-main/tree/main/eks-terraform

Note: keep the EKS script ready beforehand; we are going to provision EKS using a Jenkins pipeline.

That is done. Now go to Jenkins and add the Terraform plugin to provision AWS EKS using a Pipeline job.

Go to Jenkins dashboard –> Manage Jenkins –> Plugins

Under Available plugins, search for Terraform and install it.

Go to the terminal and use the below command to find the path to our Terraform binary (we will use it in the Tools section):

which terraform


Now come back to Manage Jenkins –> Tools

Add Terraform under Tools.

Apply and save.

Now, in our EKS script we have configured backend.tf to maintain the state file remotely (in S3).

GIVE YOUR S3 BUCKET NAME IN THE BACKEND.TF
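For reference, a minimal sketch of such a backend block; the bucket, key, and region values are placeholders to replace with your own:

terraform {
  backend "s3" {
    bucket = "your-s3-bucket-name"    # replace with your bucket name
    key    = "eks/terraform.tfstate"  # path of the state file inside the bucket
    region = "us-east-1"              # region where the bucket lives
  }
}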


Now create a new job for the EKS provision.

Choose the Pipeline script from SCM option, as our Jenkins script is already available in GitHub.

Path of the Jenkinsfile:

https://github.com/CloudTechDevOps/2nd10WeeksofCloudOps-main/tree/main/eks-terraform

We have to give the path of the Jenkinsfile, or you can copy-paste the script into the Jenkins pipeline editor.


Let's apply and save, then Build with Parameters and select the action as apply.

Stage view: it will take up to 10 minutes to provision.

Jenkins script:

pipeline {
    agent any
    parameters {
        string(name: 'ACTION', defaultValue: 'apply', description: 'Terraform action: apply or destroy')
    }
    stages {
        stage('Checkout from Git') {
            steps {
                echo 'git checkout'
            }
        }
        stage('Terraform version') {
            steps {
                sh 'terraform --version'
            }
        }
        stage('Terraform init') {
            steps {
                dir('eks-terraform') {
                    sh 'terraform init --reconfigure'
                }
            }
        }
        stage('Terraform validate') {
            steps {
                dir('eks-terraform') {
                    sh 'terraform validate'
                }
            }
        }
        stage('Terraform plan') {
            steps {
                dir('eks-terraform') {
                    sh 'terraform plan'
                }
            }
        }
        stage('Terraform apply/destroy') {
            steps {
                dir('eks-terraform') {
                    sh "terraform ${params.ACTION} -auto-approve"
                }
            }
        }
    }
}


Check in your AWS console whether the EKS cluster and its node groups were created.


Configuring Project Integration:

Now, we have to configure SonarQube for our integration pipeline.

To do that, copy your Jenkins server public IP and paste it into your favourite browser with port 9000.

The username and password will be admin.

Click on Log In.

Update the password.


Click on Administration, then Security, and select Users.

Click on Update tokens.

Click on Generate.

Copy the token, keep it somewhere safe, and click on Done.

Now, we have to configure webhooks for quality checks.


Click on Administration, then Configuration, and select Webhooks.

Click on Create.

Provide the name of your project and, in the URL, provide the Jenkins server public IP with port 8080 plus sonarqube-webhook as the suffix, then click on Create.

http://<jenkins-server-public-ip>:8080/sonarqube-webhook/

Here, you can see the webhook.


Now, we have to create a project for the frontend code.

Click on Manually.

Provide the display name for your project and click on Set Up.

Click on Locally.


Select Use existing token and click on Continue.

Select Other and Linux as the OS.

After performing the above steps, you will get the command which you can see in the below snippet.

Now, use the command in the Jenkins frontend pipeline where the code quality analysis will be performed.
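The generated command typically looks like the sketch below; the project key, server URL, and token are placeholders for the values SonarQube shows you:

sonar-scanner \
  -Dsonar.projectKey=frontend \
  -Dsonar.sources=. \
  -Dsonar.host.url=http://<jenkins-server-public-ip>:9000 \
  -Dsonar.login=<your-sonar-token>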


Now, we have to create a project for the backend code.

Click on Create Project.

Provide the name of your project and click on Set up.

Click on Locally.

Select Use existing token and click on Continue.

Select Other and Linux as the OS.

After performing the above steps, you will get the command which you can see in the below snippet.

Now, use the command in the Jenkins backend pipeline where the code quality analysis will be performed.


Now, we have to store the Sonar credentials.

Go to Dashboard -> Manage Jenkins -> Credentials

Select the kind as Secret text, paste your token in Secret, and keep the other fields as they are.

Click on Create.

Now, we have to store the GitHub personal access token to push the deployment file, which will be modified in the pipeline itself for the ECR image.

Add GitHub credentials

Select the kind as Secret text, paste your GitHub personal access token (not your password) in Secret, and keep the other fields as they are.

Click on Create.

Note: If you haven't generated your token yet, generate it first and then paste it into Jenkins.

Now, according to our pipeline, we need to add an Account ID to the Jenkins credentials because of the ECR repo URI.

Select the kind as Secret text, paste your AWS Account ID in Secret, and keep the other fields as they are.

Click on Create.


Now, we need to provide our ECR image name for the frontend, which is frontend only.

Select the kind as Secret text, paste your frontend repo name in Secret, and keep the other fields as they are.

Click on Create.

Now, we need to provide our ECR image name for the backend, which is backend only.

Select the kind as Secret text, paste your backend repo name in Secret, and keep the other fields as they are.

Click on Create.

Final snippet of all the credentials that we needed to implement this project.

Step 10: Install the required plugins and configure them to deploy our Three-Tier Application

Install the following plugins by going to Dashboard -> Manage Jenkins -> Plugins -> Available Plugins:

Docker
Docker Commons
Docker Pipeline
Docker API
docker-build-step
Eclipse Temurin installer
NodeJS
OWASP Dependency-Check
SonarQube Scanner


Now, we have to configure the installed plugins.

Go to Dashboard -> Manage Jenkins -> Tools

We are configuring the JDK.

Search for jdk and provide the configuration like the below snippet.

Now, we will configure the sonarqube-scanner.

Search for the SonarQube scanner and provide the configuration like the below snippet.


Now, we will configure NodeJS.

Search for node and provide the configuration like the below snippet.

Now, we will configure the OWASP Dependency-Check.

Search for Dependency-Check and provide the configuration like the below snippet.


Now, we will configure Docker.

Search for docker and provide the configuration like the below snippet.

Now, we have to set the path for SonarQube in Jenkins.

Go to Dashboard -> Manage Jenkins -> System

Search for SonarQube installations.


Provide the name as it is, then in the Server URL paste the SonarQube public IP (same as the Jenkins IP) with port 9000, select the sonar token that we added recently, and click on Apply & Save.

After Sonar integration:

We need to create Amazon ECR private repositories for both tiers (frontend & backend).

Click on Create repository.

Select the Private option, provide the repository name, and click on Save.
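Equivalently, the repository can be created from the CLI; a minimal sketch, assuming the repo name frontend and the us-east-1 region:

# Create a private ECR repository for the frontend image (name and region are placeholders)
aws ecr create-repository --repository-name frontend --region us-east-1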


Do the same for the backend repository and click on Save.

Now, we have set up our ECR private repositories.

Next, we need to configure ECR locally, because we have to push our images to Amazon ECR.

Copy the 1st command for login and paste it on the jump host server.
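That login command generally looks like the sketch below; the region and account ID are placeholders for your own values:

# Authenticate Docker to your private ECR registry
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin <aws-account-id>.dkr.ecr.us-east-1.amazonaws.com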


Open your forked project and edit the config.js file: place your domain name there. In my case it is api.narni.co.in; at your end it would be, for example, api.xyz.com. Don't remove the api prefix, because we refer to this api record at Route 53 record creation time.

After making these changes, the frontend of the application will send all the API calls to the domain name https://api.narni.co.in, and that will ultimately point to our backend server.

Now, we are ready to create our Jenkins pipeline to deploy our backend code.

Go to the Jenkins dashboard.

Click on New Item.

Provide the name of your pipeline and click on OK.


This is the Jenkinsfile to deploy the frontend code on EKS.

FRONTEND

https://github.com/CloudTechDevOps/2nd10WeeksofCloudOps-main/blob/main/Jenkins-Pipeline-Code/Jenkinsfile-Frontend

OUTPUT

BACKEND

https://github.com/CloudTechDevOps/fpjt/blob/main/Jenkins-Pipeline-Code/Jenkinsfile-Backend

In total there are three jobs: EKS, FRONTEND, BACKEND.


ArgoCD

Install & Configure ArgoCD

We will be deploying our application in a three-tier namespace. To do that, we will create the three-tier namespace on EKS:

aws eks update-kubeconfig --name project-eks --region us-east-1

kubectl create namespace three-tier

Now, we will install ArgoCD.

To do that, create a separate namespace for it and apply the ArgoCD configuration for installation:

kubectl create namespace argocd

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.7/manifests/install.yaml


All pods must be running; to validate, run the below command:

kubectl get pods -n argocd

Now, expose the ArgoCD server as a LoadBalancer using the below command (alternatively, you can edit the service file to enable LoadBalancer or NodePort):

kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'

You can validate whether the Load Balancer was created or not by going to the AWS Console.


To access ArgoCD, copy the LoadBalancer DNS and hit it in your favorite browser.

You will get a warning like the below snippet.

Click on Advanced.

Click on the link that appears under Hide advanced.

Now, we need to get the password for our ArgoCD server to perform the deployment.


• Now, in order to log in to the UI you need the credentials. For the username you can use admin, and the password is stored in the secret called argocd-initial-admin-secret in the cluster.

• You need to run the below command to get the value of the secret.

kubectl get secret argocd-initial-admin-secret -n argocd -o yaml

• The secret is base64 encoded, so you have to decode it by running the below command.

echo "secret value" | base64 --decode

• After running the above command you will have the decoded value of the secret, and using that as the password you can log in to the UI.
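Both steps can also be combined into a single one-liner, assuming the standard field layout of the ArgoCD secret:

# Fetch and decode the initial admin password in one step
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 --decode; echo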

• Now, the installation has been completed.


Here is our ArgoCD dashboard.

Grafana & Prometheus

First, we need to install Helm:

• wget https://get.helm.sh/helm-v3.6.0-linux-amd64.tar.gz
• tar -zxvf helm-v3.6.0-linux-amd64.tar.gz
• sudo mv linux-amd64/helm /usr/local/bin/helm
• sudo chmod 755 /usr/local/bin/helm # make the binary executable
• helm version

Step 1: Add the Helm stable charts for your local client. Execute the below command:

helm repo add stable https://charts.helm.sh/stable

Step 2: Add the Prometheus Helm repo:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

Step 3: Create the Prometheus namespace:

kubectl create namespace prometheus

Step 4: Install Prometheus:

helm install stable prometheus-community/kube-prometheus-stack -n prometheus


• The above command installs kube-prometheus-stack. The kube-prometheus-stack chart comes with a Grafana deployment embedded (as the default).

Now Prometheus is installed using Helm from the EC2 jump host.

• To check whether Prometheus is installed or not, use the below command:

kubectl get pods -n prometheus

To check the services (svc) of Prometheus:

kubectl get svc -n prometheus

Grafana comes along with Prometheus in the stable chart. This output is confirmation that our Prometheus is installed successfully; there is no need to install Grafana as a separate tool, as it comes along with Prometheus.

Let's expose Prometheus and Grafana to the external world.


There are 2 ways to expose them:

1. through NodePort

2. through LoadBalancer

Let's go with the LoadBalancer. To attach the load balancer we need to change the service type from ClusterIP to LoadBalancer. Command to edit the svc:

kubectl edit svc stable-kube-prometheus-sta-prometheus -n prometheus


Change the type from ClusterIP to LoadBalancer, and after changing it make sure you save the file.

You can see I now have a load balancer for my Prometheus, which I can access from that link.
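If you prefer a non-interactive approach, the same change can be applied with a patch; a minimal sketch, assuming the service name created by this chart release:

# Switch the Prometheus service to type LoadBalancer without opening an editor
kubectl patch svc stable-kube-prometheus-sta-prometheus -n prometheus \
  -p '{"spec": {"type": "LoadBalancer"}}'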


We can use the Prometheus UI for monitoring the EKS cluster, but the Prometheus UI is not convenient for the user; Grafana will extract the metrics from Prometheus and show them in a user-friendly manner.

Let's change the svc of Grafana and expose it to the outer world.


Command to edit the svc of Grafana:

kubectl edit svc stable-grafana -n prometheus

The Grafana LoadBalancer is now also exposed.

Use the link of the LoadBalancer and access it from the browser.


kubectl get secret --namespace prometheus stable-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

Use the above command to get the password; the username is admin.

Create a dashboard in Grafana


Let's create a dashboard by importing one.

Click on Import and import a dashboard by its ID number.

There are plenty of ready-made templates; use the pre-existing templates and modify them based on our needs. They use Prometheus as the data source.


Click on Import.

Now we can see the entire data of the EKS cluster:


1. CPU and RAM use

2. Pods in a specific namespace

3. Pod up history

4. HPA

5. Resources by container


CPU used by container & limits

Network bandwidth & packet rate

Note: we can deploy Grafana and Prometheus with different methods:

1. Create all configuration files of both Prometheus and Grafana and execute them in the right order.

2. Prometheus Operator: to simplify and automate the configuration and management of the Prometheus monitoring stack running on a Kubernetes cluster.

3. Helm chart: using Helm to install the Prometheus Operator including Grafana.

Install Ingress Process

This is optional.

We can deploy our ingress controller by using the below command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.1/deploy/static/provider/cloud/deploy.yaml


Step 11: We will deploy our Three-Tier Application using ArgoCD.

As our repository is private, we need to configure the private repository in ArgoCD.

Click on Settings and select Repositories.

Click on CONNECT REPO USING HTTPS.


Now, provide the repository name where your manifest files are present.

Provide the username and GitHub personal access token and click on CONNECT.

If your Connection Status is Successful, it means the repository was connected successfully.


After that, you need to change the RDS endpoint in the secrets.yaml file, using base64-encoded values.

Encode the values like this and copy them.

Place these values in the secrets.yaml file.

Change the values in the GitHub secrets.yaml file in the same way.
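A minimal sketch of the encoding step (the -n flag matters, as it stops a trailing newline from being encoded):

echo -n "<your-rds-endpoint>" | base64
echo -n "<your-rds-username>" | base64
echo -n "<your-rds-password>" | base64

The encoded output then replaces the corresponding values under the data: section of secrets.yaml.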

Now, we will create our first application

Click on CREATE APPLICATION.


Provide the details as shown in the below snippet and scroll down.

Click on CREATE.


You can check out the load balancer named k8s-three.

Now, copy the backend ALB DNS and go to your domain provider; in my case Route 53 is the domain provider.

Create an alias record for the backend service like this.
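For reference, a record can also be created from the CLI; a minimal sketch, assuming a hypothetical hosted zone ID and using a simple CNAME to the backend ALB DNS (the console alias record shown in the snippet behaves equivalently for a subdomain):

# Hosted zone ID, domain, and ALB DNS below are placeholders
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "api.xyz.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "<backend-alb-dns>"}]
      }
    }]
  }'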


You can see all application deployments in the below snippet.

Then access the application with the frontend load balancer URL.


This is the frontend UI.

Here you will get the front-end page; now we need to initialize the database.

Install MySQL on the jump host server. MySQL and MariaDB are from the same family, so install MariaDB:

sudo yum install mariadb105-server -y

Clone the project repository on the jump host server.


Go to the project backend folder.

Connect to your database, create the database test, and exit:

mysql -h <your-rds-endpoint> -u <rds-username> -p<rds-password>

Example with my details:

mysql -h book-rds.cru0eyk6ukru.us-east-1.rds.amazonaws.com -u admin -psrivardhan
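Inside the MySQL prompt, creating the database and exiting looks like this:

CREATE DATABASE test;
EXIT;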

Then initialize the test.sql file in the backend directory:

mysql -h <your-rds-endpoint> -u <rds-username> -p<rds-password> test < test.sql

Example with my details:

mysql -h book-rds.cru0eyk6ukru.us-east-1.rds.amazonaws.com -u admin -psrivardhan test < test.sql


Then hit the frontend load balancer URL; you will get the books page, and you can add books.

Now your project is complete.


AWS DevOps project interview explanation

OVERVIEW OF AWS & DEVOPS 3-TIER PROJECT

We successfully deployed a comprehensive DevOps three-tier Node.js project utilizing various AWS services and DevOps tools. The project follows a CI/CD pipeline, leveraging Jenkins CI to automate the Docker build process for both the Node.js frontend and backend applications. Using Jenkins, we created a robust pipeline that automates the build, test, and deployment of Docker images, ensuring consistency and efficiency.

For deployment, we integrated ArgoCD to handle continuous delivery, deploying both the frontend and backend pods to the Kubernetes cluster hosted on Amazon EKS (Elastic Kubernetes Service). This allowed us to automate the deployment process and manage the state of our applications seamlessly within the cluster.

On the database side, we used Amazon RDS (Relational Database Service) to manage our backend data, ensuring high availability, scalability, and automatic backups. AWS IAM (Identity and Access Management) was used to configure role-based access, ensuring that the deployments had the correct permissions across services.

The entire infrastructure, including the EKS cluster, RDS instance, and networking setup, was provisioned and managed using Terraform. This Infrastructure as Code (IaC) approach allowed us to automate the creation of VPCs, subnets, security groups, NAT gateways, and other key resources, ensuring a scalable and secure environment.

TOOLS AND SERVICES USED IN THIS PROJECT

We have successfully implemented a robust DevOps three-tier Node.js project that fully integrates AWS cloud services with advanced DevOps tools. This project demonstrates not only proficiency in modern software development practices but also deep expertise in cloud-native architecture and automation.

CI/CD Pipeline with Jenkins: Our project begins with Jenkins CI for Continuous Integration (CI). We automated the entire build process, starting from the source code to creating Docker images for both the Node.js frontend and backend applications. Jenkins pulls the code from the version control system, builds the application, runs unit tests, and finally builds Docker images. This ensures a smooth and repeatable build process with minimal manual intervention, enhancing development velocity. By integrating Docker, we ensure that our applications are containerized, promoting portability and consistency across different environments.

Continuous Deployment with ArgoCD: For Continuous Deployment (CD), we used ArgoCD, which is a GitOps tool designed for Kubernetes. ArgoCD monitors the Git repository for changes and automatically syncs the Kubernetes manifests to the Amazon EKS (Elastic Kubernetes Service) cluster. This allows us to achieve fast, reliable, and repeatable deployments. The frontend and backend applications are deployed as pods within the EKS cluster. ArgoCD's ability to ensure the desired state of applications within the cluster provides a reliable and automated deployment strategy. This also aligns with GitOps principles, where Git is the source of truth for both application code and infrastructure.

Infrastructure Management with Terraform: The entire infrastructure was provisioned using Terraform, following Infrastructure as Code (IaC) best practices. Terraform automated the creation and management of key AWS resources, including the VPC (Virtual Private Cloud), subnets, security groups, Internet Gateways, and NAT Gateways, providing a scalable and secure network architecture. Terraform also managed the provisioning of the Amazon EKS cluster, Amazon RDS (Relational Database Service), and the Elastic Load Balancer (ELB) for managing incoming traffic. This approach allowed us to version control the infrastructure, easily replicate environments, and scale resources based on demand.

AWS Services Utilized:

Amazon EKS: EKS was chosen for container orchestration, as it provides a fully managed Kubernetes service, ensuring high availability and scalability for our Node.js applications. EKS manages the Kubernetes control plane, allowing us to focus on application deployment and scaling.

Amazon RDS: The backend database layer was handled by Amazon RDS, a managed relational database service that provides automated backups, security, and scaling. This offloads the management overhead of maintaining databases and ensures that our data is always available and secure.

Amazon IAM: IAM roles and policies were implemented to securely control access to AWS services and ensure that each component of the architecture had the least-privileged access necessary. This ensures secure communication between services, especially for CI/CD pipelines and Kubernetes clusters.

Amazon S3: For storage purposes, AWS S3 (Simple Storage Service) was used to store static assets and backups, offering high durability and scalability.

Amazon CloudWatch: For logging and monitoring, we integrated AWS CloudWatch to collect and track metrics, collect log files, and set alarms to automatically respond to any changes in performance or operational health.

Security and Networking: Security was paramount in this project. Terraform managed the VPC to isolate network traffic, while security groups and NACLs (Network Access Control Lists) ensured that only the necessary ports were open for communication between the different layers of the application (frontend, backend, and database). An Application Load Balancer (ALB) routed traffic to the appropriate services, ensuring high availability and load balancing.

Additionally, SSL certificates from AWS ACM (Certificate Manager) were used to enable secure communication over HTTPS, protecting data in transit. Secrets and sensitive information, such as database credentials and API keys, were securely managed using AWS Secrets Manager, ensuring that secrets were encrypted and securely retrieved when needed by applications.

Scalability and Monitoring: The use of Amazon EKS allowed us to take advantage of Kubernetes' native auto-scaling features, ensuring that the applications can scale up or down based on demand. CloudWatch, along with Prometheus and Grafana, was used to monitor the system's health and performance. Any anomalies in CPU usage, memory, or application performance were automatically flagged, enabling quick issue detection and resolution.

Key Benefits: This project illustrates how combining AWS cloud infrastructure with DevOps automation tools like Jenkins, Docker, ArgoCD, and Terraform can result in a highly efficient, automated, and scalable deployment pipeline. It enhances the development workflow, reduces manual effort, and ensures the system is always running in the desired state.

Roles and Responsibilities:

1. CI/CD Pipeline Management:
• Design and Implement CI/CD Pipelines: Developed and maintained Continuous Integration and Continuous Deployment (CI/CD) pipelines using Jenkins. Automated the build, testing, and deployment process, ensuring code quality and fast releases.
• Docker Integration: Built and deployed Docker images for both Node.js frontend and backend applications, ensuring consistency across environments by using containerized builds.

2. Containerization and Orchestration:
• Container Management: Designed and deployed applications as Docker containers, which enhanced application portability and reduced environment inconsistencies.
• Kubernetes Deployment (Amazon EKS): Managed and deployed Node.js applications on Amazon EKS (Elastic Kubernetes Service), leveraging Kubernetes for orchestration, scaling, and high availability of applications.
• ArgoCD for GitOps: Set up ArgoCD for continuous delivery, managing the state of Kubernetes applications. Ensured automated, repeatable, and reliable deployments through GitOps practices.

3. Infrastructure as Code (IaC):
• Terraform Infrastructure Automation: Provisioned and managed AWS infrastructure using Terraform, ensuring version control and repeatability. Automated the creation of VPCs, subnets, security groups, NAT Gateways, Amazon EKS clusters, and Amazon RDS instances.
• Modular Terraform Architecture: Created reusable Terraform modules to streamline the process of infrastructure management, making it easier to replicate environments for different stages (development, testing, production).
• AWS Cloud Infrastructure Design: Designed and implemented cloud-native architecture on AWS, ensuring high availability, fault tolerance, and scalability for the application infrastructure.

4. AWS Services Management:
• Amazon EKS: Deployed and managed the Kubernetes cluster using Amazon EKS, handling the scalability and orchestration of microservices-based applications. Used Kubernetes features like horizontal pod autoscaling and rolling updates.
• Amazon RDS: Managed the relational database using Amazon RDS, ensuring secure, scalable, and automated database management. Performed tasks like setting up multi-AZ deployments, automated backups, and read replicas for failover and performance.
• AWS IAM: Implemented security best practices by configuring IAM roles and policies for services to ensure secure access between Jenkins, Kubernetes, and other AWS services.
• Amazon S3 & CloudFront: Managed static asset storage in Amazon S3 and set up CloudFront as a content delivery network (CDN) to optimize the delivery of static content for the application.
• AWS CloudWatch: Configured CloudWatch for application performance monitoring, logging, and alerting. Monitored key metrics such as CPU utilization, memory usage, and request latencies.

5. Security and Networking:
• VPC and Networking: Designed secure networking architecture within AWS, including VPC setup with private/public subnets, Internet Gateways, NAT Gateways, and security groups. Ensured secure communication between tiers (frontend, backend, and database).
• SSL/TLS Security: Managed SSL certificates using AWS ACM (Certificate Manager) for securing communication with HTTPS. Ensured secure traffic flow between users and the application.
• AWS Secrets Manager: Implemented AWS Secrets Manager to store and manage sensitive information, such as database credentials and API keys.

Day-to-Day Tasks in a DevOps Role:

1. CI/CD Pipeline Management:
◦ Monitoring Jenkins CI/CD pipelines to ensure that automated builds, tests, and deployments are successful.
◦ Troubleshooting failed builds or deployments, addressing any issues related to environment inconsistencies or code errors.
◦ Updating the CI/CD pipeline to integrate new features or tools based on project requirements.
◦ Optimizing build times and enhancing test coverage to ensure efficient deployment processes.

2. Containerization and Orchestration:
◦ Monitoring the health and performance of containers running in Amazon EKS.
◦ Performing routine tasks like scaling the Kubernetes cluster up or down based on traffic patterns or application performance.
◦ Managing deployments using ArgoCD, ensuring that the Kubernetes manifests are in sync with the Git repository, and troubleshooting any deployment failures.
◦ Rolling out updates to the Node.js frontend and backend services in a phased or blue-green deployment model.

3. Infrastructure as Code (IaC) with Terraform:
◦ Writing and maintaining Terraform scripts to modify or add infrastructure as needed (e.g., adding new EC2 instances, modifying security groups).
◦ Running terraform plan and apply commands to make infrastructure changes in a controlled and versioned manner.
◦ Monitoring the state of AWS infrastructure and troubleshooting any issues that arise, such as misconfigured networking or failed resource creation.
◦ Regularly reviewing and updating Terraform modules to ensure best practices are followed for scalability and security.

4. AWS Services Management:
◦ Monitoring AWS EKS and RDS instances for performance, ensuring that the applications are running optimally.
◦ Ensuring database backups and security configurations are properly maintained (e.g., checking RDS backups, reviewing security policies).
◦ Performing routine maintenance tasks like cleaning up unused resources (e.g., unused EC2 instances, stale Docker images).
◦ Monitoring costs related to AWS services and making adjustments to optimize usage and budget.

5. Security and Networking:
◦ Reviewing security logs and making updates to IAM roles and policies to ensure the principle of least privilege is maintained.
◦ Monitoring network traffic and ensuring security group configurations allow only necessary access to application resources.
◦ Managing and rotating credentials and secrets stored in AWS Secrets Manager.
◦ Responding to security incidents or potential vulnerabilities by updating firewall rules, patching, or adjusting permissions.

6. Monitoring and Logging:
◦ Monitoring application and system metrics using CloudWatch and Prometheus, and responding to alerts when thresholds are crossed (e.g., high CPU usage, memory leaks, database latency).
◦ Analyzing logs in CloudWatch Logs to identify and troubleshoot issues in the application or infrastructure.
◦ Creating or adjusting dashboards in Grafana to give clear visibility into application performance and system health.
◦ Performing root cause analysis for performance issues or application failures.

7. Collaboration and Communication:
◦ Daily stand-up meetings with cross-functional teams (development, operations, QA) to discuss progress, blockers, and any upcoming changes to the infrastructure or deployment process.
◦ Reviewing and providing feedback on infrastructure-related pull requests (e.g., changes to Terraform scripts or Jenkins pipelines).
◦ Keeping documentation up to date with any changes to infrastructure, deployment processes, or security practices.

8. Continuous Improvement:
◦ Identifying bottlenecks in the deployment process and suggesting improvements, such as caching dependencies in the build process or optimizing Jenkins jobs.
◦ Researching and implementing new tools or updates to existing tools to improve efficiency or security.
◦ Conducting periodic security audits to ensure all cloud and DevOps processes adhere to best practices.

Summary of Daily Tasks:
On a day-to-day basis, you'll be:
• Monitoring and maintaining CI/CD pipelines and infrastructure.
• Responding to issues, scaling resources, and optimizing processes.
• Managing security, networking, and costs within the cloud environment.
• Collaborating with teams to ensure smooth deployments.
• Continuously improving automation, monitoring, and security measures.
