End-To-End DevSecOps Pipeline With Jenkins
Project Overview:
• Log in to the AWS Management Console, navigate to the EC2 dashboard, and click "Launch Instance." Choose an instance type and memory accordingly, and configure the security settings.
• Add the user data script provided on GitHub to install the required tools (Java, Jenkins, AWS CLI, Trivy, Docker, SonarQube, kubectl, eksctl, and Helm), and launch with your selected key pair.
• Now, we have to configure Jenkins. Copy the public IP of your Jenkins Server and paste it into your browser with port 8080.
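• To unlock Jenkins on first login, the initial admin password can typically be read on the instance (the path below assumes a default package installation):
sudo cat /var/lib/jenkins/secrets/initialAdminPassword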
• Install the suggested plugins.
• Go to Manage Jenkins, select Credentials, choose AWS Credentials as the Kind, add your AWS Access Key & Secret Access Key with a unique ID, and click Create.
• Add GitHub credentials by selecting the Username with password kind and using a Personal Access Token as the password (repositories in industry projects are often private), and save the credentials for use in your pipelines.
• Create a VPC with an internet gateway, a public subnet, a route table, and a security group allowing access to specific ports (22, 8080, 9000, 9090, 80), and associate the route table with the subnet; a CLI sketch follows below.
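A minimal AWS CLI sketch of that networking setup, assuming illustrative CIDR blocks (10.0.0.0/16 and 10.0.1.0/24) and using the IDs returned by each command:
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id <igw-id> --vpc-id <vpc-id>
aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 10.0.1.0/24
aws ec2 create-route-table --vpc-id <vpc-id>
aws ec2 create-route --route-table-id <rtb-id> --destination-cidr-block 0.0.0.0/0 --gateway-id <igw-id>
aws ec2 associate-route-table --route-table-id <rtb-id> --subnet-id <subnet-id>
aws ec2 create-security-group --group-name devsecops-sg --description "DevSecOps security group" --vpc-id <vpc-id>
aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 22 --cidr 0.0.0.0/0
Repeat the last command for ports 80, 8080, 9000, and 9090.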
Step-4: Create a Jump Server and an EKS Cluster using the below commands
1. Create an EC2 instance for the Jump Server using the below command:
aws ec2 run-instances --image-id <image-id> --count 1 --instance-type <type> --key-name <keypair> --security-group-ids <sg-id> --subnet-id <subnet-id> --region <region>
2. Create an AWS EKS cluster using the below command:
eksctl create cluster --name <name> --region <region> --node-type <type> --nodes-min <min-num> --nodes-max <max-num>
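For reference, a filled-in sketch with illustrative values (the cluster name matches the one used later for the Load Balancer Controller; node type and counts are assumptions):
eksctl create cluster --name Three-Tier-K8s-EKS-Cluster --region us-east-1 --node-type t2.medium --nodes-min 2 --nodes-max 2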
3. Configure the AWS credentials on the Jump Server using the aws configure command:
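The command prompts for the credentials interactively; the values below are placeholders:
aws configure
AWS Access Key ID [None]: <access-key-id>
AWS Secret Access Key [None]: <secret-access-key>
Default region name [None]: us-east-1
Default output format [None]: json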
4. After creation of the EKS cluster, add it to the Jump Server's kubeconfig using the below command:
aws eks update-kubeconfig --region us-east-1 --name <name of the cluster>
5. Verify the nodes that were created along with the EKS cluster:
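For example, from the Jump Server:
kubectl get nodes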
Step-5: Now, we will configure the AWS Load Balancer Controller on our EKS cluster, because our application will have an Ingress.
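The commands below assume that the AWSLoadBalancerControllerIAMPolicy already exists in your account, that the cluster has an IAM OIDC provider associated, and that the eks Helm repository has been added. A minimal sketch of those prerequisites (the controller version in the policy URL is illustrative):
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.2/docs/install/iam_policy.json
aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json
eksctl utils associate-iam-oidc-provider --region us-east-1 --cluster Three-Tier-K8s-EKS-Cluster --approve
helm repo add eks https://aws.github.io/eks-charts
helm repo update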
4. Create a Service Account using the below command, replacing your account ID:
eksctl create iamserviceaccount --cluster=Three-Tier-K8s-EKS-Cluster --namespace=kube-system --name=aws-load-balancer-controller --role-name AmazonEKSLoadBalancerControllerRole --attach-policy-arn=arn:aws:iam::<your_account_id>:policy/AWSLoadBalancerControllerIAMPolicy --approve --region=us-east-1
5. Run the below command to deploy the AWS Load Balancer Controller:
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=Three-Tier-K8s-EKS-Cluster --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller
6. After 2 minutes, run the command below to check whether your pods are running or not:
kubectl get deployment -n kube-system aws-load-balancer-controller
Step-6: We need to create Amazon ECR Private Repositories for both Tiers (Frontend & Backend)
• Now, we need to configure ECR locally because we have to upload our images to Amazon
ECR.
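• A sketch of that setup, assuming the us-east-1 region and placeholder repository names (frontend and backend):
aws ecr create-repository --repository-name frontend --region us-east-1
aws ecr create-repository --repository-name backend --region us-east-1
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <your_account_id>.dkr.ecr.us-east-1.amazonaws.com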
• Now, with ArgoCD installed in the argocd namespace, expose the argocd-server service as a LoadBalancer using the below command:
• kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
• You can validate whether the Load Balancer is created or not by going to the AWS Console:
• To access ArgoCD, copy the LoadBalancer DNS and open it in your favorite browser.
• You will get a warning like "Your connection is not private"; click on Advanced and proceed.
Now, we need to get the password for our argoCD server to perform the deployment using ArgoCD.
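One way to retrieve it, assuming the default argocd-initial-admin-secret is still present:
kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 -d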
Enter the username (admin) and password in argoCD and click on SIGN IN.
• Previously, we created the instance using a user data script; in that script, the creation of a SonarQube container is declared, so SonarQube is already running on the instance.
• To access it, copy your Jenkins Server public IP and paste it into your browser with port 9000.
Now we need to generate a token in SonarQube so it can be used from Jenkins:
Click on Manually.
Click on Locally.
Select Use existing token and click on Continue.
After performing the above steps, you will get the command which you can see in the below snippet.
Now, use the command in the Jenkins Frontend Pipeline where Code Quality Analysis will be
performed.
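It typically looks like the following; the project key, host URL, and token are placeholders generated by your SonarQube instance:
sonar-scanner \
  -Dsonar.projectKey=frontend \
  -Dsonar.sources=. \
  -Dsonar.host.url=http://<jenkins-server-public-ip>:9000 \
  -Dsonar.login=<generated-token>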
• Now create the GitHub credentials using a Personal Access Token (not a password), add your AWS Account ID as a secret, and also create secrets for the Frontend Repo and Backend Repo names in the Jenkins Credentials.
Step-9: Install the required plugins and configure the plugins to deploy our Three-Tier Application
Install the following plugins by going to Dashboard -> Manage Jenkins -> Plugins -> Available Plugins
1. Docker
2. Docker Commons
3. Docker Pipeline
4. Docker API
5. docker-build-step
6. Eclipse Temurin installer
7. NodeJS
8. OWASP Dependency-Check
9. SonarQube Scanner
• Now, we have to configure the installed plugins.
• Go to Dashboard -> Manage Jenkins -> Tools
• Now, we will configure the sonarqube-scanner
• Search for the sonarqube scanner and provide the configuration like the below snippet.
• Now, we will configure nodejs
• Search for node and provide the configuration like the below snippet.
• As our repository is private, we need to configure the private repository in ArgoCD.
• Click on Settings and select Repositories.
• Add your github repository where the Kubernetes Manifest Files are present.
• Go to Settings, choose Connect Repo using HTTPS, and add the GitHub repository URL along with your credentials, since the repo is private.
• Click on Connect, and the repo will be connected to ArgoCD:
• Now create separate apps for the Frontend, Backend, Database, and Ingress using the manifest files:
• Frontend app creation:
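• If you prefer the CLI over the UI, an equivalent sketch for the frontend app (the repo URL, manifest path, and namespace are assumptions; repeat for the other apps):
argocd app create frontend --repo https://github.com/<your_user>/<your_repo>.git --path Kubernetes-Manifests-file/Frontend --dest-server https://kubernetes.default.svc --dest-namespace three-tier --sync-policy automated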
• If you observe, we have configured the Persistent Volume & Persistent Volume Claim, so if the pods get deleted, the data won't be lost; the data will be stored on the host machine.
• To verify that all the pods and services are running in the EKS cluster, use the command:
kubectl get all -n three-tier
Step-12: Make sure to modify the deployment.yml files in the frontend and backend.
1. In the frontend deployment file, update it with your domain name.
Now, we need to access our Prometheus and Grafana consoles from outside of the cluster.
1. For that, we need to change the Service type from ClusterIP to LoadBalancer
2. Edit the stable-kube-prometheus-sta-prometheus service:
3. kubectl edit svc stable-kube-prometheus-sta-prometheus
• Edit the stable-grafana service
• kubectl edit svc stable-grafana
• Now, if you list the services again, you will see the LoadBalancer DNS names.
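• For example (the namespace depends on where the monitoring stack was installed):
kubectl get svc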
1. Click on Dashboard.
2. Once you click on Dashboard, you will see monitoring for a lot of Kubernetes components.
3. Let's try to import a Kubernetes Dashboard.
4. Click on New and select Import.
5. Provide the 17375 ID and click on Load.
6. Note: 17375 is a unique dashboard ID from Grafana which is used to monitor and visualize Kubernetes data.
• You can view your Kubernetes Cluster Data.
Step-14: Create the DNS record for the ALB
• Once your Ingress application is deployed, it will create an Application Load Balancer.
• You can check out the load balancer named with k8s-three.
• Now, copy the ALB DNS and go to your domain provider; in my case, GoDaddy is the domain provider.
• Choose Simple Routing
We can check that the data is stored in the database (MongoDB container) using the commands below:
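A sketch of those commands; the pod name, database, and collection are placeholders that depend on your manifests (use mongo instead of mongosh on older images):
kubectl get pods -n three-tier
kubectl exec -it <mongodb-pod-name> -n three-tier -- mongosh
show dbs
use <database-name>
db.<collection-name>.find()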
• You can see the data that we have entered in the application.