
CI/CD Pipeline for Automating Website Deployment

Server 1 (EC2):
• Purpose: Jenkins server for CI/CD pipeline.
• Role: Automates build and deployment triggers.
Server 2 (EC2):
• Purpose: Ansible server for configuration management.
• Role: Executes playbooks to manage deployments (deployment.yaml, service.yaml).
Server 3 (EC2):
• Purpose: Terraform for infrastructure as code (IaC).
• Role: Manages VPC creation for Server 1 and Server 2.

The Terraform installation process:


Switch from the normal user to the root user, then update the server: the OS is Ubuntu, and the update is needed before installing some packages. After that, install Terraform and type all the Terraform commands step by step.
The wget command is used to download the Terraform package from its release link.

Then unzip the terraform zip file.


After that, unzip the file; unzipping produces the terraform binary, which can then be run.
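Put together, the install could look like this minimal sketch, assuming Ubuntu; the Terraform version in the wget link is an assumption and should match the release actually downloaded:

sudo su -                          # switch from the normal user to root
apt update                         # update the Ubuntu package index
apt install -y wget unzip          # packages needed for the download and unzip steps
wget https://releases.hashicorp.com/terraform/1.9.5/terraform_1.9.5_linux_amd64.zip
unzip terraform_1.9.5_linux_amd64.zip
mv terraform /usr/local/bin/       # put the binary on the PATH
terraform -version                 # confirm the installation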

The next step is to create a file with the Terraform extension, for example main.tf.

( terraform init ) This command initializes the Terraform working directory and checks that the Terraform code's providers can be downloaded.

Type the Terraform commands step by step to eventually create a VPC for the two servers. The second command, terraform plan, is used for planning: Terraform tells us what it will create from that code. The last command is terraform apply; it runs the code and then creates the VPC. A sketch of main.tf and all three commands follows.
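A minimal sketch of what main.tf and the three commands could look like; the region and CIDR block are assumptions, not the project's actual values:

cat > main.tf <<'EOF'
provider "aws" {
  region = "us-east-1"            # assumed region
}

resource "aws_vpc" "project_vpc" {
  cidr_block = "10.0.0.0/16"      # assumed CIDR block
}
EOF

terraform init     # initialize the working directory and download the AWS provider
terraform plan     # show what Terraform will create from the code
terraform apply    # run the code and create the VPC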
Finally, the VPC is created successfully:

JENKINS SERVER:
(Server 1 Jenkins server)
sudo apt update:

Then install the Java package, because Jenkins is built with Java, so the Java package is required.

Install the Jenkins package:
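A sketch of the standard Ubuntu install steps from the Jenkins documentation; the Java version is an assumption, and the key URL may change between releases:

sudo apt update
sudo apt install -y openjdk-17-jre        # Jenkins is built with Java, so Java is required
sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \
    https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc]" \
    "https://pkg.jenkins.io/debian-stable binary/" | \
    sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt update
sudo apt install -y jenkins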

Then go to the Ansible server and install the Ansible package there (Server 2, the Ansible server).
Then install Docker on Server 2.

Then check that Docker is installed successfully on Server 2:
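A minimal sketch of those Server 2 steps, using the packages from the Ubuntu repositories:

sudo apt update
sudo apt install -y ansible       # configuration management tool
sudo apt install -y docker.io     # Docker engine from the Ubuntu repositories
sudo docker --version             # confirm Docker installed successfully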


docker images: this command lists the local Docker images.

Next step: copy the Jenkins server's public IP address and paste it into the browser (with port 8080) to open the Jenkins page. The page shows a file path above the admin password field; read the initial admin password from that path.
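On Ubuntu, the path shown on that page is typically:

sudo cat /var/lib/jenkins/secrets/initialAdminPassword   # prints the initial admin password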
Then install the suggested plugins and create a username and password.
Also install these plugins: SSH Agent, Pipeline Stage View, Docker Pipeline.

After installing the important plugins, restart Jenkins and sign in with the username and password.

A Dockerfile and an index.html file were already created and then pushed to GitHub.
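Those files are not shown in this document; a minimal sketch of what they could contain, assuming an nginx base image serving the static page:

cat > Dockerfile <<'EOF'
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/index.html
EOF

echo '<h1>Hello from the CI/CD pipeline</h1>' > index.html   # placeholder page content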

GIT:
git init: this command creates a Git repository; without creating a repository, those files can never be pushed to GitHub.
Type the Git commands step by step.
git remote add origin: this command connects the local repository to the remote repository.
git add: moves those files into the staging area, which is temporary storage.
git commit: records the staging area in the local repository.

git push origin main: this command pushes those files to the remote repository.
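The full sequence could look like this; the repository URL is a placeholder, and git branch -M main assumes the remote's default branch is main:

git init                              # create the local repository
git branch -M main                    # make sure the local branch is named main
git remote add origin https://github.com/<your-user>/<your-repo>.git
git add Dockerfile index.html         # stage the files
git commit -m "add website files"     # record the staged files in the local repo
git push -u origin main               # push to the remote repository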
Then go back to the Jenkins page and create a new job.

Select Pipeline, click OK, then choose a build trigger.

Then click Pipeline Syntax and choose git: Git. Copy the GitHub repo link, paste it into the Repository URL field, and select the branch. Finally, click Generate Pipeline Script, which creates the script automatically; copy that script, go back to the script page, and paste it in.
This script is used to pull the files from the GitHub repo:
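A sketch of what the generated stage could look like, mirroring the stage format used later in this document; the stage name and repository URL are placeholders:

stage('pull files from github'){
    git branch: 'main', url: 'https://github.com/<your-user>/<your-repo>.git'
}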

Then click Apply and Save, and click Build Now; the files are pulled from the GitHub repo automatically.

Go back to the Jenkins server and change to the Jenkins workspace path.

The first stage script runs successfully.


The next step is to create a GitHub webhook, because without a GitHub webhook the stage script never triggers automatically. So first create an API token; this token is just for security purposes and is used to create the GitHub webhook. Creating the webhook takes two things: the Jenkins page URL and the API token.

GitHub webhook page:
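The webhook settings could look like this, assuming the Jenkins GitHub plugin's default endpoint:

Payload URL: http://<jenkins-public-ip>:8080/github-webhook/
Content type: application/json
Events: just the push event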


The next concept is to create a master machine and a slave machine:

Kubernetes:
Master machine:

Slave machine:
Then copy the kubeadm join token and paste it:

Slave:

Then copy this line and paste it on the slave server.

Successfully configured the kubeadm cluster.


Type this command after successfully configuring the kubeadm cluster.
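A sketch of the whole flow above; it assumes kubeadm, kubelet, kubectl, and a container runtime are already installed on both machines, and the token and hash come from the kubeadm init output:

# on the master machine
sudo kubeadm init

# run as the normal user so kubectl can reach the cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# on the slave machine: paste the join line printed by kubeadm init
sudo kubeadm join <master-private-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# back on the master: confirm both nodes joined
kubectl get nodes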

The next step is passwordless authentication:


First switch from the normal user to the Jenkins user, then create a passwordless key. After that, go to the Ansible server and modify some files; once those files are successfully modified, create a password for security purposes. That password is needed to connect from server to server, and the same process applies on the master server.
ssh ubuntu@<private-ip>: this command is used to connect from server to server. It asks for the password; type the password, and that server is connected from the Jenkins server.
First file

Second file.
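A sketch of the key setup, assuming password authentication was first enabled in /etc/ssh/sshd_config on the target servers (the files modified above) and a password was set for the ubuntu user:

sudo su - jenkins                          # switch from the normal user to the Jenkins user
ssh-keygen                                 # create the key pair (press Enter for the defaults)
ssh-copy-id ubuntu@<ansible-private-ip>    # asks for the password once, copies the public key
ssh ubuntu@<ansible-private-ip>            # now connects without a password
# repeat ssh-copy-id with the master server's private IP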

The next step is file creation: ansible.yml, deployment.yml, service.yml.


Then the files transfer to the server automatically: the GitHub webhook already exists, so pushing those files to GitHub triggers the Jenkins job automatically.

Then go back to the Jenkins page, click Pipeline Syntax, and choose sh: Shell Script.

Then copy this command and paste it into the stage script:
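A sketch of that stage; the job name in the workspace path and the IP are placeholders. Note that the job name in the path must match exactly, which is the cause of the error described below:

stage('copy files to ansible server'){
    sh 'scp /var/lib/jenkins/workspace/<job-name>/* ubuntu@<ansible-private-ip>:/home/ubuntu'
}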


Then I faced one error because I had typed the wrong job name; that was the cause of the error, so I changed my job name.

Pipeline stage view:

Recreate the script, click Apply and Save, and run the job again.
Then check the Ansible server for the files that were transferred.
Ansible server:

Master server:

Script to create a Docker image with a tag:
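A sketch of that stage; the image name is an assumption, and $BUILD_NUMBER gives each build its own tag:

stage('build docker image'){
    sh 'ssh ubuntu@<ansible-private-ip> "cd /home/ubuntu && sudo docker build -t website:$BUILD_NUMBER ."'
}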


Those scripts run successfully.

The next step is to push the image to Docker Hub.


Some configuration of the Docker Hub password is needed before pushing the image to Docker Hub: putting the raw Docker Hub password in the script is not safe, which is why the password is stored as a Jenkins credential.
The next step is to enter the Docker Hub password, create an ID name, and then click the Apply button.

Go back to the stage script page. Create a script that pushes the image to Docker Hub, and log in to Docker Hub on the Ansible server before clicking Apply.

Then click Build Now.


This is my Docker Hub account, and the image was successfully pushed to it.

The next step is to write the three YAML files. The first YAML file is used to run both of the other YAML files; in other words, the first file works as the Ansible playbook. Minimal sketches of all three follow.
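A minimal sketch of ansible.yml, assuming an inventory group named kube-master that points at the master machine:

- hosts: kube-master
  tasks:
    - name: apply the deployment
      command: kubectl apply -f /home/ubuntu/deployment.yaml
    - name: apply the service
      command: kubectl apply -f /home/ubuntu/service.yaml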
Deployment.yaml:
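A minimal sketch; the names, replica count, and image are assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: website-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: website
  template:
    metadata:
      labels:
        app: website
    spec:
      containers:
        - name: website
          image: <dockerhub-user>/website:latest   # the image pushed to Docker Hub
          ports:
            - containerPort: 80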

Service.yaml:
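A minimal sketch; the NodePort value is an assumption:

apiVersion: v1
kind: Service
metadata:
  name: website-service
spec:
  type: NodePort
  selector:
    app: website
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080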

Push all those YAML files to the server, then change some configuration on the Ansible server. Once those files are configured, create an SSH key and copy the master server's private IP address into the Ansible server's inventory: if the servers cannot connect to each other, the playbook never runs those YAML files properly, which is why the private IP must be copied into the Ansible server.

Push all three YAML files to GitHub, which automatically triggers the Jenkins script. After that, the three edited files are stored on the Ansible server and the master machine.

All files push successfully.

Finally, run the last stage script. This script runs the Ansible playbook's ansible.yml file:

stage('execute k8s from ansible'){
    // run both commands in one ssh session; a separate "ssh ... cd" does not
    // carry over to the next sh step. <ansible-private-ip> is a placeholder.
    sh 'ssh ubuntu@<ansible-private-ip> "cd /home/ubuntu && ansible-playbook ansible.yml"'
}
Finally, the job runs successfully. Then go back to the master server and type this command: ( kubectl get all )

Then finally check the project output through the browser.

Next is the Prometheus installation. Prometheus is used for metrics monitoring, and it is a helpful tool because sometimes we don't know how much memory a server is using or how much network traffic it has. It is mostly used for CPU utilization: if we don't know the CPU condition, it becomes the biggest problem, because we don't know when the server will go down. That's why it's important.
Install the Prometheus tool on the server, then extract the Prometheus file and run Prometheus.

Run Prometheus.
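A sketch of the download, extract, and run steps; the version number in the link is an assumption:

wget https://github.com/prometheus/prometheus/releases/download/v2.53.0/prometheus-2.53.0.linux-amd64.tar.gz
tar -xvf prometheus-2.53.0.linux-amd64.tar.gz   # extract the Prometheus file
cd prometheus-2.53.0.linux-amd64
./prometheus                                    # serves the web UI on port 9090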


This is the metrics.

The next installation is node_exporter. The node exporter's job is to collect the slave server's metrics and expose them so the master server's Prometheus can scrape them; that is the node exporter concept.

Then extract the node exporter file.


Finally run the node exporter.
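A sketch of the slave server steps; the version number in the link is an assumption:

wget https://github.com/prometheus/node_exporter/releases/download/v1.8.1/node_exporter-1.8.1.linux-amd64.tar.gz
tar -xvf node_exporter-1.8.1.linux-amd64.tar.gz   # extract the node exporter file
cd node_exporter-1.8.1.linux-amd64
./node_exporter                                   # exposes metrics on port 9100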

Node exporter home page.

Slave server metrics.

The next process is connecting the master server's Prometheus to the slave server's node exporter.
Set job name and targets:
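The addition to prometheus.yml could look like this minimal sketch; the job name and the slave's IP are placeholders:

scrape_configs:
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['<slave-private-ip>:9100']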

:wq!
Next, type this command: ( ps -ef | grep prometheus )
The kill command is used to kill the running process.
Then run Prometheus one more time:
./prometheus --config.file=prometheus.yml
Finally, Prometheus and the node exporter are successfully connected together.
Then check the slave server metrics from the master server, because both tools are already connected.

The final installation is the Grafana tool.


Extract the Grafana file, then go to the bin path.
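A sketch of those steps; the version number and the extracted directory name are assumptions and may differ by release:

wget https://dl.grafana.com/oss/release/grafana-10.4.2.linux-amd64.tar.gz
tar -xvf grafana-10.4.2.linux-amd64.tar.gz   # extract the Grafana file
cd grafana-10.4.2/bin                        # go to the bin path
./grafana-server                             # serves the web UI on port 3000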
Checking the Prometheus Datasource:

Select the datasources.


Upload the dashboard file. That finishes all the configuration; then create a Grafana dashboard for the master server metrics.

Finally, create a dashboard.

THE END
