CI_CD_Pipeline_Automation_with_DevOps_Tools_1735564992
Server 1 (EC2):
• Purpose: Jenkins server for CI/CD pipeline.
• Role: Automates build and deployment triggers.
Server 2 (EC2):
• Purpose: Ansible server for configuration management.
• Role: Executes playbooks to manage deployments (deployment.yaml, service.yaml).
Server 3 (EC2):
• Purpose: Terraform for infrastructure as code (IaC).
• Role: Manages VPC creation for Server 1 and Server 2.
The next step is to create a file with the Terraform extension, for example (main.tf).
( terraform init ) This command initializes the Terraform working directory and checks that the Terraform code can be loaded.
Type the Terraform commands step by step to eventually create a VPC for the two servers. The second
command, ( terraform plan ), is used for planning: Terraform tells us what it will create from that code
before changing anything. The last command is ( terraform apply ); this command runs the code and then creates the VPC.
Finally, the VPC is created successfully:
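The Terraform workflow above can be sketched as follows (assuming Terraform is installed and main.tf holds the VPC configuration):

```shell
# Initialize the working directory: downloads the provider plugins
# and verifies the configuration can be loaded.
terraform init

# Planning step: show what Terraform will create from main.tf
# without changing anything yet.
terraform plan

# Run the code and actually create the VPC.
terraform apply
```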
JENKINS SERVER:
(Server 1 Jenkins server)
sudo apt update:
Then install Java before Jenkins, because Jenkins is built on Java and requires a Java runtime.
Then go to the Ansible server and install the Ansible package (Server 2: Ansible server).
Then install Docker on Server 2.
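The installation steps above can be sketched as follows (a minimal sketch assuming Ubuntu/Debian servers; the Jenkins package comes from the official Jenkins apt repository, which must be added first per the Jenkins docs):

```shell
# Server 1: install Java (required by Jenkins), then Jenkins.
sudo apt update
sudo apt install -y openjdk-17-jre   # Jenkins needs a Java runtime
sudo apt install -y jenkins          # assumes the Jenkins apt repo is configured

# Server 2: install Ansible and Docker.
sudo apt update
sudo apt install -y ansible docker.io
```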
Next step: copy the Jenkins server's public IP address and paste it into the browser (Jenkins listens on port
8080 by default). The Jenkins setup page opens and shows the path of the initial admin password file; copy
the password from that path.
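The path shown on the setup page is the initial admin password file, which can be read with:

```shell
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
```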
Then install the suggested default plugins and create a username and password.
Install these plugins: SSH Agent, Pipeline Stage View, Docker Pipeline.
After installing the important plugins, restart Jenkins and log in with the username and password.
A Dockerfile and an index.html file were already created and pushed to GitHub.
GIT:
Git init: this command creates a git repo; without a git repo, those files can never be pushed to GitHub.
Type the git commands step by step:
Git remote add origin: this command connects the local repo to the remote repo.
Git add: moves the files into the staging area, which is temporary storage.
Git commit: moves the files from the staging area into the local repo.
Git push origin main: pushes those files to the remote repo.
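The git steps above can be sketched end to end (placeholder repo name and URL; the final push is commented out because it needs real GitHub credentials):

```shell
# Create the local repo (git init) in a demo directory.
mkdir demo-repo
git -C demo-repo init
git -C demo-repo config user.email "demo@example.com"  # identity needed to commit
git -C demo-repo config user.name "Demo User"

# Create a file, stage it (temporary storage), and commit it to the local repo.
echo "<h1>Hello</h1>" > demo-repo/index.html
git -C demo-repo add index.html
git -C demo-repo commit -m "Add index.html"

# Connect the remote repo (placeholder URL).
git -C demo-repo remote add origin "https://github.com/<user>/<repo>.git"
# git -C demo-repo push origin main   # pushes the commit to the remote repo
```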
Then go back to the Jenkins page and create a new job.
Select Pipeline, click OK, then choose a build trigger.
Then click Pipeline Syntax and choose git: Git. Copy the GitHub repo link, paste it into the Repository URL
field, and select the branch. Finally, click Generate Pipeline Script; a script is created automatically.
Copy that script, go back to the script page, and paste it in.
This script is used to pull the files from the GitHub repo.
Then click Apply and Save, then click Build Now; the files are pulled automatically from the GitHub repo.
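The generated pull stage typically looks like the following sketch inside a declarative pipeline (placeholder repo URL and branch):

```groovy
pipeline {
    agent any
    stages {
        stage('Pull from GitHub') {
            steps {
                // Generated by Pipeline Syntax > git: Git
                git branch: 'main', url: 'https://github.com/<user>/<repo>.git'
            }
        }
    }
}
```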
Kubernetes:
Master machine:
Slave machine:
Then copy and paste kubeadm token:
Slave:
Then copy the join line printed by the master (the kubeadm join command) and paste it on the slave server.
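The cluster-join step can be sketched as follows (assuming kubeadm is installed on both machines; the token and hash are taken from the master's kubeadm init output, not invented here):

```shell
# On the master: initialize the cluster; the output ends with a join command.
sudo kubeadm init

# On the slave: paste the join line printed by the master, e.g.
sudo kubeadm join <master-private-ip>:6443 \
    --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```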
Then go back to the Jenkins page, click Pipeline Syntax, and choose sh: Shell Script.
Regenerate the script, click Apply and Save, then run the job.
Then check the Ansible server to confirm those files were transferred.
Ansible server:
Master server:
Go back to the stage script page. Create a script that pushes the image to Docker Hub, and log in to Docker
Hub on the Ansible server before clicking Apply.
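The Docker Hub stage can be sketched as follows (placeholder account and image names; assumes Docker is installed and the Dockerfile from the repo is present):

```shell
docker login -u <dockerhub-user>                 # log in to Docker Hub first
docker build -t <dockerhub-user>/myapp:latest .  # build the image from the Dockerfile
docker push <dockerhub-user>/myapp:latest        # push the image to Docker Hub
```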
The next step is to write the YAML scripts: three YAML files in total. The first YAML file is the Ansible
playbook, and it is used to run the other two YAML files.
Deployment.yaml:
Service.yaml:
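Minimal sketches of the two Kubernetes files (placeholder names; the image refers to the Docker Hub repo pushed earlier):

```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: <dockerhub-user>/myapp:latest
          ports:
            - containerPort: 80
---
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 80
```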
Push all the YAML files to the server, then change some configuration within the Ansible server.
After finishing the configuration of those files, create an SSH key and copy the master server's private IP
address into the Ansible server, because if the servers cannot connect to each other, the YAML files will
never run properly. That is why the private IP must be copied into the Ansible server.
Push all three YAML files to GitHub; this automatically triggers the Jenkins script. After that, the three
edited files are stored on the Ansible server and the master machine.
Finally, run the last stage script. This script runs the Ansible playbook (the ansible.yml file).
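The playbook that runs both Kubernetes files can be sketched as follows (placeholder host group and file paths), and it is executed with ansible-playbook ansible.yml:

```yaml
# ansible.yml
- hosts: kubernetes-master
  tasks:
    - name: Apply the deployment
      command: kubectl apply -f /path/to/deployment.yaml
    - name: Apply the service
      command: kubectl apply -f /path/to/service.yaml
```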
The next step is the Prometheus installation. Prometheus is used for metrics monitoring, and it is a helpful
tool because sometimes we don't know how much memory a server is using or how much network traffic
there is. It is mostly used for CPU utilization: if we don't know the CPU condition, it becomes a big
problem, because we won't know when the server goes down. That is why monitoring is important.
Successfully install the Prometheus tool on the server, then extract the Prometheus archive and run Prometheus.
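The download, extract, and run steps can be sketched as follows (placeholder version number; check the Prometheus releases page for the current one):

```shell
wget https://github.com/prometheus/prometheus/releases/download/v<version>/prometheus-<version>.linux-amd64.tar.gz
tar xvf prometheus-<version>.linux-amd64.tar.gz    # extract the archive
cd prometheus-<version>.linux-amd64
./prometheus --config.file=prometheus.yml          # run Prometheus
```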
The next installation is node_exporter. The node exporter's job is collecting the slave server's metrics and
exposing them so the master server's Prometheus can scrape them. That is the node exporter concept.
The next process is connecting the master server's Prometheus to the slave server's node exporter.
Set the job name and targets in prometheus.yml:
:wq!
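The scrape job added to prometheus.yml can be sketched as (placeholder slave IP; node_exporter's default port is 9100):

```yaml
scrape_configs:
  - job_name: "node_exporter"
    static_configs:
      - targets: ["<slave-private-ip>:9100"]
```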
Next, type this command: ( ps -ef | grep prometheus )
The kill command is used to kill the process.
Then run Prometheus one more time:
./prometheus --config.file=prometheus.yml
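The find-and-kill pattern used above can be demonstrated with a placeholder background process (sleep stands in for Prometheus so the sketch runs anywhere):

```shell
# Start a placeholder long-running process in the background.
sleep 300 &
BG_PID=$!

# Find the process, as done above with: ps -ef | grep prometheus
# (the [s] trick stops grep from matching its own command line).
ps -ef | grep "[s]leep 300"

# kill is used for process kill.
kill "$BG_PID"
wait "$BG_PID" 2>/dev/null || true   # reap the child so it is fully gone

# Confirm it stopped before restarting the real service.
kill -0 "$BG_PID" 2>/dev/null && echo "still running" || echo "stopped"
```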
Finally, Prometheus and the node exporter are successfully connected together.
Then the slave server's metrics can be checked from the master server, because both tools are already connected.
THE END