Fundamentals of DevOps

Welcome, everyone, to the Star Agile YouTube channel. Today we are going to discuss DevOps, which is essentially the automation of the software development life cycle. What exactly is the software development life cycle? What exactly does DevOps do? How will DevOps help you in your career? All of these things I will be discussing with you in this training. We are also going to do a lot of practicals, and we will look at the DevOps tools in detail.

Now, when I say DevOps, DevOps is automation: it involves automation and scripting. For some tools we need to write basic scripts, and these automate the complete process of your software development life cycle. DevOps also includes the automation of infrastructure via certain tools.

But what exactly is the definition of DevOps? DevOps is a set of practices that combines software development (the development team) and IT operations, to shorten the system development life cycle and to provide continuous delivery.

Earlier, whenever we developed a project, it went through various life-cycle stages. For example, say you are a developer working on the login functionality of an e-commerce website. That functionality involves a lot of steps. First, the developer develops the code; that is mostly the development side. After developing the code, the developer submits it for the build. In the build, the project is compiled: we convert the code from a human-readable language into a machine-readable language, and we also convert it into a package, a single executable file. For example, when you are working on a Java-based project, you need to convert the code into a WAR file, which is a directly executable file. Then we need to do some testing, to check whether the functionality is working properly, and that testing used to be done manually, by people. After that there is quality assurance, where people would check the quality of the code: unwanted comments would be removed, the code would be checked for bugs, and so the overall quality of the code was verified. Then, finally, the code was deployed. All of these processes used to be done by people.

So there were separate build teams, separate testing teams, separate QA teams, and separate packaging teams, and when each of these teams does its task, delays happen. Assume the build team takes five days to build that login functionality. An error comes up in the build, so it goes back to the first stage, the developer, with a report that there is an error. The developer takes some time, it goes to the build again, another five days are taken, then it goes to testing, an error turns up in testing, and by now around 15 or 20 days have passed while my client has still not received anything. This process keeps repeating: sometimes a project gets stuck in QA, sometimes it gets stuck in development, and since it is an iterative process the client keeps asking for changes. For this reason we ultimately cannot rely on manual processes; we need to move to automation and automate all of these steps. This is the point where DevOps comes into the picture: we automate the whole process of development and deployment.

Now let's say we need to deploy to the client machine whatever we have shown the client. Say the client looks at the login functionality and says, fine, I am happy with your code, let's deploy it. When you start deploying the code to the client machine, issues can come up: the code works on your machine but not on the client machine, perhaps because of dependencies. Disputes sometimes happen for exactly this reason, so I need to automate these processes and ensure the deployment is seamless; I can automate the end-to-end process as well. Rather than having a build team, I can have software like Maven or Gradle working in its place. Rather than having testing done manually, I will write the test cases and automate them with the help of Jenkins. Rather than doing quality assurance manually, I can use software like SonarQube, along with tools like Checkstyle, PMD, and FindBugs. And finally, to ease my deployment, I can put the application in a container via Docker, which provides a very stable, constant environment: whether the container runs on your machine or the client machine, the container remains the same, so I don't have to worry about deployment issues. Kubernetes, a container orchestration tool, I can use to manage heavy workloads with lots of containers. Later on I can monitor everything with Prometheus and Grafana, and I can automate infrastructure creation with Terraform. This automation of each and every step is basically what DevOps is.
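To make that mapping concrete, here is a minimal command sketch of those stages; the project layout and image tag are assumptions, not part of the training:

    # build: Maven (or Gradle) replaces the manual build team
    mvn -B clean package
    # test: automated test cases replace manual testing (run by Jenkins in practice)
    mvn -B test
    # quality assurance: SonarQube's scanner replaces manual code review
    sonar-scanner
    # deployment: a Docker container gives the same environment on every machine
    docker build -t login-service:1.0 .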

If I repeat it once more: DevOps, short for development and operations, is a set of practices that aims to enhance collaboration and communication between software development (Dev) and IT operations (Ops) teams. The primary goal of DevOps is to improve the efficiency and the quality of software development and delivery. DevOps fosters a culture of collaboration between developer and operations teams, breaking down the traditional silos to improve efficiency and innovation.

With the help of DevOps there is collaboration between developers and the operations teams. Earlier, the developers worked on completely separate systems, in completely separate environments, and the operations teams sat in a completely different space; but now, with DevOps, there is full collaboration between the developers, say in company X, and the operations side, the client teams and client machines. DevOps heavily emphasizes automation and monitoring at all steps of software construction. As I have been repeating, with DevOps we can automate, and we can monitor, each and every step of software creation: integration, testing, release and deployment, infrastructure management, everything. Central to DevOps is CI/CD, integrating code changes more frequently and more reliably.

Now, in DevOps we have a software development life cycle in which the developer submits the code. Say the developer is working on the login functionality: the developer submits the code for the build, then it goes for testing, then QA, and finally deployment. This sequence is known as the pipeline, and we automate this complete pipeline. We call it continuous integration when we build the code, test the code, and do quality assurance automatically, and then we finally deliver or deploy the code. The difference between delivering the code and deploying the code is that delivery happens within your own environment, whereas deployment puts the code onto the client machine. So I can automate continuous integration, where the code is continuously integrated and continuously delivered.
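As a rough sketch of that distinction (the host names and artifact path here are placeholders, not from the training):

    # continuous delivery: the pipeline stops once a tested artifact is staged on your side
    scp target/login.war staging-host:/opt/app/
    # continuous deployment: the same artifact goes on to the client/production machine automatically
    scp target/login.war prod-host:/opt/app/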

Implementing DevOps leads to faster delivery of features, more stable operating environments, improved communication and collaboration, and more time to innovate rather than fix and maintain. With DevOps I get much faster deliveries, because earlier these steps were done manually and now my build happens automatically: whether my developer is working at 2 a.m. or in the afternoon, I don't have to worry, because I no longer depend on other team members; it is all automated. I get a more stable operating environment: when a person does the work, there is always the possibility of variation in the environment, but with DevOps I stay pinned to a specific, consistent environment. Then we have improved communication and collaboration: teamwork improves, and we use a version control system in which team members can work together and save the code as versions. And there is more time to innovate rather than fix and maintain: earlier a lot of time was wasted just fixing the code, but now I can work more on the innovation side rather than only fixing and maintaining the code. This is what you get with the help of DevOps.

Now, these are some of the key principles of DevOps. The first is collaboration: DevOps emphasizes collaboration and communication between the development and operations teams, which helps break down the silos and fosters a more integrated and efficient environment. To explain a bit more, collaboration refers to the close, coordinated working relationship between the different teams involved in software development and IT operations. Traditionally, development and operations teams worked in separate silos with limited communication; DevOps has broken down these barriers and fosters collaboration in a few different ways. There are cross-functional teams: DevOps encourages the formation of cross-functional teams that include members from development, operations, and sometimes other relevant areas, and this diversity of skills and expertise ensures that the various aspects of software development and delivery are considered throughout the entire process. Then there are communication channels: DevOps emphasizes open and transparent communication, and teams use shared channels, for example chat platforms, video conferencing, and collaborative documents, to discuss project updates, issues, and solutions in real time.

Then there is automation. Before the introduction of the DevOps tools, everything was done manually; with these tools the complete manual process has been automated, for increased efficiency, speed, and quality. Then comes continuous delivery. With continuous delivery we are delivering continuously: for example, if I am working on my project in small batches, I can continuously deliver those batches, so that I get fast feedback (the client reviews coming in) and can continuously improve the project.

Then we have monitoring, which means monitoring the system. Assume you have delivered the project to a client, but after delivering it you realize the project is not working efficiently; there is an issue with it. You want to monitor the software quickly and efficiently, and for this we use monitoring tools; there are multiple monitoring tools available.

Now, the importance of scripting and automation. Scripting and automation are critical enablers of continuous integration, delivery, and deployment in DevOps, through which organizations can release small code changes frequently and reliably. By scripting repetitive tasks and integrating automated testing, value is delivered to clients faster. What companies do is release small pieces: say they are working on the login functionality, they divide it into multiple parts, break it down into small tasks, and perform those tasks individually, so as to do things in a much more efficient way.
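For instance, here is a minimal sketch of scripting one such small, repetitive task (the paths and host name are hypothetical):

    #!/bin/bash
    # Package the latest code and ship it to the test server on every run,
    # instead of a person repeating these steps by hand.
    set -e                                    # stop at the first error
    git pull origin master                    # fetch the latest code
    mvn -B clean package                      # build and package it
    scp target/app.war test-host:/opt/app/    # deliver the artifact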

These are some of the popular scripting languages used in DevOps. The first is Python, a very famous high-level programming language used for scripting and automation. We have Ruby, an object-oriented scripting language well suited to automation and web development. We have Perl, a general-purpose programming language that is also suitable for text processing and automation tasks. We have Bash scripting: Bash is a widely used Linux shell and command language used for automation and scripting. And we have PowerShell, a task-based command-line shell and scripting language built on .NET, used for automation. These are very popular scripting languages; you are not required to know any one of them in depth, that is absolutely fine, but at least a basic idea of some coding language is an added advantage here.

When I talk about the automation tools, first we have Ansible. Ansible is a configuration management tool that uses SSH and YAML playbooks. It is an open-source automation tool that simplifies configuration management, application deployment, and task automation. It uses a declarative language to describe system configuration, making it easier for users to define what state they want their systems to be in. With Ansible you can automate repetitive tasks, manage multiple servers efficiently, and ensure consistency across your infrastructure, and it is widely used in IT operations and DevOps for streamlining and orchestrating workflows.
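A minimal sketch of what that looks like in practice (the inventory file "hosts" and the group name "webservers" are assumptions):

    # Ad-hoc: verify SSH connectivity to all managed servers.
    ansible all -i hosts -m ping

    # A small declarative YAML playbook: the desired state, not the steps.
    cat > site.yml <<'EOF'
    - hosts: webservers
      become: true
      tasks:
        - name: Install nginx
          apt:
            name: nginx
            state: present
        - name: Ensure nginx is running
          service:
            name: nginx
            state: started
    EOF

    # Apply the state over SSH; rerunning is safe because tasks are idempotent.
    ansible-playbook -i hosts site.yml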

Then we have Puppet. Puppet is a configuration management tool that uses a master-agent architecture and a domain-specific language. It helps automate the provisioning and management of servers and ensures consistency across systems. With Puppet you define the desired state of your infrastructure using a declarative language; Puppet then automatically enforces and maintains that state, making it easier to manage large-scale environments. Puppet is widely used in DevOps practices to streamline the deployment, configuration, and ongoing maintenance of software and systems.
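As a minimal sketch of that declarative style (the manifest below is a hypothetical example, applied locally):

    # Describe the desired state in Puppet's domain-specific language...
    cat > web.pp <<'EOF'
    package { 'nginx':
      ensure => installed,
    }
    service { 'nginx':
      ensure  => running,
      require => Package['nginx'],
    }
    EOF
    # ...and let Puppet enforce and maintain it.
    puppet apply web.pp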

Then we have Chef. What exactly is Chef? It is again a configuration management tool, one of the most powerful ones used in IT and DevOps. It allows you to automate the setup, configuration, and management of your infrastructure. With Chef you write code in a domain-specific language to define how each component of your system should be configured; this code is organized into recipes and cookbooks, which helps in maintaining consistent, reproducible environments. Chef is valuable for large-scale deployments, ensuring that systems are configured according to your specifications and making it easier to manage a complex architecture efficiently.


Jenkins, we can say, is the heart of DevOps. Jenkins is a widely used open-source automation server for continuous integration and continuous delivery (CI/CD) in software development, and it helps streamline the development and deployment processes. With Jenkins you can automate building, testing, and deploying the code, making it easier to catch errors very early and to release software very frequently. It supports integration with various tools and plugins, allowing customization and flexibility in your CI/CD pipeline. Jenkins, I can say, is a fundamental tool in DevOps practices, fostering collaboration and automation across the software development life cycle.
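Written out by hand, the steps a typical Jenkins job automates look roughly like this (the repository URL and the Maven project are assumptions):

    git clone https://github.com/example/shop.git && cd shop   # triggered on every code change
    mvn -B clean package   # build: compile and package the artifact
    mvn -B test            # test: any failure fails the build, so errors surface early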

Then we have Bamboo. Bamboo is another CI/CD (continuous integration and continuous delivery) tool, similar to Jenkins, developed by Atlassian, the same company behind Jira and Bitbucket. Bamboo helps automate the building, testing, and deployment of software, ensuring smooth and efficient development workflows. Like Jenkins, Bamboo supports integration with various tools and offers features like parallel builds, pre-build plans, and deployment projects. It is commonly used in conjunction with other tools to provide a comprehensive solution for managing the software development life cycle.

Now we come to one of the common use cases, which is provisioning environments. Imagine you are working on a project and you need a new server environment to develop, test, or deploy your application. Say you are a software developer working on a web application: when you start a new project you need a development environment where you can write, test, and debug your code. Instead of manually setting up servers, installing dependencies, and configuring the environment, you can use a provisioning tool like Ansible, Puppet, or Chef, with which we can define the infrastructure as code, known as IaC. In IaC we write code that describes the desired state of the environment; this includes specifying the operating system, the software packages, the configurations, and all the other settings required to run our application. We can then run the provisioning scripts: we use our chosen provisioning tool to execute that code and automatically set up the environment, which ensures consistency across different development and testing environments.
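A minimal sketch of that IaC workflow, using Terraform (the AMI id and region are placeholders):

    # Describe the desired environment as code...
    cat > main.tf <<'EOF'
    provider "aws" {
      region = "us-east-1"
    }
    resource "aws_instance" "dev" {
      ami           = "ami-12345678"   # placeholder image id
      instance_type = "t2.micro"
    }
    EOF
    # ...then let the tool create it, identically every time.
    terraform init     # download the provider plugins
    terraform plan     # preview what will be created or changed
    terraform apply    # provision the environment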

There is one more thing, which is reproducibility: if another team member joins the project, or if you need to recreate the environment later, you can simply rerun the provisioning scripts. This guarantees that everyone is working with the same configuration, reducing the "it works on my machine" problem. There is also scalability: if your application gains popularity and you need to scale your infrastructure, the provisioning tools make it easy to replicate and expand your environment, which is crucial for handling the increased workload and maintaining performance. In cloud environments we also need to take care of cost efficiency, where we can leverage the provisioning tool to scale resources dynamically on demand: when the load is low you scale down and save cost, and you scale up during peak times.

Then we have deploying applications. Say you are part of a team and you are responsible for deploying a new version of the application. What we have to do is automate the complete pipeline, which will include building, testing, and deploying the application to different environments. For deploying the application we will be using tools like these: Git as the version control system, Jenkins as the CI/CD tool, Ansible for configuration management, AWS or GCP as the cloud platform (there are a lot of other cloud platforms as well), Kubernetes for orchestration, and Docker for containerization.

When somebody asks you what the deployment steps or processes are, it includes around ten major tasks; a condensed command sketch follows the list.

1. Code development: developers work on the application code and use Git for version control, following best practices like feature branching and pull requests.
2. Continuous integration: Jenkins is configured to monitor the VCS for changes, and upon a code change Jenkins triggers an automated build and runs the unit tests.
3. Artifact creation: a successful CI build generates a deployable artifact, for example a JAR file or a Docker image.
4. Infrastructure as code: Ansible scripts are used to define the infrastructure requirements, and Ansible playbooks ensure the necessary server configurations are in place.
5. Containerization: the application is containerized using Docker, providing consistency across environments.
6. Artifact registry: Docker images are stored in a container registry, for example AWS ECR, for version control and easy retrieval during deployments.
7. Continuous deployment: Jenkins deploys the Dockerized application to the staging environment for further testing, and automated integration tests and user acceptance tests are executed there.
8. Approval and manual testing: upon successful tests, the deployment process is paused for manual approval, and QA teams or stakeholders review the application in the staging environment.
9. Production deployment: upon approval, Jenkins triggers the deployment to the production environment, and Kubernetes is used for orchestrating the deployment and managing the containerized application.
10. Monitoring and logging: tools like Prometheus and Grafana are employed for monitoring the application, and logging solutions like ELK collect and analyze the logs for troubleshooting.
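The condensed sketch of that flow (the registry URL, image name, and manifest file are assumptions):

    docker build -t shop:1.0 .                                          # step 5: containerize the artifact
    docker tag  shop:1.0 123456789012.dkr.ecr.us-east-1.amazonaws.com/shop:1.0
    docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/shop:1.0   # step 6: store in the registry (e.g. AWS ECR)
    kubectl apply -f deployment.yaml                                    # step 9: Kubernetes rolls out the new version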

The benefits: automation, since the deployment process is automated from code commit to production, reducing manual errors and speeding up deliveries; consistency, since IaC and containerization ensure consistent environments, preventing "it works on my machine" issues; scalability, since Kubernetes enables easy scaling of the application based on demand; and traceability, since version control and the artifact repository provide traceability for each change made to the application.

Then we have running tests: automating the unit, integration, and end-to-end testing of the application after each commit or deployment. Let's talk about a scenario in which you are part of a DevOps team and responsible for running tests as part of the continuous integration and continuous deployment pipeline for a web application. You are again using all the same tools, but the major piece is automated unit testing, where Jenkins runs the unit tests as part of the CI process, ensuring the individual components of the application work as expected. We can also have end-to-end testing with Selenium, where Selenium scripts are used to perform end-to-end tests simulating user interaction with the web application; these tests ensure that the application functions correctly as a whole. And there is manual testing and user acceptance testing, which we call UAT as well, where the QA teams or stakeholders perform additional manual testing and UAT in the staging environment. These layers of testing help in several ways: early detection of issues, since unit tests catch issues at a very early stage of development; automated regression testing, since the automated tests, including the unit and end-to-end tests, ensure that changes don't introduce regressions; consistency, since automated tests provide consistent and repeatable results; and quality assurance, since the quality gates and the manual testing in the staging environment ensure that only reliable code reaches production.
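A minimal sketch of how those automated layers gate the pipeline (the Maven profile name is an assumption):

    set -e                               # any failing stage stops the pipeline
    mvn -B test                          # unit tests, on every commit
    mvn -B verify -P integration-tests   # integration/end-to-end suite, run against staging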

Then we have managing infrastructure, which involves efficiently handling the configuration, deployment, and maintenance of the underlying systems that support our application. Let's talk about a case study: say we are part of a DevOps team responsible for managing the infrastructure of a web application. The tools and technologies I would prefer here (it is not mandated to use the same ones) are AWS as the cloud platform, Terraform for infrastructure as code, Ansible for configuration management, and Git for the version control system. In managing infrastructure, first of all, if I am part of that team, I will work on the infrastructure design, where I will define the architecture and requirements of our infrastructure, considering scalability, reliability, and security, and we will use tools like AWS CloudFormation or Terraform to design the infrastructure as code. We have version control, where we store the IaC scripts in a version control system like Git, to track changes, collaborate with the team, and roll back to a previous configuration whenever required. There is provisioning with Terraform, where we write Terraform scripts to declare the desired state of our infrastructure and execute the Terraform commands to provision and manage the resources on AWS based on those scripts. Configuration management with Ansible also comes in, where we use Ansible playbooks to configure and manage the software on our servers; Ansible ensures consistency across different environments and automates the routine tasks for me. We will also be using continuous integration for IaC, where we integrate our IaC scripts into the CI/CD pipeline and run automated tests on our infrastructure code to catch errors early (a sketch follows this section). There is scalability and elasticity, where we leverage the cloud-native features and adjust resources dynamically based on demand, using auto-scaling groups or similar features. And we have security measures, where we implement security best practices within our IaC scripts and regularly review and update the security configurations to address potential vulnerabilities. This is what we will be doing in managing infrastructure.
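The sketch mentioned above, for testing infrastructure code early in CI (the inventory and playbook names are assumptions):

    terraform fmt -check                           # formatting drift fails the build
    terraform validate                             # catch configuration errors before provisioning
    ansible-playbook -i hosts site.yml --check     # dry-run the configuration changes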

Now, on adopting automation. We need to identify the automation opportunities: we analyze the current processes, whatever they are, and identify the areas that can be automated, like deployments, testing, and infrastructure provisioning. We select the automation tools: we research and select the appropriate tools, like Ansible, Terraform, or Jenkins, based on the technology stack; it depends on us which ones we want to use. We can start small, beginning by automating a small part of a process as a pilot, to demonstrate value before expanding the scope. We develop the scripts and the workflows, writing the scripts and building the automation workflows using the chosen tools to automate the identified tasks and processes, and we integrate the automation workflows with the continuous integration and continuous deployment pipelines themselves. Finally, we work on monitoring and improvement, where we continuously monitor, gather feedback, and improve the automation scripts and processes.

Now, what are the challenges we face? Adopting automation obviously has a lot of benefits, but it requires overcoming some key challenges. For example, there is choosing a tool: picking the right tool is really important when adopting automation; if you use the wrong tool it can cost your team a lot of effort, and much of your team's work would be lost because of it, so choosing the right tool really matters here. Then we have training: training and enabling the developers and the operations teams is really important for successful adoption; in DevOps, working in collaboration, and teaching people to work in collaboration, itself needs to be taught as well. And there is legacy system integration: integrating this automation into legacy systems requires effort to adapt and to bridge the old and the new systems.

When I talk about the benefits of automation, it comes with improved efficiency, increased reliability, enhanced scalability, and better developer productivity. In the dynamic landscape of technology and business operations, automation has emerged as a cornerstone for organizations aiming to streamline processes, boost productivity, and stay competitive.

First of all, improved efficiency: automation stands as a linchpin for driving efficiency gains across various domains. By mechanizing repetitive and time-consuming tasks, organizations unlock the potential for accelerated processes. Automated tasks adhere rigorously to predefined rules, reducing the likelihood of errors associated with manual intervention; this consistency is pivotal for maintaining high quality standards in the output deliverables. Efficient resource utilization is another area where automation shines: automated systems can dynamically adjust resource allocation based on demand, optimizing the utilization of servers, cloud resources, and other infrastructure components. This not only leads to cost savings but also ensures that resources are available when and where they are needed.

Then there is increased reliability. Reliability is a critical aspect of any operating environment, and automation introduces a level of predictability and precision that is a challenge to achieve consistently with manual processes. It comes with error reduction: human errors are inevitable but can be costly, and automation minimizes the risk of errors associated with manual intervention, ensuring that tasks are executed exactly as defined in the automation scripts; this reduction in errors leads to increased reliability and a higher level of confidence in the overall system. We also get standardized procedures: automation enforces standardized procedures by codifying best practices into scripts or workflows, which ensures that every iteration of a task follows the same set of rules, reducing the variations that might introduce vulnerabilities or inconsistencies into the system; this standardization enhances reliability by providing a stable and uniform operating environment. It also comes with improved compliance: in industries with stringent regulatory requirements, automation plays a pivotal role in ensuring compliance, since automated processes can be designed to adhere strictly to regulatory standards and to generate auditable logs and reports, which not only simplifies the compliance verification process but also reduces the risk of non-compliance due to human oversight.

We have enhanced scalability as well. The ability to increase the overall scalability of our systems, in terms of infrastructure and operations, is one of the fundamental requirements for organizations seeking to accommodate growth and respond to fluctuating demands, and automation provides the agility needed to scale efficiently. This comes with dynamic resource provisioning: in cloud computing, automation facilitates dynamic provisioning, and systems can automatically scale up or down based on demand. We also have infrastructure as code supporting the same goal. Then there is better developer productivity: automation is a catalyst for enhancing the productivity of development teams, empowering them to focus on creativity, innovation, and high-value tasks.

Now we are going to discuss a very important topic, which is the version control system; the version control system is one of the most important topics in DevOps. Just before understanding what exactly a version control system is, let's understand it with a simple example. Assume a situation in which you are working for a particular client and you are developing a homepage for that client. The client keeps asking for regular iterations, changes to the code, so you keep making the changes as per the client's needs and requirements. By the end you have completely changed the whole design, and now the client wants you to roll back to the first design, saying that the first design you shared was amazing, kindly go back to it. But the problem is that every time the client asked for changes, you made them in the same piece of code: you were not saving the code anywhere as a version, you were just rewriting it. In that case, rolling back to the previous version is really very difficult, because I don't have those states of the code saved as versions anywhere. The solution would have been to save the code as a version at every step, before sharing it with the client: the first homepage I developed, I would have saved as a version and then shared with the client; when the client asked for the next change, I would have saved that as v2 and then shared it. What I am doing, then, is saving all my code as versions and managing those versions with the help of a version control tool.

In this topic we are going to discuss Git and GitHub, which are two very major version control tools. When I talk about Git: Git is a local version control system, that is, software that saves the code locally, as versions, on your laptop. It is a version control system that allows tracking of file changes: whatever changes are made in a file, it will track them.

You can work in collaboration, where multiple developers can work with you. It has local operations: it works locally on your laptop. We can also create branches: say there are multiple developers working and we need an isolated setup in which I and my developers can each work separately, in complete isolation; we can create branches to isolate the work and merge the changes through pull requests.

Then there is the Git history and timeline. Git was started in 2005 by Linus Torvalds as the version control system for Linux kernel development. In 2008 GitHub launched, hosting Git repositories for open-source projects. In 2014 Microsoft started using Git for Windows development, by 2016 Git reached 12 million users worldwide, which is a huge number, and by 2021 GitHub reached 73 million developers worldwide. Right now GitHub is owned by Microsoft, and Microsoft completely manages GitHub.

Now, on automating Git: automation tools like GitLab CI can automate actions like running tests on new developments; Git hooks are a concept that allows custom scripts to run when a certain Git event occurs; and Git aliases let you create shortcuts for commonly used Git commands. Minimal sketches of these follow.
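Both aliases and hooks are standard Git features; the test command inside the hook below is an assumption:

    # git aliases: shortcuts for commonly used commands
    git config --global alias.st status     # "git st" now runs "git status"
    git config --global alias.co checkout

    # git hook: a custom script that runs when a certain git event occurs
    cat > .git/hooks/pre-commit <<'EOF'
    #!/bin/sh
    mvn -B -q test || exit 1   # block the commit if the tests fail
    EOF
    chmod +x .git/hooks/pre-commit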

We are going to see all of these in the practicals, so let's move to the practical side. First of all, if you are using Windows, just search for "git for Windows". The first website that opens says git-scm, which is the Git source control manager site. Click on it, and inside you will see "64-bit Git for Windows Setup"; click on that, and it will download the installer for you. So you open the git-scm.com website, choose the 64-bit Git for Windows download, and you will see a file being downloaded. Wait for the file to finish downloading, then open it; this exe file will install Git on your laptop. Say yes, click Next through the screens (no other changes need to be made), and you can simply install Git on your laptop.

Those who are using a Mac as the operating system need to search for "git for Mac"; there you will see the download for Mac operating systems. Open it, and you will see "brew install git": open the Terminal of your Mac and run that command. If you get a message saying brew is not installed, or that it is unable to find brew, click on the Homebrew link; when you click on it you will see a command you can copy to install Homebrew, and after that you can install Git for the Mac operating system. If you are installing on the Ubuntu operating system, there is a simple command, apt install git -y, with which you can install it on Ubuntu. If you are planning to install it on Amazon Linux, the command is yum install git -y.
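The install commands just mentioned, gathered in one place:

    brew install git          # macOS (requires Homebrew)
    sudo apt install git -y   # Ubuntu
    sudo yum install git -y   # Amazon Linux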

It is already installed in my case, so I can simply click on the Finish button; now Git is installed on my laptop. I will now go to the desktop, and after going to the desktop I will create a new directory, a new folder, which will ultimately be the place where I create my files. So let me first create a directory here. I have simply created a directory on my desktop (I paused the screen earlier because there were some things on the desktop). Inside this folder you can simply right-click and create a new folder and go into it; if you are using Ubuntu or another Linux, then mkdir followed by the directory name is the command for the same. Now just right-click here, and when you click on "Show more options" you will see Git Bash. If you are using Windows 10, you will see Git Bash directly on right-click. If you are on Ubuntu you don't have to open Git Bash at all; Git Bash is only available for Windows-based operating systems. So open Git Bash here, and this is how it will look. In your case, if you are installing and working with Git for the first time, there will not be any "master" shown; in my case, since I have been working with Git on a regular basis, you can see a master here.


You will run a simple command known as git init, which initializes Git. What does git init do? It initializes, that is, starts, Git in the directory you have created. It could be any directory, not necessarily one whose name starts with the word "git"; git init will initialize Git in this particular directory. And when you initialize it, what exactly does it do? It creates a directory named .git inside your working directory; a dot in front of a directory name means it is a hidden directory. So I have initialized the directory.
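The whole setup as commands (any directory name works; "git" is just what I used):

    mkdir git && cd git   # create the working directory
    git init              # creates the hidden .git directory where the versions are stored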

Now let's suppose I want to create a file. I can use the vi command, the touch command, or any other command; let's say I am using the cat command with a greater-than sign: cat > file1.txt. This cat command will create a file with the name file1.txt. As of now, when I open the folder, there are no files visible (.git is a hidden directory, so I don't have to worry about it), but when I run cat > file1.txt, it creates the file with, let's say, "hello world" as the content. Now if I go back you will see there is a file with the hello world content, and if you do an ls you will also see that file1.txt has been created.

Now, to see the status of this repository we use git status. What exactly is the status? Has my Git started tracking this file or not? There is a file1.txt; whether Git has started tracking it can be seen from the git status command. When you run git status you will see that the file is untracked, which means that whatever changes I make in file1.txt, my Git will not be tracking them. Suppose earlier the content of file1.txt was "hello world" and later you change it; in that case, even if you do a git status, Git will not be able to identify any changes that were made. Suppose I manually go into this directory of ours and make a change, writing "this is star agile YouTube channel", and save it. I have changed the content, and technically Git should track whatever changes I make in my directory; but you will see, if you again do a git status, it still says your file is untracked and Git is not tracking it. So I will add it to the staging area, which is the tracking area; I will start tracking this file, file1.txt. I will write git add file1.txt, which adds file1.txt to the staging area, the place where my Git will start tracking files. So I created the file, and I start tracking it with the command git add and the file name. When I press Enter and then do a git status, you will see that Git has started tracking the file: earlier it was in red, saying untracked file, but now you will see "new file: file1.txt", which means Git has started tracking this file for you. Now say I want to move this file back, to untrack it: I have been tracking this file but I need to untrack it. The command is right there in front of you: git rm, where rm means remove from the cache, the tracking area, plus the file name, to unstage it; unstage means to put it back to untracked. Let's try it once: git rm --cached file1.txt (I typed it wrong the first time, so again: git rm --cached file1.txt). Now if I run git status again, you will see that Git is not tracking the file anymore; it is again shown in red, which means it is an untracked file. To start tracking it again, we run the git add file1.txt command once more, which will start tracking this file for you.
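Here is that track/untrack cycle in one place:

    git add file1.txt           # start tracking: move the file to the staging area
    git status                  # now shown in green as "new file: file1.txt"
    git rm --cached file1.txt   # stop tracking: back to untracked (red)
    git add file1.txt           # start tracking it again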

When I do a git status now, you will see it in green. Let's see whether my Git is actually tracking this file or not. For that I will go back to my folder and open the file, and in the file I will make a change, say "we are learning git". This is a change I am making in the file, and since my Git was tracking the file, Git will be able to identify that there is new content: it will see that when I did the add, the content said "hello world this is star agile YouTube channel", and after that a line was added that says "we are learning git". So when I open Git Bash and do a git status, you will see that the file is present but a small modification has happened in that same file, file1.txt. Now suppose I need to find out what exactly that modification was. To check it I will run the command git diff (for difference) with file1.txt. If you write just git diff, it will show you the differences for all the files Git is tracking, comparing the staging area with the working directory; but I need to look specifically at file1, so I use git diff with the file name. When I press Enter you will see there are additions and removals in the file: it says this content was already there in my file, and there is new content that was added, the line "we are learning git". So with this command I was able to find exactly what changed in my file1.txt. Otherwise, git status only told me that there was some modification to an existing file; the git diff command tells me the exact difference between the version Git is tracking and the modification that just happened. When you are working in real time, on 10 or 20 different files, this is really important, and you need to make sure this does not become a problem.
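In short:

    echo "we are learning git" >> file1.txt   # append a line to the tracked file
    git status                                # only says the file was modified
    git diff file1.txt                        # shows exactly which lines changed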

Now, when I do a git status you again see there is a modification. I can add this file again, which will update my existing file in the staging area; if I run git add again and do a git status, you will see the modification notice is gone, because Git now assumes that file1 has this content: "hello world this is star agile YouTube channel" and "we are learning git".

Now, about pushing. Ultimately my task will be to connect my Git, which creates the local versions, the versions stored on my laptop, to GitHub, where I will save all the versions of my code in a centralized repository, a repository present over the internet. So I will be pushing to it. Suppose there are multiple people working on the same project: say Akshat is there, Rahul is there, John is there, and one more person, Pawan, is there. All of them are working on the same GitHub repository. They save the code on their laptops via Git (Git is the software we have already installed), they push their code to GitHub, and they also pull code from GitHub onto their local machines. But suppose all of them are working on the code: there are a lot of versions of the code. Whatever Akshat pushes is saved as a version, present on his local machine as well as on GitHub; whatever Pawan pushes is likewise present on his local machine as well as on GitHub. Everything is present on GitHub as well as on their local machines. Now, in this scenario, if you are the administrator, how will you identify that this was pushed by Akshat and that was pushed by Pawan, and who is pushing the code? This is taken care of by the config setup.

So what I am going to do is set the Git config username. I will show you: we will run git config --global user.name with whatever name you want to set, let's say "star agile". Then I will also give the email address with user.email, say an address like youtube@staragile.com. After setting this up, we will write git commit -m and a message, let's say "created the file one". In this case what we are doing is saving the code as a version. Earlier I created the code, I created a file, and I asked Git to start tracking the file with the git add command; now I save the code as a version with the git commit command. With this git commit you will see something like -m: this -m means the message, and this is the message we are putting in. It is not restricted; you can put any message you feel like, and it will help you later on to identify what exact changes you made, or which files are present in that particular version. So I run this command here, and you will see "1 file changed, 1 insertion". Now when you write git log --oneline, it will show you all the versions present in your system; whatever versions you have saved, you can see them all from git log --oneline. You will see "created the file one" (you will not see my earlier commit, because that one I created previously), so right now I have created this particular version of the code and it is saved as a version. When you look at the left side you will see there is a version ID, a unique ID that acts as the identifier for your version. HEAD means the latest version: say you have hundreds of versions, then the latest version you created is the HEAD by default.
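Those identity, commit, and history commands together (the email address is a placeholder):

    git config --global user.name  "star agile"
    git config --global user.email "youtube@staragile.com"
    git commit -m "created the file one"   # -m attaches the message to this version
    git log --oneline                      # one line per version: short ID + message; HEAD marks the latest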

Now we are going to see how we can clone a repository onto our local machine. Suppose there is a repository present over the internet and I want to clone it onto my local machine, which means I need to download all the commits and files present in that repository. How can I do it? First of all I will copy the repository link, then go to the folder into which I want to copy the repository. Suppose I go there, click on New, and create a folder named, let's say, Books; I open the Books folder, right-click, open the Git Bash terminal again, and write git clone and paste that link (you can use Shift+Insert to paste). You will see this folder that is present over the internet start copying onto my local machine as well. That is one of the ways; there is one more way. You can open the link in your browser, making sure you are logged in to your GitHub account, and click on Fork (since this is my own repository I cannot fork it, but you can). What happens is that, say your GitHub user ID is John1234, the repository is copied into your GitHub account: if I don't need to download it locally, I can save, or fork, the repository into my GitHub account itself. Now let's go ahead and check whether the cloning finished; not yet, this is a slightly bigger repository, you can see it is still going, 93%, 95% complete. Let it complete, and then all the files present in that GitHub repository will be copied to your folder. If I open it now you will see the Books repository; all the books are present right here. So this is one of the ways in which you can work and ensure that.

now, these are some of the very useful commands which we use with git and GitHub on a regular basis. we have already seen git init, which initializes git for me; git clone, which clones a repository into a new directory; git add, which adds files to the staging area; git commit, which saves the code as a commit, that is, as a version, on my machine; git push, to push the code onto GitHub; git pull, to update my local repository, so if some file or content is updated on GitHub I can use the pull command to bring those files to my local machine; and git branch, which will list, create, or delete branches for you, as summarized below.
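a compact sketch of those everyday commands:

    git init                    # initialize git in the current folder
    git clone <url>             # clone a repository into a new directory
    git add <file>              # add files to the staging area
    git commit -m "message"     # save the staged code as a version
    git push                    # push your commits to GitHub
    git pull                    # pull updated files from GitHub
    git branch                  # list branches
    git branch <name>           # create a branch
    git branch -d <name>        # delete a branch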

now let's see. I will move back and delete the repository which I just now cloned, and I will go back to my git

repository. here what I'm going to do is run the command git branch. git branch gives you the list of all the branches present; in our case we have a master branch present right now. to create a new branch I will run the command git branch followed by a branch name; I'm creating a branch with the name branch-one. now, again, if I run the git branch command you will see I have master and branch-one, and the star marks the current branch. now, to check out branch-one I will run git checkout branch-one, which will move me from the master branch to branch-one.
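a minimal sketch of the branch commands used here:

    git branch                # list branches; the * marks the current branch
    git branch branch-one     # create a new branch named branch-one
    git checkout branch-one   # move from master to branch-one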

now let's suppose there are 10 developers working in a particular repository, and everyone is working on the master branch or any single branch, just pulling and pushing code on a very regular basis; it might create a lot of conflicts. to avoid those conflicts I will give separate branches to my developers so that they can work and avoid the conflicts that keep coming up. so

in our case I have created a branch with the name branch-one, and now whatever files I create will be created in an isolated environment. so if I create a file with the name star agile, it will not be created in the master branch; it stays isolated. similarly, as the 10 developers work and create their files, everything stays completely isolated from what the other developers are doing. so in our case let me just create a new file, say file-in-branch-one.txt, a simple file. I will add it to the staging area and commit it with the message file branch one created, and press enter. after pressing enter I will push my code with git push -u origin branch-one: rather than master I am giving branch-one, because I want to push my code from branch-one itself. when I press enter you will see the branch is created and the code is pushed as well.
so to check it out I will be going back

to my repository which is this

repository and you will see when I

refresh it there is a master Branch see

that file is not present up the file

which we created just

now is not being present up see file

Branch one is not present here but if

you go to this Branch one you will see

that there is a file with the name

Branch one is being present

here. but now, how will you merge this code, whatever you have done in branch-one, into the master branch? for that you will generate a pull request. when you as a developer confirm that whatever you have done is finalized and completed, you click compare and pull request, give it whatever title you need, say pull request for file branch one, and if there are team leaders you want to review your code, you can click on reviewers and add your team reviewers there. as of now I'm not adding any, and I will create the pull request. now the pull request is created, and you will see there is an option to merge it. if you have set a reviewer, that reviewer would be the one accepting your pull request, but since I'm the owner of this repository, I will click on merge. when you click on merge, finally you will see the code being merged

and when you come back, you will see that in the master branch also there is now a file with the name file-branch-1.txt. in this way you can generate a pull request and work with branches.

now, git checkout, as we have already seen, moves you to a specific branch. git merge is a command to join two or more development histories together; for that we use the git merge command itself:
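a sketch of the same merge done from the command line, which is what the pull request's merge button effectively does for you:

    git checkout master    # move to the branch you want to merge into
    git merge branch-one   # join branch-one's history into master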

so now we are going to

see what exactly continuous integration is. in the case of continuous integration,

whenever the developer puts the code to GitHub, it is picked up by Jenkins for the next process, which is the build, wherein the code is compiled and converted from a user-readable language to a machine-readable language, which is zeros and ones. after it is picked up, it goes for testing, where the test cases are executed to ensure that whatever code is written, whatever functionality is developed, meets the criteria given by the client. after the testing is done, the quality assurance is done. what is quality assurance? we check the quality of the code for potential bugs and code smells; here we integrate Jenkins, which is an automation tool, with SonarQube or PMD or Checkstyle to check the quality of the code. and finally we deploy the code to any of the clouds, be it AWS, Azure, or GCP. here, for easy deployments, we use Docker, which is a containerization tool, and Kubernetes, which is a container management tool; we will also be using Prometheus and Grafana for monitoring purposes. so this complete process,
wherein the developer puts the code to

GitHub and from where from G Hub to the

deployment from the deployment to the

monitoring can be completely created in

a particular Pipeline and that pipeline

we call it a cicd pipeline now this cicd

comprises of two major steps first is CI

which is a continuous

integration now continuous

integration improves the software

quality and and

reduces the risk by building testing and

merging the changes

frequently wherein we

automate wherein we automate the build

and the test with the CI where we set up

an automation to build and the test the

code changes

frequently there's a frequent

Integrations which is there wherein the

we merging of the developer code changes


multiple times a day it is being

possible there's a rapid feedback

wherein the uh it includes providing a

feedback on the changes to the

developing developers within a

particular hours now when I say the key

concepts of CI, continuous integration, they include maintaining a code repository, where we use version control like git to track the code and the changes; automating the build, where we set up the CI/CD pipeline to build the code automatically; and making the build self-testing, which includes automated tests in the pipeline to validate the build. now we are going to use Jenkins to

create this complete CI/CD pipeline, so first of all we are going to discuss some concepts of Jenkins; when we are aware of the concepts, we will be able to correlate the CI/CD concepts easily with Jenkins itself. for the same, I will go to the Jenkins which is already installed on one of the EC2 machines I previously created. the first thing you will notice is that it's running on port number 8080: Jenkins by default works on port 8080, but if this port is utilized by some other application or activity, you can use a different port as well; it is not mandatory to use 8080, it is just what you get by default.
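as a side note, a minimal sketch of overriding that default, assuming you start Jenkins from the WAR file (on a package-based install you would change the port in the Jenkins service configuration instead):

    java -jar jenkins.war --httpPort=9090   # run Jenkins on 9090 instead of 8080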

now, when you click on this dashboard, you will go back to the homepage, which is the screen in front of you.

when you click on this plus new item, it lets you create your own job or project; whether you want to create a pipeline or execute some specific project, you can get it by clicking on this plus new item. then there is People: in a particular project there is not always a single person working, you will see multiple people working on a specific project; those people can be given specific access to Jenkins and added under People in Jenkins itself, where I can see the complete list and also add people. Build History: whenever we execute any job, item, or project in Jenkins, we call it a build, and the history of all the jobs which have been executed can be seen from the Build History tab. I can manage the settings of Jenkins by clicking on Manage Jenkins; the complete settings of Jenkins, which include changing the global settings, making security changes, adding users, and installing plugins, can all be managed with the help of Manage Jenkins
the help of manage genkins

itself so we will start with creating a

new item wherein we will just see how a

particular item is being created up for

the same I will be clicking on this plus

new item now when you click on this new

item it will you will land up on this

page wherein it will be asking you for a

item name now here I can put up an item

of any any particular name for example

I'm putting an item with uh with the

name my first item my first job I can

say anything you can select it up here

now after that at the bottom you will


see these are the various job types you

can create or I can say the item types I

can create so the first one is the

freestyle project. in a freestyle project, Jenkins will build your project combining any source code management with any build system, and you can use it with any other software you want to integrate. freestyle is really one of the most popular kinds of jobs, or items, and you will frequently use it on a day-to-day basis to perform activities; we will start with the freestyle project first. then you

will see there's a pipeline. you use a pipeline whenever you want to create a complete CI/CD pipeline, wherein you first put the code to GitHub, which is then built, which includes compiling the code, and, if it's a Java-based project, also converting that code to a single executable format, the jar or war file. after this build is performed, you do the testing, wherein the test cases already written by the test engineers are executed. after this is done, we do quality assurance, wherein we check the quality of the code. and finally we deploy the code, or we can deliver the code as well; the difference between deployment and delivery is that in the case of delivery we upload the code to our testing or development servers, whereas when we deploy the code, we deploy to the production servers. now, I can create this complete pipeline by writing a Groovy script, which we are going to see ahead; if you select it, therein you will be writing the code
to get your task done then it's a

multi-configuration projects so whenever

you are using a multi-configuration

projects these are majorly suitable for

the projects that need some large number

of different

configurations for example Tes in on

multiple environments some platform

specific builds and so on so for example

you have three teams working up let's

say you have a development team you have

a testing team you have a production

team now they're working on a different

configurations Al together in those


cases you can use a multi configuration

project you can create a folder as well

which will create a container that

contains the that stores nested items in

it in that case you can use a folder you

can even use a multibranch pipeline

which creates a set of pipeline projects

according to the detected branches in

one source code management repository

for now I will be first starting with

creating a freest style project and I

will be clicking on okay

button now when I click on this okay

button you will see this this is my job

and there is a configuration like a

settings of this particular job can be

seen from here itself. the first thing is the general settings, which you can manage from the General tab. if you want to define the source code management, by default you get git, but if you're using any other source code management, you can go to the plugins and install that tool as well; in our training we will be using git only. next, build triggers: what exactly is a trigger? just to understand with an example, whatever led you to take this particular course, that is the trigger point. in the same way, what triggered this job? let's suppose there was some code updated on GitHub; that triggered this job. there was some script executed externally; that triggered this job. there is some cron job running; that triggered this job. it depends on what exactly triggered the
particular job then you see build

environments

in what scenarios or what exactly is the

environment which is required to build

this project for

example um let's suppose in the I want

to delete a particular workspace before

the build starts so previous workspace

would be deleted and again a new

workspace would be created up when you

build or when you select the delete the

workpace before the build starts if you

want to use some secret text or some

file in those cases also that provides

an environment to execute the job that

can also be selected from here you can

add some timestamps to the console


output that is also being possible and

so on then you have the build steps

build steps is when the job would be

executed then what are the steps which

are being required to execute that job

so all those things can be taken care

from add a build steps we do have

postbuild actions so after this job is

being

executed then what exactly need to be

done if I take an example after this job

is being executed I need to send an

email to my developer about the

execution of this job whether this job

is successful or this job failed that

can be taken care from the post Bill

actions for now I will be going to the

build steps I will be clicking on the

build steps and then I will be clicking

on this add a build step and I will be

just printing or I will be printing up a

message which says hello world so I will

click on this execute shell so let's do

it: here I will click on execute shell, wherein Jenkins will go to my machine, which runs the Ubuntu operating system, execute the command, take the output, and show it to us in Jenkins itself. so let's check it out: here I will be echoing hello world, and press enter.
eching hello world enter now I can save

it now when you save it you will see the

now you are under the dashboard you are

in your job now inside this job we just

came out from the configure but there

are other things which are also being

there let's say if whatever changes you

are making up which can be seen from the

changes the workspace which is getting

saved workspace build now to execute

this job we will be clicking on this

build now to delete the job we will be

clicking on this delete project if you

want to change from the name let's say

if you want to change the name from my

first job to any other job name you can

click on this rename button for now I

will be clicking on this build now when

you click on this build now you will see

that this job is being

executed and last build happened 3.5

seconds ago which was one of the stable

build so last stable build happened at

3.5 seconds ago last successful build

happened again in 3.5 seconds ago and

last completed build happened also


happened in 3.5 seconds ago now what

exactly happened in this build can be

seen by clicking either here or here or

here or here or here anywhere you can

click on it to get your or get the task

done so for now I will be clicking on

this last build and when I click on this

last build you will see that this build

was successful this build was being

created on 4th FB

2024 and as per the timing of this

machine it was

6:21

p.m. now, to see what exactly happened inside this particular build, I will click on console output, where I can see the output. if you click on console output here, you will see it ran echo hello world and the output came up saying hello world: the command echo hello world got executed, and the message hello world was printed. I can also click on edit build information and change the build name; for example, if I want to change this build from #1 to, say, first build, I can save it, and you can see it changed from #1 to first build. also, if you click on my first job, you will see it is changed there too, from #1 to first build.
changed from hash one to the first build

now let's try to introduce some error

and let's see if my gen case is able to

find that particular error or not so I

will go back to this configure button

again here I will be clicking on this

build a

step I will replace this Eco with some

other message now Eco does not exist so

it should give me an error and I will

save it again now this time when I save

it and when I try to build my job again

let's see what

happens now this time when you see it's

the error which is been coming up and

when I refresh it from the T top you

will see last build was the second build

which happened 9.1 second

ago last failed build was also the

second build it was one of the last

unsuccessful build which also happened

9.1 second ago which is the second build

we are talking about so let's check it

out so I will be clicking on this hash


two and here when you see you will be

able to see the console output you would

be able to see the console output which

is there and when I open this console

output

you will see the message coming up Eco

it tried to execute it in the machine

but it got an output saying that Eco not

found so build step execute shell marked

as a failure so this particular step was

being marked marked as a failure for

me so I'm I got it completely from here

it that there was some error in my

command which I have put in the in the

build steps

to correct it, I can go back to my first job, click on configure, click on build steps, change it back to echo, and save it again. this time when I build it, you will see the build is executed properly. in this way you can work with a job; I hope you got this concept. now we will be creating a pipeline in

Jenkins, but before that let me delete this job: I can click on the drop-down here, and you will see an option to delete the job, so you can delete your existing job. let's now discuss how we
existing job let's now discuss how we

can create a cicd pipeline which will

include the checkout where the code is

being checked out from the GitHub and

then the comp filing of the code wherein

the code would be converted from a user

readable language to the machine

readable language after that we will be

checking the testing the QA which is the

quality

assurance and finally the deployment of

the code now to perform this activity we

will be first going and clicking on this

new item

here so let's click on this new

item and I will be writing a name of

this pipeline any name you can create it

for now I'm just giving the name as a

pipeline itself and I will be selecting

this pipeline here and click on okay

when you click on okay you will see that

there's an option of pipeline wherein

either you can write the complete script

in The Groovy or else you can use the

pipeline script from source code

management so my if my code there's a

Jenkins file which is there if it is

present in the GitHub then I will be


selecting this pipeline script from

source code Management in my case my

genin file is being present in the

GitHub itself now I will be selecting

the source code management as get and

the repository URL the repository URL

which I'm going to use is the github.com

the username and the project name so

let's go ahead and put the repository

URL

here. now let's first discuss what exactly the Jenkinsfile is and what it looks like. when I open this Jenkinsfile, you will see I'm creating a pipeline; this is the pipeline which is created. agent is any: there is a master-node architecture present in Jenkins, so that if there are multiple parallel jobs being executed, we can distribute the tasks between multiple machines, which we can create and connect with the master machine itself.


there are multiple stages which are

being involved what is the first stage

which is check out stage now first we

will check out the code from GitHub now

this is the first step which we are

going to execute what are the steps

which are involved in this first stage

is we will be taking the code from git

URL and this is where the code is being

present up now if you compare it with

here it's the same link which is being

present so my get where is my project

present onto this URL so what we are

going to do with this command it will be

cloning the git repository locally in

your machine itself in a particular

workspace now this is a message which

will be printed up which says Eco GitHub

URL checkout so this is a com this is a

print message which is being

executed after this is being performed

the Next Step which will happen is the

code compile with

AKA now this is just a name so we have

just mentioned a random name here but

yes this is the next command which will

happen The Next Step which will happen

up is the code

compiling here I'm going to start

printing that starting compiling and I'm


going to print the S which is shell I

need to execute this message which is

MVM

MN and this is the goal compile is the

goal which is being present up this is

now you might be wondering where this goal comes from. if you search for Maven and open the documentation link, you will

see that, to build a particular project via Maven, these are some of the goals present. for example, if you want to validate a particular project, checking whether it is correct and all the necessary information is available, you can use the goal validate: if you write mvn validate, this command is executed by the Maven installed on your server. and how do you install Maven? if you're using an Ubuntu-based operating system you can use apt install maven, but if you're using a Red Hat-based operating system (such as Amazon Linux or CentOS) you can use yum install

Maven. similarly for compile: if you want to compile the source code of the project, you can use mvn compile, wherein the code is converted from the user-readable language to the machine-readable language, which is the binary language. the same way, if you write mvn test, it will test the compiled source code using a suitable testing framework; we can use any specific testing framework and it will test that code for you, and at this stage we don't need to package or deploy the code
this particular uh code

itself now then we have the package now

after that we can convert the code to a

package for example if we are using or

if you're working on any Java based

project we can convert the code to a war

file or jar file that can be created

with the help of the or that can be done

with the help of the command which says

mvn package then we have the verify

which is there now we can run MV and

verify to run any checks on the result

of integration test to ensure the the

quality criteria is being met so to

ensure that the quality criteria is


being met we use something known as

mvn verify we can use the install mvn

install to install the packages into the

local

repository and it can be used as a

dependency in other projects as well

similarly we can use MV and deploy which

is done in the build environment and it

copies the final project or final

package to the remote repositories for

sharing with other developers and the

projects itself so these are some of the

goals which I can use and if you look at
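as a compact sketch, the lifecycle goals discussed above:

    mvn validate   # check the project is correct and complete
    mvn compile    # compile source code to binary form
    mvn test       # run tests against the compiled code
    mvn package    # bundle the code into a jar/war
    mvn verify     # run integration-test checks against quality criteria
    mvn install    # install the package into the local repository
    mvn deploy     # copy the final package to a remote repository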

and if you look here, we are using the goal compile, which will compile the code for me. so what will happen is: Jenkins will go to Maven,

and Jenkins will say, okay, please compile this code for me; Maven will compile the code, and whatever output comes up, Maven will hand it over to Jenkins. I don't have to do anything manually, because Jenkins is an automation tool doing all these tasks for me. so I will execute this particular command. then the next stage, the third stage we can say, is code testing, wherein the code testing will happen


up, and here, as we have already seen, we will use mvn test, where the code is tested. then the quality assurance is taken care of, and in the QA stage you can see we are using the QA tool named Checkstyle; the goal for Checkstyle is checkstyle:checkstyle. there are other tools like PMD as well, and the goal for PMD is pmd:pmd, so you can use any of these tools. then we have the package stage, wherein we run mvn package, and what this mvn package does is convert my code into a single executable unit like a war file or jar file; so it will be converting my code to a jar file. these are the stages involved in creating the complete pipeline.
the genkins we are saying that I need to

run the script which is present on this

location where my code is present for

now since this repository is a public

repository you will see there is none

credentials which is there but if you

want to create the or if it's a private


repository you need to use the personal

access token as the credentials wherein

you will be going to the manage genkins

there you will see the credentials and

you can add those credentials from there

itself scroll down this is the branch

which you need to

execute and when you scroll down you

will see the script path is the Gen file

let's suppose you're not using or you're

not using the keyword as jenin file the

file name is something else in that case

you can update the file name as well but

in my case my file name is also the

Jenkins file so I don't have to worry

about it I will be clicking on this pipe

save button now this time I will be

clicking on this build now now when you

click on this build now you will see

this code is getting

executed. so let it execute, and let's see what exactly happens; let me just refresh it once. there is some error which happened in the checkout of the code, and when I click on the logs here I can see: the recommended git tool is none, no credentials specified. so one problem is that we have not specified any credentials here; and going back to my repository, where the git URL is present, what has also happened is that I missed a particular step which says I need to define which particular branch we are working on, the branch this code is present on, so I need to give that as well. let's see how we can do it. we have made the required changes, and I will commit the code here, committing the changes; let's build it
changes let's bu it

again this time if you see it is being

executed

properly so all these steps would be

executed in this way you can even check

for the errors which are being there so

let's wait right now it has been able to

check out now it is compiling the

specific

code after that the code testing will be

done and then the QA and then finally it

would be converted to a package package

itself so let's wait to see what exactly

happens

up it will take some time to get it

executed because we are building it for

the first time otherwise it will save


some part of it as a cash so you don't

have to spend a lot of time again and

again to build this particular job but

yeah this is a complete pip line for

now

so we have already seen how we can automate and build a particular project and convert it to a package, a war file or jar file if it's a Java-based project. but assume a situation where you're working for a company named X and your client is Y, and in this company X you are deploying your application on a server; assume this server is an Ubuntu-based server with t3.micro or t3.large as the machine type. this is a testing server. the client saw what you have done, approves your application, and wants you to share that application; you share the application, and the client deploys it. now the problem is: the application is deployed onto the production server, which is exactly the same as the testing server, but the application fails to start there. the application runs properly on the X machine, but it fails to start for the Y company, which is your client. in this situation you will be in trouble; there would be some dispute between you and your client over whether the issue is with the application or with the server. in these cases, the main issue, the main culprit, is the
main culprit is your

dependencies that every application

requires to run so there is some

dependencies which every application

requires at every each and every point

now these dependencies might not be

present in your production server as

your client is unaware of your

dependencies and this testing server is

being launched by your developer itself

and since developer has developed the

application developer is aware of all

the dependencies which are being

required so in this situation the only

possible way is to put a application

inside a container we will be putting

the application inside the container and

we will deploy that container onto the

production server so we will create it

in a we will put create a container we


will create a container with application

with and all the dependency and we will

deploy that container in the production

server now in this way when this

container would be running up it will

not look for the

dependencies which are present in the

machine whatever this application

requires to run will be present within

this particular container only so my

first issue of my deployment is being

resolved it actually saves a lot of time

in the deployment and speeds up your

deployment itself. now, for this process of creating the container, of containerizing the application, we have a software named Docker which does this task, and it is open-source software. so we will automate the applications and the deployment with the help of Docker, which will create the containers; and to manage these containers we have Kubernetes. also, we can validate the deployments, where we include some automated tests to verify deployment success, and we can use infrastructure as code, wherein we provision the environment consistently via scripts; all those things we can do very easily with the help of an IaC service. now we are going to discuss all of these one by one. to utilize containers we have Docker, which is used to containerize the applications and services. containerizing helps in the ease of deployment, where I don't have to worry about the deployment side, as I can create an application, put all the dependencies in a particular container, and deploy that container to the server. it does not matter if my server has a different operating system, because whatever the application requires to run will be present within that particular container itself. so now, before going
wherein I will be opening the AWS

machine and I will be creating an ec2

machine here okay so I will click on

this ec2 I will open my AWS account and

I will be clicking on this ec2 I will


click on this

instances okay I will be clicking on

this launch instances and I can put like

Docker

machine here I can use Amazon Linux or

UB 2 for now I will be using the a as

uban 2 itself in the instance type I can

use T3 do micro that works seamlessly

without any

issues now in the key pair I can just

select my new key gen or any specific

key pair you can select which helps in

SSH to your particular

machine. I will then scroll down and enable SSH, which will help me connect to the machine, and I will allow HTTP, which permits web traffic to reach the machine, and I will create one instance. so let me create one instance here, and you will see the machine start getting created. now I will be

clicking on this instance so that I can see it getting created. after the machine changes to the running status, I will select it and click on the connect button; there you will see EC2 Instance Connect, and I can click on connect. when I click on connect, I can directly enter the machine via the browser; you can use PuTTY, MobaXterm, or any other software to connect to this particular machine, but as of now I'm connecting to this machine via the browser.
now the First Command which I'm going to

execute is Pudo SU which will give the

super user rights to my user uban 2 so

that we can perform the root or we can

perform the activities which requires

the root permission so I have used sudo

and now I will be installing the Dockers

so to install the Dockers first we need

to update the machine so we will be

using AP update hyphen Y which will

update the machine for

me and after this machine is updated I

will be

installing the

docker so I will write AP install

docker.io

hyphen y so we will be installing the

package named as docker.io hyphen Y is

yes by

default okay now I can press contrl L to


clear the screen and I will be starting

the docker so service Docker is start

just to start the docker now before

moving ahead let me also show you how

the Docker architecture looks like so I

will be opening the Google

and let's see how the docker

architecture looks like which will help

you to understand the concept in much

more better and the efficient way so

when you look at the docker architecture

it comprises of three points here one is

the client one is the docker host and

the third one is a

registry client is you who is going to

per perform the activities let's say for

example you will be using the docker

build to execute the docker file you

will be using Docker pull to download or

to pull a particular image from the

docker hub.com to the local registry

which is being present up Docker run

command is being used to create a

container we are going to discuss all

these three commands in our training

itself Docker host is a machine on which

you are going to perform the activities

so the machine where I will be creating

the containers in the end would be the


docker host why I'm calling it as a

Docker host because I will be I have

installed the docker inside this

particular machine now whenever you

write or type any command it interacts

with the docker demon now Docker demon

manages your containers and the images

okay it's one of the components of

docker to manage the containers and the

images for you and then we have the

images which is present in the machine

also Docker host is a machine right so

images would be present in the machine

as well and we call it a local

registry containers are been created up

from the images which are present in the

local registry okay now what are the

images images are nothing but a read

only template from which the containers

are being created up okay so let's

suppose I am giving an instruction that

I want to create an uban 2 container the

request will go to the docker deman and

after request goes to the docker demand

it goes it will first check in the local

registry now after checking in the local

registry if it is able to find it is

able to find that image let's say uban

to image it will create a container if

image is not there it will go to the


docker Hub and it will pull that image

and from there the container would be

created up so in this way the three

parts the client Docker host and the

registry would be communicating with

each other to perform the task in the

system. the default Docker registry is offered by Docker itself, and the link for it is hub.docker.com. you don't have to go and manually pull the image, Docker will do that task for you; but you can also create some custom images, let's say from some custom container, and push them to Docker Hub as well.

now, to see the images on the Docker host, we run the command docker images. docker images gives the list of all the images present in the local registry; for now, since I have just installed Docker, there is no image present in the local registry. now, if I write docker run or docker pull followed by any image name, say for example mysql, it will pull the mysql image.


but where does this mysql word come from? I will go to hub.docker.com, and let me sign out for now; you might not have an account on Docker Hub, so assume that you're starting from here and you have opened Docker Hub. I want to download the MySQL image, so I can simply search for mysql here, and you will see the MySQL page open on Docker Hub with the command docker pull mysql. it will pull the latest image, but if you want to pull a specific tag, you can go to the tags and pull that specific tag as well. for now, I will pull the latest image and execute docker pull mysql; you can also put colon latest, which will likewise pull the latest image, and if you want to download any specific version, you can put colon and the version of that particular image, and that image will be downloaded for you. so as of now, Docker is pulling the mysql image here, and the image gets downloaded.
here and this image would be downloaded

now at this point when you run the


docker images you will see there is a

MySQL image and this is an image ID

which is being present now I want to

remove this image so I can run Docker

RMI and MySQL which

and this command will be removing the

MySQL image if you open Docker images

you will see that that particular image

is now being removed as well if you want
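the pull/list/remove cycle just walked through, as a sketch (the 8.0 tag is only an example):

    docker pull mysql        # pull the latest mysql image from Docker Hub
    docker pull mysql:8.0    # or pin a specific tag with a colon
    docker images            # list images in the local registry
    docker rmi mysql         # remove the image again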

now suppose you want to download and use the Alpine image: you can just search for Alpine here, and when you open it you will see the Alpine image.

Alpine is one of the smallest Linux-based operating systems; it claims around 5 MB, though approximately it's around 7 to 8 MB in size, and it has been downloaded 1 billion plus times, with a lot of people giving positive feedback about it. so let me just copy this command and paste it here. this time, when I run this docker pull, you will see that this image is now present with a different image ID, and on Docker Hub it was last created 12 days ago. now I can also remove this

ago now I can also remove this


particular image; I can use the docker rmi command to remove it, but for now I will not be removing it. so what has exactly happened is: when I ran the docker pull command, it interacted with the Docker daemon, which went to the Docker registry and pulled the image into the local registry. this is what is being done

here. now, at this point, I will run docker images, and you will see that the image is present. now suppose I want to create a new container, with Ubuntu as the base image: how can I do it? for that I will write docker run -it, where -it is the interactive terminal which we use to get inside the container, then --name, say c01, then ubuntu /bin/bash. this docker run command will create a new container and open the interactive terminal of the container named c01, with ubuntu as the base image and a bash shell started inside it. press


enter. now, since the image was not present locally, this request first went to the Docker daemon, and the Docker daemon checked whether that image was present locally or not. if the image had been present locally, it could create the container directly, but in our case this Ubuntu image was not present locally, so it started pulling the image from the registry, and after the pull completed, the container was created. so I'm inside the container right now, with its unique container ID shown in the prompt.
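the run command used here, in one line (c01 is just the session's example name):

    docker run -it --name c01 ubuntu /bin/bash
    # -it        : interactive terminal, so we land inside the container
    # --name c01 : the container's name
    # ubuntu     : base image; /bin/bash is the shell to start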

now, let's say I want to create some files: I can create file one, file two, file three, and so on, and if you do an ls you will see these files created. so we are inside the container; it's an Ubuntu container, and I can run all the commands which we generally execute in Ubuntu, such as apt update -y, or apt install vim -y if you want to install some software; you can use the apt command to install anything and do any particular task here. now, to come out of the container, you just write the exit command, which takes you out of

the container itself. now, when you do a docker ps, it gives you the list of containers: to see the list of all the running containers we write docker ps, but to see the list of all the stopped as well as running containers, you write docker ps -a. so you can see that I have created a container with the name c01, created 2 minutes ago, which exited 16 seconds ago; the command we used is /bin/bash, the image we used is the ubuntu image, and this is the container ID which was used. now, since this

container is stopped, I need to start it before I can enter it. but let's first directly use the command to enter inside the container; the thing to note here is that we need to start the container first, so let's first encounter the issue, where it gives us a message saying you need to start the container first. we use docker exec -it c01 /bin/bash, and we get: error response from daemon; the Docker daemon tells me that the container with this ID is not running, you need to start it first. so: docker start c01, and then docker exec -it to enter inside the container, and you will see these files are still present; even the vim I installed, if you check whether vim is present or not, you'll see vim is installed, which means I was able to enter the correct container. in this particular way, with the help of the exec command, I'm able to enter inside the container. for now, I'm just writing exit to come out of the container itself.
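the start-and-enter sequence from above, as a sketch:

    docker ps                       # running containers only
    docker ps -a                    # running and stopped containers
    docker start c01                # start the stopped container
    docker exec -it c01 /bin/bash   # open a shell inside the running container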

now suppose I want to create one more container: let's say I run docker run -it --name c02 ubuntu /bin/bash. when I create this one more container, you will see that it has not pulled from

Docker Hub, because it was able to find the image locally in the local registry, and it created the container for me, with its own unique container ID. now, inside this container, I can create some files, whatever files I want, or install some software if you want to; whatever you want to do, you can get that particular task done. and you can even press ctrl+p followed by ctrl+q to come out of this particular container; that is also possible. so I have pressed ctrl+p and ctrl+q to come out of this particular container

itself. now, after coming out of the container, when I run the docker ps command, which shows the containers running inside Docker, you will see that container, because we came out of it without stopping it. earlier we were using the exit command, by which we were exiting the container, which means we were stopping it; with ctrl+p and ctrl+q, I just came out of the container without stopping the container
here now to see the list of all the

containers let me just repeat it back it

can be Docker PS hyph a to process hyph

is all the processes I need to see as of

now both are running and that's the

reason I am able to see both the

containers now to stop any specific

container I can run the command and I

can run

C02 you can even replace this name of

the container with the container ID that

is also being possible now with this

particular command with the command

Docker stop C02 it will be stopping this

container it will be stopping the

container with the name the C02 now the

since the container is being stopped I

can run Docker PS now in this case one


container will only be shown because

only one container would be running up

and when I do Docker PS hyph a it will

show me two containers which means one

container is stop and one container is

running hyphen is all which means stop

and running both it will be showing it

to me so in this way I can get or I can

see the list of running as well as the

stop containers I can press contr L to

clear the

screen now let's suppose I want to

create a replica of the container

suppose let's

say let's consider a situation that I

have a container with some application

running inside it with all the

dependencies which are being present up

let's suppose this container name is c03

and I want to create a replica of this

container I want to create the replica

of this container

c03. Now, one thing you always have to keep in mind: a container is always created from an image. So, to

create a replica of this container which

is

c03 I will

be converting this container or I will

first convert this container to a custom


image and from that I can launch as many

containers as I want from that Custom

Image but how to convert it to a custom

image it is being done from the command

known as Docker commit there is a

command which is known as Docker commit

which will convert this container to a

custom image so let's see how we can do

it so to perform this activity let's

first create a container with the name

c03. So I will just run: docker run -it --name c03 ubuntu /bin/bash. Now, inside this container, let's say

some dependencies or some files are

present some dependencies and some files

are present. Let's say I will update all the packages of the container, and on top of it I will also be installing Apache: apt install apache2 -y. Okay, Apache is also

installed now after installing the

Apache I can even start the Apache all

those task I can do it but let's come


out of the

container. Now I can run docker ps -a, and you can see there's a container with the name

c03 I want to create a replica of this

container to create a replica of the

container name c03 I will be creating a

custom image of the container

c03. So to create a custom image I will be writing docker commit; commit is a command which will create a custom image from the c03 container. Let's say I want to create an image with the name akhat-img; I am putting my own name in it so that you can easily recognize it when I run docker images. Now the docker commit command will create a custom image with the name akhat-img from the container c03. Now, when I execute this

command you need to wait for just 5 to 6

seconds and you will see that some

number comes up in front of you and now

when you do a docker images you will see there is an image which has been created. Now, if you're wondering what this number is, it is nothing but the image


ID. Okay, now this is the latest image which we have created; it was created 10 seconds ago, and its size is 234 MB. Why is the size so large? Because we have installed Apache on top of the Ubuntu base that is already in this particular image.
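A minimal sketch of that snapshot step (names as used in this demo):

    docker commit c03 akhat-img   # freeze the container's filesystem into a custom image
    docker images                 # the new image shows up with its own image ID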

Now I want to create a container from this custom image with the name akhat-img. How can we do it? So, to

create a container I will be writing

docker run -it --name c04 akhat-img /bin/bash. Rather than ubuntu as the base image, this time I will be using akhat-img as the base image, and when I execute it you will see that I'm able to enter inside the container with an ID like ecb0b9d35. Now, since this is a replica

of

of c03, when you do an ls you will be able to see the files we created earlier. Even if you want to check whether Apache is there or not, you can run which apache2, and you will see that Apache 2 is also present here. Since it's a replica of c03, whatever was present in c03 has been copied into c04 as well

now here I can write exit command to

come out of the

container. Now let's suppose my client wants me to give this particular image to them. How can I give this image to the client? There are multiple ways. The first way is pushing this image to Docker Hub. So you need to go to the official website of Docker, which is hub.docker.com, where you just click on

sign up you can sign up from Google you

can continue with GitHub or you can

manually sign up also I already have an

account so I will be signing it to my

account itself so this is my mail ID I

will be using this mail ID and I will

sign in. When I sign in, the Docker Hub dashboard opens in front of me. Okay, so I will be pushing my image here. For that I will first use docker login; basically I will be logging

in to my dockerhub account since you

have already created a dockerhub account

you should be able to log in easily, provided you signed up to the Docker Hub account manually. So, with docker login, you can put the

username and I will be putting the

password. Now, when you type the password it will not be visible to you, but after signing in you will see it showing login succeeded, which means I'm able to log in to

Docker Hub from this account. Now I can push the image. I have also tagged akhat-img with my Docker Hub username, and both names have the same image ID (2b490bce... or similar); that ID applies to both akhat-img and the username-prefixed akhat-img.

Now this time I will be pushing: docker push with aku20791, which is my Docker Hub username, followed by the image name. It will start pushing this image for
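A minimal sketch of the whole push flow (aku20791 is the Docker Hub username from this demo):

    docker login                              # authenticate to Docker Hub
    docker tag akhat-img aku20791/akhat-img   # add the username-prefixed name
    docker push aku20791/akhat-img            # upload the image to the registry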

me. Okay, now I can run docker images, and if you refresh Docker Hub you will see there is an image with the name akhat-img: I have been able to push it. Now I can go to the collaborators, add my client's username there, and in that case my client's Docker Hub account will also get access

to this particular repository with the

name akhat-img. But as of now I just have a single account, so I will not be adding any collaborator; instead I will create a different machine, the client machine, and show you how my client will pull the image onto their machine. So I can click on Launch Instance and name it

my client

machine now when you scroll down you

would be able to see Amazon Linux. Let's use a different operating system: I earlier used Ubuntu, and now I will be using Amazon Linux here. I can scroll down and use t2.micro, that is absolutely fine. For the key pair I can use the existing key pair or any key pair, that is also absolutely fine. I can allow HTTP and SSH, and let's launch the

instance now when you launch the

instance there would be a client machine

which would be created for

you. It is in the pending state; let it go live, and then we can continue.

yeah it is now running I can select it I

can click on this connect and I can

again connect to this client machine as

well. So in the client machine I will be using sudo su, and then I will be installing Docker. Let me try it: I run sudo su, and then yum install docker -y to install Docker

here now after installation of the

Docker. Okay, it is completed. Now I can start the Docker service: service docker start, which will start Docker. And if you want to see the Docker status, you can run the command service docker status, and it will show you that it is running right now. You can press Ctrl+C to come out of it, and Ctrl+L to clear the screen.

Now I will be using docker pull with this image; on a different machine altogether I will be pulling this image, and I can press enter
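A minimal sketch of the client-machine side (Amazon Linux; aku20791/akhat-img is the image pushed earlier in this demo):

    sudo su
    yum install docker -y             # install Docker on the client machine
    service docker start              # start the Docker daemon
    docker pull aku20791/akhat-img    # pull the shared image from Docker Hub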

here. Now, if you run docker images here, you will see this image has been pulled down. If you want to retag the image, you can run docker tag with the current image name and whatever name you want to tag it with; but for now I will use this image directly, and I will create a container: docker run -it, then a name, let's say c01, or, if you are getting confused with the name, let's say client01-container, then the image name, then /bin/bash. Press

enter So I entered inside the

container. This time, if you do an ls, you will see all the files are present, and if you do which apache2 it will show you Apache, which means Apache is installed. Since we already installed Apache in the previous container and baked it into the image, and I'm using the same image on a different machine, that's the reason why it is visible to us

now this is the client machine and this

is how you can deploy this particular

application to the client machine as

well I hope you understand this

particular concept wherein how the

deployment happens to the client machine altogether. Now let's assume a situation where we want to launch an application over the internet. The main question which comes up is: how can we launch any application over the internet? To do that we use the concept of port expose. So assume that this is an EC2 machine, and inside this EC2 machine let me create a container. Now this container will consist of some application which is running on port

number 80, the HTTP port. Now, this is running on port 80, but the problem is that the container does not have any public IP of its own. So if a person from the internet wants to access this application, how can they access it? In this case we use the concept of port expose, where we map a port of the machine to the port of the container. There are about 65,535 ports; you can select any free port and expose it for your application. In this way your application's port would be exposed for your client, and you would then be able to get your application deployed over the internet. So let me just run it

and walk through it. So I will be running docker run -it --name webapp1 ubuntu /bin/bash; let's use the ubuntu image only. Now here, first of all, we will update all the packages inside this particular container.

Now, after doing that, I will install... docker.io? Oh sorry, I don't need to install Docker here; I will just install apache2, which is nothing but a web server. If I want to launch any application over the internet I need a web server, and for that particular reason only I will be installing Apache 2. Now, after this is

installed I will be running

or, let's say, I will go to the location /var/www/html. Okay, now here if you do an ls you will see an existing index.html file. But there's a small problem with what we have done here: while creating the container we have not exposed a port, and that is something which is mandatory. So what I'm going to do now is remove the container, or create a new

container with the port exposed. So let's expose the port with -p, let's say 8909:80. There are approximately 65,535 ports; you can use any port which is not in use. In this case I'm using -p where 8909 is the machine port and 80 is the container port where my application will be running. Press enter, and you will see that a container with the same name already exists, so I will change the container name

to webapp2. Now I'm inside the container, and I have already mapped machine port 8909 to container port 80. Now, inside this, I will be
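A minimal sketch of the corrected run (names and ports as in this demo):

    docker run -it -p 8909:80 --name webapp2 ubuntu /bin/bash
    # -p HOST:CONTAINER maps machine port 8909 to the container's port 80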

first updating all the packages, so I use apt update -y; it will update all the packages for me. Then I will be using apt install apache2 -y, which installs Apache 2 for you. Now we can run service apache2 start, which will start Apache 2. Apache 2 is now

started up. Now, if I open port 8909 on the public IP, I will still be getting an error, because the security group of the machine will not allow me to access port 8909; it gives me a timeout error. But if you go and add the port in the inbound rules of the security group, the default website will open up. So let's go to
Security, and if you click on the SG you will see Edit Inbound Rules. Click on Edit Inbound Rules and add a rule. What was the port I used? Let me check back: 8909. So I will use 8909, change the source to 0.0.0.0/0, and save the

rule now this time if you refresh it you

will see a default website is being

opened up for you. Instead of this default website you can even launch a custom website, by creating an index.html file in the /var/www/html directory. So I will come back to my machine; it's webapp2 which I created, so I can again get inside webapp2 using docker exec -it webapp2 /bin/bash. Okay, I'm inside the

container. Inside this container I can go to the /var/www/html directory, where, if you do an ls, you will see there is a file with the name index.html. If you remove this

file, you will see this page will no longer load, because whatever files are present in that location are what is actually served over the internet.

So let's say I will create an index.html file. Okay, the vi command is not there, so I can run apt install vim; vim is a file editor, so I will be installing the vim editor. Now I can run vi index.html, which is the homepage, press i, type Hello World, then Escape and :wq. Okay, there is an

index.html now if you refresh this page

you will see a hello world is being

opened or hello world would be

accessible from the internet. And this is how the port expose concept works, wherein we can expose a specific port of the container to serve an application over the internet as well.
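The whole flow as a compact sketch (all names and ports as used above; the apt and echo lines run inside the container):

    docker run -it -p 8909:80 --name webapp2 ubuntu /bin/bash
    apt update -y && apt install apache2 -y
    service apache2 start
    echo 'Hello World' > /var/www/html/index.html
    # plus a security-group inbound rule for port 8909 on the EC2 machine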

Now, after port expose, we do have other concepts; for example, we have the concept of Docker volumes. In the case of Docker volumes, what happens is: we have a container. Now let's suppose

it is a tomcat container which is

generating some tomcat logs now we know

it very well that containers are very lightweight and tend to be terminated frequently. But if the container terminates for any reason, I will lose all the data inside it. I don't want to lose that specific data, so in that case I will take the help of a persistent volume: I will be using Docker volumes, where Docker creates a directory on the machine, and that directory is utilized to save the content of a specific directory of my

container. So ultimately, let's say this is a machine, and inside it there is a container; I will map a specific directory of the machine, which is the Docker volume, to a specific directory of the container, so whatever is inside that directory of the container is kept in the Docker volume

itself but how we can work with the

Docker volumes? Let's see. First of all, let's create a volume: I will be writing docker volume create, and let's say I want to create a volume with the name new-vol. Okay, I'm still inside a container, so let me come out of it first. Now I write docker volume create new-vol, and it creates a volume with the name new-vol. After creating this volume I can list all the volumes: I run the command docker volume ls, and I can see the list of all the volumes, including the volume with the name new-vol that I have just created. Now,


let's say I want to create a container, and that container needs to be mapped with this new-vol volume which is present on our EC2 machine. In

this case I will run: docker run -it --name webapp3 --mount source=new-vol,destination=/container-vol ubuntu /bin/bash. So I'm going to map the source and the destination here, and when you press enter you will see some error coming up.

Okay, so it's saying some unexpected content is present. Let's check: docker run -it is correct, --name webapp3 is correct... ah, the spelling of source in --mount is wrong. With --mount source=new-vol corrected, and destination= as before, this time you will see that the container is created.

Now, if you do an ls you will see that there is a /container-vol directory present: the new-vol volume on the machine is mapped to the container-vol directory of the container. So I can cd into container-vol and run ls;

there is no file yet, so let me create some files. Okay, now I've created three files here, and I can come out of the container. To show you that these files are persistent, let me even delete the container:

we delete the container by running the command docker container rm webapp3. Okay, now we have deleted the container; if you check docker ps -a, the container is gone, there is no webapp3. But still the data


would be persistent now to see where

exactly that data is being present up we

will

First let's see the volume name: docker volume ls shows we have new-vol. Now we will inspect the volume: docker volume inspect new-vol. It inspects the volume, and you will see that there is a mount point listed, something like /var/lib/docker/volumes/new-vol/_data. If you copy this location, go to it, and do an ls, you will see the files are still

present. Now I can map this new-vol volume to another container and reuse these files. So, as we have proved, with the help of Docker volumes I get persistent data: I will not lose the data even if my container is deleted or lost for any specific reason.
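A minimal sketch of the volume experiment above (new-vol and webapp3 as in this demo):

    docker volume create new-vol
    docker volume ls
    docker run -it --name webapp3 --mount source=new-vol,destination=/container-vol ubuntu /bin/bash
    # ...create files under /container-vol, then exit...
    docker container rm webapp3
    docker volume inspect new-vol   # shows the Mountpoint; the files survive there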

There is one more thing, which is known as bind mounts. Now let's suppose I go back to the home directory, and let's say I have (or let me create) a directory with the name mydir, where I have some files, say my-file and new-file. So some files are present

Now, I have an existing directory with some files in it, and I want to map it into a container. To do that I run docker run -it with a name; this concept is known as a bind mount, so any name works, say bindmount1. The volume I'm going to map is mydir; you can give the complete location, /home/ubuntu/mydir, to be mapped with, let's suppose, the /app directory of the container. So mydir would be mapped with the /app directory of the container, and if I go to the /app directory of the container I should see the files my-file and new-file there. I finish the command with the base image (ubuntu here) and /bin/bash. Now, if you do an ls, you will

see there's an /app directory there. I can get inside this /app directory, do an ls, and you will see my-file and new-file are already present. So, for an existing directory that you want to map into a container, you can do it with the help of bind mounts.
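A minimal bind-mount sketch (paths as in this demo; -v HOST:CONTAINER is the short form, equivalent to --mount type=bind,source=...,target=...):

    mkdir -p /home/ubuntu/mydir
    touch /home/ubuntu/mydir/my-file /home/ubuntu/mydir/new-file
    docker run -it --name bindmount1 -v /home/ubuntu/mydir:/app ubuntu /bin/bash
    ls /app   # my-file and new-file show up inside the container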

Okay, now after this is done, we have a very important concept, which is known as the Dockerfile. Now, what exactly do I mean by Dockerfiles? Before that, let's

come out of the container and let's talk

about the docker file so assume a

situation that there is a developer and


the developer wants you to install a set of dependencies. Now, for you it is really

impossible to remember all those

dependencies because you are a devops

engineer you have not developed the

application so you might not be aware of

the significance of each and every

dependency so it is possible that the

developer is asking you to install some

XY Z dependency but you install some

another dependency or some other version

of dependencies so that is something

which is very risky. So if you have more than one dependency, you can ask the developer to write down the list of those dependencies, and you can execute that list to get the task

done okay so this list of dependencies

is nothing but a Docker file which is

Now, this Dockerfile is really important, one of the most important components of Docker. It is nothing but a text file, and it consists of a list of instructions to create a Docker image. So it automates


Docker image creation. For example, in the port expose section we saw that we need to create a container, install Apache inside it, remove the index file, and create a new index file; we did

it all manually, but all these tasks can be automated with the help of the Dockerfile. How is it automated? For that, you need to learn how to write the Dockerfile,

and Docker file is composed of various

Elements which includes let say for

example you can use

FROM, for the base image. This particular instruction must be at the top of the Dockerfile: you will always start your Dockerfile with FROM and give the base image, saying, okay, I will be using Ubuntu as the base image for this Dockerfile. So the container which is ultimately created from the image will have Ubuntu as its base. We also have the RUN instruction. What does this RUN instruction do? This


RUN instruction executes any command on your behalf: for example, if you write RUN apt install apache2 -y, it installs Apache 2 for you. So it executes the command, and it creates a layer of the image for you. I can also use the concept of

MAINTAINER, wherein I can define the author, owner, or description of the person who owns this particular Dockerfile. We do have a COPY

instruction as well, which copies a file from your local system (which is nothing but the Docker VM) to a destination inside the container; so while building the image it will copy the file into the container

itself. Okay, we also have the ADD instruction, which is similar to the COPY instruction but additionally provides a feature to download files from the internet, and it can extract archive files on the image side. We do have the EXPOSE instruction as well, which is there to expose ports:

let's say if you're creating a Jenkins container, then you will be using EXPOSE 8080, because Jenkins works on port 8080. Okay, we do have the WORKDIR

instruction too. What does WORKDIR do? It sets up a working directory for the container: any RUN, CMD, or ADD instruction that follows in the Dockerfile executes in that working directory. Then we have the CMD instruction:

CMD also executes a command, but at container creation time; so to execute a command when the container starts, we use CMD. Then there is ENTRYPOINT, which is similar to CMD but has a higher priority over CMD, so the ENTRYPOINT command is executed first.
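A compact annotated reference for the instructions just described (the names and URL are illustrative, not from the demo):

    FROM ubuntu                      # base image; must come first
    MAINTAINER someone@example.com   # author/owner of the Dockerfile
    RUN apt update -y                # executed at image build time, creates a layer
    COPY myfile /app/                # copy from the build context into the image
    ADD https://example.com/pkg.tar.gz /tmp/   # like COPY, but can fetch URLs or unpack local archives
    WORKDIR /app                     # working directory for later instructions
    EXPOSE 8080                      # document the port the app listens on
    CMD ["bash"]                     # runs at container start; ENTRYPOINT outranks it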


Now let's start with writing a simple Dockerfile and see how we can utilize it to perform activities in the system. So let's check it out. To create a Dockerfile, I will be using the vi editor. So now, here, I

run vi Dockerfile. A Dockerfile must always be created with exactly the name Dockerfile, with a capital D. Press enter, and once inside, press i and you can create a simple Dockerfile:

let's say FROM ubuntu, so the base image will be Ubuntu. I want to create a blank file, my-file, from inside the Dockerfile. I can also set up the working directory (it executes step by step), so after that I create the working directory, let's say one named azure-dir, and after doing that, if I

touch second-file, you will see the second file is created inside this working directory. I also want to install something: first we need apt update (we must update all the packages, only then can packages install), and then apt install apache2 -y.

Okay, also, after installing it, let's say I want to copy some file: I can use the COPY instruction. Let's say there is some file, fab-file, which I will be creating on the machine, and I need to copy it into, let's say, the my-dir directory (relative to the working directory); in that case it will copy it into my-dir. And I can come out of it: press Escape, then

:wq to save and quit. Now, after creating this Dockerfile, I need to execute it. How can I execute this Dockerfile? We will run docker build -t second-image . so that it builds the image and tags it with the name second-image, using the current directory (the dot), which is where my Dockerfile is present.
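Assembled, the Dockerfile from this walkthrough looks roughly like this (file and directory names as dictated above):

    FROM ubuntu
    RUN touch my-file            # lands in /
    WORKDIR /azure-dir
    RUN touch second-file        # lands inside the working directory
    RUN apt update -y
    RUN apt install apache2 -y
    COPY fab-file my-dir/        # fab-file must exist next to the Dockerfile

    # then, from the directory containing the Dockerfile:
    #   docker build -t second-image .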

When it builds, you will see all the instructions getting executed, but at the end I get an error, because I have not created fab-file locally, so the build cannot find that file; it gives an error right there. This just shows how significant the COPY step is: you will see an error that fab-file is not present. So I first need to create it: touch fab-file. Okay, now I can run the same

command again and you will see the

second image would be created up now if

you do Docker images you will see there

is a second image which is being created

Now I want to create a container from it, so I can run docker run -it --name c05 second-image /bin/bash. I'm now inside it, and you will see your working directory by


default is azure-dir here, and when you do an ls you will see my-dir is also created, because I wanted to copy fab-file into a my-dir directory. If you go inside this my-dir you will see there is a fab-file there; but if you go outside with cd ../.., you will see there is a my-file present as well, just as I told you.

Stopped containers, unused images and networks, build caches: all these things the prune command will delete for you. So when you run docker system prune -a,

you will see it will remove all the stopped containers, all networks not used by at least one container, all images without at least one container associated with them, and all the build caches. You can press y and enter, and you will see all of these being removed by the prune command. So all the unwanted images and stopped containers are completely removed with the help of the prune command. But this command needs to be used very cautiously,


otherwise it might create a lot of problems and issues in your system. Now, in Docker, the management of Docker containers can be taken care of by a service known as Docker Swarm, wherein there is a master

machine, and there are nodes where my containers are created; from the master machine we can create the containers and even manage them. This service of Docker is known as Docker Swarm. With Docker Swarm I can manage the containers, balance load between the containers, do scaling of the containers, create containers, and, if any problem happens, say my containers are accidentally deleted, it will recreate the containers for you. So all those tasks can be taken care of by the Docker Swarm service, wherein we will be creating a master and


two nodes, we will connect them together, and we will perform activities from the master machine onto the

nodes. For that I will go to EC2, and I will first delete the client machine; keep deleting your unused resources, otherwise you might end up paying a huge bill. Since I'm not using this client machine, I can terminate it. Now I can click on Launch Instance, and I can name it, let's say, docker-swarm. I will

be creating three

machines. I can scroll down and use Ubuntu as the operating system, with t2.micro; there are no particular restrictions on the machine type here. I can scroll down, click Edit in front of Network Settings, and enable All Traffic rather than just

SSH. Now, the number of instances: change it to three; later on I will rename them and make one the Docker master machine and two the node machines. So let's do that, and I will click on

this launch

instances now I can refresh it and you

will see three machines are being

present: the Docker swarm machine, node one, and node two. Now I can connect to all three machines one by one, and then we will join them together to perform the activities. First let me log in to the Docker swarm machine: I select it, click Connect, and connect to it. Similarly I go to Docker node one, click Connect, and connect there; so this is my Docker node one, this is my Docker swarm machine, and the third one the same way. All three machines I will take care of.

Now, after this is done, I will initialize the Docker swarm machine as the master machine.


To initialize it I will use docker swarm init, and I will pass --advertise-addr with the manager's IP address, which is the private IP, 172.31.89.53. So I write docker swarm init --advertise-addr 172.31.89.53 and run it. It still says Docker is not installed; I think I have not installed Docker, so I will run apt install docker.io -y.

I need to install it on all three machines: apt install docker.io -y on each.

After this is installed, I start the Docker service: service docker start, which will start Docker. The same task on the second machine: service docker start. And the same on the third, once it finishes installing: service docker start. Now I will initialize this Docker swarm machine as the master machine. So I initialized


it, and it is giving me an error saying docker swarm init has an unknown flag named advertise. Let's see the issue: the flag must be --advertise-addr as a single token, with no space between advertise and addr; so I remove the gap and press

enter. Now you will see that, to add a worker to this swarm, we need to copy the printed docker swarm join command and run it on the nodes. So I copy this command and run it on both nodes, node one as well as node two. It has now been run on node one and node two.
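A minimal sketch of the cluster setup (IP as in this demo; the join token is printed by swarm init):

    # on the master
    apt install docker.io -y && service docker start
    docker swarm init --advertise-addr 172.31.89.53

    # on each node (command copied from the init output; token elided)
    docker swarm join --token <worker-token> 172.31.89.53:2377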

Now, in Docker Swarm we create services. Okay, so if you write docker service ls, you will see that currently there is no service

okay

but we can create a service as well okay

wherein I can set the number of replicas, which are nothing but the containers. So: docker service create --name webserver4 --replicas 4 nginx, with nginx as the base image. When you run it, you will see four replicas getting created

here. Now, if you run docker service ps webserver4, to see the containers created in the service named webserver4, you will see the four containers present up here

itself. Okay, and if you want to see the list of services, you can write docker service ls, which shows me the list of services; the service name listed is webserver4. So in this way I created a service with the name webserver4.
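The service commands as a compact sketch (service name as in this demo):

    docker service create --name webserver4 --replicas 4 nginx
    docker service ls                   # one service, 4/4 replicas
    docker service ps webserver4        # the individual containers, spread across nodes
    docker service scale webserver4=10  # grow (or shrink) the replica count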

Okay, so in this way you can create services and deep dive into Docker Swarm as well.

if you want to even scale it up let's


say, from four, I want to scale to 10 containers for some reason. So I can write docker service scale webserver4=10, and you will see it converts the service to 10 containers: it increases the number of containers to 10. If you do docker service ls, there is still one service only, but with 10 replicas. And if you go and check the service with docker service ps webserver4, you will be able to see the 10 containers which have been created,

and you will see that the containers are distributed across the cluster: containers are created on both the master and the nodes. Now, if any container is deleted for any reason, it will be replaced by a new container almost


immediately. So the chances of losing a container, or of my application going down, are now minimal, because Docker Swarm takes care of keeping my application live and bringing it back to the desired replica count. Okay, so with this we have completed Docker Swarm as well. Now we are going to start with

Ansible. Now, what exactly is Ansible? Ansible is open source, which means it is completely free; it is a command-line IT automation application written in Python. What does Ansible do? It is a configuration management tool which automates the deployment of applications onto the node machines.

It can also configure systems, deploy software, and orchestrate advanced workflows to support application deployments, system updates, and much more. Okay, so

let's assume a simple example let's say


you want to deploy a particular software

in 10 different machines which are your

production machines. Now, when you want to install a software onto your production-level machines, what happens is you have to do it manually, one by

one you will go to the first machine

install that software you will then go

to the second machine install the

software and this will keep on happening

up now the major problem here is if the

software is very big then it will take

you a lot of time to install that

software manually you have to spend lot

of time doing that manual

step. The second major problem is man-made errors: as human beings we tend to make mistakes, and it is possible that at production level I make some error and only realize it later. That is a real problem at a production

level So to avoid it I can take the help

of Ansible, which is an automation tool, or I can say a configuration management tool, wherein I will be using a master machine connected to the nodes, and I will interact with the master machine to get the task done on the nodes. This is what Ansible is and what Ansible

does. Now, there is something known as Ansible modules. What exactly are Ansible modules? Ansible modules are small programs which you execute on your master machine to perform tasks on your nodes. So let's suppose this is your master machine: you give the instruction to the master machine to perform the task on the nodes

which are available. So suppose I have to install a software on the nodes: I go to the master machine and write a module invocation; it's a single command. That module will perform the activity on the nodes. Let's say I want to install Apache: the module will install Apache on the nodes. Let me show you with an example

here. Let's suppose this is my Ansible master machine. I am using sudo su, and I have ansible as the username. Now suppose I want to write a module invocation; how do I write it? I write ansible demo, where demo is the group name in my case (you can use any specific group name), then -m, where -m specifies the module: a small program offered by Ansible to perform some specific task.

I want to install Apache, so I will use the yum module. I also put -b; -b is become, to perform the activity as the root user. So: -b, -m yum, and here we write package=httpd and state=


present. So ultimately I am using: ansible demo -b -m yum, with -b (become) to perform the activity as root, the yum module, the package to install being httpd, and state=present. State could also be absent, to remove a software; present will install httpd on my nodes. So when I press

enter here, you see that the two nodes were targeted and I received an error. I made a small mistake: the module arguments have to be passed with the -a flag, so I add -a with the arguments. Now when I press enter you will see the installation start happening: we start installing Apache on the nodes. Now,

these are my node one and this is my

node two which is there two different

machines are being present up I will

just refresh it

once now here let's see if Apache is

installed or not so from the master I

have installed the Apache let's see if


whether Apache is installed or not: I write which httpd, and I can see Apache is installed; here also, if I write which httpd, you will see Apache is present. Similarly, if I want to delete Apache (suppose there's an existing software I want to remove), I can just change the state to absent and it will delete the Apache software. Okay, now if you come back here and write which httpd, you will see there is no httpd; here also, if you check, there is no httpd. So it means we have removed Apache from the machines.
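The ad-hoc commands from this segment, as one sketch (group name demo as in this setup; name= is the documented yum parameter, while the walkthrough dictated package=):

    ansible demo -b -m yum -a 'name=httpd state=present'   # install Apache on all nodes in 'demo'
    ansible demo -b -m yum -a 'name=httpd state=absent'    # remove it again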

Now, suppose I want to just check the syntax: I don't want to execute the command, only verify that my syntax is correct. I can run the same command again with --check appended: it checks the syntax, but in real time it does not execute the change; it only tells you whether there is a problem with your syntax. If I press enter, it shows the command apparently executing, but on the back end it is not executing the command for you. Okay,

now let me install Apache again. I write: ansible demo -b -m yum -a 'package=httpd state=present', running it from the master machine. Now, if you again go back and check

with which httpd, httpd is installed here; also, if you check the other node, httpd is installed there. Now, httpd is the Apache software; if I want to check whether Apache is running or not, that can also be

done with ad-hoc commands, using the module named service. How do I do it? I write ansible with the group name (or all, to target all nodes), then -b (become, to execute the command with root privileges), then -m service, then -a with


name=httpd and state=started, because I want to start Apache; or, if I want to see the status, I can query the status instead. I press enter... okay, it says it is not running, so I have to start it: I put state=started and it starts it

up. Okay, now to check the status again, I use ansible demo -b -a 'service httpd status' and run it; you will see it is started, Apache is active. Here also I can check in the same pattern with service httpd status, and it says it is active; same thing on the other node, service httpd status shows it is active. So in this way I can check whether


Apache is running or not.
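A sketch of the service checks above (ad-hoc form; demo is the host group, and the second line uses the default command module):

    ansible demo -b -m service -a 'name=httpd state=started'   # make sure Apache is running
    ansible demo -b -a 'service httpd status'                  # raw command form for a status peek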

Now let's take one more example to understand modules. Let's suppose I want to create an empty file via a module. I will be

writing: ansible demo -b -m command -a 'touch file1'. To create file1 I am using the module named command; -a is the argument, where I can write a Linux command, so with the command module I can take the help of plain Linux commands. Ansible modules generally follow idempotency: what I mean by idempotency is that if the software is already installed on the nodes, it will not be reinstalled for

you. And when I press enter, you will see that file1 is created on node one and node two. Let me go and check: I log in as the ansible user on node one, and if you do an ls you will see there's a file1 created; similarly on node two, if you do an ls, file1 is there as well. So in both places file1 has been created. Also,

let's assume one more situation if I

want

to

copy the file

from the Master

machine to the nodes: I want to copy a particular file from the master machine to the nodes. How can I do it? To perform that activity, this is my master machine. Let me just create a file on it with the name staragile-youtube.txt, typing "hello welcome to star agile YouTube channel" and pressing Ctrl+D to finish. So on the master I have created a particular file with the name staragile-youtube.txt. Now I want to copy this file to node one and node

two. How can I copy it? First of all I check my present working directory: it is /home/ansible. Now I want to copy this file, so I write: ansible demo -m copy, with the copy module, giving the source, /home/ansible/staragile-youtube.txt (the location where my file is present), and the destination where it should be saved, /home/ansible on the nodes; if I go to node one and run pwd, that is the location the file will be copied to. So when I run

it, an error comes up which says unrecognized arguments. Can you identify the issue? Here I should be writing -a before the arguments: that is what we need to add. With -a in place, it copies from my source, the Ansible master machine, to the nodes. I have

copied a particular file from my master

to the nodes. And if I do an ls here, you will see there's a staragile-youtube.txt file present; to see its content I can cat staragile-youtube.txt, and you will see the same content. Similarly, on node two, if you do an ls you will see staragile-youtube.txt is there, and if I do a cat (cat is a command to see the content of a file), I can see the content of the file we saved. So in this way, with the copy module, I am able to copy a file from my source to the destination.
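A sketch of the copy-module calls from this segment (paths as used above):

    ansible demo -m copy -a 'src=/home/ansible/staragile-youtube.txt dest=/home/ansible'
    ansible demo -m command -a 'cat /home/ansible/staragile-youtube.txt'   # verify from the master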

Now, sitting at the master machine: there is a file on the nodes, and I want to see the content of that file, the one present on node one and node two, from the master machine. How can we do it? Here also we take the help of the modules: I will use the module named command; it is one of the simplest and best modules to use. So I can write:

ansible demo -m command -a 'cat /home/ansible/staragile-youtube.txt', giving the proper location of the file whose content I want to see. And yes, on this machine (172.31.44.208) the file content is visible:

"hello welcome to star agile YouTube channel"; and on the other machine the content is visible as well. In both places it shows the content for me. Okay,

now let's create a file. I want to create a file on node one and node two, on the nodes, from the master machine, with some content in it: the content and file do not exist on the master, but the file should be created on node one and node two

from the master machine

So, we want to create a file on the nodes with some content in it. How can we do it? We will use the copy module again. I write: ansible all -m copy, then -a, where I add content="star agile training" and the destination: whatever destination you want, you can set it. Okay, so destination, let's say


/home/ansible; this is the destination where I want the file created, and I can put the file name too. Let's do it: I put the file name as well, say myfile1. I have to close the quotes properly (I'm using single quotes around the arguments here), and press

enter. Now, if I do an ls here on a node, you will see there's a myfile1 present; and if I cat myfile1 I see the content, which says star agile training. So in this way the modules work.

Now, we also have the Ansible playbook. What is an Ansible playbook? Ansible playbooks are lists of tasks that automatically execute against the applied inventory, or groups of hosts. So

assume a situation

in which you are working on a team. Your developer reaches out to you (you are a DevOps engineer) saying they have some 10 to 15 specific requirements: certain software needs to be installed. In that case you go to the developer and say: please write these names down, create this list as documentation. The developer writes that document for you, and you can just create a playbook out of that document; when you execute the playbook, all the tasks are executed one by one. So when you have a list of tasks that need to be performed on your nodes, in those cases we use the Ansible playbook. One

or more Ansible tasks can be combined to make a play: a specific, ordered grouping of tasks mapped to specific hosts, and the tasks are executed in the order in which they are written. So you mention, say, task one, task two, task three, task four, and all these tasks are executed one by one for

you. One more major thing you should understand: the Ansible playbook is written in YAML format. The old full form of YAML was Yet Another Markup Language, but the new full form is YAML Ain't Markup Language; it is a data serialization language. Now let me show you how to write

a playbook. The playbook which you are seeing on your screen right now is a very simple playbook performing two tasks: first it uses the module named yum and installs Apache, and then it starts Apache for you. Now, if you go to the master machine, let me start with writing a simple playbook: I say vi my-first-playbook.yml and press enter. After pressing

enter, you should remember one small thing: whenever you are creating an Ansible playbook, it should always start with three dashes. So I press i and use three dashes

here. First of all we define the hosts: the machines on which the tasks will be performed. Here hosts can be all, or, if you have segregated the machines per environment or team, you can define the hosts that way; but for now my hosts will be all. Now I

write the user: the user which will perform these activities is ansible. Then become: yes, which means the tasks I'm going to write will be executed as the root user. After become, I put the tasks we need to execute. The first task: first I put the name, let's say "install httpd in the nodes", and then the action I need to perform: yum, with name=httpd and state=present. I


will be installing Apache on the nodes. So: hosts, user, become, and under tasks I can perform any number of tasks I want. Let's suppose I add "start apache": just after the install I can start it, so I write service with name=httpd and state=started, because I want to start Apache after that. Okay, now Escape and :wq to save

and quit. Now, how do I execute this playbook? I write: ansible-playbook my-first-playbook.yml. The tasks will be executed; since Apache was already installed, and it follows idempotency, it will not reinstall Apache: you will see that Apache is not reinstalled for

you okay
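The playbook from this walkthrough, assembled as a sketch (host group, user, and tasks as dictated above):

    ---
    - hosts: all
      user: ansible
      become: yes
      tasks:
        - name: install httpd in the nodes
          yum:
            name: httpd
            state: present
        - name: start httpd
          service:
            name: httpd
            state: started

    # run it with:  ansible-playbook my-first-playbook.yml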

Now let me go back and change the state to absent. The task name will not make a difference; ultimately I am deleting Apache, so I will remove it. Okay, and on

top of that, let me also install Docker, just to show that we can execute two different tasks. So for the nodes I add an action which says yum with name=docker and state=present; present means to install something, absent would be to delete a particular software. Okay, now I execute: ansible-playbook my-first-playbook.yml.

Now this will delete Apache (I have not changed the task's description name, but that's absolutely fine; it will be deleted), and then we are also installing Docker, and you will see Docker installed on the nodes. Okay, so we have two changes applied.
So I'm assuming that you have understood the concept. That would be all in the case of Ansible; we will now be moving to the next topic. Thank

you now we are going to start with the

introduction to

Kubernetes. Now, what exactly is Kubernetes, or k8s? When I write k8s, it is a short form of Kubernetes: between the K and the s there are eight letters ("ubernete"), and that's the reason we also write it as k8s. Now, it is an open-

source system to deploy scale and manage

the containerization application

anywhere so assume a situation ation

that you have a machine with thousands

of containers which is being running up

now the major problem which you will be

getting is how to deploy or the major

problem which you would be getting is

how to manage those containers how to

deploy those containers how to create or

how to ensure that the scaling happens

on the containers if some problem


happens with the containers or the

applications how will you get that

particular

information so all these things can be

done with a container management tool

which is known as kubernetes kubernetes

is one of the container management tool

which is being present up and it can

helps to manage the containerized

applications at any particular

location the first thing is why do we

The first thing is: why do we need Kubernetes? Kubernetes is basically an orchestration tool which can manage complex container setups very efficiently. We can also do some resource optimization, where we enhance the hardware usage and reduce the cost. We can use it for scalability, where we dynamically adjust the workloads, the hardware, or even the containers as per the workload that is present. There is automated deployment: suppose I want to roll out a new version of an application, I can take the help of Deployments, and I can also roll back in case there are issues with the version we have deployed (see the short sketch just below). And it comes with a vibrant ecosystem, wherein it offers extensive tools and community support; since Kubernetes is open-source software, it comes with huge community support.
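As a quick illustration of that update-and-roll-back idea, here is a minimal kubectl sketch; the Deployment name web and the image tag are hypothetical, not from this session:

  kubectl set image deployment/web web=nginx:1.25   # roll out a new version of the container image
  kubectl rollout status deployment/web             # watch the rollout progress
  kubectl rollout undo deployment/web               # roll back if the new version misbehaves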

Now, this is the Kubernetes architecture, wherein you communicate with the master machine via the CLI, which is kubectl. You talk to the Kubernetes master, where you have the API server, the scheduler, the controller manager, and the etcd cluster. On the worker nodes you have the container runtime, which is Docker in our case, you have the kubelet, and you have the kube-proxy; and there are the pods, a pod being the smallest deployable unit in Kubernetes, with the containers inside the pod. So you will be using kubectl, the Kubernetes command-line controller, to communicate with the master machine, in the same way that you were using the Docker CLI to communicate with the Docker daemon, and in the same pattern that, when we were communicating with Ansible, we were using the ansible command at that point of time.

Now we will use kubectl to communicate with the API server. You will first go and tell the API server, let's say, "create a pod for me". The API server will then go to the scheduler, and the scheduler will determine in which particular node we have better availability of resources and where this particular pod needs to be created. So the scheduler decides on a proper node where the availability is there to create the pod, and it gives the instruction to the kubelet to create the pod on that specific node. We also have the controller manager. What does the controller manager do? The controller manager's role is to ensure that your desired state and the current state match. For example, if I want to have five pods on node one, and due to any reason one pod gets deleted, the controller manager will see that the desired state was five and now it is four, and it will try to bring it back to the desired state: it will work through the kubelet and ensure the count comes back to the desired state. We also have etcd, which stores the metadata and state of the nodes that are available. On the worker node I have the kubelet; the kubelet talks to the master machine and interacts with the container runtime, such as Docker, to create the containers, that is, to create a pod with the containers in it. And we have the kube-proxy, which maintains the network rules on each node so that traffic can reach a particular pod. All these things we have in the Kubernetes architecture.

So, at a high level, we have a master node which coordinates the cluster: it schedules the applications and manages the cluster state. Then we have the worker nodes, where we host the applications inside containers and execute the tasks assigned by the master. We have pods, which are the smallest deployable units in Kubernetes, wherein we can encapsulate one or more containers. We have Services, where we define the network access rules and enable communication to and within the pods. We also have the control plane; the control plane is nothing but the master machine from where you can manage all the nodes. We have etcd, which stores the cluster configuration for high availability. The kubelet is an agent running on each node, ensuring that the containers are running in the pods; the kubelet also interacts with the API server on the control plane (the master machine) to ensure that the pods are running, and if any problem happens, it is the kubelet which informs the other components about the issue. And we have the kube-proxy, which manages the network communication within the cluster.

Now we will be installing Kubernetes on the machines; after that we will write a manifest, the code used to deploy any kind of resource in Kubernetes. So first of all, let's check how we can install Kubernetes in a master-node architecture. I will create the machines; I will just take a different region, let's say Mumbai, and here I will create the instances. I will just create two instances: one would be the master and one would be the node. We can use Ubuntu as the operating system, and I can use t3.medium; this is the minimum configuration you should be using, as you need at least two vCPUs to perform this specific task. For the key pair I can create a new one, let's say my-keygen, and create the key pair here. Now after this is done I will edit the network settings and enable all traffic. After this is done I will launch the instances. Once the machines are launched I will rename them: one would be the master machine and another would be the node, so I rename one as node one and one as the master machine. Then I select the master machine, click on this Connect button, and click Connect.

Now here I will use sudo su, which gives me super-user permissions. I will also open node one, click Connect for node one, and connect there as well. After this is done: I have a deployment script which I have written and which is present in my GitHub repository; you can fork that particular repository to get your task done. If you go to this particular link you can fork the repository, and when you go to the README file you will be able to see some commands. Let me execute these commands on the node and on the master machine to perform the activities we are looking forward to. So this is the master machine; I will right-click and paste the link here. Okay, so we have this kubernetes-master shell script created. I will use chmod to give read, write, and execute permissions to this script on the master machine, so I just paste this command here, which sets the mode. Now I will execute the kubernetes-master shell script here (a rough outline of what such a script does is sketched below).
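The script's contents are not shown in the session, so treat the following as an assumption: a typical kubeadm-based master bootstrap script on Ubuntu does roughly this (package names and the CIDR are illustrative):

  # hypothetical outline only — the session's actual script lives in the instructor's GitHub repo
  apt-get update && apt-get install -y docker.io curl apt-transport-https
  # add the Kubernetes apt repository (step omitted here), then:
  apt-get install -y kubeadm kubelet kubectl
  # initialise the control plane; the pod-network CIDR depends on the CNI plugin chosen
  kubeadm init --pod-network-cidr=192.168.0.0/16
  # make kubectl usable for the current user
  mkdir -p $HOME/.kube && cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  # finally, install a CNI network add-on (e.g. Calico or Flannel) so nodes turn Ready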

Okay, now we will go to node one as well; I run sudo su, and I run the same commands there: I paste the link and press enter, and after this is done I again run chmod to set the mode. Now I execute the node script. Okay, so it is set up. Now we can create a token; this token is created on the master machine, so I go to the master machine, paste the command there, and the token, as part of the full join command, is generated. The token can be copied, and we put the token here. After the token is put in, we need to append the CRI socket option to this join command, so I add that here. Okay, now I copy this complete command and paste it on the node to join it to the cluster. Press enter, and now this node is joined to the cluster (the general shape of this join command is sketched below).
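For orientation, the join command generated on the master generally has this shape; the values below are placeholders, not the ones from this session, and the --cri-socket path is an assumption for a Docker-based runtime:

  # generate the join command on the master
  kubeadm token create --print-join-command
  # run the printed command on each worker node, e.g.:
  kubeadm join <master-ip>:6443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash> \
      --cri-socket unix:///var/run/cri-dockerd.sock   # assumed CRI socket for Docker via cri-dockerd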

Now at this point I will run the first command, to see the list of all the nodes; for that I use kubectl get nodes, which gives me the list of nodes. One of them is currently NotReady, but if you wait a couple of minutes you will see this machine become Ready as well. I run kubectl get nodes again and it is still not ready; just wait for some time, it takes a little while for the second machine to become Ready, and that is why this is happening. Let me execute it once more, and now you will see both the machines are Ready.

Now I will be writing a manifest to create a pod with a single container in it. To create the manifest I can give the file any name, but .yml should be the extension; vi is the file editor I am using. The first thing you write is the kind: what am I creating? I am creating a Pod. Then the apiVersion; the apiVersion depends on the kind we are using, and here it would be v1. Then we have the metadata, where the pod is named test-pod, along with annotations; in the annotations I can put some description about this pod, let's say "our learning pod". Then the spec: what is the specification of this pod? There are containers inside the pod, and the first container's name is c00. The image used inside the container would be the ubuntu image, and the command which we are going to execute is /bin/bash -c with a loop: while true; do echo some message; sleep 10; done. This is what we are doing here, and with it we will be creating a pod.

So what exactly are we doing? We are saying that the kind is Pod, because we are creating a pod, and the apiVersion is v1. Now, what is an apiVersion? The apiVersion in Kubernetes refers to the version of the Kubernetes API that the resource type being used is compatible with. For example, if you have a Deployment resource, you would specify the apiVersion as apps/v1, to indicate that you are using the v1 version of the apps API. The apiVersion is used to determine the appropriate code paths to execute when processing a request, and it is really important to use the correct apiVersion for your resources, as it can affect the behavior and functionality of the resources: newer versions of the API may add new fields or change the behavior of existing fields, so you must ensure that the version you are using is compatible with the intended use case. Then we have the name of the pod, so we are creating a pod with the name test-pod. The annotations are basically additional details inside the metadata, so we have the description which says "our learning pod". Under that we have the spec, the specification of this pod: we create a container with the name c00, with ubuntu as the base image, and this is the command which will be executed: /bin/bash -c with a loop that prints a test message and then sleeps, so every 10 seconds it prints the test message.

I will press Escape. Now when I write kubectl apply -f p1.yml, it gives me an error saying there is some issue on line number 12, so let's go and see what the issue on line 12 is, at the command part. Here we need to close this: we forgot to close the brackets, so I will close them up; just allow me a minute. Okay, so /bin/bash, while true... yeah, now it is perfect. I will press Escape and :wq, and apply it again; now you will see a pod with the name test-pod is created. When I write kubectl get pods, you will see the test pod with zero out of one containers ready; when you look at it again, one out of one is ready and it is in the Running state.

Now I want to see in which node this particular pod was created; in that case I just write kubectl get pods -o wide, which also gives me the node on which the pod is running. So you can see this is the node on which this particular pod was created. I can even see what exactly is being executed in this particular pod: we can run the command kubectl logs -f test-pod, and you can see the test message which is being printed inside the container of the pod. I can also get inside the container running in the test pod; for that I run the command kubectl exec test-pod -it -- /bin/bash (the -it can equally go before the pod name). In this way I am able to enter the container in the pod: -it gives the interactive terminal, so to open an interactive terminal into the test pod we use this specific command, and we can run the exit command to come back out of the container (a cleaned-up recap of these commands follows below).
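Cleaned up, the commands used in this demo look like the following (test-pod and p1.yml are the names from this session):

  kubectl apply -f p1.yml                   # create the pod from the manifest
  kubectl get pods                          # list pods and their readiness
  kubectl get pods -o wide                  # additionally show the node each pod runs on
  kubectl logs -f test-pod                  # follow the container's output
  kubectl exec -it test-pod -- /bin/bash    # open an interactive shell inside the container
  exit                                      # leave the container's shell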

These are the things we should be using, and taking care of, while working with Kubernetes, and this is the manifest which we have written.
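Reconstructed from the session, the manifest reads roughly as follows; the echoed text is a stand-in for whatever message you choose:

  apiVersion: v1
  kind: Pod
  metadata:
    name: test-pod
    annotations:
      description: our learning pod        # free-text note about the pod
  spec:
    containers:
      - name: c00
        image: ubuntu                      # base image for the container
        command: ["/bin/bash", "-c", "while true; do echo test message; sleep 10; done"]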

So today we are going to talk about Terraform. Terraform is an infrastructure-as-code (IaC) tool; it is used for building, changing, and versioning infrastructure safely and efficiently. Let's assume a simple situation: suppose you have a client, and your client wants you to deliver ten websites of the same kind. In this situation, either you build and deploy each of these ten websites by hand and test them all, or, the second option, you write code once and execute that code ten times, making some cosmetic changes each time. You will see that with just a single template you are able to deploy ten, even hundreds or thousands, of similar websites.

Now, which software can do this? An infrastructure-as-code tool can, and that tool is Terraform. IaC means you create your infrastructure, whether in Azure, in AWS, or in any of the cloud services, via the help of software, which here is Terraform. There are competitors, like CloudFormation, which is one of the very big ones, but the problem with CloudFormation is that it is offered only on the AWS cloud; it does not give you infrastructure as code on other cloud services. Terraform, by contrast, works with all the major cloud services, and it is owned by a company named HashiCorp. And in which language will you write it? You will write it in HCL, which is the HashiCorp Configuration Language.

Now let's understand why we should use Terraform. The first reason is multi-cloud support: Azure, AWS, GCP, it supports multiple clouds. It comes with a large ecosystem; when I say large ecosystem, it covers more than 3,000 providers, so the ecosystem is very large. We use a declarative syntax. It provides state management, which means the state is tracked: let's assume you wanted to create a machine, you have written the code to create it and executed it; now when you execute the code again, the state is consulted and maintained. And we work by planning and applying: first you plan the infrastructure you will be creating, and when you apply, it executes that particular infrastructure (see the short sketch just below). Now let's go to the Terraform official website.
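A minimal illustration of that plan/apply/state behavior, assuming a working configuration already sits in the current directory:

  terraform plan    # show what would be created, changed, or destroyed
  terraform apply   # first run: creates the resources and records them in the state file
  terraform apply   # second run: compares config against state; reports no changes if nothing drifted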

When you go to terraform.io you will see there is an Overview, there are Use Cases, the Registry is there, Tutorials, Docs, and Community. When I click on Download: you can download Terraform on the Mac operating system by running the given command in the terminal; for Windows you can simply download the Terraform binary file; on Linux there are three commands which you execute one by one, and finally you will have Terraform installed on Linux as well. For Amazon Linux these are the commands you would use; if you are using Ubuntu as the operating system, these are the commands; if you are using CentOS, then those are the commands. So yes, for different operating systems, different installation instructions are provided. Now I will go back to the official website, terraform.io. Here you will see Registry, and I will click on it. When I click on Registry I have Browse Providers, Browse Modules, Browse Libraries, Browse Run Tasks. First let's go to Browse Providers: providers tell you which particular clouds or computing platforms support Terraform.

You can see there are more than 3,000 cloud providers; there are 3,987 providers which support Terraform. You will see there is AWS, Azure, GCP, Kubernetes, Alibaba Cloud, Oracle Cloud, and many more providers, through which your Terraform will interact with the resources present on those platforms. Now, the providers are divided into three tiers: one is official, the second is partner, and the third is community. The official providers cover AWS, Azure, GCP, Kubernetes; these are officially maintained by HashiCorp. Then there are partner providers, which are maintained by the respective companies themselves: for Alibaba Cloud, for example, if you look at the link, it is Aliyun, the company which maintains the Alibaba Cloud provider. Or I can just select the official tier and deselect the partner one; if you go to AWS you will see HashiCorp is maintaining it, and when you look at the total number of downloads you can see there are 2.6 billion downloads of the AWS provider, which is a huge number, and you will see it also gets updated very frequently.

Okay, now the question is: do I need to learn HCL in depth? No, you don't have to; there is good documentation provided, wherein, for whatever resource you want to create, you can simply search for that resource, copy the code from there, and create the resource directly. For example, let's say I want to create an instance, which is a server, in AWS: I simply copy that complete code and put it into Visual Studio Code, or, if I am working on Linux, just put the code in a directory (I'll sketch what such a snippet looks like below). Then I run the command terraform plan first of all, which plans my complete infrastructure, and then I execute terraform apply, wherein it creates that particular infrastructure for me.
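As an illustration, a registry-style snippet for an EC2 instance looks roughly like this in HCL; the AMI ID, instance type, and tag are placeholder values, not ones shown in the session:

  # main.tf — minimal sketch of an AWS EC2 instance
  provider "aws" {
    region = "ap-south-1"                    # assumed region (Mumbai, as used earlier)
  }

  resource "aws_instance" "web" {
    ami           = "ami-0123456789abcdef0"  # placeholder AMI ID
    instance_type = "t3.medium"
    tags = {
      Name = "demo-instance"
    }
  }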

Okay, so that is broadly what Terraform is about. Let's also discuss some basic commands before winding up. There is terraform init, which initializes Terraform, that is, sets up the working directory for you. Then we have terraform plan, which creates an execution plan for you, and finally, when you run terraform apply, that particular plan created by terraform plan is executed. We have terraform destroy, with which we can destroy resources that were already created; let's suppose I have created two machines and one S3 bucket and I want to tear those resources down, I can do that with the terraform destroy command. Then we have terraform validate: with validate I can check the syntax and the errors in the configuration, so whatever I have written can be validated with terraform validate. We have terraform fmt: what does fmt do? It automatically formats the Terraform configuration files to a canonical format and style, and this helps in maintaining consistency and readability in the code, so we get complete formatting done.

Then we have terraform output, which displays the outputs defined in the Terraform configuration. terraform refresh updates the local state file against the real-world resources, ensuring it matches the current state of the resources in the cloud. We have terraform workspace, which manages multiple distinct sets of infrastructure resources, the workspaces; we can use this command to create, list, and switch between workspaces as well. And we have terraform import, which allows existing infrastructure to be brought under Terraform management by importing it into your Terraform state; this is useful for bringing real, live infrastructure into the Terraform configuration without recreating it.

Okay, so to summarize the workflow: we initialize first; let's say we write the code to create an EC2 instance, then we validate it, then we format it, then we plan it, then we apply it; we then check the outputs that come up, and we can finally adjust and destroy the Terraform resources, as in the short command sketch below.
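That end-to-end flow, as a command sequence (all of these are standard Terraform CLI commands):

  terraform init        # set up the working directory and download providers
  terraform validate    # check the configuration for syntax errors
  terraform fmt         # format the .tf files to the canonical style
  terraform plan        # preview what will be created
  terraform apply       # create the infrastructure
  terraform output      # show the declared outputs
  terraform destroy     # tear everything down when finished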

So that would be all in Terraform. Thank you, take care.
