DevOps - Unit - 5

UNIT-V

Configuration Management
• ANSIBLE: Introduction to Ansible
• Ansible tasks
• Roles
• Jinja2 templating
• Vaults
• Deployments using Ansible
• Containerization using Kubernetes
• Introduction to Kubernetes
• Namespace & Resources
• CI/CD on GCP
• Deploying Apps on Openshift Container Pods
What is Configuration Management?

• Configuration management means maintaining the configuration and performance of a product by keeping a record of, and updating, the detailed information that describes an enterprise’s hardware and software.
• Such information typically includes the exact versions and updates that have been applied to installed software packages, and the locations and network addresses of hardware devices.
• For example, if you want to install a new version of WebLogic/WebSphere on all of the machines in your enterprise, it is not feasible to go and update each and every machine manually.
• You can install WebLogic/WebSphere in one go on all of your machines with an Ansible playbook and inventory written in a very simple way.
• All you have to do is list the IP addresses of your nodes in the inventory and write a playbook to install WebLogic/WebSphere, as sketched below.
• Run the playbook from your control machine and it will be installed on all your nodes.
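A minimal sketch of that idea (the IP addresses, group name, and package name are illustrative assumptions, not an actual WebLogic installer):

inventory.ini:
[appservers]
192.0.2.11
192.0.2.12

install_app.yml:
- name: Install the application server on all nodes
  hosts: appservers
  become: true
  tasks:
    - name: Install the server package
      ansible.builtin.yum:
        name: my-appserver   # illustrative package name
        state: present

Run it from the control machine with:
ansible-playbook -i inventory.ini install_app.yml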

Introduction to Ansible
• Ansible is an IT automation tool.
• It can configure systems, deploy software, and orchestrate more
advanced IT tasks such as continuous deployments or zero
downtime rolling updates.
• Ansible is easy to deploy because it does not use any agents or
custom security infrastructure.
• Ansible uses playbooks to describe automation jobs, and playbooks use a very simple language: YAML (originally “Yet Another Markup Language”, now officially “YAML Ain’t Markup Language”).
How Ansible Works?
• Ansible works by connecting to your nodes and pushing out small
programs, called "Ansible modules" to them.
• Ansible then executes these modules (over SSH by default), and
removes them when finished.
• Your library of modules can reside on any machine, and there are
no servers, daemons, or databases required.
• The management node is the controlling (managing) node, which controls the entire execution of the playbook.
• It’s the node from which you are running the installation.
• The inventory file provides the list of hosts where the Ansible modules need to be run; the management node makes an SSH connection, executes the small modules on the host machines, and installs the product/software.
• The beauty of Ansible is that it removes the modules once they have run: it connects to the host machine, executes the instructions, and, if the installation is successful, removes the code that was copied to the host machine.
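As a quick illustration of this push-and-execute model, Ansible’s ping module can be run ad hoc against every host in an inventory (the inventory file name is an assumption):

ansible all -i inventory.ini -m ansible.builtin.ping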
Ansible Tasks

• Playbooks are the files where Ansible code is written. Playbooks are written in YAML format.
• YAML originally stood for Yet Another Markup Language; the official expansion today is “YAML Ain’t Markup Language”.
• Playbooks are one of the core features of Ansible and tell Ansible
what to execute.
• They are like a to-do list for Ansible that contains a list of tasks.
• Playbooks contain the steps which the user wants to execute on a
particular machine.
• Playbooks are run sequentially.
• Playbooks are the building blocks for all the use cases of Ansible.
Create a Playbook
• Playbooks are automation blueprints, in YAML format, that Ansible
uses to deploy and configure managed nodes.
• Playbook : A list of plays that define the order in which Ansible
performs operations, from top to bottom, to achieve an overall goal.
• Play : An ordered list of tasks that maps to managed nodes in an
inventory.
• Task : A reference to a single module that defines the operations
that Ansible performs.
• Module : A unit of code or binary that Ansible runs on managed
nodes.
Complete the following steps to create a playbook that pings your hosts and prints a “Hello world” message:

1. Create a file named playbook.yaml in your ansible_quickstart directory, with the following content:
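This is the quickstart playbook; the group name myhosts is assumed to exist in your inventory:

- name: My first play
  hosts: myhosts
  tasks:
    - name: Ping my hosts
      ansible.builtin.ping:

    - name: Print message
      ansible.builtin.debug:
        msg: Hello world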

2. Run your playbook.
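For example, assuming an inventory file named inventory.ini:

ansible-playbook -i inventory.ini playbook.yaml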


Ansible returns the following output:
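Abridged, illustrative output (the host address is an assumption):

PLAY [My first play] *********************************************************

TASK [Gathering Facts] *******************************************************
ok: [192.0.2.50]

TASK [Ping my hosts] *********************************************************
ok: [192.0.2.50]

TASK [Print message] *********************************************************
ok: [192.0.2.50] => {
    "msg": "Hello world"
}

PLAY RECAP *******************************************************************
192.0.2.50 : ok=3  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0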

• In this output you can see:


• The names that you give the play and each task. You should always use
descriptive names that make it easy to verify and troubleshoot playbooks.
• The Gather Facts task runs implicitly. By default, Ansible gathers information
about your inventory that it can use in the playbook.
• The status of each task. Each task has a status of ok which means it ran
successfully.
• The play recap that summarizes results of all tasks in the playbook per host. In
this example, there are three tasks so ok=3 indicates that each task ran
successfully.
The Different YAML Tags
The different YAML tags are described below
name :
• This tag specifies the name of the Ansible playbook. As in what this
playbook will be doing. Any logical name can be given to the playbook.
hosts :
• This tag specifies the list of hosts or host groups against which we want to run the tasks. The hosts field/tag is mandatory. It tells Ansible on which hosts to run the listed tasks. The tasks can be run on the same machine or on a remote machine.
vars :
• Vars tag lets you define the variables which you can use in your playbook.
Usage is similar to variables in any programming language.
tasks :
• All playbooks should contain tasks or a list of tasks to be executed. Tasks
are a list of actions one needs to perform.
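Putting the four tags together, a minimal sketch (host group, variable, and package names are illustrative):

- name: Install and start Apache
  hosts: webservers
  vars:
    http_port: 80
  tasks:
    - name: Install httpd
      ansible.builtin.yum:
        name: httpd
        state: present
    - name: Print the configured port
      ansible.builtin.debug:
        msg: "Apache will listen on port {{ http_port }}"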
Ansible - Roles
• Roles provide a framework for fully independent, or interdependent
collections of variables, tasks, files, templates, and modules.
• In Ansible, the role is the primary mechanism for breaking a
playbook into multiple files.
• This simplifies writing complex playbooks, and it makes them
easier to reuse.
• Breaking up the playbook this way allows you to logically split it into reusable components.
• Each role is basically limited to a particular functionality or desired
output, with all the necessary steps to provide that result either
within that role itself or in other roles listed as dependencies.
• Roles are not playbooks.
• Roles are small functionality which can be independently used but
have to be used within playbooks.
• There is no way to directly execute a role.
• Roles have no explicit setting for which host the role will apply to.
What is an Ansible Role?
• Ansible roles consist of many playbook-style files.
• Roles are a way to group multiple tasks together into one container, so that automation can be done in a very effective manner with a clean directory structure.
• A role is a set of tasks and additional files for a certain function, which allows you to break up large configurations.
• The code can be reused by anyone for whom the role is suitable.
• Roles can be easily modified.
Creating a new Role
• To create an Ansible role, it's enough to make a directory following the standard directory structure.
• First, create the roles directory and switch to it:
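For example:

mkdir roles
cd roles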

• Then, use the command ansible-galaxy to initialize the role:
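Assuming the role will be named vim, matching the Vim example below:

ansible-galaxy init vim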

• Now, verify the role directory structure:
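ansible-galaxy generates the standard skeleton; an abridged tree vim listing looks like:

vim/
├── README.md
├── defaults/
│   └── main.yml
├── files/
├── handlers/
│   └── main.yml
├── meta/
│   └── main.yml
├── tasks/
│   └── main.yml
├── templates/
├── tests/
│   ├── inventory
│   └── test.yml
└── vars/
    └── main.yml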


• Switch into the newly created directory:
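That is:

cd vim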

• Your Vim role does not require any dependencies.
• Here's an example of a working meta configuration file.
• Update it with your name, company name, and a suitable license, if necessary:
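A hedged sketch of meta/main.yml (all values are placeholders to replace with your own):

galaxy_info:
  author: your_name
  company: your_company
  description: Install and configure Vim
  license: MIT
  min_ansible_version: "2.9"
dependencies: []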
Role Examples

Directory Structure:
tasks - contains the main list of tasks to be executed by the role.
handlers - contains handlers, which may be used by this role or even anywhere outside this role.
defaults - default variables for the role.
vars - other variables for the role. Vars have a higher priority than defaults.
files - contains files to be transferred or deployed to the target machines via this role.
templates - contains templates which can be deployed via this role.
meta - defines some data/information about this role (author, dependencies, versions, examples, etc.).

https://www.devopsschool.com/tutorial/ansible/ansible-roles-explained-with-examples.html
Jinja2 template
• Jinja2 is a powerful and easy to use python-based templating engine
that comes in handy in an IT environment with multiple servers
where configurations vary every other time.
• Creating static configuration files for each of these nodes is tedious
and may not be a viable option since it will consume more time and
energy.
• And this is where templating comes in.
• Jinja2 templates are simple template files that store variables that
can change from time to time.
• When Playbooks are executed, these variables get replaced by
actual values defined in Ansible Playbooks.
• This way, templating offers an efficient and flexible solution to create or alter configuration files with ease.
Template architecture
• A Jinja2 template file is a text file that contains variables that get
evaluated and replaced by actual values upon runtime or code
execution.
• In a Jinja2 template file, you will find the following tags:
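The standard Jinja2 delimiters are:
{{ ... }} – an expression, replaced by the value of a variable at render time
{% ... %} – a statement, such as a for loop or an if condition
{# ... #} – a comment, removed from the rendered output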

• In most cases, Jinja2 template files are used for creating files or
replacing configuration files on servers.
• Template files bear the .j2 extension, implying that Jinja2
templating is in use.
• Creating template files
• Here’s an example of a Jinja2 template file example_template.j2
which we shall use to create a new file with the variables shown.
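A minimal sketch of example_template.j2 (the variable names name and age are illustrative assumptions):

My name is {{ name }} and I am {{ age }} years old.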

• These variables are defined in a playbook and will be replaced by actual values in the playbook YAML file example1.yml below.
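A hedged sketch of example1.yml matching those variables (host group, values, and destination path are illustrative):

- name: Jinja2 template example
  hosts: all
  vars:
    name: John
    age: 30
  tasks:
    - name: Create file.txt from the template
      ansible.builtin.template:
        src: example_template.j2
        dest: /tmp/file.txt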

• When the playbook is executed, the variables in the template file get replaced by the actual values, and a new file is either created or replaces an already existing file.txt in the destination path.
Jinja2 template with Conditionals
• Jinja2 templating can also be used with control structures such as for loops to iterate over a list of items.
• Consider the playbook example2.yml shown below:
• We are going to create a template that will iterate over the list of car
models called ‘cars’ and print the result in the file2.txt destination
file.
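A hedged sketch of example2.yml (the car models and destination path are illustrative):

- name: Loop over a list of cars
  hosts: all
  vars:
    cars:
      - Toyota
      - Honda
      - BMW
      - Audi
  tasks:
    - name: Create file2.txt from the loop template
      ansible.builtin.template:
        src: example2_template.j2
        dest: /tmp/file2.txt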

• The for loop in the Jinja2 template file – example2_template.j2 – is as shown below:
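A minimal sketch of the loop in example2_template.j2:

{% for car in cars %}
{{ car }}
{% endfor %}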
• When the playbook is executed, the loop iterates over the car list,
and prints out the car models in the destination file.
• You can use the cat command to examine the output and verify
where the models exist in the file.
Ansible Vault
• Ansible Vault is an Ansible feature that helps you encrypt
confidential information without compromising security.
• While working with Ansible, you can create various playbooks,
inventory files, variable files, etc.
• Some of the files contain sensitive and important data like
usernames and passwords.
• Ansible provides a feature named Ansible Vault that prevents this
data from being exposed.
• It keeps passwords and other sensitive data in an encrypted file
rather than in plain text files.
• It provides password-based authentication.
Ansible Vault performs various operations
• Encrypt a file
• Decrypt a file
• View an encrypted file without breaking the
encryption
• Edit an encrypted file
• Create an encrypted file
• Generate or reset the encrypted key
Create an encrypted file
• The ansible-vault create command is used to create an encrypted file.
• After typing this command, it will ask for a password.
• To check that the file has been encrypted, use the cat command.
• The --vault-id option is used to create encrypted files with a vault ID.
• Editing the encrypted file
If the file is encrypted and changes are required, use the edit
command.
• Decrypting a file
The ansible-vault decrypt command is used to decrypt the encrypted
file.
• Decrypt a running playbook
To use encrypted content while a playbook is running, Ansible prompts you for the vault password instead of requiring the file to be decrypted on disk.
• Reset the file password
Use the ansible-vault rekey command to reset the encrypted file
password. # ansible-vault rekey secure.yml
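A hedged summary of all of these operations on an illustrative file secure.yml:

ansible-vault create secure.yml                      # create an encrypted file
ansible-vault create --vault-id dev@prompt new.yml   # create with a vault ID
ansible-vault edit secure.yml                        # edit in place
ansible-vault view secure.yml                        # view without breaking the encryption
ansible-vault decrypt secure.yml                     # decrypt the file
ansible-playbook site.yml --ask-vault-pass           # prompt for the password at run time
ansible-vault rekey secure.yml                       # reset the file password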
Deployments using Ansible
• Ansible is the simplest way to deploy your applications.
• It gives you the power to deploy multi-tier applications reliably and consistently, all from one common framework.
• You can configure needed services as well as push application artifacts from one common system.
• Rather than writing custom code to automate your systems, your team writes simple task descriptions that even the newest team member can understand on first read, saving not only up-front costs but also making it easier to react to change over time.
Power of the playbooks
• REPEATABLE & RELIABLE
• Ansible allows you to write 'Playbooks' that are descriptions of the
desired state of your systems, which are usually kept in source
control.
• Ansible then does the hard work of getting your systems to that
state no matter what state they are currently in.
• Playbooks make your installations, upgrades and day-to-day management repeatable and reliable.
• SIMPLE TO WRITE & MAINTAIN
• Playbooks are simple to write and maintain.
• Most users become productive with Ansible after only a few hours.
• Ansible uses the same tools you likely already use on a daily basis
and playbooks are written in a natural language so they are very
easy to evolve and edit.
• NO AGENT = MORE SECURE, MORE PERFORMANCE,
LESS EFFORT
• Ansible can be introduced into your environment without any
bootstrapping of remote systems or opening up additional ports.
• Not only does this eliminate "managing the management," but
system resource utilization is also dramatically improved.
• Zero downtime
• Ansible can orchestrate zero-downtime rolling updates trivially, ensuring you can update your applications in production without users noticing.
• Super flexible
• Downloading artifacts from servers and configuring the OS are just
the basics.
• Talk to REST APIs, update a team chat server with a heads up, or
send an email - Ansible can drive all kinds of workflows.
• Cloud ready
• Included modules manage not just the local computer system, but
can interact with cloud services including Amazon AWS, Microsoft
Azure, and more.
• And since all cloud APIs allow you to trivially inject SSH keys, you
can start managing any cloud instance or network software without
modifying the base image.
Containerization using Kubernetes
• Containerization has become a major trend in software development as an alternative or companion to virtualization.
• It involves encapsulating or packaging up software code and all its dependencies so that it can run uniformly and consistently on any infrastructure.
• The technology is quickly maturing, resulting in measurable benefits for developers and operations teams as well as overall software infrastructure.
• Containerization allows developers to create and deploy applications faster and more securely.
• With traditional methods, code is developed in a specific computing environment which, when transferred to a new location, often results in bugs and errors: for example, when a developer transfers code from a desktop computer to a virtual machine (VM), or from a Linux to a Windows operating system.
• Containerization eliminates this problem by bundling the application code together with the related configuration files, libraries, and dependencies required for it to run.
• This single package of software, or “container,” is abstracted away from the host operating system.
• Hence, it stands alone and becomes portable: able to run across any platform or cloud, free of issues.
• Containers are often referred to as “lightweight,” meaning they share the machine’s operating system kernel and do not require the overhead of associating an operating system with each application.
• Containerization allows applications to be “written once and run anywhere.”
Benefits of Containerization
• Containerization offers significant benefits to developers and
development teams.
• Portability: A container creates an executable package of software
that is abstracted away from (not tied to or dependent upon) the host
operating system, and hence, is portable and able to run uniformly
and consistently across any platform or cloud.
• Agility: The open source Docker Engine for running containers set the industry standard for containers, with simple developer tools and a universal packaging approach that works on both Linux and Windows operating systems.
• Speed: Containers are often referred to as “lightweight,” meaning
they share the machine’s operating system (OS) kernel and are not
bogged down with this extra overhead.
• Fault isolation: Each containerized application is isolated and
operates independently of others. The failure of one container does
not affect the continued operation of any other containers.
Development teams can identify and correct any technical issues
within one container without any downtime in other containers.
• Efficiency: Software running in containerized environments shares
the machine’s OS kernel, and application layers within a container
can be shared across containers.
• Ease of management: A container orchestration platform
automates the installation, scaling, and management of
containerized workloads and services.
• Container orchestration platforms can ease management tasks such
as scaling containerized apps, rolling out new versions of apps, and
providing monitoring, logging and debugging, among other
functions.
Kubernetes
• Kubernetes is an open source orchestration tool developed by Google for managing microservices or containerized applications across a distributed cluster of nodes.
• Kubernetes, also known as “k8s” or “kube,” is a container orchestration platform for scheduling and automating the deployment, management, and scaling of containerized applications.
• Kubernetes provides highly resilient infrastructure with zero-downtime deployment capabilities, automatic rollback, scaling, and self-healing of containers (which consists of auto-placement, auto-restart, auto-replication, and scaling of containers on the basis of CPU usage).
• It groups containers that make up an application into logical units
for easy management and discovery.
• The main objective of Kubernetes is to hide the complexity of
managing a fleet of containers by providing REST APIs for the
required functionalities.
• Kubernetes is portable in nature, meaning it can run on various
public or private cloud platforms such as AWS, Azure, OpenStack,
or Apache Mesos.
• It can also run on bare metal machines.
Kubernetes Components and Architecture
• Kubernetes follows a client-server architecture.
• It’s possible to have a multi-master setup (for high availability), but by default there is a single master server which acts as a controlling node and point of contact.
• The master server consists of various components including a kube-apiserver, an etcd storage, a kube-controller-manager, a cloud-controller-manager, a kube-scheduler, and a DNS server for Kubernetes services.
Kubernetes Architecture
Master Components
• etcd cluster – a simple, distributed key value storage which is used
to store the Kubernetes cluster data (such as number of pods, their
state, namespace, etc), API objects and service discovery details. It
is only accessible from the API server for security reasons.
• kube-apiserver– Kubernetes API server is the central management
entity that receives all REST requests for modifications (to pods,
services, replication sets/controllers and others), serving as frontend
to the cluster.
• kube-controller-manager – runs a number of distinct controller
processes in the background (for example, replication controller
controls number of replicas in a pod, endpoints controller populates
endpoint objects like services and pods, and others) to regulate the
shared state of the cluster and perform routine tasks.
• cloud-controller-manager – is responsible for managing controller
processes with dependencies on the underlying cloud provider (if
applicable).
• For example, when a controller needs to check if a node was
terminated or set up routes, load balancers or volumes in the cloud
infrastructure, all that is handled by the cloud-controller-manager.

• kube-scheduler – helps schedule the pods (a co-located group of containers inside which our application processes are running) on the various nodes based on resource utilization.
• It reads the service’s operational requirements and schedules it on the best-fit node.
Difference Between Docker and Kubernetes
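• In brief: Docker is a container platform for building, packaging, and running individual containers, while Kubernetes is a container orchestration platform that schedules and manages many containers across a whole cluster of machines.
• The two are complementary rather than competing: Kubernetes can use Docker (or another container runtime) to run the containers it schedules.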
Kubernetes Design Principles
• Scalability – Kubernetes provides horizontal scaling of pods on the basis of CPU utilization.
• The threshold for CPU usage is configurable and Kubernetes will automatically start new pods if the threshold is reached.
• For example, if the threshold is 70% CPU but the application is actually using 220%, then eventually 3 more pods will be deployed (ceil(220 / 70) = 4 replicas in total) so that the average CPU utilization is back under 70% (220% / 4 = 55% per pod).
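A minimal sketch of expressing that 70% target as a HorizontalPodAutoscaler (the Deployment name and replica bounds are illustrative assumptions):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70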

• High Availability – Kubernetes addresses high availability at both the application and infrastructure level.
• Replica sets ensure that the desired (minimum) number of replicas of a stateless pod for a given application are running.
• Stateful sets perform the same role for stateful pods.
• At the infrastructure level, Kubernetes supports various distributed
storage backends like AWS EBS, Azure Disk, Google Persistent Disk and
more.
• Security – Kubernetes addresses security at multiple levels: cluster,
application and network.
• The API endpoints are secured through transport layer security
(TLS).
• Only authenticated users (either service accounts or regular users)
can execute operations on the cluster (via API requests).
• At the application level, Kubernetes secrets can store sensitive
information (such as passwords or tokens) per cluster (a virtual
cluster if using namespaces, physical otherwise).
• Note that secrets are accessible from any pod in the same cluster.
• Portability – Kubernetes portability manifests in terms of operating
system choices (a cluster can run on any mainstream Linux
distribution), processor architectures (either virtual machines or
bare metal), cloud providers (AWS, Azure or Google Cloud
Platform), and new container runtimes, besides Docker, can also be
added.
Namespace & Resources
• In Kubernetes, namespaces provide a mechanism for isolating groups of resources within a single cluster.
• Names of resources need to be unique within a namespace, but not
across namespaces.
• Namespace-based scoping is applicable only for namespaced
objects (e.g. Deployments, Services, etc) and not for cluster-wide
objects (e.g. StorageClass, Nodes, PersistentVolumes, etc).
• Namespaces are a way to divide cluster resources between multiple
users.
• You can list the current namespaces in a cluster using:
> kubectl get namespace
• In Kubernetes, namespaces are a way to create virtual clusters within a physical cluster.
• They provide a way to divide cluster resources into logical groups, enabling multiple teams or applications to coexist and operate independently within the same Kubernetes cluster.
• Namespaces help in organizing and isolating resources, improving resource utilization, and providing a level of separation between different environments, projects, and teams.
• They act as a scope for Kubernetes objects such as Pods, Services, Deployments, ConfigMaps, and Secrets.
• Kubernetes starts with four initial namespaces:
• default - the default namespace for objects with no other namespace.
• kube-system - the namespace for objects created by the Kubernetes system.
• kube-public - this namespace is created automatically and is readable by all users.
• kube-node-lease - this namespace holds Lease objects associated with each node.
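For example, creating a new namespace and listing pods inside it (the name dev is illustrative):

> kubectl create namespace dev
> kubectl get pods --namespace dev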
Key Points about Kubernetes Namespaces
• Isolation: Namespaces provide isolation between resources.
• Objects in one namespace are typically not aware of objects in other namespaces unless explicitly configured to communicate.
• Resource Allocation: Resources like CPU, memory, storage, and network bandwidth can be allocated and managed at the namespace level.
• This allows resource quotas and limits to be set for each namespace, as sketched below.
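A minimal ResourceQuota sketch for the illustrative dev namespace (the numbers are arbitrary):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"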
• Access Control: Kubernetes RBAC (Role-Based Access Control) can be used to define fine-grained access control policies at the namespace level.
• This enables different teams or users to have different permissions within their respective namespaces.
What are resources?
• A "resource" typically refers to a unit of compute, storage, or networking that can be managed by the Kubernetes cluster.
• Kubernetes abstracts away the underlying infrastructure and provides a unified way to manage these resources across a cluster of machines.
K8S Resources
• Pods
• Deployments
• Services
• ConfigMaps
• Secrets
• Persistent Volumes
• Stateful Sets
• Daemon Sets
• Jobs and CronJobs
• Pods: Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.
• Deployments: A declarative way to manage, scale, and autoscale apps.
• Services:
• Enable network connectivity
• Load balancing
• A stable network endpoint over pods
• Both internal and external connectivity are supported.
• ConfigMaps and Secrets:
• Manage configuration data and sensitive information
• Persistent Volume
• A persistent volume is a piece of storage in a cluster that an
administrator has provisioned. It is a resource in the cluster, just as
a node is a cluster resource.
• Stateful Sets
• Managing stateful applications. Ex : Databases.
• DaemonSets
• DaemonSet is a Kubernetes feature that lets you run a Kubernetes
pod on all cluster nodes that meet certain criteria.
• Jobs and CronJobs
• Running batch and periodic tasks
• Jobs create Pods that run to completion; CronJobs create Jobs on a schedule.
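As a concrete example, a minimal sketch of the two most common resources, a Deployment and a Service (names and image are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80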
CI/CD on GCP
• Google Cloud Platform(GCP) is one of the leading cloud providers
in the public cloud market.
• It provides a host of managed services, and if you are running
exclusively on Google Cloud, it makes sense to use the managed
CI/CD tools that Google Cloud provides.
• A typical Continuous Integration & Deployment setup on Google Cloud Platform looks like this:
• Developer checks in the source code to a Version Control system
such as GitHub
• GitHub triggers a post-commit hook to Cloud Build.
• Cloud Build builds the container image and pushes to Container
Registry.
• Cloud Build then notifies Cloud Run to redeploy
• Cloud Run pulls the latest image from the Container Registry and
runs it.
• We will use Google Cloud Build to build a simple java application,
store the docker image in Google Container Registry, and deploy it
to Google Cloud Run.
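A hedged sketch of a cloudbuild.yaml for that pipeline (the service name and region are illustrative assumptions):

steps:
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-java-app:$COMMIT_SHA', '.']
  # Push it to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/my-java-app:$COMMIT_SHA']
  # Redeploy the Cloud Run service with the new image
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args: ['run', 'deploy', 'my-java-app',
           '--image', 'gcr.io/$PROJECT_ID/my-java-app:$COMMIT_SHA',
           '--region', 'us-central1', '--platform', 'managed']
images:
  - 'gcr.io/$PROJECT_ID/my-java-app:$COMMIT_SHA'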
How to deploy applications on OpenShift
• Many people look for a container orchestrator and choose OpenShift for its features, security, and performance, and you may be among them.
• Deploying applications using the Web Console or CLI is quite simple.
• But for deploying hundreds or even thousands of applications, and knowing the status of the deployments and their versions, that is not the right method.
• Just like the source code of applications, the actions you perform in OpenShift must be stored and versioned in an SCM.
• In OpenShift, everything is an object, and that's what we store and version, in YAML or JSON format.
• Requirements
• OpenShift cluster up and running
• Client (CLI)
• Git
Deploying a voting App
• In this example, we will deploy a voting application.
• We deploy it on OpenShift in three different ways.
• We create Pod, DeploymentConfig, Service, Route, and BuildConfig objects.
• You can now clone the project from GitHub.
• Deployments in the cluster need to pull images, whether prebuilt container images, images built from a Dockerfile, or images built from source (Source-to-Image).

1. With container images
• First, we will deploy this example with container images that can be built and stored in any registry.
• This method is often used when you already have a toolchain that builds images for you and you don't want to change this operation.
2. With a Dockerfile
• In a second step, we are going to deploy the same application but with a change: the use of a Dockerfile in your Git repository.
• It's OpenShift that will take care of building our images.
3. With Source to Image (S2I)
• This method is pretty similar to the previous one; the difference is that you use a builder container image instead of a Dockerfile.
• You don't have to worry about how to create and build your container images: stay focused on your source code and OpenShift will take care of the rest for you. The three variants are sketched below.
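A hedged sketch of the three variants with oc new-app (the image and repository URLs are illustrative assumptions):

# 1. From a prebuilt container image in a registry
oc new-app quay.io/example/vote-app:latest

# 2. From a Git repository containing a Dockerfile
oc new-app https://github.com/example/vote-app.git --strategy=docker

# 3. Source-to-Image, using a builder image (here: python)
oc new-app python~https://github.com/example/vote-app.git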
