
Unit-3

Terraform – Getting Started

Provisioning infrastructure through software to achieve consistent and predictable environments.
Core Concepts

● Defined in code
● Stored in source control
● Declarative or imperative
● Idempotent and consistent
● Push or pull
Infrastructure as Code Benefits

● Automated deployment
● Consistent environments
● Repeatable process
● Reusable components
● Documented architecture
Automating Infrastructure Deployment

● Provisioning Resources
● Planning Updates
● Using Source Control
● Reusing Templates
Terraform
● A declarative provisioning tool based on the Infrastructure as Code paradigm
● Uses its own syntax – HCL (HashiCorp Configuration Language)
● Written in Go (Golang)
● Helps you evolve your infrastructure safely and predictably
● Applies graph theory to IaC
● Terraform is a multipurpose composition tool:
○ Composes multiple tiers (SaaS/PaaS/IaaS)
○ A plugin-based architecture model
● Open source. Backed by HashiCorp and the HashiCorp Tao (Guide/Principles/Design)
Other tools

● CloudFormation, Heat, etc.
● Ansible, Chef, Puppet, etc.
● Boto, fog, apache-libcloud, etc.
● Custom tooling and scripting
AWS CloudFormation vs OpenStack Orchestration (Heat)

AWS CloudFormation:
● AWS locked-in
● Initial release in 2011
● Sources hidden behind the scenes
● AWS managed service / free
● CloudFormation Designer (drag-and-drop interface)
● JSON, YAML (since 2016)
● Rollback actions for stack updates
● Change sets (since 2016)

OpenStack Heat:
● Open source
● Initial release around 2012
● Heat provides a CloudFormation-compatible Query API for OpenStack
● UI: Heat Dashboard
● YAML
Ansible, Chef, Puppet, etc

● Created to be configuration management tools.
● Suggestion: don't try to mix configuration management and resource orchestration.
● Different approaches:
○ Declarative: Puppet, Salt
○ Imperative: Ansible, Chef
● Steep learning curve if you want to use the orchestration capabilities of some of these tools.
● Different languages and approaches:
○ Chef – Ruby
○ Puppet – JSON-like syntax / Ruby
○ Ansible – YAML / Python
Boto, fog, apache-libcloud, etc.

● Low-level access to APIs
● Some libraries focus on a specific cloud provider; others provide a common interface for a few different clouds
● A good basis for building custom tooling

Custom tooling and scripting


● Error-prone and tedious
● Requires many human-hours
● Usually delivers only the minimum viable features
● Slow or impossible to evolve and adapt to quickly changing environments
Terraform is not a cloud-agnostic tool

It's not a magic wand that gives you power over all clouds and systems.

It embraces all major cloud providers and provides a common language to orchestrate your infrastructure resources.
Architecture
Terraform Components
● Terraform executable
● Terraform configuration files
● Terraform state file
● Terraform providers (plugins that talk to the provider APIs)
Terraform Executable
Terraform Providers
● IaaS, PaaS, and SaaS
● Community and HashiCorp maintained – AWS, Azure, GCP, and Oracle
● Open source, using the providers' APIs
● Resources and data sources
● Multiple instances (via alias)
Terraform: Providers (Plugins)

1125+ infrastructure providers – major clouds and partners
Terraform: Providers

Can be integrated with any API using the provider framework
○ Note: Terraform Docs → Extending Terraform → Writing Custom Providers

● OpenFaaS ● Docker ● Kubernetes ● Nomad ● Consul ● Vault
● GitLab ● GitHub ● BitBucket
● OpenAPI ● Generic REST API ● DNS ● Stateful ● Terraform ● Template :) ● Random ● Null ● External (escape hatch) ● Archive
● Palo Alto Networks ● F5 BIG-IP ● Digital Ocean ● Fastly ● OpenStack ● Heroku
● NewRelic ● Datadog ● PagerDuty
Provider Example
provider "azurerm" {
  subscription_id = "subscription-id"
  client_id       = "principal-used-for-access"
  client_secret   = "password-of-principal"
  tenant_id       = "tenant-id"
  alias           = "arm-1"
}
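How a resource selects this aliased configuration is not shown on the slide; a minimal sketch follows, where the resource label "example" is hypothetical and the quoted provider reference uses the pre-0.12 style found throughout this unit:

resource "azurerm_resource_group" "example" {
  provider = "azurerm.arm-1"   # use the aliased provider configuration instead of the default
  name     = "resource-group-name"
  location = "East US"
}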
Terraform Code
Terraform Syntax

● HashiCorp Configuration Language (HCL)
● Why not JSON? HCL is human readable and editable
● Interpolation
● Conditionals, functions, templates

Terraform: Example (Simple local resource)

# Variables
variable "aws_access_key" {}
variable "aws_secret_key" {}

# Provider
provider "aws" {
  access_key = "access_key"
  secret_key = "secret_key"
  region     = "us-east-1"
}

# Resource (type "aws_instance", name "ex")
resource "aws_instance" "ex" {
  ami           = "ami-c58c1dd3"
  instance_type = "t2.micro"
}

# Output
output "aws_public_ip" {
  value = "${aws_instance.ex.public_dns}"
}
Code Example
provider "azurerm" {
  subscription_id = "subscription-id"
  client_id       = "principal-used-for-access"
  client_secret   = "password-of-principal"
  tenant_id       = "tenant-id"
  alias           = "arm-1"
}

# A resource label (here "example") is required after the resource type.
resource "azurerm_resource_group" "example" {
  name     = "resource-group-name"
  location = "East US"
}
Terraform Syntax
#Create a variable
variable var_name {
  key = value   #type, default, description
}

#Use a variable
${var.var_name}       #get string
${var.map["key"]}     #get map element
${var.list[idx]}      #get list element
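A concrete sketch of the variable syntax above, reusing the AWS example from this unit; the variable name and default are illustrative and the interpolation follows the pre-0.12 style used here:

variable "instance_type" {
  type        = "string"
  default     = "t2.micro"
  description = "EC2 instance size"
}

resource "aws_instance" "ex" {
  ami           = "ami-c58c1dd3"
  instance_type = "${var.instance_type}"   # simple string variable lookup
}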
Terraform Syntax
#Create provider
provider provider_name {
  key = value   #depends on the provider, use alias as needed
}

#Create data object
data data_type data_name {}

#Use data object
${data.data_type.data_name.attribute}
Terraform Syntax
#Create resource
resource resource_type resource_name {
  key = value   #depends on the resource
}

#Reference resource
${resource_type.resource_name.attribute}
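To tie data objects and references together, here is a hedged sketch that looks up an AMI with the aws_ami data source and feeds it into a resource; the filter values are only an example:

data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

resource "aws_instance" "ex" {
  ami           = "${data.aws_ami.amazon_linux.id}"   # reference the data object
  instance_type = "t2.micro"
}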
Terraform Workflow
Workflow: Adoption stages – Single contributor
Terraform Core: Init

1. This command will never delete your existing configuration or state.
2. Checkpoint → https://checkpoint.hashicorp.com/
3. .terraformrc → enable plugin_cache_dir, disable checkpoint (see the sketch after this list)
4. Parsing configurations, syntax check
5. Checking for provisioners/providers (by precedence, only once) → ".", terraform_bin_dir, terraform.d/plugins/linux_amd64, .terraform/plugins/linux_amd64
6. File lock.json contains SHA-512 plugin hashes (.terraform)
7. Loading backend config (if it's available; local otherwise)
Backend initialization: storage for the Terraform state file.
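A minimal sketch of the CLI configuration mentioned in item 3; the cache path is only an example:

# ~/.terraformrc
plugin_cache_dir   = "$HOME/.terraform.d/plugin-cache"   # reuse downloaded provider plugins
disable_checkpoint = true                                # skip the checkpoint.hashicorp.com call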
Terraform Core: Plan + Apply

1. Starting plugins: provisioners/providers
2. Building the graph
   a. Terraform Core traverses each vertex and requests each provider, using parallelism
3. Provider syntax check: resource validation
4. If backend == <nil>, use local
5. If "-out file.plan" is provided, save the plan to a file – the file is not encrypted (example commands below)
6. Terraform Core calculates the difference between the last-known state and the current state
7. Presents this difference as the output of the terraform plan operation to the user in their terminal
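A typical command sequence matching steps 5–7; the plan file name is arbitrary:

$ terraform plan -out file.plan    # calculate and save the diff (the plan file is not encrypted)
$ terraform apply file.plan        # apply exactly what was saved in the plan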
Terraform Core: Destroy

1. Measure twice, cut once
2. Consider the -target flag (see the sketch after this list)
3. Avoid running it on production
4. No "Retain" flag – remove the resource from the state file instead
5. terraform destroy tries to evaluate outputs that can refer to non-existing resources (#18026)
6. prevent_destroy should let you succeed (#3874)
7. You can't destroy a single resource with count in the list
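A hedged sketch for items 2 and 6, reusing the instance from the earlier example: prevent_destroy protects a resource, and -target narrows a destroy to one address.

resource "aws_instance" "ex" {
  ami           = "ami-c58c1dd3"
  instance_type = "t2.micro"

  lifecycle {
    prevent_destroy = true   # any plan that would destroy this resource fails
  }
}

Usage: terraform destroy -target=aws_instance.ex limits the destroy to that single resource address.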
Terraform state
Terraform State

● JSON format (Do not touch!)
● Resource mappings and metadata
● Locking
● Local / remote
● Environments
Terraform state file

1. Backup your state files + use versioning and encryption
2. Do not edit manually!
3. Main keys: cat terraform.tfstate.backup | jq 'keys'
   a. "lineage" – unique ID, persists after initialization
   b. "modules" – main section
   c. "serial" – increment number
   d. "terraform_version" – implicit constraint
   e. "version" – state format version
4. Use the "terraform state" command (examples below)
   a. mv – to move/rename modules
   b. rm – to safely remove a resource from the state (destroy/retain-like)
   c. pull – to observe the current remote state
   d. list & show – to write/debug modules
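Example invocations of the subcommands in item 4; the resource addresses are illustrative:

$ terraform state list                                   # enumerate everything tracked in state
$ terraform state show aws_instance.ex                   # inspect one resource's attributes
$ terraform state mv aws_instance.ex aws_instance.web    # rename/move without recreating
$ terraform state rm aws_instance.ex                     # forget the resource (it is retained in the cloud)
$ terraform state pull > current.tfstate                 # observe the current remote state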
Terraform State
• Terraform keeps the remote state of the infrastructure
• It stores it in a file called terraform.tfstate
• There is also a backup of the previous state in terraform.tfstate.backup
• When you execute terraform apply, a new terraform.tfstate and terraform.tfstate.backup are written
Terraform State
• You can keep the terraform.tfstate in version control, e.g. git
• It gives you a history of your terraform.tfstate file (which is just a big JSON file)
• It allows you to collaborate with other team members
• Unfortunately you can get conflicts when 2 people work at the same time
• Local state works well in the beginning, but when your project becomes bigger, you might want to store your state remotely
Terraform State
The Terraform state can be saved remotely, using the backend functionality in Terraform.

The default is a local backend (the local Terraform state file).

Other backends include:
● s3 (with a locking mechanism using DynamoDB)
● consul (with locking)
● Terraform Enterprise (the commercial solution)
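A minimal sketch of an s3 backend with DynamoDB locking; the bucket, key, and table names are placeholders, and any backend change requires running terraform init again:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"      # placeholder bucket name
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"         # placeholder table used for state locking
    encrypt        = true
  }
}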
Simple workflow
Updating Your Configuration with More Resources
Adding a New Provider to Your Configuration
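As a hedged sketch of both steps – one more resource plus a second provider (here the random provider; the bucket name is hypothetical). A new provider requires another terraform init so its plugin gets installed:

provider "random" {}

resource "random_id" "suffix" {
  byte_length = 4
}

resource "aws_s3_bucket" "logs" {
  bucket = "demo-logs-${random_id.suffix.hex}"   # hypothetical bucket named with the random suffix
}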
Terraform Command Overview
Terraform Advanced Workflow
Workflow: Adoption stages – Team Collaboration
Workflow: Adoption stages – Multiple Teams
Demo
Examine the Terraform file
Deploy the configuration
Review the results

Play along!
- AWS account
- Demo files
Demo
Examine the Terraform file
Deploy the configuration
Review the results

Play along!
- AWS account
- Azure subscription
- DNS domain
- Terraform software (terraform.io)
- Demo files
Demo
Examine the Terraform file
Deploy the configuration
Review the results

Play along!
- AWS account
- Terraform software (terraform.io)
- Demo files
Ansible

Why Ansible?

Simple:
- Human readable automation
- No special coding skills needed
- Tasks executed in order
- Usable by every team
- Get productive quickly

Powerful:
- App deployment
- Configuration management
- Workflow orchestration
- Network automation
- Orchestrate the app lifecycle

Agentless:
- Agentless architecture
- Uses OpenSSH & WinRM
- No agents to exploit or update
- Get started immediately
- More efficient & more secure
With Ansible you can automate:
CROSS PLATFORM – Linux, Windows, UNIX
Agentless support for all major OS variants, physical, virtual, cloud and network
HUMAN READABLE – YAML
Perfectly describe and document every aspect of your application environment
PERFECT DESCRIPTION OF APPLICATION
Every change can be made by playbooks, ensuring everyone is on the same page
VERSION CONTROLLED
Playbooks are plain-text. Treat them like code in your existing version control.
DYNAMIC INVENTORIES

Capture all the servers 100% of the time, regardless of infrastructure, location, etc.
ORCHESTRATION THAT PLAYS WELL WITH OTHERS – HP SA, Puppet, Jenkins, RHNSS, etc.

Homogenize existing environments by leveraging current toolsets and update mechanisms.


Ansible Automation Engine (architecture diagram) – components: Users, CLI, Playbook, Inventory, Modules, Plugins; targets: Hosts, Network Devices, Public/Private Cloud, CMDB
PLAYBOOKS ARE WRITTEN IN YAML
- Tasks are executed sequentially
- Invoke Ansible modules
MODULES ARE THE "TOOLS IN THE TOOLKIT"
- Written in Python, PowerShell, or any language
- Extend Ansible simplicity to the entire stack

- name: latest index.html file is present
  template:
    src: files/index.html
    dest: /var/www/html/
PLUGINS ARE "GEARS IN THE ENGINE"
- Code that plugs into the core engine
- Adaptability for various uses & platforms

{{ some_variable | to_nice_yaml }}
INVENTORY
List of systems in your infrastructure that automation is executed against

[web]
webserver1.example.com
webserver2.example.com

[db]
dbserver1.example.com

[switches]
leaf01.internal.com
leaf02.internal.com

[firewalls]
checkpoint01.internal.com

[lb]
f5-01.internal.com
CLOUD inventory sources: Red Hat OpenStack, Red Hat Satellite, VMware, AWS EC2, Rackspace, Google Compute Engine, Azure
CMDB inventory sources: ServiceNow, Cobbler, BMC, custom CMDB
AUTOMATE EVERYTHING: Red Hat Enterprise Linux, Cisco routers and switches, Juniper routers, Windows hosts, Arista, Checkpoint firewalls, NetApp storage, F5 load balancers and more
Using Ansible

Ad-hoc commands
# check all my inventory hosts are ready to be
# managed by Ansible
$ ansible all -m ping

# run the uptime command on all hosts in the
# web group
$ ansible web -m command -a "uptime"

# collect and display the discovered facts for
# localhost
$ ansible localhost -m setup
Inventory

An inventory is a file containing:

• Hosts
• Groups
• Inventory-specific data (variables)
• Static or dynamic sources
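A small sketch of a static INI inventory with groups and inventory variables; the hosts and values are made up:

[web]
web1 ansible_host=10.0.0.11
web2 ansible_host=10.0.0.12

[web:vars]
http_port=80

[db]
db1 ansible_host=10.0.0.21 ansible_user=postgres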
Ansible Playbooks

---
- name: install and start apache
  hosts: web
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root

  tasks:
  - name: install httpd
    yum: pkg=httpd state=latest
  - name: write the apache config file
    template: src=/srv/httpd.j2 dest=/etc/httpd.conf
  - name: start httpd
    service: name=httpd state=started
tasks:
- name: add cache dir
  file:
    path: /opt/cache
    state: directory

- name: install nginx
  yum:
    name: nginx
    state: latest
  notify: restart nginx

handlers:
- name: restart nginx
  service:
    name: nginx
    state: restarted
Variables
Ansible can work with metadata from various
sources and manage their context in the form of
variables.
• Command line parameters
• Plays and tasks
• Files
• Inventory
• Discovered facts
• Roles
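A brief sketch of a play-level variable used in a task; the template name is hypothetical, and because extra vars have the highest precedence the same value can be overridden at run time:

- hosts: web
  vars:
    http_port: 8080
  tasks:
    - name: write the vhost config
      template:
        src: vhost.conf.j2                    # hypothetical template that uses {{ http_port }}
        dest: /etc/httpd/conf.d/vhost.conf

Override at run time with: ansible-playbook site.yml -e "http_port=9090" (extra vars win).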
Tips/Best Practices

Simplicity

Simplicity

- hosts: web
  tasks:
  - yum:
      name: httpd
      state: latest

  - service:
      name: httpd
      state: started
      enabled: yes
Simplicity
- hosts: web
  name: install and start apache
  tasks:
  - name: install apache packages
    yum:
      name: httpd
      state: latest

  - name: start apache service
    service:
      name: httpd
      state: started
      enabled: yes
Naming example

Inventory

10.1.2.75
10.1.5.45
10.1.4.5
10.1.0.40

w14301.example.com
w17802.example.com
w19203.example.com
w19304.example.com
Inventory
db1 ansible_host=10.1.2.75
db2 ansible_host=10.1.5.45
db3 ansible_host=10.1.4.5
db4 ansible_host=10.1.0.40

web1 ansible_host=w14301.example.com
web2 ansible_host=w17802.example.com
web3 ansible_host=w19203.example.com
web4 ansible_host=w19304.example.com
Dynamic Inventories
● Stay in sync automatically
● Reduce human error

CMDB

PUBLIC /
PRIVATE
CLOUD
YAML Syntax

YAML and Syntax

- name: install telegraf
  yum: name=telegraf-{{ telegraf_version }} state=present
       update_cache=yes disable_gpg_check=yes enablerepo=telegraf
  notify: restart telegraf

- name: configure telegraf
  template: src=telegraf.conf.j2 dest=/etc/telegraf/telegraf.conf

- name: start telegraf
  service: name=telegraf state=started enabled=yes
YAML and Syntax
- name: install telegraf
  yum: >
    name=telegraf-{{ telegraf_version }}
    state=present
    update_cache=yes
    disable_gpg_check=yes
    enablerepo=telegraf
  notify: restart telegraf

- name: configure telegraf
  template: src=telegraf.conf.j2 dest=/etc/telegraf/telegraf.conf

- name: start telegraf
  service: name=telegraf state=started enabled=yes
YAML and Syntax
- name: install telegraf
  yum:
    name: telegraf-{{ telegraf_version }}
    state: present
    update_cache: yes
    disable_gpg_check: yes
    enablerepo: telegraf
  notify: restart telegraf

- name: configure telegraf
  template:
    src: telegraf.conf.j2
    dest: /etc/telegraf/telegraf.conf
  notify: restart telegraf

- name: start telegraf
  service:
    name: telegraf
    state: started
    enabled: yes
ansible-playbook playbook.yml --syntax-check
Roles

Roles
• Think about the full life-cycle of a service, microservice or container — not a whole stack or environment
• Keep provisioning separate from configuration and app deployment
• Roles are not classes, objects, or libraries – those are programming constructs
• Keep roles loosely coupled — limit hard dependencies on other roles or external variables
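For reference, the conventional role directory layout; the role name "apache" is illustrative:

roles/
  apache/
    defaults/main.yml    # role defaults (lowest variable precedence)
    vars/main.yml        # role vars
    tasks/main.yml       # entry point for the role's tasks
    handlers/main.yml    # handlers, e.g. restart httpd
    templates/           # Jinja2 templates
    files/               # static files
    meta/main.yml        # role metadata and dependencies

A play then applies it with:

- hosts: web
  roles:
    - apache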
Variable Precedence
The order in which the same variable from different sources will override each other (highest to lowest):

1. Extra vars
2. Include params
3. Role (and include_role) params
4. Set_facts / registered vars
5. Include_vars
6. Task vars (only for the task)
7. Block vars (only for tasks in the block)
8. Role vars
9. Play vars_files
10. Play vars_prompt
11. Play vars
12. Host facts / cached set_facts
13. Playbook host_vars
14. Inventory host_vars
15. Inventory file/script host vars
16. Playbook group_vars
17. Inventory group_vars
18. Playbook group_vars/all
19. Inventory group_vars/all
20. Inventory file or script group vars
21. Role defaults
22. Command line values (e.g., -u user)
Things to Avoid

Things to Avoid
● Using command modules
○ Things like shell, raw, command etc.
● Complex tasks...at first
○ Start small
● Not using source control
○ But no really...
Ansible Content Collections
Collections Q and A
What are they?
● Collections are a distribution format for Ansible content that can include playbooks, roles,
modules, and plugins. You can install and use collections through Ansible Galaxy and
Automation Hub
How do I get them?
● ansible-galaxy collection install namespace.collection -p /path
Where can I get them?
● Today
○ Galaxy
○ Automation Hub
Collection Directory Structure
● docs/: local documentation for the collection
● galaxy.yml: source data for the MANIFEST.json that will be part of the collection package
● playbooks/: playbook snippets
○ tasks/: holds 'task list files' for include_tasks/import_tasks usage
● plugins/: all ansible plugins and modules go here, each in its own subdir
○ modules/: ansible modules
○ lookups/: lookup plugins
○ filters/: Jinja2 filter plugins
○ connection/: connection plugins required if not using default
● roles/: directory for ansible roles
● tests/: tests for the collection's content
Collections: Let’s Go!
1. Init collection: ansible-galaxy collection init foo.bar
2. Sanity testing: ansible-test sanity
3. Unit tests: ansible-test units
4. Integration tests: ansible-test integration
5. Build the collection: ansible-galaxy collection build
6. Publish the collection: ansible-galaxy collection publish
7. Install the collection: ansible-galaxy collection install foo.bar
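Once installed, collection content is addressed by its fully qualified name, or the collection can be added to a play's search path. A sketch using the foo.bar collection from step 1; the module name is hypothetical:

- hosts: web
  collections:
    - foo.bar                 # search this collection for unqualified module names
  tasks:
    - name: call a module from the collection by its fully qualified name
      foo.bar.my_module:      # hypothetical module shipped in the foo.bar collection
        state: present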
Resource Link Index
https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable
https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#using-variables
https://docs.ansible.com/ansible/latest/user_guide/playbooks_intro.html
https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html
https://docs.ansible.com/ansible/latest/user_guide/intro_getting_started.html#getting-started
https://docs.ansible.com/ansible/latest/user_guide/intro_adhoc.html
https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html
https://docs.ansible.com/ansible/latest/index.html
https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html
https://docs.ansible.com/ansible/latest/user_guide/intro_dynamic_inventory.html
https://docs.ansible.com/ansible-lint/
https://github.com/ansible/ansible
https://github.com/ansible/ansible-lint
https://ansible.github.io/workshops/
https://www.ansible.com/resources/ebooks/get-started-with-red-hat-ansible-tower
https://docs.ansible.com/ansible/latest/user_guide/collections_using.html
https://docs.ansible.com/ansible/latest/dev_guide/developing_collections.html
