Lab Guide: Building Cloud Infrastructure With Terraform
Version 1.0
Table of Contents
Lab Guide Overview
Lab #1: Getting Started with Terraform
Lab #2: Terraform Variables and Interpolations
Lab #3: Resource Dependencies
Lab #4: Terraform Provisioners
Lab #5: Data Sources
Lab #6: Terraform Modules
Lab #7: Terraform State
Extra - Using an AWS Credentials File with Terraform
References
The References section of this document contains links to the software needed for
this lab, as well as links to online documentation for reference.
Organization
Skills Learned describes what you will get out of the lab
Overview describes the overall details of the lab
Instructions provide the detailed steps needed to perform the lab
Formatting Convention
The lab uses various text formatting to convey meaning and action:
Bold font will be used to emphasize actions to be taken:
o Click the button
o Open a terminal window
Courier font will be used to emphasize command syntax
and command line execution
Terraform is a lightweight, single binary used as a client to talk to cloud service
providers, so system requirements are minimal.
Internet connection
A cloud account with AWS and any other cloud providers mentioned in this
lab.
Modern OS such as Linux, Windows, or OSX
Text editor (Atom, vi, Notepad)
Terraform has native support for a variety of operating systems so feel free to
choose the one that is appropriate for you.
The exercises in this lab guide have been written using Linux as the client operating
system; however, the Terraform command line syntax and the Terraform HCL used to
model resources in the cloud are platform agnostic.
The author will make every attempt to show both Windows and *Nix-specific
commands where possible, such as setting of environment variables.
Links to required software can be found in the References section of this lab guide.
Need Help?
If you need help with the labs or have questions please contact me directly at
[email protected].
Lab #1: Getting Started with Terraform
Overview
In this lab you will get familiar with Terraform by creating a compute instance in
AWS.
If you already have an AWS account, then there is no need to create another account.
You may follow the steps to create an additional API user with keys for use with this
lab.
Skills Learned
At the end of this exercise you will have prepared your workstation to run Terraform
and created your first compute instance in AWS.
Instructions
Create a free AWS account if you do not already have one:
https://aws.amazon.com/free/
Once you have your AWS account, log in to the AWS management
console.
2.11 AWS will generate an Access key ID and a Secret access key. Save both of
these values for later. You will need to configure Terraform to use both of
these values to authenticate against AWS.
Download Terraform for your operating system:
https://www.terraform.io/downloads.html
3.3 Add the terraform binary to your system’s Path environment variable. Be sure to
replace /path/to/terraform with the actual path from the previous step.
$ export PATH=$PATH:/path/to/terraform
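On Windows, the equivalent command in a Command Prompt would be:
> set PATH=%PATH%;C:\path\to\terraform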
$ terraform
$ mkdir -p ~/learntf/lab1
4.2 Using your favorite text editor, create a new text file in the lab1 directory named
lab1.tf.
4.3 Add the following code to lab1.tf. Be sure to substitute your actual values below.
The region specifies the AWS region in which resources will be created.
This lab guide will use us-east-1. OS images are specific to a particular region. If
you must use a different region, then you must also find OS images that are
available in the new region.
You can get a full list of regions and their region codes here:
https://docs.aws.amazon.com/general/latest/gr/rande.html
provider "aws" {
access_key = "<INSERT YOUR AWS ACCESS KEY>"
secret_key = "<INSERT YOUR AWS SECRET KEY>"
region = "us-east-1"
}
The code block above describes which cloud provider you want to use, the
authentication credentials, and the region in which you want to create resources.
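Save the file, then initialize the working directory:
$ terraform init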
This command will download the AWS terraform provider and prepare the
directory for use with Terraform.
Open lab1.tf in your text editor and add the following snippet of code after the
provider block. Save the file after making the change.
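A minimal resource block for this step, using the us-east-1 RHEL 7.5 AMI referenced
later in this guide (treat the values as an example sketch):
resource "aws_instance" "web_server" {
  ami           = "ami-6871a115"
  instance_type = "t2.micro"
}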
These attributes and their values are specific to AWS. Other providers may
use similar but different attributes and values.
5.2 Next let’s create an execution plan by running terraform plan.
$ terraform plan
This command determines what steps terraform will take to reach the
desired state of the resources defined in the configuration.
5.3 With the execution plan created, we can now create our EC2 instance.
$ terraform apply
Terraform will now create the EC2 instance in AWS. Terraform will block until the
instance is created.
5.4 You can verify the instance is created by logging into the AWS web console and
navigating to Services > EC2 > Instances.
6.1 Edit lab1.tf and change the ami value to a different image ID, then save the file.
6.2 Run terraform plan followed by terraform apply.
Observe the output from the plan and take note of the steps terraform will take.
You should see terraform will first destroy then re-create the instance, noted by
the -/+ characters next to the name of the resource.
6.3 Verify in the AWS console that the new EC2 instance was rebuilt with the new
image.
7 Destroy the EC2 Instance
7.1 To destroy the EC2 instance, we simply need to run the following command:
$ terraform destroy
Terraform will ask you to confirm your actions before proceeding. Terraform
handles all the steps necessary to destroy or remove all resources defined in your
workspace.
This is a very powerful and destructive feature so please be sure you understand
what you are destroying.
Also note that it is a good practice to destroy your resources at the end of the day
while you are working through this lab guide so you do not incur any additional
costs from the service provider, in this case, AWS.
Lab #2: Terraform Variables and Interpolations
Part A - Variables
Overview
In this lab you will learn how to use both input and output variables as well as
interpolations. In the previous lab we used hard-coded values in our configurations,
which is fine for demo purposes; however, in a collaborative development
environment we need a way to decouple developer- and environment-specific
details from our configurations.
Skills Learned
In this lab you will learn:
Instructions
1.2 Create a new directory for lab2 under the learntf directory you created
earlier.
$ mkdir -p ~/learntf/lab2
1.3 Recall from the lecture that using variables requires us to define and assign
values to them.
1.4 Create a new file named variables.tf under the lab2 directory, open it in your
favorite text editor, and add the following lines:
variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "aws_region" {
default = "us-east-1"
}
The lines above are used to declare or define the variable. All variables
used within Terraform must be defined once. Variables can be defined in
any file ending in .tf since Terraform loads all files ending in .tf
automatically.
Default values for a variable can be set using the default attribute as shown
in the region example above.
1.6 Create a new file named provider.tf under the lab2 directory and add the
following lines:
provider "aws" {
access_key = "${var.aws_access_key}"
secret_key = "${var.aws_secret_key}"
region = "${var.aws_region}"
}
Note that the hard-coded credentials from lab 1 have been replaced with variable
references; you will supply your actual AWS access and secret keys as variable
values.
2.1 Create another terraform file named compute.tf under the lab2 directory
and add the following lines of code:
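A sketch of the compute resource; the resource name web_server matches the
output variable used later in this lab, and the AMI is the us-east-1 RHEL 7.5 image
from the lookup table below:
resource "aws_instance" "web_server" {
  ami           = "ami-6871a115"
  instance_type = "t2.micro"
}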
Remember that AMIs are tied to a specific AWS region. If you are using a
region different than us-east-1 then you will need to use a different AMI.
Refer to the first Lab in this guide for determining what images are available
in a region. You will learn shortly how to use a lookup table to solve this
problem.
$ terraform init
$ terraform plan
If you forgot or have not properly assigned values for any of your variables,
you will be prompted by Terraform at this point to provide those values.
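One common way to assign values is through environment variables named
TF_VAR_<variable name>, for example:
$ export TF_VAR_aws_access_key=YOUR_ACCESS_KEY
$ export TF_VAR_aws_secret_key=YOUR_SECRET_KEY
Values can also be passed on the command line using the -var option.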
$ terraform apply
3.1 In AWS and most other cloud providers, machine or OS images are usually
tied to a specific region, which makes hard coding image IDs a challenge.
In this lab you will create a lookup table using a Terraform map. The lookup
table will be used to look up an AMI based upon the region we specify.
The general syntax for a map variable looks like this:
variable "map_name" {
  type = "map"
  default = {
    "key1" = "value1"
    "key2" = "value2"
    …
  }
}
In this lab we will use AWS region codes as keys to lookup AMI image IDs.
3.2 We will define our AMI lookup table in the variables.tf file first.
variable "webserver_amis" {
  type = "map"
}
3.3 One way to persist variable values is to put them in a file named
terraform.tfvars. This file is special in that Terraform automatically loads this
file to populate the variables.
Create a new file named terraform.tfvars and add the following code:
webserver_amis = {
  # US Northern Virginia
  "us-east-1" = "ami-6871a115"
  # US Northern California
  "us-west-1" = "ami-18726478"
  # Mumbai, India
  "ap-south-1" = "ami-5b673c34"
  # Frankfurt, Germany
  "eu-central-1" = "ami-c86c3f23"
}
As of the time of this publication, these AMIs refer to the RHEL 7.5 HVM
image which is part of the AWS Free Tier.
3.5 Now we need to update our compute resource configuration to use this
map. We will do this using a lookup function to look up the value for an AMI
based upon the region we specify.
Edit the compute.tf file and update the resource configuration to match the
following:
resource "aws_instance" "web_server" {
  ami           = "${lookup(var.webserver_amis, var.aws_region)}"
  instance_type = "t2.micro"
}
The lookup function here takes two parameters, the name of the map
variable and the lookup or key value.
Let’s go ahead and create an EC2 instance in a region other than the
default us-east-1 region. To do so we will specify a region on the command
line.
4.2 Let’s go ahead and apply. Don’t forget to specify the region!
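For example, using one of the regions from our lookup table:
$ terraform apply -var 'aws_region=eu-central-1'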
4.3 Once Terraform completes, verify the compute instance was created in
the correct region.
To do this, log into the AWS Console then select the appropriate region
from the drop-down next to your login name.
4.4 Destroy the instance using terraform. Be sure to specify the region here as
well; otherwise Terraform will use the default region and the state file will
become inaccurate.
5.1 Many resource attribute values are computed by the provider at creation time.
Output variables are a way of capturing these computed values, and other
values as well, for printing or querying.
The example you are about to walk through will capture the public IP
address generated for an EC2 instance you create.
5.2 Edit the compute.tf and add the following lines of code after the compute
resource configuration.
output "webserver_public_ip" {
value = "${aws_instance.web_server.public_ip}"
}
We use the Terraform variable syntax to get at the public IP attribute for our
web server. The format is
<resource_type>.<resource_name>.<attr>
5.6 Verify you see Terraform output a value for webserver_public_ip at the end
of its execution. You should see the public IP address for the web server.
5.7 You can also query Terraform after apply to retrieve values as well.
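For example:
$ terraform output webserver_public_ip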
Part B – Interpolations
Duration 30 minutes
Overview
In the Variables lab we worked with three different types of interpolations: a
variable reference, a resource reference, and a built-in function. In this lab we will
be working with some of the other types, including conditionals for incorporating
basic logic into your Terraform, templates for managing long strings or text files,
and basic math functions.
Skills Learned
In this lab you will learn how to:
Instructions
1 Conditionals
1.1 Terraform supports conditionals using the ternary syntax, which looks like this:
CONDITION ? TRUEVAL : FALSEVAL
If you have not seen the ternary syntax before, you may be more familiar with
the traditional IF THEN ELSE construct in other programming languages.
1.2 First create a new variable that will be used to specify which environment we
are deploying into.
variable "target_env" {
  default = "dev"
}
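1.3 Add a bastion resource to compute.tf; a sketch based on the web_server
resource from Part A:
resource "aws_instance" "bastion" {
  ami           = "${lookup(var.webserver_amis, var.aws_region)}"
  instance_type = "t2.micro"
}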
This code is an exact copy of our web_server, however we have given this
instance the name bastion.
1.4 Now let’s add a conditional statement that will specify the number of bastion
servers needed depending on the target environment.
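A sketch of the attribute, matching the behavior described below:
count = "${var.target_env == "dev" ? 0 : 1}"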
The count attribute specifies the number of resources to create. This attribute
is optional and is defaulted to 1. In this case however we are using a
conditional statement to specify the count value. If target_env is equal to dev,
then count will be 0, meaning Terraform will not create this resource.
However, if target_env is anything other than dev, then Terraform will create 1
instance of bastion.
1.6 Run terraform plan and notice the number of resources Terraform will create.
The default value for target_env is dev, so Terraform will not create any
bastion instances.
1.7 Set the target environment to ‘prod’ and re-run terraform plan. The target
environment can be overwritten on the command line.
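For example:
$ terraform plan -var 'target_env=prod'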
2.2 Let’s pretend for a moment that we want to create 3 bastion servers.
Update compute.tf and change the quantity of the bastion servers from 1 to 3.
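The conditional count line would then become (sketch):
count = "${var.target_env == "dev" ? 0 : 3}"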
2.3 Next we will create an output variable that contains the private IP addresses of
all the bastion servers.
output "bastion_ips" {
value = "${aws_instance.bastion.*.private_ip}"
}
Have a look at the interpolation being used for the output variable value. There
is a splat or asterisk inside the interpolation which is positioned right after the
resource name. The splat here tells Terraform we want all instances of
bastion. The private_ip attribute after the splat tells Terraform that we want
private IP addresses of all the bastion instances.
2.4 We can also reference a specific bastion instance using an index. Add
another output variable that captures the private IP address of the first bastion
server.
output "bastion_ip_0" {
value = "${aws_instance.bastion.*.private_ip[0]}"
}
Notice here in the interpolation syntax we are specifying an index after the
attribute. This tells Terraform to return the first element (indexed starting at
zero) in the list.
Observe the output at the end of the terraform run. You will have two new
output variables; one for a single bastion server and one containing a list of
private IPs for all bastion servers.
2.8 Run terraform destroy with target environment set to production to remove all
resources.
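For example:
$ terraform destroy -var 'target_env=prod'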
3 Templates
3.1 Templates are useful for managing configuration files that contain environment
specific details. We can create a template of the configuration file and then
specify variables inside the template that will be replaced with real values at
runtime.
In this lab we will create an IAM policy specific to our web server compute
instance that will allow only certain actions to be performed in the AWS
console or through the API.
3.2 Create a new file in the lab directory named policy.tpl and add the following
code:
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"ec2:DescribeInstances", "ec2:DescribeImages",
"ec2:DescribeTags", "ec2:DescribeSnapshots"
],
"Resource": "${arn}"
}
]
}
3.3 The next step is to define the template in Terraform and map the variables to
be used for substitution into the template.
data "template_file" "webserver_policy_template" {
template = "${file("${path.module}/policy.tpl")}"
vars {
arn = "${aws_instance.web_server.arn}"
}
}
This block of code, known as a data source configuration, defines our policy
template and assigns values to the variables that will be used inside the
template.
3.4 Add an output variable to display the rendered template:
output "web_server_policy_output" {
  value = "${data.template_file.webserver_policy_template.rendered}"
}
Notice here that the value of the output variable is the rendered attribute
of the data source configuration.
4 Simple Math
4.1 You can perform simple math operations inside of an interpolation. The
Terraform console can be used to experiment with interpolations without
affecting any existing state.
$ terraform console
>
> 1 + 3
4
> "${ 3 * 4 }"
12
4.3 You can exit the console by either typing exit or Ctrl-C or Ctrl-D.
Overview
Terraform makes managing resource dependencies fairly easy. In fact, most of the
time you need not worry about the order in which resources are created. Terraform
builds a dependency graph when it is executed, which determines which resources
are dependent on others and which are not. Terraform is able to parallelize
operations based upon this dependency graph, thereby being very efficient at
creating resources.
Terraform supports both implicit and explicit dependencies. In this lab you will use
an implicit resource dependency to add a web server to the AWS default subnet. You
will also use an explicit dependency to create a storage bucket for the web server.
This lab assumes you have a default VPC with a default subnet created. These
default networking resources are automatically created for AWS users when they
sign up for an account.
If you have deleted your default VPC, please re-create it now by following the
instructions below.
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/default-vpc.html#create-default-vpc
Skills Learned
In this lab you will learn how to:
Instructions
1.1 Recall that Terraform builds a dependency graph to determine the order in
which to create resources, which is why it does not matter in what order your
configurations are placed in your terraform file or files.
In this lab we are going to create a web server that lives in the default AWS
subnet. To do this we will define two resources: the default subnet and the
web server instance.
1.2 Before we begin, ensure that you have destroyed all existing resources in
AWS by running terraform destroy or through the AWS console.
1.3 To get started, create a lab3 directory under the learntf directory created in
lab1.
$ mkdir -p ~/learntf/lab3
1.4 We want to re-use as much as we can from lab2, so copy all the .tf and
.tfvar files over from lab2 to lab3.
$ cp ~/learntf/lab2/*.tf ~/learntf/lab2/*.tfvars ~/learntf/lab3
1.6 Create a new terraform file named network.tf under lab3 and add the
following lines of code:
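A sketch of the default subnet resource; the resource name is taken from the
references to it later in this lab:
resource "aws_default_subnet" "learntf_default_subnet" {
  availability_zone = "us-east-1a"
}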
In this example we are using the us-east-1a availability zone. Subnets must
be created in these zones which are specific to a particular region. If you
specify us-east-1a as the zone, then you must use the us-east-1 as the
region, which in this lab, is the default region.
1.8 Next we want to associate our web server with the subnet we just defined.
To do this we must add a reference to the subnet using a variable. This is
our first implicit dependency.
Edit the compute.tf file and add the subnet_id attribute to the web_server
resource. The value of the subnet_id attribute will be the subnet ID of the
default subnet.
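The attribute to add:
subnet_id = "${aws_default_subnet.learntf_default_subnet.id}"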
1.10 Now run terraform plan with no arguments. Be sure you have sourced or
set your AWS credentials.
$ terraform plan
1.11 If the plan looks correct, apply it:
$ terraform apply
1.12 Log into the AWS console to view the resources that were created.
The VPC and subnet can be viewed under Services > VPC > Your VPCs
And Services > VPC > Subnets.
2.2 Create a new terraform file named storage.tf under lab3 and add the
following lines of code:
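A sketch of the bucket resource; the resource name matches the references
later in this lab, while the bucket name and ACL are assumptions (S3 bucket
names must be globally unique, so choose your own):
resource "aws_s3_bucket" "learntf-bins" {
  bucket = "learntf-bins-YOUR_UNIQUE_SUFFIX"
  acl    = "public-read"
}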
output "bucket_url" {
  value = "${aws_s3_bucket.learntf-bins.bucket_domain_name}"
}
The last block of code creates an output variable that will display the URL
for the public bucket. After the bucket is created you can browse to this
bucket in your browser.
2.4 Edit compute.tf and add an explicit dependency on the new S3 bucket.
depends_on = ["aws_s3_bucket.learntf-bins"]
}
The depends_on attribute is used to define a list of other resources that this
compute resource requires.
Terraform will display the bucket url value, however you will notice that the
public IP for the web server is blank. This is because we created our web
server in a private subnet. As a result, no public IP address is assigned.
2.8 You can log into the AWS console to verify the S3 bucket was created
under Services > S3 or you can browse to the bucket using the bucket_url.
Lab #4: Terraform Provisioners
Duration 1 hour
Overview
Terraform provisioners are used to perform actions either locally or remotely when
creating a resource. Provisioners are most often used to bootstrap a compute
instance, which may involve deploying configuration management software, such as
chef client or puppet.
In this lab you will configure a provisioner to install and configure apache.
Skills Learned
In this lab you will learn the following:
Instructions
$ mkdir ~/learntf/lab4
$ cp ~/learntf/lab3/*.tf ~/learntf/lab3/*.tfvars ~/learntf/lab4
$ cd ~/learntf/lab4
$ terraform init
The steps for creating SSH keys in Linux and Windows differ significantly.
For Linux:
Open a terminal window and run the following command to generate SSH
keys
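For example (the key type is an assumption; the file name aws_rsa matches the
rest of this lab):
$ ssh-keygen -t rsa -f ~/.ssh/aws_rsa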
This command will create both public and private keys. The private key,
aws_rsa, should be protected on your system. The public key,
aws_rsa.pub, will be uploaded to the web server.
For Windows users, please follow these directions to generate public and
private SSH keys.
https://www.ssh.com/ssh/putty/windows/puttygen
2.2 The next step is to define an AWS key resource in our configuration. This
resource will be used to specify our public key.
Create a new file in the lab directory named keypairs.tf and add the
following lines of code:
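A sketch of the key pair resource; the resource name matches the reference later
in this lab, while the key_name value and the public key path are assumptions:
resource "aws_key_pair" "deployer-keypair" {
  key_name   = "deployer-keypair"
  public_key = "${file("/path_to/aws_rsa.pub")}"
}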
2.3 Edit the compute.tf file and add the following line of code to the compute
resource:
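The line to add, which also appears in the full resource listing later in this section:
key_name = "${aws_key_pair.deployer-keypair.key_name}"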
The key_name attribute specifies the AWS key to use when creating the
EC2 instance.
In order to ssh into our web server, we are going to create a new security
group with a new rule that will allow incoming traffic on port 22.
To do this, edit the network.tf file and add the following lines of code:
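# Opening of the security group and the SSH ingress rule; the resource
# name matches the reference later in this lab, while the name and
# vpc_id arguments are reasonable assumptions.
resource "aws_security_group" "web_server_sec_group" {
  name   = "web_server_sec_group"
  vpc_id = "${aws_default_subnet.learntf_default_subnet.vpc_id}"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }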
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
This resource defines a new security group that will be attached to our
default VPC. The ingress rules are defined inline in the resource and
specify what traffic is allowed in.
The egress block determines what traffic is allowed out. In this case we are
allowing all outbound traffic to the internet.
3.2 In order for the security group rules to be enforced, resources must be
associated with the group. To do this we must edit our web server resource.
Your web server resource should now look like the following (the ami and
instance_type lines carry over from the earlier labs):
resource "aws_instance" "web_server" {
  ami                    = "${lookup(var.webserver_amis, var.aws_region)}"
  instance_type          = "t2.micro"
  subnet_id              = "${aws_default_subnet.learntf_default_subnet.id}"
  key_name               = "${aws_key_pair.deployer-keypair.key_name}"
  vpc_security_group_ids = ["${aws_security_group.web_server_sec_group.id}"]
  depends_on             = ["aws_s3_bucket.learntf-bins"]
}
4 Verify Connectivity
4.1 Go ahead and run terraform plan and terraform apply.
4.2 Once terraform apply has completed, ssh into our web server.
In a Linux shell:
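A sketch of the command (the IP address is a placeholder):
$ ssh -i /path_to/aws_rsa [email protected]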
Note here that we are using the -i parameter to specify our private SSH
key. ec2-user is the default OS user that AWS creates.
You will have to replace the IP address above with the public IP address of
the web server.
You can find the public IP value in the output of terraform apply or you can
run the following command:
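$ terraform output webserver_public_ip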
4.3 Once you have verified you can log into the web server using ssh, go
ahead and disconnect from the web server.
Edit the compute.tf file and add the following lines of code inside the
web_server compute resource:
provisioner "remote-exec" {
inline = [
"sudo yum install -y httpd",
"sudo service httpd start",
"sudo groupadd www",
"sudo usermod -a -G www ec2-user",
"sudo usermod -a -G www apache",
"sudo chown -R apache:www /var/www",
"sudo chmod 770 -R /var/www"
]
}
We are also setting up some permissions that will allow us to stage some
static content later.
4.5 Next we must tell the remote-exec provisioner how to connect to our web
server. We do this by configuring a connection resource inside our web
server compute resource.
connection {
type = "ssh"
user = "ec2-user"
private_key = "${file("/path_to/aws_rsa")}"
}
This code block defines how Terraform will connect to our web server. The
values here are similar to what we have been using with the ssh client on
the command line in Linux.
Make sure you specify the path to the aws_rsa private key you created
earlier.
4.6 To access our web server over the internet, we must open up port 80 which
is the default http port for apache.
Edit the network.tf file and add the following ingress rule to the
web_server_sec_group security group:
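The rule follows the same pattern as the SSH rule:
ingress {
  from_port   = 80
  to_port     = 80
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}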
4.9 Go ahead and run terraform plan. If all looks correct, run terraform apply.
During the web server creation process, you should see Terraform remotely
connect to the web server, install apache, and start the service.
4.10 After Terraform completes, you should have a running web server that is
accessible over the internet.
http://WEB_SERVER_PUBLIC_IP
Be sure to use the public IP address for the web server. Keep in mind this
value will change every time we re-create the web server.
In this section we will create and deploy our own web page to our web
server.
In the lab directory, create a file named learntf.index.html and add the
following code:
<html>
<body>
This is the best web page EVER!
</body>
</html>
5.2 Edit the compute.tf and add the following file provisioner after the remote-
exec provisioner we just created.
provisioner "file" {
source = "learntf.index.html"
destination = "/var/www/html/index.html"
}
5.3 Save all your changes and run terraform destroy first, then terraform apply.
5.4 Go ahead and use your browser to hit the public IP address of the web
server. You should see your new awesome web page.
5.5 When you are done, go ahead and destroy all your resources.
Create a new file named bootstrap.sh and add the following lines:
#!/bin/sh
sudo yum install -y httpd
sudo service httpd start
sudo groupadd www
sudo usermod -a -G www ec2-user
sudo usermod -a -G www apache
sudo chown -R apache:www /var/www
sudo chmod 770 -R /var/www
Note: If you are editing this file in Windows, be sure you configure ‘LF’ for
line endings and not ‘CRLF’.
provisioner "file" {
source = "bootstrap.sh"
destination = "/tmp/bootstrap.sh"
}
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/bootstrap.sh",
"/tmp/bootstrap.sh"
]
}
The first provisioner will upload our bootstrap.sh script and the second
provisioner will execute it. Together with the file provisioner for our web page
from the previous section, the provisioners inside the web_server resource
should now look like the following:
provisioner "file" {
source = "bootstrap.sh"
destination = "/tmp/bootstrap.sh"
}
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/bootstrap.sh",
"/tmp/bootstrap.sh"
]
}
provisioner "file" {
source = "learntf.index.html"
destination = "/var/www/html/index.html"
}
Terraform will destroy the compute resource first, then re-create it using the
new provisioners.
6.7 Once the compute resource is created and bootstrapped, verify you can
access the web page using your browser and the new public IP address
that was generated.
Lab #5: Data Sources
Overview
Data sources in Terraform allow you to fetch information on resources that already
exist in your infrastructure. These resources may have been created outside of your
Terraform scripts.
In this lab you will use a data source to fetch all default VPC subnets from AWS.
Skills Learned
At the end of this exercise, you will learn the following:
Instructions
One example is fetching an AWS AMI from a list of AMIs that we can then use to
create compute instances.
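1.2 Create a directory for this lab, following the pattern of the earlier labs:
$ mkdir -p ~/learntf/lab5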
1.3 Copy over the provider.tf, variables.tf, and terraform.tfvars from lab4 to the lab5
directory.
1.4 Create a new file named datasource.tf and add the following lines of code:
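# Header of the data source; the "amazon" owner alias matches the
# description below.
data "aws_ami_ids" "ubuntu" {
  owners = ["amazon"]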
filter {
name = "name"
values = ["ubuntu/images/ubuntu-*-*-amd64-server-*"]
}
}
This code block defines an AWS data source of type aws_ami_ids. This data
source will return a list of AMIs based upon the filters we create. Here we are
asking for Ubuntu images owned by Amazon.
1.5 Next we will create an output variable that will print out the entire list of AMIs.
output "amis" {
value = "${data.aws_ami_ids.ubuntu.*.id}"
}
This output variable will return all the AMI IDs from the data source. Notice here
the use of the splat notation between the data source name (ubuntu) and the
attribute (id). The splat tells Terraform to return all AMI IDs.
2.2 Create a new file named data.tf and add the following lines of code:
data "aws_availability_zones" "available" {}

resource "aws_instance" "web_server" {
  # ami and instance_type carry over from the earlier labs
  ami               = "${lookup(var.webserver_amis, var.aws_region)}"
  instance_type     = "t2.micro"
  count             = "${length(data.aws_availability_zones.available.names)}"
  availability_zone = "${data.aws_availability_zones.available.names[count.index]}"
}
The first line of code fetches all the availability zones in the region we configured
in our AWS provider.
The rest of the code defines a compute resource where multiple instances are
created across each zone. Values from the availability zone data source are used
to specify the number of resources to create (count) and the availability zone.
2.5 You will see Terraform create a compute instance in each zone.
Once terraform apply is complete you can log into the AWS console to verify or
run terraform show on the command line:
$ terraform show
This command displays the current Terraform state maintained in the state file,
which will show each server created and which zone it was created in.
Lab #6: Terraform Modules
Overview
Terraform modules allow you to neatly package and organize Terraform code for re-
use. In this lab we will create resources by using already built Terraform modules
hosted in the Terraform Registry.
Most modules define required input variables that must be set when including
the module in your own Terraform.
Skills Learned
At the end of this exercise, you will learn the following:
Instructions
1.1 Browse to the Terraform Registry at https://registry.terraform.io/.
1.2 Search the registry for ‘aws vpc’ and click on the module named ‘vpc’
authored by AWS.
1.3 The module home page contains documentation, instructions, and usage
examples for the module. Take notice of all the networking related
resources that this module allows you to create.
1.4 In this lab we are going to create a simple VPC with public and private
subnets, including a NAT instance, all using the VPC module from AWS.
$ mkdir -p ~/learntf/lab6/vpc_module
1.5 Create a new file named vpc.tf in the vpc_module directory and add the
following code:
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "learntf-vpc"
  cidr = "10.0.0.0/16"

  # Example availability zones and subnet CIDRs for the public and
  # private subnets described below; adjust these to your region.
  azs             = ["us-east-1a", "us-east-1b"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]

  enable_nat_gateway = true
  enable_vpn_gateway = true
}
The code above defines a module that we are calling VPC, along with all
the required attributes for the module. This module makes it easy for us to
create a fairly complete VPC across multiple availability zones, public and
private subnets, along with a NAT and VPN gateway.
The source attribute tells Terraform what module to fetch from the Registry.
All of the other attributes that follow represent input variables for the module
that we are satisfying.
Not all attributes are required. Refer to the module’s home page on the
Terraform Registry for usage documentation.
1.6 The VPC module does not make any assumptions about the AWS provider,
so we must define one ourselves. We will do this in the same vpc.tf file.
variable "aws_access_key" {}
variable "aws_secret_key" {}
provider "aws" {
access_key = "${var.aws_access_key}"
secret_key = "${var.aws_secret_key}"
region = "us-east-1"
}
1.7 Next we want to deploy an AWS instance into the VPC. The following code
will create an AWS instance in the VPC’s public subnet. Take note of how
we are using an interpolation to refer to the public subnet ID inside of the
module.
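A sketch of the instance resource; the public_subnets output name comes from
the terraform-aws-modules/vpc documentation, and the AMI is the RHEL image
used earlier in this guide:
resource "aws_instance" "web_server" {
  ami           = "ami-6871a115"
  instance_type = "t2.micro"
  subnet_id     = "${element(module.vpc.public_subnets, 0)}"
}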
Module attributes are referenced using the following syntax (taken from the
Interpolations lecture)
${module.NAME.ATTRIBUTE}
1.9 Run terraform plan and terraform apply and observe how many resources
the VPC module creates for us, including the AWS EC2 instance we
defined ourselves.
1.10 You can log into the AWS console to see all the resources that were
created.
In this section we are going to create a very simple module that will be used
and referenced from a separate Terraform file.
$ mkdir -p ~/learntf/lab6/mod_lab/my_mod
2.3 In the my_mod directory create a new file named main.tf and add the
following code:
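A minimal sketch, using the web_server resource name referenced later in this
lab (the AMI is the image used in earlier labs):
resource "aws_instance" "web_server" {
  ami           = "ami-6871a115"
  instance_type = "t2.micro"
}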
Save the changes. You just created a very simple, but not very useful
module.
2.4 Imagine the module we just created is something that was written by
someone else and we want to include it in our Terraform code.
Create a new file named module.tf under learntf/lab6/mod_lab and add the
following code:
variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "aws_region" {
default = "us-east-1"
}
provider "aws" {
access_key = "${var.aws_access_key}"
secret_key = "${var.aws_secret_key}"
region = "${var.aws_region}"
}
module "my_module" {
source = "./my_mod/"
}
The module code at the bottom defines a module named “my_module” and
the source attribute specifies where to fetch the module from, in this case
the directory my_mod/.
2.5 First run terraform init in the mod_lab directory. This will not only initialize
the current working directory but also pull in the my_mod module.
Try running terraform plan in the mod_lab directory. This will ensure that
Terraform can load the module. You do not need to run terraform apply at
this time.
2.6 To make the module more modular we can define input variables that can
then be used inside the module.
In this lab let us take a simple example and create an input variable that will
allow us to specify the number of web servers we want to create.
Edit the main.tf module code and add the following line of code to the
web_server resource.
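The line to add, using the variable declared in the next step:
count = "${var.server_count}"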
2.7 As with all variables we must declare this variable in our module.
Create a new file named variables.tf under the my_mod directory and add
the following code:
variable "server_count" {
  default = "1"
}
2.8 Any variable that is declared inside of our module automatically becomes
an input variable for that module, which means that it must be assigned a
value when you include the module in your terraform code.
Edit the module.tf and add the server_count attribute to the “my_module”
module.
module "my_module" {
  …
  server_count = 3
}
Here we are setting the server_count to 3, which will cause the module to
create 3 web servers.
2.9 Save all files then run terraform plan and confirm that Terraform will create
3 AWS instances.
Lab #7: Terraform State
Duration 60 minutes
Overview
Terraform manages its own state in order to keep track of the resources it manages,
and their dependencies. State can be managed locally and remotely using a backend
data source.
For a single developer, having Terraform manage state locally is perfectly fine and
quite simple. However, in a team environment, state needs to be shared and
collaborated on from both a development perspective and an operational
perspective. In terms of development, resource configuration details can be shared
with other team members in a read-only manner. In an operational setting, remote
state can be locked to prevent corruption by multiple terraform operations running
at once.
In this lab you will work with both local and remote states.
Skills Learned
At the end of this exercise, you will learn the following:
Instructions
1.2 First log into the AWS Console and navigate to Services > EC2.
1.8 You will be prompted to provide a keypair for the instance. From the drop
down, select ‘Proceed without a keypair’ and select the Acknowledgement
box.
$ mkdir -p ~/learntf/lab7
1.11 Copy over provider.tf, variables.tf, and terraform.tfvars from lab 4 to the new
directory.
1.12 Create a new file named compute.tf and add the following code:
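A sketch of the configuration; the resource name is an assumption, and the
placeholder values must match your manually created instance:
resource "aws_instance" "imported_server" {
  ami           = "AMI_OF_YOUR_MANUAL_INSTANCE"
  instance_type = "t2.micro"
}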
You will have to update the instance_type and ami attributes to match the
EC2 instance you manually created earlier in this lab.
1.15 Next we import the resources by running terraform import and associating
the ID of the resource with the compute resource configuration:
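A sketch of the command, using the resource name from compute.tf and a
placeholder instance ID:
$ terraform import aws_instance.imported_server i-INSTANCE_ID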
You will have to replace the resource ID in this lab with the one you just
created earlier.
1.16 If the import is successful, the compute instance will now be under
terraform’s control.
To verify this, use Terraform to display the current state. The server should
appear in the output.
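$ terraform show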
Log into the AWS console and navigate to Services > S3.
2.4 In the Configure Options screen, enable Versioning and Default Encryption.
These two options allow us to version our state file and encrypt the state file
at rest.
Click Next.
2.5 In the Set Permissions screen, click Next.
terraform {
backend "s3" {
bucket = "YOUR_BUCKET_NAME"
key = "learntf-state/terraform.tfstate"
region = "us-east-1"
profile = "default"
}
}
Be sure to replace the bucket name with the name of the bucket you created
earlier.
The profile attribute tells Terraform to load the default profile containing
AWS access credentials from a credentials file which we will create in the
next step.
$ mkdir -p ~/.aws/
Create a new file in .aws/ called credentials and add the following lines:
# ~/.aws/credentials
[default]
aws_access_key_id=YOUR_ACCESS_KEY
aws_secret_access_key=YOUR_SECRET_ACCESS_KEY
Replace the values above with your access ID and secret key.
2.9 For Linux users, change permissions on the file so that only you can read it.
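For example:
$ chmod 600 ~/.aws/credentials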
2.10 We must re-initialize our terraform directory to enable the backend change.
$ terraform init
2.11 Terraform will ask us if we want to migrate any existing state from the local
backend over to S3.
2.12 In the AWS console, you can look at the contents of the S3 bucket to see
the state file.
Be sure to replace your_bucket with the name of the bucket created earlier.
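To display the remotely stored state, run:
$ terraform show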
This command displays the current terraform state. We ran this command
earlier when importing a resource. During that time, state was maintained
locally. Now when we run this command, terraform is fetching the state from
S3 object storage, not local storage.
$ terraform destroy
Terraform will destroy the imported resource and update the state file.
2.15 Run terraform show to confirm the state file was updated.
3.1 Edit the terraform.tf file and comment out the entire terraform stanza using
/* */ notation.
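3.2 Re-initialize the working directory so Terraform picks up the backend change:
$ terraform init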
3.3 Terraform will ask whether you want to migrate your state file from S3.
Enter ‘yes’ and hit enter.
3.4 Verify Terraform migrated your state file to the lab7 directory.
Extra - Using an AWS Credentials File with Terraform
One other benefit of the credentials file is that the AWS SDK and CLI also support
it, so it provides a single source for secrets.
Lastly, the environment variables need to be set every time you load or reload your
shell, while the credentials file is persistent.
To create a credentials file and use it with an AWS Terraform provider, follow the
steps below:
Linux
$ mkdir ~/.aws
On Windows
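The equivalent on Windows, where the credentials file lives under
%UserProfile%\.aws:
> mkdir %UserProfile%\.aws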
# Default Profile
[default]
aws_access_key_id=YOUR_ACCESS_KEY
aws_secret_access_key=YOUR_SECRET_KEY
The file above configures a profile named default. If you already have a
default profile defined, you may specify a different profile name here. AWS
supports any number of profiles.
1.3 Next we must configure our AWS provider to use the credentials file. Here
is the code for the AWS provider:
provider "aws" {
region = "${var.aws_region}"
profile = "default"
}
In the provider above we have removed the environment variables for the
AWS credentials and replaced them with a profile attribute.
You may now use this provider configuration with any of the labs instead of
having to set environment variables for the access and secret keys.
References
Download Terraform
https://www.terraform.io/downloads.html
Terraform Registry
https://registry.terraform.io/