Getting Started with Containers in Azure: Deploy Secure Cloud Applications Using Terraform
Second Edition
Shimon Ifrah
Melbourne, VIC, Australia
About the Author
Shimon Ifrah is a solution architect, writer, tech blogger,
and author with over 15 years of experience in the
design, management, and deployment of information
technology systems, applications, and networks. In the last
decade, Shimon has specialized in cloud computing and
containerized applications for Microsoft Azure, Microsoft
365, Azure DevOps, and .NET. Shimon also holds over 20
vendor certificates from Microsoft, Amazon Web Services,
VMware, Oracle, and Cisco. During his career in the IT
industry, he has worked for some of the world’s largest managed services and technology
companies, assisting them in designing and managing systems used by millions of
people every day. He is based in Melbourne, Australia.
About the Technical Reviewer
Kasam Shaikh is a prominent figure in India’s artificial
intelligence landscape, holding the distinction of being one
of the country’s first four Microsoft MVPs in AI. Currently
serving as a senior architect at Capgemini, Kasam has an impressive
track record as the author of five best-selling books focused on Azure and AI
technologies.
Beyond his writing endeavors, Kasam is recognized as a
Microsoft certified trainer and influential tech YouTuber
(@mekasamshaikh). He also leads the largest online Azure
AI community, known as DearAzure—Azure INDIA and
is a globally renowned AI speaker. His commitment to
knowledge sharing extends to his contributions to Microsoft Learn, where he plays a
pivotal role.
Within the realm of AI, Kasam is a respected subject matter expert in Generative
AI for the Cloud, complementing his role as a senior cloud architect. He actively
promotes the adoption of no-code and Azure OpenAI solutions and possesses a strong
foundation in hybrid and cross-cloud practices. Kasam’s versatility and expertise make
him an invaluable asset in the rapidly evolving landscape of technology, contributing
significantly to the advancement of Azure and AI.
In summary, Kasam Shaikh is a multifaceted professional who excels in both his
technical expertise and knowledge dissemination. His contributions span writing,
training, community leadership, public speaking, and architecture, establishing him as a
true luminary in the world of Azure and AI.
CHAPTER 1
Getting Started with Azure and Terraform
In this chapter, we will set up the tools needed to work with Azure and Terraform, including the following:
• PowerShell 7
• Terraform
Installing VS Code
VS Code is available for the Windows, macOS, and Linux operating systems. You can
download all of these versions from the following URL: https://code.visualstudio.com/download.
Once you download the correct version for your system, go ahead and install it.
1. Open VS Code.
To get the most out of this book and Terraform, what follows are a few VS
Code extensions I would recommend installing that will help you become a great
infrastructure developer.
• Azure Terraform: The official Microsoft VS Code extension for
Terraform offers IntelliSense, linting, autocomplete, and ARM
template support for Terraform configuration.
To lint YAML (YAML Ain't Markup Language) files, make sure you install the
yamllint package on macOS or Linux.
The extensions just described will help you get started using Azure and Terraform
very quickly. Make sure you have all of them installed.
wsl --install
This command will install and enable all the features that make WSL work on your
computer and install the Ubuntu distribution of Linux, which is the default, but you can
change it.
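To see which Linux distributions are available and install a specific one instead of the default, you can run, for example:
wsl --list --online
wsl --install -d <DistributionName>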
If you’re using macOS or Linux, there is no need to change anything, as all the tools
that we will use are natively available on both operating systems.
Azure CLI
The next tool that we need to install is the Azure CLI command-line interface, which will
allow us to manage Azure using commands. Azure CLI is a cross-platform tool that is
available on all operating systems.
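On Windows, one way to install the Azure CLI is with the following WinGet command:
winget install -e --id Microsoft.AzureCLI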
This command uses WinGet, which is Windows’s package manager that allows us to
install tools and applications directly from the command line.
https://learn.microsoft.com/en-us/cli/azure/install-azure-cli
If you’re using Ubuntu Linux, you can install Azure CLI using the following single
command:
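At the time of writing, the documented one-line installer is:
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash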
PowerShell 7
Microsoft PowerShell is a cross-platform command-line utility that allows us to
automate tasks using commands and scripts, and it is available on Windows, Linux,
and macOS.
With PowerShell, we can install the Azure PowerShell module and manage Azure
resources directly from the command line using cmdlets or scripts.
The main selling point of PowerShell 7 is its cross-platform support, which
contributed to its success and broadened its reach beyond the Windows-only
releases that preceded it.
PowerShell 7 can be installed on all platforms using different methods. For the sake
of simplicity, I will just go over one method for each platform. For more information
about the installation options, visit PowerShell's official website at https://github.com/PowerShell/PowerShell.
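On Windows, the simplest method is WinGet:
winget install --id Microsoft.PowerShell --source winget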
If you already have PowerShell 7 installed on your computer and would like to
update it to the latest version, run the following command to check for updates:
winget update
Note that in some cases, you might need to uninstall PowerShell 7 before installing a
new version with WinGet. To uninstall PowerShell 7, run the following cmdlet:
Once the previous version is uninstalled, install PowerShell 7 with the command that
follows:
After the Homebrew installation is completed, close and reopen the Terminal and
run the following command to install PowerShell 7:
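brew install --cask powershell
This is the Homebrew cask referenced in Microsoft's installation documentation.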
Once PowerShell 7 is installed, you can start using it by typing “pwsh.” The pwsh
command starts PowerShell and allows you to run PowerShell cmdlets or scripts.
brew update
After the command is finished, run the following command to start the update process:
brew upgrade
When the Homebrew update is completed, it will display a summary report of the
updated packages, including PowerShell 7.
wget -q "https://packages.microsoft.com/config/ubuntu/$(lsb_release -rs)/packages-microsoft-prod.deb"
sudo dpkg -i packages-microsoft-prod.deb
rm packages-microsoft-prod.deb
sudo apt-get update
sudo apt-get install -y powershell
These commands register the packages.microsoft.com repository and then install PowerShell 7 from it.
Once the installation is complete, you can start PowerShell by using the following
command: pwsh. From this point forward, all PowerShell cmdlets will be the same on all
operating systems.
Terraform
Now that we have all the tools we need to get started using Microsoft Azure and DevOps,
it’s time to install Terraform and begin the process.
Terraform is the most popular and widely used IaC software development tool
available on the market and is considered an industry standard for infrastructure
deployment.
It’s also the oldest tool for infrastructure deployment and has been around for
many years. Terraform supports most major cloud providers, like AWS, or Amazon Web
Services, and GCP, or Google Cloud Platform.
Terraform uses a domain-specific language known as HashiCorp
Configuration Language (HCL). The idea behind the language is to use a declarative approach
to infrastructure code.
In the declarative approach, we define the desired state of the infrastructure and let
Terraform handle the deployment and configuration.
# main.tf
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = "Apress-ch01"
location = "West Europe"
}
provider "azurerm" {
features {}
}
Next, we will tell Terraform to create a resource group in the Azure West Europe
region. The name of the resource group will be Apress-ch01. Once we run the code,
Terraform will go ahead and deploy the resource group.
We will go over the process for setting up and deploying a resource shortly. The
previous code is just meant to serve as a high-level example of how Terraform deploys
infrastructure.
Now that we have learned a bit about Terraform, let’s take a look at how to install it.
Terraform is available for Linux, macOS, and Windows systems. My recommendation
would be to use Terraform on Linux, macOS, or WSL. Because many DevOps tools
are available natively on Linux and macOS, using Windows won’t produce the best
development results.
If you already have Terraform installed and want to update it to the latest version,
you can take the following steps.
First, update Brew using the update command: brew update. Once Brew is updated,
run this command: brew upgrade hashicorp/tap/terraform.
Now Terraform is ready to go. To check which version of Terraform is installed on
your machine, run terraform -version. To enable tab completion for Terraform commands,
run terraform -install-autocomplete.
• CentOS/RHEL
• Fedora
• Amazon Linux
Then we need to install the GPG security signature using the following command:
touch ~/.bashrc
terraform -install-autocomplete
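You can search for the Terraform package with WinGet, for example:
winget search Hashicorp.Terraform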
The output from the command is shown in the following. The version of Terraform
we’re looking for is 1.5.3.
Name Id Version Match Source
--------------------------------------------------------------------------
Hashicorp Terraform Hashicorp.Terraform 1.5.3 winget
Note When the ID of the app shows "Vendor.AppName," it means that the app is the
official application.
Note Using tfenv is optional and not required to complete the labs in this book.
The tool I’m talking about is tfenv. It is a version manager for Terraform. Tfenv allows
you to manage multiple versions of Terraform on your local computer (similar to the
Python virtual environment).
The tfenv process of switching between Terraform environments is simple and
allows us to maintain the compatibility of projects.
As I mentioned previously, this tool is only available on Linux and macOS; you will
come across many tools like this.
To see the available commands, run tfenv with no arguments:
tfenv
The output will list all the available options, as shown in the following:
tfenv 3.0.0-18-g1ccfddb
Usage: tfenv <command> [<options>]
Commands:
install Install a specific version of Terraform
use Switch a version to use
uninstall Uninstall a specific version of Terraform
list List all installed versions
As you can tell, using tfenv is simple, which makes it very handy for operating and
managing the Terraform versions.
Let’s start by downloading a version of Terraform by typing in the following
command to view which versions are available:
tfenv list-remote
What follows is the output of that command (note that I am only showing 14 of the
versions included in the list):
1.6.0-alpha20230719
1.5.3
1.5.2
1.5.1
1.5.0
1.5.0-rc2
1.5.0-rc1
1.5.0-beta2
1.5.0-beta1
1.5.0-alpha20230504
1.5.0-alpha20230405
1.4.6
1.4.5
1.4.4
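To download and install one of the listed versions, run tfenv install with the version number, for example:
tfenv install 1.5.2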
The command output follows. If you notice, the binary is being downloaded directly from
HashiCorp's releases site.
##########################################################################
#########################################################################
######################################## 100.0%
Downloading SHA hash file from https://releases.hashicorp.com/terraform/1.5.2/terraform_1.5.2_SHA256SUMS
Downloading SHA hash signature file from https://releases.hashicorp.com/terraform/1.5.2/terraform_1.5.2_SHA256SUMS.72D7468F.sig
To activate a version of Terraform, first list all the installed versions with tfenv list.
The output marks the currently active version with an asterisk:
1.5.3
* 1.5.2 (set by /home/shimon/.tfenv/version)
1.3.0
1.1.8
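To switch to another installed version, run tfenv use with the version number, for example:
tfenv use 1.5.3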
Authenticating to Azure
The first step required to deploy resources to Azure is to authenticate, which we’ll do
using Azure CLI (PowerShell is not supported).
To authenticate to Azure, run the following command and click the resulting link to
open the Azure portal login:
az login --use-device-code
If you have multiple Azure subscriptions, run the following command to find
the ID of the subscription to which you’re going to deploy resources and copy the
subscription ID.
Using the ID you copied, run the following command to set up the subscription:
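For example:
az account list --output table
az account set --subscription <subscription-id>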
We are now authenticated and ready to deploy our first Azure resource.
In this example, we are going to deploy an Azure Resource Group using Terraform
with the following code:
#1.Create_RG.tf
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
}
}
}
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "rg" {
name = "ApressAzureTerraform"
location = "Australia Southeast"
}
The previous code starts with a declaration of the Azure Terraform provider. The
Azure terraform provider is called azurerm.
#1.Create_RG.tf
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
}
}
}
provider "azurerm" {
features {}
}
The second part of the code is the declaration of the resource group we’re going to
create and deploy to Azure.
We are deploying a resource group called ApressAzureTerraform in the Australia
Southeast data center.
The Terraform workflow is built around four commands:
terraform init
terraform plan
terraform apply
terraform destroy
In the following deployment, we’ll use all of these commands as we go through the
cycle of creating and deleting resources from Azure.
terraform init
Note We can pin a specific provider version by using the version option in the
required_providers block.
Terraform has created a lock file called .terraform.lock.hcl to record the provider
selections it made. Include this file in your version control repository so that Terraform
can guarantee it makes the same selections by default when you run "terraform init"
in the future.
Note It is essential that you review the changes carefully, as changes made by
Terraform are irreversible.
terraform apply
Let’s now review the planned changes one more time and type “yes” to confirm.
Enter a value:
After a little time, Terraform will display a message saying that the resources were
deployed successfully. The output of the message follows:
terraform destroy
Terraform will then again display a detailed configuration message outlining the
changes and their impact on the infrastructure. It is critical that you review these
changes carefully, especially when managing live and production resources.
Enter a value:
If you are OK with the changes, type “yes” and Terraform will delete the resources
outlined in the output of the destroy command.
Summary
This chapter covered the basics of getting started with Terraform and installing the tools
required to use it. In the last section, we put all our learning into practice and deployed
an Azure resource group using Terraform.
CHAPTER 2
Azure Web App for Containers
Azure Web App for Containers lets us run containerized applications on the App Service platform, using runtimes such as the following:
• .NET
• Java
• Python
• Node
The deployment process also allows us to pull our images from container registries
like Azure Container Registry (ACR) and Docker Hub or use source-code repositories
like Azure Repos or GitHub.
Provider Configuration
To simplify things and make the code more scalable and portable, I have created the
following file: provider.tf.
The provider file contains all the details of the provider, and in our case, it’s the
Azure provider. By separating the provider configuration from the main configuration
files, we centralize the provider configuration and reduce duplication.
The content of the provider.tf is:
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
}
}
}
provider "azurerm" {
features {}
}
The first block in the configuration creates a resource group. The name of the
block is "rg." Terraform doesn't care what you name the block, but the name needs to be
used consistently, as we will refer to it elsewhere in the configuration.
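For reference, a minimal version of that block looks like this (the resource group name matches the deployment output later in the chapter; pick whichever region is closest to you):
resource "azurerm_resource_group" "rg" {
  name     = "ApressAzureTerraformCH02"
  location = "West Europe"
}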
The second piece of code, detailed in the following, creates a Linux App Service plan
named "Linux" that uses the P1v2 SKU.
These two blocks of code define the Docker image that will be used in the Web App
for Containers and the settings that are needed for it to run.
Note Later on in the book, we will create a private container registry in Azure
and use it for deployments.
The last block of code, outlined in the following, creates the actual app that will be
deployed to Web App for Containers. The important parts in the configuration are the
application_stack and the app_settings blocks.
site_config {
always_on = "true"
application_stack {
docker_image_name = "httpd:latest"
docker_registry_url = "https://index.docker.io/"
}
}
app_settings = {
"DOCKER_ENABLE_CI" = "true"
}
The deployment that we’re using is not overly complicated but does have all the
moving parts needed to run an application on Web App for Containers.
The purpose of the Terraform plan command is to preview the planned changes
before applying them and review their potential impact on the infrastructure.
When running a plan command, we need to navigate to the directory containing the
Terraform configuration files and issue the following command:
terraform plan
The plan command will run the previous steps against every configuration file that
ends with the .tf extension.
• Mapping the file’s resources: Mapping the resources listed in the file
(terraform.tfstate) and the resources in Azure is the purpose of
this file. If the configuration of Azure resources is different, Terraform
will try to “fix” it and give it the same configuration as those in the
state files by deleting or removing those resources that can be risky.
• Mapping: The state file holds information about the resource type;
name; provider; and attributes like DNS, IP Address, and so on.
• Sensitive data: The state file should be excluded from source control
storage because it contains sensitive data like passwords, API
(application programming interface) keys, and more. When using a
remote state file, the storage should be encrypted.
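If you do store the state remotely, a minimal azurerm backend configuration looks like the following sketch; all of the names here are placeholders for a storage account and container you create beforehand:
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"        # placeholder
    storage_account_name = "tfstatestorage001" # placeholder, must be globally unique
    container_name       = "tfstate"           # placeholder
    key                  = "webapp.terraform.tfstate"
  }
}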
Note To skip the confirmation step, we can add the -auto-approve flag to the
apply and destroy commands.
1. Open the VS Code terminal and browse to the folder where
the Terraform configuration exists.
az login --use-device-code
3. If you have more than one Azure subscription, use the following
command to set your subscription:
Note To list all your Azure subscription IDs using PowerShell, use the following
command: Get-AzSubscription | Select-Object Name, SubscriptionId.
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
terraform plan
+ site_config {
+ always_on = true
+ container_registry_use_managed_identity = false
+ default_documents = (known after apply)
+ detailed_error_logging_enabled = (known after apply)
+ ftps_state = "Disabled"
+ health_check_eviction_time_in_min = (known after apply)
+ http2_enabled = false
+ linux_fx_version = (known after apply)
+ load_balancing_mode = "LeastRequests"
+ local_mysql_enabled = false
+ managed_pipeline_mode = "Integrated"
+ minimum_tls_version = "1.2"
+ remote_debugging_enabled = false
+ remote_debugging_version = (known after apply)
+ scm_minimum_tls_version = "1.2"
+ scm_type = (known after apply)
+ application_stack {
+ docker_image_name = "httpd:latest"
+ docker_registry_password = (sensitive value)
+ docker_registry_url = "https://index.docker.io/"
+ docker_registry_username = (known after apply)
}
}
}
The plan command is important, and you should always take a few minutes to review
its output. More specifically, always review the last line of the output, which
summarizes the planned actions.
In our case, the plan command reports the following summary:
Plan: 3 to add, 0 to change, 0 to destroy. In existing environments, the
output might also show change and destroy actions; make sure you go through
the list of changes and understand them before proceeding to the apply command.
6. Next, we’ll run the following command:
terraform apply
The output of this command will be similar to that of the plan command. However, it
will also include the following output plus confirmation:
Enter a value:
I will go ahead and type “yes” here and let Terraform create the web application as
per the configuration.
The terraform apply output is shown in the following code. The time it takes
to create the infrastructure depends on the number of resources in the configuration. In
our case, it should take less than a minute to complete the deployment.
azurerm_resource_group.rg: Creating...
azurerm_resource_group.rg: Creation complete after 1s [id=/subscriptions/
subid/resourceGroups/ApressAzureTerraformCH02]
azurerm_service_plan.appservice: Creating...
azurerm_service_plan.appservice: Creation complete after 7s [id=/
subscriptions/subid/resourceGroups/ApressAzureTerraformCH02/providers/
Microsoft.Web/serverfarms/Linux]
azurerm_linux_web_app.webapp: Creating...
azurerm_linux_web_app.webapp: Still creating... [10s elapsed]
azurerm_linux_web_app.webapp: Still creating... [20s elapsed]
Now, the web app has been deployed and we can open the properties of the web app
in the Azure portal and click the URL to see it in action.
The output of the web app is shown in Figure 2-1.
In our deployment, we’re using the httpd Docker image, which runs the Apache Web
Server, and it displays the default home page.
You can find the web app URL in the Azure portal by taking the following steps:
1. Open the Azure portal using the following URL: https://portal.azure.com.
2. Open the ApressTFWebApp web app.
Terraform Output
I have to say that retrieving the web app URL required a few clicks, opening a web
browser, and logging into the portal. To make our lives a bit easier, Terraform can also
output the same information we retrieved from the browser on the output screen after
deploying the application.
The purpose of the output command is to display information about the deployment
on our screen without requiring us to open the portal to look for it. After all, Terraform
already has all the information about our deployment, so outputting it to the screen
is simple.
The Terraform output command is very powerful and allows us to retrieve
deployment values from the Terraform state file that holds all the attributes. It also
provides access to values without having to read the state file directly.
To use the output command, I have created a file called output.tf with the
following configuration:
output "web_app_url" {
value = azurerm_linux_web_app.webapp.default_hostname
}
In the configuration, I declared one output value called web_app_url that holds
the Azure Web App's default hostname.
To view the hostname of the deployed web app, we can run the terraform apply
command as normal or output the value postdeployment using:
terraform output
The following output shows the web app URL we get when we run the Terraform
apply command:
Changes to Outputs:
+ web_app_url = "apresstfwebapp.azurewebsites.net"
You can apply this plan to save these new output values to the Terraform
state, without changing any real infrastructure.
Outputs:
web_app_url = "apresstfwebapp.azurewebsites.net"
The previous example shows one output; however, in more complex deployments,
we could output almost any attribute in the deployment.
touch .gitignore
# Ignore logs
*.log
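Typical entries for a Terraform repository also ignore the local provider cache and the state files, for example:
# Local .terraform directories
**/.terraform/*
# State files
*.tfstate
*.tfstate.*
# Crash log files
crash.log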
If the .gitignore file is working, you’ll see the ignored files marked in white rather
than green, as shown in Figure 2-3, indicating that they are not being tracked.
Adding files to the .gitignore file after they’ve been tracked won’t remove them
from the repository; it will only prevent future changes from being staged.
To stop tracking files that Git already tracks, use the following Git command:
git rm --cached
Figure 2-3. Terraform file being ignored by Git, as indicated by being marked
in white
2. After adding the file, open a terminal window and find the
repository with which you’d like to use the global file, then run the
following command:
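One way to do that (the file path here is a placeholder for wherever you keep your global ignore file) is:
git config --global core.excludesfile ~/.gitignore_global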
When you're done experimenting, you can clean up the deployed resources with:
terraform destroy
Scaling
The first feature I want to touch on is the management of resources a Web App uses in
terms of RAM and CPU. As I mentioned earlier, Terraform has the capability of managing
almost any aspect of our deployment and scaling apps is one of them.
Regarding scaling, Azure Web Apps resources are managed at the app service plan
resource we have in our configuration. The code is as follows:
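The relevant part of the plan block (shown in full earlier in this chapter) is the SKU assignment:
resource "azurerm_service_plan" "appservice" {
  # name, location, resource group, and os_type as before
  sku_name = "P1v2"
}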
If you look closely at the code, we manage the resources by assigning a Stock Keeping
Unit (SKU) using the sku_name option. Currently, Azure offers ten app service plan
options for Linux, as listed in Figure 2-4.
The process of adding more resources to an app service plan is called "scaling up," and
the opposite process is called "scaling down."
To change an app service plan, we just need to change the sku_name value as follows
and then run terraform apply.
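For example, to move to a larger plan (any valid App Service SKU name works here):
sku_name = "P2v2"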
• standard
• premium
• isolated
With automatic backup, we are limited to 30 GB of backup, and backups run every
hour without the option to run manual backups. Backups are retained for 30 days and
cannot be downloaded to a local machine.
If you require a custom approach to your backups, you can use a custom backup by
setting up a storage account to hold the backups. Once configured, the backup frequency
and the retention period can be configured and changed.
Custom backups can be downloaded to an Azure storage blob.
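If you want to drive a custom backup from Terraform, the azurerm_linux_web_app resource accepts a backup block. The following is only a sketch; the backup name is a placeholder, and the SAS URL variable is something you would have to supply from your storage account:
backup {
  name                = "webapp-backup"     # placeholder
  storage_account_url = var.backup_sas_url  # SAS URL of the target storage container (placeholder)
  schedule {
    frequency_interval = 1
    frequency_unit     = "Day"
  }
}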
Customizing Deployment
Before we move on to the next section, I’d like to show you how powerful Terraform is
when it comes to customized deployments. In the following example, we’re going to
generate a random web app name for our application using Terraform.
To make our deployments easier and much more customized, Terraform has created
a few providers that can help us generate random numbers, IDs, passwords, and more.
Going back to our Web App for Containers code, I’m now going to add a new code
block that will generate a random number that I will use to make my web app name.
In the following code block, we’ll use the random provider to generate a random
number and use it to create the name of our web app. The provider details are available
at https://registry.terraform.io/providers/hashicorp/random/latest/docs.
In essence, this code will generate a random number between 1 and 20. I will use
that number in the web app code block to form my web app name and URL.
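A sketch of that block using the random provider's random_integer resource:
resource "random_integer" "random" {
  min = 1
  max = 20
}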
site_config {
always_on = "true"
application_stack {
docker_image_name = "httpd:latest"
docker_registry_url = "https://index.docker.io/"
}
}
app_settings = {
"DOCKER_ENABLE_CI" = "true"
}
When I run the Terraform apply command, Terraform will generate a number and
use it to form the web app URL, the result of which will be:
Outputs:
web_app_url = "apresstfwebapp18.azurewebsites.net"
The URL has now been formed and has the number 18 in it.
Variable Interpolation
You probably noticed that in the part of the previous code where we generated the
number and formed the web app URL we used the following code to form the name:
"ApressTFWebApp${random_integer.random.result}"
This example is perfect for taking the opportunity to introduce the concept of
variable interpolation.
In Terraform, variable interpolation is the process of using the values of variables
within your Terraform configuration. Interpolation uses the following syntax:
${}
HTTPS
Azure Web Apps allows us to secure our applications using the HTTPS protocol,
and by default, every deployment comes with an HTTPS URL enabled. To take this
configuration a step further, we can also disable the use of HTTP using Terraform.
We can add the following line to the web app block if we want to make our web app
support HTTPS only:
https_only = "true"
We can also enforce our web application to communicate using only the transport
layer security (TLS) 1.2 HTTPS protocol and disable the use of unsecured TLS protocols
like TLS 1.0.
The following line of code will set the minimum TLS protocol to 1.2:
minimum_tls_version = "1.2"
Another security feature that we can use is static IP restriction. By default, access to
our web service is available to all IP (Internet protocol) addresses; however, we can limit
which IP addresses have access to our application using IP restrictions.
The following code block adds restrictions to our web app from a static IP block:
ip_restriction {
ip_address = "10.0.0.0/24"
action = "Allow"
}
• HTTPS only
• IP restrictions
https_only = "true"
site_config {
always_on = "true"
minimum_tls_version = "1.2"
application_stack {
docker_image_name = "httpd:latest"
docker_registry_url = "https://index.docker.io/"
}
}
app_settings = {
"DOCKER_ENABLE_CI" = "true"
}
Private Endpoints
Private endpoints for web apps provide the ultimate security feature for securing web
apps in Azure. These endpoints only allow access to web apps from private networks and
block access to them by general users on the Internet.
A private network can be either an Azure Virtual Network (Vnet) or an on-premises
network.
Private endpoints allow access to web apps from on-premises networks only or from
Azure private networks.
To configure a private endpoint, we must create a Vnet and attach the web app to
the network through an internal network interface controller (NIC).
In brief, Azure private endpoints use a private network interface that is available
on a virtual network. When a private endpoint is being created, a private IP address is
assigned to the web app instead of a public IP address.
We can also use access restrictions to whitelist or blacklist specific IP ranges or IP
addresses.
To access a private endpoint from a Vnet, Azure uses a private domain name system
(DNS) zone to resolve the private IP address.
• An Azure subnet
• Virtual network connectivity
• A private endpoint
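The networking pieces start with a virtual network and a dedicated subnet for the private endpoint. The resource names and address ranges below are placeholders:
resource "azurerm_virtual_network" "vnet" {
  name                = "webapp-vnet"    # placeholder
  address_space       = ["10.0.0.0/16"]  # placeholder
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
}

resource "azurerm_subnet" "endpointsubnet" {
  name                 = "private-endpoint-subnet"  # placeholder
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name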
address_prefixes = ["10.0.2.0/24"]
private_endpoint_network_policies_enabled = true
}
https_only = "true"
site_config {
always_on = "true"
minimum_tls_version = "1.2"
application_stack {
docker_image_name = "nginx:latest"
docker_registry_url = "https://index.docker.io/"
}
}
app_settings = {
"DOCKER_ENABLE_CI" = "true"
vnet_route_all_enabled = "true"
resource "azurerm_app_service_virtual_network_swift_connection"
"vnetintegrationconnection" {
app_service_id = azurerm_linux_web_app.webapp.id
subnet_id = azurerm_subnet.webappssubnet.id
}
private_dns_zone_group {
name = "privatednszonegroup"
private_dns_zone_ids = [azurerm_private_dns_zone.dnsprivatezone.id]
}
private_service_connection {
name = "privateendpointconnection"
private_connection_resource_id = azurerm_linux_web_app.webapp.id
subresource_names = ["sites"]
is_manual_connection = false
}
}
Note Disabling Azure Web App public access is not possible with Terraform.
To disable public access, open your newly created web app and click “Networking,”
as shown in Figure 2-5. Then, click “Access restriction” and uncheck “Allow public
access,” as shown in Figure 2-6.
We can also use the Azure CLI to disable public access with the following CLI
command:
If someone on the Internet tries to open the web app after you’ve disabled public
access, they’ll receive the error message shown in Figure 2-7.
Figure 2-7. Error message users will receive after public access has been disabled
Summary
In this chapter, we covered the configuration and deployment of Azure Web App for
Containers using Terraform. During the process, we learned about the following:
• variable interpolation
• private endpoints
With these features, users should understand all the moving parts of a Terraform
deployment.
As always, when deploying Terraform, it’s important to take extra time to review
the terraform plan command and ensure you understand the impact of the planned
changes on the environment.
To control Azure resource costs, make sure you use the terraform destroy command to delete
the resources after you're done with the trial setup.
CHAPTER 3
Azure Container Registry
Azure Container Registry (ACR) is Azure's managed, private container registry service. Some of its key features include the following:
• Role-based access control: ACR uses the Azure Active Directory (AD)
for RBAC and allows users to grant permissions easily to other users
and groups in Azure without creating duplicate identities.
• Private link: ACR supports Azure Private Link, which allows users
to make the registry available from internal networks connected to
Azure and block any access to it from public networks.
The main takeaway from the list of things we're going to do is that we're not going to
use a local Docker installation to build and push our image from a Dockerfile; instead,
we'll do it with the Azure CLI using ACR Tasks.
ACR Tasks is a suite of features in ACR that allows us to build container images for
any platform. It also allows us to automate the build process using triggers like source
code updates.
The advantage of ACR Tasks is that it eliminates the need for a local Docker engine
installation and licensing for large businesses.
For example, the Docker build command in ACR Tasks is az acr build, which
builds and pushes the image to an ACR repository.
Terraform Configuration
The following Terraform configuration can be used to create an ACR:
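The registry resource itself opens with the name, location, SKU, and admin setting (the same values appear again in the pricing-tier discussion later in this chapter):
resource "azurerm_container_registry" "acr" {
  name                = "apresstfacr"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  sku                 = "Basic"
  admin_enabled       = true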
tags = {
environment = "dev"
}
}
output "acr_url" {
value = azurerm_container_registry.acr.login_server
}
output "admin_username" {
value = azurerm_container_registry.acr.admin_username
sensitive = true
}
output "admin_password" {
value = azurerm_container_registry.acr.admin_password
sensitive = true
}
• Admin password: the password that will be used to log in using the
admin account.
Changes to Outputs:
+ acr_url = (known after apply)
+ admin_password = (sensitive value)
+ admin_username = (sensitive value)
If you look at the output, you’ll see that Terraform is going to create two resources, a
resource group and an ACR.
Adding Tags
In case you didn’t notice in the ACR code, I’m also tagging the ACR resource with a tag
that uses the following code:
tags = {
environment = "dev"
}
Outputs:
acr_url = "apresstfacr.azurecr.io"
admin_password = <sensitive>
admin_username = <sensitive>
{
"acr_url": {
"sensitive": false,
"type": "string",
"value": "apresstfacr.azurecr.io"
},
"admin_password": {
"sensitive": true,
"type": "string",
"value": "PASSWORD SHOWS HERE"
},
"admin_username": {
"sensitive": true,
"type": "string",
"value": "apresstfacr"
}
1. Create a Dockerfile: Creating the file is done in one step, and that's to pull the
hello-world Docker image by using the following code:
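A one-line Dockerfile is enough for this test:
FROM hello-world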
2. Build a Docker image using the Azure ACR Tasks CLI: Using the
following Az CLI command, we’re going to build our image using
the Dockerfile we created. Make sure you change the registry
address to your own ACR URL.
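For example (the registry name here is the one created earlier; the image name and tag match the repository listed in the next step):
az acr build --registry apresstfacr --image ch03/image01:v1 .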
The following command output shows all the steps ACR Tasks
takes in order to push the image to ACR:
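3. List the repositories: To confirm that the push succeeded, list the repositories in the registry (for example, with az acr repository list --name apresstfacr); the output shows the new repository: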
[
"ch03/image01"
]
4. Run the image: The final step in this process is to check if the
image can run once it has been uploaded to ACR. Once again,
we’re going to use ACR Tasks using the following command:
In our ACR configuration, we set the pricing tier in the following SKU section:
name = "apresstfacr"
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
sku = "Basic"
admin_enabled = true
ACR Tasks
The following table lists several ACR tasks that are part of the Azure CLI command-line
utility and describes what they do.
Command Details
If, for example, we wanted to view all the ACR tasks that have finished running we
could use the following command:
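az acr task list-runs --registry apresstfacr --resource-group ApressAzureTerraformCH03 -o table
The same command is wrapped in a Terraform provisioner later in this chapter.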
• running commands
• running scripts
• custom logic
provisioner "local-exec" {
command = "echo Resource created at ${timestamp()}"
}
}
This code can be used with any Terraform configuration file to run scripts or
commands.
To use the null_resource with our configuration, for example, we can add the
following code and run the following Azure CLI Tasks command:
provisioner "local-exec" {
command = <<EOT
    az acr task list-runs --registry ${azurerm_container_registry.acr.name} --resource-group ${azurerm_resource_group.rg.name} -o table
EOT
}
}
In the basic example, we're using the provisioner with the local-exec option,
which runs the command on the system where Terraform is running; in our case,
that's our local machine.
We can also reference Terraform resources in the command instead of hard-coding the ACR
registry name and the Azure resource group.
To reference them, I'm using the ${} syntax to pull in the details the Azure CLI
command needs in order to run.
The output from the null_resource and the command is:
null_resource.run-commands: Creating...
null_resource.run-commands: Provisioning with 'local-exec'...
null_resource.run-commands (local-exec): Executing: ["/bin/sh" "-c" "
az acr task list-runs --registry apresstfacr --resource-group
ApressAzureTerraformCH03 -o table\n"]
null_resource.run-commands (local-exec):
RUN ID TASK PLATFORM STATUS TRIGGER STARTED
DURATION
null_resource.run-commands (local-exec):
-------- ------ ---------- --------- --------- --------------------
----------
null_resource.run-commands (local-exec):
cs2 linux Succeeded Manual 00:00:14
null_resource.run-commands (local-exec):
cs1 linux Succeeded Manual 00:00:10
null_resource.run-commands: Creation complete after 1s [id=3069836889725941298]
tags = {
environment = "dev"
}
provisioner "local-exec" {
command = <<EOT
az acr task list-runs --registry ${azurerm_container_registry.acr.
name} --resource-group ${azurerm_resource_group.rg.name} -o table
EOT
}
}
Securing ACR
In the last section of this chapter, we’re going to focus on the security features of ACR,
which we can implement using Terraform with the help of other Azure services.
In this section, we’re going to deploy an ACR with the following features:
• Purge protection
In this example, we’re looking up the service principal ID of an identity object called
acr-admin.
In the configuration code in the next section, we’re going to use data sources
multiple times to reference ID and configuration items of resources in Azure and in the
actual Azure configuration.
The next piece of code will create a resource group. There is no change here from the
previous ACR deployment.
The code that follows will create an Azure Key Vault that we’ll use to store the
encryption key that will encrypt all the data in the container registry, including images.
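Its opening attributes might look like the following sketch; the vault name is a placeholder (it must be globally unique), and purge protection is enabled because it is expected for customer-managed key scenarios:
resource "azurerm_key_vault" "keyvault" {
  name                     = "apresstfkeyvault"  # placeholder
  location                 = azurerm_resource_group.rg.location
  resource_group_name      = azurerm_resource_group.rg.name
  tenant_id                = data.azurerm_client_config.current.tenant_id
  sku_name                 = "premium"           # "standard" also works
  purge_protection_enabled = true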
access_policy {
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
key_permissions = [
"List",
"Get",
"Create",
"Delete",
"Get",
"Purge",
"Recover",
"Update",
"GetRotationPolicy",
"SetRotationPolicy",
"WrapKey",
"UnwrapKey"
]
secret_permissions = [
"Get",
"List",
"Set"
]
storage_permissions = [
"Get",
"List",
"Set"
]
}
}
Next, we’ll create a Key Vault access policy, which is an access policy control
specifying what kind of permissions each Azure AD identity has access to in the Key
Vault. In our case, we’ll give a service principal account access to the vault with the
permissions listed under key_permissions.
key_permissions = [
"List",
"Get",
"Create",
"Delete",
"Get",
"Purge",
"Recover",
"Update",
"GetRotationPolicy",
"SetRotationPolicy",
"WrapKey",
"UnwrapKey"
]
}
The code that follows will read the name of the Azure Key Vault we created, as we’ll
need to use it soon.
This next code block will create an Azure Key Vault key that we’ll use to encrypt the
data. The code has key configuration items like type and size. We’re also defining what
kind of operations are allowed with the key and rotation policy.
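The opening of that resource might look like this; the key name is a placeholder, and the key_vault_id references the vault created above:
resource "azurerm_key_vault_key" "key" {
  name         = "acr-encryption-key"  # placeholder
  key_vault_id = azurerm_key_vault.keyvault.id
  key_type     = "RSA"
  key_size     = 2048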
key_opts = [
"decrypt",
"encrypt",
"sign",
"unwrapKey",
"verify",
"wrapKey",
"unwrapKey"
]
rotation_policy {
automatic {
time_before_expiry = "P30D"
}
expire_after = "P90D"
notify_before_expiry = "P29D"
}
}
The following data source code block will read the key we just created and store its
details, as we will use it again shortly.
And in the next piece of code, we’ll create a user-assigned identity that we’ll use to
manage ACR encryption and interaction with Azure Key Vault. The username is acr-admin.
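A sketch of that identity resource:
resource "azurerm_user_assigned_identity" "identity" {
  name                = "acr-admin"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
}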
The final block of code that follows will create an ACR with these features:
• encryption enabled
tags = {
environment = "dev"
}
identity {
type = "UserAssigned"
identity_ids = [
azurerm_user_assigned_identity.identity.id
]
}
encryption {
enabled = true
key_vault_key_id = data.azurerm_key_vault_key.readkey.id
identity_client_id = azurerm_user_assigned_identity.identity.client_id
}
The full code follows. It is important that you go over the code to understand how
the configuration is done. An important thing to note is that Terraform builds a
dependency graph from the resource references and decides the order of deployment
itself, without looking at the order the code is written in.
access_policy {
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
key_permissions = [
"List",
"Get",
"Create",
"Delete",
"Get",
"Purge",
"Recover",
"Update",
"GetRotationPolicy",
"SetRotationPolicy",
"WrapKey",
"UnwrapKey"
]
secret_permissions = [
"Get",
"List",
"Set"
]
storage_permissions = [
"Get",
"List",
"Set"
]
}
}
key_permissions = [
"List",
"Get",
"Create",
"Delete",
"Get",
"Purge",
"Recover",
"Update",
"GetRotationPolicy",
"SetRotationPolicy",
"WrapKey",
"UnwrapKey"
]
}
key_opts = [
"decrypt",
"encrypt",
"sign",
"unwrapKey",
"verify",
"wrapKey",
"unwrapKey"
]
rotation_policy {
automatic {
time_before_expiry = "P30D"
}
expire_after = "P90D"
notify_before_expiry = "P29D"
}
}
tags = {
environment = "dev"
}
identity {
type = "UserAssigned"
identity_ids = [
azurerm_user_assigned_identity.identity.id
]
}
encryption {
enabled = true
key_vault_key_id = data.azurerm_key_vault_key.readkey.id
identity_client_id = azurerm_user_assigned_identity.identity.client_id
}
Now, if you open the Azure portal, go to the newly deployed ACR, and click
“Encryption,” you’ll see that encryption has been enabled using the identity we set in the
code, as shown in Figure 3-1.
Another thing that can be configured is the removal of public access to our ACR, so
that it is only available to internal Azure networks, as shown in Figure 3-2.
If you click the “Private Access” tab shown in Figure 3-2, you’ll see that we can also
configure ACR to have private access. After enabling private access to an ACR endpoint,
ACR will accept traffic from private virtual networks only.
ACR Georeplication
ACR allows us to optimize the performance of our container registry by enabling
georeplication, so that a single registry serves multiple regions.
The main benefits of using ACR georeplication are:
If needed, we can enable georeplication with Terraform by adding the following code
block to the ACR code block:
georeplications {
location = "Australia Central"
zone_redundancy_enabled = false
tags = {}
}
The entire ACR code block should now look like this:
tags = {
environment = "dev"
}
identity {
type = "UserAssigned"
identity_ids = [
azurerm_user_assigned_identity.identity.id
]
}
encryption {
enabled = true
key_vault_key_id = data.azurerm_key_vault_key.readkey.id
identity_client_id = azurerm_user_assigned_identity.identity.client_id
}
georeplications {
location = "Australia Central"
zone_redundancy_enabled = false
tags = {}
}
Once the replication is enabled, you can check the status from the ACR page in the
Azure portal under “Georeplication,” as shown in Figure 3-3. The two locations that form
the Georeplications are shown under the Name header.
variable "acr_image" {
type = string
default = "ch03/image01:v1"
variable "acruser" {
type = string
default = "apresstfacr"
82
Chapter 3 Azure Container Registry
variable "acr_server" {
type = string
default = "https://apresstfacr.azurecr.io"
variable "acr_password" {
type = string
In the previous file, we have declared four variables, which we populated with
default values. The last variable is the password for the ACR username, which we aren’t
going to save to the variable file. We’ll pass the password as a parameter using the
command line, as you will soon see.
docker_image_name = var.acr_image
docker_registry_url = var.acr_server
docker_registry_username = var.acruser
docker_registry_password = var.acr_password
As you can see, we are referencing the variables and not the actual values of the
registry. To reference a variable, we use:
var.varname
The entire configuration for Web App for Containers with the ACR registry is as follows:
https_only = "true"
site_config {
always_on = "true"
minimum_tls_version = "1.1"
application_stack {
docker_image_name = var.acr_image
docker_registry_url = var.acr_server
docker_registry_username = var.acruser
docker_registry_password = var.acr_password
}
app_settings = {
"DOCKER_ENABLE_CI" = "true"
}
-var="varname=varvalue"
The following Terraform apply command will deploy the code and pass the
password to Azure:
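For example (supply the admin password from the ACR outputs in place of the placeholder):
terraform apply -var="acr_password=<your-acr-admin-password>"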
1. To access the deployment logs, open the Azure portal and click on
the web app we just deployed.
Summary
In this chapter, we learned how to create an ACR registry with basic configuration as
well as with advanced security features. The last part of the chapter was focused on the
integration of ACR with Azure Web App for Containers.
CHAPTER 4
Azure Container Instances
Azure Container Instances (ACI) lets us run containers in Azure without managing servers, and it offers benefits such as the following:
• Security: ACI workloads are secure and isolated; all applications run
in their own dedicated environment.
• Integration: ACI is fully integrated with other Azure services like ACR,
Azure Functions, and others.
Use Cases
ACI's main use cases include the following scenarios:
container {
name = "web-server"
image = "httpd:latest"
cpu = "2"
memory = "4"
ports {
port = 80
protocol = "TCP"
}
}
tags = {
environment = "dev"
}
}
The other components that are available for configuration are the image name, DNS
name, IP address type, and name of the container.
Full Code
To deploy the container, we'll use the following full configuration and deploy it to Azure.
As a reminder, the deployment steps are the same as before: terraform init, terraform plan,
and then terraform apply.
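The container group resource opens with the group-level settings; the values below match the outputs and the az container show details later in this chapter:
resource "azurerm_container_group" "acigroup" {
  name                = "ApressTerraform"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  ip_address_type     = "Public"
  dns_name_label      = "apressterraformbook"
  os_type             = "Linux"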
container {
name = "web-server"
image = "httpd:latest"
cpu = "2"
memory = "4"
ports {
port = 80
protocol = "TCP"
}
}
tags = {
environment = "dev"
}
}
To check whether the deployment was successful, I created an output file called
output.tf that outputs all the information about the ACI instance to the terminal. The
output will display the following details:
• the resource ID
• the public IP
• tags
output "fqdn" {
value = azurerm_container_group.acigroup.fqdn
}
output "id" {
value = azurerm_container_group.acigroup.id
}
output "ip_address" {
value = data.azurerm_container_group.acigroup.ip_address
}
output "zones" {
value = data.azurerm_container_group.acigroup.zones
}
output "tags" {
value = data.azurerm_container_group.acigroup.tags
}
Please note that for us to output values that are not in the configuration, we
have to call the data source of the entire resource, which in our case is
azurerm_container_group.
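A sketch of that data source (it simply reads back the container group we created):
data "azurerm_container_group" "acigroup" {
  name                = azurerm_container_group.acigroup.name
  resource_group_name = azurerm_resource_group.rg.name
}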
container {
name = "web-server"
image = "httpd:latest"
cpu = "2"
memory = "4"
ports {
port = 80
protocol = "TCP"
}
}
container {
name = "web-server02"
image = "nginx:latest"
cpu = "2"
memory = "4"
ports {
port = 82
protocol = "TCP"
}
}
variable "acr_image" {
type = string
default = "ch03/image01:v1"
}
92
Chapter 4 Azure Container Instances
variable "acruser" {
type = string
default = "apresstfacr"
variable "acr_server" {
type = string
default = "https://apresstfacr.azurecr.io"
variable "acr_password" {
type = string
To configure an ACI instance to use a Docker image, I have added the following code
block to handle the ACR authentication:
image_registry_credential {
server = var.acr_server
username = var.acruser
password = var.acr_password
}
container {
name = "container"
image = var.acr_image
cpu = "2"
memory = "2"
ports {
port = 80
protocol = "TCP"
}
image_registry_credential {
server = var.acr_server
username = var.acruser
password = var.acr_password
}
container {
name = "container"
image = var.acr_image
cpu = "2"
memory = "2"
ports {
port = 80
protocol = "TCP"
}
}
tags = {
environment = "dev"
}
}
• mount a volume
Let's start by breaking down the new code before deploying to Azure.
Storage Account
The following code block will create a standard-tier storage account and locally
redundant storage (LRS) replication type. Just remember that the Azure storage account
needs to be unique in the platform.
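The block opens with the account name and resource group; the account name below is the one used later in this chapter, so replace it with your own globally unique name:
resource "azurerm_storage_account" "storageact" {
  name                     = "apresstfch04storage"
  resource_group_name      = azurerm_resource_group.rg.name
  location                 = azurerm_resource_group.rg.location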
account_tier = "Standard"
account_replication_type = "LRS"
}
You can name the storage account with any unique name and change the storage tier
as needed. Also note that we’re using the same resource group and location, as there is
no need to change them.
• the name
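The file share itself is created with an azurerm_storage_share resource; a sketch (the quota, in gigabytes, is an assumption):
resource "azurerm_storage_share" "share" {
  name                 = "aci-apress-tf-share"
  storage_account_name = azurerm_storage_account.storageact.name
  quota                = 50  # size in GB; assumption
}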
volume {
name = "logs"
mount_path = "/apress/logs"
read_only = false
share_name = azurerm_storage_share.share.name
storage_account_name = azurerm_storage_account.storageact.name
storage_account_key = azurerm_storage_account.storageact.primary_access_key
}
After those three code blocks are added to the configuration, all that remains is to
deploy the code with terraform apply.
dns_name_label = "apressterraformbook"
os_type = "Linux"
container {
name = "web-server"
image = "httpd:latest"
cpu = "2"
memory = "4"
ports {
port = 80
protocol = "TCP"
}
volume {
name = "logs"
mount_path = "/apress/logs"
read_only = false
share_name = azurerm_storage_share.share.name
storage_account_name = azurerm_storage_account.storageact.name
storage_account_key = azurerm_storage_account.storageact.primary_access_key
}
}
tags = {
environment = "dev"
}
}
With the previous configuration in place, it's time to look at the Azure management
and monitoring capabilities for container instances. To get started,
we'll connect to a running container's terminal to check that the volume we
mounted exists.
You’ll see the following information displayed in the “Containers” section of the
page, as shown in Figure 4-2.
• the state
• start time
• “Events”
• “Properties”
• “Logs”
• “Connect”
Let’s click the “Connect” tab, then select “/bin/bash” as the startup command and
click “Connect.”
Once connected, you’ll be presented with a shell terminal where you can type any
command that is available inside the installed shell. To check that the Apress directory
was created, run the following command:
cd /apress
ls
root@SandboxHost-638281002340986774:/usr/local/apache2# cd /apress/
root@SandboxHost-638281002340986774:/apress# ls
logs
root@SandboxHost-638281002340986774:/apress#
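You can reach the same shell from the Azure CLI instead of the portal, for example:
az container exec --resource-group ApressAzureTerraformCH04 --name ApressTerraform --exec-command "/bin/bash"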
The logs are very detailed and will show any requests made to the container in
real time.
az container logs --resource-group ApressAzureTerraformCH04 --name ApressTerraform
AH00558: httpd: Could not reliably determine the server's fully qualified
domain name, using 127.0.0.1. Set the 'ServerName' directive globally to
suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified
domain name, using 127.0.0.1. Set the 'ServerName' directive globally to
suppress this message
[Sun Aug 20 03:50:59.732696 2023] [mpm_event:notice] [pid 60:tid
140600843515776] AH00489: Apache/2.4.57 (Unix) configured -- resuming
normal operations
[Sun Aug 20 03:50:59.757204 2023] [core:notice] [pid 60:tid
140600843515776] AH00094: Command line: 'httpd -D FOREGROUND'
10.92.0.6 - - [20/Aug/2023:04:03:14 +0000] "\x16\x03\x01" 400 226
10.92.0.6 - - [20/Aug/2023:04:03:15 +0000] "\x16\x03\x01" 400 226
10.92.0.6 - - [20/Aug/2023:04:03:15 +0000] "GET / HTTP/1.1" 200 45
10.92.0.6 - - [20/Aug/2023:04:03:15 +0000] "GET /client/get_targets
HTTP/1.1" 404 196
10.92.0.6 - - [20/Aug/2023:04:03:16 +0000] "GET /upl.php HTTP/1.1" 404 196
10.92.0.6 - - [20/Aug/2023:04:03:16 +0000] "\x16\x03\x01" 400 226
10.92.0.6 - - [20/Aug/2023:04:03:16 +0000] "GET /geoip/ HTTP/1.1" 404 196
10.92.0.6 - - [20/Aug/2023:04:03:17 +0000] "GET / HTTP/1.1" 200 45
10.92.0.6 - - [20/Aug/2023:04:03:17 +0000] "GET /favicon.ico
HTTP/1.1" 404 196
10.92.0.6 - - [20/Aug/2023:04:03:17 +0000] "GET /1.php HTTP/1.1" 404 196
10.92.0.6 - - [20/Aug/2023:04:03:17 +0000] "GET /bundle.js
HTTP/1.1" 404 196
10.92.0.4 - - [20/Aug/2023:04:03:18 +0000] "GET /files/ HTTP/1.1" 404 196
10.92.0.4 - - [20/Aug/2023:04:03:18 +0000] "GET /systembc/password.php
HTTP/1.1" 404 196
10.92.0.6 - - [20/Aug/2023:04:27:08 +0000] "GET / HTTP/1.1" 200 45
10.92.0.6 - - [20/Aug/2023:04:44:20 +0000] "GET / HTTP/1.0" 200 45
10.92.0.6 - - [20/Aug/2023:04:48:37 +0000] "\x16\x03\x01" 400 226
Az container attach
To view the streams log of our container, we need to run the following command:
This command attaches to the container and continuously streams its output and startup events to the terminal.
az container show
az container show --resource-group ApressAzureTerraformCH04 --name ApressTerraform
The output of the diagnostic command is displayed in JSON format, like the following:
{
  "confidentialComputeProperties": null,
  "containers": [
    {
      "command": [],
      "environmentVariables": [],
      "image": "httpd:latest",
      "instanceView": {
        "currentState": {
          "detailStatus": "",
          "exitCode": null,
          "finishTime": null,
          "startTime": "2023-08-20T03:50:59.437000+00:00",
          "state": "Running"
        },
        "events": [],
        "previousState": null,
        "restartCount": 0
      },
      "livenessProbe": null,
      "name": "web-server",
      "ports": [
        {
          "port": 80,
          "protocol": "TCP"
        }
      ],
      "readinessProbe": null,
      "resources": {
        "limits": null,
        "requests": {
          "cpu": 2.0,
          "gpu": null,
          "memoryInGb": 4.0
        }
      },
      "securityContext": null,
      "volumeMounts": [
        {
          "mountPath": "/apress/logs",
          "name": "logs",
          "readOnly": false
        }
      ]
    }
  ],
  "diagnostics": null,
  "dnsConfig": null,
  "encryptionProperties": null,
  "extensions": null,
  "id": "/subscriptions/SUBID/resourceGroups/ApressAzureTerraformCH04/providers/Microsoft.ContainerInstance/containerGroups/ApressTerraform",
  "identity": {
    "principalId": null,
    "tenantId": null,
    "type": "None",
    "userAssignedIdentities": null
  },
  "imageRegistryCredentials": null,
  "initContainers": [],
  "instanceView": {
    "events": [
      {
        "count": 1,
        "firstTimestamp": "2023-08-20T03:50:58.525000+00:00",
        "lastTimestamp": "2023-08-20T03:50:58.525000+00:00",
        "message": "Successfully mounted Azure File Volume.",
        "name": "SuccessfulMountAzureFileVolume",
        "type": "Normal"
      }
    ],
    "state": "Running"
  },
  "ipAddress": {
    "autoGeneratedDomainNameLabelScope": "Unsecure",
    "dnsNameLabel": "apressterraformbook",
    "fqdn": "apressterraformbook.westus.azurecontainer.io",
    "ip": "40.78.2.90",
    "ports": [
      {
        "port": 80,
        "protocol": "TCP"
      }
    ],
    "type": "Public"
  },
  "location": "westus",
  "name": "ApressTerraform",
  "osType": "Linux",
  "priority": null,
  "provisioningState": "Succeeded",
  "resourceGroup": "ApressAzureTerraformCH04",
  "restartPolicy": "Always",
  "sku": "Standard",
  "subnetIds": null,
  "tags": {
    "environment": "dev"
  },
  "type": "Microsoft.ContainerInstance/containerGroups",
  "volumes": [
    {
      "azureFile": {
        "readOnly": false,
        "shareName": "aci-apress-tf-share",
        "storageAccountKey": null,
        "storageAccountName": "apresstfch04storage"
      },
      "emptyDir": null,
      "gitRepo": null,
      "name": "logs",
      "secret": null
    }
  ],
  "zones": null
}
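The diagnostics block below references a Log Analytics workspace resource named log_analytics. A minimal Terraform sketch of such a workspace (the workspace name, SKU, and retention values here are assumptions, not necessarily the book's exact code) could look like this:

resource "azurerm_log_analytics_workspace" "log_analytics" {
  name                = "apress-aci-logs"                # assumed name
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  sku                 = "PerGB2018"
  retention_in_days   = 30
}

With a workspace in place, the container group can ship its logs to it through the following diagnostics block: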
diagnostics {
  log_analytics {
    workspace_id  = azurerm_log_analytics_workspace.log_analytics.workspace_id
    workspace_key = azurerm_log_analytics_workspace.log_analytics.primary_shared_key
  }
}
  diagnostics {
    log_analytics {
      workspace_id  = azurerm_log_analytics_workspace.log_analytics.workspace_id
      workspace_key = azurerm_log_analytics_workspace.log_analytics.primary_shared_key
    }
  }

  container {
    name   = "web-server"
    image  = "httpd:latest"
    cpu    = "2"
    memory = "4"

    ports {
      port     = 80
      protocol = "TCP"
    }

    volume {
      name                 = "logs"
      mount_path           = "/apress/logs"
      read_only            = false
      share_name           = azurerm_storage_share.share.name
      storage_account_name = azurerm_storage_account.storageact.name
      storage_account_key  = azurerm_storage_account.storageact.primary_access_key
    }
  }

  tags = {
    environment = "dev"
  }
Once the restart is complete, you’ll be able to see Log Analytics in action. In the
Azure portal, search for Log Analytics workspaces.
In the Queries window, type the following query to view the last 200 logs in the
containers running in ACI.
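A query along these lines returns the most recent 200 log records; the table name comes from the list of ACI source tables discussed below:

ContainerInstanceLog_CL
| sort by TimeGenerated desc
| take 200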
If you'd like to learn more about Log Analytics queries, you can visit the Kusto Query Language (KQL) overview page used by Azure: https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/.
For ACI deployment, the main query source tables are:
• ContainerInstanceLog_CL
• ContainerEvent_CL
These tables contain information about logs and events generated by ACI containers.
The log and event schemas are available for viewing on the Logs query page under
“Custom Logs,” as shown in Figure 4-8.
I’ve also added the verbose and debug switches to get more visibility into the stop
process.
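A sketch of the stop command with those switches, using this chapter's resource names:
az container stop --resource-group ApressAzureTerraformCH04 --name ApressTerraform --verbose --debug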
\ Running ..
cli.azure.cli.core.sdk.policies: {"id":"/subscriptions/SUBID/
resourceGroups/ApressAzureTerraformCH04/providers/Microsoft.
ContainerInstance/containerGroups/ApressTerraform","status":"Succeeded
","startTime":"2023-08-20T09:13:01.6699127Z","properties":{"events":[{
"count":1,"firstTimestamp":"2023-08-20T08:21:33Z","lastTimestamp":"2023-
08-20T08:21:33Z","name":"Pulling","message":"pulling image \"httpd@sha25
6:18427eed921af003c951b5c97f0bde8a6df40cc7cb09b9739b9e35041a3c3acd\"","
type":"Normal"},{"count":1,"firstTimestamp":"2023-08-20T08:21:39Z","las
tTimestamp":"2023-08-20T08:21:39Z","name":"Pulled","message":"Successfu
lly pulled image \"httpd@sha256:18427eed921af003c951b5c97f0bde8a6df40cc
7cb09b9739b9e35041a3c3acd\"","type":"Normal"},{"count":2,"firstTimesta
mp":"2023-08-20T08:21:54Z","lastTimestamp":"2023-08-20T08:32:24Z","name":"S
tarted","message":"Started container","type":"Normal"},{"count":1,"firstTim
estamp":"2023-08-20T08:32:23Z","lastTimestamp":"2023-08-20T08:32:23Z","nam
e":"Killing","message":"Killing container with id 7281d7ef819dbd99d601de58
6032bfcedd0e8468b7bcdff1b0707e50067976f1.","type":"Normal"},{"count":1,"fi
rstTimestamp":"2023-08-20T09:13:06Z","lastTimestamp":"2023-08-20T09:13:06Z
","name":"Pulling","message":"pulling image \"httpd@sha256:18427eed921af00
3c951b5c97f0bde8a6df40cc7cb09b9739b9e35041a3c3acd\"","type":"Normal"},{"co
unt":1,"firstTimestamp":"2023-08-20T09:13:13Z","lastTimestamp":"2023-08-2
0T09:13:13Z","name":"Pulled","message":"Successfully pulled image \"httpd@
sha256:18427eed921af003c951b5c97f0bde8a6df40cc7cb09b9739b9e35041a3c3acd\"",
"type":"Normal"},{"count":1,"firstTimestamp":"2023-08-20T09:13:30Z","lastTi
mestamp":"2023-08-20T09:13:30Z","name":"Started","message":"Started contain
er","type":"Normal"},{"count":1,"firstTimestamp":"2023-08-20T08:21:53.547Z"
,"lastTimestamp":"2023-08-20T08:21:53.547Z","name":"SuccessfulMountAzureFil
eVolume","message":"Successfully mounted Azure File Volume.","type":"Normal
"},{"count":1,"firstTimestamp":"2023-08-20T09:13:29.124Z","lastTimestamp":"
2023-08-20T09:13:29.124Z","name":"SuccessfulMountAzureFileVolume","message"
:"Successfully mounted Azure File Volume.","type":"Normal"}]}}
cli.knack.cli: Event: CommandInvoker.OnTransformResult [<function _
resource_group_transform at 0x7f1c957f6cb0>, <function _x509_from_base64_
to_hex_transform at 0x7f1c957f6d40>]
cli.knack.cli: Event: CommandInvoker.OnFilterResult []
cli.knack.cli: Event: Cli.SuccessfulExecute []
cli.knack.cli: Event: Cli.PostExecute [<function AzCliLogging.deinit_cmd_
metadata_logging at 0x7f1c957af010>]
az container restart --no-wait
Liveness Probes
A liveness probe runs regular checks that diagnose the health of the container instance.
It automatically triggers a restart if it detects that the container isn’t responding to
the checks.
Liveness probes are handy because they help us detect if the container needs a
restart. We configure liveness probes by specifying a path, port, or command that ACI will run periodically.
If the command is successful, the container is marked as healthy and no action is
taken. If the check fails a few times, the container is marked as unhealthy and restarted
automatically.
Readiness Probe
A readiness probe checks if the application is ready to accept incoming traffic after it completes a restart process. The checking mechanism is similar to that of the liveness probe; however, a failed readiness check stops traffic from being directed to the container instead of restarting it.
To configure these two probes, let’s add the following configuration blocks and a
single command to the container configuration block:
readiness_probe {
  exec                  = ["cat", "/tmp/healthy"]
  initial_delay_seconds = 2
  period_seconds        = 60
  failure_threshold     = 3
  success_threshold     = 1
  timeout_seconds       = 40
}
liveness_probe {
  exec                  = ["cat", "/tmp/healthy"]
  initial_delay_seconds = 2
  period_seconds        = 60
  failure_threshold     = 3
  success_threshold     = 1
  timeout_seconds       = 40
}
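The "single command" mentioned before the probe blocks is presumably a commands argument on the container that creates the /tmp/healthy file the probes check. A sketch of what it might look like (the exact command is an assumption, chained with the httpd image's normal entrypoint):

commands = ["/bin/sh", "-c", "touch /tmp/healthy && httpd-foreground"]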
Summary
In this chapter, we learned about deploying Azure Container Instances and how to do
the following:
• configure probes
CHAPTER 5
Azure Kubernetes Service
About Kubernetes
Kubernetes is an orchestration and automation container management system
developed by Google that was turned into an open-source platform in 2014. Kubernetes
is so powerful that it is used by Google to run over one billion containers and power
many of its cloud services.
The scalability of Kubernetes is almost unlimited, which gives enterprises the ability to scale up or down within a short time. It is currently considered the standard tool
for container management and automation in small and large organizations and has a
market share of almost 80 percent and growing.
There is no doubt that Kubernetes is the current go-to tool for orchestration and
automation.
Kubernetes Components
In this section, I will go over each of the components that make up Kubernetes. Once
we move on to working with AKS, we will use the knowledge of these components to
understand the service better.
Kubernetes is made up of the following three main components, each consisting of several subcomponents.
Kubernetes Master
Kubernetes Master is the most important component in Kubernetes, and as the name
implies, it is the master node, also known as the “control plane,” from which calculations
and automation are controlled.
The Kubernetes Master is made up of five components:
• Kube-apiserver: This is the API server that exposes all the APIs of Kubernetes to other components.
Kubernetes Nodes
In Kubernetes, the nodes are the actual servers that run the containers, which are grouped into pods.
The nodes are the computing units that perform the heavy lifting of deploying pods,
volumes, and networking.
• The kubelet: This agent service runs on each node and ensures that
pods are running as planned.
• The kube proxy: This network component makes sure services are
operated according to the network policies and rules set by the
master component.
Kubernetes Add-Ons
Add-ons are optional features that can be added to the Kubernetes cluster. The following
add-ons are just a few of the many available:
• DNS: This add-on provides DNS name resolution inside the cluster and is used by all Kubernetes components by default.
Now that we have some background information about Kubernetes, let’s move to the
next section, which will cover the Azure implementation of Kubernetes.
In AKS, Microsoft Azure is responsible for managing the master components. The master components are beyond our reach, with users not being granted access to them. This fact takes away the complexity of managing Kubernetes and leaves it up to us to manage only the nodes.
Our control of AKS involves managing the Kubernetes nodes with minimal
administrative tasks necessary, like scaling and updating them. In this section, we’ll go
over the deployment process of AKS using Terraform and Azure CLI.
  default_node_pool {
    name       = "default"
    node_count = var.node_count
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  tags = {
    Environment = "DEV"
  }
}
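The blocks above sit inside an azurerm_kubernetes_cluster resource. A minimal sketch of the surrounding resource (the cluster name, dns_prefix, and resource group reference are assumptions, not necessarily the book's exact values):

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "apress-aks-cluster"   # assumed name
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "apressaks"            # assumed prefix

  default_node_pool {
    name       = "default"
    node_count = var.node_count
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  tags = {
    Environment = "DEV"
  }
}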
To deploy the cluster, we’ll use the code in this book’s repository and run the
following Terraform commands:
• terraform init
• terraform plan
• terraform apply
After running these commands, review the output and make sure the cluster was deployed successfully. In the next section, we'll learn how to connect to an AKS cluster using Azure CLI and deploy containerized applications to the cluster.
If you’re following the code of this book, the command will look like this:
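The command referred to here is most likely az aks get-credentials, which merges the cluster credentials into your local kubeconfig; with this chapter's resource group it would look something like this (the cluster name is a placeholder):
az aks get-credentials --resource-group ApressAzureTerraformCH05 --name <cluster-name>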
At this stage, we can go ahead and use the kubectl command to check the status of
the nodes in our cluster. In our case, the following command will show one node:
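The command in question is almost certainly kubectl get nodes:
kubectl get nodes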
NAME STATUS ROLES AGE VERSION
aks-default-13103899-vmss000000 Ready agent 11m v1.26.6
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-server
  template:
    metadata:
      labels:
        app: web-server
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: web-server
        image: nginx:latest
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: web-server
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: web-server
If you look at the previous configuration file, you'll see that I'm using a separator (---) to divide the deployment and the service. Kubernetes also allows me to create a separate file for the service.
We set the kind of resource in the configuration file; in this example, the kind is Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
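Applying the manifest with kubectl produces the confirmation below; the file name here is an assumption:
kubectl apply -f deployment.yaml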
deployment.apps/web-server created
service/web-server created
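The EXTERNAL-IP mentioned next comes from the service listing, which you can retrieve with a command like this:
kubectl get service web-server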
We can copy the IP address in the EXTERNAL-IP field to check if the application is working. If you open a web browser and paste the external IP address into the address bar, you'll see the home page of the nginx web server shown in Figure 5-1.
Enabling Autoscaling
Another helpful feature of AKS is the ability to configure the deployment to autoscale based on usage. For example, autoscaling can scale out the pods when CPU usage rises above 60 percent.
Autoscaling can be configured using the kubectl command or a configuration file.
When using kubectl, the following command will automatically scale the
deployment if the CPU usage increases to 60 percent. The minimum number of pods in
the deployment is set to 2 and the maximum to 5.
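A sketch of that command, assuming the web-server deployment from earlier in the chapter:
kubectl autoscale deployment web-server --cpu-percent=60 --min=2 --max=5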
Just keep in mind that for autoscaling to work, CPU requests and limits must be defined for the containers in the deployment.
To use the configuration file for autoscaling, we can use the following deployment.yaml
file. We will deploy this file after we deploy the application and service.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-ha
spec:
  maxReplicas: 5
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-server
  targetCPUUtilizationPercentage: 60
To check if autoscaling is working and its status, we can use the following command:
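The command is most likely kubectl get hpa, which produces output like the listing below:
kubectl get hpa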
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
web-app-ha Deployment/web-server 1%/60% 2 5 2 2m40s
The following code will deploy both AKS and an ACR registry. The code is the
same as that for the previous AKS deployment except for the following two new
configuration blocks:
  default_node_pool {
    name       = "default"
    node_count = var.node_count
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  tags = {
    Environment = "DEV"
  }
}
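The two new configuration blocks are presumably an ACR resource plus a role assignment that lets the AKS kubelet identity pull images from the registry. A hedged sketch of what they would look like (all names are assumptions):

resource "azurerm_container_registry" "acr" {
  name                = "apressaksacr"          # assumed name
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  sku                 = "Basic"
}

resource "azurerm_role_assignment" "acr_pull" {
  principal_id                     = azurerm_kubernetes_cluster.aks.kubelet_identity[0].object_id
  role_definition_name             = "AcrPull"
  scope                            = azurerm_container_registry.acr.id
  skip_service_principal_aad_check = true
}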
Once the cluster and registry are deployed, we can push an image to the new registry using the steps we learned in Chapter 3.
memory: 256Mi
ports:
- containerPort: 80
name: redis
2. Save the file, then deploy the application using the kubectl apply command (see the sketch after this list).
3. To test the application, run the kubectl get service command with the name of the service (also sketched below).
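Hedged sketches of the commands for steps 2 and 3; the file and service names are assumptions carried over from the earlier example:
kubectl apply -f deployment.yaml
kubectl get service web-server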
AKS Volumes
In this section, I will show you how to mount a persistent storage volume to AKS and
use it with containerized applications. The process is like the one we used in Chapter 4,
where we mounted a storage volume to our ACI deployment.
In AKS, we don’t create the underlying storage account or volumes using
Terraform; the entire process is done using the kubectl command line and the YAML
configuration files.
In the following example, I will show you how to mount a persistent volume that can be dynamically provisioned to one or more pods. To create the persistent storage volume, we will use the following files:
• Create_Storage_Class.yaml
• Create_Volume_Claim.yaml
• Create_Pod_With_Volume.yaml
Once the storage is configured and deployed, I will deploy the nginx web server and
mount a persistent volume to it. All the data that is saved in the mounted storage will
remain intact after I delete the pods.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-azurefile
provisioner: file.csi.azure.com
allowVolumeExpansion: true
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=0
  - gid=0
  - mfsymlinks
  - cache=strict
  - actimeo=30
parameters:
  skuName: Premium_LRS
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-azurefile
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: my-azurefile
  resources:
    requests:
      storage: 100Gi
3. You can check if the PVC was deployed by running the following
kubectl command:
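A sketch of that check, using the claim name from the manifest above:
kubectl get pvc my-azurefile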
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: nginx:latest
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          cpu: 250m
          memory: 256Mi
      volumeMounts:
        - mountPath: /mnt/azure
          name: volume
  volumes:
    - name: volume
      persistentVolumeClaim:
        claimName: my-azurefile
To update our AKS cluster to the latest version of Kubernetes, any of the following
tools can be used:
• Azure PowerShell
• Azure CLI
• Azure portal
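The first step, checking which upgrades are available, uses Azure CLI; a sketch with this chapter's resource group (the cluster name is a placeholder):
az aks get-upgrades --resource-group ApressAzureTerraformCH05 --name <cluster-name> --output table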
2. The output of the command should show the following items: (a)
the current version; and (b) available versions.
Name ResourceGroup MasterVersion Upgrades
------- ------------------------ --------------- --------------
default ApressAzureTerraformCH05 1.26.6 1.27.1, 1.27.3
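The upgrade itself can then be triggered with a command along these lines (the target version and cluster name are assumptions based on the output above):
az aks upgrade --resource-group ApressAzureTerraformCH05 --name <cluster-name> --kubernetes-version 1.27.3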
5. After a few minutes, the cluster will run the latest version of
Kubernetes.
Table 5-1. Upgrade Channels That the AKS Cluster Can Follow
Channel Function
Once you decide which channel to use, open a terminal window and connect to
AKS. Run the following command to configure auto-upgrade using the patch channel:
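A sketch of the command (the cluster name is a placeholder):
az aks update --resource-group ApressAzureTerraformCH05 --name <cluster-name> --auto-upgrade-channel patch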
You can also configure auto-upgrade from the Azure portal by opening the AKS "Cluster configuration" page, as shown in Figure 5-2.
1. Open the AKS cluster page from the Azure portal. Under the "Settings" option, click "Cluster configuration."
For these reasons, Terraform offers remote state management. With remote state management, the state file is stored in a shared location; in our case, that location is a blob container in an Azure storage account.
Once the remote state is configured, the following benefits will be available:
• Adding the remote state to the configuration file and switching from
the local state to the remote state
  tags = {
    environment = "dev"
  }
}
output "storage_account_name" {
value = azurerm_storage_account.tfstate.name
}
a. terraform init
b. terraform plan
c. terraform apply
Use the following two commands to get the storage access key.
Save the key as an environment variable.
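A sketch of those two commands, following the common pattern for the azurerm backend (the storage account name is a placeholder; the resource group matches the backend configuration below):
ACCOUNT_KEY=$(az storage account keys list --resource-group tfstate --account-name <storage-account-name> --query '[0].value' -o tsv)
export ARM_ACCESS_KEY=$ACCOUNT_KEY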
backend "azurerm" {
resource_group_name = "tfstate"
storage_account_name = Storage_ACCOUNT_NAME
143
Chapter 5 Azure Kubernetes Service
container_name = "storage_container_name"
key = "keyname.terraform.tfstate"
}
backend "azurerm" {
resource_group_name = "chapter5remotestate"
storage_account_name = "tfstateblf8x"
container_name = "terraformstate"
key = "0.storageaccount.terraform.tfstate"
}
You can find the configuration code in the provider.tf configuration file.
To set up backend configuration:
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
  }
  backend "azurerm" {
    resource_group_name  = "chapter5remotestate"
    storage_account_name = "tfstateblf8x"
    container_name       = "tfstate"
    key                  = "0.storageaccount.terraform.tfstate"
  }
}

provider "azurerm" {
  features {
    key_vault {
      purge_soft_delete_on_destroy = true
    }
  }
}
terraform init
3. Check the output and review the third line (it starts with "use this backend...").
You may now begin working with Terraform. Try running "terraform plan" to
see any changes that are required for your infrastructure. All Terraform
commands should now work.
terraform plan
terraform apply
Once the deployment is complete, open the Azure portal and navigate to the
storage account.
Click on Storage account
Click on Containers
Click on tfstate
State Locking
To prevent two users from deploying resources to Azure at the same time, Azure storage
blobs will automatically lock the state file before any write operation. To check whether a
remote state file is locked, you can take the following steps:
2. Click “Containers.”
3. Click “tfstate.”
4. Click the state file for which you want to check the status.
5. Look for the "LEASE STATUS" field on the blob storage page and check whether it is locked, as shown in Figure 5-5.
Note To install the “aztfexport” tool on Windows or macOS, visit the following
URL: https://github.com/azure/aztfexport.
terraform init --upgrade
terraform plan
9. If the export was successful, you should see the following output
from the Terraform plan command:
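For a clean export, the plan typically reports that nothing needs to change, along these lines:
No changes. Your infrastructure matches the configuration.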
If you get this output, you can now manage the resource using
Terraform.
Summary
In this chapter, we learned how to do the following: (1) deploy an AKS cluster; (2)
connect an AKS cluster to the Azure Container Registry; (3) mount a storage volume to
AKS pods; (3) upgrade the AKS cluster using a manual process or autoupgrade; (4) use
the Terraform remote state to store configuration files; and (5) export Azure resources to
Terraform.
CHAPTER 6
Azure DevOps and Container Service
• branch management
• code review
• code merging
• change history
• code approval
• create workflows
• allocate resources
After signing up for Azure DevOps, you’ll need to create an organization to host your
projects.
Creating a Project
Once you log into Azure DevOps, you can create a project by clicking the “+ New Project”
button in the top right corner of the main page, as shown in Figure 6-2.
In the "Create new project" screen that appears, enter the required information, as shown in Figure 6-3. Also note that you can change the version control system to Team Foundation Version Control and select a different work item process: Agile, Basic, Scrum, or CMMI (Capability Maturity Model Integration).
Figure 6-3. Entering the required information for your new Azure DevOps project
Once the project has been created, you can start using it. On the main project page,
you’ll see all the available services that can be used to deploy and manage services, as
shown in Figure 6-4.
To start using Azure DevOps services, you’ll need to create or define a source code
repository that will act as a trigger point for all the services. Before creating a repository,
though, you’ll first need to create a personal access token that will allow you to connect
to Azure DevOps programmatically.
1. Click the user settings icon located in the top-right corner of the
screen, as shown in Figure 6-5.
Now we’re ready to create a new repository and use our newly created PAT to
authenticate. We can also create secure shell (SSH) keys and use them to authenticate to
Azure DevOps. In this chapter, we’ll use Azure Repos and Azure Pipelines. Let’s start by
creating a repository.
Creating a Repository
To create a repository in which to store our code, we will use Azure Repos; however, if you prefer another source-control service, such as GitHub, you can connect to that repository instead.
To create the repository:
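The clone command referenced in step 6 would look something like this; the organization, project, and repository names are placeholders:
git clone https://<organization>@dev.azure.com/<organization>/<project>/_git/<repository>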
6. Once you run the command, you’ll be asked to provide the PAT
password in order to authenticate and pull the repository.
On the Terraform page that comes up, click “Get for free” to install the extension in
the Azure DevOps organization and make it available with Azure Pipelines.
Azure Pipelines
In this section, we're going to explore Azure Pipelines and use a CI/CD pipeline to deploy an Azure Container Registry to Azure directly from the pipeline.
For this exercise to work, we’ll utilize some of the previous concepts we learned in
this book and incorporate them into a single learning exercise, as you’ll see shortly.
In the exercise, we’ll do the following:
We'll need to use a Terraform remote state file for this exercise to work successfully. When using Azure Pipelines, the pipeline runs on a temporary virtual machine that is destroyed once the pipeline has finished running. If the state were saved on that machine, we would lose it and couldn't manage the resources post-deployment.
ACR.TF
The following file will configure an Azure Container Registry (ACR).
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.26"
    }
  }
  backend "azurerm" {
    resource_group_name  = "tfstate"
    storage_account_name = "tfstates14w8"
    container_name       = "tfstate"
    key                  = "Create_acr.terraform.tfstate"
  }
}

provider "azurerm" {
  features {}
}
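The registry resource itself lives in the same file; based on the terraform plan output shown later in this section, it likely looks close to this sketch (the resource group name, location, registry name, SKU, and admin setting are taken from that output; the rest is assumed):

resource "azurerm_resource_group" "rg" {
  name     = "apresstfchapter06"
  location = "australiasoutheast"
}

resource "azurerm_container_registry" "acr" {
  name                = "appressacr"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  sku                 = "Basic"
  admin_enabled       = true
}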
AZURE-PIPELINES.YML
What follows is the code of the YAML-based pipeline file that will handle the deployment to Microsoft Azure using the Terraform extension for Azure DevOps:
trigger:
- none

pool:
  vmImage: ubuntu-latest

steps:
- task: TerraformInstaller@0
  inputs:
    terraformVersion: 'latest'
- task: TerraformTaskV3@3
  inputs:
    provider: 'azurerm'
    command: 'init'
    workingDirectory: '$(System.DefaultWorkingDirectory)/Create_ACR'
    backendServiceArm: 'AZURE SUBSCRIPTION DETAILS'
    backendAzureRmResourceGroupName: 'tfstate'
    backendAzureRmStorageAccountName: 'tfstates14w8'
    backendAzureRmContainerName: 'tfstate'
    backendAzureRmKey: 'Create_acr.terraform.tfstate'
- task: TerraformTaskV3@3
  inputs:
    provider: 'azurerm'
    command: 'plan'
    workingDirectory: '$(System.DefaultWorkingDirectory)/chapter06/Create_ACR'
    environmentServiceNameAzureRM: 'AZURE SUBSCRIPTION DETAILS'
- task: TerraformTaskV3@3
  inputs:
    provider: 'azurerm'
    command: 'apply'
    workingDirectory: '$(System.DefaultWorkingDirectory)/Create_ACR'
    environmentServiceNameAzureRM: 'AZURE SUBSCRIPTION DETAILS'
If you review the file, you'll see that it is using three Terraform tasks to do the following:
• Run terraform init to initialize the working directory and configure the remote state backend.
• Run terraform plan. You don't have to use this step, but having it there for reference and review is nice.
• Run terraform apply. This will deploy the previous code to an Azure subscription.
Note An important thing to note is that you’ll need to configure the connection
point to your Azure subscription, which we’ll do shortly. For this exercise to work,
you’ll need to have contributor access to an Azure subscription at a minimum.
Once you save the two files, go ahead and push the repository to the Azure Repo by
running the following command from the repository’s main folder
git add .
git commit -m "Add Pipeline and ACR deployment"
git push
Note To make things easier, you can copy the files in Chapter 6 to your Azure
DevOps repository.
Note If you receive a message that the pipeline must be authorized, click the
“Resources Authorized” button to continue.
Figure 6-16. List of the stages and tasks of the pipeline on the “Jobs in run” page
Clicking on one of the tasks will reveal the deployment detail and what Azure
DevOps is doing in order to deploy the code from the runner machine. The following
output shows the Terraform plan task in detail:
Starting: TerraformTaskV3
===========================================================================
Task : Terraform
Description : Execute terraform commands to manage resources on AzureRM,
Amazon Web Services(AWS) and Google Cloud Platform(GCP)
Version : 3.209.23
Author : Microsoft Corporation
Help : [Learn more about this task](https://aka.ms/AAf0uqr)
===========================================================================
/opt/hostedtoolcache/terraform/1.5.6/x64/terraform providers
Providers required by configuration:
.
└── provider[registry.terraform.io/hashicorp/azurerm] >= 2.26.0
/opt/hostedtoolcache/terraform/1.5.6/x64/terraform plan -detailed-exitcode
Acquiring state lock. This may take a few moments...
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# azurerm_container_registry.acr will be created
+ resource "azurerm_container_registry" "acr" {
+ admin_enabled = true
+ admin_password = (sensitive value)
+ admin_username = (known after apply)
+ encryption = (known after apply)
+ export_policy_enabled = true
+ id = (known after apply)
+ location = "australiasoutheast"
+ login_server = (known after apply)
+ name = "appressacr"
+ network_rule_bypass_option = "AzureServices"
+ network_rule_set = (known after apply)
+ public_network_access_enabled = true
+ resource_group_name = "apresstfchapter06"
+ retention_policy = (known after apply)
+ sku = "Basic"
To make sure the resource was created, open the Azure portal and locate the
resource group called “apresstfchapter06” and confirm that you have an ACR registry
called “apressacr,” as shown in Figure 6-17.
At this stage, we have an ACR registry up and running in Azure that we’ve deployed
with an Azure DevOps pipeline. Let’s go another step further and use Azure Pipelines to
build a Docker image using the Dockerfile we used in Chapter 3. In this exercise, we’ll
complete the following tasks:
FROM mcr.microsoft.com/hello-world
4. For this exercise, let’s go ahead and select the second pipeline,
“Docker - Build and push an image to Azure Container Registry.”
The next screen will display the generated YAML pipeline, which looks like this:
trigger:
- main

resources:
- repo: self

variables:
  dockerRegistryServiceConnection: AZURE SUBSCRIPTION
  imageRepository: 'apress'
  containerRegistry: 'appressacr.azurecr.io'
  dockerfilePath: '$(Build.SourcesDirectory)/chapter06/Create_ACR/Dockerfile'
  tag: '$(Build.BuildId)'
  vmImageName: 'ubuntu-latest'

stages:
- stage: Build
  displayName: Build and push stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        repository: $(imageRepository)
        dockerfile: $(dockerfilePath)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)
7. Review the pipeline and try to understand the Docker task with
which you’ll build and push the image to ACR. Before you save
and run the file, take a moment to rename the pipeline as shown
in Figure 6-22. Click the “Rename” button and name the pipeline
“buildAndPushACR.yml.”
The output of the "Build and push to ACR" run will show each Docker build and push step in the job log.
8. At this stage, the only thing left to do is to check whether the image
is available in ACR. Do that by opening the Azure portal and
checking the ACR repository, as shown in Figure 6-23.
Before we finish the chapter, I’d like to discuss the option of destroying resources
with Azure DevOps. We can also use Azure Pipelines to destroy a Terraform deployment,
and as a reference, I have provided the following code block you can use in another
YAML pipeline to destroy a deployment.
- task: TerraformTaskV3@3
  inputs:
    provider: 'azurerm'
    command: 'destroy'
    workingDirectory: '$(System.DefaultWorkingDirectory)/Create_ACR'
    environmentServiceNameAzureRM: 'AZURE SUBSCRIPTION'
The AzAPI provider allows us to communicate directly with the Azure REST API and
access all the API features, including preview features.
In the following exercise, we’re going to deploy an ACR using the AzAPI provider.
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
    azapi = {
      source = "Azure/azapi"
    }
  }
}

provider "azurerm" {
  features {}
}

provider "azapi" {
}
The second code block that needs attention is the one where we configure the
provider to connect to the Azure REST API. The API service we’re connecting to is shown
in Table 6-1.
The Terraform code that uses the service with the AzAPI provider follows:
body = jsonencode({
  sku = {
    name = "Standard"
  }
  properties = {
    adminUserEnabled = true
  }
})

tags = {
  "Key" = "DEV"
}
If you look at the code and API in the reference URL provided in Table 6-1, you’ll see
how the configuration calculates the API version and endpoint.
Full Code
You can review the full code and deploy it to Azure using Terraform. The deployment
process is the same as any we’ve used before.
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
    azapi = {
      source = "Azure/azapi"
    }
  }
}

provider "azurerm" {
  features {}
}

provider "azapi" {
}

body = jsonencode({
  sku = {
    name = "Standard"
  }
  properties = {
    adminUserEnabled = true
  }
})

tags = {
  "Key" = "DEV"
}

output "login_server" {
  value = jsondecode(azapi_resource.acr.output).properties.loginServer
}
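The azapi_resource block that the body, tags, and output above belong to is not shown in this listing. A sketch of what it might look like (the API version, registry name, and response_export_values here are assumptions; the jsondecode call in the output implies the provider's string output attribute):

resource "azapi_resource" "acr" {
  type      = "Microsoft.ContainerRegistry/registries@2022-12-01"   # assumed API version
  name      = "apressazapiacr"                                       # assumed name
  parent_id = azurerm_resource_group.rg.id
  location  = azurerm_resource_group.rg.location

  body = jsonencode({
    sku = {
      name = "Standard"
    }
    properties = {
      adminUserEnabled = true
    }
  })

  # export loginServer so the output block above can decode it
  response_export_values = ["properties.loginServer"]
}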
Note You can skip the deployment section if you have an existing Key Vault store.
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
  }
}

provider "azurerm" {
  features {
    key_vault {
      purge_soft_delete_on_destroy = true
    }
  }
}

  sku_name = "standard"

  access_policy {
    tenant_id = data.azurerm_client_config.current.tenant_id
    object_id = data.azurerm_client_config.current.object_id

    key_permissions = [
      "get",
    ]

    secret_permissions = [
      "get",
    ]

    storage_permissions = [
      "get",
    ]
  }
}
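The sku_name and access_policy arguments above sit inside an azurerm_key_vault resource; a sketch of the opening lines they belong to and of the data source they reference (the vault name is an assumption):

data "azurerm_client_config" "current" {}

resource "azurerm_key_vault" "vault" {
  name                = "apress-keyvault-demo"   # assumed name
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  tenant_id           = data.azurerm_client_config.current.tenant_id

  # sku_name and the access_policy block shown above complete the resource
}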
Once you’ve deployed the code successfully, open the Azure portal.
2. When you get to the “Library” page, click the plus sign next to
the “Variable group” button as shown to create a new group. This
group will create a connection between the two services.
3. In the create new variable group details, fill in the group name;
turn on “Link Secrets from an Azure Key vault as variables”; and
select the name of the Azure Key Vault from the drop-down list
and authorize it, as shown in Figure 6-26.
For the details of Azure Key Vault, shown in Figure 6-28, select the names of your Azure subscription and key vault, and enter the name of the Secret you'd like to retrieve. You can also use the * sign to load all the Secrets into the pipeline.
After you add the details, add the code to the pipeline. The code in the YAML file
should look like this:
trigger:
- main

pool:
  vmImage: ubuntu-latest

steps:
- task: AzureKeyVault@2
  inputs:
    azureSubscription: 'AZURE SUBSCRIPTION'
    KeyVaultName: apress
    SecretsFilter: 'test'
    RunAsPreJob: true
- task: CmdLine@2
  inputs:
    script: 'echo $(test)'
- powershell: |
    Write-Host "My secret variable is $env:MyVAR"
  env:
    MyVAR: $(test)
The first code block uses a simple command-line task. The second block uses the
Microsoft PowerShell task. These examples show how sensitive information can be used
in Azure Pipeline tasks.
trigger:
- main

pool:
  vmImage: ubuntu-latest

steps:
- task: AzureKeyVault@2
  inputs:
    azureSubscription: 'AZURE SUBSCRIPTION'
    KeyVaultName: 'vaultname'
    SecretsFilter: 'test'
    RunAsPreJob: true
- task: CmdLine@2
  inputs:
    script: 'echo $(test)'
- powershell: |
    Write-Host "My secret variable is $env:MyVAR"
  env:
    MyVAR: $(test)
Summary
In this chapter, we focused on the core services of Azure DevOps and the integration of
Terraform and Azure using the platform. We also explored how to use Azure Repos and
Azure Pipelines.
Additionally, we learned how to configure the Terraform extension for Azure
DevOps, created an Azure Container Registry using an Azure pipeline, and used a
remote state to save the deployment.
In the last two sections of the chapter, we built and pushed a Docker image to ACR using a pipeline and used the AzAPI provider to access the latest version of the Azure REST API to deploy an ACR.
CHAPTER 7
Azure Compliance and Security
Introduction
In the last chapter of this book, I’d like to focus on a few security and compliance services
that can help us keep our Azure environment safe and protected from malicious code
and vulnerabilities.
In the last few years and since the release of the first edition of this book, Microsoft
has invested a lot of resources in developing tools and services that can easily and
seamlessly integrate with Azure services and even Azure DevOps.
This chapter will focus on how to use Microsoft Defender for Cloud to secure our Azure workloads and stay compliant, specifically with Azure DevOps and container services.
Like most of the Azure cloud services, Defender for Cloud costs money. Figure 7-1
shows the price of each feature in Defender for Cloud’s suite of services.
Figure 7-1. Price list for Defender for Cloud’s suite of services
Since this book is about Terraform, we'll deploy Defender for Cloud's Defender for Containers plan using Terraform.
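The full deployment code covers a few more settings, but at its core, enabling the Containers plan on the subscription with Terraform looks something like this sketch (the resource label is an assumption):

resource "azurerm_security_center_subscription_pricing" "containers" {
  tier          = "Standard"
  resource_type = "Containers"
}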
  principal_id = azurerm_subscription_policy_assignment.va-auto-provisioning.identity[0].principal_id
}
Checking the Deployment
You can perform the following checks to ensure the code was deployed successfully:
Figure 7-2. Checking that the Defender for Cloud service has been successfully
deployed
4. In the list of services, make sure the status of the “Containers” plan
is set to “On,” as shown in Figure 7-4.
8. Once you click the icon, the number of vulnerabilities (if any) and
unhealthy registries will be listed.
Installing Extensions
In Chapter 6, we learned about Azure DevOps extensions and installed the Terraform extension, which allowed us to use Terraform with Azure Pipelines. To scan for vulnerabilities and use Defender for DevOps, we need to install the following two extensions:
• Microsoft Security DevOps
• SARIF SAST Scans Tab
The screen where you can download the Microsoft Security DevOps extension is
shown in Figure 7-7.
The screen where you can download the SARIF SAST Scans Tab extension is shown
in Figure 7-8.
1. Go to the page where you select the plan located under Defender
for Cloud, Environment Settings as shown in Figure 7-12.
Figure 7-13. Authorizing the connection between Azure DevOps and Defender
for Cloud
5. To check and review the status of the connection and see all the
reviewed projects, go back to the DevOps Security page. You can
access the page from the Defender for Cloud home page in the
“Cloud Security” section. Figure 7-15 shows the DevOps Security
page with the stats about the connector and the number of
projects.
In the next exercise, we’ll run a pipeline and integrate the IaC scanning tool to scan
our code.
This task will scan the code using all the vulnerability tools
available; however, if you’d like to limit the scope and only scan for
IaC vulnerabilities, you can use the following task:
- task: MicrosoftSecurityDevOps@1
  inputs:
    categories: 'IaC'
trigger:
- none

pool:
  vmImage: ubuntu-latest

steps:
- task: TerraformInstaller@0
  inputs:
    terraformVersion: 'latest'
- task: MicrosoftSecurityDevOps@1
  displayName: 'Defender for DevOps Security Scan'
- task: TerraformTaskV3@3
  inputs:
    provider: 'azurerm'
    command: 'init'
    workingDirectory: '$(System.DefaultWorkingDirectory)/chapter06/Create_ACR'
    backendServiceArm: 'AZURE SUBSCRIPTION'
    backendAzureRmResourceGroupName: 'tfstate'
    backendAzureRmStorageAccountName: 'tfstates14w8'
    backendAzureRmContainerName: 'tfstate'
    backendAzureRmKey: 'Create_acr.terraform.tfstate'
- task: TerraformTaskV3@3
  inputs:
    provider: 'azurerm'
    command: 'plan'
    workingDirectory: '$(System.DefaultWorkingDirectory)/chapter06/Create_ACR'
    environmentServiceNameAzureRM: 'AZURE SUBSCRIPTION'
- task: TerraformTaskV3@3
  inputs:
    provider: 'azurerm'
    command: 'apply'
    workingDirectory: '$(System.DefaultWorkingDirectory)/chapter06/Create_ACR'
    environmentServiceNameAzureRM: 'AZURE SUBSCRIPTION'
2. You can add the extra code from the Azure pipeline directly or
using VS Code and push the changes. Run the pipeline check and
wait for it to complete. Once finished, click the “Scans” tab on the
job summary page, as shown in Figure 7-16.
3. The “Scans” tab will list all the vulnerabilities and best practice
recommendations that showed up in the scan. Figure 7-17 shows
some of the recommendations that might be made as a result of
the scan.
Remember to disable your Defender for Cloud plan once you’re finished with the
exercises.
Summary
This chapter has been all about how to use the Microsoft Defender for DevOps security tools to detect vulnerabilities and align configuration code with best practices. We learned how to do the following things: (1) set up Defender for Cloud using Terraform; (2) connect an Azure DevOps organization to Defender for Cloud; and (3) scan for security vulnerabilities in the Terraform code.
Index
A storage account, 95, 96
Terraform apply command, 95
acr-admin, 69, 73
variables.tf file, 92, 93
ACR tasks, 54
benefits, 87
advantage, 54
deployment, 88, 89
az acr build, 54
full code, 89–91
Azure CLI command-line utility, 64
multiple ACI containers, 91
building, pushing and running
enable Azure Log Analytics
container images, 59–62
configuration, 110
Docker CLI, 63
Resource Block, 110, 111
Agile project management
view logs, 114–116
tools, 152
liveness probe, 119
Apache Web Server, 35
management and monitoring
Automatic backups, 41
capabilities, 99
Autoscaling, 129, 130
readiness probe, 119
AzAPI provider, 178
restart container group, 118
deploy ACR, 178, 179
to run commands inside ACI, 102
full code and deploy to
running ACI container, 99–101
Azure, 180
start container group, 117
aztfexport tool, 148
stop container group, 117
Azure CLI, 19
use cases, 88
command-line interface, 8
view ACI logs, 102, 103
on computer running Linux, 8
diagnostic information, 104, 105
on macOS, 9
reviewing diagnostic
PowerShell 7, 9–11
events, 106–109
on Windows with WinGet, 8
use Azure CLI to view logs, 103
Azure cloud services, 181, 190
Azure Container Registry (ACR), 25
Azure Container Instances (ACI)
with ACI (see Azure Container
with ACR
Instances (ACI))
complete code, 97
deploy
file share, 96
adding tags, 58
“main.tf” file, 94
notice output, 58, 59
mount data volume, 95–97
Nodes, Kubernetes T, U
components, 123
Template analyzer, 196
container runtime, 123
TenantId, 69
kubelet, 123
Terraform, 31
kube proxy, 123
cloud providers, 12
pods, 122
enable tab completion on Linux
Ubuntu, 14
HashiCorp Configuration
O
Language, 12
output command, 36
high-level example, 12, 13
output.tf configuration file, 36, 90
IaC software development tool, 12
install on macOS, 13
install on Ubuntu, 14
P, Q install on Windows, 15
Persistent volume claim (PVC), 135 on Linux, 14
Personal access token (PAT), 156–159 open-source IaC tool, 1
plan command, 34 tab completion, 13
PowerShell 7, 9 tfenv process, 15–18
cross-platform support, 9 tools and services
on Linux, 11 Azure CLI (see Azure CLI)
on macOS, 10, 11 VS Code, 2, 3 (see also VS Code
official website, 9 extensions)
on Windows computer, 9, 10 Windows Subsystem for Linux
Private domain name system (DNS) (WSL), 7
zone, 46 Terraform apply command, 22–23, 30, 37,
provider.tf, 26, 36, 55, 144 43, 85, 95
Terraform destroy command, 23–24
Terraform init command, 21, 31
R Terraform output-json command, 59
Readiness probe, 119 Terraform plan command, 21–22,
Role-based access control 28–29, 51, 149
(RBAC), 53, 54 Terraform remote state
backend configuration, 143–146
challenges, 140
S configuration file, 140
SARIF SAST Scans Tab extension, 196–198 configure, 141–143
Secure shell (SSH) keys, 158 remote state management, 141
sku_name, 40, 41 state file, 140
Terraform state file, 29, 30, 36, 37, 51, 140 Terraform apply command, 30
terraform.tfstate, 28, 29 files, 36
Terrascan, 196 httpd default home page, 35
Tfenv, 15–18 steps, 30–35
install on computers, 16–18 Terraform destroy, 40
on Linux machine, 16 Terraform output
on macOS, 16 command, 36, 37
TLS 1.0, 44 using .gitignore file with
Trivy, 196 Terraform, 38, 39
management
backs up deployed web apps, 41, 42
V customized deployments, 42, 43
Variable interpolation, 43, 44 scaling, 40, 41
variables.tf file, 82, 92–93 ten app service plan options for
Visual Studio Code (VS Code), 2, 3 Linux, 40, 41
VS Code extensions, 3 variable interpolation, 43, 44
installation security features, 44
Azure Account, 5, 6 disabling public access, 50–52
Azure Terraform, 4, 5 HTTPS protocol, 44, 45
“Extensions” icon, 4 Private Endpoints, 46, 47
HashiCorp, 5 set up
Linter, 6, 7 configuration, 27–29
PowerShell, 6 provider configuration, 26
steps, 3 Terraform plan command, 28, 29
Terraform state file, 29
Web app URL, 35
W, X, Y, Z Web app URL, 35–37, 43
Web App for Containers, 25 Web UI, 123
with ACR (see Azure Container Windows Subsystem for Linux (WSL),
Registry (ACR)) 7, 8, 13
deployment WinGet, 8–10, 15