

You can also use Update Management to natively onboard machines in multiple subscriptions in the
same tenant.

PowerShell Workflows
IT pros often automate management tasks for their multi-device environments by running sequences of long-running tasks or workflows. These tasks can affect multiple managed computers or devices at the same time. PowerShell Workflow lets IT pros and developers leverage the benefits of Windows Workflow Foundation with the automation capabilities and ease of using Windows PowerShell. Refer to A Developer's Introduction to Windows Workflow Foundation (WF) in .NET 4 [71] for more information.
Windows PowerShell Workflow functionality was introduced in Windows Server 2012 and Windows 8, and is part of Windows PowerShell 3.0 and later. Windows PowerShell Workflow helps automate the distribution, orchestration, and completion of multi-device tasks, freeing users and administrators to focus on higher-level tasks.

Activities
An activity is a specific task that you want a workflow to perform. Just as a script is composed of one or
more commands, a workflow is composed of one or more activities that are carried out in sequence. You
can also use a script as a single command in another script, and use a workflow as an activity within
another workflow.

[71] https://docs.microsoft.com/en-us/previous-versions/dotnet/articles/ee342461(v=msdn.10)

Workflow characteristics
A workflow can:
●● Be long-running.
●● Be repeated over and over.
●● Run tasks in parallel.
●● Be interrupted—can be stopped and restarted, suspended and resumed.
●● Continue after an unexpected interruption, such as a network outage or computer/server restart.

Workflow benefits
A workflow offers many benefits, including:
●● Windows PowerShell scripting syntax. Workflows are written using familiar PowerShell syntax.
●● Multidevice management. Simultaneously apply workflow tasks to hundreds of managed nodes.
●● Single task runs multiple scripts and commands. Combine related scripts and commands into a single task, then run that task on multiple computers. The activity status and progress within the workflow are visible at any time.
●● Automated failure recovery.

●● Workflows survive both planned and unplanned interruptions, such as computer restarts.
●● You can suspend a workflow operation, then restart or resume the workflow from the point at
which it was suspended.
●● You can author checkpoints as part of your workflow, so that you can resume the workflow from
the last persisted task (or checkpoint) instead of restarting the workflow from the beginning.
●● Connection and activity retries. You can retry connections to managed nodes if network-connection
failures occur. Workflow authors can also specify activities that must run again if the activity cannot be
completed on one or more managed nodes (for example, if a target computer was offline while the
activity was running).
●● Connect and disconnect from workflows. Users can connect and disconnect from the computer that is
running the workflow, but the workflow will remain running. For example, if you are running the
workflow and managing the workflow on two different computers, you can sign out of or restart the
computer from which you are managing the workflow, and continue to monitor workflow operations
from another computer without interrupting the workflow.
●● Task scheduling. You can schedule a task to start when specific conditions are met, as with any other
Windows PowerShell cmdlet or script.

Creating a workflow
To write the workflow, use a script editor such as the Windows PowerShell Integrated Scripting Environment (ISE). This enforces workflow syntax and highlights syntax errors. For more information, review the tutorial My first PowerShell Workflow runbook [72].

[72] https://azure.microsoft.com/en-us/documentation/articles/automation-first-runbook-textual/

A benefit of using PowerShell ISE is that it automatically compiles your code and allows you to save the
artifact. Because the syntactic differences between scripts and workflows are significant, a tool that knows
both workflows and scripts will save you significant coding and testing time.

Syntax
When you create your workflow, begin with the workflow keyword, which identifies a workflow command to PowerShell. A script workflow requires the workflow keyword. Next, name the workflow, and have it follow the workflow keyword. The body of the workflow is enclosed in braces.
A workflow is a Windows command type, so select a name with a verb-noun format:

workflow Test-Workflow
{
    ...
}

To add parameters to a workflow, use the Param keyword. These are the same techniques that you use to add parameters to a function.
Finally, add your standard PowerShell commands.

workflow MyFirstRunbook-Workflow
{
    Param(
        [string]$VMName,
        [string]$ResourceGroupName
    )
    ...
    Start-AzureRmVM -Name $VMName -ResourceGroupName $ResourceGroupName
}
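Once defined in a session, a workflow is invoked like any other cmdlet, with its parameters passed by name. The following is a minimal sketch using placeholder values for the VM and resource group names:

# Invoke the workflow above, supplying both parameters by name
MyFirstRunbook-Workflow -VMName "myVM" -ResourceGroupName "myResourceGroup"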

Demonstration - Create and run a workflow runbook

This walkthrough will create a new PowerShell workflow runbook, then test, publish, and run the runbook.

Prerequisites
●● Note: You require an Azure subscription to perform the following steps. If you don't have one, you can create one by following the steps outlined on the Create your Azure free account today [73] webpage.

[73] https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio

Steps

Create a new runbook

1. In the Azure portal, open your Automation account.
2. Under Process Automation, select Runbooks to open the list of runbooks.
3. Create a new runbook by selecting Create a new runbook.
4. Give the runbook the name MyFirstRunbook-Workflow.
5. You're going to create a PowerShell Workflow runbook, so for Runbook type, select PowerShell Workflow.
6. Select Create to create the runbook and open the text editor.

Add code to a runbook

You have two options when adding code to a runbook. You can type code directly into the runbook, or you can select cmdlets, runbooks, and assets from the Library control and have them added to the runbook, along with any related parameters.
For this walkthrough, you'll type code directly into the runbook, as detailed in the following steps:
1. Type Write-Output "Hello World" between the braces, as per the below:

Workflow MyFirstRunbook-Workflow
{
    Write-Output "Hello World"
}

2. Save the runbook by selecting Save.

Test the runbook


Before you publish the runbook to production, you want to test it to ensure that it works properly. When you test a runbook, you run the draft version and view its output interactively, as demonstrated in the following steps:
1. Select the Test pane.
2. Select Start to start the test. This should be the only enabled option.

A runbook job is created and its status displayed. The job status starts as Queued, indicating that it's waiting for a runbook worker in the cloud to become available. It moves to Starting when a worker claims the job, and then to Running when the runbook actually starts running. When the runbook job completes, its output displays. In your case, you should see Hello World.
3. When the runbook job finishes, close the Test pane.

Publish and run the runbook


The runbook that you created is still in draft mode. You need to publish it before you can run it in
production. When you publish a runbook, you overwrite the existing published version with the draft
version. In your case, you don't have a published version yet because you just created the runbook.
Use the following steps to publish your runbook:
1. In the runbook editor, select Publish to publish the runbook.
2. When prompted, select Yes.
3. Scroll left to view the runbook in the Runbooks pane, and ensure that it shows an Authoring Status
of Published.
4. Scroll back to the right to view the pane for MyFirstRunbook-Workflow. Notice the options across
the top:
●● Start
●● View
●● Edit
●● Link to schedule to start at some time in the future
●● Add a webhook
●● Delete
●● Export

5. You just want to start the runbook, so select Start, and then when prompted, select Yes.
6. When the job pane opens for the runbook job that you created, leave it open so you can watch the
job's progress.
7. Verify that when the job completes, the job statuses that display in Job Summary match the statuses that you saw when you tested the runbook.

Checkpoint and Parallel Processing


Workflows let you implement complex logic within your code. Two features available with workflows are checkpoints and parallel processing.

Checkpoints
A checkpoint is a snapshot of the current state of the workflow. Checkpoints include the current value for variables, and any output generated up to that point. (For more information on what a checkpoint is, read the checkpoint [74] webpage.)
If a workflow ends in an error or is suspended, the next time it runs it starts from its last checkpoint instead of at the beginning of the workflow. You can set a checkpoint in a workflow with the Checkpoint-Workflow activity.
For example, in the following sample code, if an exception occurs after Activity2, the workflow ends. When the workflow is run again, it starts with Activity2, because that activity followed just after the last checkpoint set.

[74] https://docs.microsoft.com/en-us/azure/automation/automation-powershell-workflow#checkpoints

<Activity1>
Checkpoint-Workflow
<Activity2>
<Exception>
<Activity3>
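The same pattern can be written as a runnable workflow. The following is a minimal sketch, with simple Write-Output statements standing in for the activity bodies:

workflow Invoke-CheckpointDemo
{
    # Activity1
    Write-Output "Activity1 complete"

    # Persist the workflow state: current variables and output so far
    Checkpoint-Workflow

    # Activity2 - if an exception occurs after this activity, resuming the
    # workflow starts here, at the last checkpoint, rather than at Activity1
    Write-Output "Activity2 complete"

    # Activity3
    Write-Output "Activity3 complete"
}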

Parallel processing
A Parallel script block runs multiple commands concurrently (in parallel) instead of sequentially, as in a typical script. This is referred to as parallel processing. (More information about parallel processing is available on the Parallel processing [75] webpage.)
In the following example, the vm0 and vm1 VMs start concurrently, and vm2 starts only after vm0 and vm1 have started.
Parallel
{
    Start-AzureRmVM -Name $vm0 -ResourceGroupName $rg
    Start-AzureRmVM -Name $vm1 -ResourceGroupName $rg
}

Start-AzureRmVM -Name $vm2 -ResourceGroupName $rg

Another parallel processing example would be the following constructs, which introduce some additional options:
●● ForEach -Parallel. You can use the ForEach -Parallel construct to concurrently process commands for each item in a collection. The items in the collection are processed in parallel, while the commands within the script block run sequentially.
In the following example, Activity1 starts at the same time for all items in the collection. For each item, Activity2 starts after Activity1 completes. Activity3 starts only after both Activity1 and Activity2 have completed for all items.
●● ThrottleLimit. Use the ThrottleLimit parameter to limit parallelism. Setting ThrottleLimit too high can cause problems. The ideal value for the ThrottleLimit parameter depends on several environmental factors. Start with a low ThrottleLimit value, and then increase the value until you find one that works for your specific circumstances:
ForEach -Parallel -ThrottleLimit 10 ($<item> in $<collection>)
{
    <Activity1>
    <Activity2>
}
<Activity3>

A real-world example of this could be similar to the following code, where a message displays for each file after it is copied. Only after all files are completely copied does the final completion message display.

Workflow Copy-Files
{
    $files = @("C:\LocalPath\File1.txt","C:\LocalPath\File2.txt","C:\LocalPath\File3.txt")

    ForEach -Parallel -ThrottleLimit 10 ($File in $files)
    {
        Copy-Item -Path $File -Destination \\NetworkPath
        Write-Output "$File copied."
    }

    Write-Output "All files copied."
}

[75] https://docs.microsoft.com/en-us/azure/automation/automation-powershell-workflow#parallel-processing

Additional Automation Tools


Azure Software Development Kits
Azure provides several Software Development Kits (SDKs) to help you get operational and developing on Azure as quickly as possible. For a full and up-to-date list of available SDKs and developer tools, visit Azure Developer Tools [76].
The many SDKs available for Azure include the following, which have different versions available for different platforms:
●● .NET SDK [77]
●● Java SDK [78]
●● Node SDK [79]
●● Python SDK [80]
●● PHP SDK [81]
●● Go SDK [82]
✔️ Note: You can also browse through or search https://github.com/Azure [83] for languages relevant to you.

[76] https://azure.microsoft.com/en-us/tools/
[77] https://azure.microsoft.com/en-us/develop/net/
[78] https://docs.microsoft.com/en-us/java/azure/?view=azure-java-stable
[79] https://docs.microsoft.com/en-us/javascript/azure/?view=azure-node-latest
[80] https://github.com/Azure/azure-sdk-for-python
[81] https://azure.microsoft.com/en-us/develop/php/
[82] https://docs.microsoft.com/en-us/go/azure/
[83] https://github.com/Azure

Azure REST APIs


Representational state transfer (REST) APIs are service endpoints that support sets of HTTP operations (methods). These service endpoints are responsible for providing, creating, retrieving, updating, or deleting access to the service's resources. To see a comprehensive set of REST APIs for Azure services, go to REST API Browser [84].
Additional API-specific services available on Azure are:
●● API Management, which allows you to securely publish APIs to external, partner, and employee developers at scale. For more information, go to API Management [85].
●● Bing Maps API, which enables you to leverage Bing Maps for services such as locations, routes, and fleet tracking. For more information, go to Bing Maps API [86].

[84] https://docs.microsoft.com/en-us/rest/api/?view=Azure
[85] https://docs.microsoft.com/en-us/azure/api-management/
[86] https://www.microsoft.com/en-us/maps/choose-your-bing-maps-api

Components of a REST API request/response


A REST API request/response pair can be separated into five different components:
1. The request URI. This consists of: {URI-scheme} :// {URI-host} / {resource-path} ? {query-string}
●● Although the request URI is included in the request message header, we call it out separately here because most languages or frameworks require you to pass it separately from the request message.
●● URI scheme. Indicates the protocol used to transmit the request, for example http or https.
●● URI host. Specifies the domain name or IP address of the server where the REST service endpoint is hosted, such as graph.microsoft.com.
●● Resource path. Specifies the resource or resource collection, which might include multiple segments used by the service in determining the selection of those resources.
●● Query string (optional). Provides additional, simple parameters, such as the API version or resource selection criteria.
2. HTTP request message header fields. These include a required HTTP method (also known as an operation or verb), which tells the service what type of operation you are requesting. Azure REST APIs support GET, HEAD, PUT, POST, and PATCH methods.
3. Optional additional header fields. These are used only if required by the specified URI and HTTP method; for example, an Authorization header that provides a bearer token containing client authorization information for the request.
4. HTTP response message header fields. These include an HTTP status code, ranging from 2xx success codes to 4xx or 5xx error codes.
5. Optional HTTP response message body fields. These MIME-encoded response objects are returned in the HTTP response body, such as a response from a GET method that is returning data. Typically, these objects are returned in a structured format such as JSON or XML, as indicated by the Content-type response header.
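To make these components concrete, the following is a minimal PowerShell sketch of a GET request against the Azure Resource Manager endpoint. The subscription ID, bearer token, and api-version shown are placeholder values:

# Request URI: {scheme}://{host}/{resource-path}?{query-string}
$subscriptionId = "00000000-0000-0000-0000-000000000000"    # placeholder
$uri = "https://management.azure.com/subscriptions/$subscriptionId/resourcegroups?api-version=2019-10-01"

# Optional additional header field: a bearer token carrying client authorization
$headers = @{ Authorization = "Bearer <access-token>" }     # placeholder token

# The HTTP method (GET) is passed explicitly; Invoke-RestMethod converts the
# JSON response body into objects automatically
$response = Invoke-RestMethod -Uri $uri -Method Get -Headers $headers
$response.value | Select-Object name, location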
For additional details, review the video How to call Azure REST APIs with Postman [87].

Azure Cloud Shell


Azure Cloud Shell is a browser-based scripting environment for interacting with Azure. It provides the
flexibility of choosing the shell experience that best suits the way you work, and is accessible from
anywhere using the latest versions of the following browsers:
●● Microsoft Edge
●● Microsoft Internet Explorer
●● Google Chrome
●● Mozilla Firefox
●● Safari
Linux users can opt for a Bash experience, while Windows users can opt for Windows PowerShell.
You must have a storage account to use Cloud Shell, and you will be prompted to create one when accessing Azure Cloud Shell.
✔️ Note: You can access Azure Cloud Shell by going to https://shell.azure.com [88].

[87] https://docs.microsoft.com/en-us/rest/api/azure/
[88] https://shell.azure.com/

Cloud Shell is also accessible from within the Azure portal by selecting the Azure Cloud Shell icon at the
top of the browser.

If you have time, you can also review the PowerShell in Azure Cloud Shell GA [89] video for more details.

Package Management
Package management allows you to install the software an environment needs into your VM, either during its deployment or afterward.
Using package management, it's possible to manage all aspects of software, such as installation, configuration, upgrade, and uninstallation. There's a wide range of packaged software available for you to install using package managers, such as Java, Microsoft Visual Studio, Google Chrome, Git, and many more.
There are also a number of package management solutions available for you to use, depending on your environment and needs:
●● apt [90]: apt is the package manager for Debian Linux environments.
●● Yum [91]: Yum is the package manager for CentOS Linux environments.
●● Chocolatey [92]: Chocolatey is the software management solution built on Windows PowerShell for Windows operating systems.
✔️ Note: The following section covers installing Chocolatey as an example. While the other package management solutions use different syntax and commands, they follow similar concepts.

Install Chocolatey
Chocolatey does not have an .msi package; it installs as a nupkg using a PowerShell install script. The installation script is available to review at https://chocolatey.org/install.ps1 [93].
You can run the script and install it in a variety of ways, which you can read about at More Install Options [94].
The following example installs Chocolatey via PowerShell:
1. Open a PowerShell window as administrator, and run the following command:

Set-ExecutionPolicy Bypass -Scope Process -Force; iwr https://chocolatey.org/install.ps1 -UseBasicParsing | iex

2. After the command completes, run the following command:


choco /?

3. To search for a Visual Studio package that you can use, run the following command:

[89] https://azure.microsoft.com/en-us/resources/videos/azure-friday-powershell-in-azure-cloud-shell-ga/
[90] https://wiki.debian.org/Apt
[91] https://wiki.centos.org/PackageManagement/Yum
[92] https://chocolatey.org/
[93] https://chocolatey.org/install.ps1
[94] https://chocolatey.org/install#install-with-powershellexe

choco search visualstudio2017

You can install packages manually via the command line using choco install. To install packages into your development, test, and production environments, identify the packages you want to install, and then list them in a PowerShell script. When deploying your VM, you then run that script as part of a Custom Script Extension. An example of such a PowerShell script would be:
# Set PowerShell execution policy
Set-ExecutionPolicy RemoteSigned -Force

# Install Chocolatey
iwr https://chocolatey.org/install.ps1 -UseBasicParsing | iex

refreshenv

# Install Chocolatey packages
& choco install poshgit -y
& choco install googlechrome -y
& choco install firefox -y
& choco install notepadplusplus -y
& choco install putty -y
& choco install chefdk -y

refreshenv
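As a sketch of the deployment step described above, you could attach such a script to an existing VM with the Custom Script Extension using Azure PowerShell. The resource names and the script URL below are placeholders:

# Attach the Custom Script Extension and run the package-installation script
Set-AzureRmVMCustomScriptExtension `
    -ResourceGroupName "myResourceGroup" `
    -VMName "myVM" `
    -Location "westeurope" `
    -Name "InstallPackages" `
    -FileUri "https://example.com/scripts/install-packages.ps1" `
    -Run "install-packages.ps1"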

Lab
Azure Deployments using Resource Manager
Templates

Steps for the labs are available on GitHub at the following websites in their Infrastructure as Code sections:
●● Parts Unlimited [95]
●● Parts Unlimited MRP [96]
For the individual lab tasks for this module, select the following PartsUnlimited link and follow the outlined steps for each lab task.
PartsUnlimited (PU)
●● Azure Deployments using Resource Manager templates [97]

[95] https://microsoft.github.io/PartsUnlimited
[96] https://microsoft.github.io/PartsUnlimitedMRP
[97] http://microsoft.github.io/PartsUnlimited/iac/200.2x-IaC-AZ-400T05AppInfra.html

Module Review and Takeaways


Module Review Questions
Checkbox
What benefits from the list below can you achieve by modularizing your infrastructure and configuration
resources?
(Choose four)
†† Easy to reuse across different environments
†† Easier to manage and maintain your code
†† More difficult to sub-divide up work and ownership responsibilities
†† Easier to troubleshoot
†† Easier to extend and add to your existing infrastructure definitions

Multiple choice
Which method of approach for implementing Infrastructure as Code states what the final state of an
environment should be without defining how it should be achieved?
†† Scripted
†† Imperative
†† Object orientated
†† Declarative

Multiple choice
Which term defines the ability to apply one or more operations against a resource, resulting in the same
outcome every time?
†† Declarative
†† Idempotency
†† Configuration drift
†† Technical debt

Checkbox
Which of the following are possible causes of Technical debt?
(choose all that apply)
†† Unplanned for localization of an application
†† Accessibility
†† Changes made quickly, or directly to an application without using DevOps methodologies
†† Changing technologies or versions that are not accounted for as part of the dev process

Multiple choice
Which term is the process whereby a set of resources change their state over time from their original state in
which they were deployed?
†† Modularization
†† Technical debt
†† Configuration drift
†† Imperative

Multiple choice
Which of the following options is a method for running configuration scripts on a VM either during or after
deployment?
†† Using the Custom Script Extension (CSE)
†† Using Quickstart templates
†† Using the dependsOn parameter
†† Using Azure Key Vault

Multiple choice
When using Azure CLI, what's the first action you need to take when preparing to run a command or script?
†† Define the Resource Manager template.
†† Specify VM extension details.
†† Create a resource group.
†† Log in to your Azure subscription.

Multiple choice
Which Resource Manager deployment mode only deploys whatever is defined in the template, and does not
remove or modify any other resources not defined in the template?
†† Validate
†† Incremental
†† Complete
†† Partial

Multiple choice
Which package management tool is a software management solution built on Powershell for Windows
operating systems?
†† Yum
†† Chocolatey
†† apt
†† Apache Maven

Checkbox
Which of the following version control tools are available for use with Azure DevOps?
(choose all that apply)
†† Subversion
†† Git
†† BitBucket
†† TFVC

Answers
Checkbox
What benefits from the list below can you achieve by modularizing your infrastructure and configuration
resources?
(Choose four)
■■ Easy to reuse across different environments
■■ Easier to manage and maintain your code
†† More difficult to sub-divide up work and ownership responsibilities
■■ Easier to troubleshoot
■■ Easier to extend and add to your existing infrastructure definitions
Explanation
The following answers are correct: easy to reuse across different environments; easier to manage and maintain your code; easier to troubleshoot; and easier to extend and add to your existing infrastructure definitions.
More difficult to sub-divide up work and ownership responsibilities is incorrect. It is actually easier to sub-divide work and ownership responsibilities.
Multiple choice
Which method of approach for implementing Infrastructure as Code states what the final state of an envi-
ronment should be without defining how it should be achieved?
†† Scripted
†† Imperative
†† Object orientated
■■ Declarative
Explanation
Declarative is the correct answer. The declarative approach states what the final state should be. When run,
the script or definition will initialize or configure the machine to have the finished state that was declared,
without defining how that final state should be achieved.
All other answers are incorrect. Scripted is not a methodology, and in the imperative approach, the script states the how for the final state of the machine by executing through the steps to get to the finished state. It defines what the final state needs to be, but also includes how to achieve that final state.
Object oriented is a coding methodology, not an approach for implementing Infrastructure as Code.

Multiple choice
Which term defines the ability to apply one or more operations against a resource, resulting in the same
outcome every time?
†† Declarative
■■ Idempotency
†† Configuration drift
†† Technical debt
Explanation
Idempotency is the correct answer. It is a mathematical term that can be used in the context of Infrastructure as Code and Configuration as Code, as the ability to apply one or more operations against a resource, resulting in the same outcome every time.
All other answers are incorrect.

Checkbox
Which of the following are possible causes of Technical debt?
(choose all that apply)
■■ Unplanned for localization of an application
■■ Accessibility
■■ Changes made quickly, or directly to an application without using DevOps methodologies
■■ Changing technologies or versions that are not accounted for as part of the dev process
Explanation
All answers are correct.
Multiple choice
Which term is the process whereby a set of resources change their state over time from their original
state in which they were deployed?
†† Modularization
†† Technical debt
■■ Configuration drift
†† Imperative
Explanation
Configuration drift is the correct answer. It is the process whereby a set of resources change their state over
time from the original state in which they were deployed.
All other answers are incorrect.

Multiple choice
Which of the following options is a method for running configuration scripts on a VM either during or
after deployment?
■■ Using the Custom Script Extension (CSE)
†† Using Quickstart templates
†† Using the dependsOn parameter
†† Using Azure Key Vault
Explanation
Using the CSE is the correct answer, because it is a way to download and run scripts on your Azure VMs.
All other answers are incorrect.
Quickstart templates are publicly available starter templates that allow you to get up and running quickly with Resource Manager templates.
The dependsOn parameter defines dependent resources in a Resource Manager template.
Azure Key Vault is a secrets-management service in Azure that allows you to store certificates, keys, passwords, and so forth.
Multiple choice
When using Azure CLI, what's the first action you need to take when preparing to run a command or
script ?
†† Define the Resource Manager template.
†† Specify VM extension details.
†† Create a resource group.
■■ Log in to your Azure subscription.
Explanation
Log in to your Azure subscription is the correct answer. You can do so using the command az login.
All other answers are incorrect.
You do not need to define the Resource Manager template or specify the VM extension details, and you can-
not create a resource group without first logging into your Azure subscription.
Multiple choice
Which Resource Manager deployment mode only deploys whatever is defined in the template, and does
not remove or modify any other resources not defined in the template?
†† Validate
■■ Incremental
†† Complete
†† Partial
Explanation
Incremental is the correct answer.
Validate mode only compiles the templates and validates the deployment to ensure the template is functional. For example, it ensures there are no circular dependencies and that the syntax is correct.
Incremental mode only deploys whatever is defined in the template, and does not remove or modify any
resources that are not defined in the template. For example, if you have deployed a VM via template, and
then renamed the VM in the template, the first VM deployed will still remain after the template is run again.
Incremental mode is the default mode.

In Complete mode, Resource Manager deletes resources that exist in the resource group but aren't specified
in the template. For example, only resources defined in the template will be present in the resource group
after the template is deployed. As a best practice, use the Complete mode for production environments
where possible, to try to achieve idempotency in your deployment templates.
Multiple choice
Which package management tool is a software management solution built on Powershell for Windows
operating systems?
†† Yum
■■ Chocolatey
†† apt
†† Apache Maven
Explanation
Chocolatey is the correct answer.
apt is the package manager for Debian Linux environments.
Yum is the package manager for CentOS Linux environments.
Maven is a build automation tool used with Java-based projects as part of a build and release pipeline.
Checkbox
Which of the following version control tools are available for use with Azure DevOps?
(choose all that apply)
■■ Subversion
■■ Git
■■ BitBucket
■■ TFVC
Explanation
All answers are correct.
Subversion, Git, BitBucket and TFVC are all repository types that are available with Azure DevOps.
Module 16 Azure Deployment Models and Services

Module Overview
You’re ready to start deploying and migrating applications into Microsoft’s Azure cloud platform — but
there are different deployment models to contend with. Which should you choose? Each has strengths
and weaknesses depending on the service you are setting up. Some might require more attention than
others, but offer additional control. Others integrate services such as load balancing or operating systems as more of a Platform as a Service.
Learn the differences between IaaS, PaaS and FaaS, and when you might want to choose one over
another.

Learning Objectives
After completing this module, students will be able to:
●● Describe deployment models and services that are available with Azure

Deployment Models and Options


IaaS vs PaaS vs FaaS
Infrastructure as a Service (IaaS)
With IaaS, you provision the VMs that you need along with associated network and storage components.
You then deploy whatever software and applications you want onto those VMs. This model is closest to a
traditional on-premises environment except that Microsoft manages the infrastructure. You still manage
the individual VMs.

Platform as a Service (PaaS)


PaaS provides a managed hosting environment where you can deploy your application without needing
to manage VMs or networking resources. For example, instead of creating individual VMs, you specify an
instance count and the service will provision, configure, and manage the necessary resources. Azure App
Service is an example of a PaaS service.
There is a spectrum from IaaS to pure PaaS. For example, Azure VMs can auto-scale by using VM scale sets. This automatic scaling capability isn't strictly PaaS, but it's the type of management feature that might be found in a PaaS service.

Function as a Service (FaaS)


FaaS goes even further in removing the need to worry about the hosting environment. Instead of creating
compute instances and deploying code to those instances, you simply deploy your code and the service
automatically runs it. You don’t need to administer the compute resources, because the services make use
of serverless architecture. They seamlessly scale up or down to whatever level necessary to manage the
traffic. Azure Functions is a FaaS service.
When comparing the three environments, remember that:
●● IaaS gives the most control, flexibility, and portability.
●● FaaS provides simplicity, elastic scale, and potential cost savings because you pay only for the time
your code is running.
●● PaaS falls somewhere between the two.
In general, the more flexibility a service provides, the more responsible you are for configuring and
managing the resources. FaaS services automatically manage nearly all aspects of running an application,
while IaaS solutions require you to provision, configure, and manage the VMs and network components
you create.

Azure Compute options


The main compute options currently available in Azure are:
●● IaaS:
●● Azure Virtual Machines [1] allows you to deploy and manage VMs inside an Azure virtual network.
●● PaaS:
●● Azure App Service [2] is a managed PaaS offering for hosting web apps, mobile app back-ends, RESTful APIs, or automated business processes.
●● Azure Container Instances [3] offers the fastest and simplest way to run a container in Azure without having to provision any VMs or adopt a higher-level service.
●● Azure Cloud Services [4] is a managed service for running cloud applications.
●● FaaS:
●● Azure Functions [5] is a managed FaaS service.
●● Azure Batch [6] is also a managed FaaS service, and is for running large-scale parallel and high-performance computing (HPC) applications.
●● Modern native cloud apps, providing massive scale and distribution:
●● Azure Service Fabric [7] is a distributed systems platform that can run in many environments, including Azure or on-premises. Service Fabric is an orchestrator of microservices across a cluster of machines.
●● Azure Kubernetes Service (AKS) [8] lets you create, configure, and manage a cluster of VMs that are preconfigured to run containerized applications.

Choosing a compute service


Azure offers a number of ways to host your application code. The term compute refers to the hosting
model for the computing resources that your application runs on.
The following flowchart will help you to choose a compute service for your application. It guides you
through a set of key decision criteria to reach a recommendation.

[1] https://azure.microsoft.com/en-us/services/virtual-machines/
[2] https://azure.microsoft.com/en-us/services/app-service/
[3] https://azure.microsoft.com/en-us/services/container-instances/
[4] https://azure.microsoft.com/en-us/services/cloud-services/
[5] https://azure.microsoft.com/en-us/services/functions/
[6] https://azure.microsoft.com/en-us/services/batch/
[7] https://azure.microsoft.com/en-us/services/service-fabric/
[8] https://azure.microsoft.com/en-us/services/kubernetes-service/

Treat this flowchart as a starting point: every application has unique requirements, so use the recommendation only as a guide. Then perform a more detailed evaluation, looking at aspects such as:
●● Feature sets
●● Service limits
●● Cost
●● SLA
●● Regional availability
●● Developer ecosystem and team skills
●● Compute comparison tables
If your application consists of multiple workloads, evaluate each workload separately. A complete solu-
tion might incorporate two or more compute services.
Azure Infrastructure-as-a-Service (IaaS) Services
Azure virtual machine
A Microsoft Azure virtual machine (VM) provides the flexibility of virtualization without having to buy and maintain the physical hardware that runs it. However, you still need to maintain the VM by performing tasks such as deploying, configuring, and maintaining the software that runs on it. Azure VMs provide more control over the computing environment than other choices offer.

Operating system support


Azure virtual machines support both Windows operating system (OS) and Linux OS deployments. You can also choose to upload and use your own image.
Note: If you opt to use your own image, the publisher name, offer, and SKU aren't used.

Windows operating systems


Azure provides both Windows Server and Windows client images for use in development, test, and production. Azure provides several marketplace images to use with various versions and types of Windows Server operating systems. Marketplace images are identified by image publisher, offer, SKU, and version (typically, version is specified as latest). However, only 64-bit operating systems are supported.
As an example, to find and list available Windows Server SKUs in westeurope, run the following commands one after the other:

az vm image list-publishers --location westeurope --query "[?starts_with(name, 'Microsoft')]"
az vm image list-offers --location westeurope --publisher MicrosoftWindowsServer
az vm image list-skus --location westeurope --publisher MicrosoftWindowsServer --offer windowsserver

Linux
Azure provides endorsed Linux distributions. Endorsed distributions are distributions that are available on Azure Marketplace and are fully supported. The images in Azure Marketplace are provided and maintained by the Microsoft partner who produces them. Some of the endorsed distributions available in Azure Marketplace include:
●● CentOS
●● CoreOS
●● Debian
●● Oracle Linux
●● Red Hat
●● SUSE Linux Enterprise
●● OpenSUSE
●● Ubuntu
●● RancherOS

There are several other Linux-based partner products that you can deploy to Azure VMs, including Docker, Bitnami by VMware, and Jenkins. A full list of endorsed Linux distributions is available at Endorsed Linux distributions on Azure [9].
As an example, to find and list available RedHat SKUs in westus, run the following commands one after
the other:
az vm image list-publishers --location westus --query "[?contains(name, 'RedHat')]"
az vm image list-offers --location westus --publisher RedHat
az vm image list-skus --location westus --publisher RedHat --offer RHEL

If you want to use a Linux version not on the endorsed list and not available in Azure Marketplace, you
can install it directly.

Azure VMs usage scenarios


You can use Azure VMs in various ways. Some examples are:
●● Development and test. Azure VMs offer a quick and easy way to create a computer with the specific configurations required to code and test an application.
●● Applications in the cloud. Because demand for an application can fluctuate, it might make economic sense to run it on a VM in Azure. You pay for extra VMs when you need them, and shut them down when you don't.
●● Extended datacenter. VMs in an Azure virtual network can more easily be connected to your organization's network.
The number of VMs that your application uses can scale up and out to whatever is required to meet your
needs. Conversely, it can scale back down when you don't need it, thereby avoiding unnecessary charges.

Deployment, configuration management and extensions


Azure VMs support a number of deployment and configuration management toolsets, including:
●● Azure Resource Manager (ARM) templates
●● Windows PowerShell Desired State Configuration (DSC)
●● Ansible
●● Chef
●● Puppet
●● Terraform by HashiCorp
VM extensions give your VM additional capabilities through post-deployment configuration and automated tasks. Some common tasks you can complete using extensions include:
●● Running custom scripts. The Custom Script Extension (CSE) helps you configure workloads on the VM by running your script when you provision the VM.
●● Deploying and managing configurations. The PowerShell DSC extension helps you set up DSC on a VM to manage configurations and environments.
●● Collecting diagnostics data. The Azure Diagnostics extension helps you configure the VM to collect diagnostics data that you can use to monitor your application's health.

[9] https://docs.microsoft.com/en-us/azure/virtual-machines/linux/endorsed-distros

Azure VM service limits and ARM


Each Azure service has its own service limits, quotas, and constraints. For example, some of the service
limits for VMs when you use ARM and Azure resource groups are in the following table.

Resource | Default limit
VMs per availability set | 200
Certificates per subscription | Unlimited

Scaling Azure VMs


Scaling for Azure virtual machines (VMs) is provided through VM scale sets (VMSS). Azure VM scale sets let you create and manage a group of identical, load-balanced VMs.
automatically increase or decrease in response to demand or a defined schedule. Scale sets provide high
availability to your applications, and allow you to centrally manage, configure, and update a large number
of VMs. With VM scale sets, you can build large-scale services for areas such as compute, big data, and
container workloads.

Why and when to use VM scale sets


Azure VM scale sets provide the management capabilities for applications that run across many VMs,
automatic scaling of resources, and load-balancing of traffic. Scale sets provide the following key bene-
fits:
●● Easier to create and manage multiple VMs
●● Provide high availability and application resiliency
●● Allow your application to automatically scale as resource demand changes
●● Work at large-scale
●● Useful when deploying highly available infrastructure where a set of machines has similar configurations
There are two basic ways to configure VMs deployed in a scale set:
●● Use extensions to configure the VM after it's deployed. With this approach, new VM instances could
take longer to start than a VM with no extensions.
●● Deploy a managed disk with a custom disk image. This option might be quicker to deploy, but it
requires you to keep the image up to date.

Differences between virtual machines and scale sets


Scale sets are built from VMs. With scale sets, the management and automation layers are provided to run and scale your applications. You could instead manually create and manage individual VMs, or integrate existing tools to build a similar level of automation. The following table outlines the benefits of scale sets compared to manually managing multiple VM instances.

Scenario | Manual group of VMs | VM scale set
Add additional VM instances | Manual process to create, configure, and ensure compliance | Automatically create from central configuration
Traffic balancing and distribution | Manual process to create and configure Azure Load Balancer or Application Gateway | Can automatically create and integrate with Azure Load Balancer or Application Gateway
High availability and redundancy | Manually create an availability set, or distribute and track VMs across availability zones | Automatic distribution of VM instances across availability zones or availability sets
Scaling of VMs | Manual monitoring and Azure Automation | Autoscale based on host metrics, in-guest metrics, Application Insights, or schedule

VM scale set limits

Each Azure service has service limits, quotas, and constraints. Some of the service limits for VM scale sets are listed in the following table.

Resource | Default limit | Maximum limit
Maximum number of VMs in a scale set | 1,000 | 1,000
Maximum number of VMs based on a custom image in a scale set | 600 | 600
Maximum number of scale sets in a region | 2,000 | 2,000

✔️ Note: There is no additional cost to use the VM scale sets service. You only pay for the underlying compute resources, such as the VM instances, Azure Load Balancer, or managed disk storage, that you consume. The management and automation features, such as autoscale and redundancy, incur no additional charges over the use of VMs.

Demonstration - Create a virtual machine scale set

In the following steps we will create a virtual machine scale set (VM scale set), deploy a sample application, and configure traffic access with Azure Load Balancer.

Prerequisites
●● You require an Azure subscription to perform the following steps. If you don't have one, you can create one by following the steps outlined on the Create your Azure free account today [10] webpage.
Note: If you use your own values for the parameters used in the following commands, such as the resource group name and scale set name, remember to change them in the subsequent commands as well, to ensure the commands run successfully.

Steps
1. Create a scale set. Before you can create a scale set, you must create a resource group using the
following command:

[10] https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio

az group create --name myResourceGroup --location <your-closest-datacenter>

Now create a virtual machine scale set with the following command:
az vmss create \
--resource-group myResourceGroup \
--name myScaleSet \
--image UbuntuLTS \
--upgrade-policy-mode automatic \
--admin-username azureuser \
--generate-ssh-keys

This creates a scale set named myScaleSet that is set to automatically update as changes are applied. It
also generates Secure Shell (SSH) keys if they do not exist in ~/.ssh/id_rsa.
2. Deploy a sample application. To test your scale set, install a basic web application, and a basic NGINX
web server. The Azure Custom Script Extension (CSE) downloads and runs a script that installs the
sample web application on the VM instance (or instances). To install the basic web application, run the
following command:
az vmss extension set \
--publisher Microsoft.Azure.Extensions \
--version 2.0 \
--name CustomScript \
--resource-group myResourceGroup \
--vmss-name myScaleSet \
--settings '{"fileUris":["https://raw.githubusercontent.com/Microsoft/PartsUnlimitedMRP/master/Labfiles/AZ-400T05-ImplemntgAppInfra/Labfiles/automate_nginx.sh"],"commandToExecute":"./automate_nginx.sh"}'

3. Allow traffic to access the application. When you created the scale set, the Azure Load Balancer
deployed automatically. As a result, traffic distributes to the VM instances in the scale set. To allow
traffic to reach the sample web application, create a load balancer rule with the following command:
az network lb rule create \
--resource-group myResourceGroup \
--name myLoadBalancerRuleWeb \
--lb-name myScaleSetLB \
--backend-pool-name myScaleSetLBBEPool \
--backend-port 80 \
--frontend-ip-name loadBalancerFrontEnd \
--frontend-port 80 \
--protocol tcp

4. Obtain the public IP Address. To test your scale set and observe your scale set in action, access the
sample web application in a web browser, and then obtain the public IP address of your load balancer
using the following command:
az network public-ip show \
--resource-group myResourceGroup \
--name myScaleSetLBPublicIP \
--query '[ipAddress]' \
--output tsv

5. Test your scale set. Enter the public IP address of the load balancer in a web browser. The load balancer distributes traffic to one of your VM instances.

6. Remove the resource group, scale set, and all related resources as follows. The --no-wait parameter returns control to the prompt without waiting for the operation to complete. The --yes parameter confirms that you wish to delete the resources without an additional prompt to do so.
az group delete --name myResourceGroup --yes --no-wait

Availability
Availability for infrastructure as a service (IaaS) services in Azure is provided both through the core physical structural components of Azure (such as Azure regions and Availability Zones), and through logical components that together provide for overall availability during outages, maintenance, and other downtime scenarios.

Availability sets
Availability sets ensure that the VMs you deploy on Azure are distributed across multiple, isolated
hardware clusters. Doing this ensures that if a hardware or software failure within Azure happens, only a
subset of your VMs are impacted and your overall solution remains available and operational.
Availability sets are made up of update domains and fault domains:
●● Update domains. When a maintenance event occurs (such as a performance update or critical security
patch applied to the host), the update is sequenced through update domains. Sequencing updates
using update domains ensures that the entire datacenter isn't unavailable during platform updates
and patching. Update domains are a logical section of the datacenter, and they are implemented with
software and logic.
●● Fault domains. Fault domains provide for the physical separation of your workload across different
hardware in the datacenter. This includes power, cooling, and network hardware that supports the
physical servers located in the server racks. In the event the hardware that supports a server rack
becomes unavailable, only that rack of servers is affected by the outage.
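As a sketch, the update and fault domain counts are specified when the availability set is created with Azure PowerShell. The names and counts below are illustrative only:

# Create an availability set with 5 update domains and 2 fault domains;
# the Aligned SKU is used for VMs with managed disks
New-AzureRmAvailabilitySet `
    -ResourceGroupName "myResourceGroup" `
    -Name "myAvailabilitySet" `
    -Location "westeurope" `
    -PlatformUpdateDomainCount 5 `
    -PlatformFaultDomainCount 2 `
    -Sku Aligned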

Update management
You can also leverage the Update management service within a Windows or Linux VM to manage updates and patches for the VMs. Directly from your VM, you can quickly assess the status of available updates, schedule installation of required updates, and review deployment results to verify updates were applied successfully.

Enable update management


To enable Update management for your VM:
1. On the left side of the screen in the Azure portal, select Virtual machines.
2. Select a VM from the list.
3. On the VM screen, in the Operations section, select Update management. The Enable Update Management window opens.
If any of the following prerequisites were found to be missing during onboarding, they're automatically added:
●● Log Analytics workspace. Provides a single location to review and analyze data generated by the VM features and services, such as Update management.
●● Automation. Allows you to run runbooks against VMs, such as to download and apply updates.
●● A Hybrid Runbook Worker enabled on the VM. Used to communicate with the VM and obtain information about the update status.

Additional IaaS considerations

When using Infrastructure as a Service (IaaS) virtual machines in DevOps scenarios, there are a number of other relevant areas to consider.

Monitoring and analytics

Boot diagnostics
The boot diagnostic agent captures screen output that can be used for troubleshooting purposes. This
capability is enabled by default with Windows VMs, but it's not automatically enabled when you create a
Linux VM using Azure CLI. The captured screenshots are stored in an Azure storage account, which is also
created by default.

Host metrics
An Azure VM, for both the Windows and Linux operating systems, has a dedicated host in Azure that it interacts with. Metrics are automatically collected for the host, which you can view in the Azure portal.

Enable diagnostic extensions


To see more granular, VM-specific metrics, you need to install the Azure diagnostics extension on the VM. This extension allows you to retrieve additional monitoring and diagnostics data from the VM. You can view these performance metrics and create alerts based on the VM's performance.

You can enable diagnostic extensions in the Portal using the following steps:
1. In the Azure portal, choose Resource Groups, select myResourceGroupMonitor, and then in the
resource list, select myVM.
2. Select Diagnostics settings. In the Pick a storage account drop-down, choose or create a storage account.
3. Select the Enable guest-level monitoring button.
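You can also enable the extension from Azure PowerShell rather than the portal. A minimal sketch, assuming a diagnostics configuration file has already been prepared locally (the path and names are placeholders):

# Apply the diagnostics extension using a prepared configuration file
Set-AzureRmVMDiagnosticsExtension `
    -ResourceGroupName "myResourceGroupMonitor" `
    -VMName "myVM" `
    -DiagnosticsConfigurationPath "C:\configs\diagnostics.json"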

Alerts
You can create alerts based on specific performance metrics. You can use these alerts to notify you, for
example, when average CPU usage exceeds a certain threshold or available free disk space drops below a
certain amount. Alerts display in the Azure portal, or you can have them sent via email. You can also
trigger Azure Automation runbooks or Azure Logic Apps in response to alerts being generated.
The following steps create an alert for average CPU usage:
1. In the Azure portal, select Resource Groups, select myResourceGroupMonitor, and then in the
resource list select myVM.
2. On the VM blade, select Alert rules. Then from the top of the Alerts blade, select Add metric alert.
3. Provide a name for your alert, such as myAlertRule.
4. To trigger an alert when CPU percentage exceeds 1.0 for five minutes, leave all the other default
settings selected.
5. Optionally, you can select the Email owners, Contributors, and Readers check boxes to send email
notifications. The default action is to present a notification in the portal only.
6. Select OK.
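The same kind of alert can be scripted with the Azure CLI. A minimal sketch, assuming a recent CLI version and the placeholder names used above:

# Create an alert rule that fires when average CPU exceeds 80 percent
# over a 5-minute window, evaluated every minute.
az monitor metrics alert create \
    --name myAlertRule \
    --resource-group myResourceGroupMonitor \
    --scopes $(az vm show -g myResourceGroupMonitor -n myVM --query id -o tsv) \
    --condition "avg Percentage CPU > 80" \
    --window-size 5m \
    --evaluation-frequency 1m \
    --description "Average CPU usage is high"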

Load balancing
Azure Load Balancer is a Layer-4 (Transmission Control Protocol (TCP), User Datagram Protocol (UDP))
load balancer that provides high availability by distributing incoming traffic among healthy VMs.
For load balancing, you define a front-end IP configuration that contains one or more public IP address-
es. This configuration allows your load balancer and applications to be accessible over the internet.
Virtual machines connect to a load balancer using their virtual network interface card (NIC). To distribute
traffic to the VMs, a back-end address pool contains the IP addresses of the virtual NICs connected to the
load balancer.
To control the flow of traffic, you define load-balancer rules for specific ports and protocols that map to
your VMs.
When creating a VM scale set, a load balancer is automatically created as part of that process.
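Outside of scale sets, you can create these pieces yourself. The following Azure CLI sketch creates a basic load balancer and a rule that maps TCP port 80 to the back-end pool; all names are placeholders, and the public IP address is assumed to exist already:

# Create a load balancer with a front-end IP configuration and back-end pool.
az network lb create \
    --resource-group myResourceGroup \
    --name myLoadBalancer \
    --public-ip-address myPublicIP \
    --frontend-ip-name myFrontEnd \
    --backend-pool-name myBackEndPool

# Define a rule that distributes TCP port 80 traffic to the back-end VMs.
az network lb rule create \
    --resource-group myResourceGroup \
    --lb-name myLoadBalancer \
    --name myHTTPRule \
    --protocol Tcp \
    --frontend-port 80 \
    --backend-port 80 \
    --frontend-ip-name myFrontEnd \
    --backend-pool-name myBackEndPool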

Migrating workloads to Azure


Using Azure infrastructure as a service (IaaS) VMs, you can migrate workloads from AWS and on-premises
to Azure. You can upload VHD files from Amazon Web Services (AWS) or on-premises virtualization
solutions to Azure to create VMs that utilize the Azure feature, Managed Disks.
You can also move from classic Azure cloud services deployments to Azure Resource Manager type
deployments.

VMs versus containers


Containers are becoming increasingly popular depending on an organization's needs. This is because
they offer numerous advantages over other environments such as VMs. The following are descriptions of
some of these advantages.

Less resource intensive


Compared to running a VM, running a container requires relatively few resources. This is because the
operating system is shared. When you start a VM you boot an entirely new operating system on top of
the running operating system, and they share only the hardware. With containers, you share the memory,
the disk, and the CPU. This means that the overhead associated with starting a container is quite low, while the container still provides isolation at the process level.

Fast startup
Because running a container requires only a few extra resources over the operating system, startup time
is faster, and roughly equivalent to the time required to start a new process. The only additional item the OS needs to set up is the isolation for the process, which is done at the kernel level and occurs quickly.

Improved server density


When you own hardware, you want to utilize that hardware as efficiently and cost-effectively as possible.
With the introduction of VMs, sharing hardware among multiple VMs was introduced.
Containers take this sharing concept one step further by enabling even more efficient utilization of
memory, disk, and CPU from the available hardware. This is because containers consume only the memory and CPU that they actually need. This results in fewer idle servers and better utilization of the existing compute resources.
This is an especially important consideration for cloud providers. The higher the server density (the
number of things that you can do with the hardware that you have), the more cost efficient the data-
center becomes. It's not surprising that containers are becoming more popular and that new tools for
managing and maintaining containerized solutions are rapidly emerging.

Density and isolation comparison


Containers allow for more density, scalability, and agility than VMs do, while still enabling more isolation than plain processes permit. The following diagram compares density and isolation levels across the range of possible environments.

Windows vs. Linux containers


It is important to understand that containers are part of the operating system, and that each container shares the kernel of that operating system. This generally means that if you create an image on a machine running the Windows operating system, it is a Windows-specific image that needs a Windows environment to run, and vice versa.
There are tools native to Windows, such as the Windows Subsystem for Linux (WSL), available in both Windows 10 and Windows Server 2019, and Docker for Windows, which allow you to run Linux and Windows containers side by side.

Windows Server and Hyper-V containers


Windows Server and Windows 10 can create and run different types of containers and they each have
different characteristics.

Windows Server containers


The following diagram of the high-level architecture shows how containers work on the Windows operating system.

As the diagram shows, the Windows operating system always has its default host user-mode processes running. In addition, there are now services such as Docker and the compute service that manage containers.
When you start a new container, Docker talks to the compute services to create a new container based on
an image. For each container, Docker will create a Windows container, each of which will require a set of
system processes. These are always the same in every container. You then use your own application
process to differentiate each container. These can be Microsoft Internet Information Services (IIS) or SQL
Server processes that you run in the container.
On Windows Server 2016 and Windows Server 2019, you can run these containers so that they share the Windows kernel. This method is quite efficient, and the processes that run in the container incur no performance penalty, because they access kernel objects directly, without indirection.
On Windows Server 2019, you can now run Windows and Linux containers alongside each other.

Hyper-V containers
When containers share the kernel and memory, there is a slight chance that if a vulnerability occurs in the Windows operating system, an application might break out of its sandbox environment and do something malicious. To avoid this, Windows provides a more secure alternative for running containers, called Hyper-V containers.
The following diagram depicts the high-level architecture of Hyper-V containers on the Windows operating system. Hyper-V containers are supported on Windows Server 2016 and newer versions, and on the Windows 10 Anniversary Update and later.

The main difference between Windows Server containers and Hyper-V containers is the isolation that the
latter provides. Hyper-V containers are the only type of containers you can run on the Windows 10
operating system. Hyper-V containers have a small footprint and start fast compared to a full VM. You can run any image as a Hyper-V isolated container by using the --isolation option on the Docker command line and specifying the isolation type hyperv. Refer to the following command for an example:
docker run -it --isolation hyperv microsoft/windowsservercore cmd
This command will run a new instance of a container based on the image microsoft/windows-
servercore, and will run the command cmd.exe in interactive mode.

Nano Server
Nano Server is the headless deployment option for Windows Server 2016 and Windows Server 2019,
available via the semi-annual channel releases. It is specifically optimized for private clouds and data-
centers and for running cloud-based applications. It is intended to be run as a container in a container
host, such as a Server Core installation of Windows Server.
It's a remotely administered server operating system optimized for private clouds and datacenters. It's
similar to Windows Server in Server Core mode, but it's significantly smaller, has no local logon capability,
and only supports 64-bit applications, tools, and agents.
Nano Server also takes up far less disk space, sets up significantly faster, and requires far fewer updates
and restarts than Windows Server. When it does restart, it restarts much faster. Nano Server is ideal for a
number of scenarios:
●● As a compute host for Hyper-V VMs, either in clusters or not.
●● As a storage host for Scale-Out File Server.
●● As a Domain Name System (DNS) server
●● As a web server running IIS
●● As a host for applications that are developed using cloud application patterns and run in a container
or VM guest operating system

Azure VMs and containers


It's also possible to run containers under an IaaS model in Azure using Azure VMs.
You can run the following containers:
●● Windows Server or Windows Hyper-V containers
●● Docker containers on Windows Server 2016 or Windows Server 2019 Nano Server
●● Docker containers on Linux deployments.
You can use tools such as cloud-init11 or the custom script extension12 to install the Docker version of
choice.
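As an illustrative sketch, the following Bash snippet writes a minimal cloud-init file that installs Docker and passes it to az vm create as custom data. The package name assumes an Ubuntu image; all other names are placeholders:

# Write a minimal cloud-init definition that installs and starts Docker.
cat > cloud-init.txt <<'EOF'
#cloud-config
package_update: true
packages:
  - docker.io
runcmd:
  - systemctl enable --now docker
EOF

# Create the VM, passing the cloud-init file as custom data.
az vm create \
    --resource-group myResourceGroup \
    --name myDockerVM \
    --image UbuntuLTS \
    --generate-ssh-keys \
    --custom-data cloud-init.txt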
However, installing Docker does mean that when you need to deploy to multiple VMs in a load-balanced infrastructure, you're dealing with infrastructure operations, VM OS patching, and infrastructure complexity for highly scalable applications. In production, for large-scale or complex applications, this is not a best practice.
The main scenarios for using containers in an Azure VM are:
●● Dev/test environment. A VM in the cloud is optimal for development and testing in the cloud. You can
rapidly create or stop the environment depending on your needs.
●● Small and medium scalability needs. In scenarios where you might need just a couple of VMs for your production environment, managing a small number of VMs might be affordable until you can move to more advanced platform as a service (PaaS) environments, such as an orchestrator.
●● Production environments with existing deployment tools. You might be migrating from an on-premises environment in which you have invested in tools for making complex deployments to VMs or bare-metal servers. To move to the cloud with minimal changes to your production deployment procedures, you could continue to use those tools to deploy to Azure VMs. However, you'll want to use Windows Containers as the unit of deployment to improve the deployment experience.

Automating IaaS Infrastructure


Azure VMs support a wide range of deployment and configuration management toolsets, including:
●● Azure Resource Manager templates
●● Scripting using Bash, Azure CLI, and PowerShell
●● Windows PowerShell Desired State Configuration (DSC) or Azure Automation DSC
●● Ansible
●● Chef
●● Puppet
●● Terraform
VM extensions give your VM additional capabilities through post-deployment configuration and automated tasks. The following common tasks can be accomplished using extensions:
●● Run custom scripts. The Custom Script Extension (CSE) helps you configure VM workloads by running
your script when the VM is provisioned.

11 https://docs.microsoft.com/en-us/azure/virtual-machines/linux/tutorial-automate-vm-deployment
12 https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/custom-script-linux

●● Deploy and manage configurations. The PowerShell DSC extension helps you set up DSC on a VM to
manage configurations and environments.
●● Collect diagnostics data. The Azure Diagnostics extension helps you configure the VM to collect
diagnostics data you can use to monitor the health of your applications.
You can read more about VM extensions at Virtual machine extensions and features for Linux13, and Virtual machine extensions and features for Windows14.
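As a hedged example, the Custom Script Extension can be applied to an existing Linux VM from the Azure CLI. The script URL and resource names below are hypothetical placeholders:

# Run a configuration script on an existing Linux VM via the
# Custom Script Extension (script URL is a placeholder).
az vm extension set \
    --resource-group myResourceGroup \
    --vm-name myVM \
    --publisher Microsoft.Azure.Extensions \
    --name CustomScript \
    --settings '{"fileUris":["https://example.com/scripts/configure.sh"],"commandToExecute":"./configure.sh"}'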

Azure DevTest Labs


Azure DevTest Labs enables you to quickly set up an environment for your team (for example: develop-
ment or test environment) in the cloud. A lab owner creates a lab, provisions Windows VMs or Linux VMs,
installs the necessary software and tools, and makes them available to lab users. Lab users connect to the
VMs in the lab, and use them for their day-to-day short-term projects. Once users start utilizing resources
in the lab, a lab admin can analyze cost and usage across multiple labs, and set overarching policies to
optimize their organization's or team's costs.

✔️ Note: Azure DevTest Labs is being expanded with new types of labs, namely Azure Lab Services. Azure
Lab Services lets you create managed labs, such as classroom labs. The service itself handles all the
infrastructure management for a managed lab, from spinning up VMs to handling errors, and scaling the
infrastructure. At the time of writing, the managed labs are in preview. Once the preview ends, the new lab types and existing DevTest Labs will come under the common umbrella name of Azure Lab Services, where all lab types will continue to evolve.

Usage scenarios
Some common use cases for using Azure DevTest Labs are as follows:
●● Use DevTest Labs for development environments. This enables you to host development machines for
developers so they can:
●● Quickly provision their development machines on demand.
●● Provision Windows and Linux environments using reusable templates and artifacts.
●● More easily customize their development machines whenever needed.
●● Use DevTest Labs for test environments. This enables you to host machines for testers so they can:
●● Test the latest version of their application by quickly provisioning Windows and Linux environ-
ments using reusable templates and artifacts.
●● Scale up their load testing by provisioning multiple test agents.
In addition, administrators can use DevTest Labs to control costs by ensuring that testers cannot get more VMs than they need for testing, and that VMs are shut down when not in use.

13 https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/features-linux
14 https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/features-windows

●● Integrate DevTest Labs with Azure DevOps CI/CD pipeline. You can use the Azure DevTest Labs Tasks
extension that's installed in Azure DevOps to easily integrate your CI/CD build-and-release pipeline
with Azure DevTest Labs. The extension installs three tasks:
●● Create a VM
●● Create a custom image from a VM
●● Delete a VM
The process makes it easy to, for example, quickly deploy an image for a specific test task, and then
delete it when the test completes.
For more information on DevTest Labs, go to Azure Lab Services Documentation15.

15 https://docs.microsoft.com/en-us/azure/lab-services/

Azure Platform-as-a-Service (PaaS) Services


Azure App Service
Azure App Service is a PaaS offering on Azure for hosting web applications, REST APIs, and mobile
backends.
Web App, a component of Azure App Service, is a fully managed compute platform optimized for hosting websites and web applications. You can build applications and run them natively in Windows or Linux environments, and many languages are supported for building your application.
Supported Languages include:
●● Node.js
●● Java
●● PHP
●● Python (Preview)
●● .NET
●● .NET Core
●● Ruby
As a PaaS offering, Azure App Service provides capabilities such as security, load balancing, autoscaling, and automated management. You can also utilize its DevOps capabilities, such as continuous deployment from Azure DevOps, GitHub, Docker Hub, and other sources. Other capabilities you can utilize include package management, staging environments, custom domains, and Secure Sockets Layer (SSL) certificates.

Why use App Service?


App Service provides you with some key features, including:
●● Multiple languages and frameworks. App Service supports many languages as listed above, and allows
you to run PowerShell and other scripts or executables as background services.
●● DevOps optimization. You can set up continuous integration and deployment using either Azure
DevOps, GitHub, BitBucket, Docker Hub, or Azure Container Registry. You also can promote updates
through test and staging environments, and manage your apps in App Service by using Azure Power-
Shell or the cross-platform command-line interface (CLI).
●● Global scale with high availability. Using App Service, you can scale up or out manually or automati-
cally, and host your apps anywhere in Microsoft's global datacenter infrastructure. In addition, the
App Service service-level agreement (SLA) ensures high availability.
●● Connections to SaaS platforms and on-premises data. Choose from more than 50 connectors for
different enterprise systems.
●● Security and compliance. App Service is ISO, Service Organization Controls (SOC), and Payment Card
Industry (PCI) compliant. Users can authenticate with Azure Active Directory (Azure AD), or with social
media accounts such as Google, Facebook, Twitter, and Microsoft. Create IP address restrictions and
manage service identities.
●● Application templates. Choose from an extensive list of application templates in the Azure Market-
place, such as WordPress, Joomla, and Drupal.

●● Microsoft Visual Studio integration. Dedicated tools in Visual Studio streamline the work of creating,
deploying, and debugging.
●● API and mobile features. App Service provides turn-key CORS support for RESTful API scenarios, and
simplifies mobile app scenarios by enabling authentication, offline data sync, push notifications, and
more.
●● Serverless code. Run a code snippet or script on-demand without having to explicitly provision or
manage infrastructure. Pay only for the compute time your code actually uses.
More general details are available on the App Service Documentation16 page.

App Service plans


In App Service, an app runs in an App Service plan. An App Service plan defines a set of compute resourc-
es for a web app to run. These compute resources are analogous to the server farm in conventional web
hosting. You can configure one or more apps to run on the same computing resources, or in the same
App Service plan.
When you create an App Service plan in a certain region (for example, West Europe), a set of compute
resources is created for that plan in that region. Whatever apps you put into this App Service plan run on
these compute resources as defined by your App Service plan.
Each App Service plan defines:
●● Region (such as West US, East US)
●● Number of VM instances
●● Size of VM instances (small, medium, and large)
●● Pricing tier (Free, Shared, Basic, Standard, Premium, PremiumV2, Isolated, Consumption)
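For example, the following Azure CLI sketch creates an App Service plan that pins the region, instance size, and pricing tier, and then places a web app on it. All names are placeholders:

# Create an App Service plan in West Europe on the Standard S1 tier.
az appservice plan create \
    --name myAppServicePlan \
    --resource-group myResourceGroup \
    --location westeurope \
    --sku S1

# Create a web app that runs on the compute resources the plan defines.
az webapp create \
    --resource-group myResourceGroup \
    --plan myAppServicePlan \
    --name my-unique-webapp-name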

Pricing tiers
The pricing tier for an App Service plan determines what App Service features you can use, and how
much you pay for the plan. Pricing tiers are:
●● Shared compute. Shared compute has two base tiers, Free, and Shared. They both run an app on the
same Azure VM as other App Service apps, including apps of other customers. These tiers allocate
CPU quotas to each app that runs on the shared resources, and the resources cannot scale out.
●● Dedicated compute. The Dedicated compute Basic, Standard, Premium, and PremiumV2 tiers run apps
on dedicated Azure VMs. Only apps in the same App Service plan share the same compute resources.
The higher the tier, the more VM instances are available for scale-out.
●● Isolated. This tier runs dedicated Azure VMs on dedicated Azure virtual networks. This provides
network isolation (on top of compute isolation) to your apps. It also provides the maximum scale-out
capabilities.
●● Consumption. This tier is only available to function apps. It scales the functions dynamically depend-
ing on workload.
More detail about the App Service plans are available on the Azure App Service plan overview17
webpage.

16 https://docs.microsoft.com/en-us/azure/app-service/
17 https://docs.microsoft.com/en-us/azure/app-service/overview-hosting-plans?toc=%2fazure%2fapp-service%2fcontainers%2ftoc.json

Demonstration-Create a Java app in App Service on Linux


This walkthrough shows how to use the Azure CLI with the Maven Plugin for Azure Web Apps (Preview) to deploy a Java Web archive file.

Prerequisites
●● You require an Azure subscription to perform the following steps. If you don't have one you can
create one by following the steps outlined on the Create your Azure free account today18 webpage.

Steps:
1. Open Azure Cloud Shell by going to https://shell.azure.com19, or by using the Azure Portal, and
select Bash as the environment option.

2. Create a Java app by executing the following Maven command at the Cloud Shell prompt to create a new app named helloworld, accepting the default values as you go:
mvn archetype:generate -DgroupId=example.demo -DartifactId=helloworld -DarchetypeArtifactId=maven-archetype-webapp

18 https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio
19 https://shell.azure.com

3. Select the braces icon in Cloud Shell to open the editor. Use this code editor to open the project file
pom.xml in the helloworld directory.
4. Add the following plugin definition inside the <build> element of the pom.xml file:
<plugins>
    <!--*************************************************-->
    <!-- Deploy to Tomcat in App Service Linux           -->
    <!--*************************************************-->
    <plugin>
        <groupId>com.microsoft.azure</groupId>
        <artifactId>azure-webapp-maven-plugin</artifactId>
        <version>1.4.0</version>
        <configuration>
            <!-- App information -->
            <resourceGroup>${RESOURCEGROUP_NAME}</resourceGroup>
            <appName>${WEBAPP_NAME}</appName>
            <region>${REGION}</region>
            <!-- Java Runtime Stack for App on Linux-->
            <linuxRuntime>tomcat 8.5-jre8</linuxRuntime>
        </configuration>
    </plugin>
</plugins>

5. Update the following placeholders in the plugin configuration:


●● RESOURCEGROUP_NAME (can be any name)
●● WEBAPP_NAME (must be a unique name)
●● REGION (your nearest datacenter location. For example, westus)
6. Deploy your Java app to Azure using the following command:
mvn package azure-webapp:deploy

7. After this step completes, verify the deployment by opening the deployed application in your web browser, replacing <webapp> with the name of the deployed application. For example, http://<webapp>.azurewebsites.net/helloworld.

Demonstration-Deploy a .NET Core based app


This walkthrough shows how to create an ASP.NET Core web app, and then deploy it to Azure App
Services.

Prerequisites
You require the following items to complete these walkthrough steps:
●● Visual Studio 2017. If you don't have Visual Studio 2017, you can install the Visual Studio Community
edition from the Visual Studio downloads20 webpage.
●● An Azure subscription. If you don't have one you can create one by following the steps outlined on
the Create your Azure free account today21 webpage.

Steps
1. In Visual Studio, create a project by selecting File, New, and then Project.
2. In the New Project dialog, select Visual C#, Web, and then ASP.NET Core Web Application.
3. Name the application myFirstAzureWebApp, and then select OK.

20 https://visualstudio.microsoft.com/downloads/
21 https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio

You can deploy any type of ASP.NET Core web app to Azure. For this walkthrough, select the **Web Application** template. Ensure authentication is set to **No Authentication** and no other option is selected, and then select **OK**.

4. To run the web app locally, from the menu select Debug, then Start Without Debugging.

5. Launch the Publish wizard by going to Solution Explorer, right-clicking the myFirstAzureWebApp
project, and then selecting Publish.

6. To open the Create App Service dialog, select App Service, and then select Publish.

7. Sign in to Azure. In the Create App Service dialog, select Add an account…, and sign in to your Azure subscription. If you're already signed in, select the account you want from the drop-down, and don't select the Create button yet.

8. Next to Resource Group, select New. Name the resource group myResourceGroup, and then select
OK.
9. Next to Hosting Plan, select New. Use the following values, and then select OK:
●● App Service Plan: myappserviceplan
●● Location: your nearest datacenter
●● Size: Free

✔️ Note: An App Service plan specifies the location, size, and features of the web server farm that hosts your app. You can save money when hosting multiple apps by configuring the web apps to share a single App Service plan. Each plan defines:
- Region (for example: North Europe, East US, or Southeast Asia)
- Instance size (small, medium, or large)
- Scale count (1 to 20 instances)
- SKU (Free, Shared, Basic, Standard, or Premium)
10. While still in the Create App Service dialog, enter a value for the app name, and then select Create.
✔️ Note: The app name must be a unique value. Valid characters are a-z, 0-9, and hyphens (-). Alternatively, you can accept the automatically generated unique name. The resulting URL of the web app is http://<app_name>.azurewebsites.net, where <app_name> is your app's name.

11. After the wizard completes, it publishes the ASP.NET Core web app to Azure, and then launches the
app in the default browser.

The app name specified in the create and publish step is used as the URL prefix in the format
http://<app_name>.azurewebsites.net.
Congratulations, your ASP.NET Core web app is running live in Azure App Service.
✔️ Note: If you do not plan on using the resources, you should delete them to avoid incurring charges.

Scale App Services


Scaling ensures that you have the right amount of resources running to manage an application's needs.
Scaling can be done manually or automatically.
There are two workflows for scaling Azure App services:
●● Scale up. Add additional resources to your app, such as more CPU, memory, or disk space. You can also add extra features such as dedicated VMs, custom domains and certificates, staging slots, and autoscaling.
●● To scale up, you need to change the pricing tier of your App Service plan.
●● Scale out. Increase the number of VM instances that run your app.
●● You can scale out to as many as 20 instances, depending on your pricing tier.
●● App Service environments in the Isolated tier further increase your scale-out count to 100 instances.
Note: The scale settings take seconds to apply and affect all apps in your App Service plan. They don't
require you to change your code or redeploy your application.
You can also scale based on a set of predefined rules or schedule, and configure webhooks and email
notifications and alerts.
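As a hedged sketch, rules like these can be configured from the Azure CLI against an App Service plan. The setting and rules below use placeholder names and assume the CpuPercentage metric exposed by App Service plans:

# Create an autoscale setting for the plan, allowing 1 to 5 instances.
az monitor autoscale create \
    --resource-group myResourceGroup \
    --resource myAppServicePlan \
    --resource-type Microsoft.Web/serverfarms \
    --name myAutoscaleSetting \
    --min-count 1 --max-count 5 --count 1

# Scale out by one instance when average CPU exceeds 70% for 10 minutes.
az monitor autoscale rule create \
    --resource-group myResourceGroup \
    --autoscale-name myAutoscaleSetting \
    --condition "CpuPercentage > 70 avg 10m" \
    --scale out 1

# Scale back in when average CPU falls below 30% for 10 minutes.
az monitor autoscale rule create \
    --resource-group myResourceGroup \
    --autoscale-name myAutoscaleSetting \
    --condition "CpuPercentage < 30 avg 10m" \
    --scale in 1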

Autoscale
Autoscale settings help ensure that you have the right amount of resources running to manage the
fluctuating load of your application. You can configure Autoscale settings to trigger based on metrics that
indicate load or performance, or at a scheduled date and time.

Metric
You can scale based on a resource metric, such as:
●● Scale based on CPU. You want to scale out or scale in based on a percentage CPU value.
●● Scale based on custom metric. To scale based on a custom metric, you designate a specific metric that
is relevant to your app architecture. For example, you might have a web front end and an API tier that
communicates with the backend, and you want to scale the API tier based on custom events in the
front end.

Schedule
You can scale based on a schedule as well. For example, you can:
●● Scale differently on weekdays vs. weekends. If you don't expect traffic on weekends, you can scale down to one instance on weekends.
●● Scale differently during holidays. During holidays or specific days that are important for your business, you might want to override the default scaling settings and have more capacity at your disposal.

Autoscale profiles
There are three types of Autoscale profiles that you can configure depending on what you want to
achieve. Azure then evaluates which profile to execute at any given time. The profile types are:
●● Regular profile. This is the most common profile. If you don’t need to scale your resource based on
the day of the week, or on a particular day, you can use a regular profile.
●● Fixed date profile. This profile is for special cases. For example, let’s say you have an important event
coming up on December 26, 2019 (PST). You want the minimum and maximum capacities of your
resource to be different on that day, but still scale on the same metrics.
●● Recurrence profile. This type of profile enables you to ensure that this profile is always used on a
particular day of the week. Recurrence profiles only have a start time. They run until the next recur-
rence profile or fixed date profile is set to start.

Example
The following example shows how this looks in an Azure Resource Manager template: an Autoscale setting with one profile.
●● There are two metric rules in this profile: one for scale out, and one for scale in.

●● The scale-out rule is triggered when the VM scale set's average Percentage CPU metric is greater than 85 percent for the past 10 minutes.
●● The scale-in rule is triggered when the VM scale set's average Percentage CPU metric is less than 60 percent for the past 10 minutes.
{
  "id": "/subscriptions/s1/resourceGroups/rg1/providers/microsoft.insights/autoscalesettings/setting1",
  "name": "setting1",
  "type": "Microsoft.Insights/autoscaleSettings",
  "location": "East US",
  "properties": {
    "enabled": true,
    "targetResourceUri": "/subscriptions/s1/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachineScaleSets/vmss1",
    "profiles": [
      {
        "name": "mainProfile",
        "capacity": {
          "minimum": "1",
          "maximum": "4",
          "default": "1"
        },
        "rules": [
          {
            "metricTrigger": {
              "metricName": "Percentage CPU",
              "metricResourceUri": "/subscriptions/s1/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachineScaleSets/vmss1",
              "timeGrain": "PT1M",
              "statistic": "Average",
              "timeWindow": "PT10M",
              "timeAggregation": "Average",
              "operator": "GreaterThan",
              "threshold": 85
            },
            "scaleAction": {
              "direction": "Increase",
              "type": "ChangeCount",
              "value": "1",
              "cooldown": "PT5M"
            }
          },
          {
            "metricTrigger": {
              "metricName": "Percentage CPU",
              "metricResourceUri": "/subscriptions/s1/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachineScaleSets/vmss1",
              "timeGrain": "PT1M",
              "statistic": "Average",
              "timeWindow": "PT10M",
              "timeAggregation": "Average",
              "operator": "LessThan",
              "threshold": 60
            },
            "scaleAction": {
              "direction": "Decrease",
              "type": "ChangeCount",
              "value": "1",
              "cooldown": "PT5M"
            }
          }
        ]
      }
    ]
  }
}

You can review autoscale best practices on the Best practices for Autoscale22 page.

Web App for Containers


Azure App Service also supports container deployment. The container hosting offering in Azure App Service is often referred to as Web App for Containers.
Web App for Containers allows customers to use their own containers and deploy them to App Service as
a web app. Similar to the Web App solution, Web App for Containers eliminates time-consuming infra-
structure management tasks during container deployment, updating, and scaling. This enables develop-
ers to focus on coding, and getting their apps in front of end users faster.

22 https://docs.microsoft.com/en-us/azure/azure-monitor/platform/autoscale-best-practices

To boost developer productivity, Web App for Containers also provides:


●● Integrated CI/CD capabilities with Docker Hub
●● Azure Container Registry
●● Visual Studio Team Services (VSTS)
●● Built-in staging
●● Rollback
●● Testing-in-production
●● Monitoring
●● Performance testing capabilities
For Operations, Web App for Containers also provides rich configuration features that enable developers
to more easily add custom domains, integrate with Azure AD authentication, add SSL certificates, and
more. These are all crucial to web app development and management. Web App for Containers is an
ideal environment to run web apps that do not require extensive infrastructure control.
Web App for Containers supports both Linux and Windows containers. It also provides the following
features:
●● Deploy containerized applications using Docker Hub, Azure Container Registry, or private registries.
●● Incrementally deploy apps into production with deployment slots and slot swaps to allow for blue/
green deployments (also known as A/B deployments).
●● Scale out automatically with auto-scale.
●● Enable application logs, and use the App Service Log Streaming feature to see logs from your applica-
tion.
●● Use PowerShell and Windows Remote Management (WinRM) to remotely connect directly into your
containers.

Demonstration-Deploy custom Docker image to Web App for Containers


In this walkthrough you will deploy a custom Docker image running a Go application to Web App for Containers.

Prerequisites
●● You require an Azure subscription to perform these steps. If you don't have one you can create one by following the steps outlined on the Create your Azure free account today23 webpage.

Steps
1. Open Azure Cloud Shell by going to https://shell.azure.com, or by using the Azure portal and select-
ing Bash as the environment option.

23 https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio

2. Run the following command to configure a deployment user, replacing <username> and <password>
(including brackets) with a new user name and password. The user name must be unique within Azure,
and the password must be at least eight characters long with two of the following three elements:
letters, numbers, symbols:
Note: This deployment user is required for FTP and local Git deployment to a web app. The user name
and password are account level. They are different from your Azure subscription credentials.
az webapp deployment user set --user-name <username> --password <password>

You should get a JSON output, with the password shown as null. If you get a 'Conflict'. Details: 409 error, change the user name. If you get a 'Bad Request'. Details: 400 error, use a stronger password.
3. Create a resource group in Azure by using the following command, substituting a resource group name of your own choice and a location near you:
az group create --name myResourceGroup --location "West Europe"

4. Create an Azure App Service plan by running the following command, which creates an App Service
plan named myAppServicePlan in the Basic pricing tier:
az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku B1
--is-linux

5. Run the following command to create a web app in your App Service plan, replacing <app_name> with a globally unique name, and the resource group and App Service plan names with the values you created earlier. This command points to a public Docker Hub image:
az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app_name> --deployment-container-image-name microsoft/azure-appservices-go-quickstart

When the web app has been created, the Azure CLI shows output similar to the following example:
{
  "availabilityState": "Normal",
  "clientAffinityEnabled": true,
  "clientCertEnabled": false,
  "cloningInfo": null,
  "containerSize": 0,
  "dailyMemoryTimeQuota": 0,
  "defaultHostName": "<app name>.azurewebsites.net",
  "deploymentLocalGitUrl": "https://<username>@<app name>.scm.azurewebsites.net/<app name>.git",
  "enabled": true,
  < JSON data removed for brevity. >
}

6. Browse to the app http://<app_name>.azurewebsites.net/hello.


Congratulations! You've deployed a custom Docker image running a Go application to Web App for
Containers.

Azure Container Instances


Azure Container Instances is a PaaS service on Azure that offers the capability to run both Linux and
Windows containers. By default, Azure Container Instances runs single containers, which means that
individual containers are isolated from each other and cannot interact with one another.
Azure Container Instances offers one of the fastest and simplest ways to run a container in Azure, without having to provision any virtual machines and without having to adopt a higher-level service.

Eliminate VM management
With Azure Container Instances you don't need to own a VM to run your containers. This means that you don't need to worry about creating, managing, and scaling them. In the following picture, the network, virtual machine, and container host are entirely managed for you. However, this also means you have no control over them.

Hypervisor-level security
Historically, containers have offered application dependency isolation and resource governance. Howev-
er, they have not been considered sufficiently hardened for hostile multi-tenant usage. In Azure Contain-
er Instances, your application is as isolated in a container as it would be in a VM.
The isolation between individual containers is achieved by using Hyper-V containers.

Public IP connectivity and DNS name


Azure Container Instances enables exposing your containers directly to the internet with an IP address
and a fully qualified domain name (FQDN). When you create a container instance you can specify a
custom Domain Name System (DNS) name label so your application is reachable at customlabel.azureregion.azurecontainer.io.
By assigning a public IP address to your container, you make it accessible from the outside world. If a container exposes port 80, as in the previous image, that port is connected to a public IP address that accepts traffic on port 80 of the virtual host.
✔️ Note: Port mappings are not available in Azure Container Instances, so both the container and the container host must use the same port number.

Custom sizes and resources


Containers are typically optimized to run a single application, but the exact needs of applications can
differ greatly. Azure Container Instances provides optimum utilization by allowing exact specifications of
CPU cores and memory.
You specify the amount of memory in gigabytes for each container, and the CPUs to assign. For com-
pute-intensive jobs such as machine learning, Azure Container Instances can schedule Linux containers to
use NVIDIA Tesla GPU resources (preview).
Because you pay only for what you need and are billed by the second, you can fine-tune your spending based on actual need and avoid paying for resources you don't need or use.

Virtual network deployment (preview)


Currently in preview, this feature of Azure Container Instances enables you to deploy container instances
to an Azure virtual network. By deploying container instances into a subnet within your virtual network,
they can communicate securely with other resources in the virtual network, including those that are
on-premises (through a virtual private network (VPN) gateway or ExpressRoute).

Persistent storage
To retrieve and persist state with Azure Container Instances, use Azure Files shares.
It is possible to run both long-running processes and task-based containers. This is controlled by the
container restart policy.
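A minimal sketch of mounting an Azure Files share into a container instance follows; the storage account, share, and image names are placeholders, and the account and share are assumed to exist already:

# Look up the storage account key for the existing account.
STORAGE_KEY=$(az storage account keys list \
    --resource-group myResourceGroup \
    --account-name mystorageaccount \
    --query "[0].value" --output tsv)

# Create a container that mounts the file share for persistent state,
# restarting only when the container exits with a failure.
az container create \
    --resource-group myResourceGroup \
    --name mycontainer \
    --image microsoft/aci-helloworld \
    --azure-file-volume-account-name mystorageaccount \
    --azure-file-volume-account-key "$STORAGE_KEY" \
    --azure-file-volume-share-name myshare \
    --azure-file-volume-mount-path /aci/data \
    --restart-policy OnFailure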

Container groups
By default, containers are isolated from each other. But what if you need interaction between containers?
To support this kind of scenario, there is the concept of container groups. Containers inside a container
group are deployed on the same machine, and they use the same network. They also share their lifecycle,
meaning all containers in the group are started and stopped together.
Containers are always part of a container group. Even if you deploy a single container, it will be placed
into a new group automatically. When using Windows containers, a group can have only one container.
This is because network namespaces are not available on the Windows operating system.
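Multi-container groups are typically described in a YAML file and deployed in one step. A minimal sketch, assuming hypothetical image names and a Linux container group:

# container-group.yaml describes two containers deployed as one group.
cat > container-group.yaml <<'EOF'
apiVersion: 2018-10-01
location: westeurope
name: mygroup
properties:
  containers:
  - name: app
    properties:
      image: myregistry.azurecr.io/myapp:latest
      resources:
        requests:
          cpu: 1
          memoryInGB: 1.5
  - name: sidecar
    properties:
      image: myregistry.azurecr.io/mysidecar:latest
      resources:
        requests:
          cpu: 0.5
          memoryInGB: 0.5
  osType: Linux
type: Microsoft.ContainerInstance/containerGroups
EOF

# Deploy the whole group with a single CLI call.
az container create --resource-group myResourceGroup --file container-group.yaml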

Usage scenario
Azure Container Instances is a recommended compute option for any scenario that can operate in
isolated containers, such as simple applications, task automation, and build jobs. For scenarios requiring
full container orchestration (including service discovery across multiple containers, automatic scaling, and
coordinated application upgrades) we recommend Azure Kubernetes Service (AKS).

Demonstration-Create a container on ACI


This walkthrough demonstrates how to use the Azure CLI to create a container in Azure and make its application available with an FQDN. A few seconds after you execute a single deployment command, you can browse to the running application.

Prerequisites
●● You require an Azure subscription to perform these steps. If you don't have one you can create one by
following the steps outlined on the Create your Azure free account today24 webpage.

Steps
1. Open Azure Cloud Shell by going to https://shell.azure.com, or by using the Azure portal and selecting Bash as the environment option.

**Note**: You can use a local installation of the Azure CLI if you want, but it must be version 2.0.27 or later.

2. Create a resource group using the following command, substituting your values for the resource
group name and location:
az group create --name myResourceGroup --location eastus

3. Create a container, substituting your values for the resource group name and container name. Ensure the DNS name label is unique. The following command opens port 80 and applies a DNS name label to the container:
az container create --resource-group myResourceGroup --name mycontainer --image microsoft/aci-helloworld --dns-name-label aci-demo --ports 80

4. Verify the container status by running the following command, again substituting your values where
appropriate:
az container show --resource-group myResourceGroup --name mycontainer --query "{FQDN:ipAddress.fqdn,ProvisioningState:provisioningState}" --out table

24 https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio

5. If the container's ProvisioningState is Succeeded, navigate to its FQDN in your browser. If you see a webpage similar to the following example, congratulations! You've successfully deployed an application running in a Docker container to Azure.

✔️ Note: You can check the container in the portal if you want. If you are finished using the resources in Azure, delete them to avoid incurring costs.

Serverless and HPC Compute Services


Serverless computing
Serverless computing is a cloud-hosted execution environment that runs your code, yet abstracts the
underlying hosting environment. You create an instance of the service and you add your code. No
infrastructure configuration or maintenance is required, or even allowed.
You configure your serverless apps to respond to events. An event could be a REST endpoint, a periodic
timer, or even a message received from another Azure service. The serverless app runs only when it's
triggered by an event.
Scaling and performance are managed automatically, and you're billed only for the exact resources you
use. You don't even need to reserve resources.

Serverless definition
The core characteristics that define a serverless service are:
●● Service is consumption based. The service provisions resources on demand, and you only pay for what
you use. Billing is typically calculated by the number of function calls, code execution time, and
memory used. (Supporting services such as networking and storage could be charged separately.)
●● Low management overhead. Because a serverless service is cloud-hosted, you won't need to patch VMs or maintain a burdensome operational workflow. Serverless services provide for the full abstraction of servers, so developers can just focus on their code. There are no distractions around server management, capacity planning, or availability.
●● Auto-scale. Compute execution can be in milliseconds, so it's almost instant. It provides for event-driven scalability. Application components react to events and trigger in near real time with virtually unlimited scalability.

Benefits of the serverless computing model


The benefits of serverless computing are:
●● Efficiency:
●● Serverless computing can result in a shorter time to market for the product, as developers can focus more on their applications and customer value.
●● Fixed costs are converted to variable costs, and you are only paying for what is consumed.
●● Cost savings are realized by the variable costs model.
●● Focus:
●● You can focus on solving business problems, and not on allocating time to defining and carrying
out operational tasks such as VM management.
●● Developers can focus on their code. There are no distractions around server management, capacity
planning, or availability.
●● Flexibility:
●● Serverless computing provides a simplified starting experience.
●● Easier pivoting means more flexibility.

●● Experimentation is easier as well.
●● You can scale at your pace.
●● Serverless computing is a natural fit for microservices.

Serverless Azure services


Some of the serverless services in Azure are listed in the following table.

| Azure service | Functionality |
| --- | --- |
| Azure Event Grid | Manage all events that can trigger code or logic |
| Azure Functions | Execute code based on events you specify |
| Azure Automation | Automate tasks across Azure and hybrid environments |
| Azure Logic Apps | Design workflows and orchestrate processes |
The service we're interested in from a DevOps and compute point of view is Azure Functions.

Functions as a service
Function as a service (FaaS) is an industry programming model that uses Functions to help achieve
serverless compute. These functions have the following characteristics:
●● Single responsibility. Functions are single purposed, reusable pieces of code that process an input and
return a result.
●● Short-lived. Functions don't stick around when they've finished executing, which frees up resources
for further executions.
●● Stateless. Functions don't hold any persistent state and don't rely on the state of any other process.
●● Event driven and scalable. Functions respond to predefined events and are instantly replicated as
many times as needed.

Azure Functions
Azure Functions are Azure's implementation of the FaaS programming model, with additional capabili-
ties.

Azure Functions are ideal when you're only concerned with the code running your service and not the
underlying platform or infrastructure. Azure Functions are commonly used when you need to perform
work in response to an event (often via a REST request, timer, or message from another Azure service),
and when that work can be completed quickly, within seconds or less.
Azure Functions scale automatically, and charges accrue only when a function is triggered, so they're a good choice when demand is variable. For example, you might be receiving messages from an Internet of Things (IoT) solution that monitors a fleet of delivery vehicles. You'll likely have more data arriving during business hours. Azure Functions can scale out to accommodate these busier times.
Furthermore, Azure Functions are stateless; they behave as if they're restarted every time they respond to
an event. This is ideal for processing incoming data. And if state is required, they can be connected to an
Azure storage service. See Functions25 for more details.

Azure Functions features


Some key features of Azure Functions are:
●● Choice of language. Write functions using your choice of C#, F#, or JavaScript.
●● Pay-per-use pricing model. Pay only for the time spent running your code.
●● Bring your own dependencies. Functions support NuGet and NPM, so you can use your favorite
libraries.
●● Integrated security. Protect HTTP-triggered functions with OAuth providers such as Azure AD, Face-
book, Google, Twitter, and your Microsoft Account.
●● Simplified integration. Easily leverage Azure services and software-as-a-service (SaaS) offerings.
●● Flexible development. Code your functions directly in the portal, or set up continuous integration and
deploy your code through GitHub, Azure DevOps Services, and other supported development tools.
●● Open-source. The Functions runtime is open-source and available on GitHub.
You can download the free eBook, Azure Serverless Computing cookbook, from the Azure Serverless
Computing Cookbook26 webpage.

Demonstration-Create Azure Function using Azure CLI


This walkthrough shows how to create a function from the command line or terminal. You use the Azure
CLI to create a function app, which is the serverless infrastructure that hosts your function. The function
code project is generated from a template by using the Azure Functions Core Tools, which is also used to
deploy the function app project to Azure.

Prerequisites
●● Use Azure Cloud Shell.
●● You require an Azure subscription to perform these steps. If you don't have one, you can create one
by following the steps outlined on the Create your Azure free account today27 webpage.

Steps
1. Open Azure Cloud Shell by going to https://shell.azure.com, or via the Azure Portal and selecting Bash
as the environment option.

25 https://azure.microsoft.com/en-us/services/functions/
26 https://azure.microsoft.com/en-us/resources/azure-serverless-computing-cookbook/
27 https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio

2. Create the local function app project by running the following command from the command line to
create a function app project in the MyFunctionProj folder of the current local directory. A Git repo is also created in MyFunctionProj:
func init MyFunctionProj

When prompted, select a worker runtime from the following language choices:
- dotnet. Creates a .NET class library project (.csproj).
- node. Creates a JavaScript project.
3. Use the following command to navigate to the new MyFunctionProj project folder:
cd MyFunctionProj

4. Create a function using the following command, which creates an HTTP-triggered function named
MyHttpTrigger:
func new --name MyHttpTrigger --template "HttpTrigger"

5. Update the function. By default, the template creates a function that requires a function key when
making requests. To make it easier to test the function in Azure, you need to update the function to
allow anonymous access. The way that you make this change depends on your functions project
language. For C#:
●● Open the MyHttpTrigger.cs code file for your new function. Update the AuthorizationLevel attribute in the function definition to a value of Anonymous, as follows, and save your changes:
[FunctionName("MyHttpTrigger")]
public static IActionResult Run([HttpTrigger(AuthorizationLevel.Anonymous,
"get", "post", Route = null)]HttpRequest req, ILogger log)

6. Run the function locally. The following command starts the function app, which runs using the same
Azure Functions runtime that is in Azure:
func host start --build

The --build option is required to compile C# projects.


7. Confirm the output. When the Functions host starts, it writes something similar to the following output, which has been truncated for readability:

%%%%%%
%%%%%%
@ %%%%%% @
@@ %%%%%% @@
@@@ %%%%%%%%%%% @@@
@@ %%%%%%%%%% @@
@@ %%%% @@
@@ %%% @@
@@ %% @@
%%
%

...

Content root path: C:\functions\MyFunctionProj


Now listening on: http://0.0.0.0:7071
Application started. Press Ctrl+C to shut down.

...

Http Functions:

HttpTrigger: http://localhost:7071/api/MyHttpTrigger

[8/27/2018 10:38:27 PM] Host started (29486ms)


[8/27/2018 10:38:27 PM] Job host started

●● Copy the URL of your HttpTrigger function from the runtime output, and paste it into your browser's
address bar. Append the query string ?name=yourname to this URL, and execute the request. The
following image is the response in the browser to the GET request returned by the local function.

Now that you have run your function locally, you can create the function app and other required resourc-
es in Azure.
8. Create a resource group. An Azure resource group is a logical container into which Azure resources such as function apps, databases, and storage accounts are deployed and managed. Create the resource group by using the az group create command:
az group create --name myResourceGroup --location westeurope

9. Create an Azure Storage account. Functions uses a general-purpose account in Azure Storage to
maintain state and other information about your functions. Create a general-purpose storage account
in the resource group you created by using the az storage account create command.
In the following command, substitute a globally unique storage account name where you see the
<storage_name> placeholder. Storage account names must be between 3 and 24 characters in length,
and can contain numbers and lowercase letters only:
az storage account create --name <storage_name> --location westeurope --resource-group myResourceGroup --sku Standard_LRS

After the storage account has been created, the Azure CLI shows information similar to the following
example:
{
  "creationTime": "2017-04-15T17:14:39.320307+00:00",
  "id": "/subscriptions/bbbef702-e769-477b-9f16-bc4d3aa97387/resourceGroups/myresourcegroup/...",
  "kind": "Storage",
  "location": "westeurope",
  "name": "myfunctionappstorage",
  "primaryEndpoints": {
    "blob": "https://myfunctionappstorage.blob.core.windows.net/",
    "file": "https://myfunctionappstorage.file.core.windows.net/",
    "queue": "https://myfunctionappstorage.queue.core.windows.net/",
    "table": "https://myfunctionappstorage.table.core.windows.net/"
  },
  ....
  // Remaining output has been truncated for readability.
}

10. Create a function app. You must have a function app to host the execution of your functions. The
function app provides an environment for serverless execution of your function code. It lets you group
functions as a logic unit for easier management, deployment, and sharing of resources. Create a
function app by using the az functionapp create command.
In the following command, substitute a unique function app name where you see the <app_name>
placeholder, and the storage account name for <storage_name>. The <app_name> is used as the default
DNS domain for the function app, and so the name needs to be unique across all apps in Azure. You
should also set the <language> runtime for your function app, from dotnet (C#) or node (JavaScript).
az functionapp create --resource-group myResourceGroup --consumption-plan-location westeurope \
--name <app_name> --storage-account <storage_name> --runtime <language>

Setting the consumption-plan-location parameter means that the function app is hosted in a Consumption
hosting plan. In this serverless plan, resources are added dynamically as required by your functions,
and you only pay when functions are running.
After the function app has been created, the Azure CLI shows information similar to the following
example:
{
  "availabilityState": "Normal",
  "clientAffinityEnabled": true,
  "clientCertEnabled": false,
  "containerSize": 1536,
  "dailyMemoryTimeQuota": 0,
  "defaultHostName": "quickstart.azurewebsites.net",
  "enabled": true,
  "enabledHostNames": [
    "quickstart.azurewebsites.net",
    "quickstart.scm.azurewebsites.net"
  ],
  ...
  // Remaining output has been truncated for readability.
}

11. Deploy the function app project to Azure. After the function app is created in Azure, you can use the
func azure functionapp publish command to deploy your project code to Azure:
func azure functionapp publish <FunctionAppName>

You'll see something like the following output, which has been truncated for readability.
Getting site publishing info...
Preparing archive...
Uploading content...
Upload completed successfully...
Deployment completed successfully...
Syncing triggers...

You are now ready to test your functions in Azure.


12. Test the function. Use cURL to test the deployed function on a Mac or Linux computer, or by using Bash
on Windows. Execute the following cURL command, replacing the <app_name> placeholder with
the name of your function app. Append the query string ?name=<yourname> to the URL:
curl https://<app_name>.azurewebsites.net/api/MyHttpTrigger?name=<yourname>

✔️ Note: Remember to delete the resources if you are no longer using them.
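If you prefer to clean up with PowerShell rather than the Azure CLI, a minimal sketch using the Az module (the resource group name matches the one created earlier) is:

# Deletes the resource group and everything deployed into it; -Force skips the confirmation prompt
Remove-AzResourceGroup -Name myResourceGroup -Force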

Batch services
Azure Batch is a fully managed cloud service that provides job scheduling and compute resource
management. It creates and manages a pool of compute nodes (VMs), installs the applications you want
to run, and schedules jobs to run on the nodes. It enables applications, algorithms, and computationally
intensive workloads to be broken into individual tasks that run easily and efficiently in
parallel at scale.
Using Azure Batch, there is no cluster or job scheduler software to install, manage, or scale. Instead, you
use Batch APIs and tools, command-line scripts, or the Azure portal to configure, manage, and monitor
your jobs.

Usage scenarios and application types


The following topics are just some of the usage scenarios for Azure Batch.

Independent/parallel
This is the most commonly used scenario. The applications or tasks do not communicate with each other.
Instead, they operate independently. The more VMs or nodes you can bring to a task, the quicker it will
complete. Examples of usage would be Monte Carlo risk simulations, transcoding, and rendering a movie
frame by frame.

Tightly coupled
In traditional high-performance computing (HPC) workloads, such as scientific or engineering
tasks, applications or tasks communicate with each other. They would typically use the Message Passing
Interface (MPI) API for this inter-node communication. However, they can also use low-latency,
high-bandwidth Remote Direct Memory Access (RDMA) networking. Examples of usage would be car
crash simulations, fluid dynamics, and Artificial Intelligence (AI) training frameworks.

Multiple tightly coupled in parallel


You can also expand on this tightly coupled MPI scenario. For example, instead of having four nodes
carrying out a job, you can have 40 nodes and run the job 10 times in parallel to scale out the job task.

Batch Service components


Batch service primarily consists of two components:
●● Resource management. Batch service manages resources by creating, managing, monitoring, and
scaling the pool (or pools) of VMs, which are required to run the application. You can scale from a few
VMs up to tens of thousands of VMs, enabling you to run the largest, most resource-intensive
workloads. Furthermore, no on-premises infrastructure is required.
●● Job Scheduler. Batch service provides a job scheduler. You submit your work via jobs, which are
effectively a series of tasks, and the scheduler assigns the individual tasks to the VM pool (or set of VM pools).

Running an application
To get an application to run, you must have the following items:
●● An application. This could just be a standard desktop application; it doesn't need to be cloud aware.
●● Resource management. You need a pool of VMs, which Batch service creates, manages, monitors, and
scales.
●● A method to get the application onto the VMs. You can:

●● Store the application in blob storage, and then copy it onto each VM.
●● Have a container image and deploy it.
●● Upload a zip or application package.
●● Create a custom VM image, then upload and use that.
●● Job scheduler. Create and define the tasks that will combine to make the job.
●● Output storage. You need somewhere to place the output data; typically, Blob storage is used.
✔️ Note: The unit of execution is what can be run on the command line in the VM. The application itself
does not need to be repackaged.
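To make these moving parts concrete, the following is a minimal, hedged PowerShell sketch of creating a pool, a job, and a task with the Az.Batch module. The account, pool, job, and task names are placeholders, and the marketplace image reference is only one example of a supported image:

# Get an authenticated context for an existing Batch account
$context = Get-AzBatchAccountKey -AccountName "mybatchaccount"

# Describe the VM image and Batch agent SKU for the pool's nodes
$imageRef = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSImageReference" `
    -ArgumentList @("WindowsServer", "MicrosoftWindowsServer", "2019-Datacenter", "latest")
$vmConfig = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSVirtualMachineConfiguration" `
    -ArgumentList @($imageRef, "batch.node.windows amd64")

# Create a small pool of dedicated compute nodes (resource management)
New-AzBatchPool -Id "mypool" -VirtualMachineSize "Standard_D2s_v3" `
    -VirtualMachineConfiguration $vmConfig -TargetDedicatedComputeNodes 2 -BatchContext $context

# Create a job bound to the pool, then add a task (job scheduling);
# the command line is the unit of execution on each node
$poolInfo = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSPoolInformation"
$poolInfo.PoolId = "mypool"
New-AzBatchJob -Id "myjob" -PoolInformation $poolInfo -BatchContext $context
New-AzBatchTask -JobId "myjob" -Id "task1" -CommandLine "cmd /c echo Hello from Batch" -BatchContext $context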

Cost
Batch services provide:
●● The ability to scale VMs as needed.
●● The ability to increase and decrease resources on demand.
●● Efficiency, as it makes best use of the resources.
●● Cost effectiveness, because you only pay for the infrastructure you use when you are using it.
✔️ Note: There is no additional charge for using a Batch service. You only pay for the underlying resources consumed, such as the VMs, storage, and networking.

Azure Service Fabric


Azure Service Fabric Overview
Azure Service Fabric is a distributed systems platform that makes it easier to package, deploy, and
manage scalable and reliable containerized microservice applications. Its two primary functions are
providing for:
●● Microservices applications
●● Container orchestration
Developers and administrators can avoid complex infrastructure problems, and instead focus on implementing mission-critical workloads.
Service Fabric is designed for modern, cloud-native applications. It represents the next-generation platform for building and managing these enterprise-class, tier-1, cloud-scale applications running in containers.

Where and what can Service Fabric run?


Service Fabric runs everywhere. You can create clusters for Service Fabric in many environments, including
Azure or on premises, on both Windows Server and Linux operating systems. You can even create clusters
on other public clouds. In addition, the development environment in the SDK is identical to the production
environment, with no emulators involved. In other words, what runs on your local development
cluster deploys to the clusters in other environments.

Applications and services


Service Fabric enables you to build and manage scalable and reliable applications. It is composed of
microservices that run at high density on a shared pool of machines, which is referred to as a cluster.
It provides a sophisticated, lightweight runtime for building distributed, scalable, stateless and stateful
microservices running in containers. It also provides comprehensive application management capabilities
to provision, deploy, monitor, upgrade/patch, and delete deployed applications, including containerized
services.

Key capabilities
By using Service Fabric, you can:
●● Deploy to Azure or to on-premises datacenters running Windows or Linux operating systems, with
zero code changes. You write once, and then deploy anywhere to any Service Fabric cluster.
●● Develop scalable applications composed of microservices by using the Service Fabric programming
models, containers, or any code.
●● Develop highly reliable stateless and stateful microservices. Simplify the design of your application by
using stateful microservices.
●● Use the Reliable Actors programming model to create cloud objects with self-contained code and
state.
●● Deploy and orchestrate containers that include Windows containers and Linux containers. Service
Fabric is a data-aware, stateful container orchestrator.
●● Deploy applications in seconds at high density, with hundreds or thousands of applications or containers per machine.
●● Deploy different versions of the same application side by side, and upgrade each application independently.
●● Manage the lifecycle of your applications without any downtime, including breaking and nonbreaking
upgrades.
●● Scale out or scale in the number of nodes in a cluster. As you scale nodes, your applications automatically scale.
●● Monitor and diagnose the health of your applications and set policies for performing automatic
repairs.
●● Watch the resource balancer orchestrate the redistribution of applications across the cluster. Service
Fabric recovers from failures and optimizes the distribution of load based on available resources.
✔️ Note: Service Fabric is currently undergoing a transition to open development. The goal is to move
the entire build, test, and development process to GitHub. You can view, investigate, and contribute on
the Service Fabric GitHub repository28 page. There are also many sample files and scenarios
to help in deployment and configuration.

Application Model
Service Fabric applications consist of one or more services that work together to automate business
processes. A service is an executable that runs independently of other services, and is composed of code,
configuration, and data. Each element is separately versionable and deployable.

28 https://github.com/Microsoft/service-fabric/

Application and service types

Creating an application instance requires an application type, which is the template that specifies which
services are part of the application. This concept is similar to object-oriented programming. The application
type is comparable to a class definition, and the application is comparable to the instance. You can
create multiple named application instances from one application type.
The same concept applies to services. The service type defines the code and configuration for the service
and the endpoints that the service uses for interaction. You can create multiple service instances by using
one service type. An application specifies how many instances of a service type should be created.
Both application type and service type are described through XML files. Every element of the application
model is independently versionable and deployable.
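For illustration, a trimmed-down sketch of what such an XML description might look like follows; the type names, versions, and instance count are placeholder values, not part of any real application:

<ApplicationManifest ApplicationTypeName="MyAppType"
                     ApplicationTypeVersion="1.0.0"
                     xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <!-- Imports the separately versioned service manifest (code, configuration, data) -->
  <ServiceManifestImport>
    <ServiceManifestRef ServiceManifestName="MyServicePkg" ServiceManifestVersion="1.0.0" />
  </ServiceManifestImport>
  <!-- Declares a named service instance created from the service type -->
  <DefaultServices>
    <Service Name="MyService">
      <StatelessService ServiceTypeName="MyServiceType" InstanceCount="3">
        <SingletonPartition />
      </StatelessService>
    </Service>
  </DefaultServices>
</ApplicationManifest>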

Resource balancing

While applications are running in Azure Service Fabric, they are constantly monitored for health. Service
Fabric ensures that services keep running well and that the available server resources are used optimally.
This means that sometimes services are moved from busy nodes to less busy nodes to keep overall
resource consumption well balanced. For example, consider an imbalanced cluster in which Node 2 hosts
three services, while Node 1 and Nodes 3-5 are empty. Service Fabric will detect this situation and
resolve it.

After Service Fabric completes the balancing operation, every node runs one service.
In reality, each node will likely run many services. Because every service is different, it's usually possible to
combine services on one node to make optimal use of the server's resources. Service Fabric does all of
this automatically.

Programming Models
When developing applications for use on Service Fabric, there are a number of options available.

Windows
Azure Service Fabric comes with an SDK and development tool support. You can develop services for
Windows clusters in C# using Microsoft Visual Studio 2015 or Visual Studio 2017.

Linux
Developing Java-based services for Linux clusters is probably easiest by using Eclipse Neon29. However,
it's also possible to program in C# using .NET Core and Visual Studio Code.

Programming models
You can choose from four different programming models to create a Service Fabric application:
●● Reliable Services. Reliable Services is a framework you can use to create services that use specific
features that Service Fabric provides. One important feature is a distributed data storage mechanism.
Others are custom load and health reporting, and automatic endpoint registration. These enable
discoverability and interaction between services.
●● There are two distinct types of Reliable Services that you can create:
●● Stateless services are intended to perform operations that don't require keeping an internal
state. Examples are services that host ASP.NET Web APIs, or services that autonomously
process items read from a Service Bus Queue.
●● Stateful services keep an internal state, which is automatically stored redundantly across
multiple nodes for availability and error recovery. The data stores are called Reliable Collections.
●● Reliable Actors. Reliable Actors is a framework built on top of Reliable Services, which implements the
Virtual Actor design pattern. An Actor encapsulates a small piece of state and behavior. One example
is Digital Twins, in which an Actor represents the state and the abilities of a device in the real
world; many IoT applications use the Actor model this way. The state of an
Actor can be volatile, or it can be kept in the distributed store. This store can be memory-based or on
a disk.
●● Guest executables. You can also package and run existing applications as a Service Fabric (stateless)
service. This makes applications highly available. The platform ensures that the instances of an
application are running. You can also upgrade applications with no downtime. If problems are
reported during an upgrade, Service Fabric can automatically roll back the deployment. Service Fabric
also enables you to run multiple applications together in a cluster, which reduces the need for
hardware resources.
✔️ Note: When using Guest executables, you cannot use some of the platform capabilities (such as the
Reliable Collections).
●● Containers. You can run Containers in a similar way as running guest executables. What’s different is
that Service Fabric can restrict resource consumption (CPU and memory, for example) per container.
Limiting resource consumption per service enables you to achieve even higher densities on your
cluster.

29 https://www.eclipse.org/neon/

Scaling Azure Clusters


Before we talk about scaling, let's briefly explain some concepts:
●● Scaling in and out is the process of removing and adding nodes to the cluster.
●● Scaling up and down is the process of changing the size (SKU) of the VMs that make up the cluster.
●● Node types are one or more individual VM scale sets that make up the cluster.
●● Fault domain is a hierarchical structure of infrastructure levels in which faults can occur. For example,
faults can happen on disk drives, machines, power supplies, or server racks, and in entire data centers.
To create a system that is highly available, you must consider these fault domains. Having two nodes
that share a fault domain means they share a single point of failure.
●● Upgrade domain is a policy that describes groups of nodes that will be upgraded simultaneously.
Nodes in different upgrade domains will not be upgraded at the same time. This means that upgrade
domains are useful when performing rolling upgrades of your software. One by one, every upgrade
domain will be processed.


For example, imagine six nodes where node pairs one and two, three and four,
and five and six each share a server rack, which means that they share a fault domain. Then, by policy,
node pairs one and six, two and three, and four and five were put in upgrade domains. This means
that changes to these node pairs are applied simultaneously. This includes changes to the cluster
software and to running services.
By upgrading two nodes at the same time, upgrades complete quickly. By adding more upgrade domains,
your upgrades become more granular and, because of that, have a lower impact. The most commonly
used setup is to have one upgrade domain for one fault domain.


This means that upgrades are applied to one node at a time. When services
are deployed to both multiple fault domains and correctly configured upgrade domains, they are able
to manage node failures, even during upgrades.

Boundaries
You can grow or shrink the number of servers in the cluster by changing the size of a VM scale
set. However, there are some restrictions that apply.

Lower boundary
Earlier in this module, you learned that one of the platform services of Service Fabric is a distributed data
store. This means that data stored on one node is replicated to a quorum (majority) of secondary nodes.
To work properly, you'll need to have multiple healthy nodes in your cluster. The precise number needed
depends on the desired reliability of your services.

Upper boundary in Azure


When using existing platform images, VM scale sets are limited to 1,000 VMs. In Azure, Service Fabric can
scale up to 100 nodes in each scale set.

Durability levels in Azure


Whether Service Fabric can automatically manage infrastructural changes depends on the cluster's
configured durability level. The durability level indicates the level of privileges Service Fabric has to
influence Azure infrastructural operations on the cluster's underlying VMs.
There are three durability levels, as exhibited in the following table.

| Durability tier | Required minimum number of VMs | Supported VM SKUs | Updates you make to your virtual machine scale set | Updates and maintenance initiated by Azure |
| --- | --- | --- | --- | --- |
| Gold | 5 | Full-node SKUs dedicated to a single customer (for example, L32s, GS5, G5, DS15_v2, D15_v2) | Can be delayed until approved by the Service Fabric cluster | Can be paused for 2 hours per update domain to allow additional time for replicas to recover from earlier failures |
| Silver | 5 | VMs of single core or above | Can be delayed until approved by the Service Fabric cluster | Cannot be delayed for any significant period of time |
| Bronze | 1 | All | Will not be delayed by the Service Fabric cluster | Cannot be delayed for any significant period of time |

Adding or changing nodes in Azure


You can extend your Azure cluster capacity simply by increasing the VM scale set capacity. You can do
this by using PowerShell, Azure Resource Manager templates, or code. Changing the SKU influences
all cluster nodes, so whether you can do this depends on the configured durability level. With the Silver or
Gold levels you can do this without downtime, whereas with the Bronze level you'll have some downtime
because all VMs upgrade simultaneously.
Unlike Azure Container Service (ACS) clusters, you can configure Service Fabric cluster scale sets to scale
automatically based on performance counter metrics.
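As an illustration, a minimal PowerShell sketch of increasing the scale set capacity with the Az.Compute module follows; the resource group and scale set names are placeholders:

# Fetch the scale set that backs the Service Fabric node type
$vmss = Get-AzVmss -ResourceGroupName "myResourceGroup" -VMScaleSetName "nt1vm"

# Raise the instance count; Service Fabric detects and configures the new nodes automatically
$vmss.Sku.Capacity = 6
Update-AzVmss -ResourceGroupName "myResourceGroup" -VMScaleSetName "nt1vm" -VirtualMachineScaleSet $vmss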

Adding nodes in standalone clusters


Scaling up a standalone cluster must also be done using PowerShell scripts. Running the AddNode.ps1
script on a prepared server will add it to an existing cluster.
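A hedged example invocation is shown below; the node name, addresses, and domain values are placeholders that must match your own cluster layout:

# Joins a prepared server to an existing standalone cluster
.\AddNode.ps1 -NodeName VM5 -NodeType NodeType0 -NodeIPAddressorFQDN 182.17.34.52 `
    -ExistingClientConnectionEndpoint 182.17.34.50:19000 -UpgradeDomain UD1 -FaultDomain fd:/dc1/r0 -AcceptEULA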

Removing nodes in Azure


Although adding nodes to an Azure cluster is a matter of increasing the scale set capacity, removing
them can be more complicated. Removing nodes from a cluster has implications for the distributed data
store that contains the Reliable Collections of your services. Decreasing the instance count of your VM
scale set results in the removal of cluster nodes. The impact of this depends on the durability level.

If you're using the Bronze durability level, you must notify Service Fabric beforehand of your intention to
remove a node. This instructs Service Fabric to move services and data away from the node. In other
words, it drains the node.
Next, you need to remove that node from the cluster. You must run the PowerShell cmdlet Disable-ServiceFabricNode for each node that you want to remove, and wait for Service Fabric to complete the
operation.
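A minimal sketch of draining a node this way (the node name is a placeholder):

# Tells Service Fabric to move services and data off the node before it is removed
Disable-ServiceFabricNode -NodeName "_nt1vm_3" -Intent RemoveNode -Force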

If you don't properly remove the node from the cluster, Service Fabric will assume that the node has
simply failed and will expect it to return later, reporting it as having the status Down.

Of course, you can also remove nodes using code.

Removing nodes from standalone clusters


Removing nodes from a standalone cluster is somewhat different from the process in Azure clusters. Instead of executing Disable-ServiceFabricNode, you execute the script RemoveNode.ps1 on the server that you want to
remove.

Create Clusters anywhere


You can create Service Fabric clusters in many environments:
●● Windows operating system
●● Linux operating system
●● On premises
●● In the cloud
●● On one machine
●● On multiple servers
There are two distinct versions of the Service Fabric binaries, one that runs on the Windows operating
system and one for Linux. Both are restricted to the 64-bit platform.
For more details see the Create Service Fabric clusters on Windows Server or Linux30 page.

Windows Server
You can deploy a cluster manually by using a set of PowerShell tools on a prepared group of servers
running Windows Server 2016 or Windows Server 2019. This approach is called a Service Fabric
standalone cluster.
It's also possible to create a cluster in Azure. You can do this using the Azure Portal, or by using an Azure
Resource Manager template. This will create a cluster and everything that's needed to run applications on
it.

30 https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-deploy-anywhere

Finally, there is an option to create a cluster specifically for development use that runs on just one
machine simulating multiple servers. This is called a local development cluster, and it allows developers to
debug their applications before deploying them to a production cluster.
✔️ Note: It's important to know that for every type of deployment, the actual Service Fabric binaries are
the same. This means that an application that works on a development cluster will also work on an
on-premises or cloud-hosted cluster without requiring modifications to the code. This is similar to the
portability that containerization offers.

Linux
At the time of this writing, Service Fabric for Linux has been released. However, it does not yet have
complete feature parity between Windows and Linux. This means that some Windows features are not
available on Linux. For example, you cannot create a standalone cluster on Linux, and all programming
models are in preview (including Java/C# Reliable Actors, Reliable Stateless Services, and Reliable
Stateful Services).
You can create a Linux cluster in Azure and create a local development cluster on the Linux Ubuntu 16.04
and Red Hat Enterprise Linux 7.4 (preview support) operating systems.
For more details, see the Differences between Service Fabric on Linux and Windows31 page.
✔️ Note: Standalone clusters currently aren't supported for Linux. Linux is supported on one-box for
development and Linux virtual machine clusters.

Demonstration: Create a standalone Service Fabric cluster on Windows Server
Creating a standalone Service Fabric cluster requires provisioning the hardware resources up front.
After you have created and configured the required machines, you can use PowerShell to create a cluster.
The required scripts and binaries can be downloaded in a single package, the Service Fabric standalone
package.

✔️ Note: Detailed steps on how to set up a Service Fabric cluster are available on the Create a standalone cluster running on Windows Server32 page.

Prerequisites:
●● You need to download setup files and PowerShell scripts for the Service Fabric standalone package,
which you run to set up a Service Fabric cluster. You can download these from the Download Link -
Service Fabric Standalone Package - Windows Server33 page.

31 https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-linux-windows-differences
32 https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-creation-for-windows-server
33 https://go.microsoft.com/fwlink/?LinkId=730690

✔️ Note: Remember that currently standalone clusters aren't supported for Linux. Linux is supported on
one-box for development and Linux multi-machine clusters on Azure. As such, there is no equivalent
download package for Linux.

Process for creating a standalone server


The following steps broadly apply to all deployments. However, the steps below are completed in the
context of a standalone deployment on Windows Server:
1. Prepare:
●● You need to plan the required amount of server resources. Those servers need to meet the
minimum requirements for Service Fabric.
●● Fault domains and upgrade domains need to be defined.
●● The software prerequisites need to be installed.
●● Ensure you have downloaded the Service Fabric standalone package, which contains scripts and
details that you need to set up a Service Fabric cluster. You can download the package from the
Download Link - Service Fabric Standalone Package - Windows Server34 page.
●● Familiarize yourself with the downloaded files.
●● Several sample cluster configuration files are installed with the setup package that you downloaded.
ClusterConfig.Unsecure.DevCluster.json is the simplest cluster configuration, which is an
unsecure, three-node cluster running on a single computer.
2. Validate:
●● A validation script TestConfiguration.ps1 is provided as part of the package. It has a Best
Practices Analyzer that can validate some of the criteria on your resources. You can validate the
environment before creating the cluster by running the following command:
.\TestConfiguration.ps1 -ClusterConfigFilePath .\ClusterConfig.Unsecure.DevCluster.json

3. Create:
●● A creation script CreateServiceFabricCluster.ps1 is also provided. Running this will create
the entire cluster for you on all designated machines. You can run the following command:
.\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.json -AcceptEULA

4. Connect and Visualize:


●● Connect to the cluster to verify that it is running and available, using the following command:
Connect-ServiceFabricCluster -ConnectionEndpoint <IPAddressOfAMachine>:<ClientConnectionEndpointPort>

For example:
Connect-ServiceFabricCluster -ConnectionEndpoint 192.13.123.2345:19000

34 https://go.microsoft.com/fwlink/?LinkId=730690

●● Service Fabric Explorer is a service that runs in the cluster, which you access using a browser. Open a
browser and go to http://localhost:19000/explorer.

5. Upgrade:
●● You can run the PowerShell cmdlet Start-ServiceFabricClusterUpgrade, which is available on
the nodes as part of the cluster deployment, to upgrade the cluster software to a specific version.
6. Remove:
●● If you need to remove a cluster, run the script RemoveServiceFabricCluster.ps1. This
removes Service Fabric from each machine in the configuration. Use the following command:
# Removes Service Fabric from each machine in the configuration
.\RemoveServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.json -Force

●● To remove Service Fabric from the current machine, run the .\CleanFabric.ps1 script:
# Removes Service Fabric from the current machine
.\CleanFabric.ps1

You'll likely add your own public IP address and load balancer to this cluster, if you need to run services
that are reachable over the internet.

Placement Constraints
You can use Placement constraints to:
●● Isolate workloads from each other.
●● Lift and shift an existing N-tier application into Azure Service Fabric.
●● Run services on specific server configurations.

Placement constraints are put in place in two steps:
1. Add key-value pairs to cluster nodes. You can create a Web and a Worker pool by creating two VM scale
sets in the cluster, marking the VMs in one scale set as:
'NodeType Web'
and the VMs in the other as:
'NodeType Worker'
2. Add constraint statements to your service. Creating a service that must run on the Web pool would
have a constraint statement such as:
'NodeType == Web'
Service Fabric will take care of the rest. Services that were already running will be moved if necessary, and
services will be placed on the proper nodes immediately for new deployments.
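As a hedged illustration, creating a stateless service instance with such a constraint via the Service Fabric PowerShell module might look like the following; the application, service, and type names are placeholders:

# Creates a stateless service that may only be placed on nodes where NodeType equals Web
New-ServiceFabricService -ApplicationName fabric:/MyApp -ServiceName fabric:/MyApp/WebService `
    -ServiceTypeName WebServiceType -Stateless -PartitionSchemeSingleton -InstanceCount 3 `
    -PlacementConstraint "NodeType == Web"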
✔️ Note: It's important to realize that placement constraints restrict Service Fabric in its ability to balance
overall cluster resource consumption. Ensure that your placement constraints are not too restrictive. If
Service Fabric is unable to comply with a placement constraint, your service won't be able to run. Always
create pools of multiple nodes when defining constraints.

Configure Monitoring and Logging Overview


We'll look at some details about OMS and explore more options for logging and analyzing diagnostics.

OMS and Azure Service Fabric

In Microsoft Operations Management Suite (OMS), there is a management solution (or plug-in)
designed specifically for diagnostics on Service Fabric clusters. This solution is called Service
Fabric Analytics.
Adding this to your OMS workspace provides you with a dashboard. In one glance, you'll get an overview
of important issues and cluster and application events. These graphs are based on diagnostics data
gathered from the servers forming the Service Fabric cluster. By using the VM extension Windows Azure
Diagnostics, you install an agent on your VMs that is able to collect and upload this diagnostics data into
a storage account. OMS can access that data to analyze and present the information on the dashboard.

If you want to drill down to the details of the graphs, you can do so by clicking on the items in the table.
This will navigate you to the OMS Log Analytics management solution. By using Log Analytics, you can
view detailed descriptions of all captured diagnostics data.
You can review details of diagnostics events, such as Event Tracing for Windows (ETW), that were generated
by Services and Actors running in Service Fabric. ETW is a high-performance logging system that you can
use for logging information such as errors or diagnostics traces from your application.
OMS has been used for a lot of our diagnostics so far, but there are alternative tools that you can use.

Service Fabric diagnostics alternatives


Logging by using an agent process is generally advisable, as it can keep working even if your service does
not. An in-process approach, by contrast, works well for custom logs created by services that would otherwise
require infrastructure changes to be properly logged. Specifically, all event sources used in code must be
registered on every cluster node to work. The recommended way to do this is to register them in your
Azure Resource Manager template.
If you don't want to register every event provider in your template, using Microsoft Diagnostics EventFlow
for these specific logs might be a good solution. However, it's fine to use a combination of both
methods.

EventFlow
Created by the Microsoft Visual Studio team, EventFlow is an open-source library designed specifically for
in-process log collection. This library enables your services to send logs directly to a central location,
while not relying on an agent such as the Azure Diagnostics extension to do that. This makes sense if
services come and go, or when services need to send their data to varying central locations.
EventFlow does not rely on the Event Tracing for Windows infrastructure; it can send logs to many outputs,
including Application Insights, OMS, Azure Event Hubs, and the console window.
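For orientation, a minimal sketch of an EventFlow configuration file (eventFlowConfig.json) might look like the following; the provider name and instrumentation key are placeholders:

{
  "inputs": [
    { "type": "EventSource", "sources": [ { "providerName": "MyCompany-MyService" } ] }
  ],
  "outputs": [
    { "type": "StdOutput" },
    { "type": "ApplicationInsights", "instrumentationKey": "00000000-0000-0000-0000-000000000000" }
  ],
  "schemaVersion": "2016-08-11"
}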

Backup and recovery


Creating backups requires development effort, and a method to be called within a Reliable Service. This
causes the data to be copied to a local backup folder.
After that, it's possible to copy that data to a central storage location. This location should be in a
different fault domain, for example, in a different cloud. Because data is sharded across partitions, every
primary replica of your Stateful Reliable Services needs to create its own backups. Use the following code
to do this:

public async Task BeginCreateBackup()
{
    var backupDescription = new BackupDescription(BackupOption.Full, PostBackupCallbackAsync);
    await BackupAsync(backupDescription);
}

private async Task<bool> PostBackupCallbackAsync(BackupInfo backupInfo, CancellationToken cancellationToken)
{
    await _centralBackupStore.UploadBackupFolderAsync(backupInfo.Directory, cancellationToken);
    return true;
}

In the code sample above, a backup is created first. After that operation completes, the method PostBackupCallbackAsync will be invoked. In this method, the local backup folder is copied to the central location. The implementation of _centralBackupStore is omitted.

Restoring backups
Restoring backups also requires some development effort. The Reliable Service needs code that Service
Fabric executes in a data-loss situation, such as a node failure, or when data loss is triggered deliberately
through code. After the data loss is triggered, your Reliable Service can access your central storage location
and download the contents to the local backup folder. After that, data can be restored from the local
backup folder. Every running primary replica of your Stateful Reliable Services must restore its own
backups. The following code demonstrates these steps:
public async Task BeginRestoreBackup()
{
    var partitionSelector = PartitionSelector.PartitionKeyOf(Context.ServiceName,
        ((Int64RangePartitionInformation)Partition.PartitionInfo).LowKey);
    var operationId = Guid.NewGuid();

    await new FabricClient(FabricClientRole.Admin).TestManager.StartPartitionDataLossAsync(operationId,
        partitionSelector, DataLossMode.FullDataLoss);
    // Causes OnDataLossAsync to be called.
}

protected override async Task<bool> OnDataLossAsync(RestoreContext restoreCtx, CancellationToken cancellationToken)
{
    string backupFolder = Context.CodePackageActivationContext.WorkDirectory;
    await _centralBackupStore.DownloadBackupFolderAsync(backupFolder, cancellationToken);

    var restoreDescription = new RestoreDescription(backupFolder, RestorePolicy.Force);
    await restoreCtx.RestoreAsync(restoreDescription, cancellationToken);
    return true;
}

In this code sample, backups are retrieved from a central location, and the local folder is used to call
RestoreAsync. The call to OnDataLossAsync can be triggered by calling the following method:
FabricClient.TestManagementClient.StartPartitionDataLossAsync
Alternatively, you can use the following PowerShell cmdlet:
Start-ServiceFabricPartitionDataLoss
Again, the implementation of _centralBackupStore is omitted.
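A hedged example of the PowerShell route follows; the service name and partition selection are placeholders that must match your own service:

# Triggers OnDataLossAsync on the targeted partition so that the restore code runs
Start-ServiceFabricPartitionDataLoss -OperationId (New-Guid) -ServiceName fabric:/MyApp/MyStatefulService `
    -PartitionKindUniformInt64 -PartitionKey "0" -DataLossMode FullDataLoss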

Fire drills
Consider your backup strategy carefully. The amount of data loss that is acceptable differs for every
service. The size of the central store will grow quickly if you create many full backups.
Make sure to gain hands-on experience with creating and restoring backups by practicing it. This way,
you'll know that your solution works, and you won't discover that your backup strategy is insufficient
during a real disaster-recovery situation.

Demonstration: Docker Compose deployment to Service Fabric
Docker Compose is a way to deploy containers to a cluster by means of a declaration of desired state.
It uses a YAML file to describe which containers need to be deployed.

YAML definition
A sample of a YAML definition would be similar to the following code:
version: '3'

services:
  web:
    image: microsoft/iis:nanoserver
    ports:
      - "80:80"

networks:
  default:
    external:
      name: nat

This sample file results in a single container based on the image named microsoft/iis:nanoserver.
We did not specify a container registry to use, so Docker Hub is used by default. The container exposes
IIS at port 80. We connect the container port to host port 80 by specifying 80:80. Finally, we selected
the default nat network to connect containers.

Deploying a Docker Compose file


Use the following steps to deploy a Docker compose file:
1. Connect to the existing cluster using the PowerShell command Connect-ServiceFabricCluster:
Connect-ServiceFabricCluster -ConnectionEndpoint devopsdemowin.westeurope.cloudapp.azure.com:19000 -FindType FindByThumbprint -FindValue B00B6FF39F5A50702AF3493B2C13237E80DE6734 -StoreName My -StoreLocation CurrentUser -X509Credential -ServerCertThumbprint B00B6FF39F5A50702AF3493B2C13237E80DE6734

Note: The values for this command will be different for your own cluster.
2. Deploy the docker-compose.yml file by using the following command:
New-ServiceFabricComposeDeployment -DeploymentName ComposeDemo -Compose x:\docker-compose.yml

3. Use the following command to check the deployment status:
Get-ServiceFabricComposeDeploymentStatus -DeploymentName ComposeDemo

4. When the deployment has finished, you can navigate to your new service by opening a browser and
navigating to the Service Fabric cluster domain name at port 80.
For example: http://devopsdemowin.westeurope.cloudapp.azure.com.
You should see the IIS information page running as a container on your Service Fabric cluster.
5. To remove a deployment, run the following command:
Remove-ServiceFabricComposeDeployment -DeploymentName ComposeDemo

Lab
Deploying a Dockerized Java app to Azure Web
App for Containers
In this lab, Deploying a Dockerized Java app to Azure Web App for Containers35, you will learn:
●● Configuring a CI pipeline to build and publish Docker image
●● Deploying to an Azure Web App for containers
●● Configuring MySQL connection strings in the Web App

35 https://azuredevopslabs.com/labs/vstsextend/dockerjava/

Module Review and Takeaways


Module Review Questions
Multiple choice
Which of the following Azure products provides management capabilities for applications that run across
multiple Virtual Machines, and allows for the automatic scaling of resources, and load balancing of traffic?
†† Azure Service Fabric
†† Virtual Machine Scale Sets
†† Azure Kubernetes Service
†† Virtual Network

Checkbox
Availability sets are made up of which of the following?
(choose two)
†† Update Domains
†† Azure AD Domain Services
†† Fault Domains
†† Event Domains

Dropdown
Complete the following sentence.
Azure App Service is an Azure Platform-as-Service offering that is used for ____________.
†† processing events with serverless code.
†† detecting, triaging, and diagnosing issues in your web apps and services.
†† building, testing, releasing, and monitoring your apps from within a single software application.
†† hosting web applications, REST APIs, and mobile back ends.

Checkbox
Which of the following are features of Web App for Containers?
(choose all that apply)
†† Deploys containerized applications using Docker Hub, Azure Container Registry, or private registries.
†† Incrementally deploys apps into production with deployment slots and slot swaps.
†† Scales out automatically with auto-scale.
†† Uses the App Service Log Streaming feature to allow you to see logs from your application.
†† Supports PowerShell and Win-RM for remotely connecting directly into your containers.

Multiple choice
Which of the following statements is best practice for Azure Functions?
†† Azure Functions should be stateful.
†† Azure Functions should be stateless.

Checkbox
Which of the following features are supported by Azure Service Fabric?
(choose all that apply)
†† Reliable Services
†† Reliable Actor patterns
†† Guest Executables
†† Container processes

Checkbox
Which of the following describe primary uses for Placement Constraints?
(choose all that apply)
†† Isolate workloads from each other
†† Control which nodes in a cluster that a service can run on
†† ‘Lift and shift’ an existing N-tier application into Azure Service Fabric.
†† Describe resources that nodes have, and that services consume, when they are run on a node.

Checkbox
Which of the following are network models for deploying a clusters in Azure Kubernetes Service (AKS)?
(choose two)
†† Basic Networking
†† Native Model
†† Advanced Networking
†† Resource Model

Multiple choice
True or false: containers are a natural fit for an event-driven architecture?
†† True
†† False

Multiple choice
Which of the following cloud service models provides the most control, flexibility, and portability?
†† Infrastructure-as-a-Service (IaaS)
†† Functions-as-a-Service (FaaS)
†† Platform-as-a-Service (PaaS)

Answers
Multiple choice
Which of the following Azure products provides management capabilities for applications that run across
multiple Virtual Machines, and allows for the automatic scaling of resources, and load balancing of
traffic?
†† Azure Service Fabric
■■ Virtual Machine Scale Sets
†† Azure Kubernetes Service
†† Virtual Network
Explanation
Virtual Machine Scale Sets is the correct answer.
All other answers are incorrect.
Azure Service Fabric is for developing microservices and orchestrating containers on Windows or Linux.
Azure Kubernetes Service (AKS) simplifies the deployment, management, and operations of Kubernetes.
Virtual Network is for setting up and connecting virtual private networks.
With Azure VMs, scale is provided for by Virtual Machine Scale Sets (VMSS). Azure VMSS let you create and
manage groups of identical, load balanced VMs. The number of VM instances can increase or decrease
automatically, in response to demand or a defined schedule. Azure VMSS provide high availability to your
applications, and allow you to centrally manage, configure, and update large numbers of VMs. With Azure
VMSS, you can build large-scale services for areas such as compute, big data, and container workloads.
Checkbox
Availability sets are made up of which of the following?
(choose two)
■■ Update Domains
†† Azure AD Domain Services
■■ Fault Domains
†† Event Domains
Explanation
Update Domains and Fault Domains are the correct answers.
Azure AD Domain Services and Event Domains are incorrect answers.
Azure AD Domain Service provides managed domain services to a Windows Server Active Directory in
Azure. An event domain is a tool for managing and publishing information.
Update Domains are a logical section of the datacenter, implemented by software and logic. When a
maintenance event occurs (such as a performance update or critical security patch applied to the host), the
update is sequenced through Update Domains. Sequencing updates by using Update Domains ensures that
the entire datacenter does not fail during platform updates and patching.
Fault Domains provide for the physical separation of your workload across different hardware in the
datacenter. This includes power, cooling, and network hardware that supports the physical servers located in
server racks. If the hardware that supports a server rack becomes unavailable, only that specific rack of
servers would be affected by the outage.

Dropdown
Complete the following sentence.
Azure App Service is an Azure Platform-as-Service offering that is used for ____________.
†† processing events with serverless code.
†† detecting, triaging, and diagnosing issues in your web apps and services.
†† building, testing, releasing, and monitoring your apps from within a single software application.
■■ hosting web applications, REST APIs, and mobile back ends.
Explanation
Hosting web applications, REST APIs, and mobile back ends, is the correct answer.
The other answers are incorrect because:
Processing events with serverless code is performed by Azure Functions.
Detecting, triaging, and diagnosing issues in your web apps and services is performed by Application
Insights.
Building, testing, releasing, and monitoring your apps from within a single software application is performed
by Visual Studio App Center.
Azure App Service is a Platform-as-a-Service offering on Azure for hosting web applications, REST APIs, and
mobile back ends. With Azure App Service you can create powerful cloud apps quickly within a fully
managed platform. You can use Azure App Service to build, deploy, and scale enterprise-grade web, mobile,
and API apps to run on any platform. Azure App Service ensures your applications meet rigorous performance,
scalability, security, and compliance requirements, and benefit from using a fully managed platform
for performing infrastructure maintenance.
Checkbox
Which of the following are features of Web App for Containers?
(choose all that apply)
■■ Deploys containerized applications using Docker Hub, Azure Container Registry, or private registries.
■■ Incrementally deploys apps into production with deployment slots and slot swaps.
■■ Scales out automatically with auto-scale.
■■ Uses the App Service Log Streaming feature to allow you to see logs from your application.
■■ Supports PowerShell and Win-RM for remotely connecting directly into your containers.
Explanation
All of the answers are correct.
Web App for Containers from the Azure App Service allows customers to use their own containers, and
deploy them to Azure App Service as a web app. Similar to the Azure Web App solution, Web App for
Containers eliminates time-consuming infrastructure management tasks during container deployment,
updating, and scaling to help developers focus on coding and getting their apps to their end users faster.
Furthermore, Web App for Containers provides integrated CI/CD capabilities with DockerHub, Azure
Container Registry, and VSTS, as well as built-in staging, rollback, testing-in-production, monitoring, and
performance testing capabilities to boost developer productivity.
For Operations, Web App for Containers also provides rich configuration features so developers can easily
add custom domains, integrate with AAD authentication, add SSL certificates and more — all of which are
crucial to web app development and management. Web App for Containers provides an ideal environment
to run web apps that do not require extensive infrastructure control.

Multiple choice
Which of the following statements is best practice for Azure Functions?
†† Azure Functions should be stateful.
■■ Azure Functions should be stateless.
Explanation
Azure Functions should be stateless is the correct answer.
Azure Functions should be stateful is an incorrect answer.
Azure Functions are an implementation of the Functions-as-a-Service programming model on Azure, with
additional capabilities. It is best practice to ensure that your functions are as stateless as possible. Stateless
functions behave as if they have been restarted, every time they respond to an event. You should associate
any required state information with your data instead. For example, an order being processed would likely
have an associated state member. A function could process an order based on that state, update the data as
required, while the function itself remains stateless. If you require stateful functions, you can use the Durable
Functions Extension for Azure Functions or output persistent data to an Azure Storage service.
Checkbox
Which of the following features are supported by Azure Service Fabric?
(choose all that apply)
■■ Reliable Services
■■ Reliable Actor patterns
■■ Guest Executables
■■ Container processes
Explanation
All of the answers are correct.
Reliable Services is a framework for creating services that use specific features provided by Azure Service
Fabric. The two distinct types of Reliable Services you can create are stateless services and stateful services.
Reliable Actors is a framework built on top of Reliable Services which implements the Virtual Actors design
pattern. An Actor encapsulates a small piece of a state or behavior. The state of an Actor can be volatile, or
it can be kept persistent in a distributed store. This store can be memory-based or on a disk.
Guest Executables are existing applications that you package and run as Service Fabric services (stateless).
This makes the applications highly available, as Service Fabric keeps the instances of your applications
running. Applications can be upgraded with no downtime, and Service Fabric can automatically roll back
deployments if needed.
Containers can be run in a way that is similar to running guest executables. Furthermore, with containers,
Service Fabric can restrict resource consumption per container (by CPU processes or memory usage, for
example). Limiting resource consumption per service allows you to achieve higher densities on your cluster.

Checkbox
Which of the following describe primary uses for Placement Constraints?
(choose all that apply)
■■ Isolate workloads from each other
■■ Control which nodes in a cluster that a service can run on
■■ ‘Lift and shift’ an existing N-tier application into Azure Service Fabric.
†† Describe resources that nodes have, and that services consume, when they are run on a node.
Explanation
The correct answers are: Isolate workloads from each other, control which nodes in a cluster that a service
can run on, and ‘Lift and shift’ an existing N-tier application into Azure Service Fabric.
Describe resources that nodes have, and that services consume, when they are run on a node, is an
incorrect answer. Metrics are used to describe resources that nodes have, and services consume, when they are
run on a node.
Placement Constraints can control which nodes in a cluster that a service can run on. You can define any
set of properties by node type, and then set constraints for them. Placement Constraints are primarily used
to: Isolate workloads from each other; 'Lift and shift' an existing N-tier application into Azure Service Fabric;
Run services on specific server configurations.
Placement Constraints can restrict Service Fabric's ability to balance overall cluster resource consumption.
Make sure that your Placement Constraints are not too restrictive. Otherwise, if Service Fabric cannot
comply with a Placement Constraint, your service will not run.
Checkbox
Which of the following are network models for deploying a clusters in Azure Kubernetes Service (AKS)?
(choose two)
■■ Basic Networking
†† Native Model
■■ Advanced Networking
†† Resource Model
Explanation
Basic Networking and Advanced Networking are correct answers.
Native Model and Resource Model are incorrect answers because these are two deployment models
supported by Azure Service Fabric.
In AKS, you can deploy a cluster to use either Basic Networking or Advanced Networking. With Basic
Networking, the network resources are created and configured as the AKS cluster is deployed. Basic
Networking is suitable for small development or test workloads, as you don't have to create the virtual network
and subnets separately from the AKS cluster. Simple websites with low traffic, or lift-and-shift workloads
into containers, can also benefit from the simplicity of AKS clusters deployed with Basic Networking.
With Advanced Networking, the AKS cluster is connected to existing virtual network resources and
configurations. Advanced Networking allows for the separation of control and management of resources. When you
use Advanced Networking, the virtual network resource is in a separate resource group from the AKS cluster.
For most production deployments, you should plan for and use Advanced Networking.

Multiple choice
True or false: containers are a natural fit for an event-driven architecture?
†† True
■■ False
Explanation
False is the correct answer.
True is an incorrect answer.
Architecture styles don't require the use of particular technologies, but some technologies are well-suited for
certain architectures. For example, containers are a natural fit for microservices, and an event-driven
architecture is generally best suited to IoT and real-time systems.
An N-tier architecture model is a natural fit for migrating existing applications that already use a layered
architecture.
A Web-queue-worker architecture model is suitable for relatively simple domains with some resource-intensive
tasks.
The CQRS architecture model makes the most sense when it's applied to a subsystem of a larger
architecture.
A Big data architecture model divides a very large dataset into chunks, performing paralleling processing
across the entire set, for analysis and reporting.
Finally, the Big compute architecture model, also called high-performance computing (HPC), makes parallel
computations across a large number (thousands) of cores.
Multiple choice
Which of the following cloud service models provides the most control, flexibility, and portability?
■■ Infrastructure-as-a-Service (IaaS)
†† Functions-as-a-Service (FaaS)
†† Platform-as-a-Service (PaaS)
Explanation
Infrastructure-as-a-Service (IaaS) is the correct answer.
Functions-as-a-Service (FaaS) and Platform-as-a-Service (PaaS) are incorrect answers.
Of the three cloud service models mentioned, IaaS provides the most control, flexibility, and portability.
FaaS provides simplicity, elastic scale, and potential cost savings, because you pay only for the time your
code is running. PaaS falls somewhere between the two.
Module 17 Create and Manage Kubernetes Service Infrastructure

Module Overview
Module Overview
As most modern software developers can attest, containers have provided engineering teams with
dramatically more flexibility for running cloud-native applications on physical and virtual infrastructure.
Containers package up the services comprising an application and make them portable across different
compute environments, for both dev/test and production use. With containers, it's easy to quickly ramp up
application instances to match spikes in demand. And because containers draw on resources of the host
OS, they are much lighter weight than virtual machines. This means containers make highly efficient use
of the underlying server infrastructure.
So far so good. But though the container runtime APIs are well suited to managing individual containers,
they’re woefully inadequate when it comes to managing applications that might comprise hundreds of
containers spread across multiple hosts. Containers need to be managed and connected to the outside
world for tasks such as scheduling, load balancing, and distribution, and this is where a container orches-
tration tool like Kubernetes comes into its own.
An open source system for deploying, scaling, and managing containerized applications, Kubernetes
handles the work of scheduling containers onto a compute cluster and manages the workloads to ensure
they run as the user intended. Instead of bolting on operations as an afterthought, Kubernetes brings
software development and operations together by design. By using declarative, infrastructure-agnostic
constructs to describe how applications are composed, how they interact, and how they are managed,
Kubernetes enables an order-of-magnitude increase in operability of modern software systems.
Kubernetes was built by Google based on its own experience running containers in production, and it
surely owes much of its success to Google’s involvement. The Kubernetes platform is open source and
growing dramatically through open source contributions at a very rapid pace. Kubernetes marks a
breakthrough for devops because it allows teams to keep pace with the requirements of modern soft-
ware development.

Learning Objectives
After completing this module, students will be able to:
●● Deploy and configure a Managed Kubernetes cluster

Azure Kubernetes Service (AKS)


Kubernetes Overview
Kubernetes is a cluster orchestration technology that originated with Google. Sometimes referred to as
k8s, it's an open-source platform for automating deployment, scaling, and operations of application
containers across clusters of hosts. This creates a container-centric infrastructure.

There are several other container cluster orchestration technologies available, such as Mesosphere DC/
OS1 and Docker Swarm2.
For more details about Kubernetes, go to  Production-Grade Container Orchestration3 on the Kuber-
netes website.

Azure Kubernetes Service (AKS)


AKS is Microsoft's implementation of Kubernetes. AKS makes it easier to deploy a managed Kubernetes
cluster in Azure. It also reduces the complexity and operational overhead of managing Kubernetes, by
offloading much of that responsibility to Azure.

AKS manages many of the Kubernetes resources for the end user, making it quicker and easier to deploy
and manage containerized applications without container orchestration expertise. It also eliminates the
burden of ongoing operations and maintenance by provisioning, upgrading, and scaling resources on
demand without taking applications offline.
Azure AKS manages the following aspects of a Kubernetes cluster for you:
●● It manages critical tasks such as health monitoring and maintenance, including Kubernetes version
upgrades and patching.
●● It performs simple cluster scaling.
●● It enables master nodes to be fully managed by Microsoft.
●● It leaves you responsible only for managing and maintaining the agent nodes.
●● It ensures master nodes are free, and you only pay for running agent nodes.

AKS Architectural components


A Kubernetes cluster is divided into two components:
●● Cluster master nodes, which provide the core Kubernetes services and orchestration of application
workloads.
●● Nodes that run your application workloads.

1 https://mesosphere.com/product/
2 https://www.docker.com/products/orchestration
3 https://kubernetes.io/

Cluster master
When you create an AKS cluster, a cluster master is automatically created and configured. This cluster
master is provided as a managed Azure resource abstracted from the user. There is no cost for the cluster
master, only the nodes that are part of the AKS cluster.
The cluster master includes the following core Kubernetes components:
●● kube-apiserver. The API server is how the underlying Kubernetes APIs are exposed. This component
provides the interaction for management tools such as kubectl or the Kubernetes dashboard.
●● etcd. To maintain the state of your Kubernetes cluster and configuration, the highly available etcd is a
key value store within Kubernetes.
●● kube-scheduler. When you create or scale applications, the Scheduler determines what nodes can run
the workload, and starts them.
●● kube-controller-manager. The Controller Manager oversees a number of smaller controllers that
perform actions such as replicating pods and managing node operations.

Nodes and node pools


To run your applications and supporting services, you need a Kubernetes node. An AKS cluster contains
one or more nodes, which are Azure virtual machines (VMs) that run the Kubernetes node components
and container runtime:
●● The kubelet is the Kubernetes agent that processes the orchestration requests from the cluster master,
and schedules the requested containers to run.
●● Virtual networking is handled by the kube-proxy on each node. The proxy routes network traffic and
manages IP addressing for services and pods.
●● The container runtime is the component that allows containerized applications to run and interact
with additional resources such as the virtual network and storage. In AKS, Docker is used as the
container runtime.
Nodes of the same configuration are grouped together into node pools. A Kubernetes cluster contains
one or more node pools. The initial number of nodes and size are defined when you create an AKS
cluster, which creates a default node pool. This default node pool in AKS contains the underlying VMs
that run your agent nodes.

Pods
Kubernetes uses pods to run an instance of your application. A pod represents a single instance of your
application. Pods typically have a 1:1 mapping with a container, although there are advanced scenarios
where a pod might contain multiple containers. These multi-container pods are scheduled together on
the same node, and allow containers to share related resources.
When you create a pod, you can define resource limits to request a certain amount of CPU or memory
resources. The Kubernetes Scheduler attempts to schedule the pods to run on a node with available
resources to meet the request. You can also specify maximum resource limits that prevent a given pod
from consuming too much compute resource from the underlying node.
✔️ Note: A best practice is to include resource limits for all pods to help the Kubernetes Scheduler under-
stand what resources are needed and permitted.
A pod is a logical resource, but the container (or containers) is where the application workloads run. Pods
are typically ephemeral, disposable resources. Therefore, individually scheduled pods miss some of the
high availability and redundancy features Kubernetes provides. Instead, pods are usually deployed and
managed by Kubernetes controllers, such as the Deployment controller.
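To make the scheduling behavior above concrete, the following is a minimal sketch of a pod spec with resource requests and limits; the pod name and image are hypothetical placeholders:

```yml
# A minimal sketch of a pod with resource requests and limits
# (the pod name and image are hypothetical placeholders).
apiVersion: v1
kind: Pod
metadata:
  name: sample-web
spec:
  containers:
  - name: sample-web
    image: nginx:1.17
    resources:
      requests:        # what the scheduler reserves on a node
        cpu: 100m
        memory: 128Mi
      limits:          # the most the container may consume
        cpu: 250m
        memory: 256Mi
```

The Scheduler places this pod only on a node with at least 100m of CPU and 128Mi of memory available, and the container is prevented from consuming more than the stated limits.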

Kubernetes networking
Kubernetes pods have a limited lifespan, and are replaced whenever new versions are deployed. Settings
such as the IP address change regularly, so interacting with pods by using an IP address is not advised.
This is why Kubernetes services exist. To simplify the network configuration for application workloads,
Kubernetes uses Services to logically group a set of pods together and provide network connectivity.
Kubernetes Service is an abstraction that defines a logical set of pods, combined with a policy that
describes how to access them. Where pods have a shorter lifecycle, services are usually more stable and
are not affected by container updates. This means that you can safely configure applications to interact
with pods through the use of services. The service redirects incoming network traffic to its internal pods.
Services can offer more specific functionality, based on the service type that you specify in the Kuber-
netes deployment file.
If you do not specify the service type, you will get the default type, which is ClusterIP. This means that
your services and pods will receive virtual IP addresses that are only accessible from within the cluster.
Although this might be a good practice for containerized back-end applications, it might not be what you
want for applications that need to be accessible from the internet. You need to determine how to config-
ure your Kubernetes cluster to make those applications and pods accessible from the internet.

Services
The following Service types are available:
●● Cluster IP. This service creates an internal IP address for use within the AKS cluster. It's good
for internal-only applications that support other workloads within the cluster.

●● NodePort. This service creates a port mapping on the underlying node, which enables the application
to be accessed directly with the node IP address and port.

●● Load Balancer. This service creates an Azure Load Balancer resource, configures an external IP address,
and connects the requested pods to the load balancer backend pool. To allow customer traffic to
reach the application, load-balancing rules are created on the desired ports.
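As a rough illustration, a minimal Service manifest might look like the following sketch; the service name, labels, and port numbers are hypothetical placeholders:

```yml
# A minimal sketch of a NodePort Service (names and ports are hypothetical).
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: NodePort          # omit this field to get the default ClusterIP type
  selector:
    app: my-app           # traffic is routed to pods carrying this label
  ports:
  - port: 80              # port exposed inside the cluster
    targetPort: 8080      # port the container listens on
    nodePort: 30080       # port opened on each node (30000-32767 range)
```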

Ingress controllers
When you create a Load Balancer–type Service, an underlying Azure Load Balancer resource is created.
The load balancer is configured to distribute traffic to the pods in your service on a given port. The Load
Balancer only works at layer 4. The Service is unaware of the actual applications, and can't make any
additional routing considerations.
Ingress controllers work at layer 7, and can use more intelligent rules to distribute application traffic. A
common use of an Ingress controller is to route HTTP traffic to different applications based on the
inbound URL.

There are different implementations of the Ingress Controller concept. One example is the Nginx
Ingress Controller, which translates the Ingress Resource into an nginx.conf file. Other examples are
the ALB Ingress Controller (AWS) and the GCE Ingress Controllers (Google Cloud),
which make use of cloud-native resources. Using the Ingress setup within Kubernetes makes it possible to
easily switch the reverse proxy implementation so that your containerized workload gets the most
out of the cloud platform on which it is running.
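As an illustration of URL-based routing, an Ingress resource might look like the following sketch, which uses the networking.k8s.io/v1 API; the hostname, paths, and backend service names are hypothetical placeholders:

```yml
# A sketch of an Ingress routing HTTP traffic by inbound URL path
# (hostname, paths, and backend service names are hypothetical).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-apps-ingress
spec:
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /cart
        pathType: Prefix
        backend:
          service:
            name: cart-service
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-frontend
            port:
              number: 80
```

An Ingress controller such as Nginx watches for this resource and configures itself to send /cart requests to one service and everything else to another.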

Azure virtual networks


In AKS, you can deploy a cluster that uses one of the following two network models:
●● Basic networking. The network resources are created and configured when the AKS cluster is de-
ployed.
●● Advanced networking. The AKS cluster is connected to existing virtual network resources and configu-
rations.

Deployment
Kubernetes uses the term pod to package applications. A pod is a deployment unit, and it represents a
running process on the cluster. It consists of one or more containers, and configuration, storage resourc-
es, and networking support. Pods are usually created by a controller, which monitors them and provides
self-healing capabilities at the cluster level.
Pods are described by using YAML or JSON. Pods that work together to provide functionality are grouped
into services to create microservices. For example, a front-end pod and a back-end pod could be grouped
into one service.
You can deploy an application to Kubernetes by using the kubectl CLI, which can manage the cluster. By
running kubectl on your build agent, it's possible to deploy Kubernetes pods from Azure DevOps. It's
also possible to use the management API directly. There is also a specific Kubernetes task called Deploy
To Kubernetes that is available in Azure DevOps. More information about this will be covered in the
upcoming demonstration.
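As a rough illustration, a pipeline step that deploys manifests to a cluster might look like the following sketch; the KubernetesManifest task is one option, and the service connection name and manifest paths are hypothetical placeholders:

```yml
# A hedged sketch of an Azure Pipelines step deploying manifests to AKS.
# The service connection name and manifest paths are hypothetical.
steps:
- task: KubernetesManifest@0
  displayName: Deploy to AKS
  inputs:
    action: deploy
    kubernetesServiceConnection: my-aks-connection
    namespace: default
    manifests: |
      manifests/deployment.yml
      manifests/service.yml
```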

Continuous delivery
To achieve continuous delivery, the build-and-release pipelines are run for every check-in on the Source
repository.

Demonstration-Deploying and connecting to an AKS cluster
This walkthrough shows how to deploy an AKS cluster using the Azure CLI. A multi-container application
that includes a web front end and a Redis Cache instance is run in the cluster. You then see how to
monitor the health of the cluster and the pods that run your application.

Prerequisites
●● Use the cloud shell.
●● You require an Azure subscription to be able to perform these steps. If you don't have one, you can
create it by following the steps outlined on the Create your Azure free account today4 page.

Steps
1. Open Azure Cloud Shell by going to https://shell.azure.com, or using the Azure Portal and selecting
Bash as the environment option.

2. Create an Azure resource group by running the following command:


az group create --name myResourceGroup --location < datacenter nearest you >

3. Create an AKS cluster by running the following command:


az aks create \
--resource-group myResourceGroup \
--name myAKSCluster \
--node-count 1 \
--enable-addons monitoring \
--generate-ssh-keys

4 https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio

After a few minutes, the command completes and returns JSON-formatted information about the cluster.
4. To manage a Kubernetes cluster, you use kubectl, the Kubernetes command-line client. If you use
Azure Cloud Shell, kubectl is already installed. To install kubectl locally, use the following com-
mand:
az aks install-cli

5. To configure kubectl to connect to your Kubernetes cluster, use the az aks get-credentials
command. This command downloads credentials and configures the Kubernetes CLI to use them:
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

6. Verify the connection to your cluster by running the following command. Make sure that the status of
the node is Ready:
kubectl get nodes

7. Create a file named azure-vote.yaml, and then copy the following YAML definition into it. If you use
the Azure Cloud Shell, you can create this file using vi or nano, as if working on a virtual or physical
system:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-back
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      containers:
      - name: azure-vote-back
        image: redis
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-front
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
      - name: azure-vote-front
        image: microsoft/azure-vote-front:v1
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
        env:
        - name: REDIS
          value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front

8. Deploy the application by running the following command:


kubectl apply -f azure-vote.yaml

After it runs, you should receive output showing that the Deployments and Services were created
successfully, as per the below.

deployment "azure-vote-back" created


service "azure-vote-back" created
deployment "azure-vote-front" created
service "azure-vote-front" created

9. When the application runs, a Kubernetes service exposes the application front end to the internet. This
process can take a few minutes to complete. To monitor progress, run the following command:
kubectl get service azure-vote-front --watch

10. Initially the EXTERNAL-IP for the azure-vote-front service is shown as pending.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
azure-vote-front LoadBalancer 10.0.37.27 < pending > 80:30572/TCP 6s

11. When the EXTERNAL-IP address changes from pending to an actual public IP address, use CTRL-C to
stop the kubectl watch process. The following example output shows a valid public IP address
assigned to the service:
azure-vote-front LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m

12. To see the Azure Vote app in action, open a web browser to the external IP address of your service.

Monitor health and logs. When the AKS cluster was created, Azure Monitor for containers was enabled to
capture health metrics for both the cluster nodes and pods. These health metrics are available in the
Azure portal. To see current status, uptime, and resource usage for the Azure Vote pods, complete the
following steps in the Azure portal:
13. Open a web browser to the Azure portal https://portal.azure.com.
14. Select your resource group, such as myResourceGroup, then select your AKS cluster, such as myAKS-
Cluster.
15. Under Monitoring on the left-hand side, choose Insights.
16. Across the top, select + Add Filter.
17. Select Namespace as the property, then choose < All but kube-system >.

18. Choose to view the Containers. The azure-vote-back and azure-vote-front containers are displayed, as
shown in the following example:

19. To see logs for the azure-vote-front pod, select the View container logs link on the right-hand side of
the containers list. These logs include the stdout and stderr streams from the container.

✔️ Note: If you are not continuing to use the Azure resources, remember to delete them to avoid
incurring costs.

Continuous Deployment
In Kubernetes you can update the service by using a rolling update. This will ensure that traffic to a
container is first drained, then the container is replaced, and finally, traffic is sent back again to the
container. In the meantime, your customers won't see any changes until the new containers are up and
running on the cluster. The moment they are, new traffic is routed to the new containers, and traffic to
the old containers is stopped. Running a rolling update is easy to do with the following command:
kubectl apply -f nameofyamlfile

The YAML file contains a specification of the deployment. The apply command is convenient because it
makes no difference whether the deployment was already on the cluster. This means that you can always
use the exact same steps regardless of whether you are doing an initial deployment or an update to an
existing deployment.
When you change the name of the image for a service in the YAML file, Kubernetes will apply a rolling
update, taking into account the minimum number of running containers you want and how many at a
time it is allowed to stop. The cluster will take care of updating the images without downtime, assuming
that your application container is built to be stateless.
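You can make these rollout constraints explicit in the Deployment spec. The following is a minimal sketch, assuming a hypothetical deployment name, image, and replica count:

```yml
# A sketch of rolling-update settings on a Deployment
# (name, image, and replica count are hypothetical).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-web
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # at most one pod above the desired count
      maxUnavailable: 1   # at most one pod offline during the update
  selector:
    matchLabels:
      app: sample-web
  template:
    metadata:
      labels:
        app: sample-web
    spec:
      containers:
      - name: sample-web
        image: myregistry.azurecr.io/sample-web:v2
```

Changing the image tag here and re-running kubectl apply triggers a rolling update within those bounds.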

Updating Images
After you've successfully containerized your application, you'll need to ensure that you update your image
regularly. This entails creating a new image for every change you make in your own code, and ensuring
that all layers receive regular patching.
A large part of a container image is the base OS layer, which contains the elements of the operating
system that are not shared with the container host.

The base OS layer gets updated frequently. Other layers, such as the IIS layer and the ASP.NET layer in the
image, are also updated. Your own images are built on top of these layers, and it's up to you to ensure
that they incorporate those updates.
Fortunately, the base OS layer actually consists of two separate images: a larger base layer and a smaller
update layer. The base layer changes less frequently than the update layer. Updating your image's base
OS layer is usually a matter of getting the latest update layer.

If you're using a Dockerfile to create your image, patching layers should be done by explicitly changing
the image version number. For example, change:
```dockerfile
FROM microsoft/windowsservercore:10.0.14393.321
RUN cmd /c echo hello world
```
into

```dockerfile
FROM microsoft/windowsservercore:10.0.14393.693
RUN cmd /c echo hello world
```

When you build this Dockerfile, it now uses version 10.0.14393.693 of the
image microsoft/windowsservercore.

Latest tag
Don't be tempted to rely on the latest tag. To define repeatable custom images and deployments, you
should always be explicit about the base image versions that you are using. Also, just because an image is
tagged as the latest doesn't mean that it actually is the latest. The owner of the image needs to ensure
this.
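The same principle applies in a Kubernetes manifest: pin the image field to an explicit tag. A minimal sketch, assuming a hypothetical registry and tag:

```yml
# Pinning an explicit image tag in a pod spec (values are hypothetical).
apiVersion: v1
kind: Pod
metadata:
  name: pinned-example
spec:
  containers:
  - name: web
    image: myregistry.azurecr.io/sample-web:1.4.2   # explicit and repeatable
    # Avoid: image: myregistry.azurecr.io/sample-web:latest
```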

✔️ Note: The last two segments of the version number of Windows Server Core and Nano images will
match the build number of the operating system inside.

Lab
Deploying a multi-container application to Azure Kubernetes Services
Azure Kubernetes Service (AKS) is the quickest way to use Kubernetes on Azure. Azure Kubernetes Service
(AKS) manages your hosted Kubernetes environment, making it quick and easy to deploy and manage
containerized applications without container orchestration expertise. It also eliminates the burden of
ongoing operations and maintenance by provisioning, upgrading, and scaling resources on demand,
without taking your applications offline. Azure DevOps helps in creating Docker images for faster deploy-
ments and reliability using the continuous build option.
One of the biggest advantages of using AKS is that, instead of creating resources in the cloud, you can
create resources and infrastructure inside the Azure Kubernetes cluster through Deployment and Service
manifest files.
In this lab, Deploying a multi-container application to Azure Kubernetes Services5, you will learn how to:
●● Set up an AKS cluster
●● Build a CI/CD pipeline for building artifacts and deploying to Kubernetes
●● Access the Kubernetes web dashboard in Azure Kubernetes Service (AKS)

5 https://azuredevopslabs.com/labs/vstsextend/kubernetes/#access-the-kubernetes-web-dashboard-in-azure-kubernetes-service-aks

Module Review and Takeaways


Module Review Questions
Multiple choice
Is this statement true or false?
Azure Policy natively integrates with AKS, allowing you to enforce rules across multiple AKS clusters. Track,
validate and configure nodes, pods and container images for compliance.
†† True
†† False

Multiple choice
Kubernetes CLI is called?
†† HELM
†† ACI
†† AKS
†† KUBECTL

Checkbox
For workloads running in AKS, the Kubernetes web dashboard allows you to view _______________________. Select
all that apply.
†† Config Map & Secrets
†† Logs
†† Storage
†† Azure Batch Metrics

Checkbox
Pods can be described using which of the following languages? Select all that apply.
†† JSON
†† XML
†† PowerShell
†† YAML

Answers
Multiple choice
Is this statement true or false?
Azure Policy natively integrates with AKS, allowing you to enforce rules across multiple AKS clusters.
Track, validate and configure nodes, pods and container images for compliance.
■■ True
†† False
 
Multiple choice
Kubernetes CLI is called?
†† HELM
†† ACI
†† AKS
■■ KUBECTL
 
Checkbox
For workloads running in AKS, the Kubernetes web dashboard allows you to view _______________________.
Select all that apply.
■■ Config Map & Secrets
■■ Logs
■■ Storage
†† Azure Batch Metrics
 
Checkbox
Pods can be described using which of the following languages? Select all that apply.
■■ JSON
†† XML
†† PowerShell
■■ YAML
 
Module 18 Third Party Infrastructure as Code Tools available with Azure

Module Overview
Configuration management tools enable changes and deployments to be faster, repeatable, scalable,
predictable, and able to maintain the desired state, which brings controlled assets into an expected state.
Some advantages of using configuration management tools include:
●● Adherence to coding conventions that make it easier to navigate code
●● Idempotency, which means that the end state remains the same, no matter how many times the code
is executed
●● Distribution design to improve managing large numbers of remote servers
Some configuration management tools use a pull model, in which an agent installed on the servers runs
periodically to pull the latest definitions from a central repository and apply them to the server. Other
tools use a push model, where a central server triggers updates to managed servers.
Configuration management tools enable the use of tested and proven software development practices
for managing and provisioning data centers in real time through plaintext definition files.

Learning Objectives
After completing this module, students will be able to:
●● Deploy and configure infrastructure using 3rd party tools and services with Azure, such as Chef,
Puppet, Ansible, SaltStack, and Terraform

Chef
What is Chef
Chef is an infrastructure automation tool that you use for deploying, configuring, managing, and ensur-
ing compliance of applications and infrastructure. It provides for a consistent deployment and manage-
ment experience.
Chef helps you to manage your infrastructure in the cloud, on-premises, or in a hybrid environment by
using instructions (or recipes) to configure nodes. A node, or chef-client, is any physical or virtual machine
(VM), cloud, or network device that is under management by Chef.
The following diagram is of the high-level Chef architecture:

Chef components
Chef has three main architectural components:
●● Chef Server. This is the management point. There are two options for the Chef Server: a hosted
solution and an on-premises solution.
●● Chef Client (node). This is a Chef agent that resides on the servers you are managing.

●● Chef Workstation. This is the Admin workstation where you create policies and execute management
commands. You run the knife command from the Chef Workstation to manage your infrastructure.
Chef also uses concepts called cookbooks and recipes. Chef cookbooks and recipes are essentially the
policies that you define and apply to your servers.

Chef Automate
You can deploy Chef on Microsoft Azure from the Azure Marketplace using the Chef Automate image.
Chef Automate is a Chef product that allows you to package and test your applications, and provision and
update your infrastructure. Using Chef, you can manage changes to your applications and infrastructure
using compliance and security checks, and dashboards that give you visibility into your entire stack.
The Chef Automate image is available on the Azure Chef Server and has all the functionality of the legacy
Chef Compliance server. You can build, deploy, and manage your applications and infrastructure on
Azure. Chef Automate is available from the Azure Marketplace, and you can try it out with a free 30-day
license. You can deploy it in Azure straight away.

Chef Automate structure and function


Chef Automate integrates with the open-source products Chef, Chef InSpec, Chef Habitat, and their
associated tools, including chef-client and ChefDK. The following image is an overview of the structure of
Chef Automate, and how it functions.

Let's break down the Chef Automate architecture components:


●● Habitat is an open-source project that offers an entirely new approach to application management. It
makes the application and its automation the unit of deployment by creating platform-independent
build artifacts that can run on traditional servers and virtual machines (VMs). They also can be export-
ed into your preferred container platform, enabling you to deploy your applications in any environ-
ment. When applications are wrapped in a lightweight habitat (the runtime environment), whether the
habitat is a container, a bare metal machine, or platform as a service (PaaS) is no longer the focus and
does not constrain the application.
For more information about Habitat, go to Use Habitat to deploy your application to Azure1.
●● InSpec is a free and open-source framework for testing and auditing your applications and infrastruc-
ture. InSpec works by comparing the actual state of your system with the desired state that you
express in easy-to-read and easy-to-write InSpec code. InSpec detects violations and displays findings
in the form of a report, but you are in control of remediation.
You can use InSpec to validate the state of your VMs running in Azure. You can also use InSpec to
scan and validate the state of resources and resource groups inside a subscription.
More information about InSpec is available at Use InSpec for compliance automation of your
Azure infrastructure2.

Chef Cookbooks
Chef uses a cookbook to define a set of commands that you execute on your managed client. A cook-
book is a set of tasks that you use to configure an application or feature. It defines a scenario, and
everything required to support that scenario. Within a cookbook, there are a series of recipes, which
define a set of actions to perform. Cookbooks and recipes are written in the Ruby language.
After you create a cookbook, you can then create a Role. A Role defines a baseline set of cookbooks and
attributes that you can apply to multiple servers. To create a cookbook, you use the chef generate
cookbook command.

Create a cookbook
Before creating a cookbook, you first configure your Chef workstation by setting up the Chef Develop-
ment Kit on your local workstation. You'll use the Chef workstation to connect to, and manage your Chef
server.
✔️ Note: You can download and install the Chef Development Kit from Chef downloads3.
Choose the Chef Development Kit that is appropriate to your operating system and version. For example:
●● macOSX/macOS
●● Debian
●● Red Hat Enterprise Linux
●● SUSE Linux Enterprise Server
●● Ubuntu
●● Windows
1. Installing the Chef Development Kit creates the Chef workstation automatically in your C:\Chef
directory. After installation completes, run the following example command to generate a cookbook
named webserver for a policy that automatically deploys IIS:
chef generate cookbook webserver

1 https://docs.microsoft.com/en-us/azure/chef/chef-habitat-overview
2 https://docs.microsoft.com/en-us/azure/chef/chef-inspec-overview
3 https://downloads.chef.io/chefdk

This command generates a set of files under the directory C:\Chef\cookbooks\webserver. Next, you
need to define the set of commands that you want the Chef client to execute on your managed VM. The
commands are stored in the default.rb file.
2. For this example, we will define a set of commands that installs and starts Microsoft Internet Informa-
tion Services (IIS), and copies a template file to the wwwroot folder. Modify the C:\chef\cookbooks\
webserver\recipes\default.rb file by adding the following lines:
powershell_script 'Install IIS' do
  action :run
  code 'add-windowsfeature Web-Server'
end

service 'w3svc' do
  action [ :enable, :start ]
end

template 'c:\inetpub\wwwroot\Default.htm' do
  source 'Default.htm.erb'
  rights :read, 'Everyone'
end

3. Save the file after you are done.


4. To generate the template, run the following command:
chef generate template webserver Default.htm

5. Now navigate to the C:\chef\cookbooks\webserver\templates\default\Default.htm.erb file. Edit
the file by adding some simple Hello World HTML code, and then save the file.
6. Run the following command to upload the cookbook to the Chef server so that it appears under the
Policy tab:
knife cookbook upload webserver --include-dependencies

We have now created our cookbook and it's ready to use.


7. The next steps (which we will not be covering in detail at this time) would be to:
●● Create a role to define a baseline set of cookbooks and attributes that you can apply to multiple
servers.
●● Create a node to deploy the configuration to the machine you want to configure.
●● Bootstrap the machine using Chef to add the role to the node that deployed the configuration to the
machine.

Chef Knife command


Knife is a command that's available from the command line. It's made available as part of the Chef Devel-
opment Kit installation. You can use the Knife command to complete a wide variety of tasks, such as:
●● Generate a cookbook template. You do this by running the following command:
chef generate cookbook < cookbook name >

●● Upload your cookbooks and recipes to the Chef Automate server using the following command:
knife cookbook upload < cookbook name> --include-dependencies

●● Create a role to define a baseline set of cookbooks and attributes that you can apply to multiple
servers. Use the following command to create this role:
knife role create < role name >

●● Bootstrap a node or client and assign a role using the following command:
knife bootstrap < FQDN-for-App-VM > --ssh-user < app-admin-username > --ssh-password < app-vm-admin-password > --node-name < node name > --run-list role[ < role you defined > ] --sudo --verbose

You can also bootstrap Chef VM extensions for the Windows and Linux operating systems, in addition to
provisioning them in Azure using the Knife command. For more information, look up the ‘cloud-api’
bootstrap option in the Knife plugin documentation at https://github.com/chef/knife-azure4.
✔️ Note: You can also install the Chef extensions to an Azure VM using Windows PowerShell. By installing
the Chef Management Console, you can manage your Chef server configuration and node deployments
via a browser window.

4 https://github.com/chef/knife-azure

Puppet
What is Puppet
Puppet is a deployment and configuration management toolset that provides you with enterprise tools
that you need to automate an entire lifecycle on your Azure infrastructure. It also provides consistency
and transparency into infrastructure changes.
Puppet provides a series of open-source configuration management tools and projects. It also provides
Puppet Enterprise, which is a configuration management platform that allows you to maintain state in
both your infrastructure and application deployments.

Puppet architectural components


Puppet operates using a client server model, and consists of the following core components:
●● Puppet Master. The Puppet Master is responsible for compiling code to create agent catalogs. It's
also where Secure Sockets Layer (SSL) certificates are verified and signed. Puppet Enterprise infra-
structure components are installed on a single node, the master. The master always contains a
compile master and a Puppet Server. As your installation grows, you can add additional compile
masters to distribute the catalog compilation workload.
●● Puppet Agent. Puppet Agent is the machine (or machines) managed by the Puppet Master. An agent
that is installed on those managed machines allows them to be managed by the Puppet Agent.
●● Console Services. Console Services are the web-based user interface for managing your systems.
●● Facts. Facts are metadata related to state. Puppet will query a node and determine a series of facts,
which it then uses to determine state.

Deploying Puppet in Azure


Puppet Enterprise lets you automate the entire lifecycle of your Azure infrastructure simply, scalably, and
securely, from initial provisioning through application deployment.
Puppet Enterprise is available to install directly into Azure using the Azure Marketplace5. The Puppet
Enterprise image allows you to manage up to 10 Azure VMs for free, and is available to use immediately.
After you select it, you need to fill in the VM's parameter values. A preconfigured system will then run
and test Puppet, and will preset many of the settings. However, these can be changed as needed. The VM
will then be created, and Puppet will run the install scripts.
Another option for creating a Puppet master in Azure is to install a Linux VM in Azure and deploy the
Puppet Enterprise package manually.

5 https://azure.microsoft.com/en-us/marketplace/

Manifest files
Puppet uses a declarative file syntax to define state. It defines what the infrastructure state should be, but
not how it should be achieved. You must tell it you want to install a package, but not how you want to
install the package.
Configuration or state is defined in manifest files known as Puppet Program files. These files are responsi-
ble for determining the state of the application, and have the file extension .pp.
Puppet program files have the following elements:
●● class. This is a bucket that you put resources into. For example, you might have an Apache class with
everything required to run Apache (such as the package, config file. running server, and any users that
need to be created). That class then becomes an entity that you can use to compose other workflows.
●● resources. These are single elements of your configuration that you can specify parameters for.
●● module. This is the collection of all the classes, resources, and other elements of the Puppet program
file in a single entity.

Sample manifest (.pp) file


In the following sample .pp file, notice where classes are being defined, and within that, where resources
and package details are defined.
✔️ Note: The -> notation is an “ordering arrow”: it tells Puppet that it must apply the “left” resource
before invoking the “right” resource. This allows us to specify order, when necessary:

class mrpapp {
  class { 'configuremongodb': }
  class { 'configurejava': }
}

class configuremongodb {
  include wget
  class { 'mongodb': }->
  wget::fetch { 'mongorecords':
    source      => 'https://raw.githubusercontent.com/Microsoft/PartsUnlimitedMRP/master/deploy/MongoRecords.js',
    destination => '/tmp/MongoRecords.js',
    timeout     => 0,
  }->
  exec { 'insertrecords':
    command => 'mongo ordering /tmp/MongoRecords.js',
    path    => '/usr/bin:/usr/sbin',
    unless  => 'test -f /tmp/initcomplete'
  }->
  file { '/tmp/initcomplete':
    ensure => 'present',
  }
}

class configurejava {
  include apt
  $packages = ['openjdk-8-jdk', 'openjdk-8-jre']
  apt::ppa { 'ppa:openjdk-r/ppa': }->
  package { $packages:
    ensure => 'installed',
  }
}

You can download custom Puppet modules that Puppet and the Puppet community have created from
puppetforge6. Puppetforge is a community repository that contains thousands of modules for download
and use, or modification as you need. This saves you the time necessary to recreate modules from
scratch.

6 https://forge.puppet.com/

Ansible
What is Ansible
Ansible is an open-source platform by Red Hat that automates cloud provisioning, configuration man-
agement, and application deployments. Using Ansible, you can provision VMs, containers, and your entire
cloud infrastructure. In addition to provisioning and configuring applications and their environments,
Ansible enables you to automate deployment and configuration of resources in your environment such
as virtual networks, storage, subnets, and resources groups.
Ansible is designed for multiple tier deployments. Unlike Puppet or Chef, Ansible is agentless, meaning
you don't have to install software on the managed machines.
Ansible also models your IT infrastructure by describing how all of your systems interrelate, rather than
managing just one system at a time.

Ansible Components
The following workflow and component diagram outlines how playbooks can run in different circum-
stances, one after another. In the workflow, Ansible playbooks:

1. Provision resources. Playbooks can provision resources. In the following diagram, playbooks create
load-balancer virtual networks, network security groups, and VM scale sets on Azure.
2. Configure the application. Playbooks can deploy applications to run particular services, such as
installing Apache Tomcat on a Linux machine to allow you to run a web application.
3. Manage future configurations to scale. Playbooks can alter configurations by applying playbooks to
existing resources and applications—in this instance to scale the VMs.
In all cases, Ansible makes use of core components such as roles, modules, APIs, plugins, inventory, and
other components.
✔️ Note: By default, Ansible manages machines using the ssh protocol.
✔️ Note: You don't need to maintain and run commands from any particular central server. Instead, there
is a control machine with Ansible installed, and from which playbooks are run.

Ansible core components


Ansible models your IT infrastructure by describing how all of your systems interrelate, rather than just
managing one system at a time. The core components of Ansible are:
●● Control Machine. This is the machine from which the configurations are run. It can be any machine
with Ansible installed on it. However, it requires that Python 2 or Python 3 be installed on the control
machine as well. You can have multiple control nodes, laptops, shared desktops, and servers all
running Ansible.
●● Managed Nodes. These are the devices and machines (or just machines) and environments that are
being managed. Managed nodes are sometimes referred to as hosts. Ansible is not installed on nodes.
●● Playbooks. Playbooks are ordered lists of tasks that have been saved so you can run them repeatedly
in the same order. Playbooks are Ansible’s language for configuration, deployment, and orchestration.
They can describe a policy that you want your remote systems to enforce, or they can dictate a set of
steps in a general IT process.
When you create a playbook, you do so using YAML, which defines a model of a configuration or
process, and uses a declarative model. Elements such as name, hosts, and tasks reside within play-
books.
●● Modules. Ansible works by connecting to your nodes, and then pushing small programs (or units of
code)—called modules—out to the nodes. Modules are the units of code that define the configuration.
They are modular, and can be reused across playbooks. They represent the desired state of the system
(declarative), are executed over SSH by default, and are removed when finished.
A playbook is typically made up of many modules. For example, you could have one playbook
containing three modules: a module for creating an Azure Resource group, a module for creating a
virtual network, and a module for adding a subnet.
Your library of modules can reside on any machine, and do not require any servers, daemons, or
databases. Typically, you’ll work with your favorite terminal program, a text editor, and most likely a
version control system to track changes to your content. A complete list of available modules is
available on Ansible's All modules7 page.
You can preview Ansible Azure modules on the Ansible Azure preview modules8 webpage.
●● Inventory. An inventory is a list of managed nodes. Ansible represents what machines it manages
using a .INI file that puts all your managed machines in groups of your own choosing. When adding
new machines, you don't need to use additional SSL-signing servers, thus avoiding Network Time
Protocol (NTP) and Domain Name System (DNS) issues. You can create the inventory manually, or, for
Azure, Ansible supports dynamic inventories, meaning that the host inventory is dynamically
generated at runtime. Ansible supports host inventories for other managed hosts as well. (See the
inventory sketch after this list.)
●● Roles. Roles are predefined file structures that allow automatic loading of certain variables, files, tasks,
and handlers, based on the file's structure. It allows for easier sharing of roles. You might, for example,
create roles for a web server deployment.
●● Facts. Facts are data points about the remote system that Ansible is managing. When a playbook is
run against a machine, Ansible will gather facts about the state of the environment to determine the
state before executing the playbook.
●● Plug-ins. Plug-ins are code that supplements Ansible's core functionality.
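As a rough illustration, a minimal static inventory in Ansible's YAML inventory format might look like the following sketch; the group names, hostnames, and user are hypothetical placeholders:

```yml
# A minimal static inventory sketch in Ansible's YAML inventory format
# (group names, hostnames, and the user are hypothetical).
all:
  children:
    webservers:
      hosts:
        web1.example.com:
        web2.example.com:
    databases:
      hosts:
        db1.example.com:
          ansible_user: azureuser
```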

7 https://docs.ansible.com/ansible/latest/modules/list_of_all_modules.html
8 https://galaxy.ansible.com/Azure/azure_preview_modules

Installing Ansible
To enable a machine to act as the control machine from which to run playbooks, you need to install both
Python and Ansible.

Python
When you install Python, you must install either Python 2 (version 2.7) or Python 3 (version 3.5 or later).
You can then use pip, the Python package manager, to install Ansible, or you can use other installation
methods.

Ansible installation characteristics


An Ansible installation has the following characteristics:
●● You only need to install Ansible on one machine, which could be a workstation or a laptop. You can
manage an entire fleet of remote machines from that central point.
●● No database is installed as part of the Ansible setup.
●● No daemons are required to start or keep Ansible running.

Ansible on Linux
You can install Ansible on many different distributions of Linux, including, but not limited to:
●● Red Hat Enterprise Linux
●● CentOS
●● Debian
●● Ubuntu
●● Fedora
✔️ Note: Fedora is not supported as an endorsed Linux distribution on Azure. However, you can run it on
Azure by uploading your own image. The other Linux distributions listed are endorsed on Azure.
You can use the appropriate package manager software to install Ansible and Python, such as yum, apt,
or pip. For example, to install Ansible on Ubuntu, run the following command:
## Install pre-requisite packages
sudo apt-get update && sudo apt-get install -y libssl-dev libffi-dev python-dev python-pip
## Install Ansible and Azure SDKs via pip
sudo pip install ansible[azure]

macOS
You can also install Ansible and Python on macOS, and use that environment as the control machine.

Windows operating system


You cannot install Ansible on the Windows operating system. However, you can run playbooks from
Windows by utilizing other products and services, such as:
●● Windows Subsystem for Linux. This is an Ubuntu Linux environment available as part of Windows.
●● Azure Cloud Shell. You can use Azure Cloud Shell via a web browser on a Windows machine.
●● Microsoft Visual Studio Code. Using Visual Studio Code, choose one of the following options:

●● Run Ansible playbook in Docker.


●● Run Ansible playbook on local Ansible.
●● Run Ansible playbook in Azure Cloud Shell.
●● Run Ansible playbook remotely via SSH.

Upgrading Ansible
When Ansible manages remote machines, it doesn't leave software installed or running on them. There-
fore, there’s no real question about how to upgrade Ansible when moving to a new version.

Managed nodes
When managing nodes, you need a way to communicate with them, which is normally SSH by default,
using the SSH file transfer protocol (SFTP). If that's not available, you can switch to Secure Copy Protocol
(SCP), which you can do in ansible.cfg. For Windows machines, use Windows PowerShell.
You can find out more about installing Ansible on the Install Ansible on Azure virtual machines9 page.

Ansible on Azure
There are a number of ways you can use Ansible in Azure.

Azure marketplace
You can use one of the following images available as part of the Azure Marketplace:
●● Red Hat Ansible on Azure is available as an image on Azure Marketplace, and it provides a fully
configured version. This enables easier adoption for those looking to use Ansible as their provisioning
and configuration management tool. This solution template will install Ansible on a Linux VM along
with tools configured to work with Azure. This includes:
●● Ansible (the latest version by default; you can also specify a version number)
●● Azure CLI 2.0
●● MSI VM extension
●● apt-transport-https

9 https://docs.microsoft.com/en-us/azure/virtual-machines/linux/ansible-install-configure?toc=%2Fen-us%2Fazure%2Fansible%2Ftoc.json&bc=%2Fen-us%2Fazure%2Fbread%2Ftoc.json

●● Ansible Tower (by Red Hat). Ansible Tower by Red Hat helps organizations scale IT automation and
manage complex deployments across physical, virtual, and cloud infrastructures. Built on the proven
open-source Ansible automation engine, Ansible Tower includes capabilities that provide additional
levels of visibility, control, security, and efficiency necessary for today's enterprises. With Ansible
Tower you can:
●● Provision Azure environments with ease using pre-built Ansible playbooks.
●● Use role-based access control (RBAC) for secure, efficient management.
●● Maintain centralized logging for complete auditability and compliance.
●● Utilize the large community of content available on Ansible Galaxy.
This offering requires the use of an available Ansible Tower subscription eligible for use in Azure. If you
don't currently have a subscription, you can obtain one directly from Red Hat.

Azure VMs
Another option for running Ansible on Azure is to deploy a Linux VM on Azure virtual machines, which is
infrastructure as a service (IaaS). You can then install Ansible and the relevant components, and use that
as the control machine.
✔️ Note: The Windows operating system is not supported as a control machine. However, you can run
Ansible from a Windows machine by utilizing other services and products such as Windows Subsystem
for Linux, Azure Cloud Shell, and Visual Studio Code.
For more details about running Ansible in Azure, visit:
●● Ansible on Azure documentation10 website
●● Microsoft Azure Guide11

Playbook structure
Playbooks are the language of Ansible's configurations, deployments, and orchestrations. You use them
to manage configurations of and deployments to remote machines. Playbooks are structured with YAML
(a data serialization language), and support variables. Playbooks are declarative and include detailed
information regarding the number of machines to configure at a time.

YML structure
YAML is based around the structure of key-value pairs. In the following example, the key is name, and the
value is namevalue:
name: namevalue

In the YAML syntax, a child key-value pair is placed on a new, indented line below its parent key. Each
sibling key-value pair occurs on a new line at the same level of indentation.
parent:
  children:
    first-sibling: value01
    second-sibling: value02

10 https://docs.microsoft.com/en-us/azure/ansible/?ocid=AID754288&wt.mc_id=CFID0352
11 https://docs.ansible.com/ansible/latest/scenario_guides/guide_azure.html

The specific number of spaces used for indentation is not defined. You can indent each level by as many
spaces as you want. However, the number of spaces used for indentations at each level must be uniform
throughout the file.
When there is indentation in a YAML file, the indented key-value pair is the value of its parent key.

Playbook components
The following list is of some of the playbook components:
●● name. The name of the playbook. This can be any name you wish.
●● hosts. Lists where the configuration is applied, or machines being targeted. Hosts can be a list of one
or more groups or host patterns, separated by colons. It can also contain groups such as web servers
or databases, providing that you have defined these groups in your inventory.
●● connection. Specifies the connection type.
●● remote_user. Specifies the user that will be connected to for completing the tasks.
●● var. Allows you to define the variables that can be used throughout your playbook.
●● gather_facts. Determines whether to gather node data or not. The value can be yes or no.
●● tasks. Indicates the start of the modules where the actual configuration is defined.
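A minimal sketch showing how these components fit together follows; the host group, remote user, and package are hypothetical placeholders:

```yml
# A minimal playbook sketch combining the components above
# (host group, remote user, and package are hypothetical).
- name: Configure web servers
  hosts: webservers
  remote_user: azureuser
  gather_facts: yes
  vars:
    http_port: 80
  tasks:
  - name: Ensure nginx is installed
    apt:
      name: nginx
      state: present
```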

Running a playbook
You run a playbook using the following command:
ansible-playbook < playbook name >

You can also check the syntax of a playbook using the following command.
ansible-playbook --syntax-check

The syntax check command runs a playbook through the parser to verify that it has included items, such
as files and roles, and that the playbook has no syntax errors. You can also use the --verbose com-
mand.
●● To see a list of hosts that would be affected by running a playbook, run the command:
ansible-playbook playbook.yml --list-hosts

Sample Playbook
The following code is a sample playbook that will create a Linux virtual machine in Azure:
- name: Create Azure VM
  hosts: localhost
  connection: local
  vars:
    resource_group: ansible_rg5
    location: westus
  tasks:
  - name: Create resource group
    azure_rm_resourcegroup:
      name: "{{ resource_group }}"
      location: "{{ location }}"
  - name: Create virtual network
    azure_rm_virtualnetwork:
      resource_group: myResourceGroup
      name: myVnet
      address_prefixes: "10.0.0.0/16"
  - name: Add subnet
    azure_rm_subnet:
      resource_group: myResourceGroup
      name: mySubnet
      address_prefix: "10.0.1.0/24"
      virtual_network: myVnet
  - name: Create public IP address
    azure_rm_publicipaddress:
      resource_group: myResourceGroup
      allocation_method: Static
      name: myPublicIP
    register: output_ip_address
  - name: Dump public IP for VM which will be created
    debug:
      msg: "The public IP is {{ output_ip_address.state.ip_address }}."
  - name: Create Network Security Group that allows SSH
    azure_rm_securitygroup:
      resource_group: myResourceGroup
      name: myNetworkSecurityGroup
      rules:
      - name: SSH
        protocol: Tcp
        destination_port_range: 22
        access: Allow
        priority: 1001
        direction: Inbound
  - name: Create virtual network interface card
    azure_rm_networkinterface:
      resource_group: myResourceGroup
      name: myNIC
      virtual_network: myVnet
      subnet: mySubnet
      public_ip_name: myPublicIP
      security_group: myNetworkSecurityGroup
  - name: Create VM
    azure_rm_virtualmachine:
      resource_group: myResourceGroup
      name: myVM
      vm_size: Standard_DS1_v2
      admin_username: azureuser
      ssh_password_enabled: false
      ssh_public_keys:
      - path: /home/azureuser/.ssh/authorized_keys
        key_data: <your-key-data>
      network_interfaces: myNIC
      image:
        offer: CentOS
        publisher: OpenLogic
        sku: '7.5'
        version: latest

✔️ Note: Ansible Playbook samples for Azure are available on GitHub on the Ansible Playbook Samples
for Azure12 page.

Demonstration-Run Ansible in Azure Cloud Shell


You can run Ansible playbooks on a Windows machine by using Azure Cloud Shell with Bash. This is
the quickest and easiest way to begin using Ansible's provisioning and management features in Azure.

Run commands
Azure Cloud Shell has Ansible preinstalled. After you are signed in to Azure Cloud Shell, select the Bash
console. You do not need to install or configure anything further to run Ansible commands from the Bash
console in Azure Cloud Shell.
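For example, a quick sanity check from the Bash console might look like the following; nothing is assumed beyond being signed in to Azure Cloud Shell:
ansible --version          # confirm the preinstalled Ansible version
ansible localhost -m ping  # run an ad-hoc module against the Cloud Shell instance itself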

Editor
You can also use the Azure Cloud Shell editor to review, open, and edit your playbook .yml files. You can
open the editor by selecting the curly brackets icon on the Azure Cloud Shell taskbar.

12 https://github.com/Azure-Samples/ansible-playbooks

Create a resource group


The following steps outline how to create a resource group in Azure using Ansible in Azure Cloud Shell
with bash:
1. Go to the Azure Cloud Shell13. You can also launch Azure Cloud Shell from within the Azure portal by
selecting the Cloud Shell icon on the toolbar.
2. Authenticate to Azure by entering your credentials, if prompted.
3. On the taskbar, ensure Bash is selected as the shell.
4. Create a new file using the following command:
vi rg.yml

5. Enter insert mode by selecting the I key.


6. Copy and paste the following code into the file, and remove the leading # comment character. (It's included
here only so the code displays in the learning platform.) The code should be aligned as in the previous
screenshot.
#---
- hosts: localhost
  connection: local
  tasks:
    - name: Create resource group
      azure_rm_resourcegroup:
        name: ansible-rg
        location: eastus

13 https://shell.azure.com

7. Exit insert mode by selecting the Esc key.


8. Save the file and exit the vi editor by entering the following command:
:wq

9. Run the playbook with the following command:


ansible-playbook rg.yml

10. Verify that you receive output similar to the following code:
PLAY [localhost] *********************************************************************************

TASK [Gathering Facts] ***************************************************************************
ok: [localhost]

TASK [Create resource group] *********************************************************************
changed: [localhost]

TASK [debug] *************************************************************************************
ok: [localhost] => {
    "rg": {
        "changed": true,
        "contains_resources": false,
        "failed": false,
        "state": {
            "id": "/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/resourceGroups/ansible-rg",
            "location": "eastus",
            "name": "ansible-rg",
            "provisioning_state": "Succeeded",
            "tags": null
        }
    }
}

PLAY RECAP ***************************************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0

11. Open the Azure portal and verify that the resource group is now available in the portal.

Demonstration-Run Ansible in Visual Studio Code


You can also run Ansible playbooks on a Windows machine using Visual Studio Code. This approach lets you
leverage other services that integrate with Visual Studio Code, such as Azure Cloud Shell.

Create network resources in Azure using Visual Studio Code


Complete the following steps to create network resources in Azure using Visual Studio Code:
1. If not already installed, install Visual Studio Code by downloading it from the https://code.visualstudio.com/14 page. You can install it on the Windows, Linux, or macOS operating systems.
2. Go to File > Preferences > Extensions.
3. Search for and install the extension Azure Account.

4. Search for and install the extension Ansible.

14 https://code.visualstudio.com/

You can also view details of this extension on the Visual Studio Marketplace Ansible 15 page.
5. In Visual Studio Code, go to View > Command Palette…. Alternatively, you can select the settings
(cog) icon in the bottom, left corner of the Visual Studio Code window, and then select Command
Palette.

6. In the Command Palette, type Azure:, and then select Azure: Sign In.

15 https://marketplace.visualstudio.com/items?itemName=vscoss.vscode-ansible&ocid=AID754288&wt.mc_id=CFID0352

7. When a browser launches and prompts you to sign in, select your Azure account. Verify that a message
displays stating that you are now signed in and can close the page.

8. Verify that your Azure account now displays at the bottom of the Visual Studio Code window.
9. Create a new file and paste in the following playbook text:
- name: Create Azure VM
  hosts: localhost
  connection: local
  tasks:
    - name: Create resource group
      azure_rm_resourcegroup:
        name: myResourceGroup
        location: eastus
    - name: Create virtual network
      azure_rm_virtualnetwork:
        resource_group: myResourceGroup
        name: myVnet
        address_prefixes: "10.0.0.0/16"
    - name: Add subnet
      azure_rm_subnet:
        resource_group: myResourceGroup
        name: mySubnet
        address_prefix: "10.0.1.0/24"
        virtual_network: myVnet
    - name: Create public IP address
      azure_rm_publicipaddress:
        resource_group: myResourceGroup
        allocation_method: Static
        name: myPublicIP
      register: output_ip_address
    - name: Dump public IP for VM which will be created
      debug:
        msg: "The public IP is {{ output_ip_address.state.ip_address }}."
    - name: Create Network Security Group that allows SSH
      azure_rm_securitygroup:
        resource_group: myResourceGroup
        name: myNetworkSecurityGroup
        rules:
          - name: SSH
            protocol: Tcp
            destination_port_range: 22
            access: Allow
            priority: 1001
            direction: Inbound
    - name: Create virtual network interface card
      azure_rm_networkinterface:
        resource_group: myResourceGroup
        name: myNIC
        virtual_network: myVnet
        subnet: mySubnet
        public_ip_name: myPublicIP
        security_group: myNetworkSecurityGroup
    - name: Create VM
      azure_rm_virtualmachine:
        resource_group: myResourceGroup
        name: myVM
        vm_size: Standard_DS1_v2
        admin_username: azureuser
        ssh_password_enabled: true
        admin_password: Password0134
        network_interfaces: myNIC
        image:
          offer: CentOS
          publisher: OpenLogic
          sku: '7.5'
          version: latest

10. Save the file locally, and name it createavm.yml.


11. Right-click the file name in the tab at the top of Visual Studio Code, and review the options available
to run the Ansible playbook:
●● Run Ansible Playbook in Docker
●● Run Ansible Playbook in Local Ansible
●● Run Ansible Playbook in Cloud Shell
●● Run Ansible Playbook Remotely via ssh
12. Select the third option, Run Ansible Playbook in Cloud Shell.

13. A notice might appear in the bottom, left side, informing you that the action could incur a small
charge, because some storage is used when the playbook is uploaded to Cloud Shell. Select Confirm &
Don't show this message again.

14. Verify that the Azure Cloud Shell pane now displays in the bottom of Visual Studio Code and is
running the playbook.

15. When the playbook finishes running, open the Azure portal and verify that the resource group, resources, and VM
have all been created. If you have time, sign in to the VM with the user name and password specified in the
playbook to verify as well.

✔️ Note: If you want to use a public or private key pair to connect to the Linux VM, instead of a user
name and password you could use the following code in the previous Create VM module steps:
admin_username: adminUser
ssh_password_enabled: false
ssh_public_keys:
  - path: /home/adminUser/.ssh/authorized_keys
    key_data: < insert your ssh public key here... >

Terraform
What is Terraform
HashiCorp Terraform is an open-source tool that allows you to provision, manage, and version cloud
infrastructure. It codifies infrastructure in configuration files that describe the topology of cloud resources
such as VMs, storage accounts, and networking interfaces.

Terraform's command-line interface (CLI) provides a simple mechanism to deploy and version the
configuration files to Azure or any other supported cloud service. The CLI also allows you to validate and
preview infrastructure changes before you deploy them.
Terraform also supports multi-cloud scenarios. This means it enables developers to use the same tools
and configuration files to manage infrastructure on multiple cloud providers.
You can run Terraform interactively from the CLI with individual commands, or non-interactively as part of
a continuous integration pipeline.
There is also an enterprise version of Terraform available, Terraform Enterprise.
You can view more details about Terraform on the HashiCorp Terraform16 website.

Terraform components
Some of Terraform's core components include:
●● Configuration files. Text-based configuration files allow you to define infrastructure and application
configuration. These files end in the .tf or .tf.json extension. The files can be in either of the following
two formats:
●● Terraform. The Terraform format is easier for users to review, making it more user friendly.
It supports comments, and is the generally recommended format for most Terraform files. Terraform
files end in .tf.
●● JSON. The JSON format is mainly for use by machines for creating, modifying, and updating
configurations. However, it can also be used by Terraform operators if you prefer. JSON files end in
.tf.json.
The order of items (such as variables and resources) as defined within the configuration file does not
matter, because Terraform configurations are declarative.
●● Terraform CLI. This is a command-line interface from which you run configurations. You can run
commands such as terraform apply and terraform plan, along with many others. A CLI configuration
file that configures per-user settings for the CLI is also available. However, this is separate from the
infrastructure configuration. In Windows operating system environments, the configuration file is
named terraform.rc, and is stored in the relevant user's %APPDATA% directory. On Linux systems, the
file is named .terraformrc (note the leading period), and is stored in the home directory of the
relevant user.

16 https://www.terraform.io/

●● Modules. Modules are self-contained packages of Terraform configurations that are managed as a
group. You use modules to create reusable components in Terraform and for basic code organization.
A list of available modules for Azure is available on the Terraform Registry Modules17 webpage.
●● Provider. The provider is responsible for understanding API interactions and exposing resources.
●● Overrides. Overrides are a way to create configuration files that are loaded last and merged into
(rather than appended to) your configuration. You can create overrides to modify Terraform behavior
without having to edit the Terraform configuration. They can also be used as temporary modifications
that you can make to Terraform configurations without having to modify the configuration itself.
●● Resources. Resources are sections of a configuration file that define components of your infrastruc-
ture, such as VMs, network resources, containers, dependencies, or DNS records. The resource block
creates a resource of the given TYPE (first parameter) and NAME (second parameter). However, the
combination of the type and name must be unique. The resource's configuration is then defined and
contained within braces.
●● Execution plan. You can issue a command in the Terraform CLI to generate an execution plan. The
execution plan shows what Terraform will do when a configuration is applied. This enables you to
verify changes and flag potential issues. The command for generating the execution plan is terraform plan.
●● Resource graph. Using a resource graph, you can build a dependency graph of all resources. You can
then create and modify resources in parallel. This helps provision and configure resources more
efficiently.
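To make the resource block structure concrete, the following is a minimal sketch that assumes the azurerm provider and uses placeholder names:
# A resource block: TYPE "azurerm_resource_group" plus NAME "example";
# the type and name combination must be unique within the configuration.
resource "azurerm_resource_group" "example" {
  name     = "example-rg"   # placeholder resource group name
  location = "westus"
}

# Other resources can reference its attributes; Terraform uses such
# references to build the resource graph of dependencies, for example:
#   resource_group_name = "${azurerm_resource_group.example.name}"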

Terraform on Azure
You can obtain Terraform for use with Azure via the Azure Marketplace, the Terraform Marketplace, or your own Azure VMs.

Azure Marketplace
Azure Marketplace offers a fully-configured Linux image containing Terraform with the following characteristics:
●● The deployment template will install Terraform on a Linux (Ubuntu 16.04 LTS) VM along with tools
configured to work with Azure. Items downloaded include:
●● Terraform (latest)
●● Azure CLI 2.0
●● Managed Service Identity (MSI) VM extension
●● Unzip
●● Jq
●● apt-transport-https
●● This image also configures a remote back-end to enable remote state management using Terraform.

Terraform Marketplace
The Terraform Marketplace image makes it easy to get started using Terraform on Azure, without having
to install and configure Terraform manually. There are no software charges for this Terraform VM image.

17 https://registry.terraform.io/browse?provider=azurerm

You pay only the Azure hardware usage fees that are assessed based on the size of the VM that's provisioned.

Azure VMs
You can also deploy a Linux or Windows VM using the Azure IaaS VM service, install Terraform and the relevant
components, and then use that image.

Installing Terraform
To get started, you must install Terraform on the machine from which you are running the Terraform
commands.
Terraform can be installed on Windows, Linux or macOS environments. Go to the Download Terraform18
page, and choose the appropriate download package for your environment.

Windows operating system


If you download Terraform for the Windows operating system:
1. Find the install package, which is bundled as a zip file.
2. Copy the files from the zip to a local directory such as C:\terraform, and make sure the directory containing
the Terraform binary is available on the PATH.
3. To set the PATH environment variable, run the command set PATH=%PATH%;C:\terraform, or point
to wherever you have placed the Terraform executable.
4. Open an administrator command window at C:\terraform and run the command terraform to
verify the installation. You should be able to view the Terraform help output.

18 https://www.terraform.io/downloads.html

Linux
1. Download Terraform using the following command:
wget https://releases.hashicorp.com/terraform/0.xx.x/terraform_0.xx.x_linux_amd64.zip

2. Install Unzip using the command:


sudo apt-get install unzip

3. Unzip and set the path using the command:


unzip terraform_0.xx.x_linux_amd64.zip
sudo mv terraform /usr/local/bin/

4. Verify the installation by running the command terraform. Verify that the Terraform help output
displays.

Authenticating Terraform with Azure


Terraform supports a number of different methods for authenticating to Azure. You can use:
●● The Azure CLI
●● A Managed Service Identity (MSI)
●● A service principal and a client certificate
●● A service principal and a client secret
When running Terraform as part of a continuous integration pipeline, you can use either an Azure service
principal or MSI to authenticate.
To configure Terraform to use your Azure Active Directory (Azure AD) service principal, set the following
environment variables:
●● ARM_SUBSCRIPTION_ID
●● ARM_CLIENT_ID
●● ARM_CLIENT_SECRET
●● ARM_TENANT_ID
●● ARM_ENVIRONMENT
These variables are then used by the Azure Terraform modules. You can also set the ARM_ENVIRONMENT variable
if you are working with an Azure cloud other than the Azure public cloud.
Use the following sample shell script to set these variables:

#!/bin/sh
echo "Setting environment variables for Terraform"
export ARM_SUBSCRIPTION_ID=your_subscription_id
export ARM_CLIENT_ID=your_appId
export ARM_CLIENT_SECRET=your_password
export ARM_TENANT_ID=your_tenant_id

# Not needed for public, required for usgovernment, german, china
export ARM_ENVIRONMENT=public

✔️ Note: After you install Terraform, and before you can apply .tf configuration files, you must run the following
command to initialize Terraform for the installed instance:
terraform init
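From a directory containing your .tf files, a typical end-to-end command sequence then follows the plan-then-apply flow described earlier:
terraform init      # initialize the working directory and download providers
terraform plan      # generate and review the execution plan
terraform apply     # apply the planned changes to your Azure subscription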

Terraform config file structure


Take a moment to skim through the following example of a Terraform .tf file. Try to identify the different
elements within the file. The file performs the following actions on Azure:
●● Authenticates
●● Creates a resource group
●● Creates a virtual network
●● Creates a subnet
●● Creates a public IP address
●● Creates a network security group and rule
●● Creates a virtual network interface card
●● Generates random text for use as a unique storage account name
●● Creates a storage account for diagnostics
●● Creates a virtual machine

Sample Terraform .tf file


# Configure the Microsoft Azure Provider
provider "azurerm" {
  subscription_id = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  client_id       = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  client_secret   = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  tenant_id       = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}

# Create a resource group if it does not exist
resource "azurerm_resource_group" "myterraformgroup" {
  name     = "myResourceGroup"
  location = "eastus"

  tags {
    environment = "Terraform Demo"
  }
}

# Create virtual network
resource "azurerm_virtual_network" "myterraformnetwork" {
  name                = "myVnet"
  address_space       = ["10.0.0.0/16"]
  location            = "eastus"
  resource_group_name = "${azurerm_resource_group.myterraformgroup.name}"

  tags {
    environment = "Terraform Demo"
  }
}

# Create subnet
resource "azurerm_subnet" "myterraformsubnet" {
  name                 = "mySubnet"
  resource_group_name  = "${azurerm_resource_group.myterraformgroup.name}"
  virtual_network_name = "${azurerm_virtual_network.myterraformnetwork.name}"
  address_prefix       = "10.0.1.0/24"
}

# Create public IPs
resource "azurerm_public_ip" "myterraformpublicip" {
  name                         = "myPublicIP"
  location                     = "eastus"
  resource_group_name          = "${azurerm_resource_group.myterraformgroup.name}"
  public_ip_address_allocation = "dynamic"

  tags {
    environment = "Terraform Demo"
  }
}

# Create Network Security Group and rule
resource "azurerm_network_security_group" "myterraformnsg" {
  name                = "myNetworkSecurityGroup"
  location            = "eastus"
  resource_group_name = "${azurerm_resource_group.myterraformgroup.name}"

  security_rule {
    name                       = "SSH"
    priority                   = 1001
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }

  tags {
    environment = "Terraform Demo"
  }
}

# Create network interface
resource "azurerm_network_interface" "myterraformnic" {
  name                      = "myNIC"
  location                  = "eastus"
  resource_group_name       = "${azurerm_resource_group.myterraformgroup.name}"
  network_security_group_id = "${azurerm_network_security_group.myterraformnsg.id}"

  ip_configuration {
    name                          = "myNicConfiguration"
    subnet_id                     = "${azurerm_subnet.myterraformsubnet.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id          = "${azurerm_public_ip.myterraformpublicip.id}"
  }

  tags {
    environment = "Terraform Demo"
  }
}

# Generate random text for a unique storage account name
resource "random_id" "randomId" {
  keepers = {
    # Generate a new ID only when a new resource group is defined
    resource_group = "${azurerm_resource_group.myterraformgroup.name}"
  }

  byte_length = 8
}

# Create storage account for boot diagnostics
resource "azurerm_storage_account" "mystorageaccount" {
  name                     = "diag${random_id.randomId.hex}"
  resource_group_name      = "${azurerm_resource_group.myterraformgroup.name}"
  location                 = "eastus"
  account_tier             = "Standard"
  account_replication_type = "LRS"

  tags {
    environment = "Terraform Demo"
  }
}

# Create virtual machine
resource "azurerm_virtual_machine" "myterraformvm" {
  name                  = "myVM"
  location              = "eastus"
  resource_group_name   = "${azurerm_resource_group.myterraformgroup.name}"
  network_interface_ids = ["${azurerm_network_interface.myterraformnic.id}"]
  vm_size               = "Standard_DS1_v2"

  storage_os_disk {
    name              = "myOsDisk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Premium_LRS"
  }

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04.0-LTS"
    version   = "latest"
  }

  os_profile {
    computer_name  = "myvm"
    admin_username = "azureuser"
  }

  os_profile_linux_config {
    disable_password_authentication = true
    ssh_keys {
      path     = "/home/azureuser/.ssh/authorized_keys"
      key_data = "ssh-rsa AAAAB3Nz{snip}hwhqT9h"
    }
  }

  boot_diagnostics {
    enabled     = "true"
    storage_uri = "${azurerm_storage_account.mystorageaccount.primary_blob_endpoint}"
  }

  tags {
    environment = "Terraform Demo"
  }
}

Demonstration-Run Terraform in Azure Cloud Shell


Terraform is pre-installed in Azure Cloud Shell, so you can use it immediately; no additional configuration
is required. Because Terraform runs on both the Windows and Linux operating systems, you can use either
a PowerShell or Bash shell to run it. In this walkthrough you create a resource group in Azure using
Terraform, in Azure Cloud Shell, with Bash.
The following screenshot displays Terraform running in Azure Cloud Shell with PowerShell.

The following image is an example of running Terraform in Azure Cloud Shell with a Bash shell.

Editor
You can also use the Azure Cloud Shell editor to review, open, and edit your .tf files. To open the editor,
select the curly brackets icon on the Azure Cloud Shell taskbar.

Prerequisites
●● You require an Azure subscription to perform these steps. If you don't have one, you can create
one by following the steps outlined on the Create your Azure free account today19 webpage.

Steps
The following steps outline how to create a resource group in Azure using Terraform in Azure Cloud Shell,
with bash.
1. Open the Azure Cloud Shell at https://shell.azure.com. You can also launch Azure Cloud Shell
from within the Azure portal by selecting the Azure Cloud Shell icon.
2. If prompted, authenticate to Azure by entering your credentials.
3. In the taskbar, ensure that Bash is selected as the shell type.
4. Create a new .tf file and open the file for editing with the following command:
vi terraform-createrg.tf

5. Enter insert mode by selecting the I key.


6. Copy and paste the following code into the file:

19 https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio

provider "azurerm" {
}
resource "azurerm_resource_group" "rg" {
name = "testResourceGroup"
location = "westus"
}

7. Exit insert mode by selecting the Esc key.


8. Save the file and exit the vi editor by entering the following command:
:wq

9. Use the following command to initialize Terraform:


terraform init

You should receive a message saying Terraform was successfully initialized.

10. Run the configuration .tf file with the following command:
terraform apply

You should receive a prompt indicating that a plan has been generated. Details of the changes should be
listed, followed by a prompt to apply or cancel the changes.

11. Enter a value of yes, and then select Enter. The command should run successfully, with output similar
to the following screenshot.

12. Open the Azure portal and verify that the new resource group now displays in the portal.

Demonstration-Run Terraform in Visual Studio Code


You can also run Terraform configuration files using Visual Studio Code, which lets you leverage other
Terraform services that integrate with Visual Studio Code. Two Visual Studio Code extensions are required:
Azure Account and Azure Terraform.
In this walkthrough you will create a VM in Visual Studio Code using Terraform.

Prerequisites
●● This walkthrough requires Visual Studio Code. If you do not have Visual Studio Code installed, you can
download it from https://code.visualstudio.com/20. Download and install a version of Visual Studio
Code that is appropriate to your operating system environment, for example Windows, Linux, or
macOS.
●● You will require an active Azure subscription to perform the steps in this walkthrough. If you do not
have one, create an Azure subscription by following the steps outlined on the Create your Azure free
account today21 webpage.

Steps
1. Launch the Visual Studio Code editor.
2. The two Visual Studio Code extensions Azure Account and Azure Terraform must be installed. To install
the first extension, from inside Visual Studio Code, select File > Preferences > Extensions.
3. Search for and install the extension Azure Account.

4. Search for and install the extension Azure Terraform. Ensure that you select the extension authored by
Microsoft, as there are similar extensions available from other authors.

20 https://code.visualstudio.com/
21 https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio

You can view more details of this extension at the Visual Studio Marketplace on the Azure Terraform22
page.
5. In Visual Studio Code, open the command palette by selecting View > Command Palette. You can
also access the command palette by selecting the settings (cog) icon on the bottom, left side of the
Visual Studio Code window, and then selecting Command Palette.

6. In the Command Palette search field, type Azure:, and from the results, select Azure: Sign In.

7. When a browser launches and prompts you to sign in to Azure, select your Azure account. The
message You are signed in now and can close this page. should display in the browser.

22 https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureterraform

8. Verify that your Azure account now displays at the bottom of the Visual Studio Code window.

9. Create a new file, then copy the following code and paste it into the file.
# Create a resource group if it doesn't exist
resource "azurerm_resource_group" "myterraformgroup" {
  name     = "terraform-rg2"
  location = "eastus"

  tags {
    environment = "Terraform Demo"
  }
}

# Create virtual network
resource "azurerm_virtual_network" "myterraformnetwork" {
  name                = "myVnet"
  address_space       = ["10.0.0.0/16"]
  location            = "eastus"
  resource_group_name = "${azurerm_resource_group.myterraformgroup.name}"

  tags {
    environment = "Terraform Demo"
  }
}

# Create subnet
resource "azurerm_subnet" "myterraformsubnet" {
  name                 = "mySubnet"
  resource_group_name  = "${azurerm_resource_group.myterraformgroup.name}"
  virtual_network_name = "${azurerm_virtual_network.myterraformnetwork.name}"
  address_prefix       = "10.0.1.0/24"
}

# Create public IPs
resource "azurerm_public_ip" "myterraformpublicip" {
  name                         = "myPublicIP"
  location                     = "eastus"
  resource_group_name          = "${azurerm_resource_group.myterraformgroup.name}"
  public_ip_address_allocation = "dynamic"

  tags {
    environment = "Terraform Demo"
  }
}

# Create Network Security Group and rule
resource "azurerm_network_security_group" "myterraformnsg" {
  name                = "myNetworkSecurityGroup"
  location            = "eastus"
  resource_group_name = "${azurerm_resource_group.myterraformgroup.name}"

  security_rule {
    name                       = "SSH"
    priority                   = 1001
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }

  tags {
    environment = "Terraform Demo"
  }
}

# Create network interface
resource "azurerm_network_interface" "myterraformnic" {
  name                      = "myNIC"
  location                  = "eastus"
  resource_group_name       = "${azurerm_resource_group.myterraformgroup.name}"
  network_security_group_id = "${azurerm_network_security_group.myterraformnsg.id}"

  ip_configuration {
    name                          = "myNicConfiguration"
    subnet_id                     = "${azurerm_subnet.myterraformsubnet.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id          = "${azurerm_public_ip.myterraformpublicip.id}"
  }

  tags {
    environment = "Terraform Demo"
  }
}

# Generate random text for a unique storage account name
resource "random_id" "randomId" {
  keepers = {
    # Generate a new ID only when a new resource group is defined
    resource_group = "${azurerm_resource_group.myterraformgroup.name}"
  }

  byte_length = 8
}

# Create storage account for boot diagnostics
resource "azurerm_storage_account" "mystorageaccount" {
  name                     = "diag${random_id.randomId.hex}"
  resource_group_name      = "${azurerm_resource_group.myterraformgroup.name}"
  location                 = "eastus"
  account_tier             = "Standard"
  account_replication_type = "LRS"

  tags {
    environment = "Terraform Demo"
  }
}

# Create virtual machine
resource "azurerm_virtual_machine" "myterraformvm" {
  name                  = "myVM"
  location              = "eastus"
  resource_group_name   = "${azurerm_resource_group.myterraformgroup.name}"
  network_interface_ids = ["${azurerm_network_interface.myterraformnic.id}"]
  vm_size               = "Standard_DS1_v2"

  storage_os_disk {
    name              = "myOsDisk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Premium_LRS"
  }

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04.0-LTS"
    version   = "latest"
  }

  os_profile {
    computer_name  = "myvm"
    admin_username = "azureuser"
    admin_password = "Password0134!"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  boot_diagnostics {
    enabled     = "true"
    storage_uri = "${azurerm_storage_account.mystorageaccount.primary_blob_endpoint}"
  }

  tags {
    environment = "Terraform Demo"
  }
}

10. Save the file locally with the file name terraform-createvm.tf.
11. In Visual Studio Code, select View > Command Palette. Search for the command by entering terraform
into the search field. Select the following command from the dropdown list of commands:
Azure Terraform: apply

12. If Azure Cloud Shell is not open in Visual Studio Code, a message might appear in the bottom, left
corner asking you if you want to open Azure Cloud Shell. Choose Accept, and select Yes.
13. Wait for the Azure Cloud Shell pane to appear at the bottom of the Visual Studio Code window and start
running the file terraform-createvm.tf. When you are prompted to apply the plan or cancel,
type yes, and then press Enter.

14. After the command completes successfully, review the list of resources created.

15. Open the Azure portal and verify that the resource group, resources, and the VM have been created. If you
have time, sign in to the VM with the user name and password specified in the .tf config file to verify.

Note: If you want to use a public or private key pair to connect to the Linux VM instead of a user name
and password, you could use the os_profile_linux_config block, set the disable_password_authentication
key value to true, and include the SSH key details, as in the following code.
os_profile_linux_config {
  disable_password_authentication = true
  ssh_keys {
    path     = "/home/azureuser/.ssh/authorized_keys"
    key_data = "ssh-rsa AAAAB3Nz{snip}hwhqT9h"
  }
}

You'd also need to remove the admin_password value from the os_profile block that is present in the example
above.
Note: You could also embed the Azure authentication within the configuration. In that case, you would not need
to install the Azure Account extension, as in the following example:
provider "azurerm" {
subscription_id = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
client_id = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
client_secret = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
tenant_id = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}

Labs
Infrastructure as Code

Steps for the labs are available on GitHub at the sites below, under the Infrastructure as Code sections:
●● https://microsoft.github.io/PartsUnlimited
●● https://microsoft.github.io/PartsUnlimitedMRP
Click the links below for the individual lab tasks for this module, and follow the steps
outlined there for each lab task.
PartsUnlimitedMRP (PUMRP)
●● Deploy app with Chef on Azure 23
●● Deploy app with Puppet on Azure24
●● Ansible with Azure25

Automating your infrastructure deployments


Overview
Terraform26 is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform
can manage existing and popular cloud service providers as well as custom in-house solutions.
Configuration files describe to Terraform the components needed to run a single application or your
entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired
state, and then executes it to build the described infrastructure. As the configuration changes, Terraform
is able to determine what changed and create incremental execution plans which can be applied.

What’s covered in this lab


In the lab Automating infrastructure deployments in the Cloud with Terraform and Azure Pipelines27,
you will see:
●● How open source tools, such as Terraform can be leveraged to implement Infrastructure as Code (IaC)
●● How to automate your infrastructure deployments in the Cloud with Terraform and Azure Pipelines

23 http://microsoft.github.io/PartsUnlimitedMRP/iac/200.2x-IaC-DeployappwithChefonAzure.html
24 http://microsoft.github.io/PartsUnlimitedMRP/iac/200.2x-IaC-DeployappwithPuppetonAzure.html
25 http://microsoft.github.io/PartsUnlimitedMRP/iac/200.2x-IaC-AnsiblewithAzure.html
26 https://www.terraform.io/intro/index.html
27 https://azuredevopslabs.com/labs/vstsextend/terraform/

Module Review and Takeaways


Module Review Questions
Checkbox
Which of the following are main architectural components of Chef?
(choose all that apply)
†† Chef Server
†† Chef Facts
†† Chef Client
†† Chef Workstation

Checkbox
Which of the following are open-source products that are integrated into the Chef Automate image available
from Azure Marketplace?
†† Habitat
†† Facts
†† Console Services
†† InSpec

Checkbox
Which of the following are core components of the Puppet automation platform?
(choose all that apply)
†† Master
†† Agent
†† Facts
†† Habitat

Dropdown
Complete the following sentence.
The main elements of a Puppet Program (PP) Manifest file are Class, Resource and ________.
†† Module
†† Habitat
†† InSpec
†† Cookbooks

Checkbox
Which of the following platforms use Agents to communicate with target machines?
(choose all that apply)
†† Puppet
†† Chef
†† Ansible

Multiple choice
True or false: The Control Machine in Ansible must have Python installed?
†† True
†† False

Checkbox
Which of the following statements about the cloud-init package are correct?
†† The --custom-data parameter passes the name of the configuration file (.txt).
†† Configuration files (.txt) are encoded in base64.
†† The YML syntax is used within the configuration file (.txt).
†† cloud-init works across Linux distributions.

Multiple choice
True or false: Terraform ONLY supports configuration files with the file extension .tf.
†† True
†† False

Multiple choice
Which of the following core Terraform components can modify Terraform behavior, without having to edit
the Terraform configuration?
†† Configuration files
†† Overrides
†† Execution plan
†† Resource graph

Answers
Checkbox
Which of the following are main architectural components of Chef?
(choose all that apply)
■■ Chef Server
†† Chef Facts
■■ Chef Client
■■ Chef Workstation
Explanation
The correct answers are Chef Server, Chef Client and Chef Workstation.
Chef Facts is an incorrect answer.
Chef Facts is not an architectural component of Chef. Chef Facts misrepresents the term 'Puppet Facts'.
Puppet Facts are metadata used to determine the state of resources managed by the Puppet automation
tool.
Chef has the following main architectural components. 'Chef Server' is the Chef management point. The
two options for the Chef Server are 'hosted' and 'on-premises'. 'Chef Client (node)' is an agent that sits on
the servers you are managing. 'Chef Workstation' is an Administrator workstation where you create Chef
policies and execute management commands. You run the Chef 'knife' command from the Chef Worksta-
tion to manage your infrastructure.
Checkbox
Which of the following are open-source products that are integrated into the Chef Automate image
available from Azure Marketplace?
■■ Habitat
†† Facts
†† Console Services
■■ InSpec
Explanation
The correct answers are Habitat and InSpec.
Facts and Console Services are incorrect answers.
Facts are metadata used to determine the state of resources managed by the Puppet automation tool.
Console Services is a web-based user interface for managing your system with the Puppet automation tool.
Habitat and InSpec are two open-source products that are integrated into the Chef Automate image
available from Azure Marketplace. Habitat makes the application and its automation the unit of deployment,
by allowing you to create platform-independent build artifacts called 'habitats' for your applications.
InSpec allows you to define desired states for your applications and infrastructure. InSpec can conduct audits
to detect violations against your desired state definitions, and generate reports from its audit results.

Checkbox
Which of the following are core components of the Puppet automation platform?
(choose all that apply)
■■ Master
■■ Agent
■■ Facts
†† Habitat
Explanation
The correct answers are Master, Agent and Facts.
Habitat is an incorrect answer.
Habitat is used with Chef for creating platform-independent build artifacts for your applications.
Master, Agent and Facts are core components of the Puppet automation platform. Another core component
is 'Console Services'. Puppet Master acts as a center for Puppet activities and processes. Puppet Agent runs
on machines managed by Puppet, to facilitate management. Console Services is a toolset for managing and
configuring resources managed by Puppet. Facts are metadata used to determine the state of resources
managed by Puppet.
Dropdown
Complete the following sentence.
The main elements of a Puppet Program (PP) Manifest file are Class, Resource and ________.
■■ Module
†† Habitat
†† InSpec
†† Cookbooks
Explanation
Module is the correct answer.
All other answers are incorrect answers.
Habitat, InSpec and Cookbooks are incorrect because they relate to the Chef automation platform.
The main elements of a Puppet Program (PP) Manifest file are Class, Resource and Module. Classes define
related resources according to their classification, to be reused when composing other workflows. Resources
are single elements of your configuration which you can specify parameters for. Modules are collections of
all the classes, resources and other elements in a single entity.
Checkbox
Which of the following platforms use Agents to communicate with target machines?
(choose all that apply)
■■ Puppet
■■ Chef
†† Ansible
Explanation
The correct answers are: Puppet and Chef.
Ansible is an incorrect answer.
Ansible is agentless because you do not need to install an Agent on each of the target machines it manages.
Ansible uses the Secure Shell (SSH) protocol to communicate with target machines. You choose when to
conduct compliance checks and perform corrective actions, instead of using Agents and a Master to perform
these operations automatically.


Puppet and Chef use Agents to communicate with target machines. With Puppet and Chef, you install an
Agent on each target machine managed by the platform. Agents typically run as a background service and
facilitate communication with a Master, which runs on a server. The Master uses information provided by
Agents to conduct compliance checks and perform corrective actions automatically.
Multiple choice
True or false: The Control Machine in Ansible must have Python installed?
■■ True
†† False
Explanation
A Control Machine in Ansible must have Python installed. Control Machine is one of the core components of
Ansible. Control Machine is for running configurations. The other core components of Ansible are Managed
Nodes, Playbooks, Modules, Inventory, Roles, Facts, and Plug-ins. Managed Nodes are resources managed
by Ansible. Playbooks are ordered lists of Ansible tasks. Modules are small blocks of code within a Playbook
that perform specific tasks. Inventory is list of managed nodes. Roles allow for the automatic and sequenced
loading of variables, files, tasks and handlers. Facts are data points about the remote system which Ansible
is managing. Plug-ins supplement Ansible's core functionality.
Checkbox
Which of the following statements about the cloud-init package are correct?
■■ The --custom-data parameter passes the name of the configuration file (.txt).
■■ Configuration files (.txt) are encoded in base64.
■■ The YML syntax is used within the configuration file (.txt).
■■ cloud-init works across Linux distributions.
Explanation
All of the answers are correct answers.
In Azure, you can add custom configurations to a Linux VM with cloud-init by appending the --custom-data
parameter, and passing the name of a configuration file (.txt), to the az vm create command. The --custom-data
parameter passes the name of the configuration file (.txt) as an argument to cloud-init. Then,
cloud-init applies Base64 encoding to the contents of the configuration file (.txt), and sends it along with any
provisioning configuration information that is contained within the configuration file (.txt). Any provisioning
configuration information contained in the specified configuration file (.txt) is applied to the new VM, when
the VM is created. The YML syntax is used within the configuration file (.txt) to define any provisioning
configuration information that needs to be applied to the VM.
Multiple choice
True or false: Terraform ONLY supports configuration files with the file extension .tf.
†† True
■■ False
Explanation
False is the correct answer.
True is an incorrect answer because Terraform supports configuration files with the file extensions .tf and .tf.json.
Terraform configuration files are text based configuration files that allow you to define infrastructure and
application configurations. Terraform uses the file extension .tf for Terraform format configuration files, and
MCT USE ONLY. STUDENT USE PROHIBITED
 Module Review and Takeaways  769

the file extension .tf.json for Terraform JSON format configuration files. Terraform supports configuration
files in either .tf or .tf.json format. The Terraform .tf format is more human-readable, supports comments,
and is the generally recommended format for most Terraform files. The JSON format .tf.json is meant for
use by machines, but you can write your configuration files in JSON format if you prefer.
Multiple choice
Which of the following core Terraform components can modify Terraform behavior, without having to
edit the Terraform configuration?
†† Configuration files
■■ Overrides
†† Execution plan
†† Resource graph
Explanation
Overrides is the correct answer.
All other answers are incorrect answers.
Configuration files, in .tf or .tf.json format, allow you to define your infrastructure and application configurations
with Terraform.
Execution plan defines what Terraform will do when a configuration is applied.
Resource graph builds a dependency graph of all Terraform managed resources.
Overrides modify Terraform behavior without having to edit the Terraform configuration. Overrides can also
be used to apply temporary modifications to Terraform configurations without having to modify the
configuration itself.
Module 19 Implement Compliance and Security in your Infrastructure

Module Overview
As many as four out of five companies leveraging a DevOps approach to software engineering do so
without integrating the necessary information security controls, underscoring the urgency with which
companies should be evaluating “Rugged” DevOps (also known as “shift left”) to build security into their
development life cycle as early as possible.
Rugged DevOps represents an evolution of DevOps in that it takes a mode of development in which
speed and agility are primary and integrates security, not just with automated tools and processes, but
also through cultural change emphasizing ongoing, flexible collaboration between release engineers and
security teams. The goal is to bridge the traditional gap between the two functions, reconciling rapid
deployment of code with the imperative for security.
For many companies, a common pitfall on the path to implementing rugged DevOps is implementing the
approach all at once rather than incrementally, underestimating the complexity of the undertaking and
producing cultural disruption in the process. Putting these plans in place is not a one-and-done process;
instead the approach should continuously evolve to support the various scenarios and needs that
DevOps teams encounter. The building blocks of Rugged DevOps involve understanding and implementing
the following concepts:
●● Code Analysis
●● Change Management
●● Compliance Monitoring
●● Threat Investigation
●● Vulnerability assessment & KPIs
Learning Objectives
After completing this module, students will be able to:
●● Define an infrastructure and configuration strategy and appropriate toolset for a release pipeline and
application infrastructure
●● Implement compliance and security in your application infrastructure

Security and Compliance Principles with DevOps

What is Rugged DevOps
While the adoption of cloud computing is on the rise to support business productivity, a lack of security
infrastructure can result in inadvertently compromising data. The 2018 Microsoft Security Intelligence
Report Understand top trends in the threat landscape1 finds that:
●● Data is not encrypted both at rest and in transit by:
●● 7% of software as a service (SaaS) storage apps.
●● 86% of SaaS collaboration apps.
●● Session protection via HTTP headers is supported by only:
●● 4% of SaaS storage apps.
●● 3% of SaaS collaboration apps.

Rugged DevOps (or DevSecOps)


DevOps is about working faster. Security is about emphasizing thoroughness. Security concerns are
typically addressed at the end of the cycle. This can potentially create unplanned work right at the end of
the pipeline. Rugged DevOps integrates DevOps with security into a set of practices that are designed to
meet the goals of both DevOps and security more effectively.

The goal of a Rugged DevOps pipeline is to allow development teams to work fast without breaking their
project by introducing unwanted security vulnerabilities.
Note: Rugged DevOps is also sometimes referred to as DevSecOps. You might encounter both terms, but
each term refers to the same concept.

1 https://www.microsoft.com/en-us/security/operations/security-intelligence-report

Security in the context of Rugged DevOps


Historically, security typically operated on a slower cycle and involved traditional security methodologies,
such as:
●● Access control
●● Environment hardening
●● Perimeter protection
Rugged DevOps includes these traditional security methodologies, and more. With Rugged DevOps,
security is about securing the pipeline. Rugged DevOps involves determining where you can add security
to the elements that plug into your build and release pipelines. Rugged DevOps can show you how and
where you can add security to your automation practices, production environments, and other pipeline
elements, while benefiting from the speed of DevOps.
Rugged DevOps addresses broader questions, such as:
●● Is my pipeline consuming third-party components, and if so, are they secure?
●● Are there known vulnerabilities within any of the third-party software we use?
●● How quickly can I detect vulnerabilities (also referred to as time to detect)?
●● How quickly can I remediate identified vulnerabilities (also referred to as time to remediate)?
Security practices for detecting potential security anomalies need to be as robust and as fast as the other
parts of your DevOps pipeline, including infrastructure automation and code development.

Rugged DevOps pipeline


As previously stated, the goal of a Rugged DevOps pipeline is to enable development teams to work fast
without introducing unwanted vulnerabilities into their project.

Two important features of Rugged DevOps pipelines that are not found in standard DevOps pipelines are:
●● Package management and the approval process associated with it. The previous workflow diagram
details additional steps that account for how software packages are added to the pipeline, and the
approval processes that packages must go through before they are used. These steps should be
enacted early in the pipeline, so that issues can be identified sooner in the cycle.
●● Source Scanner, also an additional step for scanning the source code. This step allows for security
scanning and checking for security vulnerabilities that might be present in the application code. The
scanning occurs after the app is built, but before release and pre-release testing. Source scanning can
identify security vulnerabilities earlier in the cycle.
In the remainder of this lesson, we address these two important features of Rugged DevOps pipelines,
the problems they present, and some of the solutions for them.

Software Composition Analysis (SCA)


Two important areas of the Rugged DevOps pipeline are package management and Open Source
Software (OSS) components.

Package management
Just as teams use version control as a single source of truth for source code, Rugged DevOps relies on a
package manager as the unique source of binary components. By using binary package management, a

development team can create a local cache of approved components, and make this a trusted feed for
the Continuous Integration (CI) pipeline.
In Azure DevOps, Azure Artifacts is an integral part of the component workflow for organizing and
sharing access to your packages. Azure Artifacts allows you to:
●● Keep your artifacts organized. Share code easily by storing Apache Maven, npm, and NuGet packages
together. You can store packages using Universal Packages, eliminating the need to store binaries in
Git.
●● Protect your packages. Keep every public source package you use (including packages from npmjs
and NuGet.org) safe in your feed, where only you can delete it and where it's backed by the enterprise-grade
Azure Service Level Agreement (SLA).
●● Integrate seamless package handling into your Continuous Integration (CI)/Continuous Delivery
(CD) pipeline. Easily access all your artifacts in builds and releases. Azure Artifacts integrates natively
with the Azure Pipelines CI/CD tool.
For more information about Azure Artifacts, visit the webpage What is Azure Artifacts?2
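As an illustrative sketch, an npm-based project could treat an Azure Artifacts feed as its single trusted package source through its .npmrc file; the organization and feed names here are placeholders:
; .npmrc at the root of the project
; route all package installs through the approved Azure Artifacts feed
registry=https://pkgs.dev.azure.com/YOUR-ORGANIZATION/_packaging/YOUR-FEED/npm/registry/
always-auth=true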

Versions and compatibility


The following table lists the package types supported by Azure Artifacts. The availability of each package
in Azure DevOps Services also displays. The following table details the compatibility of each package with
specific versions of Azure DevOps Server, previously known as Team Foundation Server (TFS).

Feature      Azure DevOps Services   TFS
NuGet        Yes                     TFS 2017
npm          Yes                     TFS 2017 update 1 and later
Maven        Yes                     TFS 2017 update 1 and later
Gradle       Yes                     TFS 2018
Universal    Yes                     No
Python       Yes                     No
Maven, npm, and NuGet packages can be supported from public and private sources with teams of any
size. Azure Artifacts comes with Azure DevOps, but the extension is also available from the Visual Studio
Marketplace Azure DevOps page3.

2 https://docs.microsoft.com/en-us/azure/devops/artifacts/overview?view=vsts
3 https://marketplace.visualstudio.com/items?itemName=ms.feed

Note: After you publish a particular version of a package to a feed, that version number is permanently
reserved. You cannot upload a newer revision package with that same version number, or delete that
version and upload a new package with the same version number. The published version is immutable.

The Role of OSS components


Development work is more productive as a result of the wide availability of reusable open-source
software (OSS) components. This practical approach to reuse includes runtimes such as Microsoft .NET Core
and Node.js, which are available on both Windows and Linux operating systems.
However, OSS component reuse comes with the risk that reused dependencies can have security vulnerabilities.
As a result, many users find security vulnerabilities in their applications due to the Node.js
package versions they consume.
To address these security concerns, OSS offers a new concept called Software Composition Analysis (SCA),
which is depicted in the following image.

When consuming an OSS component, whether you are creating or consuming dependencies, you'll
typically want to follow these high-level steps:
1. Start with the latest, correct version to avoid any old vulnerabilities or license misuses.
2. Validate that the OSS components are the correct binaries for your version. In the release pipeline,
validate binaries to ensure that they are correct and to keep a traceable bill of materials.
3. Get notifications of component vulnerabilities immediately, and correct and redeploy the component
automatically to resolve security vulnerabilities or license misuses from reused software.
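As a simple illustration of step 3 for an npm-based project, the built-in npm tooling can surface known vulnerabilities in consumed packages (a generic npm capability, independent of any particular SCA product):
npm audit        # list known vulnerabilities in the project's dependency tree
npm audit fix    # update to patched versions where non-breaking fixes exist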

WhiteSource integration with Azure DevOps pipeline


The Visual Studio Marketplace4 is an important site for addressing Rugged DevOps issues. From here
you can integrate specialist security products into your Azure DevOps pipeline. Having a full suite of
extensions that allow seamless integration into Azure DevOps pipelines is invaluable.

WhiteSource
The WhiteSource5 extension is available on the Azure DevOps Marketplace. Using WhiteSource, you can
integrate extensions with your CI/CD pipeline to address Rugged DevOps security-related issues. For a
team consuming external packages, the WhiteSource extension specifically addresses open source
security, quality, and license compliance concerns. Because most breaches today target known vulnerabilities
in common components, robust tools are essential to securing problematic open source components.

4 https://marketplace.visualstudio.com/
5 https://marketplace.visualstudio.com/items?itemName=whitesource.whitesource

Continuously detect all open-source components in your software


WhiteSource automatically detects all open-source components, including their transitive dependencies, every time you run a build. This means you can generate a comprehensive inventory report within minutes, based on the last build you ran. It also gives your security, DevOps, and legal teams full visibility into your organization's software development process.

Receive alerts on open-source security vulnerabilities


When a new security vulnerability is discovered, WhiteSource automatically generates an alert and provides targeted remediation guidance. This can include links to patches and fixes, relevant source files, and even recommendations to change your system configuration to prevent exploitation.

Automatically enforce open-source security and license compliance policies


Based on a company's policies, WhiteSource automatically approves, rejects, or triggers a manual approval process every time a new open-source component is added to a build. Developers can set up policies based on parameters such as security-vulnerability severity, license type, or library age. When a developer attempts to add a problematic open-source component, the service sends an alert and fails the build.

For searching online repositories such as GitHub and Maven Central, WhiteSource also offers an innovative browser extension. Even before choosing a new component, a developer can review its security vulnerabilities, quality, and license issues, and determine whether it fits their company's policies.

Micro Focus Fortify integration with Azure DevOps pipeline


Micro Focus Fortify6 is another example of an extension that you can leverage from the Azure DevOps Marketplace for integration with your CI/CD pipeline. Micro Focus Fortify allows you to address Rugged DevOps security-related concerns by adding build tasks for continuous integration that help you identify vulnerabilities in your source code.
Micro Focus Fortify provides a comprehensive set of software security analyzers that search for violations
of security-specific coding rules and guidelines. Development groups and security professionals use it to
analyze application source code for security issues.

Fortify Static Code Analyzer


The Micro Focus Fortify Static Code Analyzer (Fortify SCA) identifies root causes of software security
vulnerabilities. It then delivers accurate, risk-ranked results with line-of-code remediation guidance.

6 https://marketplace.visualstudio.com/items?itemName=fortifyvsts.hpe-security-fortify-vsts

Fortify on Demand
Fortify on Demand delivers application security as a service (SaaS). It automatically submits static and dynamic scan requests to the application's SaaS platform. Static assessments are uploaded to Fortify on Demand. For dynamic assessments, you can pre-configure a specific application URL.

Checkmarx integration with Azure DevOps


Checkmarx CxSAST7 is another example of an extension from the Azure DevOps Marketplace that you
can apply to your CI/CD pipeline to address Rugged DevOps security-related issues. Checkmarx CxSAST
is designed to identify, track, and fix technical and logical security flaws. Checkmarx is a powerful, unified
security solution for Static Application Security Testing (SAST) and Checkmarx Open Source Analysis
(CxOSA). You can download Checkmarx from the Azure DevOps Marketplace.

Checkmarx functionality
Checkmarx functionality includes:
●● Best fix location. Checkmarx highlights the best place to fix your code to minimize the time required
to remediate the issue. A visual chart of the data flow graph indicates the ideal location in the code to
address multiple vulnerabilities within the data flow using a single line of code.
●● Quick and accurate scanning. Checkmarx helps reduce false positives, adapt the rule set to minimize them, and understand the root causes of results.
●● Incremental scanning. Using Checkmarx, you can test just the code parts that have changed since the last code check-in. This helps reduce scanning time by more than 80 percent. It also enables you to incorporate the security gate within your continuous integration pipeline.
●● Seamless integration. Checkmarx works with all integrated development environments (IDEs), build
management servers, bug tracking tools, and source repositories.
●● Code portfolio protection. Checkmarx helps protect your entire code portfolio, both open source and
in-house source code. It analyzes open-source libraries, ensuring licenses are being adhered to, and
removing any open-source components that expose the application to known vulnerabilities. In
addition, Checkmarx Open Source helps provide complete code portfolio coverage under a single
unified solution with no extra installations or administration required.
●● Easy to initiate Open Source Analysis. With Checkmarx's Open Source Analysis, you don't need additional installations or multiple management interfaces; you simply turn it on, and within minutes a detailed report is generated with clear results and detailed mitigation instructions. Because analysis results are designed with the developer in mind, no time is wasted trying to understand the required action items to mitigate detected security or compliance risks.

7 https://marketplace.visualstudio.com/items?itemName=checkmarx.cxsast

Veracode integration with Azure DevOps


Veracode8 is another example of an Azure DevOps Marketplace extension that you can integrate with your CI/CD pipeline to address Rugged DevOps security-related issues. The Veracode Application Security Platform is a SaaS that enables developers to automatically scan an application for security vulnerabilities. The SAST, dynamic application security testing (DAST), and software composition analysis (SCA) capabilities provided by Veracode allow development teams to assess both first-party code and third-party components for security risks.

Veracode functionality
Veracode's functionality includes the following features:
●● Integrate application security into the development tools you already use. From within Azure DevOps
and Microsoft Team Foundation Server (TFS) you can automatically scan code using the Veracode
Application Security Platform to find security vulnerabilities. With Veracode you can import any
security findings that violate your security policy as work items. Veracode also gives you the option to
stop a build if serious security issues are discovered.
●● No stopping for false alarms. Because Veracode gives you accurate results and prioritizes them based on severity, you don't need to waste resources responding to hundreds of false positives. Veracode has assessed over 2 trillion lines of code in 15 languages and over 70 frameworks. In addition, this process continues to improve with every assessment because of rapid update cycles and continuous improvement processes. If something does get through, you can mitigate it using the easy Veracode workflow.
●● Align your application security practices with your development practices. Do you have a large or
distributed development team? Do you have too many revision control branches? You can integrate
your Azure DevOps workflows with the Veracode Developer Sandbox, which supports multiple
development branches, feature teams, and other parallel development practices.
●● Find vulnerabilities and fix them. Veracode gives you remediation guidance with each finding, along with the data path that a malicious user would use to reach the application's weak point. Veracode also highlights the most common sources of vulnerabilities to help you prioritize remediation. In addition, when vulnerability reports don't provide enough clarity, you can set up one-on-one developer consultations with Veracode experts who have backgrounds in both security and software development. Security issues found by Veracode that could prevent you from releasing your code show up automatically in your team's list of work items, and are automatically updated and closed after you scan your fixed code.

8 https://marketplace.visualstudio.com/items?itemName=Veracode.veracode-vsts-build-extension
●● Proven onboarding process allows for scanning on day one. The cloud-based Veracode Application Security Platform is designed to get you going quickly, even within minutes. Veracode's services and support team can make sure that you are on track to build application security into your process.

How to Integrate SCA checks into pipelines


Security scanning used to be thought of as an activity that was completed once per release by a dedicated security team whose members had little involvement with other teams. This practice creates a dangerous pattern in which security specialists find large batches of issues at the exact time when developers are under the most pressure to release a software product. This pressure often results in software being deployed with security vulnerabilities that need to be addressed after a product has been released.
By integrating scanning into a team's workflow at multiple points along the development path, Rugged DevOps can help make all quality-assurance activities, including security, continuous and automated.

Pull request code scan analysis integration


DevOps teams can submit proposed changes to an application's (master) codebase using pull requests (PRs). To avoid introducing new issues, before creating a PR, developers need to verify the effects of the code changes that they make. In a DevOps process, a PR is typically made for each small change. Changes are continuously merged with the master codebase to keep it up to date. Ideally, a developer should check for security issues prior to creating a PR.
Azure Marketplace extensions that facilitate integrating scans during PRs include:
●● WhiteSource9. Facilitates validating dependencies with its binary fingerprinting.
●● Checkmarx10. Provides an incremental scan of changes.
●● Veracode11. Implements the concept of a developer sandbox.
●● Black Duck by Synopsys12. An auditing tool for open-source code that helps identify, fix, and manage compliance.
These extensions allow a developer to experiment with changes prior to submitting them as part of a PR.

Build and release definition code scan, analysis, and integration


Developers need to optimize CI for speed so that build teams get immediate feedback about build issues. Code scanning can be performed quickly enough for it to be integrated into the CI build definition, thereby preventing a broken build. This enables developers to restore a build's status to ready/green by fixing potential issues immediately.
At the same time, CD needs to be thorough. In Azure DevOps, CD is typically managed through release definitions (which progress the build output across environments), or via additional build definitions.

9 https://www.whitesourcesoftware.com/
10 https://www.checkmarx.com/
11 https://www.veracode.com/
12 https://www.blackducksoftware.com/

Build definitions can be scheduled (perhaps daily) or triggered with each commit. In either case, the build definition can perform a longer static analysis scan: you can scan the full code project and review any errors or warnings offline without blocking the CI flow.

Ops and pipeline security


In addition to protecting your code, it's essential to protect credentials and secrets. In particular, phishing is becoming ever more sophisticated. The following list describes several operational practices that a team ought to apply to protect itself:
●● Authentication and authorization. Use multifactor authentication (MFA), even across internal domains, and just-in-time administration tools such as Azure PowerShell Just Enough Administration (JEA)13, to protect against escalations of privilege. Using different passwords for different user accounts will limit the damage if a set of access credentials is stolen.
●● The CI/CD release pipeline. If the release pipeline and cadence are damaged, use this pipeline to rebuild infrastructure. If you manage Infrastructure as Code (IaC) with Azure Resource Manager, or use the Azure platform as a service (PaaS) or a similar service, then your pipeline will automatically create new instances and then destroy them. This limits the places where attackers can hide malicious code inside your infrastructure. Azure DevOps encrypts the secrets in your pipeline; as a best practice, rotate the passwords just as you would with other credentials.

13 http://aka.ms/jea
●● Permissions management. You can manage permissions to secure the pipeline with role-based access
control (RBAC), just as you would for your source code. This keeps you in control of who can edit the
build and release definitions that you use for production.
●● Dynamic scanning. This is the process of testing the running application with known attack patterns. You could implement penetration testing as part of your release. You could also stay up to date on security projects such as the Open Web Application Security Project (OWASP14) Foundation, and adopt these projects' practices into your processes.
●● Production monitoring. This is a key DevOps practice. Specialized services for detecting anomalies related to intrusion are known as Security Information and Event Management (SIEM) systems. Azure Security Center15 focuses on the security incidents that relate to the Azure cloud.
✔️ Note: In all cases, use Azure Resource Manager Templates or other code-based configurations. You
should also implement IaC best practices, such as only making changes in templates, to make changes
traceable and repeatable. Also, use provisioning and configuration technologies such as Desired State
Configuration (DSC), Azure Automation, and other third-party tools and products that can integrate
seamlessly with Azure.

Secure DevOps kit for Azure (AzSK)


The Secure DevOps Kit for Azure16 (AzSK) was created by the Core Services Engineering and Operations (CSEO) division at Microsoft. AzSK is a collection of scripts, tools, extensions, automations, and other resources that cater to the end-to-end security needs of DevOps teams who work with Azure subscriptions and resources. Using extensive automation, AzSK smoothly integrates security into native DevOps workflows. AzSK operates across the different stages of DevOps to maintain your control of security and governance.
AzSK can help to secure DevOps by:
●● Helping secure the subscription. A secure cloud subscription provides a core foundation upon which
to conduct development and deployment activities. An engineering team can deploy and configure
security in the subscription by using alerts, ARM policies, RBAC, Security Center policies, JEA, and
Resource Locks. Likewise, it can verify that all settings conform to a secure baseline.
●● Enabling secure development. During the coding and early development stages, developers need to
write secure code and then test the secure configuration of their cloud applications. Similar to build
verification tests (BVTs), AzSK introduces the concept of security verification tests (SVTs), which can
check the security of various resource types in Azure.
●● Integrating security into CI/CD. Test automation is a core feature of DevOps. Secure DevOps provides
the ability to run SVTs as part of the Azure DevOps CI/CD pipeline. You can use SVTs to ensure that
the target subscription (used to deploy a cloud application) and the Azure resources that the applica-
tion is built on are set up securely.
●● Providing continuous assurance. In a constantly changing DevOps environment, it's important to consider security as more than a milestone. You should view your security needs as varying in accordance with the continually changing state of your systems. Secure DevOps can provide assurance that security will be maintained despite changes to the state of your systems, by using a combination of tools such as automation runbooks and schedules.

14 https://www.owasp.org
15 https://azure.microsoft.com/en-us/services/security-center/
16 https://github.com/azsk/DevOpsKit-docs

●● Alerting & monitoring. Security status visibility is important for both individual application teams and
central enterprise teams. Secure DevOps provides solutions that cater to the needs of both. Moreover,
the solution spans across all stages of DevOps, in effect bridging the security gap between the Dev
team and the Ops team through the single, integrated view it can generate.
●● Governing cloud risks. Underlying all activities in the Secure DevOps kit is a telemetry framework that
generates events such as capturing usage, adoption, and evaluation results. This enables you to make
measured improvements to security by targeting areas of high risk and maximum usage.
You can leverage and utilize the tools, scripts, templates, and best practice documentation that are
available as part of AzSK.
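AzSK itself is distributed as a PowerShell module, so its scans can be run directly from a console or wired into automation. The following is a minimal sketch, assuming the module is installed and using a placeholder subscription ID and resource group name:
# One-time setup: install the AzSK module for the current user.
Install-Module AzSK -Scope CurrentUser

# Run the subscription security scan (the subscription ID is a placeholder).
Get-AzSKSubscriptionSecurityStatus -SubscriptionId '00000000-0000-0000-0000-000000000000'

# Run security verification tests (SVTs) against the resources in one resource group.
Get-AzSKAzureServicesSecurityStatus -SubscriptionId '00000000-0000-0000-0000-000000000000' `
    -ResourceGroupNames 'MyAppResourceGroup'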

Azure Security Center


Azure Security Center
Azure Security Center is a monitoring service that provides threat protection across all your services, both in Azure and on-premises. Security Center can:
●● Provide security recommendations based on your configurations, resources, and networks.
●● Monitor security settings across on-premises and cloud workloads, and automatically apply required
security to new services as they come online.
●● Continuously monitor all your services, and perform automatic security assessments to identify
potential vulnerabilities before they can be exploited.
●● Use Azure Machine Learning to detect and block malicious software from being installed on your
virtual machines (VMs) and services. You can also define a list of allowed applications to ensure that
only the apps you validate are allowed to execute.
●● Analyze and identify potential inbound attacks, and help to investigate threats and any post-breach
activity that might have occurred.
●● Provide just-in-time (JIT) access control for ports, thereby reducing your attack surface by ensuring
the network only allows traffic that you require.

Azure Security Center is included in the Center for Internet Security (CIS) Benchmarks17 recommendations.

Azure Security Center versions


Azure Security Center supports both Windows and Linux operating systems. It can also provide security for features in both infrastructure as a service (IaaS) and platform as a service (PaaS) scenarios.
Azure Security Center is available in two versions:
●● Free. Available as part of your Azure subscription, this tier is limited to assessments and recommendations for Azure resources only.
●● Standard. This tier provides a full suite of security-related services including continuous monitoring,
threat detection, JIT access control for ports, and more.
To access the full suite of Azure Security Center services you'll need to upgrade to a Standard version
subscription. You can access the 60-day free trial from within the Azure Security Center dashboard in the
Azure portal.

17 https://www.cisecurity.org/cis-benchmarks/

After the 60-day trial period is over, Azure Security Center is $15 per node per month. To upgrade a
subscription from the Free trial to the Standard version, you must be assigned the role of Subscription
Owner, Subscription Contributor, or Security Admin.
You can read more about Azure Security Center at Azure Security Center18.
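The pricing tier can also be set programmatically. The following is a minimal sketch using the Az.Security PowerShell module; the workload name 'VirtualMachines' is one of the documented pricing names, but treat the exact list as subject to change:
# Requires the Az.Security module and sufficient rights on the subscription.
Install-Module Az.Security -Scope CurrentUser

# Upgrade the Virtual Machines workload to the Standard tier.
Set-AzSecurityPricing -Name 'VirtualMachines' -PricingTier 'Standard'

# Review the current tier for each workload.
Get-AzSecurityPricing | Select-Object Name, PricingTier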

Azure Security Center usage scenarios


You can integrate Azure Security Center into your workflows and use it in many ways. Here are two example usage scenarios:
●● Use Azure Security Center as part of your incident response plan.
Many organizations only respond to security incidents after an attack has occurred. To reduce costs and damage, it's important to have an incident response plan in place before an attack occurs.

The following examples show how you can use Azure Security Center for the detect, assess, and diagnose stages of your incident response plan.
●● Detect. Review the first indication of an event investigation. For example, use the Azure Security
Center dashboard to review the initial verification of a high-priority security alert occurring.

18 https://azure.microsoft.com/en-us/services/security-center/

●● Assess. Perform the initial assessment to obtain more information about a suspicious activity. For
example, you can obtain more information from Azure Security Center about a security alert.
●● Diagnose. Conduct a technical investigation and identify containment, mitigation, and workaround
strategies. For example, you can follow the remediation steps described by Azure Security Center for a
particular security alert.
●● Use Azure Security Center recommendations to enhance security.
You can reduce the chances of a significant security event by configuring a security policy, and then
implementing the recommendations provided by Azure Security Center. A security policy defines the set
of controls that are recommended for resources within a specified subscription or resource group. In
Azure Security Center, you can define policies according to your company's security requirements.
Azure Security Center analyzes the security state of your Azure resources. When it identifies potential
security vulnerabilities, it creates recommendations based on the controls set in the security policy. The
recommendations guide you through the process of configuring the corresponding security controls. For
example, if you have workloads that don't require the Azure SQL Database Transparent Data Encryption
(TDE) policy, turn off the policy at the subscription level and enable it only on the resource groups where
SQL Database TDE is required.
You can read more about Azure Security Center at Azure Security Center19. More implementation and scenario details are also available in the Azure Security Center planning and operations guide20.

Azure Policy
Azure Policy is an Azure service that you can use to create, assign, and manage policies. Policies enforce different rules and effects over your Azure resources, ensuring that those resources stay compliant with your standards and SLAs.

Azure Policy uses policies and initiatives to provide policy enforcement capabilities. Azure Policy evaluates
your resources by scanning for resources that do not comply with the policies you create. For example,
you might have a policy that specifies a maximum size limit for VMs in your environment. After you
implement your maximum VM size policy, whenever a VM is created or updated Azure Policy will evaluate
the VM resource to ensure that the VM complies with the size limit that you set in your policy.
Azure Policy can help to maintain the state of your resources by evaluating your existing resources and
configurations, and remediating non-compliant resources automatically. It has built-in policy and initia-
tive definitions for you to use. The definitions are arranged in categories, such as Storage, Networking,
Compute, Security Center, and Monitoring.
Azure Policy can also integrate with Azure DevOps by applying any continuous integration (CI) and
continuous delivery (CD) pipeline policies that apply to the pre-deployment and post-deployment of your
applications.

19 https://azure.microsoft.com/en-us/services/security-center/
20 https://docs.microsoft.com/en-us/azure/security-center/security-center-planning-and-operations-guide

CI/CD pipeline integration


An example of Azure Policy integration with your DevOps CI/CD pipeline is the Check Gate task. Using Azure policies, the check gate provides a security and compliance assessment of the resources within an Azure resource group or subscription that you specify. The check gate is available as a release pipeline deployment task.
For more information go to:
●● Azure Policy Check Gate task21
●● Azure Policy22

Policies
Applying a policy to your resources with Azure Policy involves the following high-level steps:
1. Policy definition. Create a policy definition.
2. Policy assignment. Assign the definition to a scope of resources.
3. Remediation. Review the policy evaluation results and address any non-compliances.

Policy definition
A policy definition specifies the resources to be evaluated and the actions to take on them. For example, you could prevent VMs from deploying if they are exposed to a public IP address. You could also prevent a specific hard disk type from being used when deploying VMs, to control costs. Policies are defined in the JavaScript Object Notation (JSON) format.
The following example defines a policy that limits where you can deploy resources:
{
  "properties": {
    "mode": "all",
    "parameters": {
      "allowedLocations": {
        "type": "array",
        "metadata": {
          "description": "The list of locations that can be specified when deploying resources",
          "strongType": "location",
          "displayName": "Allowed locations"
        }
      }
    },
    "displayName": "Allowed locations",
    "description": "This policy enables you to restrict the locations your organization can specify when deploying resources.",
    "policyRule": {
      "if": {
        "not": {
          "field": "location",
          "in": "[parameters('allowedLocations')]"
        }
      },
      "then": {
        "effect": "deny"
      }
    }
  }
}

21 https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-policy-check-gate?view=vsts
22 https://azure.microsoft.com/en-us/services/azure-policy/
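Outside the portal, you could register a definition like this with Azure PowerShell. The following is a minimal sketch, assuming the policyRule and parameters blocks above have been saved to separate files, as the Az.Resources cmdlet expects; the definition name and file paths are placeholders:
# Create a custom policy definition from the JSON shown above.
# -Policy takes the policy rule; -Parameter takes the parameters block.
$definition = New-AzPolicyDefinition -Name 'allowed-locations' `
    -DisplayName 'Allowed locations' `
    -Policy 'C:\policies\allowed-locations.rule.json' `
    -Parameter 'C:\policies\allowed-locations.params.json'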

The following are example policy definitions:


●● Allowed Storage Account SKUs. This policy defines conditions (or rules) that limit storage accounts to
a set of specified sizes, or Stock Keeping Units (SKUs). Its effect is to deny all storage accounts that do
not adhere to the set of defined SKU sizes.
●● Allowed Resource Type. This policy definition has a set of conditions to specify the resource types that can be deployed. Its effect is to deny all resources that are not on the list.
●● Allowed Locations. This policy restricts the locations that new resources can be deployed to. It's used
to enforce geographic compliance requirements.
●● Allowed Virtual Machine SKUs. This policy specifies a set of VM SKUs that can be deployed. The policy
effect is that VMs cannot be deployed from unspecified SKUs.

Policy assignment
Policy definitions, whether custom or built-in, need to be assigned. A policy assignment is a policy definition that has been assigned to a specific scope. Scopes can range from a management group to a resource group. Child resources inherit any policy assignments that have been applied to their parents. This means that if a policy is applied to a resource group, it's also applied to all the resources within that resource group. However, you can define subscopes to exclude particular resources from policy assignments.
You can assign policies via:
●● Azure portal
●● Azure CLI
●● PowerShell
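For example, the following minimal PowerShell sketch assigns the definition created above at resource-group scope; the resource group, assignment name, and locations are placeholders:
# Assign the policy definition to a resource group and supply parameter values.
$rg = Get-AzResourceGroup -Name 'MyAppResourceGroup'

New-AzPolicyAssignment -Name 'allowed-locations-assignment' `
    -PolicyDefinition $definition `
    -Scope $rg.ResourceId `
    -PolicyParameterObject @{ allowedLocations = @('westeurope', 'northeurope') }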

Remediation
Resources that are found not to comply with a deployIfNotExists or modify policy condition can be put into a compliant state through remediation. Remediation instructs Azure Policy to run the deployIfNotExists effect or the tag operations of the policy on existing resources. To minimize configuration drift, you can bring resources into compliance using automated bulk remediation instead of going through them one at a time.
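Remediation can be triggered from PowerShell through the Az.PolicyInsights module. The following is a minimal sketch; it assumes an existing assignment of a deployIfNotExists or modify policy, and the assignment and remediation names are placeholders:
# List non-compliant resources in the current subscription.
Get-AzPolicyState -Filter "ComplianceState eq 'NonCompliant'"

# Create a bulk remediation task for a deployIfNotExists or modify assignment.
$assignment = Get-AzPolicyAssignment -Name 'require-tag-assignment'
Start-AzPolicyRemediation -Name 'remediate-tags' -PolicyAssignmentId $assignment.PolicyAssignmentId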
You can read more about Azure Policy on the Azure Policy23 webpage.

23 https://azure.microsoft.com/en-us/services/azure-policy/

Initiatives
Initiatives work alongside policies in Azure Policy. An initiative definition is a set of policy definitions to
help track your compliance state for meeting large-scale compliance goals. Even if you have a single
policy, we recommend using initiatives if you anticipate increasing your number of policies over time. The
application of an initiative definition to a specific scope is called an initiative assignment.

Initiative definitions
Initiative definitions simplify the process of managing and assigning policy definitions by grouping sets of
policies into a single item. For example, you can create an initiative named Enable Monitoring in Azure
Security Center to monitor security recommendations from Azure Security Center. Under this example
initiative, you would have the following policy definitions:
●● Monitor unencrypted SQL Database in Security Center. This policy definition monitors unencrypted
SQL databases and servers.
●● Monitor OS vulnerabilities in Security Center. This policy definition monitors servers that do not satisfy
a specified OS baseline configuration.
●● Monitor missing Endpoint Protection in Security Center. This policy definition monitors servers
without an endpoint protection agent installed.

Initiative assignments
Like a policy assignment, an initiative assignment is an initiative definition assigned to a specific scope.
Initiative assignments reduce the need to make several initiative definitions for each scope. Scopes can
range from a management group to a resource group. You can assign initiatives in the same way that you
assign policies.
You can read more about policy definition and structure at Azure Policy definition structure24.

Azure Key Vault


Azure Key Vault is a centralized cloud service for storing your applications' secrets. Key Vault helps you
control your applications' secrets by keeping them in a single central location and by providing secure
access, permissions control, and access logging capabilities.

Usage scenarios
Azure Key Vault capabilities and usage scenarios include:
●● Secrets management. You can use Key Vault to securely store and control access to tokens, passwords,
certificates, API keys, and other secrets.

24 https://docs.microsoft.com/en-us/azure/governance/policy/concepts/definition-structure

●● Key management. As a key management solution, Key Vault makes it easy to create and control the encryption keys used to encrypt your data.
●● Certificate management. Key Vault lets you provision, manage, and deploy public and private Secure Sockets Layer (SSL)/Transport Layer Security (TLS) certificates for your Azure subscription and connected resources more easily.
●● Store secrets backed by Hardware Security Modules (HSMs). Secrets and keys can be protected either by software or by Federal Information Processing Standard (FIPS) 140-2 Level 2 validated HSMs.

Key Vault benefits


The benefits of using Key Vault include:
●● Centralized application secrets. Centralizing storage for application secrets allows you to control their
distribution, and reduces the chances that secrets might accidentally be leaked.
●● Securely stored secrets and keys. Azure uses industry-standard algorithms, key lengths, and HSMs to
ensure that access requires proper authentication and authorization.
●● Monitoring access and use. Using Key Vault, you can monitor and control access to company secrets.
●● Simplified administration of application secrets. Key Vault makes it easier to enroll and renew certificates from public Certificate Authorities (CAs). You can also scale up and replicate content within regions, and use standard certificate management tools.
●● Integration with other Azure services. You can integrate Key Vault with storage accounts, container registries, event hubs, and many more Azure services.
You can learn more about Key Vault services on the Key Vault25 webpage.
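As a minimal PowerShell sketch of the secrets-management scenario; the vault name (which must be globally unique), resource group, location, and secret values are placeholders:
# Create a vault, then store and retrieve a secret.
New-AzKeyVault -Name 'contoso-kv-demo' `
    -ResourceGroupName 'MyAppResourceGroup' `
    -Location 'westeurope'

# Secrets must be supplied as a SecureString.
$secretValue = ConvertTo-SecureString 'placeholder-value' -AsPlainText -Force
Set-AzKeyVaultSecret -VaultName 'contoso-kv-demo' -Name 'SqlConnectionPassword' -SecretValue $secretValue

# Read the secret back when the application or pipeline needs it.
Get-AzKeyVaultSecret -VaultName 'contoso-kv-demo' -Name 'SqlConnectionPassword'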

Role-Based Access Control (RBAC)


RBAC provides fine-grained access management for Azure resources, which enables you to grant users
only the rights they need to perform their jobs. RBAC is provided at no additional cost to all Azure
subscribers.

Usage scenarios
The following examples provide use-case scenarios for RBAC:
●● Allow a specific user to manage VMs in a subscription, and another user to manage virtual networks.
●● Permit the database administrator (DBA) group management access to Microsoft SQL Server databases in a subscription.
●● Grant a user management access to certain types of resources in a resource group, such as VMs,
websites, and subnets.
●● Give an application access to all resources in a resource group.
To review access permissions for a deployed VM, open the Access Control (IAM) blade in the Azure portal. On this blade, you can review who has access and their roles, and grant or remove access permissions.

25 https://azure.microsoft.com/en-us/services/key-vault/

In the Access Control (IAM) blade for a resource group, the Roles tab displays the built-in roles that are available.

For a full list of available built-in roles, go to the Built-in roles for Azure resources26 webpage.
RBAC uses an allow model for permissions. This means that when you are assigned a role, RBAC allows
you to perform certain actions such as read, write, or delete. Therefore, if one role assignment grants you
read permissions to a resource group, and a different role assignment grants you write permissions to the
same resource group, you will have both read and write permissions for that resource group.
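For example, the following minimal PowerShell sketch grants a built-in role at resource-group scope; the user and resource group names are placeholders:
# Grant the built-in Virtual Machine Contributor role, scoped to one resource group.
New-AzRoleAssignment -SignInName 'user@contoso.com' `
    -RoleDefinitionName 'Virtual Machine Contributor' `
    -ResourceGroupName 'MyAppResourceGroup'

# Review who has access to that resource group.
Get-AzRoleAssignment -ResourceGroupName 'MyAppResourceGroup'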

Best practice
When using RBAC, segregate duties within your team and only grant users the access they need to
perform their jobs. Instead of giving everybody unrestricted access to your Azure subscription and
resources, allow only certain actions at a particular scope. In other words, grant users the minimal
privileges that they need to complete their work.
Note: For more information about RBAC, visit What is role-based access control (RBAC) for Azure
resources?27.

Locks
Locks help you prevent accidental deletion or modification of your Azure resources. You can manage locks from within the Azure portal, where the CanNotDelete and ReadOnly lock levels are called Delete and Read-only, respectively. To review, add, or delete locks for a resource in the Azure portal, go to the Settings section of the resource's settings blade.
You might need to lock a subscription, resource group, or resource to prevent users from accidentally
deleting or modifying critical resources. You can set a lock level to CanNotDelete or ReadOnly:
●● CanNotDelete means that authorized users can read and modify a resource, but they cannot delete
the resource.
●● ReadOnly means that authorized users can read a resource, but they cannot modify or delete it.
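The following minimal PowerShell sketch applies a delete lock at resource-group scope; the lock and resource group names are placeholders:
# Apply a CanNotDelete lock; child resources inherit it from the resource group.
New-AzResourceLock -LockName 'ProtectProduction' `
    -LockLevel CanNotDelete `
    -ResourceGroupName 'MyAppResourceGroup' `
    -LockNotes 'Prevents accidental deletion of production resources.'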
You can read more about Locks on the Lock resources to prevent unexpected changes28 webpage.

Subscription governance
The following considerations apply to creating and managing Azure subscriptions:
●● Billing. You can generate billing reports for Azure subscriptions. If, for example, you have multiple internal departments and need to perform a chargeback, you can create a subscription for a department or project.
●● Access control. A subscription acts as a deployment boundary for Azure resources. Every subscription
is associated with an Azure Active Directory (Azure AD) tenant that provides administrators with the
ability to set up RBAC. When designing a subscription model, consider the deployment boundary.
Some customers keep separate subscriptions for development and production, and manage them
using RBAC to isolate one subscription from the other (from a resource perspective).
●● Subscription limits. Subscriptions are bound by hard limitations. For example, the maximum number
of Azure ExpressRoute circuits per subscription is 10. If you reach a limit, there is no flexibility. Keep
these limits in mind during your design phase. If you need to exceed the limits, you might require
additional subscriptions.

26 https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles
27 https://docs.microsoft.com/en-us/azure/role-based-access-control/overview
28 https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-lock-resources

Management groups
Management Groups can assist you with managing your Azure subscriptions. Management groups
manage access, policies, and compliance across multiple Azure subscriptions. They allow you to order
your Azure resources hierarchically into collections. Management groups facilitate the management of
resources at a level above the level of subscriptions.
As an example, access could be divided across different regions and business functions, such as marketing and IT. This helps you track costs and resource usage at a granular level, add security layers, and segment workloads. You could even divide these areas further into separate subscriptions for Dev and QA, or for specific teams.

You can manage your Azure subscriptions more effectively by using Azure Policy and Azure RBAC. These tools provide distinct governance conditions that you can apply to each management group. Any conditions that you apply to a management group are automatically inherited by the resources and subscriptions within that group.
Note: For more information about management groups and Azure, go to the Azure management
groups documentation, Organize your resources29 webpage.

Azure Blueprints
Azure Blueprints enables cloud architects to define a repeatable set of Azure resources that implement and adhere to an organization's standards, patterns, and requirements. Azure Blueprints helps development teams rapidly build and deploy new environments with a set of built-in components that speed up development and delivery, while staying within organizational compliance requirements.

29 https://docs.microsoft.com/en-us/azure/governance/management-groups/

Azure Blueprints provides a declarative way to orchestrate deployment for various resource templates
and artifacts, including:
●● Role assignments
●● Policy assignments
●● Azure Resource Manager templates
●● Resource groups
To implement Azure Blueprints, complete the following high-level steps:
1. Create a blueprint.
2. Assign the blueprint.
3. Track the blueprint assignments.
With Azure Blueprints, the relationship between the blueprint definition (what should be deployed) and
the blueprint assignment (what is deployed) is preserved.
The blueprints in Azure Blueprints are different from Azure Resource Manager templates. When Azure Resource Manager templates deploy resources, they have no active relationship with the deployed resources (they exist in a local environment or in source control). By contrast, with Azure Blueprints, each deployment is tied to an Azure Blueprints package, so the relationship with resources is maintained even after deployment. Maintaining this relationship improves deployment tracking and auditing capabilities.

Usage scenario
Adhering to security and compliance requirements, whether government, industry, or organizational
requirements, can be difficult and time consuming. To help you to trace your deployments and audit
them for compliance, Azure Blueprints uses artifacts and tools that expedite your path to certification.
Azure Blueprints is also useful in Azure DevOps scenarios where blueprints are associated with specific
build artifacts and release pipelines, and blueprints can be tracked rigorously.
You can learn more about Azure Blueprints at Azure Blueprints30.

Azure Advanced Threat Protection (ATP)


Azure Advanced Threat Protection (Azure ATP) is a cloud-based security solution that identifies and detects advanced threats, compromised identities, and malicious insider actions directed at your organization. Azure ATP can detect known malicious attacks and techniques, and can help you investigate security issues and network vulnerabilities.

30 https://azure.microsoft.com/services/blueprints/

Azure ATP components


Azure ATP consists of the following components:
●● Azure ATP portal. Azure ATP has its own portal, through which you can monitor and respond to suspicious activity. The Azure ATP portal allows you to manage your Azure ATP instance and review data received from Azure ATP sensors.
You can also use the Azure ATP portal to monitor, manage, and investigate threats to your network environment. You can sign in to the Azure ATP portal at https://portal.atp.azure.com31. Note that you must sign in with a user account that is assigned to an Azure AD security group that has access to the Azure ATP portal.
●● Azure ATP sensor. Azure ATP sensors are installed directly on your domain controllers. The sensors monitor domain controller traffic without requiring a dedicated server or port mirroring configurations.
●● Azure ATP cloud service. The Azure ATP cloud service runs on the Azure infrastructure, and is currently deployed in the United States, Europe, and Asia. The Azure ATP cloud service is connected to the Microsoft Intelligent Security Graph.

31 https://portal.atp.azure.com

Purchasing Azure ATP


Azure ATP is available as part of the Microsoft Enterprise Mobility + Security E5 offering, and as a standalone license. You can acquire a license directly from the Enterprise Mobility + Security pricing options32 page, or through the Cloud Solution Provider (CSP) licensing model.
✔️ Note: Azure ATP is not available for purchase via the Azure portal. For more information about Azure ATP, review the Azure Advanced Threat Protection33 webpage.

32 https://www.microsoft.com/en-ie/cloud-platform/enterprise-mobility-security-pricing
33 https://azure.microsoft.com/en-us/features/azure-advanced-threat-protection/
