Bloque4 - Az 400t00a Enu Trainerhandbook PDF Free
STUDENT USE PROHIBITED 600 Module 15 Infrastructure and Configuration Azure Tools
You can also use Update Management to natively onboard machines in multiple subscriptions in the
same tenant.
PowerShell Workflows
IT pros often automate management tasks for their multi-device environments by running sequences of
long-running tasks or workflows. These tasks can affect multiple managed computers or devices at the
same time. PowerShell Workflow lets IT pros and developers leverage the benefits of Windows Workflow
Foundation with the automation capabilities and ease of using Windows PowerShell. Refer to A Developer's Introduction to Windows Workflow Foundation (WF) in .NET 4 for more information.
Windows PowerShell Workflow functionality was introduced in Windows Server 2012 and Windows 8, and
is part of Windows PowerShell 3.0 and later. Windows PowerShell Workflow helps automate distribution,
orchestration, and completion of multi-device tasks, freeing users and administrators to focus on higher-level tasks.
Activities
An activity is a specific task that you want a workflow to perform. Just as a script is composed of one or
more commands, a workflow is composed of one or more activities that are carried out in sequence. You
can also use a script as a single command in another script, and use a workflow as an activity within
another workflow.
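As an illustration of that last point, the following minimal sketch (hypothetical workflow names; requires Windows PowerShell 3.0 through 5.1, where the workflow keyword is supported) uses one workflow as an activity inside another:

workflow Get-Inventory
{
    # A simple child workflow; any activities could go here.
    Get-Date
}

workflow Invoke-NightlyRun
{
    # The child workflow is used here as a single activity,
    # just as a script can be used as a command in another script.
    Get-Inventory
}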
71 https://docs.microsoft.com/en-us/previous-versions/dotnet/articles/ee342461(v=msdn.10)
Azure Automation with DevOps
Workflow characteristics
A workflow can:
●● Be long-running.
●● Be repeated over and over.
●● Run tasks in parallel.
●● Be interrupted—can be stopped and restarted, suspended and resumed.
●● Continue after an unexpected interruption, such as a network outage or computer/server restart.
Workflow benefits
A workflow offers many benefits, including:
●● Windows PowerShell scripting syntax. Workflows are built on Windows PowerShell, using familiar scripting syntax.
●● Multidevice management. Simultaneously apply workflow tasks to hundreds of managed nodes.
●● Single task runs multiple scripts and commands. Combine related scripts and commands into a single
task, then run that task on multiple computers. The activity status and progress within the
workflow are visible at any time.
●● Automated failure recovery.
●● Workflows survive both planned and unplanned interruptions, such as computer restarts.
●● You can suspend a workflow operation, then restart or resume the workflow from the point at
which it was suspended.
●● You can author checkpoints as part of your workflow, so that you can resume the workflow from
the last persisted task (or checkpoint) instead of restarting the workflow from the beginning.
●● Connection and activity retries. You can retry connections to managed nodes if network-connection
failures occur. Workflow authors can also specify activities that must run again if the activity cannot be
completed on one or more managed nodes (for example, if a target computer was offline while the
activity was running).
●● Connect and disconnect from workflows. Users can connect and disconnect from the computer that is
running the workflow, but the workflow will remain running. For example, if you are running the
workflow and managing the workflow on two different computers, you can sign out of or restart the
computer from which you are managing the workflow, and continue to monitor workflow operations
from another computer without interrupting the workflow.
●● Task scheduling. You can schedule a task to start when specific conditions are met, as with any other
Windows PowerShell cmdlet or script.
Creating a workflow
To write the workflow, use a script editor such as the Windows PowerShell Integrated Scripting Environment (ISE). This enforces workflow syntax and highlights syntax errors. For more information, review the
tutorial My first PowerShell Workflow runbook72.
72 https://azure.microsoft.com/en-us/documentation/articles/automation-first-runbook-textual/
A benefit of using PowerShell ISE is that it automatically compiles your code and allows you to save the
artifact. Because the syntactic differences between scripts and workflows are significant, a tool that knows
both workflows and scripts will save you significant coding and testing time.
Syntax
When you create your workflow, begin with the workflow keyword, which identifies a workflow command to PowerShell. A script workflow requires the workflow keyword. Next, name the workflow, and
have it follow the workflow keyword. The body of the workflow is enclosed in braces.
Because a workflow is a Windows PowerShell command type, select a name with a verb-noun format:
workflow Test-Workflow
{
...
}
To add parameters to a workflow, use the Param keyword. These are the same techniques that you use to
add parameters to a function.
Finally, add your standard PowerShell commands.
workflow MyFirstRunbook-Workflow
{
Param(
[string]$VMName,
[string]$ResourceGroupName
)
....
Start-AzureRmVM -Name $VMName -ResourceGroupName $ResourceGroupName
}
Prerequisites
●● Note: You require an Azure subscription to perform the following steps. If you don't have one you can
create one by following the steps outlined on the Create your Azure free account today73 webpage.
73 https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio
Steps
2. Select Start to start the test. This should be the only enabled option.
A runbook job is created and its status displayed. The job status starts as Queued, indicating that the job
is waiting for a runbook worker in the cloud to become available. It moves to Starting when a worker claims
the job, and then to Running when the runbook actually starts running. When the runbook job completes,
its output displays. In this case, you should see Hello World.
3. When the runbook job finishes, close the Test pane.
5. You just want to start the runbook, so select Start, and then when prompted, select Yes.
6. When the job pane opens for the runbook job that you created, leave it open so you can watch the
job's progress.
7. Verify that when the job completes, the job statuses displayed in Job Summary match the
statuses that you saw when you tested the runbook.
Checkpoints
A checkpoint is a snapshot of the current state of the workflow. Checkpoints include the current value
for variables, and any output generated up to that point. (For more information on what a checkpoint is,
read the checkpoint74 webpage.)
If a workflow ends in an error or is suspended, the next time it runs it starts from its last checkpoint,
instead of from the beginning of the workflow. You can set a checkpoint in a workflow with the Checkpoint-Workflow activity.
For example, in the following sample code, if an exception occurs after Activity2, the workflow ends.
When the workflow is run again, it resumes with Activity2, because that activity follows just after the last
checkpoint set.
74 https://docs.microsoft.com/en-us/azure/automation/automation-powershell-workflow#checkpoints
<Activity1>
Checkpoint-Workflow
<Activity2>
<Exception>
<Activity3>
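Written out in actual workflow syntax, the pattern above might look like the following minimal sketch (the workflow name and activities are hypothetical):

workflow Test-Checkpoint
{
    Write-Output "Activity1"
    Checkpoint-Workflow       # workflow state is persisted here
    Write-Output "Activity2"
    # If an exception occurs after this point, a resumed run restarts
    # at Activity2 (the first work after the checkpoint), not Activity1.
    Write-Output "Activity3"
}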
Parallel processing
In a Parallel script block, multiple commands run concurrently (in parallel) instead of sequentially, as they
would in a typical script. This is referred to as parallel processing. (More information about parallel processing is
available on the Parallel processing75 webpage.)
In the following example, the vm0 and vm1 VMs start concurrently, and vm2 starts only after
vm0 and vm1 have started.
Parallel
{
Start-AzureRmVM -Name $vm0 -ResourceGroupName $rg
Start-AzureRmVM -Name $vm1 -ResourceGroupName $rg
}
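For readers more familiar with Linux shells, the same fan-out/join idea can be sketched with background jobs in bash; this is only a rough analogue of the Parallel block above, not PowerShell Workflow syntax:

```shell
# Start two tasks in the background (concurrently), then run a third
# task only after both have finished -- analogous to the Parallel block.
(sleep 0.2; echo "vm0 started") &
(sleep 0.1; echo "vm1 started") &
wait                  # join: block until both background jobs complete
echo "vm2 started"    # runs only after vm0 and vm1 have started
```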
The following parallel processing constructs introduce some additional options:
●● ForEach -Parallel. You can use the ForEach -Parallel construct to concurrently process commands for
each item in a collection. The items in the collection are processed in parallel while the commands in
the script block run sequentially.
In the following example, Activity1 starts at the same time for all items in the collection. For each item,
Activity2 starts after Activity1 completes. Activity3 starts only after both Activity1 and Activity2 have
completed for all items.
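The code block for this example appears to have been lost in extraction; it would take the same shape as the throttled version shown under ThrottleLimit below, minus the ThrottleLimit parameter:

ForEach -Parallel ($<item> in $<collection>)
{
    <Activity1>
    <Activity2>
}
<Activity3>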
●● ThrottleLimit. Use the ThrottleLimit parameter to limit parallelism. Setting ThrottleLimit too high can
cause problems. The ideal value for the ThrottleLimit parameter depends on several environmental
factors. Start with a low ThrottleLimit value, and then increase the value until you find one that
works for your specific circumstances:
ForEach -Parallel -ThrottleLimit 10 ($<item> in $<collection>)
{
<Activity1>
<Activity2>
}
<Activity3>
A real world example of this could be similar to the following code, where a message displays for each
file after it is copied. Only after all files are completely copied does the final completion message display.
Workflow Copy-Files
{
    $files = @("C:\LocalPath\File1.txt","C:\LocalPath\File2.txt","C:\LocalPath\File3.txt")

    ForEach -Parallel -ThrottleLimit 10 ($File in $files)
    {
        Copy-Item -Path $File -Destination \\NetworkPath
        Write-Output "$File copied."
    }

    Write-Output "All files copied."
}

75 https://docs.microsoft.com/en-us/azure/automation/automation-powershell-workflow#parallel-processing
Additional Automation Tools
●● Although the request URI is included in the request message header, we call it out separately here
because most languages or frameworks require you to pass it separately from the request message.
●● URI scheme. Indicates the protocol used to transmit the request. For example, http or https.
●● URI host. Specifies the domain name or IP address of the server where the REST service endpoint is
hosted, such as graph.microsoft.com.
●● Resource path. Specifies the resource or resource collection, which might include multiple segments used by the service in determining the selection of those resources.
●● Query string (optional). Provides additional, simple parameters, such as the API version or resource
selection criteria.
2. HTTP request message header fields. This is a required HTTP method (also known as an operation or
verb) that tells the service what type of operation you are requesting. Azure REST APIs support GET,
HEAD, PUT, POST, and PATCH methods.
3. Optional additional header fields. These are included only if required by the specified URI and HTTP method.
For example, an Authorization header that provides a bearer token containing client authorization
information for the request.
4. HTTP response message header fields. This is an HTTP status code, ranging from 2xx success codes to
4xx or 5xx error codes.
5. Optional HTTP response message body fields. These MIME-encoded response objects are returned in
the HTTP response body, such as a response from a GET method that is returning data. Typically,
these objects are returned in a structured format such as JSON or XML, as indicated by the Content-Type response header.
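Assembled, a hypothetical request and response illustrating these components might look like the following (the subscription ID, bearer token, api-version value, and response body are placeholders, not taken from the original):

GET https://management.azure.com/subscriptions/{subscription-id}/resourceGroups?api-version=2019-10-01 HTTP/1.1
Host: management.azure.com
Authorization: Bearer {access-token}

HTTP/1.1 200 OK
Content-Type: application/json

{ "value": [ { "name": "myResourceGroup", "location": "westus" } ] }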
For additional details, review the video How to call Azure REST APIs with Postman87.
87 https://docs.microsoft.com/en-us/rest/api/azure/
Cloud Shell is also accessible from within the Azure portal by selecting the Azure Cloud Shell icon at the
top of the browser.
If you have time you can also review the PowerShell in Azure Cloud Shell GA89 video for more details.
Package Management
Package management allows you to install all the software you need in an environment into your VM,
either during or after its deployment.
Using package management, it's possible to manage all aspects of software, such as installation, configuration, upgrade, and uninstall. There's a wide range of packaged software available for you to install using
package managers, such as Java, Microsoft Visual Studio, Google Chrome, Git, and many more.
There are also a number of package management solutions available for you to use depending on your
environment and needs:
●● apt90: apt is the package manager for Debian Linux environments.
●● Yum91: Yum is the package manager for CentOS Linux environments.
●● Chocolatey92: Chocolatey is the software management solution built on Windows PowerShell for
Windows operating systems.
✔️ Note: In the following section we will cover installing Chocolatey, as an example. However, while the
other package management solutions use different syntax and commands, they have similar concepts.
Install Chocolatey
Chocolatey does not have an .msi package; it installs as a nupkg using a PowerShell install script. The
installation script is available to review at https://chocolatey.org/install.ps1.
You can run the script and install it in a variety of ways, which you can read about at More Install
Options94.
The following example installs the script via PowerShell:
1. Open a PowerShell window as administrator, and run the following command.
Set-ExecutionPolicy Bypass -Scope Process -Force; iwr https://chocolatey.org/install.ps1 -UseBasicParsing | iex
3. To search for a Visual Studio package that you can use, run the following command:
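The command itself was lost in extraction; based on Chocolatey's CLI, it would be along the lines of the following (the search term is illustrative):

choco search visualstudio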
89 https://azure.microsoft.com/en-us/resources/videos/azure-friday-powershell-in-azure-cloud-shell-ga/
90 https://wiki.debian.org/Apt
91 https://wiki.centos.org/PackageManagement/Yum
92 https://chocolatey.org/
93 https://chocolatey.org/install.ps1
94 https://chocolatey.org/install#install-with-powershellexe
You can install packages manually via the command line using choco install. To install packages into your
development, test, and production environments, identify a package you want to install, and then list that
package in a PowerShell script. When deploying your VM, you then call the package as part of a custom
script extension. An example of such a PowerShell script would be:
# Set PowerShell execution policy
Set-ExecutionPolicy RemoteSigned -Force
# Install Chocolatey
iwr https://chocolatey.org/install.ps1 -UseBasicParsing | iex
refreshenv
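The script would then typically continue by installing the identified packages with choco install; for example (the package names are illustrative, not from the original):

# Install the packages required in this environment
choco install -y git
choco install -y googlechrome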
Lab
Azure Deployments using Resource Manager Templates
Steps for the labs are available on GitHub at the following websites in their Infrastructure as Code
sections:
●● Parts unlimited95
●● Parts unlimited MRP96
For the individual lab tasks for this module, select the following PartsUnlimited link and follow the
outlined steps for each lab task.
PartsUnlimited (PU)
●● Azure Deployments using Resource Manager templates97
95 https://microsoft.github.io/PartsUnlimited
96 https://microsoft.github.io/PartsUnlimitedMRP
97 http://microsoft.github.io/PartsUnlimited/iac/200.2x-IaC-AZ-400T05AppInfra.html
Module Review and Takeaways
Multiple choice
Which method of approach for implementing Infrastructure as Code states what the final state of an
environment should be without defining how it should be achieved?
Scripted
Imperative
Object orientated
Declarative
Review Question 3
Which term defines the ability to apply one or more operations against a resource, resulting in the same
outcome every time?
Declarative
Idempotency
Configuration drift
Technical debt
Checkbox
Which of the following are possible causes of Technical debt?
(choose all that apply)
Unplanned for localization of an application
Accessibility
Changes made quickly, or directly to an application without using DevOps methodologies
Changing technologies or versions that are not accounted for as part of the dev process
Multiple choice
Which term is the process whereby a set of resources change their state over time from their original state in
which they were deployed?
Modularization
Technical debt
Configuration drift
Imperative
Multiple choice
Which of the following options is a method for running configuration scripts on a VM either during or after
deployment?
Using the Custom Script Extension (CSE)
Using Quickstart templates
Using the dependsOn parameter
Using Azure Key Vault
Multiple choice
When using Azure CLI, what's the first action you need to take when preparing to run a command or script?
Define the Resource Manager template.
Specify VM extension details.
Create a resource group.
Log in to your Azure subscription.
Multiple choice
Which Resource Manager deployment mode only deploys whatever is defined in the template, and does not
remove or modify any other resources not defined in the template?
Validate
Incremental
Complete
Partial
Multiple choice
Which package management tool is a software management solution built on Powershell for Windows
operating systems?
Yum
Chocolatey
apt
Apache Maven
Checkbox
Which of the following version control tools are available for use with Azure DevOps?
(choose all that apply)
Subversion
Git
BitBucket
TFVC
Answers
Checkbox
What benefits from the list below can you achieve by modularizing your infrastructure and configuration
resources?
(Choose three)
■■ Easy to reuse across different environments
■■ Easier to manage and maintain your code
More difficult to sub-divide up work and ownership responsibilities
■■ Easier to troubleshoot
■■ Easier to extend and add to your existing infrastructure definitions
Explanation
The marked answers are correct.
More difficult to sub-divide work and ownership responsibilities is incorrect; modularizing makes it easier
to sub-divide work and ownership responsibilities.
Multiple choice
Which method of approach for implementing Infrastructure as Code states what the final state of an environment should be without defining how it should be achieved?
Scripted
Imperative
Object orientated
■■ Declarative
Explanation
Declarative is the correct answer. The declarative approach states what the final state should be. When run,
the script or definition will initialize or configure the machine to have the finished state that was declared,
without defining how that final state should be achieved.
All other answers are incorrect. Scripted is not a methodology. In the imperative approach, the script
states how to reach the final state of the machine by executing through the steps to get to the finished state;
it defines what the final state needs to be, but also includes how to achieve that final state.
Object orientated is a coding methodology, and it does include methodologies for how states and outcomes
are to be achieved.
Review Question 3
Which term defines the ability to apply one or more operations against a resource, resulting in the same
outcome every time?
Declarative
■■ Idempotency
Configuration drift
Technical debt
Explanation
Idempotency is the correct answer. It is a mathematical term that can be
used in the context of Infrastructure as Code and Configuration as Code, as
the ability to apply one or more operation against a resource, resulting in
the same outcome.
Multiple choice
Which of the following options is a method for running configuration scripts on a VM either during or
after deployment?
■■ Using the Custom Script Extension (CSE)
Using Quickstart templates
Using the dependsOn parameter
Using Azure Key Vault
Explanation
Using CSE is the correct answer, because it is a way to download and run scripts on your Azure VMs.
All other answers are incorrect.
Quickstart templates are publicly available starter templates that allow you to get up and running quickly
with Resource Manager templates.
The *dependsOn* parameter defines dependent resources in a Resource Manager template.
Azure Key Vault is a secrets-management service in Azure that allows you to store certificates, keys,
passwords, and so forth.
Multiple choice
When using Azure CLI, what's the first action you need to take when preparing to run a command or
script ?
Define the Resource Manager template.
Specify VM extension details.
Create a resource group.
■■ Log in to your Azure subscription.
Explanation
Log in to your Azure subscription is the correct answer. You can do so using the command az login.
All other answers are incorrect.
You do not need to define the Resource Manager template or specify the VM extension details, and you cannot create a resource group without first logging into your Azure subscription.
Multiple choice
Which Resource Manager deployment mode only deploys whatever is defined in the template, and does
not remove or modify any other resources not defined in the template?
Validate
■■ Incremental
Complete
Partial
Explanation
Incremental is the correct answer.
Validate mode only compiles the templates, and validates the deployment to ensure the template is
functional. For example, it ensures there are no circular dependencies and that the syntax is correct.
Incremental mode only deploys whatever is defined in the template, and does not remove or modify any
resources that are not defined in the template. For example, if you have deployed a VM via template, and
then renamed the VM in the template, the first VM deployed will still remain after the template is run again.
Incremental mode is the default mode.
In Complete mode, Resource Manager deletes resources that exist in the resource group but aren't specified
in the template. For example, only resources defined in the template will be present in the resource group
after the template is deployed. As a best practice, use the Complete mode for production environments
where possible, to try to achieve idempotency in your deployment templates.
Multiple choice
Which package management tool is a software management solution built on Powershell for Windows
operating systems?
Yum
■■ Chocolatey
apt
Apache Maven
Explanation
Chocolatey is the correct answer.
apt is the package manager for Debian Linux environments.
Yum is the package manager for CentOS Linux environments.
Maven is a build automation tool for build artifacts, used as part of a build and release pipeline with
Java-based projects.
Checkbox
Which of the following version control tools are available for use with Azure DevOps?
(choose all that apply)
■■ Subversion
■■ Git
■■ BitBucket
■■ TFVC
Explanation
All answers are correct.
Subversion, Git, BitBucket and TFVC are all repository types that are available with Azure DevOps.
Module 16 Azure Deployment Models and Services
Module Overview
You're ready to start deploying and migrating applications into Microsoft's Azure cloud platform, but
there are different deployment models to contend with. Which should you choose? Each has strengths
and weaknesses depending on the service you are setting up. Some might require more attention than
others but offer additional control; others integrate services such as load balancing or operating systems,
behaving more like a Platform as a Service.
Learn the differences between IaaS, PaaS and FaaS, and when you might want to choose one over
another.
Learning Objectives
After completing this module, students will be able to:
●● Describe deployment models and services that are available with Azure
1 https://azure.microsoft.com/en-us/services/virtual-machines/
Deployment Modules and Options
●● PaaS:
●● Azure App service2 is a managed PaaS offering for hosting web apps, mobile app back-ends,
RESTful APIs, or automated business processes.
●● Azure Container Instances3 offer the fastest and simplest way to run a container in Azure without
having to provision any VMs or adopt a higher-level service.
●● Azure Cloud services4 is a managed service for running cloud applications.
●● FaaS:
●● Azure Functions5 is a managed FaaS service.
●● Azure Batch6 is also a managed FaaS service, and is for running large-scale parallel and HPC
applications.
●● Modern native cloud apps, providing massive scale and distribution:
●● Azure Service Fabric7 is a distributed systems platform that can run in many environments,
including Azure or on premises. Service Fabric is an orchestrator of microservices across a cluster
of machines.
●● AKS 8 lets you create, configure, and manage a cluster of VMs that are preconfigured to run
containerized applications.
2 https://azure.microsoft.com/en-us/services/app-service/
3 https://azure.microsoft.com/en-us/services/container-instances/
4 https://azure.microsoft.com/en-us/services/cloud-services/
5 https://azure.microsoft.com/en-us/services/functions/
6 https://azure.microsoft.com/en-us/services/batch/
7 https://azure.microsoft.com/en-us/services/service-fabric/
8 https://azure.microsoft.com/en-us/services/kubernetes-service/
Every application has unique requirements, so treat this flowchart's recommendation only as a starting
point. Then perform a more detailed evaluation, looking at aspects such as:
●● Feature sets
●● Service limits
●● Cost
●● SLA
●● Regional availability
●● Developer ecosystem and team skills
●● Compute comparison tables
If your application consists of multiple workloads, evaluate each workload separately. A complete solution might incorporate two or more compute services.
Azure Infrastructure-as-a-Service (IaaS) Services
Linux
Azure provides endorsed Linux distributions. Endorsed distributions are distributions that are available on
Azure Marketplace and are fully supported. The images in Azure Marketplace are provided and maintained by the Microsoft partner who produces them. Some of the endorsed distributions available in
Azure Marketplace include:
●● CentOS
●● CoreOS
●● Debian
●● Oracle Linux
●● Red Hat
●● SUSE Linux Enterprise
●● OpenSUSE
●● Ubuntu
●● RancherOS
There are several other Linux-based partner products that you can deploy to Azure VMs, including
Docker, Bitnami by VMWare, and Jenkins. A full list of endorsed Linux distributions is available at Endorsed Linux distributions on Azure9.
As an example, to find and list available RedHat SKUs in westus, run the following commands one after
the other:
az vm image list-publishers --location westus --query "[?contains(name, 'RedHat')]"
az vm image list-offers --location westus --publisher RedHat
az vm image list-skus --location westus --publisher RedHat --offer RHEL
If you want to use a Linux version not on the endorsed list and not available in Azure Marketplace, you
can install it directly.
9 https://docs.microsoft.com/en-us/azure/virtual-machines/linux/endorsed-distros
Prerequisites
●● You require an Azure subscription to perform the following steps. If you don't have one, you can
create one by following the steps outlined on the Create your Azure free account today10 webpage.
Note: If you use your own values for the parameters used in the following commands, such as resource
group name and scale set name, remember to change them in the subsequent commands as well to
ensure the commands run successfully.
Steps
1. Create a scale set. Before you can create a scale set, you must create a resource group using the
following command:
10 https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio
az group create --name myResourceGroup --location < your closest datacenter >
Now create a virtual machine scale set with the following command:
az vmss create \
--resource-group myResourceGroup \
--name myScaleSet \
--image UbuntuLTS \
--upgrade-policy-mode automatic \
--admin-username azureuser \
--generate-ssh-keys
This creates a scale set named myScaleSet that is set to automatically update as changes are applied. It
also generates Secure Shell (SSH) keys if they do not exist in ~/.ssh/id_rsa.
2. Deploy a sample application. To test your scale set, install a basic web application, and a basic NGINX
web server. The Azure Custom Script Extension (CSE) downloads and runs a script that installs the
sample web application on the VM instance (or instances). To install the basic web application, run the
following command:
az vmss extension set \
--publisher Microsoft.Azure.Extensions \
--version 2.0 \
--name CustomScript \
--resource-group myResourceGroup \
--vmss-name myScaleSet \
--settings '{"fileUris":["https://raw.githubusercontent.com/Microsoft/PartsUnlimitedMRP/master/Labfiles/AZ-400T05-ImplemntgAppInfra/Labfiles/automate_nginx.sh"],"commandToExecute":"./automate_nginx.sh"}'
3. Allow traffic to access the application. When you created the scale set, an Azure Load Balancer was
deployed automatically to distribute traffic across the VM instances in the scale set. To allow
traffic to reach the sample web application, create a load balancer rule with the following command:
az network lb rule create \
--resource-group myResourceGroup \
--name myLoadBalancerRuleWeb \
--lb-name myScaleSetLB \
--backend-pool-name myScaleSetLBBEPool \
--backend-port 80 \
--frontend-ip-name loadBalancerFrontEnd \
--frontend-port 80 \
--protocol tcp
4. Obtain the public IP address. To test your scale set and observe it in action, you will access the
sample web application in a web browser. First, obtain the public IP address of your load balancer
using the following command:
az network public-ip show \
--resource-group myResourceGroup \
--name myScaleSetLBPublicIP \
--query '[ipAddress]' \
--output tsv
5. Test your scale set. Enter the public IP address of the load balancer in a web browser. The load
balancer distributes traffic to one of your VM instances, as in the following screenshot:
6. Remove the resource group, scale set, and all related resources as follows. The --no-wait parameter
returns control to the prompt without waiting for the operation to complete. The --yes parameter
confirms that you wish to delete the resources without an additional prompt to do so.
az group delete --name myResourceGroup --yes --no-wait
Availability
Availability for infrastructure as a service (IaaS) services in Azure is provided both through the core
physical components of Azure (such as Azure regions and Availability Zones) and through logical
components that together provide overall availability during outages, maintenance, and other
downtime scenarios.
Availability sets
Availability sets ensure that the VMs you deploy on Azure are distributed across multiple isolated
hardware clusters. This ensures that if a hardware or software failure happens within Azure, only a
subset of your VMs is impacted, and your overall solution remains available and operational.
Availability sets are made up of update domains and fault domains:
●● Update domains. When a maintenance event occurs (such as a performance update or critical security
patch applied to the host), the update is sequenced through update domains. Sequencing updates
using update domains ensures that the entire datacenter isn't unavailable during platform updates
and patching. Update domains are a logical section of the datacenter, and they are implemented with
software and logic.
●● Fault domains. Fault domains provide for the physical separation of your workload across different
hardware in the datacenter. This includes power, cooling, and network hardware that supports the
physical servers located in the server racks. In the event the hardware that supports a server rack
becomes unavailable, only that rack of servers is affected by the outage.
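A minimal sketch of the idea behind availability sets, assuming simple round-robin placement (Azure's real placement logic and default domain counts are not shown here): instances are spread across fault domains and update domains so that no single rack outage or patch wave affects them all.

```python
# Toy model (not Azure's actual algorithm): spread VM instances
# round-robin across fault domains and update domains.
def assign_domains(vm_count, fault_domains=2, update_domains=5):
    """Return a (fault_domain, update_domain) pair per VM instance."""
    return [(i % fault_domains, i % update_domains) for i in range(vm_count)]

placements = assign_domains(6)
# Losing any one fault domain (rack) still leaves some VMs running:
for fd in range(2):
    assert any(p[0] != fd for p in placements)
# Likewise, patching any one update domain leaves the others untouched:
for ud in range(5):
    assert any(p[1] != ud for p in placements)
```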
Update management
You can also leverage the Update Management service within a Windows or Linux VM to manage
updates and patches for the VMs. Directly from your VM, you can quickly assess the status of available
updates, schedule installation of required updates, and review deployment results to verify that updates
were applied successfully.
Boot diagnostics
The boot diagnostic agent captures screen output that can be used for troubleshooting purposes. This
capability is enabled by default with Windows VMs, but it's not automatically enabled when you create a
Linux VM using Azure CLI. The captured screenshots are stored in an Azure storage account, which is also
created by default.
Host metrics
An Azure VM, whether running the Windows or Linux operating system, has a dedicated host in Azure that it
interacts with. Metrics are automatically collected for the host, which you can view in the Azure portal.
You can enable diagnostic extensions in the Portal using the following steps:
1. In the Azure portal, choose Resource Groups, select myResourceGroupMonitor, and then in the
resource list, select myVM.
2. Select Diagnosis settings. In the Pick a storage account drop-down, choose or create a storage
account.
3. Select the Enable guest-level monitoring button.
Alerts
You can create alerts based on specific performance metrics. You can use these alerts to notify you, for
example, when average CPU usage exceeds a certain threshold or available free disk space drops below a
certain amount. Alerts display in the Azure portal, or you can have them sent via email. You can also
trigger Azure Automation runbooks or Azure Logic Apps in response to alerts being generated.
The following steps create an alert for average CPU usage:
1. In the Azure portal, select Resource Groups, select myResourceGroupMonitor, and then in the
resource list select myVM.
2. On the VM blade, select Alert rules. Then from the top of the Alerts blade, select Add metric alert.
3. Provide a name for your alert, such as myAlertRule.
4. To trigger an alert when CPU percentage exceeds 1.0 for five minutes, leave all the other default
settings selected.
5. Optionally, you can select the Email owners, Contributors, and Readers check boxes to send email
notifications. The default action is to present a notification in the portal only.
6. Select OK.
Load balancing
Azure Load Balancer is a Layer-4 (Transmission Control Protocol (TCP), User Datagram Protocol (UDP))
load balancer that provides high availability by distributing incoming traffic among healthy VMs.
For load balancing, you define a front-end IP configuration that contains one or more public IP addresses.
This configuration allows your load balancer and applications to be accessible over the internet.
Virtual machines connect to a load balancer using their virtual network interface card (NIC). To distribute
traffic to the VMs, a back-end address pool contains the IP addresses of the virtual NICs connected to the
load balancer.
To control the flow of traffic, you define load-balancer rules for specific ports and protocols that map to
your VMs.
When creating a VM scale set, a load balancer is automatically created as part of that process.
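The mechanics described in this section can be illustrated with a short sketch. It is not Azure Load Balancer's real algorithm; it only shows the idea of hashing a connection's identifying tuple to choose one healthy back-end VM from the pool (all addresses and names below are invented):

```python
import hashlib

def pick_backend(five_tuple, backends, healthy):
    """Deterministically map a flow to one healthy back-end address."""
    candidates = [b for b in backends if healthy.get(b, False)]
    if not candidates:
        raise RuntimeError("no healthy back-end instances")
    digest = hashlib.sha256(repr(five_tuple).encode()).hexdigest()
    return candidates[int(digest, 16) % len(candidates)]

# Back-end pool: virtual NIC addresses of the scale-set VMs (made up).
backends = ["10.0.0.4", "10.0.0.5", "10.0.0.6"]
healthy = {"10.0.0.4": True, "10.0.0.5": False, "10.0.0.6": True}
# A Layer-4 flow: source IP/port, front-end name, front-end port, protocol.
flow = ("203.0.113.7", 51544, "loadBalancerFrontEnd", 80, "tcp")

target = pick_backend(flow, backends, healthy)
assert target in ("10.0.0.4", "10.0.0.6")   # unhealthy VM is never chosen
assert target == pick_backend(flow, backends, healthy)  # same flow, same VM
```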
Fast startup
Because running a container requires only a few extra resources over the operating system, startup time
is faster, and roughly equivalent to the time required to start a new process. The only additional item the
OS needs to set up is isolation for the process, which is done at the kernel level and occurs quickly.
As the diagram shows, the Windows operating system always has its default host user-mode processes
running. On Windows, you additionally have services such as Docker and compute services that manage
containers.
When you start a new container, Docker talks to the compute services to create a new container based on
an image. For each container, Docker will create a Windows container, each of which will require a set of
system processes. These are always the same in every container. You then use your own application
process to differentiate each container. These can be Microsoft Internet Information Services (IIS) or SQL
Server processes that you run in the container.
On Windows Server 2016 and Windows Server 2019, you can run these containers so that they share the
Windows kernel. This method is quite efficient, and the processes that run in the container will have no
performance effect on the container because they access the kernel objects without indirect action.
On Windows Server 2019, you can now run Windows and Linux containers alongside each other.
Hyper-V containers
When containers share the kernel and memory, there is a slight chance that if a vulnerability occurs in
the Windows operating system, an application might break out of its sandbox environment and
inadvertently do something malicious. To avoid this, Windows provides a more secure alternative for
running containers called Hyper-V containers.
The following diagram depicts the high-level architecture of Hyper-V containers on the Windows
operating system. Hyper-V containers are supported on Windows Server 2016 and newer versions, and
on the Windows 10 Anniversary Update and later.
The main difference between Windows Server containers and Hyper-V containers is the isolation that the
latter provides. Hyper-V containers are the only type of container you can run on the Windows 10
operating system. Hyper-V containers have a small footprint and start fast compared with a full VM. You
can run any image as a Hyper-V isolated container by using the --isolation option on the Docker
command line and specifying hyperv as the isolation type. Refer to the following command for an
example:
docker run -it --isolation hyperv microsoft/windowsservercore cmd
This command runs a new instance of a container based on the microsoft/windowsservercore image,
and runs cmd.exe in interactive mode.
Nano Server
Nano Server is the headless deployment option for Windows Server 2016 and Windows Server 2019,
available via the semi-annual channel releases. It is specifically optimized for private clouds and
datacenters and for running cloud-based applications. It is intended to be run as a container in a
container host, such as a Server Core installation of Windows Server.
It's a remotely administered server operating system optimized for private clouds and datacenters. It's
similar to Windows Server in Server Core mode, but it's significantly smaller, has no local logon capability,
and only supports 64-bit applications, tools, and agents.
Nano Server also takes up far less disk space, sets up significantly faster, and requires far fewer updates
and restarts than Windows Server. When it does restart, it restarts much faster. Nano Server is ideal for a
number of scenarios:
●● As a compute host for Hyper-V VMs, either in clusters or not.
●● As a storage host for Scale-Out File Server.
●● As a Domain Name System (DNS) server
●● As a web server running IIS
●● As a host for applications that are developed using cloud application patterns and run in a container
or VM guest operating system
●● Deploy and manage configurations. The PowerShell DSC extension helps you set up DSC on a VM to
manage configurations and environments.
●● Collect diagnostics data. The Azure Diagnostics extension helps you configure the VM to collect
diagnostics data you can use to monitor the health of your applications.
You can read more about VM extensions at Virtual machine extensions and features for Linux13, and
Virtual machine extensions and features for Windows14.
✔️ Note: Azure DevTest Labs is being expanded with new types of labs, namely Azure Lab Services. Azure
Lab Services lets you create managed labs, such as classroom labs. The service itself handles all the
infrastructure management for a managed lab, from spinning up VMs to handling errors and scaling the
infrastructure. At the time of writing, the managed labs are in preview. When the preview ends, the new
lab types and the existing DevTest Labs will come under the common umbrella name of Azure Lab
Services, where all lab types will continue to evolve.
Usage scenarios
Some common use cases for Azure DevTest Labs are as follows:
●● Use DevTest Labs for development environments. This enables you to host development machines for
developers so they can:
●● Quickly provision their development machines on demand.
●● Provision Windows and Linux environments using reusable templates and artifacts.
●● More easily customize their development machines whenever needed.
●● Use DevTest Labs for test environments. This enables you to host machines for testers so they can:
●● Test the latest version of their application by quickly provisioning Windows and Linux
environments using reusable templates and artifacts.
●● Scale up their load testing by provisioning multiple test agents.
In addition, administrators can use DevTest Labs to control costs by ensuring that testers cannot get
more VMs than they need for testing, and that VMs are shut down when not in use.
13 https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/features-linux
14 https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/features-windows
●● Integrate DevTest Labs with Azure DevOps CI/CD pipeline. You can use the Azure DevTest Labs Tasks
extension that's installed in Azure DevOps to easily integrate your CI/CD build-and-release pipeline
with Azure DevTest Labs. The extension installs three tasks:
●● Create a VM
●● Create a custom image from a VM
●● Delete a VM
The process makes it easy to, for example, quickly deploy an image for a specific test task, and then
delete it when the test completes.
For more information on DevTest Labs, go to Azure Lab Services Documentation15.
15 https://docs.microsoft.com/en-us/azure/lab-services/
●● Microsoft Visual Studio integration. Dedicated tools in Visual Studio streamline the work of creating,
deploying, and debugging.
●● API and mobile features. App Service provides turn-key CORS support for RESTful API scenarios, and
simplifies mobile app scenarios by enabling authentication, offline data sync, push notifications, and
more.
●● Serverless code. Run a code snippet or script on-demand without having to explicitly provision or
manage infrastructure. Pay only for the compute time your code actually uses.
More general details are available on the App Service Documentation16 page.
Pricing tiers
The pricing tier for an App Service plan determines what App Service features you can use, and how
much you pay for the plan. Pricing tiers are:
●● Shared compute. Shared compute has two base tiers, Free and Shared. Both run an app on the
same Azure VM as other App Service apps, including apps of other customers. These tiers allocate
CPU quotas to each app that runs on the shared resources, and the resources cannot scale out.
●● Dedicated compute. The Dedicated compute Basic, Standard, Premium, and PremiumV2 tiers run apps
on dedicated Azure VMs. Only apps in the same App Service plan share the same compute resources.
The higher the tier, the more VM instances are available for scale-out.
●● Isolated. This tier runs dedicated Azure VMs on dedicated Azure virtual networks. This provides
network isolation (on top of compute isolation) to your apps. It also provides the maximum scale-out
capabilities.
●● Consumption. This tier is only available to function apps. It scales the functions dynamically
depending on workload.
More detail about App Service plans is available on the Azure App Service plan overview17
webpage.
16 https://docs.microsoft.com/en-us/azure/app-service/
17 https://docs.microsoft.com/en-us/azure/app-service/overview-hosting-plans?toc=%2fazure%2fapp-service%2fcontainers%2ftoc.json
Prerequisites
●● You require an Azure subscription to perform the following steps. If you don't have one, you can
create one by following the steps outlined on the Create your Azure free account today18 webpage.
Steps:
1. Open Azure Cloud Shell by going to https://shell.azure.com19, or by using the Azure portal, and
select Bash as the environment option.
2. Create a Java app by executing the following Maven command in the Cloud Shell prompt to create a
new app named helloworld, accepting the default values as you go:
mvn archetype:generate -DgroupId=example.demo -DartifactId=helloworld -DarchetypeArtifactId=maven-archetype-webapp
18 https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio
19 https://shell.azure.com
3. Select the braces icon in Cloud Shell to open the editor. Use this code editor to open the project file
pom.xml in the helloworld directory.
4. Add the following plugin definition inside the <build> element of the pom.xml file:
<plugins>
  <!--*************************************************-->
  <!-- Deploy to Tomcat in App Service Linux           -->
  <!--*************************************************-->
  <plugin>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>azure-webapp-maven-plugin</artifactId>
    <version>1.4.0</version>
    <configuration>
    </configuration>
  </plugin>
</plugins>
7. After this step completes, verify the deployment by opening the deployed application in your web
browser at the following URL, replacing <webapp> with the name of the deployed application. For
example: http://<webapp>.azurewebsites.net/helloworld.
Prerequisites
You require the following items to complete these walkthrough steps:
●● Visual Studio 2017. If you don't have Visual Studio 2017, you can install the Visual Studio Community
edition from the Visual Studio downloads20 webpage.
●● An Azure subscription. If you don't have one you can create one by following the steps outlined on
the Create your Azure free account today21 webpage.
Steps
1. In Visual Studio, create a project by selecting File, New, and then Project.
2. In the New Project dialog, select Visual C#, Web, and then ASP.NET Core Web Application.
3. Name the application myFirstAzureWebApp, and then select OK.
20 https://visualstudio.microsoft.com/downloads/
21 https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio
You can deploy any type of ASP.NET Core web app to Azure. For this walkthrough, select the Web
Application template. Ensure authentication is set to No Authentication and no other option is
selected, and then select OK.
4. To run the web app locally, from the menu select Debug, and then Start Without Debugging.
5. Launch the Publish wizard by going to Solution Explorer, right-clicking the myFirstAzureWebApp
project, and then selecting Publish.
6. To open the Create App Service dialog, select App Service, and then select Publish.
7. Sign in to Azure by going to the Create App Service dialog, selecting Add an account…, and signing
in to your Azure subscription. If you're already signed in, select the account you want from the
drop-down, but don't select the Create button yet.
8. Next to Resource Group, select New. Name the resource group myResourceGroup, and then select
OK.
9. Next to Hosting Plan, select New. Use the following values, and then select OK:
●● App Service Plan: myappserviceplan
●● Location: your nearest datacenter
●● Size: Free
✔️ Note: An App Service plan specifies the location, size, and features of the web server farm that hosts
your app. You can save money when hosting multiple apps by configuring the web apps to share a single
App Service plan. An App Service plan defines:
- Region (for example: North Europe, East US, or Southeast Asia)
- Instance size (small, medium, or large)
- Scale count (1 to 20 instances)
- SKU (Free, Shared, Basic, Standard, or Premium)
10. While still in the Create App Service dialog, enter a value for the app name, and then select Create.
✔️ Note: The app name must be a unique value. Valid characters are a-z, 0-9, and -. Alternatively, you can
accept the automatically generated unique name. The resulting URL of the web app is
http://<app_name>.azurewebsites.net, where <app_name> is your app's name.
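As a rough illustration of the naming rule just quoted, a hypothetical local validator might look like this (the authoritative check is performed by Azure when you create the app):

```python
import re

# Hypothetical helper: app names may contain only a-z, 0-9, and '-',
# and become the <app_name> part of http://<app_name>.azurewebsites.net.
def is_valid_app_name(name):
    return re.fullmatch(r"[a-z0-9-]+", name) is not None

assert is_valid_app_name("my-first-azure-webapp")
assert not is_valid_app_name("My App!")   # uppercase, space, '!' not allowed
```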
11. After the wizard completes, it publishes the ASP.NET Core web app to Azure, and then launches the
app in the default browser.
The app name specified in the create and publish step is used as the URL prefix in the format
http://<app_name>.azurewebsites.net.
Congratulations, your ASP.NET Core web app is running live in Azure App Service.
✔️ Note: If you do not plan on using the resources, you should delete them to avoid incurring charges.
Autoscale
Autoscale settings help ensure that you have the right amount of resources running to manage the
fluctuating load of your application. You can configure Autoscale settings to trigger based on metrics that
indicate load or performance, or at a scheduled date and time.
Metric
You can scale based on a resource metric, such as:
●● Scale based on CPU. You want to scale out or scale in based on a percentage CPU value.
●● Scale based on custom metric. To scale based on a custom metric, you designate a specific metric that
is relevant to your app architecture. For example, you might have a web front end and an API tier that
communicates with the backend, and you want to scale the API tier based on custom events in the
front end.
Schedule
You can scale based on a schedule as well. For example, you can:
●● Scale differently on weekdays vs. weekends. If you don't expect traffic on weekends, you can
scale down to one instance on weekends.
●● Scale differently during holidays. During holidays or specific days that are important for your business,
you might want to override the default scaling settings and have more capacity at your disposal.
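The metric-driven decision described above can be sketched in a few lines, using the 85/60 percent thresholds and 1 to 4 instance limits from the template example later in this lesson. This is a simplification: real Autoscale also applies time windows and cooldowns.

```python
# Simplified sketch of a metric-based Autoscale decision (not Azure's
# implementation): scale out when average CPU is high, scale in when it
# is low, and clamp to the profile's instance limits.
def decide_instances(current, avg_cpu, out_at=85, in_at=60, lo=1, hi=4):
    if avg_cpu > out_at:
        return min(current + 1, hi)   # scale out by one instance
    if avg_cpu < in_at:
        return max(current - 1, lo)   # scale in by one instance
    return current                    # within band: no change

assert decide_instances(2, 90) == 3   # scale out under load
assert decide_instances(2, 30) == 1   # scale in when idle
assert decide_instances(1, 30) == 1   # never below the minimum
```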
Autoscale profiles
There are three types of Autoscale profiles that you can configure depending on what you want to
achieve. Azure then evaluates which profile to execute at any given time. The profile types are:
●● Regular profile. This is the most common profile. If you don’t need to scale your resource based on
the day of the week, or on a particular day, you can use a regular profile.
●● Fixed date profile. This profile is for special cases. For example, let’s say you have an important event
coming up on December 26, 2019 (PST). You want the minimum and maximum capacities of your
resource to be different on that day, but still scale on the same metrics.
●● Recurrence profile. This type of profile enables you to ensure that a profile is always used on a
particular day of the week. Recurrence profiles only have a start time; they run until the next
recurrence profile or fixed date profile is set to start.
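The selection among the three profile types can be sketched as follows. This is a simplified model under assumed semantics (notably, it only considers recurrence profiles that started earlier on the same day), not Azure's evaluation engine; the profile names are invented.

```python
from datetime import date, time

# Pick which profile applies: a fixed-date profile wins on its date,
# otherwise the latest-starting recurrence profile for today's weekday,
# otherwise the regular profile. (Assumed, simplified semantics.)
def pick_profile(now_date, now_time, fixed_dates, recurrences, regular="regular"):
    if now_date in fixed_dates:
        return fixed_dates[now_date]
    started = [(t, name) for (weekday, t, name) in recurrences
               if weekday == now_date.weekday() and t <= now_time]
    if started:
        return max(started)[1]   # latest start time wins
    return regular

fixed = {date(2019, 12, 26): "eventDay"}                # special event date
recur = [(5, time(0, 0), "weekendProfile")]             # Saturdays from 00:00
assert pick_profile(date(2019, 12, 26), time(9, 0), fixed, recur) == "eventDay"
assert pick_profile(date(2019, 12, 28), time(9, 0), fixed, recur) == "weekendProfile"
```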
Example
The following Azure Resource Manager template shows an Autoscale setting with one profile:
●● There are two metric rules in this profile: one for scale out, and one for scale in.
●● The scale-out rule is triggered when the VM scale set's average Percentage CPU metric is greater
than 85 percent for the past 10 minutes.
●● The scale-in rule is triggered when the VM scale set's average Percentage CPU metric is less than
60 percent for the past 10 minutes.
{
  "id": "/subscriptions/s1/resourceGroups/rg1/providers/microsoft.insights/autoscalesettings/setting1",
  "name": "setting1",
  "type": "Microsoft.Insights/autoscaleSettings",
  "location": "East US",
  "properties": {
    "enabled": true,
    "targetResourceUri": "/subscriptions/s1/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachineScaleSets/vmss1",
    "profiles": [
      {
        "name": "mainProfile",
        "capacity": {
          "minimum": "1",
          "maximum": "4",
          "default": "1"
        },
        "rules": [
          {
            "metricTrigger": {
              "metricName": "Percentage CPU",
              "metricResourceUri": "/subscriptions/s1/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachineScaleSets/vmss1",
              "timeGrain": "PT1M",
              "statistic": "Average",
              "timeWindow": "PT10M",
              "timeAggregation": "Average",
              "operator": "GreaterThan",
              "threshold": 85
            },
            "scaleAction": {
              "direction": "Increase",
              "type": "ChangeCount",
              "value": "1",
              "cooldown": "PT5M"
            }
          },
          {
            "metricTrigger": {
              "metricName": "Percentage CPU",
              "metricResourceUri": "/subscriptions/s1/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachineScaleSets/vmss1",
              "timeGrain": "PT1M",
              "statistic": "Average",
              "timeWindow": "PT10M",
              "timeAggregation": "Average",
              "operator": "LessThan",
              "threshold": 60
            },
            "scaleAction": {
              "direction": "Decrease",
              "type": "ChangeCount",
              "value": "1",
              "cooldown": "PT5M"
            }
          }
        ]
      }
    ]
  }
}
You can review some Autoscale best practices on the Best practices for Autoscale22 page.
22 https://docs.microsoft.com/en-us/azure/azure-monitor/platform/autoscale-best-practices
Prerequisites
●● You require an Azure subscription to perform these steps. If you don't have one, you can create
one by following the steps outlined on the Create your Azure free account today23 webpage.
Steps
1. Open Azure Cloud Shell by going to https://shell.azure.com, or by using the Azure portal and
selecting Bash as the environment option.
23 https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio
2. Run the following command to configure a deployment user, replacing <username> and <password>
(including the brackets) with a new user name and password. The user name must be unique within
Azure, and the password must be at least eight characters long, containing two of the following three
elements: letters, numbers, and symbols:
Note: This deployment user is required for FTP and local Git deployment to a web app. The user name
and password are account level; they are different from your Azure subscription credentials.
az webapp deployment user set --user-name <username> --password <password>
You should get a JSON output, with the password shown as null. If you get a 'Conflict'. Details: 409 error,
change the user name. If you get a 'Bad Request'. Details: 400 error, use a stronger password.
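As an illustration of the password rule above, here is a hypothetical local checker; the authoritative validation is performed by Azure when you run the command.

```python
# Hypothetical checker for the stated rule: at least eight characters,
# drawing on at least two of three groups (letters, numbers, symbols).
def password_acceptable(pw):
    groups = [
        any(c.isalpha() for c in pw),       # letters
        any(c.isdigit() for c in pw),       # numbers
        any(not c.isalnum() for c in pw),   # symbols
    ]
    return len(pw) >= 8 and sum(groups) >= 2

assert password_acceptable("Deploy2024")       # letters + numbers
assert not password_acceptable("short1")       # fewer than eight characters
assert not password_acceptable("onlyletters")  # only one character group
```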
3. Create a resource group in Azure by using the following command, substituting a resource group
name of your choice and the location of a datacenter near you:
az group create --name myResourceGroup --location "West Europe"
4. Create an Azure App Service plan by running the following command, which creates an App Service
plan named myAppServicePlan in the Basic pricing tier:
az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku B1
--is-linux
5. Run the following command to create a web app in your App Service plan, replacing <app name>
with a globally unique name, and the resource group and App Service plan names with the values you
created earlier. This command points to the public Docker Hub image:
az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app name> --deployment-container-image-name microsoft/azure-appservices-go-quickstart
When the web app has been created, the Azure CLI shows output similar to the following example:
{
  "availabilityState": "Normal",
  "clientAffinityEnabled": true,
  "clientCertEnabled": false,
  "cloningInfo": null,
  "containerSize": 0,
  "dailyMemoryTimeQuota": 0,
  "defaultHostName": "<app name>.azurewebsites.net",
  "deploymentLocalGitUrl": "https://<username>@<app name>.scm.azurewebsites.net/<app name>.git",
  "enabled": true,
  < JSON data removed for brevity. >
}
Eliminate VM management
With Azure Container Instances, you don't need to own a VM to run your containers. This means that you
don't need to worry about creating, managing, and scaling them. In the following picture, the network,
virtual machine, and container host are entirely managed for you. However, this also means you have no
control over them.
Hypervisor-level security
Historically, containers have offered application dependency isolation and resource governance. However,
they have not been considered sufficiently hardened for hostile multi-tenant usage. In Azure Container
Instances, your application is as isolated in a container as it would be in a VM.
The isolation between individual containers is achieved by using Hyper-V containers.
Persistent storage
To retrieve and persist state with Azure Container Instances, use Azure Files shares.
It is possible to run both long-running processes and task-based containers. This is controlled by the
container restart policy.
Container groups
By default, containers are isolated from each other. But what if you need interaction between containers?
To support this kind of scenario, there is the concept of container groups. Containers inside a container
group are deployed on the same machine, and they use the same network. They also share their lifecycle,
meaning all containers in the group are started and stopped together.
Containers are always part of a container group. Even if you deploy a single container, it will be placed
into a new group automatically. When using Windows containers, a group can have only one container.
This is because network namespaces are not available on the Windows operating system.
Usage scenario
Azure Container Instances is a recommended compute option for any scenario that can operate in
isolated containers, such as simple applications, task automation, and build jobs. For scenarios requiring
full container orchestration (including service discovery across multiple containers, automatic scaling, and
coordinated application upgrades) we recommend Azure Kubernetes Service (AKS).
Prerequisites
●● You require an Azure subscription to perform these steps. If you don't have one, you can create one
by following the steps outlined on the Create your Azure free account today24 webpage.
Steps
1. Open Azure Cloud Shell by going to https://shell.azure.com, or by using the Azure portal and
selecting Bash as the environment option.
Note: You can use a local version of the Azure CLI if you want, but it must be version 2.0.27 or later.
2. Create a resource group using the following command, substituting your values for the resource
group name and location:
az group create --name myResourceGroup --location eastus
3. Create a container, substituting your values for the resource group name and container name. Ensure
the DNS name is unique. The following command opens port 80 and applies a DNS name label to the
container:
az container create --resource-group myResourceGroup --name mycontainer --image microsoft/aci-helloworld --dns-name-label aci-demo --ports 80
4. Verify the container status by running the following command, again substituting your values where
appropriate:
az container show --resource-group myResourceGroup --name mycontainer --query "{FQDN:ipAddress.fqdn,ProvisioningState:provisioningState}" --out table
24 https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio
5. If the container's ProvisioningState is Succeeded, navigate to its FQDN in your browser. If you see a
webpage similar to the following example, congratulations! You've successfully deployed an application
running in a Docker container to Azure.
✔️ Note: You can check the container in the portal if you want. If you are finished using the resources in
Azure, delete them to ensure you do not incur costs.
Serverless definition
The core characteristics that define a serverless service are:
●● Service is consumption based. The service provisions resources on demand, and you only pay for what
you use. Billing is typically calculated by the number of function calls, code execution time, and
memory used. (Supporting services such as networking and storage could be charged separately.)
●● Low management overhead. Because a serverless service is cloud-hosted, you don't need to patch
VMs or maintain a burdensome operational workflow. Serverless services provide for the full abstraction
of servers, so developers can just focus on their code. There are no distractions around server
management, capacity planning, or availability.
●● Auto-scale. Compute execution can be in milliseconds, so it's almost instant. It provides for event-driven
scalability: application components react to events and trigger in near real-time with virtually
unlimited scalability.
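To make the consumption-based billing model concrete, the following Python sketch estimates a monthly bill from call count, execution time, and memory. The per-unit prices are hypothetical placeholders for illustration only, not actual Azure rates:

```python
# Illustrative only: estimate a consumption-based bill for a serverless service.
# These per-unit prices are hypothetical placeholders, not actual Azure rates.
PRICE_PER_MILLION_EXECUTIONS = 0.20   # hypothetical $ per 1M function calls
PRICE_PER_GB_SECOND = 0.000016        # hypothetical $ per GB-second consumed

def estimate_monthly_cost(executions, avg_duration_s, memory_gb):
    """Bill = charge for execution count + charge for resource consumption (GB-seconds)."""
    execution_charge = executions / 1_000_000 * PRICE_PER_MILLION_EXECUTIONS
    gb_seconds = executions * avg_duration_s * memory_gb
    consumption_charge = gb_seconds * PRICE_PER_GB_SECOND
    return execution_charge + consumption_charge

# Example: 3 million calls averaging 0.5 s each, with 128 MB of memory
print(round(estimate_monthly_cost(3_000_000, 0.5, 0.125), 2))
```

The key point the sketch illustrates is that with zero executions the bill is zero; you pay nothing for idle capacity.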
Functions as a service
Function as a service (FaaS) is an industry programming model that uses Functions to help achieve
serverless compute. These functions have the following characteristics:
●● Single responsibility. Functions are single purposed, reusable pieces of code that process an input and
return a result.
●● Short-lived. Functions don't stick around when they've finished executing, which frees up resources
for further executions.
●● Stateless. Functions don't hold any persistent state and don't rely on the state of any other process.
●● Event driven and scalable. Functions respond to predefined events and are instantly replicated as
many times as needed.
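The characteristics above can be illustrated with a minimal, hypothetical handler in Python: a single-purpose, stateless function that processes an input event and returns a result. This is a conceptual sketch of the FaaS model, not the actual Azure Functions programming model or event shape:

```python
# A FaaS-style handler: single-purpose, stateless, processes an input and
# returns a result. The event dictionary shape here is a made-up example.
def handle(event):
    """Greet the caller named in the event, defaulting when no name is given."""
    name = event.get("name", "Azure")
    return {"status": 200, "body": f"Hello, {name}"}

print(handle({"name": "DevOps"})["body"])
```

Because the handler holds no state between calls, the platform can replicate it as many times as demand requires and discard instances when they finish.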
Azure Functions
Azure Functions are Azure's implementation of the FaaS programming model, with additional capabilities.
Azure Functions are ideal when you're only concerned with the code running your service and not the
underlying platform or infrastructure. Azure Functions are commonly used when you need to perform
work in response to an event (often via a REST request, timer, or message from another Azure service),
and when that work can be completed quickly, within seconds or less.
Azure Functions scale automatically, and charges accrue only when a function is triggered, so they're a
good choice when demand is variable. For example, you might be receiving messages from an Internet of
Things (IoT) solution that monitors a fleet of delivery vehicles. You'll likely have more data arriving during
business hours. Azure Functions can scale out to accommodate these busier times.
Furthermore, Azure Functions are stateless; they behave as if they're restarted every time they respond to
an event. This is ideal for processing incoming data. And if state is required, they can be connected to an
Azure storage service. See Functions25 for more details.
Prerequisites
●● Use Azure Cloud Shell.
●● You require an Azure subscription to perform these steps. If you don't have one, you can create one
by following the steps outlined on the Create your Azure free account today27 webpage.
Steps
1. Open Azure Cloud Shell by going to https://shell.azure.com, or via the Azure Portal and selecting Bash
as the environment option.
25 https://azure.microsoft.com/en-us/services/functions/
26 https://azure.microsoft.com/en-us/resources/azure-serverless-computing-cookbook/
27 https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_
campaign=visualstudio
2. Create the local function app project by running the following command from the command line to
create a function app project in the MyFunctionProj folder of the current local directory. A GitHub
repo is also created in MyFunctionProj:
func init MyFunctionProj
When prompted, select a worker runtime from the following language choices:
- dotnet. Creates a .NET class library project (.csproj).
- node. Creates a JavaScript project.
3. Use the following command to navigate to the new MyFunctionProj project folder:
cd MyFunctionProj
4. Create a function using the following command, which creates an HTTP-triggered function named
MyHttpTrigger:
func new --name MyHttpTrigger --template "HttpTrigger"
5. Update the function. By default, the template creates a function that requires a function key when
making requests. To make it easier to test the function in Azure, you need to update the function to
allow anonymous access. The way that you make this change depends on your functions project
language. For C#:
●● Open the MyHttpTrigger.cs code file that contains your new function. Update the AuthorizationLevel
attribute in the function definition to a value of Anonymous, as shown in the following code, and
save your changes:
[FunctionName("MyHttpTrigger")]
public static IActionResult Run([HttpTrigger(AuthorizationLevel.Anonymous,
    "get", "post", Route = null)] HttpRequest req, ILogger log)
6. Run the function locally. The following command starts the function app, which runs using the same
Azure Functions runtime that is in Azure:
func host start --build
... (Azure Functions Core Tools startup banner and build output omitted for readability)
Http Functions:
HttpTrigger: http://localhost:7071/api/MyHttpTrigger
7. Copy the URL of your HttpTrigger function from the runtime output, and paste it into your browser's
address bar. Append the query string ?name=yourname to this URL, and execute the request. The
browser displays the response to the GET request returned by the local function.
Now that you have run your function locally, you can create the function app and other required resourc-
es in Azure.
8. Create a resource group. An Azure resource group is a logical container into which Azure resources
such as function apps, databases, and storage accounts are deployed and managed. Create the
resource group by using the following az group create command:
az group create --name myResourceGroup --location westeurope
9. Create an Azure Storage account. Functions uses a general-purpose account in Azure Storage to
maintain state and other information about your functions. Create a general-purpose storage account
in the resource group you created by using the az storage account create command.
In the following command, substitute a globally unique storage account name where you see the
<storage_name> placeholder. Storage account names must be between 3 and 24 characters in length,
and can contain numbers and lowercase letters only:
az storage account create --name <storage_name> --location westeurope --resource-group myResourceGroup --sku Standard_LRS
After the storage account has been created, the Azure CLI shows information similar to the following
example:
{
  "creationTime": "2017-04-15T17:14:39.320307+00:00",
  "id": "/subscriptions/bbbef702-e769-477b-9f16-bc4d3aa97387/resourceGroups/myresourcegroup/...",
  "kind": "Storage",
  "location": "westeurope",
  "name": "myfunctionappstorage",
  "primaryEndpoints": {
    "blob": "https://myfunctionappstorage.blob.core.windows.net/",
    "file": "https://myfunctionappstorage.file.core.windows.net/",
    "queue": "https://myfunctionappstorage.queue.core.windows.net/",
    "table": "https://myfunctionappstorage.table.core.windows.net/"
  },
  ...
  // Remaining output has been truncated for readability.
}
10. Create a function app. You must have a function app to host the execution of your functions. The
function app provides an environment for serverless execution of your function code. It lets you group
functions as a logic unit for easier management, deployment, and sharing of resources. Create a
function app by using the az functionapp create command.
In the following command, substitute a unique function app name where you see the <app_name>
placeholder, and the storage account name for <storage_name>. The <app_name> is used as the default
DNS domain for the function app, and so the name needs to be unique across all apps in Azure. You
should also set the <language> runtime for your function app, from dotnet (C#) or node (JavaScript).
az functionapp create --resource-group myResourceGroup --consumption-plan-location westeurope \
--name <app_name> --storage-account <storage_name> --runtime <language>
Setting the consumption-plan-location parameter means that the function app is hosted in a Consumption
hosting plan. In this serverless plan, resources are added dynamically as required by your functions,
and you only pay when functions are running.
After the function app has been created, the Azure CLI shows information similar to the following
example:
{
  "availabilityState": "Normal",
  "clientAffinityEnabled": true,
  "clientCertEnabled": false,
  "containerSize": 1536,
  "dailyMemoryTimeQuota": 0,
  "defaultHostName": "quickstart.azurewebsites.net",
  "enabled": true,
  "enabledHostNames": [
    "quickstart.azurewebsites.net",
    "quickstart.scm.azurewebsites.net"
  ],
  ...
  // Remaining output has been truncated for readability.
}
11. Deploy the function app project to Azure. After the function app is created in Azure, you can use the
func azure functionapp publish command to deploy your project code to Azure:
func azure functionapp publish <FunctionAppName>
You'll see something like the following output, which has been truncated for readability.
Getting site publishing info...
Preparing archive...
Uploading content...
Upload completed successfully...
Deployment completed successfully...
Syncing triggers...
✔️ Note: Remember to delete the resources if you are no longer using them.
Batch services
Azure Batch is a fully-managed cloud service that provides job scheduling and compute resource
management. It creates and manages a pool of compute nodes (VMs), installs the applications you want
to run, and schedules jobs to run on the nodes. It enables applications, algorithms, and computationally
intensive workloads to be broken into individual tasks for execution to be easily and efficiently run in
parallel at scale.
Using Azure Batch, there is no cluster or job scheduler software to install, manage, or scale. Instead, you
use Batch APIs and tools, command-line scripts, or the Azure portal to configure, manage, and monitor
your jobs.
Independent/parallel
This is the most commonly used scenario. The applications or tasks do not communicate with each other.
Instead, they operate independently. The more VMs or nodes you can bring to a task, the quicker it will
complete. Examples of usage would be Monte Carlo risk simulations, transcoding, and rendering a movie
frame by frame.
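As a local illustration of this embarrassingly parallel pattern (not the Batch API itself), the following Python sketch splits a Monte Carlo estimate of pi into independent tasks that never communicate, then combines their results, much as Batch fans tasks out across pool nodes:

```python
# Independent/parallel in miniature: each task works alone on its slice of the
# problem, and results are only combined at the end. Threads stand in for
# Batch pool nodes here; this is a sketch, not the Azure Batch SDK.
import random
from concurrent.futures import ThreadPoolExecutor

def task(seed, samples):
    """One independent task: count random points that fall inside the unit circle."""
    rng = random.Random(seed)  # each task has its own deterministic stream
    return sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def run_job(tasks=4, samples_per_task=50_000):
    """Run all tasks in parallel and combine the results into a pi estimate."""
    with ThreadPoolExecutor(max_workers=tasks) as pool:
        futures = [pool.submit(task, seed, samples_per_task) for seed in range(tasks)]
        hits = sum(f.result() for f in futures)
    return 4.0 * hits / (tasks * samples_per_task)

print(f"pi is approximately {run_job():.2f}")
```

Because the tasks share nothing, doubling the number of workers roughly halves the wall-clock time for a CPU-bound job on real compute nodes, which is exactly why this scenario scales so well on Batch.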
Tightly coupled
In traditional high-performance computing (HPC) workloads, such as scientific and engineering tasks,
applications or tasks communicate with each other. They typically use the Message Passing
Interface (MPI) API for this inter-node communication. However, they can also use low-latency,
high-bandwidth Remote Direct Memory Access (RDMA) networking. Examples of usage would be car
crash simulations, fluid dynamics, and Artificial Intelligence (AI) training frameworks.
Running an application
To get an application to run, you must have the following items:
●● An application. This could just be a standard desktop application; it doesn't need to be cloud aware.
●● Resource management. You need a pool of VMs, which Batch service creates, manages, monitors, and
scales.
●● A method to get the application onto the VMs. You can:
●● Store the application in blob storage, and then copy it onto each VM.
●● Have a container image and deploy it.
●● Upload a zip or application package.
●● Create a custom VM image, then upload and use that.
●● Job scheduler. Create and define the tasks that will combine to make the job.
●● Output storage. You need somewhere to place the output data, typically Blob storage.
✔️ Note: The unit of execution is what can be run on the command line in the VM. The application itself
does not need to be repackaged.
Cost
Batch services provide:
●● The ability to scale VMs as needed.
●● The ability to increase and decrease resources on demand.
●● Efficiency, as it makes best use of the resources.
●● Cost effectiveness, because you only pay for the infrastructure you use when you are using it.
✔️ Note: There is no additional charge for using a Batch service. You only pay for the underlying resourc-
es consumed, such as the VMs, storage, and networking.
Key capabilities
By using Service Fabric, you can:
●● Deploy to Azure or to on-premises datacenters running Windows or Linux operating systems, with
zero code changes. You write once, and then deploy anywhere to any Service Fabric cluster.
●● Develop scalable applications composed of microservices by using the Service Fabric programming
models, containers, or any code.
●● Develop highly reliable stateless and stateful microservices. Simplify the design of your application by
using stateful microservices.
●● Use the Reliable Actors programming model to create cloud objects with self-contained code and
state.
●● Deploy and orchestrate containers that include Windows containers and Linux containers. Service
Fabric is a data-aware, stateful container orchestrator.
●● Deploy applications in seconds at high density, with hundreds or thousands of applications or con-
tainers per machine.
●● Deploy different versions of the same application side by side, and upgrade each application inde-
pendently.
●● Manage the lifecycle of your applications without any downtime, including breaking and nonbreaking
upgrades.
●● Scale out or scale in the number of nodes in a cluster. As you scale nodes, your applications automati-
cally scale.
●● Monitor and diagnose the health of your applications and set policies for performing automatic
repairs.
●● Watch the resource balancer orchestrate the redistribution of applications across the cluster. Service
Fabric recovers from failures and optimizes the distribution of load based on available resources.
✔️ Note: Service Fabric is currently undergoing a transition to open development. The goal is to move
the entire build, test, and development process to GitHub. You can view, investigate, and contribute on
the https://github.com/Microsoft/service-fabric/28 page. There are also many sample files and scenarios
to help in deployment and configuration.
Application Model
Service Fabric applications consist of one or more services that work together to automate business
processes. A service is an executable that runs independently of other services, and is composed of code,
configuration, and data. Each element is separately versionable and deployable.
28 https://github.com/Microsoft/service-fabric/
Creating an application instance requires an application type, which is the template that specifies which
services are part of the application. This concept is similar to object-oriented programming. The applica-
tion type is comparable to class definition, and the application is comparable to the instance. You can
create multiple named application instances from one application type.
The same concept applies to services. The service type defines the code and configuration for the service
and the endpoints that the service uses for interaction. You can create multiple service instances by using
one service type. An application specifies how many instances of a service type should be created.
Both application type and service type are described through XML files. Every element of the application
model is independently versionable and deployable.
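The class-versus-instance analogy above can be sketched in Python. The names below are hypothetical examples; real Service Fabric application and service types are described in XML manifests, not code like this:

```python
# Analogy from the text: an application type is like a class definition, and a
# named application instance is like an object created from it. All names here
# are illustrative, not real Service Fabric APIs.
class ServiceType:
    """Defines the code/config for a service; many instances can be created from it."""
    def __init__(self, name, instance_count):
        self.name, self.instance_count = name, instance_count

class ApplicationType:
    """Template (the 'class') listing which service types make up the application."""
    def __init__(self, name, service_types):
        self.name, self.service_types = name, service_types

    def create_instance(self, instance_name):
        # One application type can yield many named application instances.
        return {"application": instance_name,
                "services": {s.name: s.instance_count for s in self.service_types}}

shop_type = ApplicationType("ShopAppType",
                            [ServiceType("WebFrontEnd", 3), ServiceType("OrderProcessor", 2)])
print(shop_type.create_instance("fabric:/ShopApp1"))
```

Calling create_instance again with a different name would yield a second, independent application instance from the same type, mirroring how multiple named instances are created in a cluster.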
Resource balancing
While applications are running in Azure Service Fabric, they are constantly monitored for health. Service
Fabric ensures that services keep running well and that the available server resources are used optimally.
This means that sometimes services are moved from busy nodes to less busy nodes to keep overall
resource consumption well balanced. The image above displays an imbalanced cluster. Node 2 hosts
three services, while Node 1, and Nodes 3-5 are empty. Service Fabric will detect this situation and
resolve it.
After Service Fabric completes the balancing operation, your cluster will look like the image above. Every
node now runs one service.
In reality, each node will likely run many services. Because every service is different, it's usually possible to
combine services on one node to make optimal use of the server's resources. Service Fabric does all of
this automatically.
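A toy version of such a balancing pass, purely to illustrate the idea (Service Fabric's real placement logic also weighs load metrics, placement constraints, and fault domains), might look like this:

```python
# Toy rebalancing pass: repeatedly move a service from the busiest node to the
# least busy one until the spread between nodes is minimal. This only
# illustrates the concept; it is not Service Fabric's actual algorithm.
def rebalance(nodes):
    """nodes: dict of node name -> list of hosted service names. Mutates and returns it."""
    while True:
        busiest = max(nodes, key=lambda n: len(nodes[n]))
        idlest = min(nodes, key=lambda n: len(nodes[n]))
        if len(nodes[busiest]) - len(nodes[idlest]) <= 1:
            return nodes  # as balanced as it can get
        nodes[idlest].append(nodes[busiest].pop())

# The imbalanced cluster from the text: Node 2 hosts three services, the rest none.
cluster = {"Node1": [], "Node2": ["SvcA", "SvcB", "SvcC"], "Node3": [],
           "Node4": [], "Node5": []}
print(rebalance(cluster))
```

After the pass, no node hosts more than one of the three services, matching the balanced picture the text describes.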
Programming Models
When developing applications for use on Service Fabric, there are a number of options available.
Windows
Azure Service Fabric comes with an SDK, and development tool support. You can develop Windows
clusters in C# using Microsoft Visual Studio 2015 or Visual Studio 2017.
Linux
Developing Java-based services for Linux clusters is probably easiest by using Eclipse Neon29. However,
it's also possible to program in C# using .NET Core and Visual Studio Code.
Programming models
You can choose from four different programming models to create a Service Fabric application:
●● Reliable Services. Reliable Services is a framework you can use to create services that use specific
features, which Service Fabric provides. One important feature is a distributed data storage mecha-
nism. Others are custom load and health reporting, and automatic endpoint registration. These enable
discoverability and interaction between services.
●● There are two distinct types of Reliable Services that you can create:
●● Stateless services are intended to perform operations that don't require keeping an internal
state. Examples are services that host ASP.NET Web APIs, or services that autonomously
process items read from a Service Bus Queue.
●● Stateful services keep an internal state, which is automatically stored redundantly across
multiple nodes for availability and error recovery. The data stores are called Reliable Collections.
●● Reliable Actors. Reliable Actors is a framework built on top of Reliable Services that implements the
Virtual Actor design pattern. An Actor encapsulates a small piece of state and behavior. One example
is Digital Twins, in which an Actor represents the state and the abilities of a device in the real
world. Many IoT applications use the Actor model to represent the state and abilities of devices. The
state of an Actor can be volatile, or it can be kept in the distributed store. This store can be
memory-based or on a disk.
●● Guest executables. You can also package and run existing applications as a Service Fabric (stateless)
service. This makes applications highly available. The platform ensures that the instances of an
application are running. You can also upgrade applications with no downtime. If problems are
reported during an upgrade, Service Fabric can automatically roll back the deployment. Service Fabric
also enables you to run multiple applications together in a cluster, which reduces the need for
hardware resources.
✔️ Note: When using Guest executables, you cannot use some of the platform capabilities (such as the
Reliable Collections).
●● Containers. You can run Containers in a similar way as running guest executables. What’s different is
that Service Fabric can restrict resource consumption (CPU and memory, for example) per container.
Limiting resource consumption per service enables you to achieve even higher densities on your
cluster.
29 https://www.eclipse.org/neon/
For example, in this image you see six nodes. Imagine that node pairs one and two, three and four,
and five and six each share a server rack, which means that they share a fault domain. Then, by policy,
node pairs one and six, two and three, and four and five were put in upgrade domains. This means
that changes to these node pairs are applied simultaneously. This includes changes to the cluster
software and to running services.
By upgrading two nodes at the same time, upgrades complete quickly. By adding more upgrade domains,
your upgrades become more granular and, because of that, have a lower impact. The most commonly
used setup is to have one upgrade domain for one fault domain.
This means that as exhibited in this image, upgrades are applied to one node at a time. When services
are deployed to both multiple fault domains and correctly configured upgrade domains, they are able
to manage node failures, even during upgrades.
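The upgrade-domain walk described above can be sketched as follows, using the six-node example from the text (the node and domain names are illustrative):

```python
# Nodes in the same upgrade domain are updated simultaneously; domains are
# processed one at a time, which limits the blast radius of any single change.
upgrade_domains = {
    "UD1": ["node1", "node6"],
    "UD2": ["node2", "node3"],
    "UD3": ["node4", "node5"],
}

def upgrade_batches(domains):
    """Return the batches of nodes that are updated together, in walk order."""
    return [nodes for _, nodes in sorted(domains.items())]

for batch in upgrade_batches(upgrade_domains):
    print("upgrading together:", batch)
```

With three upgrade domains of two nodes each, at most two nodes are ever offline for an upgrade at once; with six single-node upgrade domains, the walk would touch one node at a time, as in the one-domain-per-fault-domain setup the text recommends.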
Boundaries
You can grow or shrink the number of servers in the cluster by changing the size of a VM scale
set. However, there are some restrictions that apply.
Lower boundary
Earlier in this module, you learned that one of the platform services of Service Fabric is a distributed data
store. This means that data stored on one node is replicated to a quorum (majority) of secondary nodes.
To work properly, you'll need to have multiple healthy nodes in your cluster. The precise number needed
depends on the desired reliability of your services.
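The quorum arithmetic behind this requirement is simple. The following Python sketch shows how many node failures a replica set of a given size can survive while keeping a majority of replicas healthy:

```python
# Majority-quorum arithmetic for a replicated data store.
def quorum(replicas):
    """Smallest majority of a replica set."""
    return replicas // 2 + 1

def tolerated_failures(replicas):
    """Failures survivable while a quorum of replicas stays healthy."""
    return replicas - quorum(replicas)

for n in (3, 5, 7):
    print(f"{n} replicas: quorum = {quorum(n)}, tolerates {tolerated_failures(n)} failure(s)")
```

This is why a replica set of five, the minimum VM count for the Silver and Gold durability tiers, can lose two nodes and still accept writes, while a set of three can lose only one.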
| Durability tier | Required minimum number of VMs | Supported VM SKUs | Updates you make to your virtual machine scale set | Updates and maintenance initiated by Azure |
|---|---|---|---|---|
| Gold | 5 | Full-node SKUs dedicated to a single customer (for example, L32s, GS5, G5, DS15_v2, D15_v2) | Can be delayed until approved by the Service Fabric cluster | Can be paused for 2 hours per update domain to allow additional time for replicas to recover from earlier failures |
| Silver | 5 | VMs of single core or above | Can be delayed until approved by the Service Fabric cluster | Cannot be delayed for any significant period of time |
| Bronze | 1 | All | Will not be delayed by the Service Fabric cluster | Cannot be delayed for any significant period of time |
If you're using the Bronze durability level, you must notify Service Fabric beforehand of your intention to
remove a node. This instructs Service Fabric to move services and data away from the node. In other
words, it drains the node.
Next, you need to remove that node from the cluster. You must run the PowerShell cmdlet
Disable-ServiceFabricNode for each node that you want to remove, and wait for Service Fabric to
complete the operation.
If you don't properly remove the node from the cluster, Service Fabric will assume that the nodes have
simply failed, and will report them as having the status Down.
Windows Server
You can deploy a cluster manually by using a set of PowerShell tools on a prepared group of servers
running Windows Server 2016 or Windows Server 2019. This approach is called a Service Fabric
standalone cluster.
It's also possible to create a cluster in Azure. You can do this using the Azure Portal, or by using an Azure
Resource Manager template. This will create a cluster and everything that's needed to run applications on
it.
Finally, there is an option to create a cluster specifically for development use that runs on just one
machine simulating multiple servers. This is called a local development cluster, and it allows developers to
debug their applications before deploying them to a production cluster.
✔️ Note: It's important to know that for every type of deployment, the actual Service Fabric binaries are
the same. This means that an application that works on a development cluster will also work on an
on-premises or cloud-hosted cluster without requiring modifications to the code. This is similar to the
portability that containerization offers.
Linux
At the time of this writing, Service Fabric for Linux has been released. However, it does not yet have
complete feature parity between Windows and Linux; some Windows features are not available on
Linux. For example, you cannot create a Service Fabric standalone cluster, and all programming
models are in preview (including Java/C# Reliable Actors, Reliable Stateless Services, and Reliable
Stateful Services).
You can create a Linux cluster in Azure and create a local development cluster on the Linux Ubuntu 16.04
and Red Hat Enterprise Linux 7.4 (preview support) operating systems.
For more details, see the Differences between Service Fabric on Linux and Windows31 page.
✔️ Note: Standalone clusters currently aren't supported for Linux. Linux is supported on one-box for
development and Linux virtual machine clusters.
✔️ Note: Detailed steps on how to set up a Service Fabric cluster are available on the Create a stan-
dalone cluster running on Windows Server page.32
Prerequisites:
●● You need to download the setup files and PowerShell scripts for the Service Fabric standalone package,
which you run to set up a Service Fabric cluster. You can download these from the Download Link -
Service Fabric Standalone Package - Windows Server page.33
31 https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-linux-windows-differences
32 https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-creation-for-windows-server
33 https://go.microsoft.com/fwlink/?LinkId=730690
✔️ Note: Remember that currently standalone clusters aren't supported for Linux. Linux is supported on
one-box for development and Linux multi-machine clusters on Azure. As such, there is no equivalent
download package for Linux.
3. Create:
●● A creation script CreateServiceFabricCluster.ps1 is also provided. Running this will create
the entire cluster for you on all designated machines. You can run the following command:
.\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.json -AcceptEULA
4. Connect:
●● To connect to the cluster, run the following command:
Connect-ServiceFabricCluster -ConnectionEndpoint 192.13.123.2345:19000
34 https://go.microsoft.com/fwlink/?LinkId=730690
- Service Fabric Explorer is a service that runs in the cluster, which you access using a browser. Open a browser and go to http://localhost:19000/explorer.
5. Upgrade:
●● You can run the PowerShell cmdlet Start-ServiceFabricClusterUpgrade, which is installed on
the nodes as part of the cluster deployment, to upgrade the cluster software to a specific version.
6. Remove:
●● If you need to remove a cluster, run the script RemoveServiceFabricCluster.ps1. This
removes Service Fabric from each machine in the configuration. Use the following command:
# Removes Service Fabric from each machine in the configuration
.\RemoveServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.json -Force
- To remove Service Fabric from the current machine, run the `.\CleanFabric.ps1` script.
You'll likely add your own public IP address and load balancer to this cluster, if you need to run services
that are reachable over the internet.
Placement Constraints
You can use Placement constraints to:
●● Isolate workloads from each other.
●● Lift and shift an existing N-tier application into Azure Service Fabric.
●● Run services on specific server configurations.
In Microsoft Operations Management Suite (OMS), there is a management solution, or plug-in,
designed specifically for diagnostics on Service Fabric clusters. This solution is called Service
Fabric Analytics.
Adding this to your OMS workspace provides you with a dashboard. In one glance, you'll get an overview
of important issues and cluster and application events. These graphs are based on diagnostics data
gathered from the servers forming the Service Fabric cluster. By using the VM extension Windows Azure
Diagnostics, you install an agent on your VMs that is able to collect and upload this diagnostics data into
a storage account. OMS can access that data to analyze and present the information on the dashboard.
If you want to drill down to the details of the graphs, you can do so by clicking on the items in the table.
This will navigate you to the OMS Log Analytics management solution. By using Log Analytics, you can
view detailed descriptions of all captured diagnostics data.
You can review details of diagnostics events such as Event Trace for Windows (ETW), that were generated
by Services and Actors running in Service Fabric. ETW is a high-performance logging system that you can
use for logging information such as errors or diagnostics traces from your application.
OMS has been used for a lot of our diagnostics so far, but there are alternative tools that you can use.
EventFlow
Created by the Microsoft Visual Studio team, EventFlow is an open-source library designed specifically for
in-process log collection. This library enables your services to send logs directly to a central location,
without relying on an agent such as the Azure Diagnostics extension. This makes sense if
services come and go, or when services need to send their data to varying central locations.
EventFlow does not rely on the Event Trace for Windows infrastructure; it can send logs to many outputs,
including Application Insights, OMS, Azure Event Hubs, and the console window.
In the code sample above, a backup is created first. After that operation completes, the method
PostBackupCallAsync will be invoked. In this method, the local backup folder is copied to the central
location. The implementation of _centralBackupStore is omitted.
Restoring backups
Restoring backups also requires some development effort. The Reliable Service needs code that Service
Fabric executes in a data-loss situation, such as after a node failure, or when data loss is triggered
deliberately through code. After data loss is triggered, your Reliable Service can access your central
storage location and download the contents to the local backup folder. After that, data can be restored
from the local backup folder. Every running primary replica of your Stateful Reliable Services must
restore its own backups. The following code demonstrates these steps:
public async Task BeginRestoreBackup()
{
    var partitionSelector = PartitionSelector.PartitionKeyOf(Context.ServiceName,
        ((Int64RangePartitionInformation)Partition.PartitionInfo).LowKey);
    var operationId = Guid.NewGuid();
    // ... (remainder of the sample omitted)
}
In this code sample, backups are retrieved from a central location, and the local folder is used to call
RestoreAsync. The call to OnDataLossAsync can be triggered by executing the following command:
FabricClient.TestManagementClient.StartPartitionDataLossAsync
Alternatively, you can use the following PowerShell command:
Start-ServiceFabricPartitionDataLoss
Again, the implementation of _centralBackupStore is omitted.
Fire drills
Consider your backup strategy carefully. The amount of data loss that is acceptable differs for every
service. The size of the central store will grow quickly if you create many full backups.
Make sure to gain hands-on experience by practicing both creating and restoring backups. This way, you'll
know that your solution works, and you won't discover during a real disaster-recovery situation that your
backup strategy is insufficient.
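As an illustration of keeping the central store's growth in check, the following Python sketch selects old full backups for deletion once a retention count is exceeded. The function and data shape are hypothetical, not part of Service Fabric:

```python
def backups_to_prune(backups, keep=5):
    """Given (name, created) pairs, return the names of full backups
    that fall outside the most recent `keep` and can be deleted."""
    ordered = sorted(backups, key=lambda b: b[1], reverse=True)
    return [name for name, _ in ordered[keep:]]
```

A real retention policy would typically also account for incremental backups that depend on an older full backup.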
Module 16 Azure Deployment Models and Services
YAML definition
A sample of a YAML definition would be similar to the following code:
version: '3'
services:
  web:
    image: microsoft/iis:nanoserver
    ports:
      - "80:80"
networks:
  default:
    external:
      name: nat
This sample file results in a single container based on the image named microsoft/iis:nanoserver.
We did not specify a container registry to use, so Docker Hub is used by default. The container exposes
IIS at port 80. We connect the container port to host port 80 by specifying "80:80". Finally, we select
the default nat network to connect containers.
Note: The values for this command will be different for your own cluster.
2. Deploy the docker-compose.yml file by using the following command:
`New-ServiceFabricComposeDeployment -DeploymentName ComposeDemo -Compose x:\docker-compose.yml`
4. When the deployment has finished, you can navigate to your new service by opening a browser and
navigating to the Service Fabric cluster domain name at port 80.
For example: http://devopsdemowin.westeurope.cloudapp.azure.com.
You should see the IIS information page running as a container on your Service Fabric cluster.
5. To remove a deployment, run the following command:
Remove-ServiceFabricComposeDeployment -DeploymentName ComposeDemo
Lab
Deploying a Dockerized Java app to Azure Web App for Containers
In this lab, Deploying a Dockerized Java app to Azure Web App for Containers35, you will learn how to:
●● Configure a CI pipeline to build and publish a Docker image
●● Deploy to an Azure Web App for Containers
●● Configure MySQL connection strings in the Web App
35 https://azuredevopslabs.com/labs/vstsextend/dockerjava/
Module Review and Takeaways
Checkbox
Availability sets are made up of which of the following?
(choose two)
Update Domains
Azure AD Domain Services
Fault Domains
Event Domains
Dropdown
Complete the following sentence.
Azure App Service is an Azure Platform-as-a-Service offering that is used for ____________.
processing events with serverless code.
detecting, triaging, and diagnosing issues in your web apps and services.
building, testing, releasing, and monitoring your apps from within a single software application.
hosting web applications, REST APIs, and mobile back ends.
Checkbox
Which of the following are features of Web App for Containers?
(choose all that apply)
Deploys containerized applications using Docker Hub, Azure Container Registry, or private registries.
Incrementally deploys apps into production with deployment slots and slot swaps.
Scales out automatically with auto-scale.
Uses the App Service Log Streaming feature to allow you to see logs from your application.
Supports PowerShell and Win-RM for remotely connecting directly into your containers.
Multiple choice
Which of the following statements is best practice for Azure Functions?
Azure Functions should be stateful.
Azure Functions should be stateless.
Checkbox
Which of the following features are supported by Azure Service Fabric?
(choose all that apply)
Reliable Services
Reliable Actor patterns
Guest Executables
Container processes
Checkbox
Which of the following describe primary uses for Placement Constraints?
(choose all that apply)
Isolate workloads from each other
Control which nodes in a cluster that a service can run on
‘Lift and shift’ an existing N-tier application into Azure Service Fabric.
Describe resources that nodes have, and that services consume, when they are run on a node.
Checkbox
Which of the following are network models for deploying clusters in Azure Kubernetes Service (AKS)?
(choose two)
Basic Networking
Native Model
Advanced Networking
Resource Model
Multiple choice
True or false: containers are a natural fit for an event-driven architecture?
True
False
Multiple choice
Which of the following cloud service models provides the most control, flexibility, and portability?
Infrastructure-as-a-Service (IaaS)
Functions-as-a-Service (FaaS)
Platform-as-a-Service (PaaS)
Answers
Multiple choice
Which of the following Azure products provides management capabilities for applications that run across
multiple Virtual Machines, and allows for the automatic scaling of resources, and load balancing of
traffic?
Azure Service Fabric
■■ Virtual Machine Scale Sets
Azure Kubernetes Service
Virtual Network
Explanation
Virtual Machine Scale Sets is the correct answer.
All other answers are incorrect.
Azure Service Fabric is for developing microservices and orchestrating containers on Windows or Linux.
Azure Kubernetes Service (AKS) simplifies the deployment, management, and operations of Kubernetes.
Virtual Network is for setting up and connecting virtual private networks.
With Azure VMs, scale is provided for by Virtual Machine Scale Sets (VMSS). Azure VMSS let you create and
manage groups of identical, load balanced VMs. The number of VM instances can increase or decrease
automatically, in response to demand or a defined schedule. Azure VMSS provide high availability to your
applications, and allow you to centrally manage, configure, and update large numbers of VMs. With Azure
VMSS, you can build large-scale services for areas such as compute, big data, and container workloads.
Checkbox
Availability sets are made up of which of the following?
(choose two)
■■ Update Domains
Azure AD Domain Services
■■ Fault Domains
Event Domains
Explanation
Update Domains and Fault Domains are the correct answers.
Azure AD Domain Services and Event Domains are incorrect answers.
Azure AD Domain Services provides managed domain services that are compatible with Windows Server Active
Directory. An Event Domain is an Azure Event Grid tool for managing and publishing events to large numbers
of related topics.
Update Domains are a logical section of the datacenter, implemented by software and logic. When a
maintenance event occurs (such as a performance update or critical security patch applied to the host), the
update is sequenced through Update Domains. Sequencing updates by using Update Domains ensures that
the entire datacenter does not fail during platform updates and patching.
Fault Domains provide for the physical separation of your workload across different hardware in the
datacenter. This includes power, cooling, and network hardware that supports the physical servers located in
server racks. If the hardware that supports a server rack becomes unavailable, only that specific rack of serv-
ers would be affected by the outage.
Dropdown
Complete the following sentence.
Azure App Service is an Azure Platform-as-a-Service offering that is used for ____________.
processing events with serverless code.
detecting, triaging, and diagnosing issues in your web apps and services.
building, testing, releasing, and monitoring your apps from within a single software application.
■■ hosting web applications, REST APIs, and mobile back ends.
Explanation
Hosting web applications, REST APIs, and mobile back ends, is the correct answer.
The other answers are incorrect because:
Processing events with serverless code is performed by Azure Functions.
Detecting, triaging, and diagnosing issues in your web apps and services is performed by Application
Insights.
Building, testing, releasing, and monitoring your apps from within a single software application is performed
by Visual Studio App Center.
Azure App Service is a Platform-as-a-Service offering on Azure for hosting web applications, REST APIs, and
mobile back ends. With Azure App Service you can create powerful cloud apps quickly within a fully
managed platform. You can use Azure App Service to build, deploy, and scale enterprise-grade web, mobile,
and API apps to run on any platform. Azure App Service ensures your applications meet rigorous
performance, scalability, security, and compliance requirements, and benefit from a fully managed platform
for performing infrastructure maintenance.
Checkbox
Which of the following are features of Web App for Containers?
(choose all that apply)
■■ Deploys containerized applications using Docker Hub, Azure Container Registry, or private registries.
■■ Incrementally deploys apps into production with deployment slots and slot swaps.
■■ Scales out automatically with auto-scale.
■■ Uses the App Service Log Streaming feature to allow you to see logs from your application.
■■ Supports PowerShell and Win-RM for remotely connecting directly into your containers.
Explanation
All of the answers are correct.
Web App for Containers from the Azure App Service allows customers to use their own containers, and
deploy them to Azure App Service as a web app. Similar to the Azure Web App solution, Web App for
Containers eliminates time-consuming infrastructure management tasks during container deployment,
updating, and scaling to help developers focus on coding and getting their apps to their end users faster.
Furthermore, Web App for Containers provides integrated CI/CD capabilities with DockerHub, Azure
Container Registry, and VSTS, as well as built-in staging, rollback, testing-in-production, monitoring, and
performance testing capabilities to boost developer productivity.
For Operations, Web App for Containers also provides rich configuration features so developers can easily
add custom domains, integrate with AAD authentication, add SSL certificates and more — all of which are
crucial to web app development and management. Web App for Containers provides an ideal environment
to run web apps that do not require extensive infrastructure control.
Multiple choice
Which of the following statements is best practice for Azure Functions?
Azure Functions should be stateful.
■■ Azure Functions should be stateless.
Explanation
Azure Functions should be stateless is the correct answer.
Azure Functions should be stateful is an incorrect answer.
Azure Functions are an implementation of the Functions-as-a-Service programming model on Azure, with
additional capabilities. It is best practice to ensure that your functions are as stateless as possible. Stateless
functions behave as if they have been restarted, every time they respond to an event. You should associate
any required state information with your data instead. For example, an order being processed would likely
have an associated state member. A function could process an order based on that state, update the data as
required, while the function itself remains stateless. If you require stateful functions, you can use the Durable
Functions Extension for Azure Functions or output persistent data to an Azure Storage service.
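The stateless pattern described above can be sketched as follows. This is an illustrative Python sketch, not the Azure Functions programming model; the function and state names are hypothetical:

```python
def process_order(order: dict) -> dict:
    """Advance an order based on the state carried in the data itself.

    The function holds no state between invocations: everything it needs
    arrives with the event, and everything it changes is returned (and
    would be persisted to storage by the caller).
    """
    transitions = {
        "received": "validated",
        "validated": "charged",
        "charged": "shipped",
    }
    next_state = transitions.get(order["state"], order["state"])
    return {**order, "state": next_state}
```

Because the state member travels with the data, any instance of the function can process any order, which is what allows the platform to scale instances freely.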
Checkbox
Which of the following features are supported by Azure Service Fabric?
(choose all that apply)
■■ Reliable Services
■■ Reliable Actor patterns
■■ Guest Executables
■■ Container processes
Explanation
All of the answers are correct.
Reliable Services is a framework for creating services that use specific features provided by Azure Service
Fabric. The two distinct types of Reliable Services you can create are stateless services and stateful services.
Reliable Actors is a framework built on top of Reliable Services that implements the Virtual Actors design
pattern. An Actor encapsulates a small piece of state or behavior. The state of an Actor can be volatile, or
it can be kept persistent in a distributed store. This store can be memory-based or on a disk.
Guest Executables are existing applications that you package and run as Service Fabric services (stateless).
This makes the applications highly available, as Service Fabric keeps the instances of your applications
running. Applications can be upgraded with no downtime, and Service Fabric can automatically roll back
deployments if needed.
Containers can be run in a way that is similar to running guest executables. Furthermore, with containers,
Service Fabric can restrict resource consumption per container (by CPU processes or memory usage, for
example). Limiting resource consumption per service allows you to achieve higher densities on your cluster.
Checkbox
Which of the following describe primary uses for Placement Constraints?
(choose all that apply)
■■ Isolate workloads from each other
■■ Control which nodes in a cluster that a service can run on
■■ ‘Lift and shift’ an existing N-tier application into Azure Service Fabric.
Describe resources that nodes have, and that services consume, when they are run on a node.
Explanation
The correct answers are: Isolate workloads from each other, control which nodes in a cluster that a service
can run on, and ‘Lift and shift’ an existing N-tier application into Azure Service Fabric.
Describe resources that nodes have, and that services consume, when they are run on a node, is an incorrect
answer. Metrics are used to describe resources that nodes have, and that services consume, when they are
run on a node.
Placement Constraints can control which nodes in a cluster that a service can run on. You can define any
set of properties by node type, and then set constraints for them. Placement Constraints are primarily used
to: Isolate workloads from each other; 'Lift and shift' an existing N-tier application into Azure Service Fabric;
Run services on specific server configurations.
Placement Constraints can restrict Service Fabric's ability to balance overall cluster resource consumption.
Make sure that your Placement Constraints are not too restrictive. Otherwise, if Service Fabric cannot
comply with a Placement Constraint, your service will not run.
Checkbox
Which of the following are network models for deploying clusters in Azure Kubernetes Service (AKS)?
(choose two)
■■ Basic Networking
Native Model
■■ Advanced Networking
Resource Model
Explanation
Basic Networking and Advanced Networking are correct answers.
Native Model and Resource Model are incorrect answers because these are two deployment models
supported by Azure Service Fabric.
In AKS, you can deploy a cluster to use either Basic Networking or Advanced Networking. With Basic
Networking, the network resources are created and configured as the AKS cluster is deployed. Basic
Networking is suitable for small development or test workloads, as you don't have to create the virtual
network and subnets separately from the AKS cluster. Simple websites with low traffic, or lift-and-shift
workloads moved into containers, can also benefit from the simplicity of AKS clusters deployed with Basic
Networking.
With Advanced Networking, the AKS cluster is connected to existing virtual network resources and
configurations. Advanced Networking allows for the separation of control and management of resources. When
you use Advanced Networking, the virtual network resource is in a separate resource group to the AKS
cluster. For most production deployments, you should plan for and use Advanced Networking.
Multiple choice
True or false: containers are a natural fit for an event-driven architecture?
True
■■ False
Explanation
False is the correct answer.
True is an incorrect answer.
Architecture styles don't require the use of particular technologies, but some technologies are well-suited for
certain architectures. For example, containers are a natural fit for microservices, and an event-driven
architecture is generally best suited to IoT and real-time systems.
An N-tier architecture model is a natural fit for migrating existing applications that already use a layered
architecture.
A Web-queue-worker architecture model is suitable for relatively simple domains with some resource-intensive
tasks.
The CQRS architecture model makes the most sense when it's applied to a subsystem of a larger architecture.
A Big data architecture model divides a very large dataset into chunks, performing parallel processing
across the entire set, for analysis and reporting.
Finally, the Big compute architecture model, also called high-performance computing (HPC), performs parallel
computations across a large number (thousands) of cores.
Multiple choice
Which of the following cloud service models provides the most control, flexibility, and portability?
■■ Infrastructure-as-a-Service (IaaS)
Functions-as-a-Service (FaaS)
Platform-as-a-Service (PaaS)
Explanation
Infrastructure-as-a-Service (IaaS) is the correct answer.
Functions-as-a-Service (FaaS) and Platform-as-a-Service (PaaS) are incorrect answers.
Of the three cloud service models mentioned, IaaS provides the most control, flexibility, and portability.
FaaS provides simplicity, elastic scale, and potential cost savings, because you pay only for the time your
code is running. PaaS falls somewhere between the two.
Module 17 Create and Manage Kubernetes
Service Infrastructure
Module Overview
As most modern software developers can attest, containers have provided engineering teams with
dramatically more flexibility for running cloud-native applications on physical and virtual infrastructure.
Containers package up the services comprising an application and make them portable across different
compute environments, for both dev/test and production use. With containers, it's easy to quickly ramp up
application instances to match spikes in demand. And because containers draw on the resources of the host
OS, they are much lighter weight than virtual machines. This means containers make highly efficient use
of the underlying server infrastructure.
So far so good. But though the container runtime APIs are well suited to managing individual containers,
they're woefully inadequate when it comes to managing applications that might comprise hundreds of
containers spread across multiple hosts. Containers need to be managed and connected to the outside
world for tasks such as scheduling, load balancing, and distribution, and this is where a container
orchestration tool like Kubernetes comes into its own.
An open source system for deploying, scaling, and managing containerized applications, Kubernetes
handles the work of scheduling containers onto a compute cluster and manages the workloads to ensure
they run as the user intended. Instead of bolting on operations as an afterthought, Kubernetes brings
software development and operations together by design. By using declarative, infrastructure-agnostic
constructs to describe how applications are composed, how they interact, and how they are managed,
Kubernetes enables an order-of-magnitude increase in operability of modern software systems.
Kubernetes was built by Google based on its own experience running containers in production, and it
surely owes much of its success to Google's involvement. The Kubernetes platform is open source and
growing rapidly through open-source contributions. Kubernetes marks a breakthrough for devops because it
allows teams to keep pace with the requirements of modern software development.
Learning Objectives
After completing this module, students will be able to:
●● Deploy and configure a Managed Kubernetes cluster
Azure Kubernetes Service (AKS)
There are several other container cluster orchestration technologies available, such as Mesosphere DC/OS1
and Docker Swarm2.
For more details about Kubernetes, go to Production-Grade Container Orchestration3 on the Kubernetes
website.
AKS manages much of the Kubernetes infrastructure for the end user, making it quicker and easier to deploy
and manage containerized applications without container orchestration expertise. It also eliminates the
burden of ongoing operations and maintenance by provisioning, upgrading, and scaling resources on
demand, without taking applications offline.
Azure AKS manages the following aspects of a Kubernetes cluster for you:
●● It manages critical tasks such as health monitoring and maintenance, including Kubernetes version
upgrades and patching.
●● It performs simple cluster scaling.
●● It enables master nodes to be fully managed by Microsoft.
●● It leaves you responsible only for managing and maintaining the agent nodes.
●● It ensures master nodes are free, and you only pay for running agent nodes.
1 https://mesosphere.com/product/
2 https://www.docker.com/products/orchestration
3 https://kubernetes.io/
Cluster master
When you create an AKS cluster, a cluster master is automatically created and configured. This cluster
master is provided as a managed Azure resource abstracted from the user. There is no cost for the cluster
master, only the nodes that are part of the AKS cluster.
The cluster master includes the following core Kubernetes components:
●● kube-apiserver. The API server is how the underlying Kubernetes APIs are exposed. This component
provides the interaction for management tools such as kubectl or the Kubernetes dashboard.
●● etcd. To maintain the state of your Kubernetes cluster and configuration, the highly available etcd is a
key value store within Kubernetes.
●● kube-scheduler. When you create or scale applications, the Scheduler determines what nodes can run
the workload, and starts them.
●● kube-controller-manager. The Controller Manager oversees a number of smaller controllers that
perform actions such as replicating pods and managing node operations.
Pods
Kubernetes uses pods to run an instance of your application. A pod represents a single instance of your
application. Pods typically have a 1:1 mapping with a container, although there are advanced scenarios
where a pod might contain multiple containers. These multi-container pods are scheduled together on
the same node, and allow containers to share related resources.
When you create a pod, you can define resource limits to request a certain amount of CPU or memory
resources. The Kubernetes Scheduler attempts to schedule the pods to run on a node with available
resources to meet the request. You can also specify maximum resource limits that prevent a given pod
from consuming too much compute resource from the underlying node.
✔️ Note: A best practice is to include resource limits for all pods to help the Kubernetes Scheduler
understand what resources are needed and permitted.
A pod is a logical resource, but the container (or containers) is where the application workloads run. Pods
are typically ephemeral, disposable resources. Therefore, individually scheduled pods miss some of the
high availability and redundancy features Kubernetes provides. Instead, pods are usually deployed and
managed by Kubernetes controllers, such as the Deployment controller.
Kubernetes networking
Kubernetes pods have a limited lifespan, and are replaced whenever new versions are deployed. Settings
such as the IP address change regularly, so interacting with pods by using an IP address is not advised.
This is why Kubernetes Services exist. To simplify the network configuration for application workloads,
Kubernetes uses Services to logically group a set of pods together and provide network connectivity.
A Kubernetes Service is an abstraction that defines a logical set of pods, combined with a policy that
describes how to access them. Where pods have a shorter lifecycle, Services are usually more stable and
are not affected by container updates. This means that you can safely configure applications to interact
with pods through Services. The Service redirects incoming network traffic to its internal pods. Services
can offer more specific functionality, based on the service type that you specify in the Kubernetes
deployment file.
If you do not specify the service type, you will get the default type, which is ClusterIP. This means that
your services and pods will receive virtual IP addresses that are only accessible from within the cluster.
Although this might be a good practice for containerized back-end applications, it might not be what you
want for applications that need to be accessible from the internet. You need to determine how to
configure your Kubernetes cluster to make those applications and pods accessible from the internet.
Services
The following Service types are available:
●● Cluster IP. This service creates an internal IP address for use within the AKS cluster. It is a good
choice for internal-only applications that support other workloads within the cluster.
●● NodePort. This service creates a port mapping on the underlying node, which enables the application
to be accessed directly with the node IP address and port.
●● Load Balancer. This service creates an Azure Load Balancer resource, configures an external IP address,
and connects the requested pods to the load balancer backend pool. To allow customer traffic to
reach the application, load balancing rules are created on the desired ports.
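As an illustration of the NodePort type, such a Service manifest might look like the following sketch; the service name and port values are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-app          # hypothetical service name
spec:
  type: NodePort
  ports:
  - port: 80                 # port exposed inside the cluster
    targetPort: 8080         # port the container listens on
    nodePort: 30080          # port opened on each node (30000-32767 range)
  selector:
    app: example-app         # pods with this label receive the traffic
```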
Ingress controllers
When you create a Load Balancer–type Service, an underlying Azure Load Balancer resource is created.
The load balancer is configured to distribute traffic to the pods in your service on a given port. The Load
Balancer only works at layer 4. The Service is unaware of the actual applications, and can't make any
additional routing considerations.
Ingress controllers work at layer 7, and can use more intelligent rules to distribute application traffic. A
common use of an Ingress controller is to route HTTP traffic to different applications based on the
inbound URL.
There are different implementations of the Ingress Controller concept. One example is the Nginx
Ingress Controller, which translates the Ingress resource into an nginx.conf file. Other examples are
the ALB Ingress Controller (AWS) and the GCE Ingress Controller (Google Cloud), which make use of
cloud-native resources. Using the Ingress setup within Kubernetes makes it possible to easily switch the
reverse proxy implementation, so that your containerized workload gets the most out of the cloud platform
on which it is running.
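For example, an Ingress resource that routes by inbound host name could be sketched as follows. The hostnames and service names are hypothetical, and the manifest assumes the networking.k8s.io/v1 API:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: shop.example.com            # traffic for this host...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: shop-front          # ...goes to this Service
            port:
              number: 80
  - host: blog.example.com            # a second application, same cluster
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: blog-front
            port:
              number: 80
```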
Deployment
Kubernetes uses the term pod to package applications. A pod is a deployment unit, and it represents a
running process on the cluster. It consists of one or more containers, plus configuration, storage
resources, and networking support. Pods are usually created by a controller, which monitors them and
provides self-healing capabilities at the cluster level.
Pods are described by using YAML or JSON. Pods that work together to provide functionality are grouped
into services to create microservices. For example, a front-end pod and a back-end pod could be grouped
into one service.
You can deploy an application to Kubernetes by using the kubectl CLI, which can manage the cluster. By
running kubectl on your build agent, it's possible to deploy Kubernetes pods from Azure DevOps. It's
also possible to use the management API directly. There is also a specific Kubernetes task called Deploy
To Kubernetes that is available in Azure DevOps. More information about this will be covered in the
upcoming demonstration.
Continuous delivery
To achieve continuous delivery, the build-and-release pipelines are run for every check-in on the Source
repository.
Prerequisites
●● Use the cloud shell.
●● You require an Azure subscription to be able to perform these steps. If you don't have one, you can
create it by following the steps outlined on the Create your Azure free account today4 page.
Steps
1. Open Azure Cloud Shell by going to https://shell.azure.com, or by using the Azure portal and selecting
Bash as the environment option.
4 https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio
After a few minutes, the command completes and returns JSON-formatted information about the cluster.
4. To manage a Kubernetes cluster, you use kubectl, the Kubernetes command-line client. If you use
Azure Cloud Shell, kubectl is already installed. To install kubectl locally, use the following command:
az aks install-cli
5. To configure kubectl to connect to your Kubernetes cluster, use the az aks get-credentials
command. This command downloads credentials and configures the Kubernetes CLI to use them:
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
6. Verify the connection to your cluster by running the following command. Make sure that the status of
the node is Ready:
kubectl get nodes
7. Create a file named azure-vote.yaml, and then copy the following YAML definition into it. If you use
Azure Cloud Shell, you can create this file by using vi or nano, as if working on a virtual or physical
system:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-back
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      containers:
      - name: azure-vote-back
        image: redis
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-front
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
      - name: azure-vote-front
        image: microsoft/azure-vote-front:v1
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
        env:
        - name: REDIS
          value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front
After the deployment command runs, you should receive output showing that the Deployments and Services
were created successfully.
9. When the application runs, a Kubernetes service exposes the application front end to the internet. This
process can take a few minutes to complete. To monitor progress, run the following command:
kubectl get service azure-vote-front --watch
10. Initially the EXTERNAL-IP for the azure-vote-front service is shown as pending.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
azure-vote-front LoadBalancer 10.0.37.27 < pending > 80:30572/TCP 6s
11. When the EXTERNAL-IP address changes from pending to an actual public IP address, use CTRL-C to
stop the kubectl watch process. The following example output shows a valid public IP address
assigned to the service:
azure-vote-front LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m
12. To see the Azure Vote app in action, open a web browser to the external IP address of your service.
Monitor health and logs
When the AKS cluster was created, Azure Monitor for containers was enabled to capture health metrics for both the cluster nodes and pods. These health metrics are available in the Azure portal. To see current status, uptime, and resource usage for the Azure Vote pods, complete the following steps in the Azure portal:
13. Open a web browser to the Azure portal https://portal.azure.com.
14. Select your resource group, such as myResourceGroup, then select your AKS cluster, such as myAKSCluster.
15. Under Monitoring on the left-hand side, choose Insights.
16. Across the top, choose + Add Filter.
17. Select Namespace as the property, then choose <All but kube-system>.
18. Choose to view the Containers. The azure-vote-back and azure-vote-front containers are displayed, as shown in the following example:
19. To see logs for the azure-vote-front pod, select the View container logs link on the right-hand side of the containers list. These logs include the stdout and stderr streams from the container.
✔️ Note: If you are not continuing to use the Azure resources, remember to delete them to avoid
incurring costs.
Continuous Deployment
In Kubernetes you can update the service by using a rolling update. This will ensure that traffic to a
container is first drained, then the container is replaced, and finally, traffic is sent back again to the
container. In the meantime, your customers won't see any changes until the new containers are up and
running on the cluster. As soon as they are, new traffic is routed to the new containers and is no longer sent to the old containers. Running a rolling update is easy to do with the following command:
kubectl apply -f nameofyamlfile
The YAML file contains a specification of the deployment. The apply command is convenient because it makes no difference whether the deployment already exists on the cluster. This means that you can always use the exact same steps, regardless of whether you are doing an initial deployment or an update to an existing deployment.
When you change the name of the image for a service in the YAML file, Kubernetes will apply a rolling update, taking into account the minimum number of running containers you want and how many containers it is allowed to stop at a time. The cluster will take care of updating the images without downtime, assuming that your application container is built to be stateless.
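The "minimum running" and "how many at a time" knobs map to the Deployment's update strategy. The following fragment is an illustrative sketch only; the strategy block shown here is our addition and is not part of the lab's azure-vote manifest, which relies on the defaults:

```yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # stop at most one old pod at a time
      maxSurge: 1         # create at most one extra pod during the update
  # selector and template omitted; they are unchanged from the original manifest
```

With these settings, at least two of the three replicas stay available throughout the rollout.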
Updating Images
After you've successfully containerized your application, you'll need to ensure that you update your image
regularly. This entails creating a new image for every change you make in your own code, and ensuring
that all layers receive regular patching.
A large part of a container image is the base OS layer, which contains the elements of the operating
system that are not shared with the container host.
The base OS layer gets updated frequently. Other layers, such as the IIS layer and ASP.NET layer in the
image are also updated. Your own images are built on top of these layers, and it's up to you to ensure
that they incorporate those updates.
Fortunately, the base OS layer actually consists of two separate images: a larger base layer and a smaller
update layer. The base layer changes less frequently than the update layer. Updating your image's base
OS layer is usually a matter of getting the latest update layer.
If you're using a Dockerfile to create your image, patch the layers by explicitly changing the base image version number. For example, change:
```dockerfile
FROM microsoft/windowsservercore:10.0.14393.321
RUN cmd /c echo hello world
```
into
```dockerfile
FROM microsoft/windowsservercore:10.0.14393.693
RUN cmd /c echo hello world
```
When you build this Dockerfile, it now uses version 10.0.14393.693 of the microsoft/windowsservercore image.
Latest tag
Don't be tempted to rely on the latest tag. To define repeatable custom images and deployments, you
should always be explicit about the base image versions that you are using. Also, just because an image is
tagged as the latest doesn't mean that it actually is the latest. The owner of the image needs to ensure
this.
✔️ Note: The last two segments of the version number of Windows Server Core and Nano Server images will match the build number of the operating system inside.
Lab
Deploying a multi-container application to Az-
ure Kubernetes Services
Azure Kubernetes Service (AKS) is the quickest way to use Kubernetes on Azure. Azure Kubernetes Service
(AKS) manages your hosted Kubernetes environment, making it quick and easy to deploy and manage
containerized applications without container orchestration expertise. It also eliminates the burden of
ongoing operations and maintenance by provisioning, upgrading, and scaling resources on demand,
without taking your applications offline. Azure DevOps helps in creating Docker images for faster deployments and reliability using the continuous build option.
One of the biggest advantages of using AKS is that, instead of creating resources directly in the cloud, you can create resources and infrastructure inside an Azure Kubernetes cluster through Deployment and Service manifest files.
In this lab, Deploying a multi-container application to Azure Kubernetes Services5, you will learn how to:
●● Set up an AKS cluster
●● Configure a CI/CD pipeline for building artifacts and deploying to Kubernetes
●● Access the Kubernetes web dashboard in Azure Kubernetes Service (AKS)
5 https://azuredevopslabs.com/labs/vstsextend/kubernetes/#access-the-kubernetes-web-dashboard-in-azure-kubernetes-service-aks
Module Review and Takeaways
Multiple choice
What is the Kubernetes CLI called?
HELM
ACI
AKS
KUBECTL
Checkbox
For workloads running in AKS, the Kubernetes Web Dashboard allows you to view _______________________. Select all that apply.
Config Map & Secrets
Logs
Storage
Azure Batch Metrics
Checkbox
Pods can be described using which of the following languages? Select all that apply.
JSON
XML
PowerShell
YAML
Answers
Multiple choice
Is this statement true or false?
Azure Policy natively integrates with AKS, allowing you to enforce rules across multiple AKS clusters.
Track, validate and configure nodes, pods and container images for compliance.
■■ True
False
Multiple choice
What is the Kubernetes CLI called?
HELM
ACI
AKS
■■ KUBECTL
Checkbox
For workloads running in AKS, the Kubernetes Web Dashboard allows you to view _______________________. Select all that apply.
■■ Config Map & Secrets
■■ Logs
■■ Storage
Azure Batch Metrics
Checkbox
Pods can be described using which of the following languages? Select all that apply.
■■ JSON
XML
PowerShell
■■ YAML
Module 18 Third Party Infrastructure as Code Tools available with Azure
Module Overview
Configuration management tools enable changes and deployments to be faster, repeatable, scalable,
predictable, and able to maintain the desired state, which brings controlled assets into an expected state.
Some advantages of using configuration management tools include:
●● Adherence to coding conventions that make it easier to navigate code
●● Idempotency, which means that the end state remains the same, no matter how many times the code
is executed
●● Distribution design to improve managing large numbers of remote servers
Some configuration management tools use a pull model, in which an agent installed on the servers runs
periodically to pull the latest definitions from a central repository and apply them to the server. Other
tools use a push model, where a central server triggers updates to managed servers.
Configuration management tools enable the use of tested and proven software development practices for managing and provisioning data centers in real time through plaintext definition files.
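The idempotency property described above can be illustrated with a small sketch in Python. This is a toy model of a configuration tool's "ensure" operation, not any real tool's implementation: applying the same desired state twice makes a change the first time and none the second.

```python
def ensure_setting(config: dict, key: str, value: str) -> bool:
    """Idempotently ensure that config[key] == value.

    Returns True if a change was made, False if the state had
    already converged -- so running it again is always safe.
    """
    if config.get(key) == value:
        return False  # already in the desired state; do nothing
    config[key] = value
    return True

state: dict = {}
changed_first = ensure_setting(state, "max_connections", "100")   # change applied
changed_second = ensure_setting(state, "max_connections", "100")  # already converged
```

No matter how many times ensure_setting runs, the end state remains the same, which is exactly the guarantee configuration management tools provide for packages, files, and services.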
Learning Objectives
After completing this module, students will be able to:
●● Deploy and configure infrastructure using 3rd party tools and services with Azure, such as Chef,
Puppet, Ansible, SaltStack, and Terraform
Chef
What is Chef
Chef is an infrastructure automation tool that you use for deploying, configuring, managing, and ensuring compliance of applications and infrastructure. It provides a consistent deployment and management experience.
Chef helps you to manage your infrastructure in the cloud, on-premises, or in a hybrid environment by using instructions (or recipes) to configure nodes. A node, or chef-client, is any physical or virtual machine (VM), cloud, or network device that is under management by Chef.
The following diagram is of the high-level Chef architecture:
Chef components
Chef has three main architectural components:
●● Chef Server. This is the management point. There are two options for the Chef Server: a hosted
solution and an on-premises solution.
●● Chef Client (node). This is a Chef agent that resides on the servers you are managing.
●● Chef Workstation. This is the Admin workstation where you create policies and execute management
commands. You run the knife command from the Chef Workstation to manage your infrastructure.
Chef also uses concepts called cookbooks and recipes. Chef cookbooks and recipes are essentially the
policies that you define and apply to your servers.
Chef Automate
You can deploy Chef on Microsoft Azure from the Azure Marketplace using the Chef Automate image.
Chef Automate is a Chef product that allows you to package and test your applications, and provision and
update your infrastructure. Using Chef, you can manage changes to your applications and infrastructure
using compliance and security checks, and dashboards that give you visibility into your entire stack.
The Chef Automate image is available on the Azure Chef Server and has all the functionality of the legacy
Chef Compliance server. You can build, deploy, and manage your applications and infrastructure on
Azure. Chef Automate is available from the Azure Marketplace, and you can try it out with a free 30-day
license. You can deploy it in Azure straight away.
●● Habitat. Habitat is an open-source product in which the application and its automation, rather than the infrastructure it runs on, are the unit of deployment. Whether the application runs in a container, on a bare metal machine, or on platform as a service (PaaS) is no longer the focus and does not constrain the application.
For more information about Habitat, go to Use Habitat to deploy your application to Azure1.
●● InSpec is a free and open-source framework for testing and auditing your applications and infrastructure. InSpec works by comparing the actual state of your system with the desired state that you express in easy-to-read and easy-to-write InSpec code. InSpec detects violations and displays findings in the form of a report, but you are in control of remediation.
You can use InSpec to validate the state of your VMs running in Azure. You can also use InSpec to
scan and validate the state of resources and resource groups inside a subscription.
More information about InSpec is available at Use InSpec for compliance automation of your
Azure infrastructure2.
Chef Cookbooks
Chef uses a cookbook to define a set of commands that you execute on your managed client. A cookbook is a set of tasks that you use to configure an application or feature. It defines a scenario, and everything required to support that scenario. Within a cookbook, there are a series of recipes, which define a set of actions to perform. Cookbooks and recipes are written in the Ruby language.
After you create a cookbook, you can then create a Role. A Role defines a baseline set of cookbooks and
attributes that you can apply to multiple servers. To create a cookbook, you use the chef generate
cookbook command.
Create a cookbook
Before creating a cookbook, you first configure your Chef workstation by setting up the Chef Development Kit on your local workstation. You'll use the Chef workstation to connect to, and manage, your Chef server.
✔️ Note: You can download and install the Chef Development Kit from Chef downloads3.
Choose the Chef Development Kit that is appropriate to your operating system and version. For example:
●● macOSX/macOS
●● Debian
●● Red Hat Enterprise Linux
●● SUSE Linux Enterprise Server
●● Ubuntu
●● Windows
1. Installing the Chef Development Kit creates the Chef workstation automatically in your C:\Chef directory. After installation completes, run the following example command to generate a cookbook named webserver, which will contain a policy that automatically deploys IIS:
chef generate cookbook webserver
1 https://docs.microsoft.com/en-us/azure/chef/chef-habitat-overview
2 https://docs.microsoft.com/en-us/azure/chef/chef-inspec-overview
3 https://downloads.chef.io/chefdk
This command generates a set of files under the directory C:\Chef\cookbooks\webserver. Next, you
need to define the set of commands that you want the Chef client to execute on your managed VM. The
commands are stored in the default.rb file.
2. For this example, we will define a set of commands that installs and starts Microsoft Internet Information Services (IIS), and copies a template file to the wwwroot folder. Modify the C:\chef\cookbooks\webserver\recipes\default.rb file by adding the following lines:
powershell_script 'Install IIS' do
  action :run
  code 'add-windowsfeature Web-Server'
end
service 'w3svc' do
  action [ :enable, :start ]
end
template 'c:\inetpub\wwwroot\Default.htm' do
  source 'Default.htm.erb'
  rights :read, 'Everyone'
end
●● Upload your cookbooks and recipes to the Chef Automate server using the following command:
knife cookbook upload <cookbook name> --include-dependencies
●● Create a role to define a baseline set of cookbooks and attributes that you can apply to multiple servers. Use the following command to create this role:
knife role create <role name>
●● Bootstrap a node or client and assign a role using the following command:
knife bootstrap <FQDN-for-App-VM> --ssh-user <app-admin-username> --ssh-password <app-vm-admin-password> --node-name <node name> --run-list role[<role you defined>] --sudo --verbose
You can also bootstrap Chef VM extensions for the Windows and Linux operating systems, in addition to provisioning them in Azure using the knife command. For more information, look up the 'cloud-api' bootstrap option in the Knife plugin documentation at https://github.com/chef/knife-azure4.
✔️ Note: You can also install the Chef extensions to an Azure VM using Windows PowerShell. By installing
the Chef Management Console, you can manage your Chef server configuration and node deployments
via a browser window.
4 https://github.com/chef/knife-azure
Puppet
What is Puppet
Puppet is a deployment and configuration management toolset that provides the enterprise tools you need to automate the entire lifecycle of your Azure infrastructure. It also provides consistency and transparency into infrastructure changes.
Puppet provides a series of open-source configuration management tools and projects. It also provides
Puppet Enterprise, which is a configuration management platform that allows you to maintain state in
both your infrastructure and application deployments.
5 https://azure.microsoft.com/en-us/marketplace/
Manifest files
Puppet uses a declarative file syntax to define state. It defines what the infrastructure state should be, but not how it should be achieved. For example, you tell Puppet that you want to install a package, but not how to install it.
Configuration or state is defined in manifest files known as Puppet Program files. These files are responsible for determining the state of the application, and have the file extension .pp.
Puppet program files have the following elements:
●● class. This is a bucket that you put resources into. For example, you might have an Apache class with everything required to run Apache (such as the package, config file, running server, and any users that need to be created). That class then becomes an entity that you can use to compose other workflows.
●● resources. These are single elements of your configuration that you can specify parameters for.
●● module. This is the collection of all the classes, resources, and other elements of the Puppet program
file in a single entity.
class mrpapp {
  class { 'configuremongodb': }
  class { 'configurejava': }
}

class configuremongodb {
  include wget
  class { 'mongodb': }->
  wget::fetch { 'mongorecords':
    source => 'https://raw.githubusercontent.com/Microsoft/PartsUnlimitedMRP/master/deploy/MongoRecords.js',
    destination => '/tmp/MongoRecords.js',
    timeout => 0,
  }->
  exec { 'insertrecords':
    command => 'mongo ordering /tmp/MongoRecords.js',
    path => '/usr/bin:/usr/sbin',
    unless => 'test -f /tmp/initcomplete'
  }->
  file { '/tmp/initcomplete':
    ensure => 'present',
  }
}

class configurejava {
  include apt
  $packages = ['openjdk-8-jdk', 'openjdk-8-jre']
  apt::ppa { 'ppa:openjdk-r/ppa': }->
  package { $packages:
    ensure => 'installed',
  }
}
You can download Puppet modules that Puppet and the Puppet community have created from puppetforge6. Puppet Forge is a community repository that contains thousands of modules for download and use, or modification as you need. This saves you the time necessary to recreate modules from scratch.
6 https://forge.puppet.com/
Ansible
What is Ansible
Ansible is an open-source platform by Red Hat that automates cloud provisioning, configuration management, and application deployments. Using Ansible, you can provision VMs, containers, and your entire cloud infrastructure. In addition to provisioning and configuring applications and their environments, Ansible enables you to automate deployment and configuration of resources in your environment such as virtual networks, storage, subnets, and resource groups.
Ansible is designed for multiple tier deployments. Unlike Puppet or Chef, Ansible is agentless, meaning
you don't have to install software on the managed machines.
Ansible also models your IT infrastructure by describing how all of your systems interrelate, rather than
managing just one system at a time.
Ansible Components
The following workflow and component diagram outlines how playbooks can run in different circumstances, one after another. In the workflow, Ansible playbooks:
1. Provision resources. Playbooks can provision resources. In the following diagram, playbooks create load balancers, virtual networks, network security groups, and VM scale sets on Azure.
2. Configure the application. Playbooks can deploy applications to run particular services, such as
installing Apache Tomcat on a Linux machine to allow you to run a web application.
3. Manage future configurations to scale. Playbooks can alter configurations by applying playbooks to
existing resources and applications—in this instance to scale the VMs.
In all cases, Ansible makes use of core components such as roles, modules, APIs, plugins, inventory, and
other components.
✔️ Note: By default, Ansible manages machines using the SSH protocol.
✔️ Note: You don't need to maintain and run commands from any particular central server. Instead, there
is a control machine with Ansible installed, and from which playbooks are run.
7 https://docs.ansible.com/ansible/latest/modules/list_of_all_modules.html
8 https://galaxy.ansible.com/Azure/azure_preview_modules
Installing Ansible
To enable a machine to act as the control machine from which to run playbooks, you need to install both
Python and Ansible.
Python
When you install Python, you must install either Python 2 (version 2.7) or Python 3 (version 3.5 or later). You can then use pip, the Python package manager, to install Ansible and its dependencies, or you can use other installation methods.
Ansible on Linux
You can install Ansible on many different distributions of Linux, including, but not limited to:
●● Red Hat Enterprise Linux
●● CentOS
●● Debian
●● Ubuntu
●● Fedora
✔️ Note: Fedora is not supported as an endorsed Linux distribution on Azure. However, you can run it on Azure by uploading your own image. All the other Linux distributions listed above are endorsed on Azure.
You can use the appropriate package manager software to install Ansible and Python, such as yum, apt, or pip. For example, to install Ansible on Ubuntu, run the following commands:
## Install pre-requisite packages
sudo apt-get update && sudo apt-get install -y libssl-dev libffi-dev python-dev python-pip
## Install Ansible and Azure SDKs via pip
sudo pip install ansible[azure]
macOS
You can also install Ansible and Python on macOS, and use that environment as the control machine.
Upgrading Ansible
When Ansible manages remote machines, it doesn't leave software installed or running on them. Therefore, there's no real question about how to upgrade Ansible when moving to a new version.
Managed nodes
When managing nodes, you need a way to communicate with them, which is normally SSH by default, using the SSH file transfer protocol (SFTP). If that's not available, you can switch to the Secure Copy Protocol (SCP) in ansible.cfg. For Windows machines, Ansible communicates using Windows PowerShell remoting over WinRM.
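As a sketch, the SCP fallback can be switched on in ansible.cfg like this (section and option names as documented for classic Ansible releases; verify against the documentation for your version):

```
[ssh_connection]
# Fall back to SCP when SFTP is not available on the managed node
scp_if_ssh = True
```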
You can find out more about installing Ansible on the Install Ansible on Azure virtual machines9 page.
Ansible on Azure
There are a number of ways you can use Ansible in Azure.
Azure marketplace
You can use one of the following images available as part of the Azure Marketplace:
●● Red Hat Ansible on Azure is available as an image on Azure Marketplace, and it provides a fully
configured version. This enables easier adoption for those looking to use Ansible as their provisioning
and configuration management tool. This solution template will install Ansible on a Linux VM along
with tools configured to work with Azure. This includes:
●● Ansible (the latest version by default; you can also specify a version number)
●● Azure CLI 2.0
●● MSI VM extension
●● apt-transport-https
9 https://docs.microsoft.com/en-us/azure/virtual-machines/linux/ansible-install-configure?toc=%2Fen-us%2Fazure%2Fansible%2Ftoc.json&bc=%2Fen-us%2Fazure%2Fbread%2Ftoc.json
●● Ansible Tower (by Red Hat). Ansible Tower by Red Hat helps organizations scale IT automation and
manage complex deployments across physical, virtual, and cloud infrastructures. Built on the proven
open-source Ansible automation engine, Ansible Tower includes capabilities that provide additional
levels of visibility, control, security, and efficiency necessary for today's enterprises. With Ansible
Tower you can:
●● Provision Azure environments with ease using pre-built Ansible playbooks.
●● Use role-based access control (RBAC) for secure, efficient management.
●● Maintain centralized logging for complete auditability and compliance.
●● Utilize the large community of content available on Ansible Galaxy.
This offering requires the use of an available Ansible Tower subscription eligible for use in Azure. If you
don't currently have a subscription, you can obtain one directly from Red Hat.
Azure VMs
Another option for running Ansible on Azure is to deploy a Linux VM on Azure virtual machines, which is
infrastructure as a service (IaaS). You can then install Ansible and the relevant components, and use that
as the control machine.
✔️ Note: The Windows operating system is not supported as a control machine. However, you can run
Ansible from a Windows machine by utilizing other services and products such as Windows Subsystem
for Linux, Azure Cloud Shell, and Visual Studio Code.
For more details about running Ansible in Azure, visit:
●● Ansible on Azure documentation10 website
●● Microsoft Azure Guide11
Playbook structure
Playbooks are the language of Ansible's configurations, deployments, and orchestrations. You use them
to manage configurations of and deployments to remote machines. Playbooks are structured with YAML
(a data serialization language), and support variables. Playbooks are declarative and include detailed
information regarding the number of machines to configure at a time.
YAML structure
YAML is based around the structure of key-value pairs. In the following example, the key is name, and the
value is namevalue:
name: namevalue
In the YAML syntax, a child key-value pair is placed on a new, indented line below its parent key. Each sibling key-value pair occurs on a new line at the same level of indentation as its siblings.
parent:
  children:
    first-sibling: value01
    second-sibling: value02
10 https://docs.microsoft.com/en-us/azure/ansible/?ocid=AID754288&wt.mc_id=CFID0352
11 https://docs.ansible.com/ansible/latest/scenario_guides/guide_azure.html
The specific number of spaces used for indentation is not defined. You can indent each level by as many spaces as you want; however, the number of spaces used for indentation at each level must be uniform throughout the file. When a key-value pair is indented in a YAML file, it is the value of its parent key.
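To make the structure concrete, the toy parser below (Python standard library only, deliberately simplified, and not a real YAML parser) turns the parent/children example above into nested dictionaries, relying on the uniform two-space indentation:

```python
def parse_simple_yaml(text: str, indent: int = 2) -> dict:
    """Parse a tiny subset of YAML: nested 'key: value' pairs with
    uniform indentation. Purely illustrative, not a real YAML parser."""
    root: dict = {}
    stack = [root]  # stack[d] is the mapping currently open at depth d
    for line in text.splitlines():
        if not line.strip():
            continue
        depth = (len(line) - len(line.lstrip(" "))) // indent
        key, _, value = line.strip().partition(":")
        value = value.strip()
        del stack[depth + 1:]          # close any deeper levels
        if value:                      # leaf: the key holds a scalar
            stack[depth][key] = value
        else:                          # parent: the key holds a nested mapping
            child: dict = {}
            stack[depth][key] = child
            stack.append(child)
    return root

doc = """\
parent:
  children:
    first-sibling: value01
    second-sibling: value02
"""
result = parse_simple_yaml(doc)
```

Real playbooks should of course be parsed with a proper YAML library; the point here is only how indentation encodes the parent/child relationship.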
Playbook components
The following list describes some of the playbook components:
●● name. The name of the playbook. This can be any name you wish.
●● hosts. Lists where the configuration is applied, or the machines being targeted. Hosts can be a list of one or more groups or host patterns, separated by colons. It can also contain groups such as web servers or databases, providing that you have defined these groups in your inventory.
●● connection. Specifies the connection type.
●● remote_user. Specifies the remote user account used to complete the tasks.
●● vars. Allows you to define the variables that can be used throughout your playbook.
●● gather_facts. Determines whether to gather node data or not. The value can be yes or no.
●● tasks. Indicates the start of the modules where the actual configuration is defined.
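Putting these components together, a skeleton playbook might look like the following (all values are placeholders for illustration):

```yml
- name: Example playbook            # name: any name you wish
  hosts: webservers                 # hosts: a group defined in your inventory
  connection: ssh                   # connection: the connection type
  remote_user: admin                # remote_user: the account used to connect
  vars:                             # vars: variables usable throughout the playbook
    http_port: 80
  gather_facts: yes                 # gather_facts: whether to collect node data
  tasks:                            # tasks: the modules defining the configuration
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present
```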
Running a playbook
You run a playbook using the following command:
ansible-playbook <playbook name>
You can also check the syntax of a playbook using the following command:
ansible-playbook <playbook name> --syntax-check
The syntax check runs the playbook through the parser to verify that included items, such as files and roles, exist, and that the playbook has no syntax errors. You can also use the --verbose option.
●● To see a list of hosts that would be affected by running a playbook, run the command:
ansible-playbook playbook.yml --list-hosts
Sample Playbook
The following code is a sample playbook that will create a Linux virtual machine in Azure:
- name: Create Azure VM
  hosts: localhost
  connection: local
  vars:
    resource_group: ansible_rg5
    location: westus
  tasks:
    - name: Create resource group
      azure_rm_resourcegroup:
        name: "{{ resource_group }}"
        location: "{{ location }}"
    - name: Create virtual network
      azure_rm_virtualnetwork:
        resource_group: "{{ resource_group }}"
        name: myVnet
        address_prefixes: "10.0.0.0/16"
    - name: Add subnet
      azure_rm_subnet:
        resource_group: "{{ resource_group }}"
        name: mySubnet
        address_prefix: "10.0.1.0/24"
        virtual_network: myVnet
    - name: Create public IP address
      azure_rm_publicipaddress:
        resource_group: "{{ resource_group }}"
        allocation_method: Static
        name: myPublicIP
      register: output_ip_address
    - name: Dump public IP for VM which will be created
      debug:
        msg: "The public IP is {{ output_ip_address.state.ip_address }}."
    - name: Create Network Security Group that allows SSH
      azure_rm_securitygroup:
        resource_group: "{{ resource_group }}"
        name: myNetworkSecurityGroup
        rules:
          - name: SSH
            protocol: Tcp
            destination_port_range: 22
            access: Allow
            priority: 1001
            direction: Inbound
    - name: Create virtual network interface card
      azure_rm_networkinterface:
        resource_group: "{{ resource_group }}"
        name: myNIC
        virtual_network: myVnet
        subnet: mySubnet
        public_ip_name: myPublicIP
        security_group: myNetworkSecurityGroup
    - name: Create VM
      azure_rm_virtualmachine:
        resource_group: "{{ resource_group }}"
        name: myVM
        vm_size: Standard_DS1_v2
        admin_username: azureuser
        ssh_password_enabled: false
        ssh_public_keys:
          - path: /home/azureuser/.ssh/authorized_keys
            key_data: <your-key-data>
        network_interfaces: myNIC
        image:
          offer: CentOS
          publisher: OpenLogic
          sku: '7.5'
          version: latest
✔️ Note: Ansible Playbook samples for Azure are available on GitHub on the Ansible Playbook Samples
for Azure12 page.
Run commands
Azure Cloud Shell has Ansible preinstalled. After you are signed into Azure Cloud Shell, specify the bash
console. You do not need to install or configure anything further to run Ansible commands from the Bash
console in Azure Cloud Shell.
Editor
You can also use the Azure Cloud Shell editor to review, open, and edit your playbook .yml files. You can
open the editor by selecting the curly brackets icon on the Azure Cloud Shell taskbar.
12 https://github.com/Azure-Samples/ansible-playbooks
13 https://shell.azure.com
location: eastus
10. Verify that you receive output similar to the following code:
PLAY [localhost] *********************************************************************************
11. Open Azure portal and verify that the resource group is now available in the portal.
14 https://code.visualstudio.com/
You can also view details of this extension on the Visual Studio Marketplace Ansible15 page.
5. In Visual Studio Code, go to View > Command Palette…. Alternatively, you can select the settings
(cog) icon in the bottom, left corner of the Visual Studio Code window, and then select Command
Palette.
15 https://marketplace.visualstudio.com/items?itemName=vscoss.vscode-ansible&ocid=AID754288&wt.mc_id=CFID0352
7. When a browser launches and prompts you to sign in, select your Azure account. Verify that a message displays stating that you are now signed in and can close the page.
8. Verify that your Azure account now displays at the bottom of the Visual Studio Code window.
9. Create a new file and paste in the following playbook text:
- name: Create Azure VM
  hosts: localhost
  connection: local
  tasks:
  - name: Create resource group
    azure_rm_resourcegroup:
      name: myResourceGroup
      location: eastus
  - name: Create virtual network
    azure_rm_virtualnetwork:
      resource_group: myResourceGroup
      name: myVnet
      address_prefixes: "10.0.0.0/16"
  - name: Add subnet
    azure_rm_subnet:
      resource_group: myResourceGroup
      name: mySubnet
      address_prefix: "10.0.1.0/24"
      virtual_network: myVnet
  - name: Create public IP address
    azure_rm_publicipaddress:
      resource_group: myResourceGroup
      allocation_method: Static
      name: myPublicIP
    register: output_ip_address
  - name: Dump public IP for VM which will be created
    debug:
      msg: "The public IP is {{ output_ip_address.state.ip_address }}."
  - name: Create Network Security Group that allows SSH
    azure_rm_securitygroup:
      resource_group: myResourceGroup
      name: myNetworkSecurityGroup
      rules:
        - name: SSH
          protocol: Tcp
          destination_port_range: 22
          access: Allow
          priority: 1001
          direction: Inbound
  - name: Create virtual network interface card
    azure_rm_networkinterface:
      resource_group: myResourceGroup
      name: myNIC
      virtual_network: myVnet
      subnet: mySubnet
      public_ip_name: myPublicIP
      security_group: myNetworkSecurityGroup
  - name: Create VM
    azure_rm_virtualmachine:
      resource_group: myResourceGroup
      name: myVM
      vm_size: Standard_DS1_v2
      admin_username: azureuser
      ssh_password_enabled: true
      admin_password: Password0134
      network_interfaces: myNIC
      image:
        offer: CentOS
        publisher: OpenLogic
        sku: '7.5'
        version: latest
13. A notice might appear in the bottom, left side, informing you that the action could incur a small
charge as it will use some storage when the playbook is uploaded to cloud shell. Select Confirm &
Don't show this message again.
14. Verify that the Azure Cloud Shell pane now displays in the bottom of Visual Studio Code and is
running the playbook.
15. When the playbook finishes running, open Azure and verify the resource group, resources, and VM
have all been created. If you have time, sign in with the user name and password specified in the
playbook to verify as well.
✔️ Note: If you want to use a public or private key pair to connect to the Linux VM instead of a user
name and password, you could use the following code in the previous Create VM module steps:
admin_username: adminUser
ssh_password_enabled: false
ssh_public_keys:
  - path: /home/adminUser/.ssh/authorized_keys
    key_data: < insert your ssh public key here... >
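As a quick sanity check of the address space used in the playbook above, Python's standard ipaddress module can confirm that the subnet prefix sits inside the virtual network prefix. This is an illustrative sketch using the values from the playbook, not part of the lab steps:

```python
import ipaddress

# Values taken from the playbook above.
vnet = ipaddress.ip_network("10.0.0.0/16")     # myVnet address_prefixes
subnet = ipaddress.ip_network("10.0.1.0/24")   # mySubnet address_prefix

# The subnet must fall entirely within the virtual network's address space,
# or Azure will reject the subnet creation.
print(subnet.subnet_of(vnet))   # True
print(subnet.num_addresses)     # 256 addresses in a /24
```

Checks like this are useful when you template address prefixes with variables, because an out-of-range subnet fails only at deployment time.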
Terraform
What is Terraform
HashiCorp Terraform is an open-source tool that allows you to provision, manage, and version cloud
infrastructure. It codifies infrastructure in configuration files that describe the topology of cloud resources
such as VMs, storage accounts, and networking interfaces.
Terraform's command-line interface (CLI) provides a simple mechanism to deploy and version the
configuration files to Azure or any other supported cloud service. The CLI also allows you to validate and
preview infrastructure changes before you deploy them.
Terraform also supports multi-cloud scenarios. This means it enables developers to use the same tools
and configuration files to manage infrastructure on multiple cloud providers.
You can run Terraform interactively from the CLI with individual commands, or non-interactively as part of
a continuous integration pipeline.
There is also an enterprise version of Terraform available, Terraform Enterprise.
You can view more details about Terraform on the HashiCorp Terraform16 website.
Terraform components
Some of Terraform's core components include:
●● Configuration files. Text-based configuration files allow you to define infrastructure and application
configuration. These files end in the .tf or .tf.json extension. The files can be in either of the following
two formats:
●● Terraform. The Terraform format is easier for users to review, thereby making it more user friendly.
It supports comments, and is the generally recommended format for most Terraform files. Terraform
files end in .tf.
●● JSON. The JSON format is mainly for use by machines for creating, modifying, and updating
configurations. However, it can also be used by Terraform operators if you prefer. JSON files end in
.tf.json.
The order of items (such as variables and resources) as defined within the configuration file does not
matter, because Terraform configurations are declarative.
●● Terraform CLI. This is a command-line interface from which you run configurations. You can run
commands such as terraform apply and terraform plan, along with many others. A CLI configuration
file that configures per-user settings for the CLI is also available. However, this is separate from the CLI
infrastructure configuration. In Windows operating system environments, the configuration file is
named terraform.rc, and is stored in the relevant user's %APPDATA% directory. On Linux systems, the
file is named .terraformrc (note the leading period), and is stored in the home directory of the
relevant user.
16 https://www.terraform.io/
●● Modules. Modules are self-contained packages of Terraform configurations that are managed as a
group. You use modules to create reusable components in Terraform and for basic code organization.
A list of available modules for Azure is available on the Terraform Registry Modules17 webpage.
●● Provider. The provider is responsible for understanding API interactions and exposing resources.
●● Overrides. Overrides are a way to create configuration files that are loaded last and merged into
(rather than appended to) your configuration. You can create overrides to modify Terraform behavior
without having to edit the Terraform configuration. They can also be used as temporary modifications
that you can make to Terraform configurations without having to modify the configuration itself.
●● Resources. Resources are sections of a configuration file that define components of your infrastructure,
such as VMs, network resources, containers, dependencies, or DNS records. The resource block
creates a resource of the given TYPE (first parameter) and NAME (second parameter). However, the
combination of the type and name must be unique. The resource's configuration is then defined and
contained within braces.
●● Execution plan. You can issue a command in the Terraform CLI to generate an execution plan. The
execution plan shows what Terraform will do when a configuration is applied. This enables you to
verify changes and flag potential issues. The command for generating the execution plan is terraform plan.
●● Resource graph. Using a resource graph, you can build a dependency graph of all resources. You can
then create and modify resources in parallel. This helps provision and configure resources more
efficiently.
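To illustrate several of these components together, the following minimal configuration sketch shows two resource blocks, each declared with a TYPE and NAME. The resource names and values here are illustrative assumptions, not taken from this course's labs:

```hcl
# Each resource block declares a TYPE (e.g. azurerm_resource_group)
# and a NAME (e.g. "example"); each TYPE/NAME pair must be unique.
resource "azurerm_resource_group" "example" {
  name     = "example-rg"
  location = "eastus"
}

resource "azurerm_virtual_network" "example" {
  name          = "example-vnet"
  address_space = ["10.0.0.0/16"]
  location      = "${azurerm_resource_group.example.location}"

  # This reference creates an edge in the resource graph, so Terraform
  # knows the resource group must exist before the network is created.
  resource_group_name = "${azurerm_resource_group.example.name}"
}
```

Because configurations are declarative, the two blocks could appear in either order; the references between them, not their position in the file, determine the dependency graph and the order of creation.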
Terraform on Azure
You can obtain Terraform for use in Azure via Azure Marketplace, Terraform Marketplace, or Azure VMs.
Azure Marketplace
Azure Marketplace offers a fully-configured Linux image containing Terraform with the following characteristics:
●● The deployment template will install Terraform on a Linux (Ubuntu 16.04 LTS) VM along with tools
configured to work with Azure. Items downloaded include:
●● Terraform (latest)
●● Azure CLI 2.0
●● Managed Service Identity (MSI) VM extension
●● Unzip
●● Jq
●● apt-transport-https
●● This image also configures a remote back-end to enable remote state management using Terraform.
Terraform Marketplace
The Terraform Marketplace image makes it easy to get started using Terraform on Azure, without having
to install and configure Terraform manually. There are no software charges for this Terraform VM image.
17 https://registry.terraform.io/browse?provider=azurerm
You pay only the Azure hardware usage fees that are assessed based on the size of the VM that's provisioned.
Azure VMs
You can also deploy a Linux or Windows VM in Azure's IaaS service, install Terraform and the relevant
components, and then use that image.
Installing Terraform
To get started, you must install Terraform on the machine from which you are running the Terraform
commands.
Terraform can be installed on Windows, Linux or macOS environments. Go to the Download Terraform18
page, and choose the appropriate download package for your environment.
18 https://www.terraform.io/downloads.html
Linux
1. Download Terraform using the following command:
wget https://releases.hashicorp.com/terraform/0.xx.x/terraform_0.xx.x_linux_amd64.zip
4. Verify the installation by running the command terraform. Verify that the Terraform help output
displays.
#!/bin/sh
echo "Setting environment variables for Terraform"
export ARM_SUBSCRIPTION_ID=your_subscription_id
export ARM_CLIENT_ID=your_appId
export ARM_CLIENT_SECRET=your_password
export ARM_TENANT_ID=your_tenant_id
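A script like the one above must be dot-sourced rather than executed, so that the exported variables persist in your current shell (running it as ./azure-env.sh would set them only in a child process). A runnable sketch follows; the file name and GUID are placeholders, not real credentials:

```shell
# Write the variables to a file, then dot-source it so the exports
# apply to the current shell.
cat > azure-env.sh <<'EOF'
export ARM_SUBSCRIPTION_ID=11111111-1111-1111-1111-111111111111
EOF
. ./azure-env.sh

# The variable is now visible to any terraform command run from this shell.
echo "$ARM_SUBSCRIPTION_ID"   # prints 11111111-1111-1111-1111-111111111111
```

The azurerm provider reads the ARM_* environment variables automatically, so no credentials need to appear in the .tf files themselves.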
✔️ Note: After you install Terraform, before you can apply .tf configuration files you must run the following
command to initialize Terraform for the installed instance:
terraform init
tags {
environment = "Terraform Demo"
}
}
# Create subnet
resource "azurerm_subnet" "myterraformsubnet" {
name = "mySubnet"
resource_group_name = "${azurerm_resource_group.myterraformgroup.name}"
virtual_network_name = "${azurerm_virtual_network.myterraformnetwork.name}"
address_prefix = "10.0.1.0/24"
}
tags {
environment = "Terraform Demo"
}
}
security_rule {
name = "SSH"
priority = 1001
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
tags {
environment = "Terraform Demo"
}
}
ip_configuration {
name = "myNicConfiguration"
subnet_id = "${azurerm_subnet.myterraformsubnet.id}"
private_ip_address_allocation = "dynamic"
public_ip_address_id = "${azurerm_public_ip.myterraformpublicip.id}"
}
tags {
environment = "Terraform Demo"
}
}
byte_length = 8
}
tags {
environment = "Terraform Demo"
}
}
storage_os_disk {
name = "myOsDisk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
}
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04.0-LTS"
version = "latest"
}
os_profile {
computer_name = "myvm"
admin_username = "azureuser"
}
os_profile_linux_config {
disable_password_authentication = true
ssh_keys {
path = "/home/azureuser/.ssh/authorized_keys"
key_data = "ssh-rsa AAAAB3Nz{snip}hwhqT9h"
}
}
boot_diagnostics {
enabled = "true"
storage_uri = "${azurerm_storage_account.mystorageaccount.primary_blob_endpoint}"
}
tags {
environment = "Terraform Demo"
}
}
You can use either a PowerShell or Bash shell to run Terraform in Azure Cloud Shell. In this walkthrough you create a resource group in
Azure using Terraform, in Azure Cloud Shell, with Bash.
The following screenshot displays Terraform running in Azure Cloud Shell with PowerShell.
The following image is an example of running Terraform in Azure Cloud Shell with a Bash shell.
Editor
You can also use the Azure Cloud Shell editor to review, open, and edit your .tf files. To open the editor,
select the braces on the Azure Cloud Shell taskbar.
Prerequisites
●● You require an Azure subscription to perform these steps. If you don't have one, you can create
one by following the steps outlined on the Create your Azure free account today19 webpage.
Steps
The following steps outline how to create a resource group in Azure using Terraform in Azure Cloud Shell,
with Bash.
1. Open the Azure Cloud Shell at https://shell.azure.com. You can also launch Azure Cloud Shell
from within the Azure portal by selecting the Azure Cloud Shell icon.
2. If prompted, authenticate to Azure by entering your credentials.
3. In the taskbar, ensure that Bash is selected as the shell type.
4. Create a new .tf file and open the file for editing with the following command:
vi terraform-createrg.tf
19 https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_
campaign=visualstudio
provider "azurerm" {
}

resource "azurerm_resource_group" "rg" {
  name     = "testResourceGroup"
  location = "westus"
}
10. Run the configuration .tf file with the following command:
terraform apply
You should receive a prompt to indicate that a plan has been generated. Details of the changes should be
listed, followed by a prompt to apply or cancel the changes.
11. Enter a value of yes, and then select Enter. The command should run successfully, with output similar
to the following screenshot.
12. Open Azure portal and verify the new resource group now displays in the portal.
Prerequisites
●● This walkthrough requires Visual Studio Code. If you do not have Visual Studio Code installed, you can
download it from https://code.visualstudio.com/20. Download and install a version of Visual Studio
Code that is appropriate to your operating system environment, for example Windows, Linux, or
macOS.
●● You will require an active Azure subscription to perform the steps in this walkthrough. If you do not
have one, create an Azure subscription by following the steps outlined on the Create your Azure free
account today21 webpage.
Steps
1. Launch the Visual Studio Code editor.
2. The two Visual Studio Code extensions Azure Account and Azure Terraform must be installed. To install
the first extension, from inside Visual Studio Code, select File > Preferences > Extensions.
3. Search for and install the extension Azure Account.
4. Search for and install the extension Terraform. Ensure that you select the extension authored by
Microsoft, as there are similar extensions available from other authors.
20 https://code.visualstudio.com/
21 https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_
campaign=visualstudio
You can view more details of this extension at the Visual Studio Marketplace on the Azure Terraform22
page.
5. In Visual Studio Code, open the command palette by selecting View > Command Palette. You can
also access the command palette by selecting the settings (cog) icon on the bottom, left side of the
Visual Studio Code window, and then selecting Command Palette.
6. In the Command Palette search field, type Azure:, and from the results, select Azure: Sign In.
7. When a browser launches and prompts you to sign in to Azure, select your Azure account. The
message You are signed in now and can close this page., should display in the browser.
22 https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureterraform
8. Verify that your Azure account now displays at the bottom of the Visual Studio Code window.
9. Create a new file, then copy the following code and paste it into the file.
# Create a resource group if it doesn’t exist
resource "azurerm_resource_group" "myterraformgroup" {
name = "terraform-rg2"
location = "eastus"
tags {
environment = "Terraform Demo"
}
}
tags {
environment = "Terraform Demo"
}
}
# Create subnet
resource "azurerm_subnet" "myterraformsubnet" {
name = "mySubnet"
resource_group_name = "${azurerm_resource_group.myterraformgroup.name}"
virtual_network_name = "${azurerm_virtual_network.myterraformnetwork.name}"
address_prefix = "10.0.1.0/24"
}
location = "eastus"
resource_group_name = "${azurerm_resource_group.myterraformgroup.name}"
public_ip_address_allocation = "dynamic"
tags {
environment = "Terraform Demo"
}
}
security_rule {
name = "SSH"
priority = 1001
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
tags {
environment = "Terraform Demo"
}
}
ip_configuration {
name = "myNicConfiguration"
subnet_id = "${azurerm_subnet.myterraformsubnet.id}"
private_ip_address_allocation = "dynamic"
public_ip_address_id = "${azurerm_public_ip.myterraformpublicip.id}"
}
tags {
environment = "Terraform Demo"
}
}
byte_length = 8
}
tags {
environment = "Terraform Demo"
}
}
storage_os_disk {
name = "myOsDisk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
}
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04.0-LTS"
version = "latest"
}
os_profile {
computer_name = "myvm"
admin_username = "azureuser"
admin_password = "Password0134!"
}
os_profile_linux_config {
disable_password_authentication = false
}
}
boot_diagnostics {
enabled = "true"
storage_uri = "${azurerm_storage_account.mystorageaccount.primary_blob_endpoint}"
}
tags {
environment = "Terraform Demo"
}
}
10. Save the file locally with the file name terraform-createvm.tf.
11. In Visual Studio Code, select View > Command Palette. Search for the command by entering
terraform into the search field. Select the following command from the dropdown list of commands:
Azure Terraform: apply
12. If Azure Cloud Shell is not open in Visual Studio Code, a message might appear in the bottom, left
corner asking you if you want to open Azure Cloud Shell. Choose Accept, and select Yes.
13. Wait for the Azure Cloud Shell pane to appear in the bottom of the Visual Studio Code window, and start
running the file terraform-createvm.tf. When you are prompted to apply the plan or cancel,
type yes, and then press Enter.
14. After the command completes successfully, review the list of resources created.
15. Open the Azure portal and verify that the resource group, resources, and the VM have been created. If you
have time, sign in with the user name and password specified in the .tf config file to verify.
Note: If you wanted to use a public or private key pair to connect to the Linux VM instead of a user name
and password, you could use the os_profile_linux_config module, set the disable_password_authentication
key value to true, and include the ssh key details, as in the following code.
os_profile_linux_config {
disable_password_authentication = true
ssh_keys {
path = "/home/azureuser/.ssh/authorized_keys"
key_data = "ssh-rsa AAAAB3Nz{snip}hwhqT9h"
}
}
You'd also need to remove the password value in the os_profile module that is present in the example
above.
Note: You could also embed the Azure authentication within the configuration file. In that case, you would not need
to install the Azure Account extension, as in the following example:
provider "azurerm" {
subscription_id = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
client_id = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
client_secret = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
tenant_id = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}
Labs
Infrastructure as Code
Steps for the labs are available on GitHub at the following sites, under the Infrastructure as Code sections:
●● https://microsoft.github.io/PartsUnlimited
●● https://microsoft.github.io/PartsUnlimitedMRP
Click the links below for the individual lab tasks for this module, and follow the steps outlined there for
each lab task.
PartsUnlimitedMRP (PUMRP)
●● Deploy app with Chef on Azure 23
●● Deploy app with Puppet on Azure24
●● Ansible with Azure25
23 http://microsoft.github.io/PartsUnlimitedMRP/iac/200.2x-IaC-DeployappwithChefonAzure.html
24 http://microsoft.github.io/PartsUnlimitedMRP/iac/200.2x-IaC-DeployappwithPuppetonAzure.html
25 http://microsoft.github.io/PartsUnlimitedMRP/iac/200.2x-IaC-AnsiblewithAzure.html
26 https://www.terraform.io/intro/index.html
27 https://azuredevopslabs.com/labs/vstsextend/terraform/
Checkbox
Which of the following are open-source products that are integrated into the Chef Automate image available
from Azure Marketplace?
Habitat
Facts
Console Services
InSpec
Checkbox
Which of the following are core components of the Puppet automation platform?
(choose all that apply)
Master
Agent
Facts
Habitat
Dropdown
Complete the following sentence.
The main elements of a Puppet Program (PP) Manifest file are Class, Resource and ________.
Module
Habitat
InSpec
Cookbooks
Checkbox
Which of the following platforms use Agents to communicate with target machines?
(choose all that apply)
Puppet
Chef
Ansible
Multiple choice
True or false: The Control Machine in Ansible must have Python installed?
True
False
Checkbox
Which of the following statements about the cloud-init package are correct?
The --custom-data parameter passes the name of the configuration file (.txt).
Configuration files (.txt) are encoded in base64.
The YML syntax is used within the configuration file (.txt).
cloud-init works across Linux distributions.
Multiple choice
True or false: Terraform ONLY supports configuration files with the file extension .tf.
True
False
Multiple choice
Which of the following core Terraform components can modify Terraform behavior, without having to edit
the Terraform configuration?
Configuration files
Overrides
Execution plan
Resource graph
Answers
Checkbox
Which of the following are main architectural components of Chef?
(choose all that apply)
■■ Chef Server
Chef Facts
■■ Chef Client
■■ Chef Workstation
Explanation
The correct answers are Chef Server, Chef Client and Chef Workstation.
Chef Facts is an incorrect answer.
Chef Facts is not an architectural component of Chef. Chef Facts misrepresents the term 'Puppet Facts'.
Puppet Facts are metadata used to determine the state of resources managed by the Puppet automation
tool.
Chef has the following main architectural components. 'Chef Server' is the Chef management point. The
two options for the Chef Server are 'hosted' and 'on-premises'. 'Chef Client (node)' is an agent that sits on
the servers you are managing. 'Chef Workstation' is an Administrator workstation where you create Chef
policies and execute management commands. You run the Chef 'knife' command from the Chef Workstation
to manage your infrastructure.
Checkbox
Which of the following are open-source products that are integrated into the Chef Automate image
available from Azure Marketplace?
■■ Habitat
Facts
Console Services
■■ InSpec
Explanation
The correct answers are Habitat and InSpec.
Facts and Console Services are incorrect answers.
Facts are metadata used to determine the state of resources managed by the Puppet automation tool.
Console Services is a web-based user interface for managing your system with the Puppet automation tool.
Habitat and InSpec are two open-source products that are integrated into the Chef Automate image
available from Azure Marketplace. Habitat makes the application and its automation the unit of deployment,
by allowing you to create platform-independent build artifacts called 'habitats' for your applications.
InSpec allows you to define desired states for your applications and infrastructure. InSpec can conduct audits
to detect violations against your desired state definitions, and generate reports from its audit results.
Checkbox
Which of the following are core components of the Puppet automation platform?
(choose all that apply)
■■ Master
■■ Agent
■■ Facts
Habitat
Explanation
The correct answers are Master, Agent and Facts.
Habitat is an incorrect answer.
Habitat is used with Chef for creating platform-independent build artifacts called 'habitats' for your applications.
Master, Agent and Facts are core components of the Puppet automation platform. Another core component
is 'Console Services'. Puppet Master acts as a center for Puppet activities and processes. Puppet Agent runs
on machines managed by Puppet, to facilitate management. Console Services is a toolset for managing and
configuring resources managed by Puppet. Facts are metadata used to determine the state of resources
managed by Puppet.
Dropdown
Complete the following sentence.
The main elements of a Puppet Program (PP) Manifest file are Class, Resource and ________.
■■ Module
Habitat
InSpec
Cookbooks
Explanation
Module is the correct answer.
All other answers are incorrect answers.
Habitat, InSpec and Cookbooks are incorrect because they relate to the Chef automation platform.
The main elements of a Puppet Program (PP) Manifest file are Class, Resource and Module. Classes define
related resources according to their classification, to be reused when composing other workflows. Resources
are single elements of your configuration which you can specify parameters for. Modules are collections of
all the classes, resources and other elements in a single entity.
Checkbox
Which of the following platforms use Agents to communicate with target machines?
(choose all that apply)
■■ Puppet
■■ Chef
Ansible
Explanation
The correct answers are: Puppet and Chef.
Ansible is an incorrect answer.
Ansible is agentless because you do not need to install an Agent on each of the target machines it manages.
Ansible uses the Secure Shell (SSH) protocol to communicate with target machines. You choose when to
conduct compliance checks and perform corrective actions, instead of using Agents and a Master to perform
the file extension .tf.json for Terraform JSON format configuration files. Terraform supports configuration
files in either .tf or .tf.json format. The Terraform .tf format is more human-readable, supports comments,
and is the generally recommended format for most Terraform files. The JSON format .tf.json is meant for
use by machines, but you can write your configuration files in JSON format if you prefer.
Multiple choice
Which of the following core Terraform components can modify Terraform behavior, without having to
edit the Terraform configuration?
Configuration files
■■ Overrides
Execution plan
Resource graph
Explanation
Overrides is the correct answer.
All other answers are incorrect answers.
Configuration files, in .tf or .tf.json format, allow you to define your infrastructure and application configura-
tions with Terraform.
Execution plan defines what Terraform will do when a configuration is applied.
Resource graph builds a dependency graph of all Terraform managed resources.
Overrides modify Terraform behavior without having to edit the Terraform configuration. Overrides can also
be used to apply temporary modifications to Terraform configurations without having to modify the
configuration itself.
Module 19 Implement Compliance and Security in your Infrastructure
Module Overview
As many as four out of five companies leveraging a DevOps approach to software engineering do so
without integrating the necessary information security controls, underscoring the urgency with which
companies should be evaluating “Rugged” DevOps (also known as “shift left”) to build security into their
development life cycle as early as possible.
Rugged DevOps represents an evolution of DevOps in that it takes a mode of development in which
speed and agility are primary and integrates security, not just with automated tools and processes, but
also through cultural change emphasizing ongoing, flexible collaboration between release engineers and
security teams. The goal is to bridge the traditional gap between the two functions, reconciling rapid
deployment of code with the imperative for security.
For many companies, a common pitfall on the path to implementing Rugged DevOps is implementing the
approach all at once rather than incrementally, underestimating the complexity of the undertaking and
producing cultural disruption in the process. Putting these plans in place is not a one-and-done process;
instead the approach should continuously evolve to support the various scenarios and needs that
DevOps teams encounter. The building blocks for Rugged DevOps involve understanding and implementing
the following concepts:
●● Code Analysis
●● Change Management
●● Compliance Monitoring
●● Threat Investigation
●● Vulnerability assessment & KPIs
Learning Objectives
After completing this module, students will be able to:
●● Define an infrastructure and configuration strategy and appropriate toolset for a release pipeline and
application infrastructure
●● Implement compliance and security in your application infrastructure
The goal of a Rugged DevOps pipeline is to allow development teams to work fast without breaking their
project by introducing unwanted security vulnerabilities.
Note: Rugged DevOps is also sometimes referred to as DevSecOps. You might encounter both terms, but
each term refers to the same concept.
1 https://www.microsoft.com/en-us/security/operations/security-intelligence-report
Two important features of Rugged DevOps pipelines that are not found in standard DevOps pipelines are:
●● Package management and the approval process associated with it. The previous workflow diagram
details additional steps that account for how software packages are added to the pipeline, and the
approval processes that packages must go through before they are used. These steps should be
enacted early in the pipeline, so that issues can be identified sooner in the cycle.
●● Source Scanner, also an additional step for scanning the source code. This step allows for security
scanning and checking for security vulnerabilities that are not present in the application code. The
scanning occurs after the app is built, but before release and pre-release testing. Source scanning can
identify security vulnerabilities earlier in the cycle.
In the remainder of this lesson, we address these two important features of Rugged DevOps pipelines,
the problems they present, and some of the solutions for them.
Package management
Just as teams use version control as a single source of truth for source code, Rugged DevOps relies on a
package manager as the unique source of binary components. By using binary package management, a
development team can create a local cache of approved components, and make this a trusted feed for
the Continuous Integration (CI) pipeline.
In Azure DevOps, Azure Artifacts is an integral part of the component workflow for organizing and
sharing access to your packages. Azure Artifacts allows you to:
●● Keep your artifacts organized. Share code easily by storing Apache Maven, npm, and NuGet packages
together. You can store packages using Universal Packages, eliminating the need to store binaries in
Git.
●● Protect your packages. Keep every public source package you use (including packages from npmjs
and NuGet.org) safe in your feed where only you can delete it and where it's backed by the enter-
prise-grade Azure Service Level Agreement (SLA).
●● Integrate seamless package handling into your Continuous Integration/Continuous Delivery (CI/CD)
pipeline. Easily access all your artifacts in builds and releases. Azure Artifacts integrates natively
with the Azure Pipelines CI/CD tool.
For more information about Azure Artifacts, visit the webpage What is Azure Artifacts?2
2 https://docs.microsoft.com/en-us/azure/devops/artifacts/overview?view=vsts
3 https://marketplace.visualstudio.com/items?itemName=ms.feed
Note: After you publish a particular version of a package to a feed, that version number is permanently
reserved. You cannot upload a newer revision package with that same version number, or delete that
version and upload a new package with the same version number. The published version is immutable.
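The immutability rule above can be sketched as a small model. This is an illustrative sketch only, not the Azure Artifacts API: once a (name, version) pair is published, that version number stays reserved even after the package is deleted.

```python
# Illustrative model of an immutable package feed (not the Azure
# Artifacts API). A published version number is reserved forever:
# republishing, or delete-and-republish, with the same version fails.
class ImmutableFeed:
    def __init__(self):
        self._published = {}    # (name, version) -> content
        self._reserved = set()  # version numbers that can never be reused

    def publish(self, name, version, content):
        key = (name, version)
        if key in self._reserved:
            raise ValueError(f"{name} {version} is already reserved; pick a new version")
        self._published[key] = content
        self._reserved.add(key)

    def delete(self, name, version):
        # Deleting removes the content, but the version stays reserved.
        self._published.pop((name, version), None)

feed = ImmutableFeed()
feed.publish("mylib", "1.0.0", b"v1 bits")
feed.delete("mylib", "1.0.0")
try:
    feed.publish("mylib", "1.0.0", b"different bits")
except ValueError as e:
    print("rejected:", e)
```

Publishing a fix therefore always means bumping the version (for example, 1.0.1), never replacing 1.0.0 in place.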
When consuming an OSS component, whether you are creating or consuming dependencies, you'll
typically want to follow these high-level steps:
1. Start with the latest, correct version to avoid any old vulnerabilities or license misuses.
2. Validate that the OSS components are the correct binaries for your version. In the release pipeline,
validate binaries to ensure that they are correct and to keep a traceable bill of materials.
3. Get notifications of component vulnerabilities immediately, and correct and redeploy the component
automatically to resolve security vulnerabilities or license misuses from reused software.
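Step 2 above — validating binaries against a traceable bill of materials — can be sketched as a hash check. The file name and bytes here are hypothetical; the point is only that a release pipeline compares the downloaded component's digest against the approved record before using it.

```python
# Hedged sketch of binary validation against a bill of materials
# (BOM). Names and contents are made up for the example.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Digests recorded when each OSS component was approved.
approved_bom = {"left-pad-1.3.0.tgz": sha256(b"approved package bytes")}

def validate(filename: str, data: bytes) -> bool:
    # A component passes only if it is in the BOM and its bytes match.
    expected = approved_bom.get(filename)
    return expected is not None and sha256(data) == expected

print(validate("left-pad-1.3.0.tgz", b"approved package bytes"))  # True
print(validate("left-pad-1.3.0.tgz", b"tampered bytes"))          # False
```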
WhiteSource
The WhiteSource5 extension is available on the Azure DevOps Marketplace. Using WhiteSource, you can
integrate extensions with your CI/CD pipeline to address Rugged DevOps security-related issues. For a
team consuming external packages, the WhiteSource extension specifically addresses open source
security, quality, and license compliance concerns. Because most breaches today target known vulnerabil-
ities in common components, robust tools are essential to securing problematic open source compo-
nents.
4 https://marketplace.visualstudio.com/
5 https://marketplace.visualstudio.com/items?itemName=whitesource.whitesource
For searching online repositories such as GitHub and Maven Central, WhiteSource also offers an innova-
tive browser extension. Even before choosing a new component, a developer can review its security
vulnerabilities, quality, and license issues, and whether it fits their company’s policies.
6 https://marketplace.visualstudio.com/items?itemName=fortifyvsts.hpe-security-fortify-vsts
Fortify on Demand
Fortify on Demand delivers application security as a service (SaaS). It automatically submits static and
dynamic scan requests to the application security SaaS platform. Static assessments are uploaded to
Fortify on Demand. For dynamic assessments, you can pre-configure a specific application URL.
Checkmarx functionality
Checkmarx functionality includes:
●● Best fix location. Checkmarx highlights the best place to fix your code to minimize the time required
to remediate the issue. A visual chart of the data flow graph indicates the ideal location in the code to
address multiple vulnerabilities within the data flow using a single line of code.
●● Quick and accurate scanning. Checkmarx helps reduce false positives, adapt the rule set to minimize
false positives, and understand the root cause for results.
●● Incremental scanning. Using Checkmarx, you can test just the parts of the code that have changed
since the last code check-in. This helps reduce scanning time by more than 80 percent. It also enables
you to incorporate the security gate within your continuous integration pipeline.
●● Seamless integration. Checkmarx works with all integrated development environments (IDEs), build
management servers, bug tracking tools, and source repositories.
●● Code portfolio protection. Checkmarx helps protect your entire code portfolio, both open source and
in-house source code. It analyzes open-source libraries, ensuring licenses are being adhered to, and
removing any open-source components that expose the application to known vulnerabilities. In
addition, Checkmarx Open Source helps provide complete code portfolio coverage under a single
unified solution with no extra installations or administration required.
●● Easy to initiate Open Source Analysis. With Checkmarx’s Open Source analysis, you don't need
additional installations or multiple management interfaces; you simply turn it on, and within minutes a
detailed report is generated with clear results and detailed mitigation instructions. Because analysis
7 https://marketplace.visualstudio.com/items?itemName=checkmarx.cxsast
results are designed with the developer in mind, no time is wasted trying to understand the required
action items to mitigate detected security or compliance risks.
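The incremental-scanning idea described above (rescan only what changed since the last check-in) can be sketched with file digests. This is an illustrative model of the technique, not Checkmarx's implementation; file names and contents are hypothetical.

```python
# Sketch of incremental scanning: hash each source file and rescan
# only the files whose digest differs from the previous check-in.
import hashlib

def digest(source: str) -> str:
    return hashlib.sha256(source.encode()).hexdigest()

# Digests recorded at the previous check-in (hypothetical files).
previous = {"app.py": digest("print('v1')"), "util.py": digest("x = 1")}

# Current working tree: app.py changed, util.py did not.
current_files = {"app.py": "print('v2')", "util.py": "x = 1"}

changed = [name for name, src in current_files.items()
           if previous.get(name) != digest(src)]
print(changed)  # only the changed file is queued for rescanning
```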
Veracode functionality
Veracode's functionality includes the following features:
●● Integrate application security into the development tools you already use. From within Azure DevOps
and Microsoft Team Foundation Server (TFS) you can automatically scan code using the Veracode
Application Security Platform to find security vulnerabilities. With Veracode you can import any
security findings that violate your security policy as work items. Veracode also gives you the option to
stop a build if serious security issues are discovered.
●● No stopping for false alarms. Because Veracode gives you accurate results and prioritizes them based
on severity, you don't need to waste resources responding to hundreds of false positives. Veracode
has assessed over 2 trillion lines of code in 15 languages and over 70 frameworks. In addition, this
process continues to improve with every assessment because of rapid update cycles and continuous
improvement processes. If something does get through, you can mitigate it using the easy Veracode
workflow.
●● Align your application security practices with your development practices. Do you have a large or
distributed development team? Do you have too many revision control branches? You can integrate
your Azure DevOps workflows with the Veracode Developer Sandbox, which supports multiple
development branches, feature teams, and other parallel development practices.
●● Find vulnerabilities and fix them. Veracode gives you remediation guidance with each finding and the
data path that a malicious user would use to reach the application's weak point. Veracode also
highlights the most common sources of vulnerabilities to help you prioritize remediation. In addition,
when vulnerability reports don't provide enough clarity, you can set up one-on-one developer
8 https://marketplace.visualstudio.com/items?itemName=Veracode.veracode-vsts-build-extension
consultations with Veracode experts who have backgrounds in both security and software develop-
ment. Security issues that are found by Veracode and which could prevent you from releasing your
code show up automatically in your teams' list of work items, and are automatically updated and
closed after you scan your fixed code.
●● Proven onboarding process allows for scanning on day one. The cloud-based Veracode Application
Security Platform is designed to get you going quickly, in minutes even. Veracode's services and
support team can make sure that you are on track to build application security into your process.
9 https://www.whitesourcesoftware.com/
10 https://www.checkmarx.com/
11 https://www.veracode.com/
12 https://www.blackducksoftware.com/
Build definitions can be scheduled (perhaps daily), or triggered with each commit. In either case, the build
definition can perform a longer static analysis scan (as illustrated in the following image). You can scan
the full code project and review any errors or warnings offline without blocking the CI flow.
13 http://aka.ms/jea
code inside your infrastructure. Azure DevOps will encrypt the secrets in your pipeline; as a best
practice, rotate the passwords just as you would other credentials.
●● Permissions management. You can manage permissions to secure the pipeline with role-based access
control (RBAC), just as you would for your source code. This keeps you in control of who can edit the
build and release definitions that you use for production.
●● Dynamic scanning. This is the process of testing the running application with known attack patterns.
You could implement penetration testing as part of your release. You also could keep up to date on
security projects such as the Open Web Application Security Project (OWASP14) Foundation, then
adopt these projects into your processes.
●● Production monitoring. This is a key DevOps practice. The specialized services for detecting anoma-
lies related to intrusion are known as Security Information and Event Management. Azure Security
Center15 focuses on the security incidents that relate to the Azure cloud.
✔️ Note: In all cases, use Azure Resource Manager Templates or other code-based configurations. You
should also implement IaC best practices, such as only making changes in templates, to make changes
traceable and repeatable. Also, use provisioning and configuration technologies such as Desired State
Configuration (DSC), Azure Automation, and other third-party tools and products that can integrate
seamlessly with Azure.
14 https://www.owasp.org
15 https://azure.microsoft.com/en-us/services/security-center/
16 https://github.com/azsk/DevOpsKit-docs
●● Alerting & monitoring. Security status visibility is important for both individual application teams and
central enterprise teams. Secure DevOps provides solutions that cater to the needs of both. Moreover,
the solution spans across all stages of DevOps, in effect bridging the security gap between the Dev
team and the Ops team through the single, integrated view it can generate.
●● Governing cloud risks. Underlying all activities in the Secure DevOps kit is a telemetry framework that
generates events such as capturing usage, adoption, and evaluation results. This enables you to make
measured improvements to security by targeting areas of high risk and maximum usage.
You can leverage and utilize the tools, scripts, templates, and best practice documentation that are
available as part of AzSK.
Azure Security Center is included in the Center for Internet Security (CIS) Benchmarks17 recommendations.
17 https://www.cisecurity.org/cis-benchmarks/
After the 60-day trial period is over, Azure Security Center is $15 per node per month. To upgrade a
subscription from the Free trial to the Standard version, you must be assigned the role of Subscription
Owner, Subscription Contributor, or Security Admin.
You can read more about Azure Security Center at Azure Security Center18.
The following examples show how you can use Azure Security Center for the detect, assess, and diag-
nose stages of your incident response plan.
●● Detect. Review the first indication of an event investigation. For example, use the Azure Security
Center dashboard to review the initial verification of a high-priority security alert occurring.
18 https://azure.microsoft.com/en-us/services/security-center/
●● Assess. Perform the initial assessment to obtain more information about a suspicious activity. For
example, you can obtain more information from Azure Security Center about a security alert.
●● Diagnose. Conduct a technical investigation and identify containment, mitigation, and workaround
strategies. For example, you can follow the remediation steps described by Azure Security Center for a
particular security alert.
●● Use Azure Security Center recommendations to enhance security.
You can reduce the chances of a significant security event by configuring a security policy, and then
implementing the recommendations provided by Azure Security Center. A security policy defines the set
of controls that are recommended for resources within a specified subscription or resource group. In
Azure Security Center, you can define policies according to your company's security requirements.
Azure Security Center analyzes the security state of your Azure resources. When it identifies potential
security vulnerabilities, it creates recommendations based on the controls set in the security policy. The
recommendations guide you through the process of configuring the corresponding security controls. For
example, if you have workloads that don't require the Azure SQL Database Transparent Data Encryption
(TDE) policy, turn off the policy at the subscription level and enable it only on the resource groups where
SQL Database TDE is required.
You can read more about Azure Security Center at Azure Security Center19. More implementation and
scenario details are also available in the Azure Security Center planning and operations guide20.
Azure Policy
Azure Policy is an Azure service that you can use to create, assign, and manage policies. Policies enforce
different rules and effects over your Azure resources, which ensures that your resources stay compliant
with your standards and SLAs.
Azure Policy uses policies and initiatives to provide policy enforcement capabilities. Azure Policy evaluates
your resources by scanning for resources that do not comply with the policies you create. For example,
you might have a policy that specifies a maximum size limit for VMs in your environment. After you
implement your maximum VM size policy, whenever a VM is created or updated Azure Policy will evaluate
the VM resource to ensure that the VM complies with the size limit that you set in your policy.
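The evaluation described above can be sketched as a small model. This is not the Azure Policy engine itself; the resource type string is real, but the allowed-size set is an assumption chosen for the example.

```python
# Illustrative model of a maximum-VM-size policy evaluation: every
# VM create/update is checked, and non-compliant sizes are denied.
ALLOWED_VM_SIZES = {"Standard_B1s", "Standard_B2s", "Standard_D2s_v3"}  # assumed limit

def evaluate_vm(resource: dict) -> str:
    # The policy only applies to virtual machine resources.
    if resource.get("type") != "Microsoft.Compute/virtualMachines":
        return "NotApplicable"
    return "Compliant" if resource.get("vmSize") in ALLOWED_VM_SIZES else "Deny"

print(evaluate_vm({"type": "Microsoft.Compute/virtualMachines",
                   "vmSize": "Standard_B2s"}))    # Compliant
print(evaluate_vm({"type": "Microsoft.Compute/virtualMachines",
                   "vmSize": "Standard_M128s"}))  # Deny
```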
Azure Policy can help to maintain the state of your resources by evaluating your existing resources and
configurations, and remediating non-compliant resources automatically. It has built-in policy and initia-
tive definitions for you to use. The definitions are arranged in categories, such as Storage, Networking,
Compute, Security Center, and Monitoring.
Azure Policy can also integrate with Azure DevOps by applying any continuous integration (CI) and
continuous delivery (CD) pipeline policies that apply to the pre-deployment and post-deployment of your
applications.
19 https://azure.microsoft.com/en-us/services/security-center/
20 https://docs.microsoft.com/en-us/azure/security-center/security-center-planning-and-operations-guide
Policies
Applying a policy to your resources with Azure Policy involves the following high-level steps:
1. Policy definition. Create a policy definition.
2. Policy assignment. Assign the definition to a scope of resources.
3. Remediation. Review the policy evaluation results and address any non-compliances.
Policy definition
A policy definition specifies the resources to be evaluated and the actions to take on them. For example,
you could prevent VMs from deploying if they are exposed to a public IP address. You could also prevent
a specific hard disk from being used when deploying VMs to control costs. Policies are defined in the
JavaScript Object Notation (JSON) format.
The following example defines a policy that limits where you can deploy resources:
{
  "properties": {
    "mode": "all",
    "parameters": {
      "allowedLocations": {
        "type": "array",
        "metadata": {
          "description": "The list of locations that can be specified when deploying resources",
          "strongType": "location",
          "displayName": "Allowed locations"
        }
      }
    },
    "displayName": "Allowed locations",
    "description": "This policy enables you to restrict the locations your organization can specify when deploying resources.",
    "policyRule": {
      "if": {
        "not": {
          "field": "location",
          "in": "[parameters('allowedLocations')]"
        }
      },
      "then": {
        "effect": "deny"
      }
    }
  }
}
21 https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-policy-check-gate?view=vsts
22 https://azure.microsoft.com/en-us/services/azure-policy/
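The policyRule above reads: if the resource's location is not in the allowedLocations parameter, then deny. A hedged Python rendering of that logic (not the actual policy engine) makes the flow explicit:

```python
# Sketch of the allowed-locations policyRule: "if not (location in
# allowedLocations) then effect = deny". Location names are examples.
def evaluate_location(resource_location, allowed_locations):
    if resource_location not in allowed_locations:  # the "not" + "in" condition
        return "deny"                               # "then": { "effect": "deny" }
    return "allow"                                  # no effect; deployment proceeds

allowed = ["westeurope", "northeurope"]
print(evaluate_location("westeurope", allowed))  # allow
print(evaluate_location("eastus", allowed))      # deny
```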
Policy assignment
Policy definitions, whether custom or built in, need to be assigned. A policy assignment is a policy defini-
tion that has been assigned to a specific scope. Scopes can range from a management group to a
resource group. Child resources will inherit any policy assignments that have been applied to their
parents. This means that if a policy is applied to a resource group, it's also applied to all the resources
within that resource group. However, you can define subscopes for excluding particular resources from
policy assignments.
You can assign policies via:
●● Azure portal
●● Azure CLI
●● PowerShell
Remediation
Resources found not to comply with a deployIfNotExists or modify policy condition can be put into a
compliant state through remediation. Remediation instructs Azure Policy to run the deployIfNotExists
effect or the tag operations of the policy on existing resources. To minimize configuration drift, you can
bring resources into compliance using automated bulk remediation instead of going through them one
at a time.
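Bulk remediation can be sketched as a single pass over all non-compliant resources. This is an illustrative model of the idea, not Azure Policy's remediation tasks; the costCenter tag and its default value are invented for the example.

```python
# Sketch of bulk remediation for a "modify" policy condition: every
# resource missing a required tag is fixed in one pass, instead of
# being remediated one at a time. Names and tags are hypothetical.
resources = [
    {"name": "vm1", "tags": {"costCenter": "1010"}},
    {"name": "vm2", "tags": {}},
    {"name": "storage1", "tags": {}},
]

def remediate_all(resources, default="unassigned"):
    fixed = []
    for r in resources:
        if "costCenter" not in r["tags"]:      # non-compliant with the condition
            r["tags"]["costCenter"] = default  # the policy's tag operation
            fixed.append(r["name"])
    return fixed

print(remediate_all(resources))  # ['vm2', 'storage1']
```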
You can read more about Azure Policy on the Azure Policy23 webpage.
23 https://azure.microsoft.com/en-us/services/azure-policy/
Initiatives
Initiatives work alongside policies in Azure Policy. An initiative definition is a set of policy definitions to
help track your compliance state for meeting large-scale compliance goals. Even if you have a single
policy, we recommend using initiatives if you anticipate increasing your number of policies over time. The
application of an initiative definition to a specific scope is called an initiative assignment.
Initiative definitions
Initiative definitions simplify the process of managing and assigning policy definitions by grouping sets of
policies into a single item. For example, you can create an initiative named Enable Monitoring in Azure
Security Center to monitor security recommendations from Azure Security Center. Under this example
initiative, you would have the following policy definitions:
●● Monitor unencrypted SQL Database in Security Center. This policy definition monitors unencrypted
SQL databases and servers.
●● Monitor OS vulnerabilities in Security Center. This policy definition monitors servers that do not satisfy
a specified OS baseline configuration.
●● Monitor missing Endpoint Protection in Security Center. This policy definition monitors servers
without an endpoint protection agent installed.
Initiative assignments
Like a policy assignment, an initiative assignment is an initiative definition assigned to a specific scope.
Initiative assignments reduce the need to make several initiative definitions for each scope. Scopes can
range from a management group to a resource group. You can assign initiatives in the same way that you
assign policies.
You can read more about policy definition and structure at Azure Policy definition structure24.
Usage scenarios
Azure Key Vault capabilities and usage scenarios include:
●● Secrets management. You can use Key Vault to securely store and control access to tokens, passwords,
certificates, API keys, and other secrets.
24 https://docs.microsoft.com/en-us/azure/governance/policy/concepts/definition-structure
●● Key management. As a key management solution, Key Vault makes it easy to create and control
encryption keys used to encrypt your data.
●● Certificate management. Key Vault lets you provision, manage, and deploy public and private Secure
Sockets Layer (SSL)/ Transport Layer Security (TLS) certificates for your Azure subscription and con-
nected resources more easily.
●● Store secrets backed by Hardware Security Modules (HSMs). Secrets and keys can be protected either
by software, or by Federal Information Processing Standard (FIPS) 140-2 Level 2 validated HSMs.
Usage scenarios
The following examples provide use-case scenarios for RBAC:
●● Allow a specific user to manage VMs in a subscription, and another user to manage virtual networks.
●● Permit the database administrator (DBA) group management access to Microsoft SQL Server databas-
es in a subscription.
●● Grant a user management access to certain types of resources in a resource group, such as VMs,
websites, and subnets.
●● Give an application access to all resources in a resource group.
To review access permissions, in a deployed VM, open the Access Control (IAM) blade in the Azure
portal. On this blade, you can review who has access, their role, and grant or remove access permissions.
25 https://azure.microsoft.com/en-us/services/key-vault/
The following illustration is an example of the Access Control (IAM) blade for a resource group. Note
how the Roles tab displays the built-in roles that are available.
For a full list of available built-in roles, go to the Built-in roles for Azure resources26 webpage.
RBAC uses an allow model for permissions. This means that when you are assigned a role, RBAC allows
you to perform certain actions such as read, write, or delete. Therefore, if one role assignment grants you
read permissions to a resource group, and a different role assignment grants you write permissions to the
same resource group, you will have both read and write permissions for that resource group.
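The allow model described above means effective permissions are the union of all role assignments in scope. A minimal sketch (with made-up role names, not the real Azure built-in role definitions):

```python
# Sketch of the RBAC allow model: permissions from multiple role
# assignments accumulate, so Reader + Writer yields read AND write.
role_definitions = {          # simplified, hypothetical roles
    "Reader": {"read"},
    "Writer": {"write"},
}

def effective_permissions(assignments):
    perms = set()
    for role in assignments:
        perms |= role_definitions[role]  # allow model: union, never subtraction
    return perms

print(sorted(effective_permissions(["Reader", "Writer"])))  # ['read', 'write']
```

Note there is no "deny" arithmetic in this model: adding an assignment can only grant actions, which is why least-privilege assignment matters.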
Best practice
When using RBAC, segregate duties within your team and only grant users the access they need to
perform their jobs. Instead of giving everybody unrestricted access to your Azure subscription and
resources, allow only certain actions at a particular scope. In other words, grant users the minimal
privileges that they need to complete their work.
Note: For more information about RBAC, visit What is role-based access control (RBAC) for Azure
resources?27.
Locks
Locks help you prevent accidental deletion or modification of your Azure resources. You can manage
locks from within the Azure portal, where the two lock levels are called Delete and Read-only. To
review, add, or delete locks for a resource in the Azure portal, go to the Settings section on the resource's
settings blade.
You might need to lock a subscription, resource group, or resource to prevent users from accidentally
deleting or modifying critical resources. You can set a lock level to CanNotDelete or ReadOnly:
●● CanNotDelete means that authorized users can read and modify a resource, but they cannot delete
the resource.
●● ReadOnly means that authorized users can read a resource, but they cannot modify or delete it.
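The two lock levels can be summarized as a small decision table. This is an illustrative model of the semantics just described, not the Resource Manager implementation:

```python
# Minimal model of Azure lock levels: CanNotDelete still allows read
# and modify; ReadOnly allows read only; no lock allows everything.
def is_allowed(action, lock=None):
    if lock == "ReadOnly":
        return action == "read"
    if lock == "CanNotDelete":
        return action in ("read", "modify")
    return True  # no lock applied

print(is_allowed("modify", "CanNotDelete"))  # True
print(is_allowed("delete", "CanNotDelete"))  # False
print(is_allowed("modify", "ReadOnly"))      # False
```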
You can read more about Locks on the Lock resources to prevent unexpected changes28 webpage.
Subscription governance
The following considerations apply to creating and managing Azure subscriptions:
●● Billing. You can generate billing reports for Azure subscriptions. If, for example, you have multiple
internal departments and need to perform a chargeback, you can create a subscription for a depart-
ment or project.
●● Access control. A subscription acts as a deployment boundary for Azure resources. Every subscription
is associated with an Azure Active Directory (Azure AD) tenant that provides administrators with the
ability to set up RBAC. When designing a subscription model, consider the deployment boundary.
Some customers keep separate subscriptions for development and production, and manage them
using RBAC to isolate one subscription from the other (from a resource perspective).
●● Subscription limits. Subscriptions are bound by hard limitations. For example, the maximum number
of Azure ExpressRoute circuits per subscription is 10. If you reach a limit, there is no flexibility. Keep
these limits in mind during your design phase. If you need to exceed the limits, you might require
additional subscriptions.
26 https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles
27 https://docs.microsoft.com/en-us/azure/role-based-access-control/overview
28 https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-lock-resources
Management groups
Management Groups can assist you with managing your Azure subscriptions. Management groups
manage access, policies, and compliance across multiple Azure subscriptions. They allow you to order
your Azure resources hierarchically into collections. Management groups facilitate the management of
resources at a level above the level of subscriptions.
In the following image, access is divided across different regions and business functions, such as market-
ing and IT. This helps you track costs and resource usage at a granular level, add security layers, and
segment workloads. You could even divide these areas further into separate subscriptions for Dev and
QA, or for specific teams.
You can manage your Azure subscriptions more effectively by using Azure Policy and Azure RBAC. These
tools provide distinct governance conditions that you can apply to each management group. Any
conditions that you apply to a management group will automatically be inherited by the resources and
subscriptions within that group.
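The inheritance rule above — conditions applied at a management group flow down to every subscription beneath it — can be sketched by walking up the hierarchy. Scope and policy names here are hypothetical.

```python
# Sketch of management-group inheritance: a scope's effective
# policies are its own plus everything assigned at its ancestors.
parents = {                       # child scope -> parent scope
    "sub-marketing-prod": "mg-marketing",
    "mg-marketing": "mg-root",
    "mg-root": None,
}
assigned = {                      # policies assigned directly at each scope
    "mg-root": ["require-tags"],
    "mg-marketing": ["allowed-locations-eu"],
}

def effective_policies(scope):
    policies = []
    while scope is not None:
        policies += assigned.get(scope, [])  # direct assignments at this scope
        scope = parents.get(scope)           # then walk up to the parent
    return policies

print(effective_policies("sub-marketing-prod"))
# the subscription inherits from both mg-marketing and mg-root
```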
Note: For more information about management groups and Azure, go to the Azure management
groups documentation, Organize your resources29 webpage.
Azure Blueprints
Azure Blueprints enables cloud architects to define a repeatable set of Azure resources that implement
and adhere to an organization's standards, patterns, and requirements. Azure Blueprints helps develop-
ment teams build and deploy new environments rapidly with a set of built-in components that speed up
development and delivery. Furthermore, this is done while staying within organizational compliance
requirements.
29 https://docs.microsoft.com/en-us/azure/governance/management-groups/
Azure Blueprints provides a declarative way to orchestrate deployment for various resource templates
and artifacts, including:
●● Role assignments
●● Policy assignments
●● Azure Resource Manager templates
●● Resource groups
To implement Azure Blueprints, complete the following high-level steps:
1. Create a blueprint.
2. Assign the blueprint.
3. Track the blueprint assignments.
With Azure Blueprints, the relationship between the blueprint definition (what should be deployed) and
the blueprint assignment (what is deployed) is preserved.
The blueprints in Azure Blueprints are different from Azure Resource Manager templates. When Azure Re-
source Manager templates deploy resources, they have no active relationship with the deployed resources
(they exist in a local environment or in source control). By contrast, with Azure Blueprints, each
deployment is tied to an Azure Blueprints package. This means that the relationship with resources is
maintained even after deployment, which improves deployment tracking and auditing capabilities.
Usage scenario
Adhering to security and compliance requirements, whether government, industry, or organizational
requirements, can be difficult and time consuming. To help you to trace your deployments and audit
them for compliance, Azure Blueprints uses artifacts and tools that expedite your path to certification.
Azure Blueprints is also useful in Azure DevOps scenarios where blueprints are associated with specific
build artifacts and release pipelines, and blueprints can be tracked rigorously.
You can learn more about Azure Blueprints at Azure Blueprints30.
30 https://azure.microsoft.com/services/blueprints/
31 https://portal.atp.azure.com
32 https://www.microsoft.com/en-ie/cloud-platform/enterprise-mobility-security-pricing
33 https://azure.microsoft.com/en-us/features/azure-advanced-threat-protection/